492 results for scalar scattering theory
Abstract:
Background: In order to design appropriate environments for performance and learning of movement skills, physical educators need a sound theoretical model of the learner and of processes of learning. In physical education, this type of modelling informs the organization of learning environments and effective and efficient use of practice time. An emerging theoretical framework in motor learning, relevant to physical education, advocates a constraints-led perspective for acquisition of movement skills and game play knowledge. This framework shows how physical educators could use task, performer and environmental constraints to channel acquisition of movement skills and decision making behaviours in learners. From this viewpoint, learners generate specific movement solutions to satisfy the unique combination of constraints imposed on them, a process which can be harnessed during physical education lessons. Purpose: In this paper the aim is to provide an overview of the motor learning approach emanating from the constraints-led perspective, and examine how it can substantiate a platform for a new pedagogical framework in physical education: nonlinear pedagogy. We aim to demonstrate that it is only through theoretically valid and objective empirical work of an applied nature that a conceptually sound nonlinear pedagogy model can continue to evolve and support research in physical education. We present some important implications for designing practices in games lessons, showing how a constraints-led perspective on motor learning could assist physical educators in understanding how to structure learning experiences for learners at different stages, with specific focus on understanding the design of games teaching programmes in physical education, using exemplars from Rugby Union and Cricket. 
Findings: Research evidence from recent studies examining movement models demonstrates that physical education teachers need a strong understanding of sport performance, so that task constraints can be manipulated in ways that maintain information-movement couplings in a learning environment representative of real performance situations. Physical educators should also understand that movement variability is not necessarily detrimental to learning and could be an important phenomenon prior to the acquisition of a stable and functional movement pattern. We highlight how the nonlinear pedagogical approach is student-centred and empowers individuals to become active learners via a more hands-off approach to learning. Summary: A constraints-based perspective has the potential to provide physical educators with a framework for understanding how performer, task and environmental constraints shape each individual's physical education. Understanding the underlying neurobiological processes present in a constraints-led perspective on skill acquisition and game play can raise physical educators' awareness that teaching is a dynamic 'art' interwoven with the 'science' of motor learning theories.
Abstract:
The worldwide increase in life expectancy during the last three decades has increased age-related disability, raising the risk of lost quality of life. How to improve quality of life, including physical and mental health, for older people and optimize their life potential has become an important health issue. This study used the Theory of Planned Behaviour model to examine factors influencing health behaviours and their relationship with quality of life. A cross-sectional mailed survey of 1300 Australians over 50 years was conducted at the beginning of 2009, with 730 completed questionnaires returned (response rate 63%). Preliminary analysis reveals that physiological changes of old age, especially increasing waist circumference and comorbidity, were closely related to health status, especially a worse physical health summary score. Physical activity was the least adherent behaviour among the respondents, compared to eating healthy food and taking medication regularly as prescribed. An increasing number of older people living alone with comorbid disease may face barriers that influence their attitude and self-control toward physical activity. A multidisciplinary and integrated approach, including hospital and non-hospital care, is required to provide appropriate services and facilities for older people.
Abstract:
This paper reports on the opportunities for transformational learning experienced by a group of pre-service teachers who were engaged in service-learning as a pedagogical process with a focus on reflection. Critical social theory informed the design of the reflection process as it enabled a move away from knowledge transmission toward knowledge transformation. The structured reflection log was designed to illustrate the critical social theory expectations of quality learning that teach students to think critically: ideology critique and utopian critique. Butin's lenses and a reflection framework informed by the work of Bain, Ballantyne, Mills and Lester were used in the design of the service-learning reflection log. Reported data provide evidence of transformational learning and highlight how the students critique their world and imagine how they could contribute to a better world as beginning teachers.
Abstract:
This paper focuses on the varying approaches and methodologies adopted when the calculation of holding costs is undertaken, focusing on greenfield development. Whilst acknowledging there may be some consistency in embracing first principles relating to holding cost theory, a review of the literature reveals a considerable lack of uniformity in this regard. There is even less clarity in quantitative determination, especially in Australia, where only limited empirical analysis has been undertaken. Despite a growing quantum of research on various elements connected with housing affordability, the matter of holding costs has not been well addressed, regardless of its place in the highly prioritised Australian Government housing research agenda. The end result has been a modicum of qualitative commentary relating to holding costs. There have been few attempts at finer-tuned analysis that exposes a quantified level of holding cost calculated with underlying rigour. Holding costs can take many forms, but they inevitably involve the computation of "carrying costs" of an initial outlay that has yet to fully realise its ultimate yield. Although sometimes considered a "hidden" cost, it is submitted that holding costs prospectively represent a major determinant of value. If this is the case, then, considered in the context of housing affordability, their influence is potentially pervasive.
Abstract:
An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole plume behaviour can thus be characterized, and parameters are selected for a second set of fixed physical location measurements where the effects of varying the initial reactant concentrations are investigated. Careful experiment design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of turbulent diffusivities for the conserved scalar mean and that for the r.m.s. was found to be approximately 1.
Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence as has previously been found theoretically by Klimenko (1995). It is also found that conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favorably with those found from experiment except where measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigations both of the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. 
Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
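The frozen and equilibrium limits that bound the reactive scalar statistics above can be sketched as follows. This is a minimal illustration assuming a one-step NO + O3 reaction with 1:1 stoichiometry and the standard conserved-scalar (mixture-fraction) construction; the function names and normalisation are illustrative, not the thesis's notation.

```python
def conserved_scalar(c_no, c_o3, c_no0, c_o30):
    """Shvab-Zel'dovich conserved scalar for one-step NO + O3 -> NO2 + O2.

    c_no0, c_o30 are the source-stream concentrations of each reactant.
    The reaction term cancels in (c_no - c_o3), so this quantity mixes
    like an inert tracer: 1 in the plume stream, 0 in the ambient.
    """
    return (c_no - c_o3 + c_o30) / (c_no0 + c_o30)


def frozen_limit(z, c_no0, c_o30):
    """No-reaction limit: each reactant is simply diluted with mixing."""
    return z * c_no0, (1.0 - z) * c_o30


def equilibrium_limit(z, c_no0, c_o30):
    """Fast-chemistry limit: the deficient reactant is fully consumed."""
    total = c_no0 + c_o30
    z_s = c_o30 / total  # stoichiometric value of the conserved scalar
    return max(0.0, z - z_s) * total, max(0.0, z_s - z) * total
```

Measured reactive scalar means at a given value of the conserved scalar then fall between the frozen and equilibrium curves, which is the bounding behaviour the abstract describes.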
Abstract:
This thesis locates the origins of modern secular knowledge in late medieval theology. Problems with modern and postmodern knowledge which arise from these theological origins are then tackled theologically, and the manner in which secular ways of understanding knowledge are embedded in specific university, political and hospital contexts is then described and evaluated from a post-secular theological standpoint. The theoretical component of this thesis looks at knowledge itself and finds that without faith there can be no knowledge. The applied component of this thesis does two things. Firstly, it explores how our conception of knowledge shapes the assumptions, operational norms, belief frames and tacit values of some characteristically modern and secular institutions. Secondly, it evaluates those contexts from the theologically premised conception of knowledge argued for in the theoretical component of this thesis.
Abstract:
Up front I am impelled to acknowledge an intellectual debt to Raewyn Connell, as one of my PhD supervisors about 20 years ago and as a lasting influence on my own sociological approach to research. One of the key themes of this book is that southern theorists are rarely read in the northern hemisphere. This is not the case for Connell, however, one of Australia's most internationally renowned scholars. The tome reads as the creative outpouring of her lifelong thirst for social science. Its main claim is that southern theory 'has as much intellectual power as metropolitan social thought, and more political relevance' (p. xii). A big but compelling claim, as I will explain.
Abstract:
This paper explains, somewhat along a Simmelian line, that political theory may produce practical and universal theories like those developed in theoretical physics. The reasoning behind this paper is to show that the Element of Democracy Theory may be true by comparing it to Einstein's Special Relativity, specifically concerning the parameters of symmetry, unification, simplicity, and utility. These parameters are what validate a theory in physics: meeting them not only fits with current knowledge but also produces paths towards testing (application). As the Element of Democracy Theory meets these same parameters, it could settle the debate concerning the definition of democracy. This will be shown firstly by discussing why no one has yet achieved a universal definition of democracy; secondly by explaining the parameters chosen (why these, and not others, confirm or scuttle theories); and thirdly by comparing how Special Relativity and the Element of Democracy Theory match the parameters.
Abstract:
Objectives: To explore whether people's organ donation consent decisions occur via a reasoned and/or social reaction pathway. --------- Design: We examined prospectively students' and community members' decisions to register consent on a donor register and discuss organ donation wishes with family. --------- Method: Participants completed items assessing the theory of planned behaviour (TPB; attitude, subjective norm, perceived behavioural control (PBC)), the prototype/willingness model (PWM; donor prototype favourability/similarity, past behaviour), and proposed additional influences (moral norm, self-identity, recipient prototypes) for registering (N=339) and discussing (N=315) intentions/willingness. Participants self-reported their registering (N=177) and discussing (N=166) behaviour 1 month later. The utility of the (1) TPB, (2) PWM, (3) augmented TPB with PWM, and (4) augmented TPB with PWM and extensions was tested using structural equation modelling for registering and discussing intentions/willingness, and logistic regression for behaviour. --------- Results: While the TPB proved a more parsimonious model, fit indices suggested that the other proposed models offered viable options, explaining greater variance in communication intentions/willingness. The TPB, augmented TPB with PWM, and extended augmented TPB with PWM best explained registering and discussing decisions. The proposed and revised PWM also proved an adequate fit for discussing decisions. Respondents with stronger intentions (and PBC for registering) had a higher likelihood of registering and discussing. --------- Conclusions: People's decisions to communicate donation wishes may be better explained via a reasoned pathway (especially for registering); however, discussing involves more reactive elements. The roles of moral norm, self-identity, and prototypes as influences predicting communication decisions were also highlighted.
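The behaviour stage of such a design (intentions at time 1 predicting registering/discussing behaviour a month later) is typically tested with logistic regression. The following is a minimal, self-contained sketch on synthetic data; the intention scores, sample, and fitting routine are illustrative assumptions, not the study's data or software.

```python
import math


def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """One-predictor logistic regression fitted by batch gradient descent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y          # log-loss gradient w.r.t. the intercept
            g1 += (p - y) * x    # log-loss gradient w.r.t. the slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1


def predict(b0, b1, x):
    """Predicted probability of performing the behaviour."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))


# Hypothetical 7-point intention scores and later behaviour (1 = registered)
xs = [1, 2, 3, 4, 5, 6, 7] * 10
ys = [0, 0, 0, 0, 1, 1, 1] * 10
b0, b1 = fit_logistic(xs, ys)
```

A positive fitted slope corresponds to the abstract's finding that respondents with stronger intentions had a higher likelihood of registering and discussing.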
Abstract:
A persistent question in the development of models for macroeconomic policy analysis has been the relative role of economic theory and evidence in their construction. This paper looks at some popular strategies that involve setting up a theoretical or conceptual model (CM) which is transformed to match the data and then made operational for policy analysis. A dynamic general equilibrium model is constructed that is similar to standard CMs. After calibration to UK data it is used to examine the utility of formal econometric methods in assessing the match of the CM to the data and also to evaluate some standard model-building strategies. Keywords: Policy oriented economic modeling; Model evaluation; VAR models
Abstract:
Differential axial shortening in vertical members of reinforced concrete high-rise buildings occurs due to shrinkage, creep and elastic shortening, which are time-dependent effects of concrete. This has to be quantified in order to make adequate provisions and mitigate its adverse effects. This paper presents a novel procedure for quantifying the axial shortening of vertical members using variations in the vibration characteristics of the structure, in lieu of gauges, which can pose problems during and after construction. The procedure is based on changes in the modal flexibility matrix, which is expressed as a function of the mode shapes and the reciprocals of the natural frequencies. This paper will present the development of this novel procedure.
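The modal flexibility matrix mentioned above has a standard closed form, F = Σ_i φ_i φ_iᵀ / ω_i², combining mode shapes φ_i with the reciprocals of the squared natural circular frequencies ω_i. A minimal sketch (assuming mass-normalised mode shapes; this is the textbook construction, not the paper's procedure):

```python
import numpy as np


def modal_flexibility(phi, freqs_hz):
    """Modal flexibility F = sum_i phi_i phi_i^T / omega_i^2.

    phi      : (n_dof, n_modes) matrix of mass-normalised mode shapes
    freqs_hz : natural frequencies in Hz
    """
    omega = 2.0 * np.pi * np.asarray(freqs_hz, dtype=float)
    # Divide each mode-shape column by its omega^2, then sum the outer products.
    return (phi / omega**2) @ phi.T
```

When every mode of a system is included, F equals the inverse stiffness matrix, which is why changes in the measured modes translate into changes in flexibility, and hence into deformation estimates.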
Abstract:
This thesis is about the derivation of the addition law on an arbitrary elliptic curve and efficiently adding points on this elliptic curve using the derived addition law. The outcomes of this research guarantee practical speedups in higher level operations which depend on point additions. In particular, the contributions immediately find applications in cryptology. First mastered by 19th-century mathematicians, the theory of elliptic curves has remained an active area of study for decades. Elliptic curves over finite fields made their way into public key cryptography in the late 1980s with independent proposals by Miller [Mil86] and Koblitz [Kob87]. Elliptic Curve Cryptography (ECC), following Miller's and Koblitz's proposals, employs the group of rational points on an elliptic curve in building discrete logarithm based public key cryptosystems. Starting in the late 1990s, the emergence of the ECC market boosted research into the computational aspects of elliptic curves. This thesis falls into this same area of research, where the main aim is to speed up the addition of rational points on an arbitrary elliptic curve (over a field of large characteristic). The outcomes of this work can be used to speed up applications which are based on elliptic curves, including cryptographic applications in ECC. The aforementioned goals of this thesis are achieved in five main steps. As the first step, this thesis brings together several algebraic tools in order to derive the unique group law of an elliptic curve. This step also includes an investigation of recent computer algebra packages and their capabilities. Although the group law is unique, its evaluation can be performed using abundant (in fact infinitely many) formulae. As the second step, this thesis advances the search for the best formulae for efficient addition of points. In the third step, the group law is stated explicitly by handling all possible summands.
The fourth step presents the algorithms to be used for efficient point additions. In the fifth and final step, optimized software implementations of the proposed algorithms are presented in order to show that the theoretical speedups of step four can be practically obtained. In each of the five steps, this thesis focuses on five forms of elliptic curves over finite fields of large characteristic. These forms and their defining equations are as follows: (a) Short Weierstrass form, y^2 = x^3 + ax + b; (b) Extended Jacobi quartic form, y^2 = dx^4 + 2ax^2 + 1; (c) Twisted Hessian form, ax^3 + y^3 + 1 = dxy; (d) Twisted Edwards form, ax^2 + y^2 = 1 + dx^2y^2; (e) Twisted Jacobi intersection form, bs^2 + c^2 = 1, as^2 + d^2 = 1. These forms are the most promising candidates for efficient computations and are thus considered in this work. Nevertheless, the methods employed in this thesis are capable of handling arbitrary elliptic curves. From a high-level point of view, the following outcomes are achieved in this thesis. - Related literature results are brought together and further revisited. For most of the cases, several missed formulae, algorithms, and efficient point representations are discovered. - Analogies are made among all studied forms. For instance, it is shown that two sets of affine addition formulae are sufficient to cover all possible affine inputs, as long as the output is also an affine point, in any of these forms. In the literature, many special cases, especially interactions with points at infinity, were omitted from discussion. This thesis handles all of the possibilities. - Several new point doubling/addition formulae and algorithms are introduced, which are more efficient than the existing alternatives in the literature. Most notably, the speeds of the extended Jacobi quartic, twisted Edwards, and Jacobi intersection forms are improved. New unified addition formulae are proposed for the short Weierstrass form. New coordinate systems are studied for the first time.
- An optimized implementation is developed using a combination of generic x86-64 assembly instructions and the plain C language. The practical advantages of the proposed algorithms are supported by computer experiments. - All formulae, presented in the body of this thesis, are checked for correctness using computer algebra scripts together with details on register allocations.
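For the short Weierstrass form y^2 = x^3 + ax + b listed above, the affine group law can be sketched with the classical chord-and-tangent formulae. This is the standard textbook formulation over a prime field, not the thesis's optimised formulae or coordinate systems:

```python
def ec_add(P, Q, a, p):
    """Affine addition on y^2 = x^3 + a*x + b over GF(p).

    Points are (x, y) tuples; None represents the point at infinity O.
    """
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = O
    if P == Q:                                        # tangent slope for doubling
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                             # chord slope for addition
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)
```

For example, on y^2 = x^3 + 2x + 2 over GF(17), the point (5, 1) generates the group, and repeated `ec_add` calls walk through its multiples. Note the two slope branches: covering doubling, generic addition, and the infinity cases separately is exactly the case analysis the thesis's unified formulae aim to eliminate.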
Abstract:
Theory-of-Mind has been defined as the ability to explain and predict human behaviour by imputing mental states, such as attention, intention, desire, emotion, perception and belief, to the self and others (Astington & Barriault, 2001). Theory-of-Mind study began with Piaget and continued through a tradition of meta-cognitive research projects (Flavell, 2004). A study by Baron-Cohen, Leslie and Frith (1985) of Theory-of-Mind abilities in atypically developing children reported major difficulties experienced by children with autism spectrum disorder (ASD) in imputing mental states to others. Since then, a wide range of follow-up research has been conducted to confirm these results. Traditional Theory-of-Mind research on ASD has been based on an either-or assumption that Theory-of-Mind is something one either possesses or does not. However, this approach fails to take account of how the ASD population themselves experience Theory-of-Mind. This paper suggests an alternative approach, the Theory-of-Mind continuum model, to understand the Theory-of-Mind experience of people with ASD. The Theory-of-Mind continuum model is developed through a comparison of subjective and objective aspects of mind, and phenomenal and psychological concepts of mind. This paper demonstrates the importance of balancing qualitative and quantitative research methods in investigating the minds of people with ASD. It will enrich our theoretical understanding of Theory-of-Mind, as well as carry methodological implications for further studies in Theory-of-Mind.
Abstract:
Over recent years, Unmanned Air Vehicles, or UAVs, have become a powerful tool for reconnaissance and surveillance tasks. These vehicles are now available in a broad size and capability range and are intended to fly in regions where the presence of onboard human pilots is either too risky or unnecessary. This paper describes the formulation and application of a design framework that supports the complex task of multidisciplinary design optimisation of UAV systems via evolutionary computation. The framework includes a Graphical User Interface (GUI), a robust Evolutionary Algorithm optimiser named HAPEA, several design modules, mesh generators and post-processing capabilities in an integrated platform. Population-based algorithms such as EAs are well suited to problems where the search space can be multi-modal, non-convex or discontinuous, with multiple local minima and with noise, and also to problems where we look for multiple solutions via Game Theory, namely a Nash equilibrium point or a Pareto set of non-dominated solutions. The application of the methodology is illustrated on conceptual and detailed multi-criteria and multidisciplinary shape design problems. Results indicate the practicality and robustness of the framework in finding optimal shapes and trade-offs between the disciplinary analyses, and in producing a set of non-dominated solutions forming an optimal Pareto front for the designer.
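The Pareto set of non-dominated solutions mentioned above is defined by the dominance relation between objective vectors. A minimal sketch, using the minimisation convention (illustrative only, not the HAPEA implementation):

```python
def dominates(u, v):
    """u dominates v (minimisation): no worse in every objective, better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))


def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For instance, among candidate designs scored on (drag, structural weight), `pareto_front` keeps exactly those designs that cannot be improved in one objective without worsening the other, which is the trade-off set presented to the designer.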