981 results for Critical Set
Abstract:
The use of certain performance-enhancing substances and methods has been defined as a major ethical breach by parties involved in the governance of high-performance sport. As a result, elite athletes worldwide are subject to rules and regulations set out in international and national anti-doping policies. Existing literature on the development of policies such as the World Anti-Doping Code and the Canadian Anti-Doping Program suggests a sport system in which athletes are rarely meaningfully involved in policy development (Houlihan, 2004a). Additionally, it is suggested that this lack of involvement reflects a similar lack of involvement in other areas of governance concerning athletes' lives. The purpose of this thesis is to examine the history and current state of athletes' involvement in the anti-doping policy process in Canada's high-performance sport system. It includes discussion and analysis of recently conducted interviews with those involved in the policy process, as well as an analysis of relevant documents, including anti-doping policies. The findings demonstrate that Canadian athletes have not been significantly involved in the creation of recently developed anti-doping policies and that a re-evaluation of current policies is necessary to more fully recognize the reality of athletes' lives in Canada's high-performance sport system and their rights within that system.
Abstract:
The intent in this study was to investigate in what ways teachers' beliefs about education and teaching are expressed in the specific teaching behaviours they employ, and whether teaching behaviours, as perceived by their students, are correlated with students' critical thinking and self-directed learning. To this end the relationships studied were: among faculty members' philosophy of teaching, locus of control orientation, psychological type, and observed teaching behaviour; and among students' psychological type, perceptions of teaching behaviour, self-directed learning readiness, and critical thinking. The overall purpose of the study was to investigate whether the implicit goals of higher education, critical thinking and self-direction, were actually accounted for in the university classroom. The research was set within the context of path-goal theory, adapted from the leadership literature. Within this framework, Mezirow's work on transformative learning, including the influences of Habermas' writings, was integrated to develop a theoretical perspective upon which to base the research methodology. Both qualitative and quantitative methodologies were incorporated. Four faculty and a total of 142 students participated in the study. Philosophy of teaching was described through faculty interviews and completion of a repertory grid. Faculty completed a descriptive locus of control scale and a psychological type test. Observations of their teaching behaviour were conducted. Students completed a Teaching Behaviour Assessment Scale, the Self-Directed Learning Readiness Scale, a psychological type test, and the Watson-Glaser Critical Thinking Appraisal. A small sample of students was interviewed. Follow-up discussions with faculty were used to validate the interview, observation, teaching behaviour, and repertory grid data. Results indicated that some discrepancies existed between faculty members' espoused philosophies of teaching and their observed teaching behaviour. Instructors' teaching behaviour, however, was a function of their personal theory of practice. Relationships were found between perceived teaching behaviour and students' self-directed learning and critical thinking, but these varied across situations, as would be predicted from path-goal theory. The psychological types of students and instructor also accounted for some of the variability in the relationships studied. Student psychological type could be shown as a partial predictor of self-directed learning readiness. The results were discussed in terms of theory development and implications for further research and practice.
Abstract:
Employing critical pedagogy and transformative theory as a theoretical framework, I examined a learning process associated with building capacity in community-based organizations (CBOs) through an investigation of the Institutional Capacity Building Program (ICBP) initiated by a Foundation. The study sought to: (a) examine the importance of institutional capacity building for individual and community development; (b) investigate elements of a process associated with a program and characteristics of a learning process for building capacity in CBOs; and (c) analyze the Foundation’s approach to synthesizing, systematizing, and sharing learning. The study used a narrative research design that included 3 one-on-one, hour-long interviews with 2 women having unique vantage points in ICBP: one is a program facilitator working at the Foundation and the other runs a CBO supported by the Foundation. The interviews’ semistructured questions allowed interviewees to share stories regarding their experience with the learning process of ICB and enabled themes to emerge from their day-to-day experience. Through the analysis of this learning process for institutional capacity building, a few lessons can be drawn from the experience of the Foundation.
Abstract:
The amalgamation operation is frequently used to reduce the number of parts of compositional data, but it is a non-linear operation in the simplex with the usual geometry, the Aitchison geometry. The concept of balances between groups, a particular coordinate system defined over binary partitions of the parts, could be an alternative to amalgamation in some cases. In this work we discuss the proper application of both concepts using a real data set corresponding to behavioral measures of pregnant sows.
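As a rough sketch of the contrast (not taken from the paper; the three-part composition and the group split are invented), an amalgamation simply sums the parts of a group, whereas a balance is an isometric log-ratio coordinate built from the geometric means of the two groups:

```python
import numpy as np

def amalgamate(x, group):
    """Amalgamation: sum the parts in `group` (a non-linear operation in the Aitchison geometry)."""
    return x[group].sum()

def balance(x, group_r, group_s):
    """Balance between two groups of parts:
    sqrt(r*s/(r+s)) * ln(geometric_mean(group_r) / geometric_mean(group_s))."""
    r, s = len(group_r), len(group_s)
    gm_r = np.exp(np.log(x[group_r]).mean())
    gm_s = np.exp(np.log(x[group_s]).mean())
    return np.sqrt(r * s / (r + s)) * np.log(gm_r / gm_s)

# Hypothetical 3-part composition (e.g. proportions of time spent in three behaviours)
x = np.array([0.50, 0.30, 0.20])

print(amalgamate(x, [0, 1]))    # 0.80: parts 0 and 1 merged into a single part
print(balance(x, [0, 1], [2]))  # balance coordinate of {0, 1} against {2}
```

Unlike the amalgamation, the balance is a coordinate of the Aitchison geometry, which is why it can serve as a substitute in analyses that rely on that geometry.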
Abstract:
This paper develops some theoretical and methodological considerations for the development of a critical competence model (CCM). The model is defined as a functionally organized set of skills and knowledge that allows measurable results with positive consequences for strategic business objectives. The theoretical approaches of the classical model of competences, the contemporary model of competencies, and the human competencies model were reviewed for the development of the proposal. Implementation of the model includes 5 steps: 1) conducting a job analysis, considering which dimensions or facets are subject to revision; 2) identifying people with contrasting performance (the highest and lowest performers); 3) identifying the critical incidents most relevant to the job position; 4) developing behavioral expectation scales (BES); and 5) validating the resulting BES with experts in the field. As a final consideration, it is argued that competence models require accurate measurement. Approaches marked by excessive theoreticism may cause the issue of competence to become a business fashion with low or minimal impact, affecting its validity, reliability, and deployment in organizations.
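Purely as an illustration of how steps 2 and 4 might be operationalized (nothing here comes from the paper; the names, scores, and scale anchors are hypothetical), contrasting performers can be selected from performance ratings and critical incidents attached to anchor points of a behavioral expectation scale:

```python
from dataclasses import dataclass

@dataclass
class Incumbent:
    name: str
    performance_score: float  # hypothetical overall job-performance rating

@dataclass
class BESAnchor:
    scale_point: int          # e.g. 1 (ineffective) .. 7 (highly effective)
    critical_incident: str    # behavioral example anchoring this scale point

def contrasting_performers(people, k=1):
    """Step 2: pick the k lowest and k highest performers for comparison."""
    ranked = sorted(people, key=lambda p: p.performance_score)
    return ranked[:k], ranked[-k:]

people = [Incumbent("A", 4.1), Incumbent("B", 2.3), Incumbent("C", 4.8), Incumbent("D", 1.9)]
low, high = contrasting_performers(people)

# Step 4: a fragment of a behavioral expectation scale built from critical incidents
bes = [
    BESAnchor(1, "Ignores client complaints until they are escalated"),
    BESAnchor(7, "Anticipates client needs and resolves issues before escalation"),
]
```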
Abstract:
Different procedures to obtain atom-condensed Fukui functions are described. It is shown how the resulting values may differ depending on the exact approach to atom-condensed Fukui functions. The condensed Fukui function can be computed using either the fragment-of-molecular-response approach or the response-of-molecular-fragment approach. The two approaches are nonequivalent; only the latter corresponds in general with a population-difference expression. The Mulliken scheme does not depend on the approach taken but has some computational drawbacks. The different resulting expressions are tested for a wide set of molecules. In practice one must make seemingly arbitrary choices about how to compute condensed Fukui functions, which suggests questioning the role of these indicators in conceptual density-functional theory.
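For orientation, the population-difference (response-of-molecular-fragment) expressions reduce to simple differences of atomic electron populations computed for the N-1, N, and N+1 electron systems; the sketch below uses invented populations for a hypothetical three-atom molecule and is not the paper's own code:

```python
import numpy as np

# Hypothetical atomic electron populations p_k for a 3-atom molecule,
# obtained (e.g. from a Mulliken-type analysis) for the N-1, N and N+1 electron systems.
p_Nm1 = np.array([7.10, 0.95, 0.95])   # cation  (N-1 electrons)
p_N   = np.array([7.60, 1.20, 1.20])   # neutral (N electrons)
p_Np1 = np.array([8.30, 1.35, 1.35])   # anion   (N+1 electrons)

# Population-difference condensed Fukui functions:
f_minus = p_N - p_Nm1    # electrophilic attack: f_k^- = p_k(N)   - p_k(N-1)
f_plus  = p_Np1 - p_N    # nucleophilic attack:  f_k^+ = p_k(N+1) - p_k(N)

print(f_minus, f_minus.sum())   # each vector sums to ~1 by construction
print(f_plus, f_plus.sum())
```

Each condensed Fukui vector sums to one because the total electron populations of the three systems differ by exactly one electron.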
Abstract:
The effect of basis set superposition error (BSSE) on molecular complexes is analyzed. The BSSE causes artificial delocalizations which modify the first-order electron density. The mechanism of this effect is assessed for the hydrogen fluoride dimer with several basis sets. The BSSE-corrected first-order electron density is obtained using the chemical Hamiltonian approach versions of the Roothaan and Kohn-Sham equations. The corrected densities are compared to uncorrected densities on the basis of the charge density critical points. Contour difference maps between BSSE-corrected and uncorrected densities on the molecular plane are also plotted to gain insight into the effects of BSSE correction on the electron density.
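The paper removes the BSSE through the chemical Hamiltonian approach; purely for orientation, the more familiar counterpoise estimate of the error for a dimer AB (a standard textbook expression, not the method used in this work) reads

$$
\delta_{\mathrm{BSSE}} = \left[ E_A^{\alpha}(A) - E_A^{\alpha\beta}(A) \right] + \left[ E_B^{\beta}(B) - E_B^{\alpha\beta}(B) \right],
$$

where $E_X^{\,\text{basis}}(X)$ is the energy of monomer $X$ at its in-dimer geometry computed either in its own basis ($\alpha$ or $\beta$) or in the full dimer basis ($\alpha\beta$); the counterpoise-corrected interaction energy is then $\Delta E^{\mathrm{CP}} = E_{AB}^{\alpha\beta}(AB) - E_A^{\alpha\beta}(A) - E_B^{\alpha\beta}(B)$.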
Abstract:
Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and insight into the changes of bond critical point properties, self-similarity values, and density differences is provided.
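A minimal sketch of one common QMS measure, a Carbó-like similarity index between two densities sampled on a common grid, is shown below; the one-dimensional Gaussian "densities" and the function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def overlap_measure(rho_a, rho_b, dv):
    """Overlap-like similarity measure Z_AB = integral of rho_A(r) * rho_B(r) dr on a uniform grid."""
    return np.sum(rho_a * rho_b) * dv

def carbo_index(rho_a, rho_b, dv):
    """Carbó similarity index C_AB = Z_AB / sqrt(Z_AA * Z_BB); 1 means identical density shapes."""
    z_ab = overlap_measure(rho_a, rho_b, dv)
    z_aa = overlap_measure(rho_a, rho_a, dv)
    z_bb = overlap_measure(rho_b, rho_b, dv)
    return z_ab / np.sqrt(z_aa * z_bb)

# Toy 1D "densities": two Gaussians, the second slightly shifted (e.g. field off vs. field on)
x = np.linspace(-5, 5, 2001)
dv = x[1] - x[0]
rho_0 = np.exp(-x**2)            # unperturbed density
rho_f = np.exp(-(x - 0.1)**2)    # density distorted by a weak external field

print(carbo_index(rho_0, rho_f, dv))   # close to, but below, 1
```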
Abstract:
The thesis which follows, entitled "The Postoccidental Deconstruction and Resignification of 'Modernity': A Critical Analysis", is an exposition and criticism of the critique of occidental modernity found in a group of writings which identify their critique with a "postoccidental" point of view with respect to postcolonial studies. The general problem of the investigation concerns the significance and reach of this critique of modernity in relation to the ongoing debate, in Latin American studies, about the historical relationship between Latin America, as a multicultural/structurally heterogeneous region, and the industrial societies of Europe and North America. A brief Preface explains the genealogy of the author's ideas on this subject. Following this preface, the thesis proceeds to analyze the writings in this corpus through an intertextual, schematic approach which singles out two major elements of the postoccidental critique: "coloniality" and "eurocentrism". These two main elements are investigated in the Introduction and Chapters One and Two, in terms of how they distinguish postoccidental analysis from other theoretical tendencies with which it has affinities but whose key concepts it reformulates in ways that are key to the unique approach which postoccidental analysis takes to modernity, the nature of the capitalist world system, colonialism, subalternization, center/periphery, and development. Chapter Three attempts a critical analysis of the foregoing postoccidentalist deconstruction according to the following question: to what extent does it succeed in deconstructing "modernity" as a term which refers to a historically articulated set of discourses whose underlying purpose has been to justify European and North American hegemony and structural asymmetries vis-à-vis the peripheries of the capitalist world system, based on an ethnocentric, racialist logic of exploitation and subalternization of non-European peoples? A Conclusion follows Chapter Three.
Abstract:
The theory of harmonic force constant refinement calculations is reviewed, and a general-purpose program for force constant and normal coordinate calculations is described. The program, called ASYM20, is available through the Quantum Chemistry Program Exchange. It will work on molecules of any symmetry containing up to 20 atoms and will produce results on a series of isotopomers as desired. The vibrational secular equations are solved in either nonredundant valence internal coordinates or symmetry coordinates. As well as calculating the (harmonic) vibrational wavenumbers and normal coordinates, the program will calculate centrifugal distortion constants, Coriolis zeta constants, harmonic contributions to the α's, root-mean-square amplitudes of vibration, and other quantities related to gas electron-diffraction studies and thermodynamic properties. The program will work in either a predict mode, in which it calculates results from an input force field, or a refine mode, in which it refines an input force field by least squares to fit observed data on the quantities mentioned above. Predicate values of the force constants may be included in the data set for a least-squares refinement. The program is written in FORTRAN for use on a PC or a mainframe computer. Operation is mainly controlled by steering indices in the input data file, but some interactive control is also implemented.
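The vibrational secular equations referred to here are, in the standard Wilson GF formulation, the eigenvalue problem of the matrix product GF; the sketch below (generic textbook relations with an approximate HF-stretch force constant, not ASYM20 itself) shows how harmonic wavenumbers follow from G and F:

```python
import numpy as np

C_CM_PER_S = 2.99792458e10   # speed of light, cm/s
AMU_KG = 1.66053907e-27      # atomic mass unit, kg
MDYN_PER_ANG_TO_N_PER_M = 100.0

def harmonic_wavenumbers(G, F):
    """Solve the vibrational secular equation GF L = L Lambda and return
    harmonic wavenumbers in cm^-1 (G in amu^-1, F in mdyn/Angstrom)."""
    lam = np.sort(np.real(np.linalg.eigvals(G @ F)))            # eigenvalues of GF
    omega = np.sqrt(lam * MDYN_PER_ANG_TO_N_PER_M / AMU_KG)     # angular frequencies, rad/s
    return omega / (2.0 * np.pi * C_CM_PER_S)                   # wavenumbers, cm^-1

# One-coordinate example: the bond stretch of hydrogen fluoride (HF).
mu = 1.00783 * 18.99840 / (1.00783 + 18.99840)   # reduced mass, amu
G = np.array([[1.0 / mu]])                       # diatomic stretch G matrix: 1/m_H + 1/m_F = 1/mu
F = np.array([[9.66]])                           # stretch force constant, mdyn/Angstrom (approximate)
print(harmonic_wavenumbers(G, F))                # ~4.1e3 cm^-1, near the harmonic wavenumber of HF
```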
Abstract:
This paper analyses the kind of reader constructed in the Lives and the response expected of that reader. It begins by attempting a typology of moralising in the Lives. Plutarch does sometimes make general 'gnomic' statements about right and wrong, and occasionally passes explicit judgement on a subject's behaviour. In addition, the language with which Plutarch describes character is inherently moralistic; and even when he does not pass explicit judgement, Plutarch can rely on a common set of notions about what makes behaviour virtuous or vicious. However, the application of any moral lessons is left to the reader's own judgement. Furthermore, Plutarch's use of multiple focalisations means that the reader is sometimes presented with varying ways of looking at the same individual or the same historical situation. In addition, many incidents or anecdotes are marked by 'multivalence': that is, they resist reduction to a single moral message or lesson. In such cases, the reader is encouraged to exercise his or her own critical faculties. Indeed, the prologues which precede many pairs of Lives and the synkriseis which follow them sometimes explicitly invite the reader's participation in the work of judging. The syncritic structure of the Parallel Lives also invites the reader's participation, as do the varying perspectives provided by a corpus of overlapping Lives. In fact, the presence of a critical, engaged reader is presupposed by the agonistic nature of much of Greek literature, and of several texts in the Moralia which stage opposing viewpoints or arguments. Plutarch himself argues for such a reader in his How the young man should listen to poems.
Abstract:
Hardcore, or long-term derelict and vacant brownfield sites which are often contaminated, form a significant proportion of brownfield land in many cities, not only in the UK but also in other countries. The recent economic recession has placed the economic viability of such sites in jeopardy. This paper compares the approaches for bringing hardcore brownfield sites back into use in England and Japan by focusing on ten case studies in Manchester and Osaka, using an 'agency'-based framework. The findings are set in the context of (i) national brownfield and related policy agendas; (ii) recent trends in land and property markets in both England and Japan; and (iii) city-level comparisons of brownfields in Manchester and Osaka. The research, which was conducted during 2009-10, suggests that hardcore brownfield sites have been badly affected by the recent recession in both Manchester and Osaka. Despite this, not only is there evidence that hardcore sites have been successfully regenerated in both cities, but also that the critical success factors (CSFs) operating in bringing sites back into use share a large degree of commonality. These CSFs include the presence of strong potential markets, seeing the recession as an opportunity, long-term vision, strong branding, strong partnerships, integrated development, and getting infrastructure into place. Finally, the paper outlines the policy implications of the research.
Abstract:
The paper reviews the leading diagramming methods employed in system dynamics to communicate the contents of models. The main ideas and historical development of the field are first outlined. Two diagramming methods—causal loop diagrams (CLDs) and stock/flow diagrams (SFDs)—are then described and their advantages and limitations discussed. A set of broad research directions is then outlined. These concern: the abilities of different diagrams to communicate different ideas, the role that diagrams have in group model building, and the question of whether diagrams can be an adequate substitute for simulation modelling. The paper closes by suggesting that although diagrams alone are insufficient, they have many benefits. However, since these benefits have emerged only as ‘craft wisdom’, a more rigorous programme of research into the diagrams' respective attributes is called for.
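For readers unfamiliar with the notation, a stock/flow diagram maps directly onto numerical integration of a stock by its net flow; the tiny sketch below (an invented one-stock example, not drawn from the paper) shows that correspondence:

```python
# Minimal stock/flow simulation: a single stock accumulates (inflow - outflow) over time.
# All parameter values are invented purely for illustration.

def simulate(stock=100.0, inflow_rate=0.10, outflow_rate=0.06, dt=0.25, t_end=10.0):
    t, history = 0.0, [(0.0, stock)]
    while t < t_end:
        inflow = inflow_rate * stock      # flow fed by the stock (a reinforcing loop in a CLD)
        outflow = outflow_rate * stock    # flow draining the stock (a balancing loop in a CLD)
        stock += (inflow - outflow) * dt  # Euler integration of the stock
        t += dt
        history.append((round(t, 2), round(stock, 2)))
    return history

for t, s in simulate()[::8]:
    print(t, s)
```

The point of the example is only that the diagram and the difference equations carry the same information; whether the diagram alone suffices is exactly the research question the paper raises.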
Abstract:
Purpose – This paper seeks to summarise the main research findings from a detailed, qualitative set of structured interviews and case studies of Real Estate Partnership (REP) schemes in the UK, which involve the construction of built facilities. The research, which was funded by the Foundation for the Built Environment, examines the evolution of REPs in the UK and in Europe. The paper also aims to analyse best practice, critical factors for success, and lessons for the future. Design/methodology/approach – The research in this paper is based around ten semi-structured interviews conducted with senior representatives from corporate occupiers, property consultants, legal practices and REP service providers. Findings – The research in the paper demonstrates that REPs are particularly suited to the UK, where lease lengths are relatively long, and the level of corporate real estate owner-occupation is often higher than elsewhere. It also shows that further research is needed to examine the future shape and form of the UK REP market. Research limitations/implications – The paper is based on a limited number of in-depth case study interviews. The paper shows that further research is needed to find better ways to examine REPs empirically. Practical implications – The paper is important in highlighting a number of main issues in developing REPs: identifying with the occupier's objectives; risk transfer and size of contract; and developing appropriate innovation and skills. Originality/value – The paper examines the drivers, barriers and critical success factors (at strategic and operational levels) for REPs in the UK in detail and will be of value to property managers, facilities managers, investors, financiers, and others involved in the REP process.
Abstract:
Purpose – This paper summarises the main research findings from a detailed, qualitative set of structured interviews and case studies of private finance initiative (PFI) schemes in the UK, which involve the construction of built facilities. The research, which was funded by the Foundation for the Built Environment, examines the emergence of PFI in the UK. Benefits and problems in the PFI process are investigated. Best practice, the key critical factors for success, and lessons for the future are also analysed. Design/methodology/approach – The research is based around 11 semi-structured interviews conducted with stakeholders in key PFI projects in the UK. Findings – The research demonstrates that value for money and risk transfer are key success criteria. High procurement and transaction costs are a feature of PFI projects, and the large-scale nature of PFI projects frequently acts as a barrier to entry. Research limitations/implications – The research is based on a limited number of in-depth case study interviews. The paper also shows that further research is needed to find better ways to measure these concepts empirically. Practical implications – The paper is important in highlighting four main areas of practical improvement in the PFI process: value for money assessment; establishing end-user needs; developing competitive markets; and developing appropriate skills in the public sector. Originality/value – The paper examines the drivers, barriers and critical success factors for PFI in the UK for the first time in detail and will be of value to property investors, financiers, and others involved in the PFI process.