921 results for Invariance Principle
Abstract:
Five Canadian high school Chemistry classes in one school, taught by three different teachers, studied the concepts of dynamic chemical equilibria and Le Chatelier’s Principle. Some students received traditional teacher-led explanations of the concept first and used an interactive scientific visualisation second, while others worked with the visualisation first and received the teacher-led explanation second. Students completed a test of their conceptual understanding of the relevant concepts prior to instruction, after the first instructional session and at the end of instruction. Data on students’ academic achievement (highest, middle or lowest third of the class on the mid-term exam) and gender were also collected to explore the relationship between these factors, conceptual development and instructional sequencing. Results show, within this context at least, that teaching sequence is not important in terms of students’ conceptual learning gains.
Abstract:
This article suggests that the issue of proportionality in anti-doping sanctions has been inconsistently dealt with by the Court of Arbitration for Sport (CAS). Given CAS’s pre-eminent role in interpreting and applying the World Anti-Doping Code under the anti-doping policies of its signatories, an inconsistent approach to the application of the proportionality principle will cause difficulties for domestic anti-doping tribunals seeking guidance as to the appropriateness of their doping sanctions.
Abstract:
This chapter explores the objectives, principles and methods of climate law. The United Nations Framework Convention on Climate Change (UNFCCC) lays the foundations of the international regime by setting out its ultimate objective in Article 2, the key principles in Article 3, and the methods of the regime in Article 4. The ultimate objective of the regime – to avoid dangerous anthropogenic interference – is examined, and assessments of the Intergovernmental Panel on Climate Change (IPCC) are considered when seeking to understand the definition of this concept. The international environmental principles of state sovereignty and responsibility, preventative action, cooperation, sustainable development, precaution, polluter pays and common but differentiated responsibility are then examined, and their incorporation within the international climate regime instruments is evaluated. This is followed by an examination of the methods used by the mitigation and adaptation regimes in seeking to achieve the objective of the UNFCCC. Methods of the mitigation regime include the domestic implementation of policies, the setting of standards and targets, the allocation of rights, the use of flexibility mechanisms, and reporting. The methods of the adaptation regime are still evolving, but include measures such as impact assessments, national adaptation plans and the provision of funding.
Abstract:
A Delay Tolerant Network (DTN) is one in which nodes can be highly mobile and message delays can be long, forming dynamic, fragmented networks. Traditional centralised network security is difficult to implement in such a network, so distributed security solutions are more desirable in DTN implementations. Establishing effective trust in distributed systems with no centralised Public Key Infrastructure (PKI), such as the Pretty Good Privacy (PGP) scheme, usually requires human intervention. Our aim is to build and compare different decentralised trust systems for implementation in autonomous DTN systems. In this paper, we utilise a key distribution model based on the Web of Trust principle, and employ a simple ‘leverage of common friends’ trust system to establish initial trust in autonomous DTNs. We compare this system with two other methods of autonomously establishing initial trust by introducing a malicious node and measuring the distribution of malicious and fake keys. Our results show that the new trust system not only reduces the distribution of malicious and fake keys by 40% by the end of the simulation, but also improves key distribution between nodes. This paper contributes a comparison of three decentralised trust systems that can be employed in autonomous DTN systems.
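The abstract does not spell out how the ‘leverage of common friends’ rule works, so the following Python sketch illustrates one plausible reading: a node accepts a newly received key only when at least a threshold number of nodes it already trusts (the ‘common friends’) endorse it. The class, method names, and threshold are illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch of a "common friends" trust rule for key exchange in a DTN.
# The paper does not publish its algorithm; names, the threshold, and the
# data model here are illustrative assumptions, not the authors' code.

class Node:
    def __init__(self, node_id, min_common_friends=2):
        self.node_id = node_id
        self.trusted_keys = {}      # node_id -> public key accepted as genuine
        self.endorsements = {}      # node_id -> set of endorsing node_ids
        self.min_common_friends = min_common_friends

    def receive_key(self, sender_id, key, endorsers):
        """Accept a key only if enough already-trusted nodes vouch for it."""
        self.endorsements.setdefault(sender_id, set()).update(endorsers)
        common = self.endorsements[sender_id] & set(self.trusted_keys)
        if len(common) >= self.min_common_friends:
            self.trusted_keys[sender_id] = key
            return True
        return False    # not enough common friends yet; hold off

# Usage: node A already trusts B and C; a key for D arrives endorsed by both,
# while a key from M arrives endorsed only by an unknown node X.
a = Node("A")
a.trusted_keys.update({"B": "pkB", "C": "pkC"})
assert a.receive_key("D", "pkD", endorsers={"B", "C"})        # accepted
assert not a.receive_key("M", "pkM", endorsers={"X"})         # rejected
```

A rule of this shape mitigates fake-key injection because a lone malicious node cannot, by itself, supply the multiple independent endorsements the threshold demands.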
Abstract:
What are the information practices of teen content creators? In the United States over two thirds of teens have participated in creating and sharing content in online communities that are developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience the information. From this investigation emerged a model of their information practices while creating and sharing content such as film-making, visual art work, storytelling, music, programming, and website design in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data were gathered through semi-structured interviews and observation of teens’ digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data were collected until a substantive theory was constructed and no new information emerged from data collection. The theory constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Describing the five information practices requires three descriptive components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities that occur in the categories of gathering, thinking and creating. The experiences of information and information actions intersect in the information practices, which are situated within the specific community of practice, such as a digital participatory community. Finally, the information practices interact and build upon one another, and this is represented in a graphic model with an accompanying explanation.
Abstract:
In this panel, we showcase approaches to teaching for creativity in disciplines of the Media, Entertainment and Creative Arts School and the School of Design within the Creative Industries Faculty (CIF) at QUT. The Faculty is enormously diverse, with 4,000 students enrolled across a total of 20 disciplines. Creativity is a unifying concept in CIF, both as a graduate attribute and as a key pedagogic principle. We take as our point of departure the assertion that it is not sufficient to assume that students of tertiary courses in creative disciplines are ‘naturally’ creative. Rather, teachers in higher education must embrace their roles as facilitators of development and learning for the creative workforce, including working to build creative capacity (Howkins, 2009). In so doing, we move away from Renaissance notions of creativity as individual genius, a disposition or attribute which cannot be learned, towards a 21st century conceptualisation of creativity as highly collaborative, rhizomatic, and able to be developed through educational experiences (see, for instance, Robinson, 2006; Craft, 2001; McWilliam & Dawson, 2008). It has always been important for practitioners of the arts and design to be creative. Under the national innovation agenda (Bradley et al., 2008) and creative industries policy (e.g., Department for Culture, Media and Sport, 2008; Office for the Arts, 2011), creativity has been identified as a key determinant of economic growth, and developing students’ creativity has thus become core higher education business across all fields. Even within the arts and design, professionals are challenged to be creative in new ways, for new purposes, in different contexts, and using new digital tools and platforms. Teachers in creative disciplines may have much to offer the rest of the higher education sector in terms of designing and modelling innovative, best-practice pedagogies for the development of student creative capability. Information and Communication Technologies such as mobile learning, game-based learning, collaborative online learning tools and immersive learning environments offer new avenues for creative learning, although analogue approaches may also have much to offer and should not be discarded out of hand. Each panellist will present a case study of their own approach to teaching for creativity, and will address the following questions with respect to their case: 1. What conceptual view of creativity does the case reflect? 2. What pedagogical approaches are used, and why were these chosen? What are the roles of innovative learning approaches, including ICTs, if any? 3. How is creativity measured or assessed? How do students demonstrate creativity? We seek to identify commonalities and contrasts between and among the pedagogic case studies, and to answer the question: what can we learn about teaching creatively and teaching for creativity from CIF best practice?
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing models were developed from the theory underlying the Proportional Hazard Model (PHM). However, most of them have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) into a single model for more effective hazard and reliability prediction. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). Nevertheless, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators can be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM’s restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified: extending the new parameter estimation method to time-dependent covariate effects and missing data, applying EHM to both repairable and non-repairable systems using field data, and building a decision support model linked to the estimated reliability results.
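The abstract describes EHM’s structure but not its functional form. As a rough illustration of the kind of model it sketches, the following Python snippet assumes a Weibull baseline hazard reshaped by a condition indicator z(t) and scaled by operating environment covariates w through an exponential link; every functional form and parameter value here is an assumption for illustration, not the thesis’s actual specification.

```python
# Illustrative sketch of a semi-parametric covariate-based hazard model in the
# spirit of the EHM described above. The Weibull baseline, the exp(alpha*z)
# condition term, and the exponential link for operating-environment
# covariates are assumed forms, not taken from the thesis.
import numpy as np

def hazard(t, z, w, beta=2.0, eta=1000.0, alpha=0.02, gamma=(0.5,)):
    """h(t, z, w): Weibull baseline reshaped by condition indicator z,
    scaled by operating-environment covariates w."""
    baseline = (beta / eta) * (t / eta) ** (beta - 1) * np.exp(alpha * z)
    return baseline * np.exp(np.dot(gamma, w))

def reliability(t_grid, z_path, w, **kw):
    """R(t) = exp(-cumulative hazard), integrating h over t_grid (trapezoid)."""
    h = np.array([hazard(t, z, w, **kw) for t, z in zip(t_grid, z_path)])
    cum_h = np.concatenate([[0.0],
                            np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(t_grid))])
    return np.exp(-cum_h)

t = np.linspace(1, 2000, 200)
z = 0.001 * t     # e.g. slowly rising vibration level (condition indicator)
w = [1.0]         # e.g. a load factor (operating environment indicator)
R = reliability(t, z, w)
print(f"Predicted reliability at t={t[-1]:.0f} h: {R[-1]:.3f}")
```

Note how the sketch mirrors the two roles the abstract assigns: z enters the baseline itself (so the hazard is updated by the asset’s measured state), while w only scales the hazard up or down, and setting w to zero effect leaves the condition-driven baseline intact.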
Abstract:
It is generally accepted that the notion of inclusion derived or evolved from the practices of mainstreaming or integrating students with disabilities into regular schools. Halting the practice of segregating children with disabilities was a progressive social movement. The value of this achievement is not in dispute. However, our charter as scholars and cultural vigilantes (Slee & Allan, 2001) is to always look for how we can improve things; to avoid stasis and complacency we must continue to ask, how can we do it better? Thus, we must ask ourselves uncomfortable questions and develop a critical perspective that Foucault characterised as an ‘ethic of discomfort’ (Rabinow & Rose, 2003, p. xxvi), following the Nietzschean principle whereby one acts “counter to our time and thereby on our time… for the benefit of a time to come” (Nietzsche, 1874, p. 60 in Rabinow & Rose, 2003, p. xxvi). This paper begins with a fundamental question for those participating in inclusive education research and scholarship – when we talk of including, into what do we seek to include?
Abstract:
According to Karl Popper, widely regarded as one of the greatest philosophers of science of the 20th century, falsifiability is the primary characteristic that distinguishes scientific theories from ideologies – or dogma. For example, for schools to treat creationism as a scientific theory comparable to modern theories of evolution, advocates of creationism would need to engage in the generation of falsifiable hypotheses, and would need to abandon the practice of discouraging questioning and inquiry. Ironically, scientific theories themselves are accepted or rejected based on a principle that might be called survival of the fittest. So, for healthy theory development to occur, four Darwinian functions must operate: (a) variation – avoid orthodoxy and encourage divergent thinking; (b) selection – submit all assumptions and innovations to rigorous testing; (c) diffusion – encourage the shareability of new and/or viable ways of thinking; and (d) accumulation – encourage the reusability of viable aspects of productive innovations.
Abstract:
This study seeks to answer the question, “why is policy innovation in Indonesia, in particular reformed state asset management laws and regulations, stagnant?”, through an empirical and qualitative approach, identifying and exploring potential impeding influences to the full and equal implementation of said laws and regulations. The policies and regulations governing the practice of state asset management have emerged as an urgent question among many countries worldwide (Conway, 2006; Dow, Gillies, Nichols, & Polen, 2006; Kaganova, McKellar, & Peterson, 2006; McKellar, 2006b), for there is heightened awareness of the complex and crucial role that state assets play in public service provision. Indonesia is an example of such a country, introducing a ‘big-bang’ reform in state asset management laws, policies, regulations, and technical guidelines. Two main reasons propelled this policy innovation: a) challenges in state asset management practices that are common worldwide, such as incomplete information systems and weak adherence to, and conceptualisation of, accountability and governance (Kaganova, McKellar and Peterson, 2006); and b) unfavourable state asset audit results in all regional governments across Indonesia. The latter reason is emphasised, as the Indonesian government admits to past neglect in ensuring efficiency and best practice in its state asset management. Prior to reform there was a euphoria of building and developing state assets and public infrastructure to support government programs of the day. Although this euphoria resulted in high growth within Indonesia, little attention was paid to how the state assets bought or built were managed. Up until 2003-2004 state asset management was minimal: asset inventories were compiled manually, and both public sector accounting standards and financial reporting standards were incomplete (Hadiyanto, 2009). During that time transparency, accountability, and the maintenance of state assets were not a main focus of either the government or society itself (Hadiyanto, 2009). Indonesia exemplified its enthusiasm for reforming state asset management policies and practices through the establishment of the Directorate General of State Assets in 2006. The Directorate General of State Assets has stressed the new direction it is taking state asset management laws and policies through the introduction of Republic of Indonesia Law Number 38 Year 2008, an amended regulation overruling Republic of Indonesia Law Number 6 Year 2006 on Central/Regional Government State Asset Management (Hadiyanto, 2009c). Law Number 38/2008 aims to further exemplify good governance principles and puts forward a ‘highest and best use of assets’ principle in state asset management (Hadiyanto, 2009a). The methodology of this study is a qualitative case study approach, with a triangulated data collection method of document analysis (all relevant state asset management laws, regulations, policies, technical guidelines, and external audit reports), semi-structured interviews, and on-site observation. The empirical data of this study involved a sample of four Indonesian regional governments and 70 interviews, performed during January-July 2010. The analytical approach is thematic analysis, in an effort to identify common influences on and/or challenges to policy innovation within Indonesia.
Based on the empirical data of this study, specific impeding influences to state asset management reform are explored, answering the question of why innovative policy implementation is stagnant. An in-depth analysis of each influencing factor, and of the interviewees’ opinions attached to each, suggests the potential of an ‘excuse rhetoric’, whereby the identified influencing factors are a smoke-screen, or myths that public policy makers and implementers believe in, as a means to explain innovative policy stagnancy. This study offers insights to Indonesian policy makers interested in ensuring the conceptualisation and full implementation of innovative policies, particularly, although not limited to, within the context of state asset management practices.
Abstract:
In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm, where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations; (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix; (iii) unlike traditional LK, the computational cost is invariant to the number of filters, making the approach far more efficient; and (iv) the approach can be extended to the inverse compositional form of the LK algorithm, where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
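The properties in (ii) and (iii) rest on Parseval’s theorem: the sum of SSD residuals over a bank of spatially filtered images equals a single Fourier-domain SSD weighted by the diagonal matrix S = diag(sum_i |G_i|^2). The following numpy sketch verifies that identity numerically with random stand-in filters; an actual FLK implementation would use oriented edge/Gabor banks and embed this weighting inside the LK update, which is not reproduced here.

```python
# Numerical check of the identity behind the FLK formulation: summing SSD over
# a bank of filtered images equals one diagonally weighted SSD in the Fourier
# domain (Parseval). Random "filters" stand in for oriented edge/Gabor filters;
# this is a sketch of the idea, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
N = 32
src = rng.standard_normal((N, N))         # source image patch
tmpl = rng.standard_normal((N, N))        # template
filters = rng.standard_normal((5, N, N))  # bank of 5 filters (spatial domain)

def circ_conv(f, g):
    """Circular convolution via FFT, so the comparison is exact."""
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

# Spatial domain: filter both images with every filter, accumulate SSD.
ssd_spatial = sum(np.sum((circ_conv(f, src) - circ_conv(f, tmpl)) ** 2)
                  for f in filters)

# Fourier domain: one residual, weighted by S = diag(sum_i |G_i|^2).
D = np.fft.fft2(src) - np.fft.fft2(tmpl)
S = np.sum(np.abs(np.fft.fft2(filters, axes=(-2, -1))) ** 2, axis=0)
ssd_fourier = np.sum(S * np.abs(D) ** 2) / (N * N)   # 1/N^2 from Parseval

print(np.allclose(ssd_spatial, ssd_fourier))  # True
```

Because S is precomputed once from the filter bank, evaluating the weighted Fourier residual costs the same whether the bank holds 5 filters or 500, which is the efficiency claim in (iii).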
Abstract:
The formation of Reducing Emissions from Deforestation and Forest Degradation (REDD+) policy within the international climate regime has raised a number of discussions about ‘justice’. REDD+ aims to provide an incentive for developing countries to preserve or increase the amount of carbon stored in their forested areas. Governance of REDD+ is multi-layered: at the international level, a guiding framework must be determined; at the national level, strong legal frameworks are a pre-requisite to ensure both public and private investor confidence; and at the sub-national level, forest-dependent peoples need to agree to participate as stewards of forest carbon project areas. At the international level the overall objective of REDD+ is yet to be determined, with competing mitigation, biological and justice agendas. Existing international law pertaining to the environment (international environmental principles and law, IEL) and human rights (international human rights law, IHRL) should inform the development of international and national REDD+ policy, especially in relation to ensuring the environmental integrity of projects and participation and benefit-sharing rights for forest-dependent communities. National laws applicable to REDD+ must accommodate the needs of all stakeholders and articulate boundaries which define their interactions, paying particular attention to ensuring that vulnerable groups are protected. This paper i) examines justice theories and IEL and IHRL to inform our understanding of what ‘justice’ means in the context of REDD+, and ii) applies international law to create a reference tool for policy-makers dealing with the complex sub-debates within this emerging climate policy. We achieve this by: 1) briefly outlining theories of justice (for example, perspectives offered by anthropocentric and ecocentric approaches, and views from ‘green economics’); 2) commenting on what ‘climate justice’ means in the context of REDD+; 3) outlining a selection of IEL and IHRL principles and laws to inform our understanding of ‘justice’ in this policy realm (for example, common but differentiated responsibilities, the precautionary principle, sovereignty and prevention drawn from the principles of IEL; the UNFCCC and CBD as relevant conventions of international environmental law; and UNDRIP and the Declaration on the Right to Development as applicable international human rights instruments); 4) noting how this informs what ‘justice’ is for different REDD+ stakeholders; 5) considering how current law-making (at both the international and national levels) reflects these principles and rules drawn from international law; and 6) presenting how international law can inform policy-making by providing a reference tool of applicable international law and how it could be applied to different issues linked to REDD+. As such, this paper will help scholars and policy-makers to understand how international law can assist us to both conceptualise and embody ‘justice’ within frameworks for REDD+ at both the international and national levels.
Creating 'saviour siblings' : the notion of harming by conceiving in the context of healthy children
Abstract:
Over the past decade there have been a number of families who have utilised assisted reproductive technologies (ARTs) to create a tissue-matched child, with the purpose of using the child’s tissue to cure an existing sick child. This inevitably brings such families a sense of hope, as the ultimate aim is to overcome a family health crisis. However, this specific use of reproductive technologies has been the subject of significant criticism, most of which is levelled against the potential harm to the ‘saviour’ child. In Australia, families seeking to access reproductive technologies in this context are therefore required to justify their motives to an ethics committee in order to establish, amongst other things, whether the child will suffer harm once born. This paper explores the concept of harm in the context of conception, focusing on whether it is possible to ‘harm’ a healthy child who has been conceived to save another. To achieve this, the paper evaluates the impact of the ‘non-identity’ principle in the ‘saviour sibling’ context, and assesses the existing body of literature which addresses ‘harm’ in the context of conception. As will be established, the majority of such literature has focused on ‘wrongful life’ cases, which seek to address whether an existing child born with a disability has been harmed. Finally, this paper distinguishes the harm arguments in the ‘saviour sibling’ context based on the fact that the harm evaluation concerns the ‘future-life’ assessment of a healthy child.
Abstract:
Recent literature has argued that environmental efficiency (EE) measures built on the materials balance (MB) principle are more suitable than other EE measures in situations where the law of mass conservation regulates production processes. In addition, the MB-based EE method is particularly useful in analysing possible trade-offs between cost and environmental performance. Identifying the determinants of MB-based EE can provide useful information to decision makers, but there are very few empirical investigations into this issue. This article proposes the use of data envelopment analysis (DEA) and stochastic frontier analysis (SFA) techniques to analyse variation in MB-based EE. Specifically, the article develops a stochastic nutrient frontier and nutrient inefficiency model to analyse the determinants of MB-based EE. The empirical study applies both techniques to investigate the MB-based EE of 96 rice farms in South Korea. The size of land, fertiliser consumption intensity, cost allocative efficiency, and the share of owned land out of total land are found to be correlated with MB-based EE. The results confirm the presence of a trade-off between MB-based EE and cost allocative efficiency, a finding that favours policy interventions to help farms simultaneously achieve cost efficiency and MB-based EE.
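For readers unfamiliar with the first of the two techniques, the sketch below shows a minimal input-oriented DEA (CCR) efficiency calculation using scipy, with a nutrient surplus simply treated as one of the inputs. The data, variable names, and the way the materials balance enters the model are illustrative assumptions, not the article’s actual specification or its 96-farm sample.

```python
# Minimal input-oriented DEA (CCR) efficiency sketch. For each farm o we solve:
#   min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0
# The toy data below treat a nutrient surplus as one input among others.
import numpy as np
from scipy.optimize import linprog

# inputs: rows = (land, fertiliser, nutrient surplus); columns = farms
X = np.array([[4.0, 6.0, 5.0, 8.0],
              [2.0, 3.5, 2.5, 5.0],
              [1.0, 2.2, 1.3, 3.0]])
Y = np.array([[10.0, 12.0, 11.0, 13.0]])   # output: rice yield
n = X.shape[1]

def dea_efficiency(o):
    """Efficiency score theta in (0, 1] for farm o."""
    c = np.r_[1.0, np.zeros(n)]                   # decision vars: [theta, lam]
    A_in = np.c_[-X[:, [o]], X]                   # X lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]  # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

for o in range(n):
    print(f"farm {o}: efficiency = {dea_efficiency(o):.3f}")
```

A score below 1 means a convex combination of peer farms produces at least the same output with proportionally less of every input, including the nutrient surplus; the SFA counterpart instead fits a stochastic nutrient frontier and attributes deviations to inefficiency plus noise.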