993 results for Religions (Proposed, universal, etc.)
Abstract:
Curriculum demands continue to increase on school education systems, with teachers at the forefront of implementing syllabus requirements. Education is frequently reported as a solution to most societal problems and, as a result of the world’s information explosion, teachers are expected to cover more and more within their teaching programs. How can teachers combine subjects in order to capitalise on the competing educational agendas within school timeframes? Fusing curricula requires the bonding of standards from two or more syllabuses. Both technology and ICT complement the learning of science. This study analyses selected examples of preservice teachers’ overviews for fusing science, technology and ICT. These program overviews focused on primary students and the achievement of two standards (one from science and one from either technology or ICT). These primary preservice teachers’ fused-curricula overviews included scientific concepts and related technology and/or ICT skills and knowledge. Findings indicated a range of innovative curriculum plans for teaching primary science through technology and ICT, demonstrating that these subjects can form cohesive links towards achieving the respective learning standards. Teachers can work more astutely by fusing curricula; however, further professional development may be required to advance thinking about these processes. Bonding subjects through their learning standards can extend beyond previous integration or thematic work, in which standards may not have been assessed. Education systems need to articulate through syllabus documents how effective fusing of curricula can be achieved. It appears that education is a key avenue for addressing societal needs, problems and issues. Education is promoted as a universal solution, which has resulted in curriculum overload (Dare, Durand, Moeller, & Washington, 1997; Vinson, 2001). Societal and curriculum demands have placed added pressure on teachers, with many extenuating education issues increasing teachers’ workloads (Mobilise for Public Education, 2002). For example, as Australia’s weather is conducive to outdoor activities, social problems and issues arise that are reported through the media with calls for action; consequently, schools have become involved in swimming programs, road and bicycle safety programs, and a wide range of activities that in the past had been considered a parental responsibility. Teachers are expected to plan, implement and assess these extra-curricular activities within their already overcrowded timetables. At the same time, key learning areas (KLAs) such as science and technology are mandatory requirements within all Australian education systems. These systems have syllabuses outlining levels of content and the anticipated learning outcomes (also known as standards, essential learnings, and frameworks). Time allocated for teaching science is obviously an issue. In 2001, it was estimated that the average time spent teaching science in Australian primary schools was almost an hour per week (Goodrum, Hackling, & Rennie, 2001). More recently, a study undertaken in the U.S. reported a similar finding: more than 80% of teachers in K-5 classrooms spent less than an hour per week teaching science (Dorph, Goldstein, Lee, et al., 2007). More importantly, 16% spent no time at all teaching science in their classrooms. Teachers need to learn to work smarter by optimising the use of their in-class time.
Integration is proposed as one way to address the issue of curriculum overload (Venville & Dawson, 2005; Vogler, 2003). Even though integration may lack a clear definition (Hurley, 2001), curriculum integration aims at covering key concepts in two or more subject areas within the same lesson (Buxton & Whatley, 2002). This implies covering the curriculum in less time than if the subjects were taught separately; teachers should therefore have more time to cover other educational issues. In practice, however, the reality can be decidedly different (e.g., Brophy & Alleman, 1991; Venville & Dawson, 2005). Nevertheless, teachers report that students expand their knowledge and skills as a result of subject integration (James, Lamb, Householder, & Bailey, 2000). There seems to be considerable value in integrating science with other KLAs beyond addressing teaching workloads. Over two decades ago, Cohen and Staley (1982) claimed that integration can bring into the primary curriculum a subject that may otherwise be left out. Integrating science education aims to develop a more holistic perspective. Indeed, life is not arranged in neat, stand-alone subjects; life integrates subject content in numerous ways, and curriculum integration can assist students to make these real-life connections (Burnett & Wichman, 1997). Science integration can provide the scope for real-life learning and the possibility of targeting students’ learning styles more effectively by providing more than one perspective (Hudson & Hudson, 2001). To illustrate, technology is essential to science education (Blueford & Rosenbloom, 2003; Board of Studies, 1999; Penick, 2002), and constructing technology immediately evokes a social purpose for such construction (Marker, 1992). For example, building a model windmill requires science and technology (Zubrowski, 2002) but has a key focus on sustainability and the social sciences. Science has the potential to be integrated with all KLAs (e.g., Cohen & Staley, 1982; Dobbs, 1995; James et al., 2000). Yet “integration” appears to be a confusing term. Integration also has an educational meaning focused on special education students being assimilated into mainstream classrooms. The word was used from the late seventies onwards, generally for thematic approaches to teaching. For instance, a science theme about flight needed only a student drawing a picture of a plane to show integration; it did not connect the anticipated outcomes from science and art. The term “fusing curricula” denotes a seamless bonding between two subjects; hence standards (or outcomes) need to be linked from both subjects. This also goes beyond merely embedding one subject within another. Embedding implies that one subject is dominant, whereas fusing curricula proposes an equal mix of learning within both subject areas. Primary education in Queensland has eight KLAs, each with its established content and each with a proposed structure for levels of learning. Primary teachers attempt to cover these syllabus requirements across the eight KLAs in less than five hours a day, amid the many extra-curricular activities occurring throughout a school year (e.g., Easter activities, Education Week, concerts, excursions, performances). In Australia, education systems have developed standards for all KLAs (e.g., Education Queensland, NSW Department of Education and Training, Victorian Education), usually designated by a code.
In the late 1990s, Queensland developed “core learning outcomes” for strands across all KLAs. For example, LL2.1 in the Queensland Education science syllabus means Life and Living at Level 2, standard number 1. Thus, a teacher’s planning requires the inclusion of standards as indicated by the relevant syllabus. More recently, the core learning outcomes were replaced by “essential learnings”, which specify “what students should be taught and what is important for students to have opportunities to know, understand and be able to do” (Queensland Studies Authority, 2009, para. 1). Fusing science education with other KLAs may facilitate more efficient use of time and resources; however, this type of planning needs to combine standards from two syllabuses. To further assist in facilitating sound pedagogical practices, there are models proposed for learning science, technology and other KLAs, such as Bloom’s Taxonomy (Bloom, 1956), Productive Pedagogies (Education Queensland, 2004), de Bono’s Six Hats (de Bono, 1985), and Gardner’s Multiple Intelligences (Gardner, 1999), that imply, warrant, or necessitate fused curricula. Bybee’s 5Es model, for example, with its five phases of learning (engage, explore, explain, elaborate, and evaluate; Bybee, 1997), has the potential for fusing science and ICT standards.
Abstract:
There is currently a migration trend from traditional electrical supervisory control and data acquisition (SCADA) systems towards a smart grid based approach to critical infrastructure management. This project provides an evaluation of existing and proposed implementations for both traditional electrical SCADA and smart grid based architectures, and proposes a set of reference requirements which test bed implementations should satisfy. A high-level design for smart grid test beds is proposed and an initial implementation performed, based on the proposed design, using open source and freely available software tools. The project examines the move towards smart grid based critical infrastructure management and illustrates the increased security requirements. The implemented test bed provides a basic framework for testing network requirements in a smart grid environment, as well as a platform for further research and development, particularly for developing, implementing and testing network security measures such as intrusion detection and network forensics. The project proposes and develops an architecture for emulating some smart grid functionality. The Common Open Research Emulator (CORE) platform was used to emulate the communication network of the smart grid; specifically, CORE was used to virtualise and emulate the TCP/IP networking stack. This is intended to be used for further evaluation and analysis, for example the analysis of application protocol messages. As a proof of concept, software libraries were designed, developed and documented to enable and support the design and development of further emulated smart grid components, such as reclosers, switches and smart meters. As part of the testing and evaluation, a Modbus based smart meter emulator was developed to provide the basic functionality of a smart meter. Further code was developed to send Modbus request messages to the emulated smart meter and receive Modbus responses from it. Although the functionality of the emulated components is limited, it provides a starting point for further research and development, and the design is extensible to enable the implementation of additional SCADA protocols. The project also defines evaluation criteria for the implemented test bed, and experiments are designed to evaluate the test bed according to the defined criteria. The results of the experiments are collated and presented, and conclusions drawn from them to facilitate discussion of the test bed implementation. The discussion also presents possible future work.
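As a rough illustration of the kind of request/response exchange described above, the following Python sketch builds a Modbus/TCP read-holding-registers request and parses the reply from a meter emulator. The host address, unit id and register layout are hypothetical assumptions for illustration, not details taken from the project; raw sockets are used only to make the frame layout explicit, where a real client might use an existing Modbus library.

```python
# Minimal sketch of a Modbus/TCP client talking to an emulated smart meter.
# Host, port, unit id and register layout are hypothetical; the actual
# test bed's configuration may differ.
import socket
import struct

METER_HOST = "10.0.0.5"   # assumed address of the emulated meter (CORE node)
METER_PORT = 502          # standard Modbus/TCP port

def read_holding_registers(sock, unit_id, start_addr, count, tid=1):
    # PDU: function code 0x03 (read holding registers), start address, quantity.
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), length (unit id + PDU), unit id.
    mbap = struct.pack(">HHHB", tid, 0x0000, len(pdu) + 1, unit_id)
    sock.sendall(mbap + pdu)

    # Response: 7-byte MBAP, then function code, byte count, register data.
    # (Assumes the reply arrives in full, which is adequate for a sketch.)
    header = sock.recv(7)
    _tid, _proto, length, _unit = struct.unpack(">HHHB", header)
    body = sock.recv(length - 1)
    func, byte_count = body[0], body[1]
    if func != 0x03:
        raise IOError("Modbus exception response: 0x%02x" % func)
    return struct.unpack(">%dH" % (byte_count // 2), body[2:2 + byte_count])

with socket.create_connection((METER_HOST, METER_PORT)) as sock:
    # e.g. read two registers the emulated meter might map to a measurement
    registers = read_holding_registers(sock, unit_id=1, start_addr=0, count=2)
    print("register values:", registers)
```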
Abstract:
This paper reports on a unique approach to inclusive practice that responds to several critical issues highlighted in the first Universal Design (UD) Conference in Yokohama, as well as in more recent literature on universal design. The approach, as explained in the paper, involves a not-for-profit community organization, university researchers, and a design action group comprising practitioners from across the design disciplines, together with other specialist consultants, developing in a voluntary capacity an independent housing model for people with disabilities and their families or caretakers. With a focus on relationships and "a system that places human beings at the centre in all respects", this paper presents the approach and the emerging theoretical framework, which addresses three issues that Afacan and Erbug (2009) propose hinder the integration of universal design with design practice, namely: theory-practice inconsistency, involving the lack of flow-on of universal design awareness into design practice; a way of thinking that exhibits very little empathy with and understanding of the requirements of diverse users; and poorly implemented and coordinated collaboration and communication involving designers and other professionals (pp. 731-732).
Abstract:
This paper proposes a theoretical model for e-Government in Malaysia and addresses issues involved in its implementation. It presents three possible models: the Framework for Electronic Government (Grant & Chau, 2005), the Three Pillars Framework (Georgescu, 2007) and the Grid-Group Theory from cultural studies (Douglas, 1996). It then integrates and adapts these to the specific needs of the Malaysian environment.
Abstract:
A new immobilized flat plate photocatalytic reactor for wastewater treatment is proposed in this study to avoid subsequent catalyst removal from the treated water. The reactor consists of an inlet section, a reactive section where the catalyst is coated, and an outlet section. In order to optimize fluid mixing and reactor design, this study investigates the influence of baffles and their arrangement on the flat plate reactor's hydrodynamics using computational fluid dynamics (CFD) simulation. For the simulation, an array of baffles acting as turbulence promoters is inserted in the reactive zone of the reactor. Results obtained from simulating the hydrodynamics of a baffled flat plate photoreactor for different baffle positions, heights and intervals are presented, utilizing the RNG k-ε turbulence model. Under the conditions simulated, qualitative flow features, such as the development and separation of boundary layers, vortex formation, the presence of high-shear regions and recirculation zones, and the underlying mechanisms are examined. The influence of various baffle sizes on the distribution of pollutant concentration is also highlighted. The results indicate that the spanning of recirculation increases the degree of interfacial distortion, creating a larger interfacial area between fluids and resulting in a substantial enhancement of fluid mixing. The simulation results suggest that the qualitative and quantitative properties of the fluid dynamics in a baffled reactor can be obtained, providing valuable insight for fully understanding the effect of baffles and their arrangement on the flow pattern, behaviour, and features.
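For background, k-ε type closures solve transport equations for the turbulence kinetic energy k and its dissipation rate ε. The textbook standard form is sketched below purely as context; the RNG variant used in this study additionally modifies the ε equation with a strain-rate-dependent term.

```latex
% Standard k-epsilon transport equations (the RNG variant adds an extra
% strain-rate-dependent term R_epsilon to the epsilon equation).
\frac{\partial(\rho k)}{\partial t}
 + \frac{\partial(\rho k u_i)}{\partial x_i}
 = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)
   \frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon

\frac{\partial(\rho \varepsilon)}{\partial t}
 + \frac{\partial(\rho \varepsilon u_i)}{\partial x_i}
 = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)
   \frac{\partial \varepsilon}{\partial x_j}\right]
 + C_{1\varepsilon}\frac{\varepsilon}{k}P_k
 - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k},
\qquad \mu_t = \rho C_\mu \frac{k^2}{\varepsilon}
```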
Abstract:
Bana et al. proposed the formal indistinguishability relation (FIR), i.e. an equivalence between two terms built from an abstract algebra. Later, Ene et al. extended it to cover active adversaries and random oracles. This notion enables a framework to verify computational indistinguishability while still offering the simplicity and formality of symbolic methods. We are in the process of building an automated tool for checking FIR between two terms. First, we extend the work of Ene et al. further, by covering ordered sorts and simplifying the way random oracles are handled. Second, we investigate the possibility of combining algebras, since this makes the tool scalable and able to cover a wide class of cryptographic schemes. Specifically, we show that the combined algebra is still computationally sound, as long as each component algebra is sound. Third, we design some proving strategies and implement the tool. Essentially, the strategies allow us to find a sequence of intermediate terms, each formally indistinguishable from the next, between the two given terms; FIR between the two given terms is then guaranteed by the transitivity of FIR. Finally, we show applications of the work, e.g. to key exchanges and encryption schemes. In the future, the tool should be easily extensible to cover many more schemes. This work continues our previous research on the use of compilers to aid automated proofs for key exchange.
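The proving strategies described above amount to a search for a chain of sound rewriting steps between two terms, with indistinguishability of the endpoints following by transitivity. The Python sketch below illustrates only this search skeleton: terms are plain strings, and `rewrite_steps` is a hypothetical placeholder standing in for the tool's actual set of sound transformation rules.

```python
# Illustrative breadth-first search for a sequence of intermediate terms
# connecting two terms, where each step preserves formal
# indistinguishability. Term representation and rule set are placeholders.
from collections import deque

def rewrite_steps(term):
    """Yield terms formally indistinguishable from `term` in one step.
    Placeholder: a real implementation would apply sound algebraic rules
    (e.g. replacing a random-oracle output by a fresh random value)."""
    rules = {
        "enc(m, k) . k'": ["r . k'"],   # assumed sound rule instances
        "r . k'": ["r . r'"],
    }
    return rules.get(term, [])

def find_chain(start, goal, max_depth=10):
    """Return a list of terms from start to goal, or None if no chain is
    found. FIR between start and goal then follows by transitivity."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        chain = queue.popleft()
        if chain[-1] == goal:
            return chain
        if len(chain) > max_depth:
            continue
        for nxt in rewrite_steps(chain[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(chain + [nxt])
    return None

print(find_chain("enc(m, k) . k'", "r . r'"))
# -> ["enc(m, k) . k'", "r . k'", "r . r'"]
```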
Abstract:
While recent research has provided valuable information as to the composition of laser printer particles, their formation mechanisms, and why some printers are high emitters whilst others are low emitters, fundamental questions relating to the potential exposure of office workers remain unanswered. In particular: (i) what impact does the operation of laser printers have on the background particle number concentration (PNC) of an office environment over the duration of a typical working day? (ii) what is the airborne particle exposure of office workers in the vicinity of laser printers? (iii) what influence does the office ventilation have upon the transport and concentration of particles? (iv) is there a need to control the generation and/or transport of particles arising from the operation of laser printers within an office environment? (v) what instrumentation and methodology are relevant for characterising such particles within an office location? We present experimental evidence on temporal and spatial printer PNC during the operation of 107 laser printers within the open plan offices of five buildings. We show for the first time that the eight-hour time-weighted average printer particle exposure is significantly less than the eight-hour time-weighted local background particle exposure, but that peak printer particle exposure can be more than two orders of magnitude higher than local background particle exposure. The particle size range is predominantly ultrafine (< 100 nm diameter). In addition, we have established that office workers are constantly exposed to non-printer-derived particle concentrations, with up to an order of magnitude difference in such exposure amongst offices, and propose that this exposure be controlled along with exposure to printer-derived particles. We also propose, for the first time, that peak particle reference values be calculated for each office area, analogous to the criteria used in Australia and elsewhere for evaluating exposure excursions above occupational hazardous chemical exposure standards. A universal peak particle reference value of 2.0 × 10^4 particles cm^-3 is proposed.
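The eight-hour time-weighted average and peak-excursion comparison used above can be made concrete with a small calculation. The Python sketch below uses invented concentration values purely for illustration; only the 2.0 × 10^4 particles cm^-3 peak reference value comes from the abstract.

```python
# Sketch of an eight-hour time-weighted average (TWA) particle number
# concentration and a peak-excursion check. Sample data are invented;
# only the peak reference value of 2.0e4 particles/cm^3 is from the study.
PEAK_REFERENCE = 2.0e4  # particles cm^-3, proposed universal peak value

# (concentration in particles/cm^3, duration in hours) over a working day
samples = [
    (3.0e3, 3.0),   # assumed background level in the morning
    (4.5e4, 0.1),   # assumed short printing burst near a high emitter
    (3.5e3, 4.9),   # assumed background for the rest of the day
]

total_hours = sum(t for _, t in samples)            # 8.0 hours
twa = sum(c * t for c, t in samples) / total_hours  # time-weighted average
peak = max(c for c, _ in samples)

print(f"8-h TWA: {twa:.3g} particles/cm^3")
print(f"peak:    {peak:.3g} particles/cm^3")
print("peak exceeds reference" if peak > PEAK_REFERENCE
      else "peak within reference")
```

With these invented numbers the TWA stays near background (about 3.8 × 10^3 particles/cm^3) while the brief peak exceeds the reference value, mirroring the pattern the study reports.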
Abstract:
In team sports such as rugby union, a myriad of decisions and actions occur within the boundaries that compose the performance perceptual-motor workspace. The way that these performance boundaries constrain decision making and action has recently interested researchers and has involved developing an understanding of the concept of constraints. Considering team sports as complex dynamical systems signifies that they are composed of multiple independent agents (i.e. individual players) whose interactions are highly integrated. This level of complexity is characterized by the multiple ways that players on a rugby field can interact. It affords the emergence of rich patterns of behaviour, such as rucks, mauls, and collective tactical actions, which emerge from players' adjustments to dynamically varying competition environments. During performance, the decisions and actions of each player are constrained by multiple causes (e.g. technical and tactical skills, emotional states, plans, thoughts, etc.) that generate multiple effects (e.g. to run or pass, to move forward to tackle or to maintain position and drive the opponent to the line), a prime feature of a complex systems approach to team games performance (Bar-Yam, 2004). To establish a bridge between the complexity sciences and learning design in team sports like rugby union, the aim of practice sessions is to prepare players to pick up and explore the information available in the multiple constraints (i.e. the causes) that influence performance. Therefore, learning design in training sessions should be soundly based on the interactions amongst players (i.e. teammates and opponents) that will occur in rugby matches. To improve individual and collective decision making in rugby union, Passos and colleagues proposed in previous work a performer-environment interaction-based approach rather than a traditional performer-based approach (Passos, Araújo, Davids & Shuttleworth, 2008).
Abstract:
Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness, line-likeness, etc. In this dissertation a class of textures known as particulate textures is defined: textures that are predominantly coarse or blob-like. The set of features that characterise particulate textures differs from those that characterise classical textures; these features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for the computation of particulate shape is proposed, and a granulometric approach for particle size estimation based on edge detection is developed which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, making it possible to apply monocular imaging techniques to road surface texture analysis. Results from applying the developed algorithm to road surface macrotexture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination R^2 exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
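As an illustration of the granulometric idea, the Python sketch below uses the classical morphological formulation (greyscale openings of increasing size yielding a pattern spectrum), which is a standard technique and deliberately not the edge-detection variant developed in the dissertation. The synthetic blob image and all parameters are invented for illustration.

```python
# Sketch of classical morphological granulometry: apply greyscale openings
# of increasing size and track the removed image "volume"; peaks in the
# resulting pattern spectrum indicate dominant particle sizes. This is the
# textbook approach, not the dissertation's edge-detection method.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic particulate image: random bright disks on a dark background.
image = np.zeros((256, 256))
for _ in range(40):
    y, x = rng.integers(10, 246, size=2)
    r = rng.integers(3, 9)                  # particle radii between 3 and 8
    yy, xx = np.ogrid[:256, :256]
    image[(yy - y) ** 2 + (xx - x) ** 2 <= r ** 2] = 1.0

half_widths = range(1, 20)
volumes = [ndimage.grey_opening(image, size=(2 * s + 1, 2 * s + 1)).sum()
           for s in half_widths]

# Pattern spectrum: volume removed between consecutive opening sizes.
spectrum = -np.diff(volumes)
peak = half_widths[int(np.argmax(spectrum))]
print("spectrum peak at structuring half-width ~", peak,
      "(roughly the dominant particle radius)")
```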
Abstract:
As the service-oriented architecture (SOA) paradigm has become ever more popular, different standardization efforts have been proposed by various consortia to enable interaction among heterogeneous environments through this paradigm. This chapter will overview the most prevalent of these SOA efforts. It will first show how technical services can be described, how they can interact with each other, and how they can be discovered by users. Next, the chapter will present different standards for facilitating service composition and for designing service-oriented environments in light of a universal understanding of service orientation. The chapter will conclude with a summary and a discussion of the limitations of the reviewed standards with respect to their ability to describe service properties. This paves the way to the next chapters, where the USDL standard, which aims to lift such limitations, will be presented.
Abstract:
In many applications, e.g., bioinformatics, web access traces, and system utilisation logs, the data is naturally in the form of sequences. There has been great interest in analysing sequential data and finding the inherent characteristics or relationships within it, and sequential association rule mining is one possible method for such analysis. As conventional sequential association rule mining very often generates a huge number of association rules, many of which are redundant, it is desirable to find a way to eliminate these unnecessary rules. Because of the complexity and temporally ordered characteristics of sequential data, current research on sequential association rule mining is limited. Although several sequential association rule prediction models using either sequence constraints or temporal constraints have been proposed, none of them considers the redundancy problem in rule mining. The main contribution of this research is to propose a non-redundant association rule mining method based on closed frequent sequences and minimal sequential generators. We also give a definition for non-redundant sequential rules: sequential rules with minimal antecedents but maximal consequents. A new algorithm called CSGM (closed sequential and generator mining) for generating closed sequences and minimal sequential generators is also introduced. Further experiments compare the performance of generating non-redundant sequential rules against full sequential rules, and evaluate the performance of CSGM against other closed sequential pattern mining or generator mining algorithms. We also use the generated non-redundant sequential rules for query expansion in order to improve recommendations for infrequently purchased products.
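The intuition behind non-redundant rules (minimal antecedent, maximal consequent) can be sketched as follows: take a minimal sequential generator as the antecedent and a closed sequence extending it as the consequent, so nothing smaller would do as a premise and nothing larger follows. The toy Python sketch below illustrates only this pairing; the patterns, supports and rule-forming shortcut are invented placeholders and not the CSGM algorithm.

```python
# Toy sketch of non-redundant sequential rule formation: antecedents are
# minimal sequential generators, consequents are closed sequences that
# extend them, giving minimal antecedents and maximal consequents.
# Patterns and support counts are invented, not CSGM output.

# pattern -> support count over a hypothetical sequence database
generators = {("a",): 50, ("b",): 45}
closed_sequences = {("a", "b", "c"): 40, ("b", "d"): 30}

def is_subsequence(small, big):
    # True if `small` occurs in `big` in order (possibly with gaps).
    it = iter(big)
    return all(item in it for item in small)

def non_redundant_rules(generators, closed_sequences, min_conf=0.6):
    rules = []
    for gen, sup_gen in generators.items():
        for closed, sup_closed in closed_sequences.items():
            if gen != closed and is_subsequence(gen, closed):
                conf = sup_closed / sup_gen
                if conf >= min_conf:
                    rules.append((gen, closed, conf))
    return rules

for antecedent, consequent, conf in non_redundant_rules(generators,
                                                        closed_sequences):
    print(f"{antecedent} => {consequent}  confidence={conf:.2f}")
```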
Abstract:
Travel time is an important network performance measure that quantifies congestion in a manner easily understood by all transport users. In urban networks, travel time estimation is challenging for a number of reasons, such as fluctuations in traffic flow due to traffic signals and significant flow to/from mid-link sinks/sources. The classical analytical procedure utilizes cumulative plots at upstream and downstream locations to estimate travel time between the two locations. In this paper, we discuss the issues and challenges with the classical analytical procedure, such as its vulnerability to non-conservation of flow between the two locations, and the complexity of estimating exit-movement-specific travel time. Recently, we developed a methodology utilising the classical procedure to estimate average travel time and its statistics on urban links (Bhaskar, Chung et al. 2010), in which detector, signal and probe vehicle data are fused. In this paper we extend the methodology to route travel time estimation and test its performance using simulation. The originality lies in defining cumulative plots for each exit turning movement utilising a historical database that is self-updated after each estimation. The performance is also compared with a method based solely on probe vehicles (Probe-only). The performance of the proposed methodology has been found to be insensitive to different route flows, with an average accuracy of more than 94% given one probe vehicle per estimation interval, which is more than a 5% increase in accuracy with respect to the Probe-only method.
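The classical analytical procedure the paper builds on can be sketched briefly: with cumulative vehicle counts U(t) at the upstream detector and D(t) at the downstream detector, and flow conserved between the two points, the travel time of the n-th vehicle is the horizontal distance between the two curves at count n. The Python sketch below uses invented counts purely to illustrate this classical step; it is not the proposed data-fusion methodology.

```python
# Minimal sketch of the classical cumulative-plots procedure: with
# cumulative counts U(t) upstream and D(t) downstream (flow conserved in
# between), the n-th vehicle's travel time is the horizontal gap between
# the two curves at height n. Counts below are invented for illustration.
import numpy as np

t = np.arange(0, 120, 10.0)  # detector time stamps (s)
U = np.array([0, 4, 9, 15, 22, 28, 33, 37, 40, 42, 43, 44.0])  # upstream
D = np.array([0, 1, 3, 8, 14, 21, 27, 32, 36, 39, 42, 44.0])   # downstream

def travel_time(n):
    """Horizontal distance between U and D at cumulative count n,
    via linear interpolation of the inverse curves t(U) and t(D)."""
    t_enter = np.interp(n, U, t)  # time the n-th vehicle passes upstream
    t_exit = np.interp(n, D, t)   # time it passes downstream
    return t_exit - t_enter

vehicles = np.arange(1, 41)
times = np.array([travel_time(n) for n in vehicles])
print(f"average travel time over {len(vehicles)} vehicles: "
      f"{times.mean():.1f} s")
```

Non-conservation of flow (e.g. mid-link sinks/sources) breaks the assumption that the n-th arrival downstream is the n-th departure upstream, which is precisely the vulnerability the paper discusses.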