239 results for Compact embedding
Abstract:
Queensland University of Technology (QUT) is a large multidisciplinary university located in Brisbane, Queensland, Australia. QUT is increasing its research focus and is developing its research support services. It has adopted a model of collaboration between the Library, High Performance Computing and Research Support (HPC) and, more broadly, Information Technology Services (ITS). Research support services provided by the Library include the provision of information resources and discovery services, bibliographic management software, assistance with publishing (publishing strategies, identifying high-impact journals, dealing with publishers and the peer review process), citation analysis and calculating authors' h-index. Research data management services are being developed by the Library and HPC working in collaboration. The HPC group within ITS supports research computing infrastructure, research development and engagement activities, researcher consultation, high-speed computation and data storage systems, 2D/3D (immersive) visualisation tools, parallelisation and optimisation of research codes, statistics/data modelling training and support (both qualitative and quantitative) and support for the university's central Access Grid collaboration facility. Development and engagement activities include participation in research grants and papers, student supervision and internships, and the sponsorship, incubation and adoption of new computing technologies for research. ITS also provides other services that support research, including ICT training, research infrastructure (networking, data storage, federated access and authorisation, virtualisation) and corporate systems for research administration. Seminars and workshops are offered to increase awareness and uptake of new and existing services. A series of online surveys on eResearch practices and skills and a number of focus groups were conducted to better inform the development of research support services. Progress towards the provision of research support is described within the context of organisational frameworks; resourcing; infrastructure; integration; collaboration; change management; engagement; awareness and skills; new services; and leadership. Challenges to be addressed include the need to redeploy existing operational resources toward new research support services, supporting a rapidly growing research profile across the university, the growing need for the use and support of IT in research programs, finding capacity to address the diverse research support needs across the disciplines, operationalising new research support services following their implementation in project mode, embedding new specialist staff roles, cross-skilling Liaison Librarians, and ensuring continued collaboration between stakeholders.
Abstract:
Cold-formed steel members are extensively used in the building construction industry, especially in residential, commercial and industrial buildings. In recent times, fire safety has become important in structural design due to increased fire damage to properties and loss of lives. However, past research into the fire performance of cold-formed steel members has been limited, and was confined to compression members. Therefore a research project was undertaken to investigate the structural behaviour of compact cold-formed steel lipped channel beams subject to inelastic local buckling and yielding, and lateral-torsional buckling effects under simulated fire conditions, and the associated section and member moment capacities. In the first phase of this research, an experimental study based on tensile coupon tests was undertaken to obtain the elastic modulus, yield strength and stress-strain relationship of cold-formed steels at uniform ambient and elevated temperatures up to 700°C. The mechanical properties deteriorated with increasing temperature and are likely to reduce the strength of cold-formed beams under fire conditions. Predictive equations were developed for yield strength and elastic modulus reduction factors, while a modification was proposed for the stress-strain model at elevated temperatures. These results were used in the numerical modelling phases investigating the section and member moment capacities. The second phase of this research involved the development and validation of two finite element models to simulate the behaviour of compact cold-formed steel lipped channel beams subject to local buckling and yielding, and lateral-torsional buckling effects. Both models were first validated for elastic buckling. Lateral-torsional buckling tests of compact lipped channel beams were conducted at ambient temperature in order to validate the finite element model in predicting the non-linear ultimate strength behaviour. The results from this experimental study did not agree well with those from the developed finite element model of the tests due to some unavoidable problems with testing. However, the study highlighted the importance of the magnitude and direction of initial geometric imperfection as well as the failure direction, and thus led to further enhancement of the finite element model. The finite element model for lateral-torsional buckling was then validated using the available experimental and numerical ultimate moment capacity results from past research. The third phase, based on the validated finite element models, included detailed parametric studies of section and member moment capacities of compact lipped channel beams at ambient temperature, and provided the basis for similar studies at elevated temperatures. The results showed the existence of inelastic reserve capacity for compact cold-formed steel beams at ambient temperature. However, full plastic capacity was not achieved by the mono-symmetric cold-formed steel beams. Suitable recommendations were made in relation to the accuracy and suitability of current design rules for section moment capacity. Comparison of member capacity results from finite element analyses with current design rules showed that they do not give accurate predictions of lateral-torsional buckling capacities at ambient temperature, and hence new design rules were developed.
The fourth phase of this research investigated the section and member moment capacities of compact lipped channel beams at uniform elevated temperatures based on detailed parametric studies using the validated finite element models. The results showed the existence of inelastic reserve capacity at elevated temperatures. Suitable recommendations were made in relation to the accuracy and suitability of current design rules for section moment capacity in fire design codes and ambient temperature design codes, as well as those proposed by other researchers. The results showed that lateral-torsional buckling capacities are dependent on the ratio of the yield strength and elastic modulus reduction factors and the level of non-linearity in the stress-strain curves at elevated temperatures, in addition to the temperature. Current design rules do not include the effects of the non-linear stress-strain relationship and therefore their predictions were found to be inaccurate. Therefore a new design rule that uses a nonlinearity factor, defined as the ratio of the limit of proportionality to the yield stress at a given temperature, was developed for cold-formed steel beams subject to lateral-torsional buckling at elevated temperatures. This thesis presents the details and results of the experimental and numerical studies conducted in this research, including a comparison of results with predictions using available design rules. It also presents the recommendations made regarding the accuracy of current design rules as well as the newly developed design rules for cold-formed steel beams at both ambient and elevated temperatures.
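As a rough numerical illustration of the two quantities this abstract relies on, the following Python sketch computes an elevated-temperature reduction factor and the proposed nonlinearity factor. All stress values are invented placeholders, not the thesis's measured data.

```python
# Hedged sketch: reduction factors express elevated-temperature mechanical
# properties relative to ambient, and the nonlinearity factor is the ratio of
# the limit of proportionality to the yield stress at a given temperature.
# The numbers below are illustrative assumptions only.

def reduction_factor(value_at_temp, value_at_ambient):
    """e.g. k_y(T) = f_y(T) / f_y(20 degC) for the yield strength."""
    return value_at_temp / value_at_ambient

def nonlinearity_factor(f_proportional, f_yield):
    """Ratio of proportional limit to yield stress; 1.0 means the
    stress-strain curve is linear right up to yield."""
    return f_proportional / f_yield

# Hypothetical cold-formed steel at 500 degC (placeholder values in MPa):
print(reduction_factor(210.0, 550.0))     # yield strength reduction factor ~0.38
print(nonlinearity_factor(120.0, 210.0))  # ~0.57: strongly non-linear at temperature
```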
Abstract:
Unified enterprise application security is an emerging approach for providing protection against application-level attacks. The conventional application security approach, which consists of embedding security into each critical application, leads to a scattered security mechanism that is not only difficult to manage but also creates security loopholes. According to the CSI/FBI computer crime survey report, almost 80% of security breaches come from authorized users. In this paper, we have worked on the concept of a unified security model, which manages all security aspects from a single security window. The basic idea is to keep business functionality separate from the security components of the application. Our main focus was on the design of a framework for a unified layer which supports a single point of policy control, a centralized logging mechanism, granular, context-aware access control, and independence from any underlying authentication technology and authorization policy.
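To make the "single security window" idea concrete, here is a minimal Python sketch of one unified enforcement layer wrapping otherwise security-free business functions. The policy table, role model and function names are illustrative assumptions, not the paper's actual framework.

```python
# Sketch of a unified security layer: one decorator performs access control
# and centralized logging, so business functions contain no security code.
import functools
import logging

logging.basicConfig(level=logging.INFO)
POLICY = {"transfer_funds": {"teller", "manager"}}  # action -> allowed roles (assumed format)

def secured(action):
    """Single point of policy control and centralized logging."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user, *args, **kwargs):
            allowed = user["role"] in POLICY.get(action, set())
            logging.info("user=%s action=%s allowed=%s", user["name"], action, allowed)
            if not allowed:
                raise PermissionError(f"{user['name']} may not {action}")
            return fn(user, *args, **kwargs)
        return inner
    return wrap

@secured("transfer_funds")
def transfer_funds(user, amount):
    return f"transferred {amount}"  # pure business logic

print(transfer_funds({"name": "alice", "role": "manager"}, 100))
```

Because the policy table and the logger live in one place, changing an access rule or the logging destination never requires touching business code, which is the separation the paper argues for.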
Abstract:
This technical report is concerned with one aspect of environmental monitoring—the detection and analysis of acoustic events in sound recordings of the environment. Sound recordings offer ecologists the potential advantages of cheaper and increased sampling. An acoustic event detection algorithm is introduced that outputs a compact rectangular marquee description of each event. It can disentangle superimposed events, which are a common occurrence during morning and evening choruses. Next, three uses to which acoustic event detection can be put are illustrated. These tasks have been selected because they illustrate quite different modes of analysis: (1) the detection of diffuse events caused by wind and rain, which are a frequent contaminant of recordings of the terrestrial environment; (2) the detection of bird calls using the spatial distribution of their component events; and (3) the preparation of acoustic maps for whole ecosystem analysis. This last task utilises the temporal distribution of events over a daily, monthly or yearly cycle.
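As a rough sketch of the general approach (not the report's actual algorithm), the following Python code thresholds a noise-reduced spectrogram and emits one rectangular time-frequency marquee per connected region of acoustic energy; the window length, threshold and noise-removal step are all assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import label, find_objects

def detect_events(audio, sr, threshold_db=9.0):
    """Return (t_start, t_end, f_low, f_high) marquees for energy regions."""
    f, t, sxx = spectrogram(audio, fs=sr, nperseg=512)
    db = 10 * np.log10(sxx + 1e-12)
    db -= np.median(db, axis=1, keepdims=True)   # crude per-band noise removal
    mask = db > threshold_db                     # binary map of acoustic energy
    labels, _ = label(mask)                      # connected components = events
    events = []
    for f_slice, t_slice in find_objects(labels):
        events.append((t[t_slice.start], t[t_slice.stop - 1],   # start/end time (s)
                       f[f_slice.start], f[f_slice.stop - 1]))  # low/high frequency (Hz)
    return events

# Example on a synthetic 1 kHz tone burst in noise:
sr = 22050
x = np.random.randn(sr) * 0.1
x[5000:9000] += np.sin(2 * np.pi * 1000 * np.arange(4000) / sr)
print(detect_events(x, sr))
```

Note that this naive connected-components step cannot disentangle superimposed events, which is precisely where the report's algorithm goes further.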
Abstract:
The most costly operations encountered in pairing computations are those that take place in the full extension field F_{p^k}. At high levels of security, the complexity of operations in F_{p^k} dominates the complexity of the operations that occur in the lower degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller's algorithm. Many recent optimizations in the literature have focussed on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller's algorithm. Unfortunately, almost all of these improvements tend to suffer for larger embedding degrees, where the expensive extension field operations far outweigh the operations in the smaller subfields. In this paper, we propose a new way of carrying out Miller's algorithm that involves new explicit formulas which reduce the number of full extension field operations that occur in an iteration of the Miller loop, resulting in significant speed-ups of between 5 and 30 percent in most practical situations.
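For orientation, the following toy Python sketch shows the basic double-and-add structure of the Miller loop being optimized: the squaring and multiplication of f are the field operations whose count the new formulas reduce. The sketch works over a small prime field with illustrative curve parameters, and omits the final exponentiation, denominator elimination and the extension field itself, so it is a structural illustration only, not the paper's method.

```python
# Toy Miller loop over F_p (p small; embedding degree ignored for clarity).
p = 23          # small prime field F_p (toy choice)
a, b = 1, 0     # curve y^2 = x^3 + a*x + b over F_p

def ec_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def line_eval(A, B, Q):
    """Evaluate at Q the line through A and B (tangent line if A == B)."""
    (x1, y1), (xq, yq) = A, Q
    if A[0] == B[0] and (A[1] + B[1]) % p == 0:
        return (xq - x1) % p                     # vertical line
    if A == B:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (B[1] - y1) * pow(B[0] - x1, -1, p) % p
    return (yq - y1 - lam * (xq - x1)) % p

def miller_loop(P, Q, r):
    """Compute the Miller function value f_{r,P}(Q) by double-and-add.
    Each iteration squares f (the costly extension-field operation in
    practice) and multiplies by a line-function value."""
    f, T = 1, P
    for bit in bin(r)[3:]:                       # skip '0b' and the leading 1
        f = f * f * line_eval(T, T, Q) % p       # doubling step
        T = ec_add(T, T)
        if bit == '1':
            f = f * line_eval(T, P, Q) % p       # addition step
            T = ec_add(T, P)
    return f

# Demo on the toy supersingular curve y^2 = x^3 + x over F_23:
P, Q = (9, 5), (11, 10)          # two points on the curve
print(miller_loop(P, Q, 5))      # Miller function value f_{5,P}(Q)
```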
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the time-segmented summary data (dataset F) MR being 9.8 and the raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by the addition of risk factor variables to time-series variable based models is significant. The addition of time-series derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to the use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
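As a hedged illustration of the evaluation protocol described above, the sketch below reproduces its general shape in Python with scikit-learn rather than the Weka implementations actually used (J48, SMO, NNge, Cfs): majority-class under-sampling, then misclassification rate, Kappa and AUC on a held-out set. The data here is synthetic and the classifier a generic decision tree, so the numbers mean nothing clinically.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))               # stand-in feature matrix
y = (rng.random(1000) < 0.15).astype(int)    # unbalanced classes (~15% positive)

# Under-sample the majority class to match the minority class size.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
Xb, yb = X[keep], y[keep]

X_tr, X_te, y_tr, y_te = train_test_split(Xb, yb, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("MR    :", 100 * (1 - accuracy_score(y_te, pred)))  # misclassification rate, %
print("Kappa :", cohen_kappa_score(y_te, pred))           # agreement beyond chance
print("AUC   :", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```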
Abstract:
The rise of buyer-driven supply chains, outsourcing and other forms of non-traditional employment has resulted in challenges for labour market regulation. One business model which has created substantial regulatory challenges is the supply chain. The supply chain model involves retailers purchasing products from brand corporations, who then outsource the manufacturing to traders who contract with factories or outworkers who actually manufacture the clothing and textiles. This business model results in time and cost pressures being pushed down the supply chain, which has resulted in sweatshops where workers systematically have their labour rights violated. Literally millions of workers work in dangerous workplaces where thousands are killed or permanently disabled every year. This thesis analyses possible regulatory responses to provide workers a right to safety and health in supply chains which provide products for Australian retailers. This thesis will use a human rights standard to determine whether Australia is discharging its human rights obligations in its approach to combating domestic and foreign labour abuses. It is beyond this thesis to analyse Occupational Health and Safety (OHS) laws in every jurisdiction. Accordingly, this thesis will focus upon Australian domestic laws and laws in one of Australia's major trading partners, the People's Republic of China (China). It is hypothesised that Australia is currently breaching its human rights obligations through failing to adequately regulate employees' safety at work in Australian-based supply chains. To prove this hypothesis, this thesis will adopt a three-phase approach to analysing Australia's regulatory responses. Phase 1 will identify the standard by which Australia's regulatory approach to employees' health and safety in supply chains can be judged. This phase will focus on analysing how workers' right to safety, as a human right, imposes a moral obligation on Australia to take reasonably practicable steps to regulate Australian-based supply chains. This will form a human rights standard against which Australia's conduct can be judged. Phase 2 focuses upon the current regulatory environment. If existing regulatory vehicles adequately protect the health and safety of employees, then Australia will have discharged its obligations through simply maintaining the status quo. Australia currently regulates OHS through a combination of 'hard law' and 'soft law' regulatory vehicles. The first part of Phase 2 analyses the effectiveness of traditional OHS laws in Australia and in China. The final part of Phase 2 then analyses the effectiveness of the major soft law vehicle, 'Corporate Social Responsibility' (CSR). The fact that employees are working in unsafe working conditions does not mean Australia is breaching its human rights obligations. Australia is only required to take reasonably practicable steps to ensure human rights are realized. Phase 3 identifies four regulatory vehicles to determine whether they would assist Australia in discharging its human rights obligations. Phase 3 first analyses whether Australia could unilaterally introduce supply chain regulation to regulate domestic and extraterritorial supply chains. It then analyses three public international law regulatory vehicles, considering the ability of the United Nations Global Compact, the ILO's Better Factory Project and a bilateral agreement to improve the detection and enforcement of workers' right to safety and health.
Abstract:
The LiteSteel Beam (LSB) is a new hollow flange channel section developed by OneSteel Australian Tube Mills using a patented Dual Electric Resistance Welding technique. The LSB has a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. It is commonly used as rafters, floor joists, bearers and roof beams in residential, industrial and commercial buildings. It is on average 40% lighter than traditional hot-rolled steel beams of equivalent performance. LSB flexural members are subject to a relatively new lateral distortional buckling mode, which reduces the member moment capacity. Unlike the commonly observed lateral torsional buckling of steel beams, lateral distortional buckling of LSBs is characterised by simultaneous lateral deflection, twist and web distortion. Current member moment capacity design rules for lateral distortional buckling in AS/NZS 4600 (SA, 2005) do not include the effect of the section geometry of hollow flange beams, although its effect is considered to be important. Therefore detailed experimental and finite element analyses (FEA) were carried out to investigate the lateral distortional buckling behaviour of LSBs, including the effect of section geometry. The results showed that the current design rules in AS/NZS 4600 (SA, 2005) are over-conservative in the inelastic lateral buckling region. New improved design rules were therefore developed for LSBs based on both FEA and experimental results. A geometrical parameter (K), defined as the ratio of the flange torsional rigidity to the major axis flexural rigidity of the web (K = GJ_f/EI_x,web), was identified as the critical parameter affecting the lateral distortional buckling of hollow flange beams. The effect of section geometry was then included in the new design rules using this parameter. The new design rule developed by including this parameter was found to be accurate in calculating the member moment capacities of not only LSBs but also other types of hollow flange steel beams such as Hollow Flange Beams (HFBs), Monosymmetric Hollow Flange Beams (MHFBs) and Rectangular Hollow Flange Beams (RHFBs). The inelastic reserve bending capacity of LSBs had not been investigated previously, although past section moment capacity tests revealed that inelastic reserve bending capacity is present in LSBs. However, the Australian and American cold-formed steel design codes limit them to the first yield moment. Therefore both experiments and FEA were carried out to investigate the section moment capacity behaviour of LSBs. A comparison of the section moment capacity results from FEA, experiments and current cold-formed steel design codes showed that compact and non-compact LSB sections, classified based on AS 4100 (SA, 1998), have some inelastic reserve capacity, while slender LSBs do not have any inelastic reserve capacity beyond their first yield moment. It was found that Shifferaw and Schafer's (2008) proposed equations and the Eurocode 3 Part 1.3 (ECS, 2006) design equations can be used to include the inelastic bending capacities of compact and non-compact LSBs in design. As a simple design approach, the section moment capacity of compact LSB sections can be taken as 1.10 times their first yield moment, while for non-compact sections it is the first yield moment. For slender LSB sections, current cold-formed steel codes can be used to predict their section moment capacities.
It was believed that the use of transverse web stiffeners could improve the lateral distortional buckling moment capacities of LSBs. However, there were no design equations to predict the elastic lateral distortional buckling and member moment capacities of LSBs with web stiffeners under uniform moment conditions. Therefore, a detailed study was conducted using FEA to simulate both experimental and ideal conditions of LSB flexural members. It was shown that the use of 3 to 5 mm steel plate stiffeners, welded or screwed to the inner faces of the top and bottom flanges of LSBs at third-span points and supports, provided an optimum web stiffener arrangement. Suitable design rules were developed to calculate the improved elastic buckling and ultimate moment capacities of LSBs with these optimum web stiffeners. A design rule using the geometrical parameter K was also developed to improve the accuracy of ultimate moment capacity predictions. This thesis presents the details and results of the experimental and numerical studies of the section and member moment capacities of LSBs conducted in this research. It includes the recommendations made regarding the accuracy of current design rules as well as the new design rules for lateral distortional buckling, which include the effects of the section geometry of hollow flange steel beams. The thesis also presents a method of using web stiffeners to reduce lateral distortional buckling effects, and the associated design rules to calculate the improved moment capacities.
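As a small worked illustration of the geometrical parameter introduced above, the following Python lines compute K = GJ_f/EI_x,web; the section property values are placeholders, not actual LSB properties.

```python
# Hedged sketch: K is the ratio of flange torsional rigidity to the major
# axis flexural rigidity of the web.  All numbers below are assumed values.
E = 200e9         # Young's modulus of steel (Pa)
G = 80e9          # shear modulus of steel (Pa)
J_f = 1.2e-8      # torsion constant of one hollow flange (m^4), placeholder
I_x_web = 5.0e-7  # major-axis second moment of area of the web (m^4), placeholder

K = (G * J_f) / (E * I_x_web)
print(f"K = {K:.4f}")  # larger K: torsionally stiffer flanges relative to the
                       # web, i.e. behaviour further from web-distortion-dominated buckling
```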
Abstract:
This paper addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The paper considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The paper presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
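To make the union/intersection intuition concrete, here is a deliberately naive Python sketch in which process models are reduced to sets of directed, labelled edges; the paper's operators are far more sophisticated (they align similar, not only identical, fragments), so this only illustrates the merged-model/digest distinction. The example variants are invented.

```python
# Two hypothetical variants of an insurance claim handling process,
# each modelled as a set of (source activity, target activity) edges.
variant_a = {("receive claim", "check policy"),
             ("check policy", "assess damage"),
             ("assess damage", "pay out")}
variant_b = {("receive claim", "check policy"),
             ("check policy", "reject claim"),
             ("check policy", "assess damage")}

merged = variant_a | variant_b   # union: a model subsuming both variants
digest = variant_a & variant_b   # intersection: the most recurring fragments

print("merged:", sorted(merged))
print("digest:", sorted(digest))
```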
Abstract:
Embedded generalized markup, as applied by digital humanists to the recording and studying of our textual cultural heritage, suffers from a number of serious technical drawbacks. As a result of its evolution from early printer control languages, generalized markup can only express a document’s ‘logical’ structure via a repertoire of permissible printed format structures. In addition to the well-researched overlap problem, the embedding of markup codes into texts that never had them when written leads to a number of further difficulties: the inclusion of potentially obsolescent technical and subjective information into texts that are supposed to be archivable for the long term, the manual encoding of information that could be better computed automatically, and the obscuring of the text by highly complex technical data. Many of these problems can be alleviated by asserting a separation between the versions of which many cultural heritage texts are composed, and their content. In this way the complex inter-connections between versions can be handled automatically, leaving only simple markup for individual versions to be handled by the user.
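A minimal Python sketch of the separation being argued for: the archival text is stored clean, and structural information is kept externally as standoff annotations over character offsets, to be re-applied (or computed over) as needed. The annotation format and renderer are illustrative assumptions, not a proposal from the paper.

```python
# Standoff markup sketch: the text contains no embedded codes; structure
# lives in a separate list of (start, end, tag) annotations.
text = "The quick brown fox jumps over the lazy dog"
annotations = [(0, 19, "line"), (4, 9, "emph")]    # offsets into `text`

def render(text, annotations):
    """Naively re-apply standoff tags as inline markup for display."""
    inserts = []
    for start, end, tag in annotations:
        inserts.append((start, f"<{tag}>"))
        inserts.append((end, f"</{tag}>"))
    out = text
    for pos, tag in sorted(inserts, key=lambda i: -i[0]):  # insert right-to-left
        out = out[:pos] + tag + out[pos:]
    return out

print(render(text, annotations))
# -> <line>The <emph>quick</emph> brown fox</line> jumps over the lazy dog
```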
Abstract:
In this chapter, a rationale is developed for incorporating philosophy into teacher training programs as a means of both preparing quality teachers for the 21st century and meeting the expectations detailed in the professional standards established by the statutory authority that regulates the profession in Queensland, the Queensland College of Teaching. Furthermore, in-service teachers from Buranda State School, a Brisbane primary school that has been successfully teaching philosophy to its students for over 10 years, share their experiences of teaching philosophy and how it has enhanced student learning and the quality of teaching and professionalism of the teachers. Finally, the implications of embedding philosophy into teacher training programs are explored in terms of developing the personal integrity of beginning teachers.
Abstract:
Curriculum demands continue to increase on school education systems, with teachers at the forefront of implementing syllabus requirements. Education is reported frequently as a solution to most societal problems and, as a result of the world's information explosion, teachers are expected to cover more and more within teaching programs. How can teachers combine subjects in order to capitalise on the competing educational agendas within school timeframes? Fusing curricula requires the bonding of standards from two or more syllabuses. Both technology and ICT complement the learning of science. This study analyses selected examples of preservice teachers' overviews for fusing science, technology and ICT. These program overviews focused on primary students and the achievement of two standards (one from science and one from either technology or ICT). These primary preservice teachers' fused-curricula overviews included scientific concepts and related technology and/or ICT skills and knowledge. Findings indicated a range of innovative curriculum plans for teaching primary science through technology and ICT, demonstrating that these subjects can form cohesive links towards achieving the respective learning standards. Teachers can work more astutely by fusing curricula; however, further professional development may be required to advance thinking about these processes. Bonding subjects through their learning standards can extend beyond previous integration or thematic work where standards may not have been assessed. Education systems need to articulate through syllabus documents how effective fusing of curricula can be achieved. It appears that education is a key avenue for addressing societal needs, problems and issues. Education is promoted as a universal solution, which has resulted in curriculum overload (Dare, Durand, Moeller, & Washington, 1997; Vinson, 2001). Societal and curriculum demands have placed added pressure on teachers, with many extenuating education issues increasing teachers' workloads (Mobilise for Public Education, 2002). For example, as Australia has weather conducive to outdoor activities, social problems and issues arise that are reported through the media calling for action; consequently, schools have been involved in swimming programs, road and bicycle safety programs, and a wide range of activities that had been considered a parental responsibility in the past. Teachers are expected to plan, implement and assess these extra-curricular activities within their already overcrowded timetables. At the same time, key learning areas (KLAs) such as science and technology are mandatory requirements within all Australian education systems. These systems have syllabuses outlining levels of content and the anticipated learning outcomes (also known as standards, essential learnings, and frameworks). Time allocated for teaching science is obviously an issue. In 2001, it was estimated that the average time spent teaching science in Australian primary schools was almost an hour per week (Goodrum, Hackling, & Rennie, 2001). More recently, a study undertaken in the U.S. reported a similar finding: more than 80% of the teachers in K-5 classrooms spent less than an hour teaching science (Dorph, Goldstein, Lee, et al., 2007). More importantly, 16% spent no time teaching science in their classrooms. Teachers need to learn to work smarter by optimising the use of their in-class time.
Integration is proposed as one of the ways to address the issue of curriculum overload (Venville & Dawson, 2005; Vogler, 2003). Even though there may be a lack of definition for integration (Hurley, 2001), curriculum integration aims at covering key concepts in two or more subject areas within the same lesson (Buxton & Whatley, 2002). This implies covering the curriculum in less time than if the subjects were taught separately; therefore teachers should have more time to cover other educational issues. Expectedly, the reality can be decidedly different (e.g., Brophy & Alleman, 1991; Venville & Dawson, 2005). Nevertheless, teachers report that students expand their knowledge and skills as a result of subject integration (James, Lamb, Householder, & Bailey, 2000). There seems to be considerable value in integrating science with other KLAs beyond aiming to address teaching workloads. Over two decades ago, Cohen and Staley (1982) claimed that integration can bring a subject into the primary curriculum that may otherwise be left out. Integrating science education aims to develop a more holistic perspective. Indeed, life is not neat components of stand-alone subjects; life integrates subject content in numerous ways, and curriculum integration can assist students to make these real-life connections (Burnett & Wichman, 1997). Science integration can provide the scope for real-life learning and the possibility of targeting students' learning styles more effectively by providing more than one perspective (Hudson & Hudson, 2001). To illustrate, technology is essential to science education (Blueford & Rosenbloom, 2003; Board of Studies, 1999; Penick, 2002), and constructing technology immediately evokes a social purpose for such construction (Marker, 1992). For example, building a model windmill requires science and technology (Zubrowski, 2002) but has a key focus on sustainability and the social sciences. Science has the potential to be integrated with all KLAs (e.g., Cohen & Staley, 1982; Dobbs, 1995; James et al., 2000). Yet "integration" appears to be a confusing term. Integration also has an educational meaning focused on special education students being assimilated into mainstream classrooms. The word integration was used in the late seventies and generally focused on thematic approaches to teaching. For instance, a science theme about flight only had to have a student drawing a picture of a plane to show integration; it did not connect the anticipated outcomes from science and art. The term "fusing curricula" presents a seamless bonding between two subjects; hence standards (or outcomes) need to be linked from both subjects. This also goes beyond just embedding one subject within another. Embedding implies that one subject is dominant, while fusing curricula proposes an equal mix of learning within both subject areas. Primary education in Queensland has eight KLAs, each with its established content and each with a proposed structure for levels of learning. Primary teachers attempt to cover these syllabus requirements across the eight KLAs in less than five hours a day, and between the many extra-curricular activities occurring throughout a school year (e.g., Easter activities, Education Week, concerts, excursions, performances). In Australia, education systems have developed standards for all KLAs (e.g., Education Queensland, NSW Department of Education and Training, Victorian Education), usually designated by a code.
In the late 1990s (in Queensland), "core learning outcomes" were developed for strands across all KLAs. For example, LL2.1 in the Queensland Education science syllabus means Life and Living at Level 2, standard number 1. Thus, a teacher's planning requires the inclusion of standards as indicated by the presiding syllabus. More recently, the core learning outcomes were replaced by "essential learnings". These specify "what students should be taught and what is important for students to have opportunities to know, understand and be able to do" (Queensland Studies Authority, 2009, para. 1). Fusing science education with other KLAs may facilitate more efficient use of time and resources; however, this type of planning needs to combine standards from two syllabuses. To further assist in facilitating sound pedagogical practices, there are models proposed for learning science, technology and other KLAs, such as Bloom's Taxonomy (Bloom, 1956), Productive Pedagogies (Education Queensland, 2004), de Bono's Six Hats (de Bono, 1985), and Gardner's Multiple Intelligences (Gardner, 1999), that imply, warrant, or necessitate fused curricula. Bybee's 5 Es model (Bybee, 1997), for example, with its five levels of learning (engage, explore, explain, elaborate, and evaluate), has the potential for fusing science and ICT standards.