Abstract:
Context-based chemistry education aims to improve student interest and motivation in chemistry by connecting canonical chemistry concepts with real-world contexts. Implementation of context-based chemistry programmes began 20 years ago in an attempt to make the learning of chemistry meaningful for students. This paper reviews such programmes through empirical studies on six international courses: ChemCom (USA), Salters (UK), Industrial Science (Israel), Chemie im Kontext (Germany), Chemistry in Practice (The Netherlands) and PLON (The Netherlands). These studies are categorised through the emergent characteristics of relevance, interest/attitudes, motivation and deeper understanding. These characteristics can be found to an extent in a number of other curricular initiatives, such as science-technology-society approaches and problem-based learning or project-based science, the latter of which often incorporates an inquiry-based approach to science education. These initiatives in science education are also considered, with a focus on the characteristics of these approaches that are emphasised in context-based education. While such curricular studies provide a starting point for discussing context-based approaches in chemistry, a new theoretical framework is required to advance our understanding of how students connect canonical science concepts with real-world contexts. A dialectical sociocultural framework originating in the work of Vygotsky is used as a referent for analysing the complex human interactions that occur in context-based classrooms, providing teachers with current information about the pedagogical structures and resources that afford students the agency to learn.
Investigating higher education and secondary school web-based learning environments using the WEBLEI
Abstract:
Classroom learning environments are rapidly changing as new digital technologies become more education-friendly. What are students' perceptions of their technology-rich learning environments? This question is critical, as the answer may affect the effectiveness of new technologies in classrooms. Numerous reliable and valid learning environment instruments have been used to ascertain students' perceptions of their learning environments. This chapter focuses on one of these instruments, the Web-based Learning Environment Instrument (WEBLEI) (Chang & Fisher, 2003). Since its initial development, this instrument has been used to study a range of learning environments, and this chapter presents the findings of two example case studies involving such environments.
Abstract:
Road dust contains potentially toxic pollutants originating from a range of anthropogenic sources common to urban land uses, as well as soil inputs from surrounding areas. The research study analysed the mineralogy and morphology of dust samples from road surfaces in different land uses, together with background soil samples, to characterise the relative source contributions to road dust. The road dust consists primarily of soil-derived minerals (60%), with quartz averaging 40-50% and the remainder being the clay-forming minerals albite, microcline, chlorite and muscovite originating from surrounding soils. About 2% was organic matter, primarily originating from plant matter. Potentially toxic pollutants represented about 30% of the build-up. These pollutants consist of brake and tire wear, combustion emissions and fly ash from asphalt. Heavy metals such as Zn, Cu, Pb, Ni, Cr and Cd primarily originate from vehicular traffic, while Fe, Al and Mn primarily originate from surrounding soils. The research study confirmed the significant contribution of vehicular traffic to dust deposited on urban road surfaces.
Abstract:
In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach combining the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as "hidden" states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
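The chunking and association steps described in the abstract can be sketched as follows; `sliding_window_chunks` and `support` are hypothetical names, and the support measure shown is a plain association-rule frequency rather than the paper's full Aspect Model estimation:

```python
def sliding_window_chunks(tokens, window_size, step):
    """Segment a token sequence into overlapping fixed-width chunks."""
    chunks = []
    for start in range(0, max(len(tokens) - window_size, 0) + 1, step):
        chunks.append(tokens[start:start + window_size])
    return chunks

def support(chunks, term_subset):
    """Fraction of chunks containing every term of a query-term subset
    (the AR-mining notion of support)."""
    hits = sum(1 for chunk in chunks if term_subset.issubset(set(chunk)))
    return hits / len(chunks)
```

For example, a six-token document with a window of 4 and step of 2 yields two overlapping chunks, and a query subset is scored by how often it co-occurs inside a chunk.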
Abstract:
Recommender systems are a recent invention for dealing with the ever-growing information overload surrounding the selection of goods and services in a global economy. Collaborative Filtering (CF) is one of the most popular techniques in recommender systems. CF recommends items to a target user based on the preferences of a set of similar users, known as neighbours, generated from a database made up of the preferences of past users. With sufficient background information on item ratings its performance is promising, but research shows that it performs very poorly in a cold-start situation, where there is not enough previous rating data. As an alternative to ratings, trust between users can be used to choose neighbours for recommendation making. Better recommendations can be achieved using an inferred trust network which mimics real-world "friend of a friend" recommendations. To extend the boundaries of the neighbourhood, an effective trust inference technique is required. This thesis proposes a trust inference technique called the Directed Series Parallel Graph (DSPG), which performs better than other popular trust inference algorithms such as TidalTrust and MoleTrust. Another problem is that reliable explicit trust data is not always available. In real life, people trust "word of mouth" recommendations made by people with similar interests; this is often assumed in recommender systems. Through a survey, we confirm that interest similarity has a positive relationship with trust and can be used to generate a trust network for recommendation. In this research, we also propose a new method called SimTrust for developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity, we use users' personalised tagging information. However, we are interested in what resources the user chooses to tag, rather than the text of the tag applied. The commonalities of the resources tagged by users can be used to form the neighbourhood used in the automated recommender system. Our experimental results show that our proposed tag-similarity-based method outperforms the traditional collaborative filtering approach, which usually uses rating data.
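The resource-overlap idea behind the tag-based neighbourhood can be sketched with a simple Jaccard measure; the function name and the choice of plain Jaccard similarity are illustrative assumptions, not the thesis's exact SimTrust formulation:

```python
def tag_similarity(resources_a, resources_b):
    """Jaccard overlap of the sets of resources two users have tagged.

    Similarity depends on WHICH resources were tagged, not on the tag text,
    mirroring the idea described in the abstract.
    """
    a, b = set(resources_a), set(resources_b)
    if not a or not b:
        return 0.0  # a user with no tagging history shares nothing
    return len(a & b) / len(a | b)
```

Two users who tagged two resources in common out of four distinct resources overall would score 0.5, and the highest-scoring users form the target user's neighbourhood.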
Abstract:
Honing and Ladinig (2008) make the assertion that while the internal validity of web-based studies may be reduced, this is offset by the increase in external validity possible when experimenters can sample a wider range of participants and experimental settings. In this paper, the issue of internal validity is more closely examined, and it is argued that there is no necessary reason why the internal validity of a web-based study should be worse than that of a lab-based one. Errors of measurement or inconsistencies of manipulation will typically balance across conditions of the experiment, and thus need not threaten the validity of a study's findings.
Abstract:
The low resolution of images has been one of the major limitations in recognising humans from a distance using their biometric traits, such as face and iris. Super-resolution has been employed to improve the resolution and the recognition performance simultaneously; however, the majority of techniques operate in the pixel domain, such that biometric feature vectors are extracted from a super-resolved input image. Feature-domain super-resolution has been proposed for face and iris, and has been shown to further improve recognition performance by directly super-resolving the features used for recognition. However, current feature-domain super-resolution approaches are limited to simple linear features such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which are not the most discriminant features for biometrics. Gabor-based features have been shown to be among the most discriminant features for biometrics, including face and iris. This paper proposes a framework for conducting super-resolution in the non-linear Gabor feature domain to further improve the recognition performance of biometric systems. Experiments have confirmed the validity of the proposed approach, demonstrating superior performance to existing linear approaches for both face and iris biometrics.
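For readers unfamiliar with Gabor features, a minimal sketch of a Gabor kernel follows; the parameter values and function name are illustrative assumptions, and the paper's actual feature-extraction pipeline may differ:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian envelope modulating a
    cosine carrier oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the carrier's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(patch, thetas, wavelength=4.0, sigma=2.0):
    """Non-linear Gabor responses of an image patch over several orientations."""
    size = patch.shape[0]
    return np.array([np.sum(patch * gabor_kernel(size, wavelength, t, sigma))
                     for t in thetas])
```

Responses over a bank of orientations form the (non-linear) feature vector that the paper's framework super-resolves, in contrast to linear PCA/LDA projections.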
Abstract:
The literature supporting the notion that active, student-centered learning is superior to passive, teacher-centered instruction is encyclopedic (Bonwell & Eison, 1991; Bruning, Schraw, & Ronning, 1999; Haile, 1997a, 1997b, 1998; Johnson, Johnson, & Smith, 1999). Previous action research demonstrated that introducing a learning activity in class improved students' learning outcomes (Mejias, 2010). People acquire knowledge and skills through practice and reflection, not by watching and listening to others telling them how to do something. In this context, this project aims to gain more insight into the level of interactivity a class curriculum should have and its alignment with assessment, so that the intended learning outcomes (ILOs) are achieved. In this project, interactivity is implemented in the form of problem-based learning (PBL). I present the argument that more continuous formative feedback, when implemented with the right amount of PBL, stimulates student engagement, bringing substantial benefits to student learning. Different levels of practical work (PBL) were implemented together with two different assessment approaches in two subjects. The outcomes were measured using qualitative and quantitative data to evaluate the levels of student engagement and satisfaction in terms of the ILOs.
Abstract:
This paper presents an innovative prognostics model based on health state probability estimation embedded in a closed-loop diagnostic and prognostic system. To select an appropriate classifier for health state probability estimation in the proposed prognostic model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to progressive fault levels of three faults in an HP-LNG pump. Two sets of impeller-rubbing data were employed for the prediction of pump remnant life based on the estimation of discrete health state probabilities, using the strong classification capability of the SVM and a feature selection technique. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
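The final step, turning discrete health-state probabilities into a remnant-life estimate, can be sketched as a probability-weighted average; the function name and per-state life values are illustrative assumptions, not the paper's SVM-based estimator:

```python
def expected_remnant_life(state_probs, state_lives):
    """Expected remaining useful life given a discrete distribution over
    health states and a nominal remaining life for each state."""
    assert abs(sum(state_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * life for p, life in zip(state_probs, state_lives))
```

For instance, a machine judged 20% healthy (100 h left), 50% degraded (50 h) and 30% near failure (10 h) yields an expected remnant life of 48 h.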
Abstract:
We address the problem of face recognition in video by employing the recently proposed probabilistic linear discriminant analysis (PLDA). The PLDA has been shown to be robust against pose and expression in image-based face recognition. In this research, the method is extended and applied to video, where image-set-to-image-set matching is performed. We investigate two approaches to computing similarities between image sets using the PLDA: the closest pair approach and the holistic sets approach. To better model face appearances in video, we also propose a heteroscedastic version of the PLDA which learns the within-class covariance of each individual separately. Our experiments on the VidTIMIT and Honda datasets show that the combination of the heteroscedastic PLDA and the closest pair approach achieves the best performance.
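The closest pair approach to image-set matching can be sketched as below; note that this sketch substitutes plain Euclidean distance between per-frame feature vectors for the PLDA likelihood score used in the paper, and the function name is hypothetical:

```python
import numpy as np

def closest_pair_distance(set_a, set_b):
    """Minimum pairwise distance between two image sets.

    Each set is an (n_frames, n_features) array of per-frame feature
    vectors; the set-to-set distance is the best-matching frame pair.
    """
    diffs = set_a[:, None, :] - set_b[None, :, :]   # all pairwise differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))      # (n_a, n_b) distance matrix
    return dists.min()
```

The holistic alternative would instead score the two sets as wholes; the closest pair variant is attractive because a single well-matched frame pair suffices.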
Abstract:
Background: There are inequalities in geographical access to and delivery of health care services in Australia, particularly for cardiovascular disease (CVD), Australia's major cause of death. Analyses and models that can inform and positively influence strategies to augment services and preventative measures are needed. The Cardiac-ARIA project is using geographic information system (GIS) technology to develop a national index for each of Australia's 13,000 population centres. The index will describe the spatial distribution of CVD health care services available to support populations at risk, in a timely manner, after a major cardiac event. Methods: In the initial phase of the project, an expert panel of cardiologists and an emergency physician identified key elements of national and international guidelines for the management of acute coronary syndromes, cardiac arrest, life-threatening arrhythmias and acute heart failure, from the time of onset (potentially dial 000) to return from the hospital to the community (cardiac rehabilitation). Results: A systematic search was undertaken to identify the geographical location and type of cardiac services currently available. This enabled derivation of a master dataset of necessary services, e.g. telephone networks, ambulance, RFDS, helicopter retrieval services, road networks, hospitals, general practitioners, medical community centres, pathology services, CCUs, catheterisation laboratories, cardio-thoracic surgery units and cardiac rehabilitation services. Conclusion: This unique and innovative project has the potential to deliver a powerful tool to both highlight and combat the burden of disease of CVD in urban and regional Australia.
Abstract:
Distributed generators (DGs) are generators connected to a distribution network. The direction of power flow and the short-circuit current in a network with DGs can change compared with a network without them, and the conventional protective relay scheme does not meet the requirements of this emerging situation. As the number and capacity of DGs in the distribution network increase, the problem of coordinating protective relays becomes more challenging. Against this background, the protective relay coordination problem in distribution systems is investigated, with directional overcurrent relays taken as an example, and formulated as a mixed-integer nonlinear programming problem. A mathematical model describing the problem is first developed, and the well-established differential evolution algorithm is then used to solve it. Finally, a sample system is used to demonstrate the feasibility and efficiency of the developed method.
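For illustration, a minimal DE/rand/1/bin differential evolution loop is sketched below on a toy continuous objective; the function names and parameter values are assumptions, and the paper's actual formulation is a constrained mixed-integer relay-coordination problem rather than this unconstrained sketch:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=1):
    """Minimise `objective` over box `bounds` with classic DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct randomly chosen members
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            # Greedy selection: keep the trial if it is no worse
            s = objective(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

In the relay setting the decision variables would be relay settings (time dials, pickup currents) and the objective the total operating time subject to coordination constraints.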
Abstract:
This paper presents an experimental study that examines the accuracy of various information retrieval techniques for Web service discovery. The main goal of this research is to evaluate algorithms for semantic Web service discovery. The evaluation is comprehensively benchmarked using more than 1,700 real-world WSDL documents from the INEX 2010 Web Service Discovery Track dataset. For automatic search, we successfully use Latent Semantic Analysis and BM25 to perform Web service discovery. Moreover, we provide linking analysis, which automatically links possible atomic Web services to meet the complex requirements of users. Our fusion engine recommends a final result to users. Our experiments show that linking analysis can improve the overall performance of Web service discovery. We also find that keyword-based search can quickly return results, but it is limited in its ability to understand users' goals.
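BM25, one of the ranking functions used in the study, can be sketched as follows; this is a textbook-style implementation with common default parameters, not the exact configuration used in the experiments:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document (a list of tokens) against the query with BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                      # document frequency of each term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)  # length penalty
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores
```

Applied to tokenised WSDL descriptions, documents mentioning the query terms score above those that do not, with longer documents penalised by the length normalisation.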