250 results for Machine-tools


Relevance: 20.00%

Publisher:

Abstract:

Through practice-led research, Tessa Smallhorn examines the influence of digital technology on the performance space. From the mechanisation of modernist culture to the digitalisation of the present day, technology acts as response material for scenographers investigating the stage as machine. The interactive, real-time tools of digital culture encourage a systems-orientated approach that challenges user and operator alike. This article explores the studio practice and critical theory that were combined to offer a functional model of a digital stage machine.

In the recent decision Association for Molecular Pathology v. Myriad Genetics [1], the US Supreme Court held that naturally occurring sequences from human genomic DNA are not patentable subject matter. Only certain complementary DNAs (cDNAs), modified sequences and methods of using sequences are potentially patentable. It is likely that this distinction will hold for all DNA sequences, whether animal, plant or microbial [2]. However, it is not clear whether this means that other naturally occurring informational molecules, such as polypeptides (proteins) or polysaccharides, will also be excluded from patents. The decision underscores a pressing need for precise analysis of patents that disclose and reference genetic sequences, especially in the claims. Similarly, data sets, standards compliance and analytical tools must be improved (in particular, data sets and analytical tools must be made openly accessible) in order to provide a basis for effective decision making and policy setting to support biological innovation. Here, we present a web-based platform that allows such data aggregation, analysis and visualization in an open, shareable facility. To demonstrate the potential for the extension of this platform to global patent jurisdictions, we discuss the results of a global survey of patent offices, which shows that much progress is still needed in making these data freely available for aggregation in the first place.

This paper makes a case for thinking about the primary school as a logic machine (apparatus), as a way of understanding processes of in-school stratification. Firstly, we discuss related literature on in-school stratification in primary schools, particularly as it relates to literacy learning. Secondly, we explain how school reform can be thought about in terms of the idea of the machine or apparatus, in which case the processes of in-school stratification can be mapped as more than simply concerns about school organisation (such as student grouping): they also involve a politics of truth, played out in each school, that constitutes school culture and what counts as 'good' pedagogy. Thirdly, the paper focuses specifically on research conducted in primary schools in the northern suburbs of Adelaide, one of the most educationally disadvantaged regions in Australia, as a case study of the relationship between in-school stratification and the reproduction of inequality. We draw on more than 20 years of ethnographic work in primary schools in the northern suburbs of Adelaide and provide a snapshot of a recent attempt to improve literacy achievement in a few northern suburbs public primary schools (the SILA project). The SILA project, through diagnostic reviews, has provided a significant analysis of the challenges facing policy and practice in such school contexts, one that also maps onto existing (inter)national research.
These diagnostic reviews said 'hard things' that required attention by SILA schools, including:
- an over-reliance on whole-class, low-level, routine tasks, and hence a lack of challenge and rigour in the learning tasks offered to students;
- a focus on the 'code breaking' function of language at the expense of richer conceptualisations of literacy that might guide teachers' understanding of challenging pedagogies;
- the need for substantial shifts in the culture of schools, especially unsettling deficit views of students and their communities;
- a need for a more 'consistent' approach to teaching literacy across the school;
- a need to focus School Improvement Plans on a clear commitment to literacy learning; and
- a need to sustain professional learning that produces new knowledge and practice.
The paper concludes with suggestions for further research and possible reform projects into the primary school as a logic machine.

In this age of rapidly evolving technology, teachers are encouraged by government, syllabuses, school management and parents to adopt ICTs. Indeed, it is an expectation that teachers will incorporate technologies into their classroom teaching practices to enhance the learning experiences and outcomes of their students. In the science classroom in particular, a subject that traditionally incorporates hands-on experiments and practicals, the integration of modern technologies should be a major feature. Although myriad studies report on technologies that enhance students' learning outcomes in science, there is a dearth of literature on how teachers go about selecting technologies for use in the science classroom. Teachers can feel ill-prepared to assess the range of available choices and might feel pressured and somewhat overwhelmed by the avalanche of new developments thrust before them in marketing literature and teaching journals. The consequences of making bad decisions are costly in terms of money, time and teacher confidence. Additionally, no research to date has identified which technologies science teachers use on a regular basis, or whether some purchased technologies have proven too problematic, preventing their sustained use and possible wider adoption. The primary aim of this study was to provide research-based guidance that aids teachers' decision-making in choosing technologies for the science classroom. The study unfolded in several phases. The first phase involved survey and interview data from teachers about the technologies they currently use in their science classrooms and the frequency of that use. These data were coded and analysed using the Grounded Theory approach of Corbin and Strauss, resulting in the development of the PETTaL model, which captured the salient factors in the data.
This model incorporated usability theory from the Human-Computer Interaction literature, together with education theory and models such as Mishra and Koehler's (2006) TPACK model, where the grounded data indicated these issues. The PETTaL model identifies Power (school management, syllabus, etc.), Environment (classroom/learning setting), Teacher (personal characteristics, experience, epistemology), Technology (usability, versatility, etc.) and Learners (academic ability, diversity, behaviour, etc.) as fields that can impact the use of technology in science classrooms. The PETTaL model was used to create a Predictive Evaluation Tool (PET): a tool designed to assist teachers in choosing technologies, particularly for science teaching and learning. The evolution of the PET was cyclical (employing an agile development methodology), involving repeated testing with in-service and pre-service teachers at each iteration and incorporating their comments in subsequent versions. Once no new suggestions were forthcoming, the PET was tested with eight in-service teachers, and the results showed that the PET outcomes obtained by (experienced) teachers concurred with their instinctive evaluations. They felt the PET would be a valuable tool when considering new technology, and that it would be particularly useful as a means of communicating perceived value between colleagues, and between budget holders and requestors during the acquisition process. It is hoped that the PET can make the tacit knowledge that experienced teachers acquire about technology use in classrooms explicit to novice teachers. Additionally, the PET could be used as a research tool to discover a teacher's professional development needs. The outcomes of this study can therefore aid teachers in selecting educationally productive and sustainable new technologies for their science classrooms.
This study has produced an instrument for assisting teachers in the decision-making process associated with the use of new technologies in the science classroom. The instrument is generic in that it can be applied to all subject areas. Further, this study has produced a powerful model that extends the TPACK model, which is currently extensively employed to assess teachers' use of technology in the classroom. The PETTaL model, grounded in data from this study, responds to calls in the literature for TPACK's further development. As a theoretical model, PETTaL has the potential to serve as a framework for the development of a teacher's reflective practice (either self-evaluation or critical evaluation of observed teaching practices). Additionally, PETTaL could aid the formulation of a teacher's personal professional development plan. It will be the basis for further studies in this field.
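The published PET instrument itself is not reproduced in this abstract. As a purely hypothetical sketch of how a rubric over the five PETTaL fields could be aggregated into a single suitability score, the following assumes a 1-5 rating scale, equal weighting across fields, and the field names as dictionary keys; none of these details come from the study.

```python
# Hypothetical PET-style rubric: each PETTaL field (Power, Environment,
# Teacher, Technology, Learners) is rated 1-5 and averaged into one
# suitability score. The scale and equal weighting are assumptions,
# not the published instrument.

PETTAL_FIELDS = ("power", "environment", "teacher", "technology", "learners")

def pet_score(ratings: dict) -> float:
    """Average the 1-5 ratings across the five PETTaL fields."""
    missing = [f for f in PETTAL_FIELDS if f not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    return sum(ratings[f] for f in PETTAL_FIELDS) / len(PETTAL_FIELDS)

# Example: a teacher evaluating a proposed classroom technology.
ratings = {"power": 4, "environment": 3, "teacher": 5, "technology": 4, "learners": 4}
print(round(pet_score(ratings), 2))  # -> 4.0
```

In practice, the prompts and weightings of the published PET would replace these assumptions.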

Background: Appropriate disposition of emergency department (ED) patients with chest pain is dependent on clinical evaluation of risk. A number of chest pain risk stratification tools have been proposed. The aim of this study was to compare the predictive performance for major adverse cardiac events (MACE) of risk assessment tools from the National Heart Foundation of Australia (HFA), the Goldman risk score and the Thrombolysis in Myocardial Infarction risk score (TIMI RS). Methods: This prospective observational study evaluated ED patients aged ≥30 years with non-traumatic chest pain for which no definitive non-ischemic cause was found. Data collected included demographic and clinical information, investigation findings and occurrence of MACE by 30 days. The outcome of interest was the comparative predictive performance of the risk tools for MACE at 30 days, as analyzed by receiver operating characteristic (ROC) curves. Results: Two hundred and eighty-one patients were studied; the rate of MACE was 14.1%. The area under the curve (AUC) of the HFA, TIMI RS and Goldman tools for the endpoint of MACE was 0.54, 0.71 and 0.67, respectively, with the difference between the tools in predictive ability for MACE being highly significant [χ²(3) = 67.21, N = 276, p < 0.0001]. Conclusion: The TIMI RS and Goldman tools performed better than the HFA in this undifferentiated ED chest pain population, but selection of cutoffs balancing sensitivity and specificity was problematic. There is an urgent need for validated risk stratification tools specific to the ED chest pain population.
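The AUC comparison described above can be sketched on synthetic data (the study's patient data are not reproduced here). Each risk tool is reduced to a numeric score, and AUC measures how well that score ranks MACE cases above non-MACE cases: 0.5 is chance, 1.0 is perfect discrimination.

```python
# Sketch of an AUC comparison on invented data: an informative risk
# score versus an uninformative one, over a cohort sized like the study's.
import random

def auc(labels, scores):
    """Probability that a random positive outscores a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
n = 281                                             # cohort size from the abstract
mace = [random.random() < 0.141 for _ in range(n)]  # ~14.1% MACE rate

# Hypothetical scores: one tool that tracks outcome, one that does not.
informative = [2.0 * y + random.gauss(0, 1.5) for y in mace]
uninformative = [random.gauss(0, 1.0) for _ in mace]

print(f"informative tool AUC:   {auc(mace, informative):.2f}")
print(f"uninformative tool AUC: {auc(mace, uninformative):.2f}")
```

The same pairwise-ranking computation underlies the ROC analysis used to compare the HFA, TIMI RS and Goldman tools.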

Server consolidation using virtualization technology has become an important way to improve the energy efficiency of data centers, and virtual machine placement is the key to server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center, not the energy consumed by its communication network. The energy consumption of the communication network is not trivial and should therefore also be considered in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the physical machines and the communication network in a data center. Aiming to improve the performance and efficiency of that genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm, and that it is scalable.
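The abstract does not give the algorithm's details, so the sketch below is illustrative only: it assumes a common hybridisation pattern (a genetic algorithm whose offspring are repaired by a greedy local search) and a toy energy model that charges for each active physical machine, for capacity overload, and for traffic between VMs placed on different machines.

```python
# Illustrative hybrid GA for VM placement; energy model and
# hybridisation scheme are assumptions, not the paper's design.
import random

N_VMS, N_PMS, CAP = 12, 4, 12
random.seed(1)
vm_load = [random.randint(1, 4) for _ in range(N_VMS)]
traffic = {(i, j): random.randint(0, 3)
           for i in range(N_VMS) for j in range(i + 1, N_VMS)}

def energy(placement):
    """Toy objective: per-active-PM power, a capacity penalty, and
    network energy for traffic between VMs on different PMs."""
    loads = [0] * N_PMS
    for vm, pm in enumerate(placement):
        loads[pm] += vm_load[vm]
    pm_energy = sum(10 for l in loads if l > 0)
    overload = 100 * sum(max(0, l - CAP) for l in loads)
    net = sum(t for (i, j), t in traffic.items() if placement[i] != placement[j])
    return pm_energy + overload + net

def local_search(p):
    """Greedy repair: move each VM to the PM that lowers total energy most."""
    p = list(p)
    for v in range(N_VMS):
        p[v] = min(range(N_PMS), key=lambda pm: energy(p[:v] + [pm] + p[v + 1:]))
    return p

def hybrid_ga(pop_size=20, generations=30):
    pop = [[random.randrange(N_PMS) for _ in range(N_VMS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        parents = pop[:pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_VMS)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(N_VMS)] = random.randrange(N_PMS)
            children.append(local_search(child))  # the "hybrid" step
        pop = parents + children
    return min(pop, key=energy)

best = hybrid_ga()
print("best placement:", best, "energy:", energy(best))
```

The local-search repair is what makes the algorithm "hybrid" in this sketch; dropping that one line recovers a plain genetic algorithm for comparison.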

Due to the health impacts caused by exposure to air pollutants in urban areas, the monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. The dynamics and complexity of air pollutant behaviour have made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrate that both models have good prediction ability; however, the hybrid PLS–SVM is more accurate. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It was concluded that the errors decrease after size reduction and that coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.

This chapter was developed as part of the ‘People, communities and economies of the Lake Eyre Basin’ project. It has been written for communities, government agencies and interface organisations involved in natural resource management (NRM) in the Lake Eyre Basin (LEB). Its purpose is to identify the key factors for successful community engagement processes relevant to the LEB and present tools and principles for successful engagement processes. The term ‘interface organisation’ is used here to refer to the diverse range of local and regional organisations (such as Catchment Committees or NRM Regional Bodies) that serve as linkages, or translators, between local communities and broader Australian and State Governments. The importance of fostering and harnessing effective processes of community engagement has been identified as crucial to building a prosperous future for rural and remote regions in Australia. The chapter presents an overview of the literature on successful community engagement processes for NRM, as well as an overview of the current NRM arrangements in the LEB. The main part of the chapter presents findings of the series of interviews conducted with the government liaison officers representing both state and federal organisations who are responsible for coordinating and facilitating regional NRM in the LEB, and with the members of communities of the LEB.

This work is motivated by the need to efficiently machine the edges of ophthalmic polymer lenses for mounting in spectacle or instrument frames. The polymer materials used are required to have suitable optical characteristics, such as a high refractive index and Abbe number, combined with low density and high scratch and impact resistance. Edge surface finish is an important aesthetic consideration; its quality is governed by the material removal operation and the physical properties of the material being processed. The wear behaviour of polymer materials is not as straightforward as that of other materials, owing to their molecular and structural complexity, not to mention their time-dependent properties. Four commercial ophthalmic polymers have been studied in this work using nanoindentation techniques, which are evaluated as tools for probing surface mechanical properties in order to better understand the grinding response of polymer materials.
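Nanoindentation data are conventionally reduced with the Oliver-Pharr method: hardness is peak load over projected contact area, and the reduced modulus follows from the unloading stiffness. The sketch below uses the ideal Berkovich area function and invented example numbers, not values measured in this work.

```python
# Minimal Oliver-Pharr style reduction of a nanoindentation curve.
import math

def oliver_pharr(p_max, h_max, stiffness, beta=1.0):
    """Return (hardness, reduced modulus) in GPa for a Berkovich tip.

    p_max     : peak load in mN
    h_max     : depth at peak load in nm
    stiffness : unloading stiffness dP/dh in mN/nm
    """
    h_c = h_max - 0.75 * p_max / stiffness      # contact depth (epsilon = 0.75)
    a_c = 24.5 * h_c ** 2                       # ideal Berkovich area, nm^2
    hardness = p_max / a_c * 1e6                # mN/nm^2 -> GPa
    e_r = (math.sqrt(math.pi) / (2 * beta)) * stiffness / math.sqrt(a_c) * 1e6
    return hardness, e_r

# Illustrative numbers of the right order for a soft ophthalmic polymer:
H, Er = oliver_pharr(p_max=5.0, h_max=1000.0, stiffness=0.02)
print(f"hardness ~ {H:.2f} GPa, reduced modulus ~ {Er:.1f} GPa")
```

Real analyses replace the ideal area function with a calibrated one and must account for the time-dependent (viscoelastic) response the abstract highlights, which the basic Oliver-Pharr treatment ignores.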

Many emerging economies are deploying the patent system to stimulate biotechnological innovation, on the premise that this will improve their economic and social growth. The patent system mandates full disclosure of the patented invention in exchange for a temporary exclusive patent right. Recently, however, patent offices have fallen short of complying with this mandate, especially for genetic inventions. Most patent offices provide only static information about disclosed patent sequences, and some do not even keep track of the sequence listing data in their own databases. The successful partnership of QUT Library and Cambia exemplifies advocacy in Open Access, Open Innovation and User Participation. The library extends its services to various departments within the university, and builds and encourages research networks to complement the skills needed to make a contribution in the real world.

Sugar cane is a major source of food and fuel worldwide. Biotechnology has the potential to improve economically important traits in sugar cane as well as diversify sugar cane beyond traditional applications such as sucrose production. High levels of transgene expression are key to the success of improving crops through biotechnology. Here we describe new molecular tools that both expand and improve gene expression capabilities in sugar cane. We have identified promoters that can be used to drive high levels of gene expression in the leaf and stem of transgenic sugar cane. One of these promoters, derived from the Cestrum yellow leaf curling virus, drives levels of constitutive transgene expression that are significantly higher than those achieved by the historical benchmark maize polyubiquitin-1 (Zm-Ubi1) promoter. A second promoter, the maize phosphoenolpyruvate carboxylase promoter, was found to be a strong, leaf-preferred promoter that enables levels of expression comparable to Zm-Ubi1 in this organ. Transgene expression was increased approximately 50-fold by gene modification, which included optimising the codon usage of the coding sequence to better suit sugar cane. We also describe a novel dual transcriptional enhancer that increased gene expression from different promoters, boosting expression from Zm-Ubi1 over eightfold. These molecular tools will be extremely valuable for the improvement of sugar cane through biotechnology.

Objective: To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports. Materials and Methods: 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities from the reports; a third, senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from these reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-validation over the 99 reports. Results: The Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, and when using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous work that attempted to classify free-text radiology reports. Discussion: Automated classification methods have proven effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders. Conclusion: This investigation shows early promising results; future work will further validate and strengthen the proposed approaches.
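The basic feature/classifier setup described above can be illustrated in a few lines: token unigram and bigram counts feeding a Naive Bayes classifier. The example sentences are invented (the 99 hospital reports are not reproduced here), and the negation and SNOMED CT features are omitted for brevity.

```python
# Toy Naive Bayes classification of radiology-style sentences using
# unigram + bigram token features; data are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reports = [
    "comminuted fracture of the distal radius",
    "transverse fracture through the tibial shaft",
    "dislocation of the elbow joint",
    "no fracture or dislocation identified",
    "normal alignment no bony abnormality seen",
    "soft tissues unremarkable no acute abnormality",
]
labels = [1, 1, 1, 0, 0, 0]   # 1 = fracture/abnormality present

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # token unigrams + bigrams
    MultinomialNB(),
)
clf.fit(reports, labels)

print(clf.predict(["undisplaced fracture of the fibula"]))  # -> [1]
```

In the study itself, tokens were stemmed first and negation detection kept phrases like "no fracture" from being read as positive evidence; both refinements would slot in at the vectoriser stage.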

This study set out to investigate the kinds of learning difficulties encountered by Malaysian students and how they actually coped with online learning. The modified Online Learning Environment Survey (OLES) instrument was used to collect data from a sample of 40 Malaysian students at a university in Brisbane, Australia. A comparison group of 35 Australian students was also included. Contrary to assumptions from previous research, the findings revealed only a few differences between the international Asian and Australian students with regard to their perceptions of online learning. Recommendations based on the findings of this study are applicable to Australian universities that have Asian international students enrolled to study online.

Objective: To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive, as they require minimal intervention to guarantee high effectiveness. Methods and Materials: The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in the type of health records, the source of the data, and their quality, with one of the datasets containing optical character recognition errors. Results: Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion: The findings show that Anonym compares favourably with the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations in training size, data type and quality in the presence of sufficient training data.
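Anonym's core is a conditional random fields classifier, which is not reproduced here. The sketch below illustrates only the simpler pattern-matching component the abstract mentions: regular expressions that flag obvious identifier formats in free text. The patterns and the sample note are invented for illustration.

```python
# Minimal pattern-matching de-identification: regexes flag obvious
# identifier formats and replace them with category placeholders.
import re

PATTERNS = {
    "DATE": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "PHONE": r"\b\d{4}[ -]?\d{3}[ -]?\d{3}\b",
    "MRN": r"\bMRN[: ]?\d+\b",
}

def mask_identifiers(text: str) -> str:
    """Replace each matched identifier with its bracketed category label."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label}]", text)
    return text

note = "Patient MRN:483920 reviewed on 03/07/2014, contact 0412 555 123."
print(mask_identifiers(note))
```

Pure pattern matching misses identifiers without a fixed format (names, addresses), which is exactly why the tool combines these matches with linguistic and lexical features inside a statistical classifier.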

Background: Cancer monitoring and prevention relies critically on the timely notification of cancer cases. However, the abstraction and classification of cancer from the free text of pathology reports and other relevant documents, such as death certificates, are complex and time-consuming activities. Aims: In this paper, approaches for the automatic detection of notifiable cancer cases as the cause of death from free-text death certificates supplied to Cancer Registries are investigated. Method: A number of machine learning classifiers were studied. Features were extracted using natural language techniques and the Medtex toolkit; they encompassed stemmed words, bigrams, and concepts from the SNOMED CT medical terminology. The baseline consisted of a keyword spotter using keywords extracted from the long descriptions of ICD-10 cancer-related codes. Results: Death certificates with notifiable cancer listed as the cause of death can be effectively identified with the methods studied in this paper. A Support Vector Machine (SVM) classifier achieved the best performance, with an overall F-measure of 0.9866 when evaluated on a set of 5,000 free-text death certificates using the token stem feature set. The SNOMED CT concept plus token stem feature set reached the lowest variance (0.0032) and false negative rate (0.0297) while achieving an F-measure of 0.9864. The SVM classifier accounted for the first 18 of the top 40 evaluated runs and was the most robust classifier, with a variance of 0.001141, half that of the other classifiers. Conclusion: Feature selection had the greatest influence on classifier performance, although the type of classifier employed also affected performance; in contrast, the feature weighting scheme had a negligible effect. Specifically, stemmed tokens, with or without SNOMED CT concepts, formed the most effective features when combined with an SVM classifier.
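The best-performing setup, token features into a linear SVM, can be sketched on invented certificate snippets. The study's stemmed tokens and SNOMED CT concept features are approximated here by plain word tokens, and the 5,000 real certificates are not reproduced.

```python
# Schematic token-features-plus-linear-SVM classifier for notifiable
# cancer as cause of death; the snippets and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

certificates = [
    "metastatic adenocarcinoma of the colon",
    "small cell carcinoma of the lung with metastases",
    "malignant melanoma with hepatic metastasis",
    "acute myocardial infarction ischaemic heart disease",
    "cerebrovascular accident hypertension",
    "chronic obstructive pulmonary disease pneumonia",
]
notifiable = [1, 1, 1, 0, 0, 0]   # 1 = notifiable cancer as cause of death

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(certificates, notifiable)

print(clf.predict(["squamous cell carcinoma of the oesophagus"]))
```

At the scale of the study, the same pipeline would be fed stemmed tokens (optionally augmented with SNOMED CT concept identifiers) and evaluated with the F-measure reported above.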