890 results for Voting-machines.
Abstract:
Objectives The objective of this study was to develop process quality indicators (PQIs) to support the improvement of care services for older people with cognitive impairment in emergency departments (EDs). Methods A structured research approach was taken for the development of PQIs for the care of older people with cognitive impairment in EDs, including combining available evidence with expert opinion (phase 1), a field study (phase 2), and formal voting (phase 3). A systematic review of the literature identified ED processes targeting the specific care needs of older people with cognitive impairment. Existing relevant PQIs were also included. By integrating the scientific evidence and clinical expertise, new PQIs were drafted and, along with the existing PQIs, extensively discussed by an advisory panel. These indicators were field tested in eight hospitals using a cohort of older persons aged 70 years and older. After analysis of the field study data (indicator prevalence, variability across sites), in a second meeting, the advisory panel further defined the PQIs. The advisory panel formally voted for selection of those PQIs that were most appropriate for care evaluation. Results In addition to seven previously published PQIs relevant to the care of older persons, 15 new indicators were created. These 22 PQIs were then field tested. PQIs designed specifically for the older ED population with cognitive impairment were only scored for patients with identified cognitive impairment. Following formal voting, a total of 11 PQIs were included in the set. These PQIs targeted cognitive screening, delirium screening, delirium risk assessment, evaluation of acute change in mental status, delirium etiology, proxy notification, collateral history, involvement of a nominated support person, pain assessment, postdischarge follow-up, and ED length of stay. Conclusions This article presents a set of PQIs for the evaluation of care for older people with cognitive impairment in EDs. The variation in indicator triggering across different ED sites suggests that there are opportunities for quality improvement in care for this vulnerable group. Applied PQIs will identify an emergency service's implementation of care strategies for cognitively impaired older ED patients. Awareness of the PQI triggers at an ED level enables implementation of targeted interventions to improve any suboptimal processes of care. Further validation and evaluation of the indicators' utility in a wider population are now indicated.
Abstract:
Objectives The purpose of this study was to identify the structural quality of care domains and to establish a set of structural quality indicators (SQIs) for the assessment of care of older people with cognitive impairment in emergency departments (EDs). Methods A structured approach to SQI development was undertaken, including: 1) a comprehensive search of peer-reviewed and gray literature focusing on identification of evidence-based interventions targeting structure of care of older patients with cognitive impairment and existing SQIs; 2) a consultative process engaging experts in the care of older people and epidemiologic methods (i.e., advisory panel) leading to development of a draft set of SQIs; 3) field testing of drafted SQIs in eight EDs, leading to refinement of the SQI set; and 4) an independent voting process among the panelists for SQI inclusion in a final set, using preestablished inclusion and exclusion criteria. Results At the conclusion of the process, five SQIs targeting the management of older ED patients with cognitive impairment were developed: 1) the ED has a policy outlining the management of older people with cognitive impairment during the ED episode of care; 2) the ED has a policy outlining issues relevant to carers of older people with cognitive impairment, encompassing the need to include the (family) carer in the ED episode of care; 3) the ED has a policy outlining the assessment and management of behavioral symptoms, with specific reference to older people with cognitive impairment; 4) the ED has a policy outlining delirium prevention strategies, including the assessment of patients' delirium risk factors; and 5) the ED has a policy outlining pain assessment and management for older people with cognitive impairment. Conclusions This article presents a set of SQIs for the evaluation of performance in caring for older people with cognitive impairment in EDs.
Abstract:
In this paper, we propose a highly reliable fault diagnosis scheme for incipient low-speed rolling element bearing failures. The scheme consists of fault feature calculation, discriminative fault feature analysis, and fault classification. The proposed approach first computes wavelet-based fault features, including the respective relative wavelet packet node energy and entropy, by applying a wavelet packet transform to an incoming acoustic emission signal. The most discriminative fault features are then filtered from the originally produced feature vector by using discriminative fault feature analysis based on a binary bat algorithm (BBA). Finally, the proposed approach employs one-against-all multiclass support vector machines to identify multiple low-speed rolling element bearing defects. This study compares the proposed BBA-based dimensionality reduction scheme with four other dimensionality reduction methodologies in terms of classification performance. Experimental results show that the proposed methodology is superior to other dimensionality reduction approaches, yielding average classification accuracies of 94.9%, 95.8%, and 98.4% at bearing rotational speeds of 20 revolutions per minute (RPM), 80 RPM, and 140 RPM, respectively.
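As an illustration of the general pipeline described above (and only that), the sketch below computes relative wavelet packet node energy and entropy features and trains one-against-all SVMs. The synthetic signals and labels, and the SelectKBest filter standing in for the binary bat algorithm, are placeholder assumptions, not the authors' data or method.

```python
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def wavelet_packet_features(signal, wavelet="db4", level=3):
    """Relative energy and Shannon entropy of each terminal wavelet packet node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    rel_energy = energies / energies.sum()
    entropies = []
    for node in nodes:
        p = node.data ** 2 / np.sum(node.data ** 2)
        entropies.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.concatenate([rel_energy, np.array(entropies)])

# Synthetic placeholders for the acoustic emission recordings and fault labels.
rng = np.random.default_rng(0)
signals = rng.normal(size=(60, 1024))
labels = rng.integers(0, 4, size=60)

X = np.array([wavelet_packet_features(s) for s in signals])
selector = SelectKBest(f_classif, k=10).fit(X, labels)   # stand-in for the BBA feature selection
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(selector.transform(X), labels)
print("training accuracy:", clf.score(selector.transform(X), labels))
```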
Abstract:
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
Abstract:
The mining industry presents us with a number of ideal applications for sensor-based machine control because of the unstructured environment that exists within each mine. The aim of the research presented here is to increase the productivity of existing large compliant mining machines by retrofitting them with enhanced sensing and control technology. The current research focusses on the automatic control of the swing motion cycle of a dragline and an automated roof bolting system. We have achieved:
* closed-loop swing control of a one-tenth scale model dragline;
* single degree of freedom closed-loop visual control of an electro-hydraulic manipulator in the laboratory, developed from standard components.
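For readers unfamiliar with the term, the toy loop below sketches what closed-loop control means in the most generic sense: a controller repeatedly corrects a measured error. It is a minimal PID illustration with arbitrary gains and a toy plant, not the dragline or manipulator controllers developed in this research.

```python
# Minimal PID sketch: all gains, the plant model, and the time step are arbitrary assumptions.
def pid_step(error, state, kp=2.0, ki=1.0, kd=0.1, dt=0.01):
    """One PID update; `state` carries (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

setpoint, y, state = 1.0, 0.0, (0.0, 0.0)
for _ in range(500):                      # simulate 5 s at 10 ms steps
    u, state = pid_step(setpoint - y, state)
    y += 0.01 * (u - y)                   # toy first-order plant response
print(round(y, 3))                        # y approaches the setpoint
```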
Abstract:
This paper offers one explanation for the institutional basis of food insecurity in Australia, and argues that while alternative food networks and the food sovereignty movement perform a valuable function in building forms of social solidarity between urban consumers and rural producers, they currently make only a minor contribution to Australia’s food and nutrition security. The paper begins by identifying two key drivers of food security: household incomes (on the demand side) and nutrition-sensitive, ‘fair food’ agriculture (on the supply side). We focus on this second driver and argue that healthy populations require an agricultural sector that delivers dietary diversity via a fair and sustainable food system. In order to understand why nutrition-sensitive, fair food agriculture is not flourishing in Australia we introduce the development economics theory of urban bias. According to this theory, governments support capital intensive rather than labour intensive agriculture in order to deliver cheap food alongside the transfer of public revenues gained from rural agriculture to urban infrastructure, where the majority of the voting public resides. We chart the unfolding of the Urban Bias across the twentieth century and its consolidation through neo-liberal orthodoxy, and argue that agricultural policies do little to sustain, let alone revitalize, rural and regional Australia. We conclude that by observing food system dynamics through a re-spatialized lens, Urban Bias Theory is valuable in highlighting rural–urban socio-economic and political economy tensions, particularly regarding food system sustainability. It also sheds light on the cultural economy tensions for alternative food networks as they move beyond niche markets to simultaneously support urban food security and sustainable rural livelihoods.
Abstract:
Everything revolves around desiring-machines and the production of desire… Schizoanalysis merely asks what are the machinic, social and technical indices on a socius that open to desiring-machines (Deleuze & Guattari, 1983, pp. 380-381). Achievement tests like NAPLAN are fairly recent, yet common, education policy initiatives in much of the Western world. They intersect with, use and change pre-existing logics of education, teaching and learning. There has been much written about the form and function of these tests, the ‘stakes’ involved and the effects of their practice. This paper adopts a different “angle of vision” to ask what ‘opens’ education to these regimes of testing (Roy, 2008). This paper builds on previous analyses of NAPLAN as a modulating machine, or a machine characterised by the increased intensity of connections and couplings. One affect can be “an existential disquiet” as “disciplinary subjects attempt to force coherence onto a disintegrating narrative of self” (Thompson & Cook, 2012, p. 576). Desire operates at all levels of the education assemblage; however, our argument is that achievement testing manifests desire as ‘lack’, seen in the desire for improved results, the desire for increased control, the desire for freedom, and the desire for acceptance, to name a few. For Deleuze and Guattari, desire is irreducible to lack; instead, desire is productive. As a productive assemblage, education machines operationalise and produce through desire; “Desire is a machine, and the object of the desire is another machine connected to it” (Deleuze & Guattari, 1983, p. 26). This intersection is complexified by the strata at which they occur, and the molar and molecular connections and flows they make possible. Our argument is that when attention is paid to the macro and micro connections, the machines built and disassembled as a result of high-stakes testing, a map is constructed that outlines possibilities, desires and blockages within the education assemblage. This schizoanalytic cartography suggests a new analysis of these ‘axioms’ of testing and accountability. It follows the flows and disruptions made possible as different or altered connections are made and as new machines are brought online. Thinking of education machinically requires recognising that “every machine functions as a break in the flow in relation to the machine to which it is connected, but at the same time is also a flow itself, or the production of flow, in relation to the machine connected to it” (Deleuze & Guattari, 1983, p. 37). Through its potential to map desire, desire-production and the production of desire within those assemblages that have come to dominate our understanding of what is possible, Deleuze and Guattari’s method of schizoanalysis provides a provocative lens for grappling with the question of what one can do, and what lines of flight are possible.
Abstract:
This paper applies concepts Deleuze developed in his ‘Postscript on the Societies of Control’, especially those relating to modulatory power, dividuation and control, to aspects of Australian schooling to explore how this transition is manifesting itself. Two modulatory machines of assessment, NAPLAN and My Schools, are examined as a means to better understand how the disciplinary institution is changing as a result of modulation. This transition from discipline to modulation is visible in the declining importance of the disciplinary teacher–student relationship as a measure of the success of the educative process. The transition occurs through seduction because that which purports to measure classroom quality is in fact a serpent of modulation that produces simulacra of the disciplinary classroom. The effect is to sever what happens in the disciplinary space from its representations in a luminiferous ether that overlays the classroom.
Abstract:
Legal Context In the wake of the Copenhagen Accord 2009 and the Cancun Agreements 2010, a number of patent offices have introduced fast-track mechanisms to encourage patent applications in relation to clean technologies - such as those pertaining to hydrogen. However, patent offices will be under increasing pressure to ensure that the granted patents satisfy the requisite patent thresholds, as well as to identify and reject cases of fraud, hoaxes, scams, and swindles. Key Points This article examines the BlackLight litigation in the United States, the United Kingdom, and the European Patent Office, and considers how patent offices and courts deal with patent applications in respect of clean energy and perpetual motion machines. Practical Significance The capacity of patent offices to grant sound and reliable patents is critical to the credibility of the patent system, particularly in the context of the current focus upon promoting clean technologies.
Abstract:
The requirement for distributed computing of all-to-all comparison (ATAC) problems in heterogeneous systems is increasingly important in various domains. Though Hadoop-based solutions are widely used, they are inefficient for the ATAC pattern, which is fundamentally different from the MapReduce pattern for which Hadoop is designed. They exhibit poor data locality and unbalanced allocation of comparison tasks, particularly in heterogeneous systems. This results in massive data movement at runtime and ineffective utilization of computing resources, significantly affecting overall computing performance. To address these problems, a scalable and efficient data and task distribution strategy is presented in this paper for processing large-scale ATAC problems in heterogeneous systems. It not only saves storage space but also achieves load balancing and good data locality for all comparison tasks. Experiments with bioinformatics examples show that about 89% of the ideal performance capacity of the multiple machines is achieved using the approach presented in this paper.
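The abstract does not give the strategy itself, so the sketch below only illustrates the kind of trade-off involved: a naive greedy heuristic that assigns each pairwise comparison to the machine that already stores most of its inputs and carries the least load. All names and sizes are assumptions for illustration, not the paper's algorithm.

```python
from itertools import combinations

def distribute_atac(n_items, n_machines):
    """Greedily place each pairwise comparison, balancing load and reusing stored data."""
    storage = [set() for _ in range(n_machines)]   # data items held per machine
    load = [0] * n_machines                        # comparison tasks per machine
    assignment = {}
    for pair in combinations(range(n_items), 2):
        # prefer machines already holding the pair's inputs, then the least loaded
        best = min(range(n_machines),
                   key=lambda m: (2 - len(storage[m] & set(pair)), load[m]))
        storage[best].update(pair)
        load[best] += 1
        assignment[pair] = best
    return assignment, storage, load

assignment, storage, load = distribute_atac(n_items=10, n_machines=4)
print("tasks per machine:", load)
print("items stored per machine:", [len(s) for s in storage])
```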
Abstract:
Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during start and stop of the machines, and the amount of wear is so small that it is difficult to measure with accuracy. Various wear measuring techniques have been used, of which out-of-roundness was found to be the most reliable method for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities. The method proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The test duration was too long and the wear quantities achieved were too small. To minimise the test duration, short tests of about 90 min duration were conducted and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. The out-of-roundness was found to be the most effective method. This method was further refined by enlarging the out-of-roundness traces on a photocopier, and it proved to be reliable and inexpensive. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated, and this was repeatable and reliable. Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is a simple but crude method. This may require an automated procedure to determine the weight loss from the out-of-roundness traces directly. The method can be very useful in reducing test duration and measuring wear quantities with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to wear of bearings indicates that this equipment has high potential to be used as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurements in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that changes with the progression of wear in components of circular shape. Thus, out-of-roundness is found to be an effective wear measuring parameter that relates to change in geometry. The method of increasing the sensitivity and enlarging the out-of-roundness traces is original work through which the area of the worn cross-section can be determined and the weight loss derived for materials of known density with higher precision.
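A hedged worked example of the final calculation described above, assuming the worn area from the enlarged trace has already been converted back to true scale and that wear is uniform along the bearing's axial length; the numbers are illustrative only, not measurements from the study.

```python
def wear_mass_loss(true_worn_area_mm2, bearing_length_mm, density_g_per_mm3):
    """Mass loss (g) from a worn cross-sectional area, assuming uniform wear along the bearing length."""
    worn_volume_mm3 = true_worn_area_mm2 * bearing_length_mm
    return worn_volume_mm3 * density_g_per_mm3

# Example: 0.2 mm^2 of worn cross-section on a 30 mm long bronze bearing (~8.8e-3 g/mm^3)
print(round(wear_mass_loss(0.2, 30.0, 8.8e-3), 4), "g")   # -> 0.0528 g
```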
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the growing gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much greater computing resource contention with the simulations, and such contention severely damages simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves application end-to-end transfer performance.
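A hedged illustration of the kind of trade-off such a placement framework must weigh (it is not the FlexAnalytics algorithm): analyse data in situ, which contends with the simulation but shrinks the transfer, or ship raw output and analyse downstream. All rates and sizes below are made-up placeholders.

```python
def placement_cost(data_gb, reduction_ratio, analyse_rate_gb_s, link_gb_s, in_situ_slowdown):
    """Rough time estimates (s) for analysing in situ vs. shipping raw data downstream."""
    in_situ = data_gb / analyse_rate_gb_s * in_situ_slowdown + (data_gb * reduction_ratio) / link_gb_s
    downstream = data_gb / link_gb_s + data_gb / analyse_rate_gb_s
    return {"in_situ": in_situ, "downstream": downstream}

costs = placement_cost(data_gb=200, reduction_ratio=0.1,
                       analyse_rate_gb_s=5.0, link_gb_s=1.0, in_situ_slowdown=1.5)
print(costs, "-> choose", min(costs, key=costs.get))
```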
Abstract:
A combined data matrix consisting of high performance liquid chromatography–diode array detector (HPLC–DAD) and inductively coupled plasma-mass spectrometry (ICP-MS) measurements of samples from the plant roots of the Cortex moutan (CM) produced much better classification and prediction results in comparison with those obtained from either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated with the use of the principal component analysis (PCA) and linear discriminant analysis (LDA) methods of data analysis; essentially, the qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Another three methods, K-nearest neighbor (KNN), back-propagation artificial neural network (BP-ANN), and least squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct on the prediction set data). Additionally, multiple linear regression (MLR) was utilized to explore any relationship between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals as well as some metallic pollutants were related to the organic compounds on the basis of their concentrations.
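A minimal sketch of the data-fusion step described above: standardise the HPLC-DAD and ICP-MS blocks, concatenate them into one matrix, and classify with KNN. The arrays and their dimensions are synthetic placeholders, not the Cortex moutan measurements.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
hplc = rng.normal(size=(45, 20))          # e.g. 20 chromatographic peak areas per sample
icpms = rng.normal(size=(45, 12))         # e.g. 12 trace-element concentrations per sample
province = rng.integers(0, 3, size=45)    # three provinces of origin

combined = np.hstack([StandardScaler().fit_transform(hplc),
                      StandardScaler().fit_transform(icpms)])
scores = PCA(n_components=2).fit_transform(combined)   # qualitative inspection, as with PCA in the study
acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), combined, province, cv=5)
print("mean CV accuracy:", acc.mean().round(2))
```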
Abstract:
Network topology and routing are two important factors in determining the communication costs of big data applications at large scale. For a given Cluster, Cloud, or Grid system, the network topology is fixed, and static or dynamic routing protocols are preinstalled to direct the network traffic; users cannot change them once the system is deployed. Hence, it is hard for application developers to identify the optimal network topology and routing algorithm for their applications with distinct communication patterns. In this study, we design a CCG virtual system (CCGVS), which first uses container-based virtualization to allow users to create a farm of lightweight virtual machines on a single host. It then uses software-defined networking (SDN) techniques to control the network traffic among these virtual machines. Users can change the network topology and control the network traffic programmatically, thereby enabling application developers to evaluate their applications on the same system with different network topologies and routing algorithms. Preliminary experimental results with both synthetic big data programs and the NPB benchmarks show that CCGVS can represent the application performance variations caused by network topology and routing algorithms.
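As a conceptual illustration of why topology matters for communication cost (it is not the CCGVS implementation), the snippet below compares the average hop count under shortest-path routing for two synthetic 16-node topologies using networkx; in a real SDN testbed such routes would be installed as flow rules.

```python
import networkx as nx

ring = nx.cycle_graph(16)          # 16 nodes in a ring
hypercube = nx.hypercube_graph(4)  # 16 nodes in a 4-D hypercube

def mean_hops(g):
    """Average shortest-path hop count over all node pairs."""
    return nx.average_shortest_path_length(g)

print("ring      :", round(mean_hops(ring), 2))
print("hypercube :", round(mean_hops(hypercube), 2))
```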
Abstract:
A novel near-infrared spectroscopy (NIRS) method has been researched and developed for the simultaneous analyses of the chemical components and associated properties of mint (Mentha haplocalyx Briq.) tea samples. The common analytes were: total polysaccharide content, total flavonoid content, total phenolic content, and total antioxidant activity. To resolve the NIRS data matrix for such analyses, least squares support vector machines was found to be the best chemometrics method for prediction, although it was closely followed by the radial basis function/partial least squares model. Interestingly, the commonly used partial least squares was unsatisfactory in this case. Additionally, principal component analysis and hierarchical cluster analysis were able to distinguish the mint samples according to their four geographical provinces of origin, and this was further facilitated with the use of the chemometrics classification methods: K-nearest neighbors, linear discriminant analysis, and partial least squares discriminant analysis. In general, given the potential savings with sampling and analysis time as well as with the costs of special analytical reagents required for the standard individual methods, NIRS offered a very attractive alternative for the simultaneous analysis of mint samples.
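A minimal workflow sketch for the kind of NIRS calibration described above, predicting a property such as total phenolic content from spectra. Scikit-learn has no LS-SVM, so kernel ridge regression is used here as a closely related least-squares kernel method; all spectra and reference values are synthetic placeholders.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
spectra = rng.normal(size=(60, 200))                                      # 60 samples x 200 wavelengths
phenolics = spectra[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)   # toy reference values

X_train, X_test, y_train, y_test = train_test_split(spectra, phenolics, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3).fit(X_train, y_train)
print("R^2 on held-out samples:", round(r2_score(y_test, model.predict(X_test)), 2))
```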