440 results for ACCURATE
Abstract:
The purpose of this paper is to assess aspects of the British Government's attempts to use sporting participation as a vehicle to re-integrate socially disadvantaged, excluded and 'at-risk' youth into mainstream society. A number of organisations, policy-makers, commentators, and practitioners with a stake in the 'sport and social inclusion agenda' were interviewed. General agreement was found on a number of points: that the field was overly crowded with policies, programmes and initiatives; that the field worked in a 'bottom-up' way, with the most significant factor determining success being effective local workers with good networks and cultural access; that the dichotomising rhetoric of inclusion/exclusion was counter-productive; that the notion of the 'at-risk youth' was problematic and unhelpful; and that they all now dealt with a marketplace, where 'clients' had to be enrolled in their own reformation. There was also disagreement on a number of points: that policy acts as a relatively accurate template for practice, as opposed to the argument that it was simply regarded as a cluster of suggestions for practice; that policy was exceptionally piecemeal in its formulation and application, as opposed to regarding policy as necessarily targeted and dispersed; and that the inclusion agenda was largely politically driven and transitory, as opposed to the optimistic view that it had become ingrained in local practice. Finally, the paper examines some issues that are the most likely points of contribution by researchers in the area: that more research needs to be done on the processes of identity formation associated with participation in sport; that more effective programme evaluation needs to be done for such forms of governmental intervention to work properly; and that the relationship between different kinds of physical activity and social and personal change needs to be more thoroughly theorised.
Abstract:
In condition-based maintenance (CBM), effective diagnostics and prognostics are essential tools for maintenance engineers to identify imminent faults and to predict the remaining useful life before components finally fail. This enables remedial actions to be taken in advance and production to be rescheduled if necessary. This paper presents a technique for accurate assessment of the remnant life of machines based on historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. The technique uses a Support Vector Machine (SVM) classifier for both fault diagnosis and evaluation of the health stages of machine degradation. To validate the feasibility of the proposed model, data at five different levels for four typical faults from High Pressure Liquefied Natural Gas (HP-LNG) pumps were used for multi-class fault diagnosis. In addition, two sets of impeller-rub data were analysed and employed to predict the remnant life of the pump based on estimation of its health state. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
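For illustration, a minimal sketch (Python, scikit-learn) of the kind of multi-class SVM fault classifier described above; the feature vectors, the four fault labels and all parameter values are placeholders, not the paper's HP-LNG pump data or settings.

    # Hedged sketch: multi-class fault diagnosis with an RBF-kernel SVM.
    # Features and labels are synthetic stand-ins for pump condition data.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))        # placeholder vibration features (e.g. RMS, band energies)
    y = rng.integers(0, 4, size=200)     # placeholder labels for four hypothetical fault classes

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # SVC handles multi-class problems with a one-vs-one scheme by default.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))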
Abstract:
A fast and accurate procedure has been researched and developed for the simultaneous determination of maltol and ethyl maltol, based on their reaction with iron(III) in the presence of o-phenanthroline in sulfuric acid medium. This reaction was the basis for an indirect kinetic spectrophotometric method, which followed the development of the pink ferroin product (λmax = 524 nm). The kinetic data were collected in the 370–900 nm range over 0–30 s. The optimized method indicates that individual analytes followed Beer’s law in the concentration range of 4.0–76.0 mg L−1 for both maltol and ethyl maltol. The LOD values of 1.6 mg L−1 for maltol and 1.4 mg L−1 for ethyl maltol agree well with those obtained by the alternative high performance liquid chromatography with ultraviolet detection (HPLC-UV). Three chemometrics methods, principal component regression (PCR), partial least squares (PLS) and principal component analysis–radial basis function–artificial neural networks (PC–RBF–ANN), were used to resolve the measured data with small kinetic differences between the two analytes as reflected by the development of the pink ferroin product. All three performed satisfactorily in the case of the synthetic verification samples, and in their application for the prediction of the analytes in several food products. The figures of merit for the analytes based on the multivariate models agreed well with those from the alternative HPLC-UV method involving the same samples.
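As an illustration of one of the three chemometric methods mentioned, a hedged sketch of resolving the two analytes with partial least squares (PLS) in Python/scikit-learn; the response matrix, pure-component profiles and noise level are synthetic placeholders, not the paper's kinetic data.

    # Sketch: PLS calibration for two overlapping responses (maltol, ethyl maltol).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    n_samples, n_channels = 40, 120                     # e.g. unfolded wavelength-time readings
    C = rng.uniform(4.0, 76.0, size=(n_samples, 2))     # concentrations within the stated Beer's-law range, mg/L
    S = rng.random((2, n_channels))                     # hypothetical pure-component kinetic profiles
    A = C @ S + rng.normal(scale=0.01, size=(n_samples, n_channels))  # synthetic mixture responses

    pls = PLSRegression(n_components=4)
    C_pred = cross_val_predict(pls, A, C, cv=5)         # cross-validated predictions
    rmsep = np.sqrt(((C_pred - C) ** 2).mean(axis=0))
    print("RMSEP (maltol, ethyl maltol):", rmsep)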
Abstract:
The application of object-based approaches to the problem of extracting vegetation information from images requires accurate delineation of individual tree crowns. This paper presents an automated method for individual tree crown detection and delineation by applying a simplified PCNN model in spectral feature space followed by post-processing using morphological reconstruction. The algorithm was tested on high-resolution multi-spectral aerial images and the results were compared with two existing image segmentation algorithms. The results demonstrate that our algorithm outperforms the other two solutions, with an average accuracy of 81.8%.
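A minimal sketch of the morphological-reconstruction post-processing step (the simplified PCNN stage is omitted), using scikit-image; the binary crown mask is a synthetic placeholder rather than output from aerial imagery.

    # Sketch: opening-by-reconstruction to clean a binary crown mask, removing
    # small spurious responses while preserving the shape of surviving crowns.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.morphology import reconstruction, erosion, disk

    rng = np.random.default_rng(2)
    mask = rng.random((256, 256)) > 0.995           # placeholder detector response
    mask = ndi.binary_dilation(mask, disk(5))       # blobs standing in for crowns

    mask_u8 = mask.astype(np.uint8)
    seed = erosion(mask_u8, disk(3))                # erode away thin or small structures
    cleaned = reconstruction(seed, mask_u8, method="dilation").astype(bool)

    labels, n_crowns = ndi.label(cleaned)
    print("crown candidates after post-processing:", n_crowns)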
Abstract:
Light Detection and Ranging (LIDAR) has great potential to assist vegetation management in power line corridors by providing more accurate geometric information about the power line assets and vegetation along the corridors. However, the development of algorithms for the automatic processing of LIDAR point cloud data, in particular for feature extraction and classification of raw point cloud data, is still in its infancy. In this paper, we take advantage of LIDAR intensity and classify ground and non-ground points by statistically analyzing the skewness and kurtosis of the intensity data. Moreover, the Hough transform is employed to detect power lines from the filtered object points. The experimental results show the effectiveness of our methods and indicate that better results were obtained by using LIDAR intensity data than elevation data.
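A hedged sketch of the two ingredients named above: skewness/kurtosis statistics on LIDAR intensity, followed by a Hough transform over the filtered points. The point data and the simple skewness-balancing loop are illustrative assumptions, not the authors' exact procedure.

    # Sketch: separate returns by intensity skewness, then look for straight-line
    # features (e.g. power lines) among the removed points with a Hough transform.
    import numpy as np
    from scipy.stats import skew, kurtosis
    from skimage.transform import hough_line, hough_line_peaks

    rng = np.random.default_rng(3)
    intensity = rng.gamma(shape=2.0, scale=20.0, size=5000)   # placeholder intensity values
    xy = rng.uniform(0, 100, size=(5000, 2))                  # placeholder planimetric coordinates

    keep = np.ones(intensity.size, dtype=bool)
    # Remove the brightest returns until the remaining distribution is roughly symmetric.
    for idx in np.argsort(intensity)[::-1]:
        if skew(intensity[keep]) <= 0:
            break
        keep[idx] = False
    print("kurtosis of retained returns:", kurtosis(intensity[keep]))

    # Rasterise the removed (candidate object) points and detect line candidates.
    grid, _, _ = np.histogram2d(xy[~keep, 0], xy[~keep, 1], bins=200)
    h, angles, dists = hough_line(grid > 0)
    accum, _, _ = hough_line_peaks(h, angles, dists)
    print("straight-line candidates:", len(accum))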
Abstract:
Heart disease is the leading cause of death in the world. Although this could be alleviated by heart transplantation, there is a chronic shortage of donor hearts and so mechanical solutions are being considered. Currently, many Ventricular Assist Devices (VADs) are being developed worldwide in an effort to increase life expectancy and quality of life for end-stage heart failure patients. Current pre-clinical testing methods for VADs involve laboratory testing using Mock Circulation Loops (MCLs), and in vivo testing in animal models. The research and development of highly accurate MCLs is vital to the continuous improvement of VAD performance. The first objective of this study was to develop and validate a mathematical model of an MCL. This model could then be used in the design and construction of a variable compliance chamber to improve the performance of an existing MCL, as well as form the basis for a new miniaturised MCL. An extensive review of the literature was carried out on MCLs and mathematical modelling of their function. A mathematical model of an MCL was then created in the MATLAB/SIMULINK environment. This model included variable features such as resistance, fluid inertia and volumes (resulting from the pipe lengths and diameters); compliance of Windkessel chambers, atria and ventricles; density of both fluid and compressed air applied to the system; gravitational effects on vertical columns of fluid; and accurately modelled actuators controlling the ventricle contraction. This model was then validated using the physical properties and pressure and flow traces produced from a previously developed MCL. A variable compliance chamber was designed to reproduce parameters determined by the mathematical model. The variability was achieved by controlling the transmural pressure across a diaphragm to alter the compliance of the system. An initial prototype was tested in a previously developed MCL, and a variable level of arterial compliance was successfully produced; however, the complete range of compliance values required for accurate physiological representation could not be produced with this initial design. The mathematical model was then used to design a smaller physical mock circulation loop, with the tubing sizes adjusted to produce accurate pressure and flow traces whilst having an appropriate frequency response characteristic. The development of the mathematical model greatly assisted the general design of an in vitro cardiovascular device test rig, while the variable compliance chamber allowed simple, real-time manipulation of MCL compliance and accurate transitions between a variety of physiological conditions. The newly developed MCL provided an accurate mechanical representation of the human circulatory system for in vitro cardiovascular device testing and education purposes. The continued improvement of VAD test rigs is essential if VAD design is to improve, and hence improve quality of life and life expectancy for heart failure patients.
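As a hedged illustration of the kind of lumped-parameter element such a mock circulation loop model is assembled from, a two-element Windkessel chamber (compliance plus peripheral resistance) driven by a prescribed pulsatile flow; the parameter values and inflow waveform are generic assumptions, not those identified in the thesis.

    # Sketch: two-element Windkessel chamber, dP/dt = (Q_in(t) - P/R) / C.
    import numpy as np
    from scipy.integrate import solve_ivp

    R = 1.0      # peripheral resistance, mmHg*s/mL   (assumed)
    C = 1.5      # chamber compliance, mL/mmHg        (assumed)
    HR = 75.0    # heart rate, beats per minute

    def q_in(t):
        """Half-sinusoid ejection during the first third of each cardiac cycle."""
        period = 60.0 / HR
        phase = (t % period) / period
        return 350.0 * np.sin(3.0 * np.pi * phase) if phase < 1.0 / 3.0 else 0.0

    def dP_dt(t, P):
        return (q_in(t) - P[0] / R) / C

    sol = solve_ivp(dP_dt, (0.0, 10.0), [80.0], max_step=1e-3)
    steady = sol.y[0][sol.t > 5.0]           # discard the initial transient
    print("pressure swing: %.1f to %.1f mmHg" % (steady.min(), steady.max()))

A full MCL model, as described above, chains several such chambers with resistances, fluid inertia, gravity terms and actuator-driven ventricles.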
Abstract:
There is little evidence that workshops alone have a lasting impact on the day-to-day practice of participants. The current paper examined a strategy to increase generalization and maintenance of skills in the natural environment, using pseudo-patients and immediate performance feedback to reinforce skills acquisition. A random half of pharmacies (N=30) took part in workshop training aimed at optimizing consumers' use of nonprescription analgesic products. Pharmacies in the training group also received performance feedback on their adherence to the recommended protocol. Feedback, which combined positive and corrective elements, occurred immediately after a pseudo-patient visit in which confederates posed as purchasers of analgesics. Trained pharmacists were significantly more accurate at identifying people who misused the medication (P<0.001). The trained pharmacists were more likely than controls to use open-ended questions (P<0.001), assess readiness to change problematic use (P<0.001), and to deliver a brief intervention tailored to the person's commitment to alter his/her usage (P<0.001). Participants responded to the feedback positively. Results were consistent with the hypothesis that combining workshop training with on-site performance feedback enhances practitioners' adherence to protocols in the natural setting.
Abstract:
Traffic congestion is an increasing problem with high costs in financial, social and personal terms. These costs include psychological and physiological stress, aggression and fatigue caused by lengthy delays, and increased likelihood of road crashes. Reliable and accurate traffic information is essential for the development of traffic control and management strategies. Traffic information is mostly gathered from in-road vehicle detectors such as induction loops. The Traffic Message Channel (TMC) service is a popular service that wirelessly sends traffic information to drivers. Traffic probes have been used in many cities to increase traffic information accuracy. A simulation to estimate the number of probe vehicles required to increase the accuracy of traffic information in Brisbane is proposed. A meso-level traffic simulator has been developed to facilitate the identification of the optimal number of probe vehicles required to achieve an acceptable level of traffic reporting accuracy. Our approach to determining the optimal number of probe vehicles required to meet quality-of-service requirements is to run simulations with varying numbers of traffic probes. The simulated traffic represents Brisbane's typical morning traffic. The road maps used in the simulation are Brisbane's TMC maps, complete with speed limits and traffic lights. Experimental results show that the optimal number of probe vehicles required to provide a useful supplement to TMC (induction loop) data lies between 0.5% and 2.5% of vehicles on the road. With fewer than 0.25% of vehicles acting as probes, little additional information is provided, while above 5%, adding more probes has only a negligible effect on accuracy. Our findings are consistent with ongoing research on traffic probes, and show the effectiveness of using probe vehicles to supplement induction loops for accurate and timely traffic information.
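To make the experimental design concrete, a toy Monte Carlo sketch of the core idea: sweep the probe-vehicle fraction and observe how link-speed estimation error and link coverage change. All numbers are illustrative assumptions and are unrelated to the Brisbane network or the meso-level simulator described above.

    # Sketch: effect of probe-vehicle fraction on speed-estimate accuracy and coverage.
    import numpy as np

    rng = np.random.default_rng(4)
    n_links, vehicles_per_link = 500, 200
    true_speed = rng.uniform(20, 60, size=n_links)            # km/h, per link

    for probe_fraction in [0.0025, 0.005, 0.01, 0.025, 0.05]:
        errors = []
        for link in range(n_links):
            speeds = rng.normal(true_speed[link], 8.0, size=vehicles_per_link)
            probes = speeds[rng.random(vehicles_per_link) < probe_fraction]
            if probes.size == 0:
                continue                                      # link unobserved at this density
            errors.append(abs(probes.mean() - true_speed[link]))
        print(f"{probe_fraction:6.2%}  coverage={len(errors) / n_links:4.0%}  "
              f"mean abs error={np.mean(errors):.2f} km/h")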
Theoretical and numerical investigation of plasmon nanofocusing in metallic tapered rods and grooves
Abstract:
Effective focusing of electromagnetic (EM) energy to nanoscale regions is one of the major challenges in nano-photonics and plasmonics. The strong localization of the optical energy into regions much smaller than allowed by the diffraction limit, also called nanofocusing, offers promising applications in nano-sensor technology, nanofabrication, near-field optics or spectroscopy. One of the most promising solutions to the problem of efficient nanofocusing is related to surface plasmon propagation in metallic structures. Metallic tapered rods, commonly used as probes in near field microscopy and spectroscopy, are of particular interest. They can provide very strong EM field enhancement at the tip due to surface plasmons (SPs) propagating towards the tip of the tapered metal rod. A large number of studies have been devoted to the manufacturing process of tapered rods or tapered fibers coated with a metal film. On the other hand, structures such as metallic V-grooves or metal wedges can also provide strong electric field enhancements, but manufacturing these structures is still a challenge. It has been shown, however, that the attainable electric field enhancement at the apex of the V-groove is higher than at the tip of a metal tapered rod when the dissipation level in the metal is strong. Metallic V-grooves also have very promising characteristics as plasmonic waveguides. This thesis will present a thorough theoretical and numerical investigation of nanofocusing during plasmon propagation along a metal tapered rod and into a metallic V-groove. Optimal structural parameters, including optimal taper angle, taper length and shape of the taper, are determined in order to achieve maximum field enhancement factors at the tip of the nanofocusing structure. An analytical investigation of plasmon nanofocusing by metal tapered rods is carried out by means of the geometric optics approximation (GOA), which is also called adiabatic nanofocusing. However, GOA is applicable only for analysing tapered structures with small taper angles and without a terminating tip structure, so that reflections can be neglected. Rigorous numerical methods are employed for analysing non-adiabatic nanofocusing by tapered rods and V-grooves with larger taper angles and with a rounded tip. These structures cannot be studied by analytical methods due to the presence of waves reflected from the taper section, the tip and the (artificial) computational boundaries. A new method is introduced that combines the advantages of GOA and rigorous numerical methods in order to significantly reduce the use of computational resources and yet achieve accurate results for the analysis of large tapered structures within reasonable calculation time. Detailed comparison between GOA and rigorous numerical methods will be carried out in order to find the critical taper angle of the tapered structures at which GOA is still applicable. It will be demonstrated that the optimal taper angles, at which maximum field enhancements occur, coincide with the critical angles at which GOA is still applicable. It will be shown that the applicability of GOA can thereby be substantially expanded to include structures which could previously be analysed only by numerical methods. The influence of the rounded tip, the taper angle and dissipation on the plasmon field distribution along the tapered rod and near the tip will be analysed analytically and numerically in detail.
It will be demonstrated that electric field enhancement factors of up to ~2500 within nanoscale regions are predicted. These are sufficient, for instance, to detect single molecules using surface enhanced Raman spectroscopy (SERS) with the tip of a tapered rod, an approach also known as tip enhanced Raman spectroscopy (TERS). The results obtained in this project will be important for applications in which strong local field enhancement is crucial to performance, such as near field microscopy and spectroscopy. The optimal design of nanofocusing structures, for which the delivery of electromagnetic energy to the nanometer region is most efficient, will lead to new applications in near field sensors, near field measuring technology, and the generation of nanometer-sized energy sources. These include applications in tip enhanced Raman spectroscopy (TERS); manipulation of nanoparticles and molecules; efficient coupling of optical energy into and out of plasmonic circuits; second harmonic generation in non-linear optics; and delivery of energy to quantum dots, for instance for quantum computations.
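For reference, the applicability condition behind the geometric optics (adiabatic) approximation is commonly stated as an adiabaticity criterion: the local plasmon wavenumber must vary slowly along the taper so that reflections can be neglected. A hedged rendering in standard notation, where k_p(z) is the local plasmon wavenumber at position z along the taper axis:

    % Adiabatic (GOA) applicability criterion for plasmon nanofocusing
    \[
      \delta(z) \;=\; \left| \frac{d}{dz}\!\left( \frac{1}{k_p(z)} \right) \right| \;\ll\; 1
    \]

Where delta(z) approaches unity, as at larger taper angles, reflected waves become significant and rigorous numerical methods such as those used in the thesis are required.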
Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches also have to deal with low-frequency pattern issues. The measures used by data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering, and they can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and pattern mining as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: topic filtering and pattern mining. The topic filtering stage is intended to minimize information overloading by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, with the most likely relevant documents assigned higher scores by the ranking function. Because a relatively small number of documents is left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms other IF systems, such as the traditional Rocchio IF model, the state-of-the-art term-based models, including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
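A heavily simplified sketch of the second-stage idea, ranking documents by the weights of discovered patterns (term sets) they contain; the patterns, weights and documents are invented for illustration, and this is not the thesis's actual ranking function or the PTM algorithm.

    # Sketch: score documents by summing the weights of patterns they fully contain.
    from typing import Dict, FrozenSet, List

    # Hypothetical patterns with weights learned from relevant training documents.
    patterns: Dict[FrozenSet[str], float] = {
        frozenset({"engine", "failure"}): 0.9,
        frozenset({"pump", "vibration"}): 0.7,
        frozenset({"maintenance"}): 0.3,
    }

    def score(document: str) -> float:
        """Sum the weight of every pattern whose terms all occur in the document."""
        terms = set(document.lower().split())
        return sum(w for p, w in patterns.items() if p <= terms)

    docs: List[str] = [
        "routine maintenance schedule for the plant",
        "pump vibration preceded the engine failure event",
        "weather report for tomorrow",
    ]
    for d in sorted(docs, key=score, reverse=True):
        print(f"{score(d):.2f}  {d}")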
Abstract:
We developed orthogonal least-squares techniques for fitting crystalline lens shapes, and used the bootstrap method to determine the uncertainties associated with the estimated vertex radii of curvature and asphericities of five different models. Three existing models were investigated, including one that uses two separate conics for the anterior and posterior surfaces, and two whole-lens models based on a modulated hyperbolic cosine function and on a generalized conic function. Two new models were proposed: one that uses two interdependent conics and a polynomial-based whole-lens model. The models were used to describe the in vitro shape for a data set of twenty human lenses with ages 7–82 years. The two-conic-surface model (7 mm zone diameter) and the interdependent-surfaces model had significantly lower merit functions than the other three models for the data set, indicating that they can most likely describe human lens shape over a wide age range better than the other models (although the two-conic-surface model is unable to describe the lens equatorial region). Considerable differences were found between some models regarding estimates of radii of curvature and surface asphericities. The hyperbolic cosine model and the new polynomial-based whole-lens model had the best precision in determining the radii of curvature and surface asphericities across the five considered models. Most models found a significant increase in anterior, but not posterior, radius of curvature with age. Most models found a wide scatter of asphericities, with the asphericities usually being positive and not significantly related to age. As the interdependent-surfaces model had a lower merit function than the three whole-lens models, there is further scope to develop an accurate model of the complete shape of human lenses of all ages. The results highlight the continued difficulty in selecting an appropriate model for the crystalline lens shape.
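A hedged sketch of one ingredient of such an analysis: fitting the vertex radius of curvature R and asphericity Q of a conic surface to profile points, with a bootstrap estimate of parameter uncertainty. The profile data are synthetic, and an ordinary least-squares sag fit is used here rather than the orthogonal-distance formulation of the paper.

    # Sketch: conic sag fit (R, Q) with bootstrap uncertainty estimates.
    import numpy as np
    from scipy.optimize import curve_fit

    def conic_sag(y, R, Q):
        """Conicoid sag: z = y^2 / (R * (1 + sqrt(1 - (1 + Q) * y^2 / R^2)))."""
        root = np.sqrt(np.maximum(1.0 - (1.0 + Q) * y**2 / R**2, 1e-12))  # guard the domain
        return y**2 / (R * (1.0 + root))

    rng = np.random.default_rng(5)
    y = np.linspace(-3.0, 3.0, 80)                                   # mm, ~6 mm zone
    z = conic_sag(y, R=10.0, Q=-0.5) + rng.normal(0, 0.01, y.size)   # synthetic noisy profile

    popt, _ = curve_fit(conic_sag, y, z, p0=(9.0, 0.0))
    boot = []
    for _ in range(500):                                             # resample points with replacement
        idx = rng.integers(0, y.size, y.size)
        p, _ = curve_fit(conic_sag, y[idx], z[idx], p0=popt)
        boot.append(p)
    boot = np.asarray(boot)
    print("R = %.3f +/- %.3f mm, Q = %.3f +/- %.3f"
          % (popt[0], boot[:, 0].std(), popt[1], boot[:, 1].std()))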
Abstract:
Purpose: To investigate whether wearing different presbyopic vision corrections alters the pattern of eye and head movements when viewing and responding to driving-related traffic scenes. Methods: Participants included 20 presbyopes (mean age: 56.1 ± 5.7 years) who had no experience of wearing presbyopic vision corrections, apart from single vision (SV) reading spectacles. Each participant wore five different vision corrections: distance SV lenses, progressive addition spectacle lenses (PAL), bifocal spectacle lenses (BIF), monovision (MV) and multifocal contact lenses (MTF CL). For each visual condition, participants were required to view videotape recordings of traffic scenes, track a reference vehicle, and identify a series of peripherally presented targets. Digital numerical display panels were also included as near visual stimuli (simulating the visual displays of a vehicle speedometer and radio). Eye and head movements were measured, and the accuracy of target recognition was also recorded. Results: The path length of eye movements while viewing and responding to driving-related traffic scenes was significantly longer when wearing BIF and PAL than MV and MTF CL (both p ≤ 0.013). The path length of head movements was greater with SV, BIF, and PAL than MV and MTF CL (all p < 0.001). Target recognition and brake response times were not significantly affected by vision correction, whereas target recognition was less accurate when the near stimulus was located at eccentricities inferiorly and to the left, rather than directly below the primary position of gaze (p = 0.008), regardless of vision correction. Conclusions: Different presbyopic vision corrections alter eye and head movement patterns. The longer path length of eye and head movements and greater number of saccades associated with the spectacle presbyopic corrections may affect some aspects of driving performance.
Abstract:
In this study, the authors propose a novel video stabilisation algorithm for mobile platforms with moving objects in the scene. The quality of videos obtained from mobile platforms, such as unmanned airborne vehicles, suffers from jitter caused by several factors. In order to remove this undesired jitter, the accurate estimation of global motion is essential. However, it is difficult to estimate global motion accurately from mobile platforms due to increased estimation errors and noise. Additionally, large moving objects in the video scenes contribute to the estimation errors. Currently, only very few motion estimation algorithms have been developed for video scenes collected from mobile platforms, and this paper shows that these algorithms fail when there are large moving objects in the scene. In this study, a theoretical proof is provided which demonstrates that the use of delta optical flow can improve the robustness of video stabilisation in the presence of large moving objects in the scene. The authors also propose using sorted arrays of local motions and the selection of feature points to separate outliers from inliers. The proposed algorithm is tested on six video sequences, collected from one fixed platform, four mobile platforms and one synthetic video, of which three contain large moving objects. Experiments show that our proposed algorithm performs well on all these video sequences.
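As a simplified illustration of the general principle (robust global-motion estimation that rejects local motions belonging to moving objects), a sketch that tracks corners with sparse optical flow and takes a trimmed mean over the sorted local motion vectors; this uses OpenCV and is not the authors' delta-optical-flow algorithm.

    # Sketch: robust global translation between two frames from sorted local motions.
    import numpy as np
    import cv2

    def global_motion(prev_gray, curr_gray, keep_central=0.5):
        """Track corners with pyramidal Lucas-Kanade flow, return a trimmed-mean (dx, dy)."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400, qualityLevel=0.01, minDistance=8)
        if pts is None:
            return 0.0, 0.0
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
        flow = (nxt - pts).reshape(-1, 2)[status.reshape(-1) == 1]
        dxs, dys = np.sort(flow[:, 0]), np.sort(flow[:, 1])   # sorted arrays of local motions
        lo = int(len(dxs) * (1.0 - keep_central) / 2)          # trim the tails as outliers
        hi = len(dxs) - lo
        return float(dxs[lo:hi].mean()), float(dys[lo:hi].mean())

    # Usage with two synthetic textured frames, the second shifted ~3 px horizontally.
    rng = np.random.default_rng(6)
    a = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (7, 7), 0)
    b = np.roll(a, 3, axis=1)
    print(global_motion(a, b))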
Abstract:
This paper proposes a novel Hybrid Clustering approach for XML documents (HCX) that first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. The empirical analysis reveals that the proposed method is scalable and accurate.
Abstract:
Public transportation is an environment with great potential for applying location-based services through mobile devices. The BusTracker study is looking at how real-time passenger information systems can provide a core platform to improve commuters’ experiences. These systems rely on mobile computing and GPS technology to provide accurate information on transport vehicle locations. BusTracker builds on this mobile computing platform and geospatial information. The pilot study is running on the open source BugLabs computing platform, using a GPS module for accurate location information.
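A hedged sketch of how a location fix might be decoded from such a GPS module, parsing a standard NMEA GGA sentence into decimal degrees; the example sentence is fabricated, and no specific BugLabs API is assumed.

    # Sketch: parse an NMEA "GGA" sentence into latitude/longitude in decimal degrees.
    def nmea_to_degrees(value: str, hemisphere: str) -> float:
        """Convert NMEA ddmm.mmmm (or dddmm.mmmm) plus hemisphere into signed decimal degrees."""
        head, frac = value.split(".")
        degrees = int(head[:-2])
        minutes = float(head[-2:] + "." + frac)
        result = degrees + minutes / 60.0
        return -result if hemisphere in ("S", "W") else result

    def parse_gga(sentence: str):
        fields = sentence.split(",")
        lat = nmea_to_degrees(fields[2], fields[3])
        lon = nmea_to_degrees(fields[4], fields[5])
        fix_quality, satellites = int(fields[6]), int(fields[7])
        return lat, lon, fix_quality, satellites

    example = "$GPGGA,021537.00,2724.6210,S,15301.5812,E,1,08,0.9,32.0,M,41.0,M,,"
    print(parse_gga(example))   # roughly Brisbane: (-27.41..., 153.02..., 1, 8)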