988 results for Pointing deviation


Relevance: 10.00%

Abstract:

Purpose: The aim of this study was to determine current approaches adopted by optometrists to the recording of corneal staining following fluorescein instillation. Methods: An anonymous ‘record-keeping task’ was sent to all 756 practitioners who are members of the Queensland Division of Optometrists Association Australia. This task comprised a form on which appeared a colour photograph depicting contact lens solution-induced corneal staining. Next to the photograph was an empty box, in which practitioners were asked to record their observations. Practitioners were also asked to indicate the level of severity of the condition at which treatment would be instigated. Results: Completed task forms were returned by 228 optometrists, representing a 30 per cent response rate. Ninety-two per cent of respondents offered a diagnosis. The most commonly used descriptive terms were ‘superficial punctate keratitis’ (36 per cent of respondents) and ‘punctate staining’ (29 per cent). The level of severity and location of corneal staining were noted by 69 and 68 per cent of respondents, respectively. A numerical grade was assigned by 44 per cent of respondents. Only three per cent nominated the grading scale used. The standard deviation of assigned grades was ±0.6. The condition was sketched by 35 per cent of respondents and two per cent stated that they would take a photograph of the eye. Ten per cent noted the eye in which the condition was being observed. Opinions of the level of severity at which treatment for corneal staining should be instigated varied considerably between practitioners, ranging from ‘any sign of corneal staining’ to ‘grade 4 staining’. Conclusion: Although most practitioners made a sensible note of the condition and properly recorded the location of corneal staining, serious deficiencies were evident regarding other aspects of record-keeping. Ongoing programs of professional optometric education should reinforce good practice in relation to clinical record-keeping.

Relevance: 10.00%

Abstract:

For several reasons, the Fourier phase domain is less favored than the magnitude domain in signal processing and modeling of speech. To analyze the phase correctly, several factors must be considered and compensated for, including the effects of the step size, windowing function and other processing parameters. Building on a review of these factors, this paper investigates a spectral representation based on the Instantaneous Frequency Deviation, but in which phase changes are calculated over the step size between processing frames rather than over the traditional single-sample interval. Reflecting these longer intervals, the term delta-phase spectrum is used to distinguish this representation from instantaneous derivatives. Experiments show that mel-frequency cepstral coefficient features derived from the delta-phase spectrum (termed Mel-Frequency delta-phase features) can produce broadly similar performance to equivalent magnitude-domain features for both voice activity detection and speaker recognition tasks. Further, it is shown that the fusion of the magnitude and phase representations yields performance benefits over either representation in isolation.
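As a concrete illustration of the idea, a minimal numpy sketch of a delta-phase computation follows; the function name, framing parameters and compensation step are illustrative assumptions, not the paper's implementation. Phase differences are taken between frames one step apart and compensated for the linear phase advance that the frame shift alone would produce.

```python
import numpy as np

def delta_phase_spectrum(x, frame_len=512, step=128):
    """Sketch: per-bin phase change between frames separated by the
    analysis step size (not a single sample)."""
    win = np.hanning(frame_len)
    n_frames = (len(x) - frame_len) // step + 1
    frames = np.stack([x[i*step : i*step + frame_len] * win
                       for i in range(n_frames)])
    phase = np.angle(np.fft.rfft(frames, axis=1))
    # Phase advance expected from the frame shift alone, for bin k:
    # 2*pi*k*step/frame_len (compensating the step-size effect).
    k = np.arange(phase.shape[1])
    expected = 2 * np.pi * k * step / frame_len
    dphi = np.diff(phase, axis=0) - expected
    # Wrap to (-pi, pi] so deviations are comparable across bins.
    return np.angle(np.exp(1j * dphi))
```

MFCC-style features could then be derived from this representation by applying a mel filterbank and DCT, mirroring the magnitude-domain pipeline.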

Relevance: 10.00%

Abstract:

Background: It remains unclear whether a spatiotemporal epidemic prediction model can be developed for cryptosporidiosis. This paper examined the impact of socioeconomic and weather factors on cryptosporidiosis and explored the possibility of developing such a model using socioeconomic and weather data in Queensland, Australia.

Methods: Data on weather variables, notified cryptosporidiosis cases and socioeconomic factors in Queensland were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Three-stage spatiotemporal classification and regression tree (CART) models were developed to examine the association between socioeconomic and weather factors and the monthly incidence of cryptosporidiosis in Queensland, Australia. The spatiotemporal CART model was used to predict outbreaks of cryptosporidiosis in Queensland.

Results: The classification tree model (with incidence rates defined as binary presence/absence) showed an 87% chance of an occurrence of cryptosporidiosis in a local government area (LGA) if the socio-economic index for the area (SEIFA) exceeded 1021. The regression tree model (based on non-zero incidence rates) showed that when SEIFA was between 892 and 945 and temperature exceeded 32°C, the relative risk (RR) of cryptosporidiosis was 3.9 (mean morbidity: 390.6/100,000, standard deviation (SD): 310.5) compared to the monthly average incidence of cryptosporidiosis, and that when SEIFA was less than 892 the RR was 4.3 (mean morbidity: 426.8/100,000, SD: 319.2). A prediction map for cryptosporidiosis outbreaks was made from the outputs of the spatiotemporal CART models.

Conclusions: The results of this study suggest that spatiotemporal CART models based on socioeconomic and weather variables can be used to predict outbreaks of cryptosporidiosis in Queensland, Australia.
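For illustration only, the headline splits quoted above can be written down as a simple rule set. This is a hand-coded sketch of the reported leaves, not the fitted three-stage CART model; the function name and return structure are assumptions.

```python
def cryptosporidiosis_risk(seifa, temp_c):
    """Encode the headline splits reported from the CART models
    (illustrative only; the fitted trees had more structure)."""
    if seifa < 892:
        return {"relative_risk": 4.3}      # regression tree leaf
    if 892 <= seifa <= 945 and temp_c > 32:
        return {"relative_risk": 3.9}      # regression tree leaf
    if seifa > 1021:
        return {"occurrence_prob": 0.87}   # classification tree leaf
    return {}                              # no headline rule reported
```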

Relevance: 10.00%

Abstract:

Maps are used to represent three-dimensional space and are integral to a range of everyday experiences. They are increasingly used in mathematics, being prominent both in school curricula and as a form of assessing students' understanding of mathematical ideas. To interpret maps successfully, students need to understand that maps represent space, have their own perspective and scale, and use their own sets of symbols and texts. Despite the increased prevalence of maps in society and school, there is evidence to suggest that students have difficulty interpreting them.

This study investigated 43 primary-aged students' (aged 9-12 years) verbal and gestural behaviours as they engaged with and solved map tasks. Within a multiliteracies framework that focuses on spatial, visual, linguistic, and gestural elements, the study investigated how students interpret map tasks. Specifically, it sought to understand the skills and approaches students used to solve map tasks and the gestural behaviours they exhibited as they engaged with them. The investigation was undertaken using the Knowledge Discovery in Data (KDD) design, which capitalised on existing research data to carry out a more detailed analysis of students' interpretation of map tasks. Video data from an existing data set were reorganised according to two distinct episodes, Task Solution and Task Explanation, and analysed within the multiliteracies framework. Content analysis was used with these data, and through anticipatory data reduction techniques, patterns of behaviour were identified in relation to each map task by examining task solution, task correctness and gesture use.

The findings revealed that students had a relatively sound understanding of general mapping knowledge, such as identifying landmarks and using keys, compass points and coordinates. However, their understanding of mathematical concepts pertinent to map tasks, including location, direction, and movement, was less developed. Successful students were able to interpret the map tasks and apply relevant mathematical understanding to navigate the spatial demands of the tasks, while unsuccessful students were only able to interpret and understand basic map conventions. In terms of gesture use, the more difficult the task, the more likely students were to exhibit gestural behaviours to solve it. The most common form of gestural behaviour was the deictic, or pointing, gesture. Deictic gestures not only aided students' capacity to explain how they solved the map tasks but also served as a tool that helped them navigate and monitor their spatial movements when solving the tasks.

A number of implications for theory, learning and teaching, and test and curriculum design arise from the study. From a theoretical perspective, the findings suggest that gesturing is an important element of multimodal engagement in mapping tasks. In terms of teaching and learning, implications include the need for students to utilise gesturing techniques when first faced with new or novel map tasks; as students become more proficient in solving such tasks, they should be encouraged to move beyond a reliance on gesture in order to progress to more sophisticated understandings of map tasks. Additionally, teachers need to provide students with opportunities to interpret and attend to multiple modes of information when interpreting map tasks.

Relevance: 10.00%

Abstract:

For many decades, correlation and the power spectrum have been the primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence, which is sufficient for a complete statistical description of Gaussian signals of known mean. However, there are practical situations where one needs to look beyond the autocorrelation of a signal to extract information regarding deviations from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e., moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in a signal. Most biomedical signals are non-linear, non-stationary and non-Gaussian in nature, and it can therefore be more advantageous to analyze them with HOS than with second order correlations and power spectra. In this paper we discuss the application of HOS to different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal, and applications to other signals are reviewed.
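As a hedged sketch of what an HOS analysis computes, the snippet below estimates the bispectrum (the third-order polyspectrum) by FFT segment averaging; the function name and parameters are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def bispectrum(x, nfft=256, step=128):
    """Direct (FFT-averaging) bispectrum estimate:
    B(f1, f2) = E[X(f1) X(f2) conj(X(f1+f2))].
    Nonzero structure indicates non-Gaussianity and phase coupling
    that the power spectrum cannot reveal."""
    win = np.hanning(nfft)
    # Index of the sum frequency (f1 + f2) for every bin pair.
    idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
    n_seg = (len(x) - nfft) // step + 1
    B = np.zeros((nfft, nfft), dtype=complex)
    for i in range(n_seg):
        seg = x[i*step : i*step + nfft]
        X = np.fft.fft((seg - seg.mean()) * win)
        B += np.outer(X, X) * np.conj(X[idx])
    return B / n_seg
```

For a Gaussian signal the expected bispectrum is flat at zero, so peaks in this estimate flag the deviations from Gaussianity the abstract describes.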

Relevance: 10.00%

Abstract:

Objective: To assemble expected values for free-living steps/day in special populations living with chronic illnesses and disabilities. Method: Studies identified since 2000 were categorized into similar illnesses and disabilities, capturing the original reference, sample descriptions, descriptions of the instruments used (i.e., pedometers, piezoelectric pedometers, accelerometers), number of days worn, and mean and standard deviation of steps/day. Results: Sixty unique studies represented: 1) heart and vascular diseases, 2) chronic obstructive lung disease, 3) diabetes and dialysis, 4) breast cancer, 5) neuromuscular diseases, 6) arthritis, joint replacement, and fibromyalgia, 7) disability (including mental retardation/intellectual difficulties), and 8) other special populations. A median steps/day was calculated for each category. Waist-mounted and ankle-mounted instruments were considered separately due to fundamental differences in assessment properties. For waist-mounted instruments, the lowest median values for steps/day were found in disabled older adults (1214 steps/day), followed by people living with COPD (2237 steps/day). The highest values were seen in individuals with Type 1 diabetes (8008 steps/day), mental retardation/intellectual disability (7787 steps/day), and HIV (7545 steps/day). Conclusion: This review will be useful to researchers and practitioners who work with individuals living with chronic illness and disability and who require such information for surveillance, screening, intervention, and program evaluation purposes. Keywords: Exercise; Walking; Ambulatory monitoring
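The tabulation described here amounts to a group-by-median over the extracted study values. A minimal pandas sketch follows; the rows are entirely hypothetical, standing in for the many study entries the review pooled to obtain its published medians.

```python
import pandas as pd

# Hypothetical study rows in the shape the review describes; the
# published medians (e.g., 2237 steps/day for waist-mounted COPD
# studies) pooled many such entries per category.
studies = pd.DataFrame({
    "category": ["COPD", "COPD", "Type 1 diabetes", "disabled older adults"],
    "mount": ["waist", "waist", "waist", "waist"],
    "mean_steps_day": [2100, 2400, 7900, 1300],
})

# Waist- and ankle-mounted instruments are pooled separately due to
# their different assessment properties.
medians = studies.groupby(["mount", "category"])["mean_steps_day"].median()
print(medians)
```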

Relevance: 10.00%

Abstract:

Bone loss may result from remodelling initiated by implant stress protection. Quantifying remodelling requires bone density distributions, which can be obtained from computed tomography (CT) scans. Pre-operative scans of large animals, however, are rarely possible. This study aimed to determine whether the contra-lateral bone is a suitable control for the purpose of quantifying bone remodelling. CT scans of eight pairs of ovine tibiae were used to determine the likeness of left and right bones. The deviation between the outer surfaces of the bone pairs was used to quantify geometric similarity. Density differences were determined by dividing the bones into discrete volumes along the shaft of the tibia. Density differences were also determined for fractured and contra-lateral bone pairs to determine the magnitude of implant-related remodelling. Left and right ovine tibiae were found to have a high degree of similarity, with differences of less than 1.0 mm in outer surface deviation and density differences of less than 5% in over 90% of the shaft region. The density differences (10–40%) resulting from implant-related bone remodelling were greater than the left-right differences. Therefore, for the purpose of quantifying bone remodelling in sheep, the contra-lateral tibia may be considered an alternative to a pre-operative control.
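A minimal sketch of the two comparisons described, assuming segmented surface point clouds and matched shaft volumes are already extracted from the CT scans (function names and inputs are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(left_pts, right_pts):
    """Distance from each left-surface vertex to the nearest
    right-surface vertex, after mirroring and registration
    (assumed done upstream). Inputs: (N, 3) arrays."""
    tree = cKDTree(right_pts)
    d, _ = tree.query(left_pts)
    return d  # summarize with e.g. d.mean(), np.percentile(d, 95)

def density_difference_pct(left_density, right_density):
    """Relative density difference (%) between matched discrete
    volumes along the shaft."""
    left = np.asarray(left_density, dtype=float)
    right = np.asarray(right_density, dtype=float)
    return (left - right) / right * 100.0
```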

Relevance: 10.00%

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for assessing tear film quality, which are used by most clinicians, have been criticized for a lack of reliability and/or repeatability, and the range of non-invasive methods that has been investigated also presents limitations. Hence no "gold standard" test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis.

In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and its reflection from the ocular surface imaged on a charge-coupled device (CCD). The light is reflected at the outermost layer of the cornea, the tear film; hence, when the tear film is smooth, the reflected image presents a well-structured pattern, whereas when the tear film surface presents irregularities, the pattern also becomes irregular owing to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics.

A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time-series estimate of TFSQ from the video recording. The routines extract from each frame of the video recording a maximized area of analysis, within which a metric of TFSQ is calculated. Initially, two metrics, based on Gabor filtering and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped demonstrate the applicability of HSV to assessing the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and to show a clear difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques: lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each non-invasive method to discriminate dry eye from normal subjects was also investigated, with receiver operating characteristic (ROC) curves calculated to assess each method's ability to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; DWS did not perform as well as LSI or HSV.

The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into an image of quasi-straight lines from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves.

Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations, with special interest in the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase offered insight into the dynamics of this initial phase.

Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, these techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods, so a set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to selecting an appropriate model order to ensure the true derivative of the signal is accurately represented.

The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality in the future.
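A sketch of a polar-transform block metric along the lines described is given below, assuming a grayscale Placido image with a known ring-pattern centre; the names, parameters and the choice of within-block standard deviation as the statistic are illustrative assumptions, not the thesis's exact routine.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_block_metric(img, center, r_max, n_r=128, n_theta=360, block=16):
    """Cartesian-to-polar remap of a Placido-ring image followed by
    block statistics: a smooth tear film yields near-straight lines
    and low within-block variability."""
    cy, cx = center
    r = np.linspace(1, r_max, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    # Sample the image along polar coordinates (rows = radius,
    # columns = angle), straightening the concentric rings.
    coords = np.stack([cy + R * np.sin(T), cx + R * np.cos(T)])
    polar = map_coordinates(img.astype(float), coords, order=1)
    # Average within-block standard deviation as the TFSQ value.
    h, w = (n_r // block) * block, (n_theta // block) * block
    blocks = polar[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3)).mean()
```

Evaluating this per frame yields the kind of TFSQ time series the thesis models, with larger values indicating a more disturbed reflected pattern.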

Relevance: 10.00%

Abstract:

Traffic oscillations are typical features of congested traffic flow that are characterized by recurring decelerations followed by accelerations (stop-and-go driving). The negative environmental impacts of these oscillations are widely accepted, but their impact on traffic safety has been debated. This paper describes the impact of freeway traffic oscillations on traffic safety. This study employs a matched case-control design using high-resolution traffic and crash data from a freeway segment. Traffic conditions prior to each crash were taken as cases, while traffic conditions during the same periods on days without crashes were taken as controls. These were also matched by presence of congestion, geometry and weather. A total of 82 cases and about 80,000 candidate controls were extracted from more than three years of data from 2004 to 2007. Conditional logistic regression models were developed based on the case-control samples. To verify consistency in the results, 20 different sets of controls were randomly extracted from the candidate pool for varying control-case ratios. The results reveal that the standard deviation of speed (thus, oscillations) is a significant variable, with an average odds ratio of about 1.08. This implies that the likelihood of a (rear-end) crash increases by about 8% with an additional unit increase in the standard deviation of speed. The average traffic states prior to crashes were less significant than the speed variations in congestion.
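Since conditional logistic regression is log-linear in the covariates, the reported odds ratio compounds multiplicatively with the size of the change. A small sketch of the implied arithmetic (the function name is assumed for illustration):

```python
def crash_odds_multiplier(delta_speed_sd, odds_ratio=1.08):
    """Under the reported conditional logistic model, crash odds
    scale by OR**k for a k-unit rise in the standard deviation
    of speed."""
    return odds_ratio ** delta_speed_sd

print(crash_odds_multiplier(1))  # 1.08 -> ~8% higher odds
print(crash_odds_multiplier(5))  # ~1.47 -> ~47% higher odds
```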

Relevance: 10.00%

Abstract:

Variable Speed Limits (VSL) are a control tool of Intelligent Transportation Systems (ITS) that can enhance traffic safety and has the potential to contribute to traffic efficiency. This study presents the results of a calibration and operational analysis of a candidate VSL algorithm for high-flow conditions on an urban motorway in Queensland, Australia. The analysis was done using a framework consisting of a microscopic simulation model combined with a runtime API and a proposed efficiency index. The operational analysis covers impacts on the speed-flow curve, travel time, speed deviation, fuel consumption and emissions.

Relevance: 10.00%

Abstract:

Two-stroke outboard boat engines using total-loss lubrication deposit a significant proportion of their lubricant and fuel directly into the water. The purpose of this work is to document the velocity and concentration field characteristics of a submerged swirling water jet emanating from a propeller, in order to provide information on its fundamental characteristics. The properties of the jet were examined far enough downstream to be relevant to the eventual modelling of the mixing problem. Measurements of the velocity and concentration fields were performed in a turbulent jet generated by a model boat propeller (0.02 m diameter) operating at 1500 rpm and 3000 rpm in a weak co-flow of 0.04 m/s. The measurements were carried out in the Zone of Established Flow, up to 50 propeller diameters downstream of the propeller, which was placed in a glass-walled flume 0.4 m wide with a free-surface depth of 0.15 m. The development of the jet and scalar plume was compared to that of a classical free round jet. Further, results pertaining to radial distribution, self-similarity, standard deviation growth, maximum value decay and integral fluxes of velocity and concentration are presented and fitted with empirical correlations. Finally, propeller-induced mixing and the pollutant source concentration from a two-stroke engine were estimated.
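For reference, the classical free round jet against which the measurements were compared is commonly described by self-similar correlations of the following form (standard textbook relations, not the fitted constants from this work):

\[
\frac{U_c(x)}{U_0} \;\propto\; \frac{D}{x - x_0}, \qquad
\frac{u(x,r)}{U_c(x)} \;=\; \exp\!\left(-\frac{r^2}{2\,\sigma^2(x)}\right), \qquad
\sigma(x) \;\propto\; (x - x_0),
\]

where \(U_c\) is the centreline velocity, \(D\) the source (here, propeller) diameter and \(x_0\) a virtual origin; the scalar concentration field follows an analogous centreline decay and Gaussian radial spread.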

Relevance: 10.00%

Abstract:

This paper proposes that a survey of representations of men not-kissing men in recent television drama series makes clear a particularly hysterical fascination. While the Australian program GP has managed to produce a banal representation of two men kissing, American equivalents have resorted to a series of strategies which make insistently clear not only that men cannot kiss, but that the act of not-kissing must be repeatedly displayed. By refusing to have lovers kiss; by having lovers kiss but refusing to show the act; by having gay lovers, but having one played by a woman; by having men kiss but rendering the act ridiculous: in these ways, American television programs make clear the importance of this act by consistently pointing towards it and declaring its impossibility. This paper calls for the justice of equal access to public images of kissing.

Relevance: 10.00%

Abstract:

In this paper we consider large cooperative communication systems in which terminals use the slotted amplify-and-forward protocol to aid the source in its transmission. Using perturbation expansion methods for resolvents and large deviation techniques, we obtain an expression for the Stieltjes transform of the asymptotic eigenvalue distribution of a sample covariance random matrix of the type HH†, where H is the channel matrix of the transmission model for this protocol. We prove that the resulting expression is similar to the Stieltjes transform, in its quadratic-equation form, of the Marcenko-Pastur distribution.
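For reference, the Marcenko-Pastur law with unit variance and aspect ratio \(y\) has a Stieltjes transform \(m(z)\) satisfying the quadratic equation below (a standard result, stated under the convention \(m(z) = \int (\lambda - z)^{-1}\, dF(\lambda)\)); the paper's expression is shown to take an analogous quadratic form:

\[
y\,z\,m^2(z) + (z + y - 1)\,m(z) + 1 = 0 .
\]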

Relevance: 10.00%

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
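In the notation standard for this literature (a sketch of the setup, not the paper's exact statement), the method selects

\[
\hat f_k = \operatorname*{arg\,min}_{f \in \mathcal{F}_k} \hat R_n(f), \qquad
\hat k = \operatorname*{arg\,min}_{k} \Big[\, \hat R_n(\hat f_k) + \mathrm{pen}_n(k) \,\Big],
\]

where \(\hat R_n\) is the empirical risk over models \(\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots\), and the oracle inequality described asserts a bound of the shape

\[
R(\hat f_{\hat k}) \;\le\; \min_k \Big[ \inf_{f \in \mathcal{F}_k} R(f) + C\,\mathrm{pen}_n(k) \Big],
\]

with \(R\) the true risk and \(C\) a constant.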

Relevance: 10.00%

Abstract:

Ocean processes are dynamic, complex, and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were continually provided by fixed sensors, e.g., moorings, or gathered from ships. Recently, an increase in the utilization of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path following accuracy and sampling resolution for a given region of interest, which addresses a pressing need among ocean scientists to efficiently and effectively collect high-value data. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together give informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximize the information value along the path, while minimizing deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be repeatedly traversed to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show significant improvements in data resolution and path reliability compared to previously executed sampling paths used in the respective regions.
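A minimal sketch of a blended path cost of the kind described, assuming an information-value map and a current-induced deviation estimate are available as functions of position (all names and the linear blend are illustrative assumptions, not the paper's algorithm):

```python
def path_cost(waypoints, info_value, current_deviation, alpha=0.5):
    """Blend the two competing objectives: maximize information
    gathered along the path while minimizing predicted deviation
    due to ocean currents. alpha sets the trade-off; lower cost
    is better. info_value and current_deviation map an (x, y)
    position to a scalar."""
    info = sum(info_value(p) for p in waypoints)
    dev = sum(current_deviation(p) for p in waypoints)
    return -alpha * info + (1 - alpha) * dev

# Candidate closed circuits could then be ranked, e.g.:
# best = min(candidate_loops, key=lambda wp: path_cost(wp, I, D))
```

Speed control along the chosen circuit would then slow the glider (via pitch angle) where info_value is high, increasing sampling resolution exactly where the data are most valuable.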