Abstract:
The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs thanks to early error detection. This is just as true from a software engineering point of view, where models facilitate stakeholder communication and software system design. Research has investigated several proposed measures for business process models, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet design decisions usually have to build on thresholds that can reliably indicate that a certain counter-action has to be taken. This cannot be achieved by providing measures alone; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
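For illustration, a minimal sketch of how an error-prediction threshold for a structural measure can be read off a ROC curve using Youden's J statistic; the data are hypothetical, and the paper's adaptation of the ROC method may differ from this plain version.

```python
# Sketch: choosing an error-prediction threshold for a structural measure
# via ROC analysis (Youden's J). Illustrates the general ROC-based approach
# only; the paper's adaptation of the method may differ.
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: one structural measure (e.g. model size) per process
# model, plus a binary flag for whether the model contains an error.
rng = np.random.default_rng(0)
size = np.concatenate([rng.normal(30, 8, 500), rng.normal(55, 12, 150)])
has_error = np.concatenate([np.zeros(500, dtype=int), np.ones(150, dtype=int)])

fpr, tpr, thresholds = roc_curve(has_error, size)
j = tpr - fpr                      # Youden's J per candidate cut-off
best = thresholds[np.argmax(j)]    # cut-off maximising sensitivity + specificity - 1
print(f"suggested size threshold: {best:.1f}")
```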
Abstract:
In Chapter 10, Adam and Dougherty describe the application of medical image processing to the assessment and treatment of spinal deformity, with a focus on the surgical treatment of idiopathic scoliosis. The natural history of spinal deformity and current approaches to surgical and non-surgical treatment are briefly described, followed by an overview of current clinically used imaging modalities. The key metrics currently used to assess the severity and progression of spinal deformities from medical images are presented, followed by a discussion of the errors and uncertainties involved in manual measurements. This provides the context for an analysis of automated and semi-automated image processing approaches to measure spinal curve shape and severity in two and three dimensions.
Abstract:
Autonomous guidance of agricultural vehicles is vital as mechanized farming production becomes more prevalent. It is crucial that tractor-trailers are guided with accuracy in both lateral and longitudinal directions, whilst being affected by large disturbance forces, or slips, owing to uncertain and undulating terrain. Previous research has concentrated on trajectory control, which can provide longitudinal and lateral accuracy if the vehicle moves without sliding and the trailer is passive. In this paper, the problem of robust trajectory tracking along straight and circular paths by a tractor with a steerable trailer is addressed. A robust nonlinear controller combining backstepping and nonlinear PI control is proposed. For vehicles subjected to sliding, the proposed controller makes the lateral deviations and the orientation errors of the tractor and trailer converge to a neighborhood of the origin. Simulation results are presented to illustrate that the suggested controller ensures precise trajectory tracking in the presence of slip.
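The full backstepping plus nonlinear PI design is beyond the scope of an abstract; the toy sketch below only illustrates the underlying idea, namely integral action on the lateral deviation of a kinematic vehicle rejecting a constant slip disturbance. The model, gains and slip value are illustrative assumptions, not the paper's controller.

```python
# Toy illustration of PI action on lateral deviation for a kinematic vehicle
# tracking the straight path y = 0 under a constant lateral slip. A simplified
# stand-in for the paper's backstepping + nonlinear PI design.
import numpy as np

dt, v, slip = 0.01, 2.0, 0.3          # time step [s], speed [m/s], lateral slip [m/s]
kp, ki = 1.5, 0.4                      # illustrative PI gains
y, theta, integral = 1.0, 0.0, 0.0     # lateral error [m], heading [rad], error integral

for _ in range(5000):                  # 50 s of simulation
    integral += y * dt
    theta_ref = -np.arctan(kp * y + ki * integral)  # desired heading from PI law
    theta += 5.0 * (theta_ref - theta) * dt         # fast inner heading loop (simplified)
    y += (v * np.sin(theta) + slip) * dt            # lateral kinematics with slip

print(f"lateral error after 50 s: {y:.3f} m")       # integral term absorbs the slip
```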
Abstract:
Modern technology now has the ability to generate large datasets over space and time. Such data typically exhibit high autocorrelations over all dimensions. The field trial data motivating the methods of this paper were collected to examine the behaviour of traditional cropping and to determine a cropping system which could maximise water use for grain production while minimising leakage below the crop root zone. They consist of moisture measurements made at 15 depths across 3 rows and 18 columns, in the lattice framework of an agricultural field. Bayesian conditional autoregressive (CAR) models are used to account for local site correlations. Conditional autoregressive models have not been widely used in analyses of agricultural data. This paper serves to illustrate the usefulness of these models in this field, along with the ease of implementation in WinBUGS, a freely available software package. The innovation is the fitting of separate conditional autoregressive models for each depth layer, the ‘layered CAR model’, while simultaneously estimating depth profile functions for each site treatment. Modelling interest also lay in how best to model the treatment effect depth profiles, and in the choice of neighbourhood structure for the spatial autocorrelation model. The favoured model fitted the treatment effects as splines over depth, treated depth, the basis for the regression model, as measured with error, and fitted CAR neighbourhood models by depth layer. It is hierarchical, with separate conditional autoregressive spatial variance components at each depth, and its fixed terms involve an errors-in-measurement model that treats depth errors as interval-censored measurement error. The Bayesian framework permits transparent specification and easy comparison of the various complex models considered.
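For readers unfamiliar with CAR priors, one common conditional specification for spatial effects (not necessarily the exact parameterization fitted here) is

\[
\phi_i \mid \phi_{-i} \sim \mathcal{N}\!\left( \frac{\rho}{n_i} \sum_{j \sim i} \phi_j,\; \frac{\sigma^2}{n_i} \right),
\]

where \(j \sim i\) ranges over the \(n_i\) neighbours of site \(i\) and \(\rho\) controls the strength of spatial dependence; in the layered model, a separate variance \(\sigma_d^2\) would be estimated for each depth layer \(d\).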
Abstract:
The objective quantification of three-dimensional kinematics during different functional and occupational tasks is now more in demand than ever. The introduction of a new generation of low-cost passive motion capture systems from a number of manufacturers has made this technology accessible for teaching, clinical practice and small/medium industry. Despite the attractive nature of these systems, their accuracy remains unproven in independent tests. We assessed static linear accuracy and dynamic linear accuracy, and compared gait kinematics from a Vicon MX20 system with those from a Natural Point OptiTrack system. In all experiments data were sampled simultaneously. We found that both systems performed excellently in the linear accuracy tests, with absolute errors not exceeding 1%. In the gait data there was again strong agreement between the two systems in sagittal and coronal plane kinematics. Transverse plane kinematics differed by up to 3° at the knee and hip, which we attributed to the impact of soft tissue artifact accelerations on the data. We suggest that low-cost systems are comparably accurate to their high-end competitors and offer a platform with accuracy acceptable for research in laboratories with a limited budget.
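Linear accuracy of the kind reported here is typically expressed as absolute percentage error of measured inter-marker distances against a known reference length; a minimal sketch with hypothetical numbers:

```python
# Sketch: linear accuracy as absolute percentage error of measured
# inter-marker distances against a known reference length. Illustrative
# only; the study's actual protocol and data are more involved.
import numpy as np

reference_mm = 500.0                                          # known bar length (assumed)
measured_mm = np.array([499.1, 500.8, 498.7, 501.2, 499.6])   # hypothetical samples

abs_pct_error = np.abs(measured_mm - reference_mm) / reference_mm * 100
print(f"max absolute error: {abs_pct_error.max():.2f}%")      # should stay below 1%
```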
Abstract:
In 1999, Richards compared the accuracy of commercially available motion capture systems commonly used in biomechanics. Richards identified that in static tests the optical motion capture systems generally produced RMS errors of less than 1.0 mm. During dynamic tests, the RMS error increased to up to 4.2 mm in some systems. In the last 12 years motion capture systems have continued to evolve and now include high-resolution CCD or CMOS image sensors, wireless communication, and high full-frame sampling frequencies. In addition to hardware advances, there have also been a number of advances in software, including improved calibration and tracking algorithms, real-time data streaming, and the introduction of the C3D standard. These advances have allowed the system manufacturers to maintain a high retail price in the name of advancement. In areas such as gait analysis and ergonomics, many of the advanced features such as high-resolution image sensors and high sampling frequencies are not required due to the nature of the tasks typically investigated. Recently, Natural Point introduced low-cost cameras which, at face value, appear suitable as at the very least a high-quality teaching tool in biomechanics, and possibly even a research tool when coupled with the correct calibration and tracking software. The aim of this study was therefore to compare both the linear accuracy and the quality of angular kinematics from a typical high-end motion capture system and a low-cost system during a simple task.
Abstract:
Error correction is perhaps the most widely used method for responding to student writing. While various studies have investigated the effectiveness of providing error correction, there has been relatively little research incorporating teachers' beliefs, practices, and students' preferences in written error correction. The current study adopted features of an ethnographic research design in order to explore the beliefs and practices of ESL teachers, and investigate the preferences of L2 students regarding written error correction in the context of a language institute situated in the Brisbane metropolitan district. In this study, two ESL teachers and two groups of adult intermediate L2 students were interviewed and observed. The beliefs and practices of the teachers were elicited through interviews and classroom observations. The preferences of L2 students were elicited through focus group interviews. Responses of the participants were encoded and analysed. Results of the teacher interviews showed that teachers believe that providing written error correction has advantages and disadvantages. Teachers believe that providing written error correction helps students improve their proof-reading skills in order to revise their writing more efficiently. However, results also indicate that providing written error correction is very time consuming. Furthermore, teachers prefer to provide explicit written feedback strategies during the early stages of the language course, and move to a more implicit strategy of providing written error correction in order to facilitate language learning. On the other hand, results of the focus group interviews suggest that students regard their teachers' practice of written error correction as important in helping them locate their errors and revise their writing. However, students also feel that the process of providing written error correction is time consuming. Nevertheless, students want and expect their teachers to provide written feedback because they believe that the benefits they gain from receiving feedback on their writing outweigh the apparent disadvantages of their teachers' written error correction strategies.
Abstract:
Purpose: To determine likely errors in estimating retinal shape using partial coherence interferometric instruments when no allowance is made for optical distortion. Method: Errors were estimated using Gullstrand’s No. 1 schematic eye and variants which included a 10 D axial myopic eye, an emmetropic eye with a gradient-index lens, and a 10.9 D accommodating eye with a gradient-index lens. Performance was simulated for two commercial instruments, the IOLMaster (Carl Zeiss Meditec) and the Lenstar LS 900 (Haag-Streit AG). The incident beam was directed towards either the centre of curvature of the anterior cornea (corneal-direction method) or the centre of the entrance pupil (pupil-direction method). Simple trigonometry was used with the corneal intercept and the incident beam angle to estimate retinal contour. Conics were fitted to the estimated contours. Results: The pupil-direction method gave estimates of retinal contour that were much too flat. The corneal-direction method gave similar results for the IOLMaster and Lenstar approaches. The steepness of the retinal contour was slightly overestimated, the exact effects varying with refractive error, gradient index and accommodation. Conclusion: These theoretical results suggest that, for field angles ≤30°, partial coherence interferometric instruments are of use in estimating retinal shape by the corneal-direction method, under the assumptions of a regular retinal shape and no optical distortion. It may be possible to improve on these estimates out to larger field angles by using optical modeling to correct for distortion.
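As a rough illustration of the corneal-direction geometry (ignoring refraction and treating measured path lengths as geometric distances, both simplifications not made in the paper), each beam enters at a corneal intercept aimed at the anterior corneal centre of curvature, and the retinal point is taken to lie a measured distance along that ray:

```python
# Schematic sketch of the corneal-direction geometry in 2D (no refraction,
# geometric path lengths): the beam enters the cornea at intercept P directed
# at the anterior centre of curvature C, and the retinal contour point is
# taken L along that ray. Values and simplifications are illustrative only.
import numpy as np

R = 7.7                                   # anterior corneal radius, mm (Gullstrand-like)
L = 24.0                                  # assumed measured cornea-to-retina path, mm
C = np.array([0.0, R])                    # centre of curvature on the optical axis

for angle_deg in (0, 10, 20, 30):         # incident field angles
    a = np.radians(angle_deg)
    P = C - R * np.array([np.sin(a), np.cos(a)])   # intercept on the corneal sphere
    d = (C - P) / R                                # unit ray direction through C
    retina = P + L * d                             # estimated retinal contour point
    print(f"{angle_deg:2d} deg -> x = {retina[0]:6.2f} mm, z = {retina[1]:6.2f} mm")
```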
Abstract:
Honing and Ladinig (2008) make the assertion that while the internal validity of web-based studies may be reduced, this is offset by an increase in external validity made possible when experimenters can sample a wider range of participants and experimental settings. In this paper, the issue of internal validity is more closely examined, and it is argued that there is no necessary reason why the internal validity of a web-based study should be worse than that of a lab-based one. Errors of measurement or inconsistencies of manipulation will typically balance across conditions of the experiment, and thus need not necessarily threaten the validity of a study’s findings.
Abstract:
Contact lenses are a common method for the correction of refractive errors of the eye. While there have been significant advancements in contact lens designs and materials over the past few decades, the lenses still represent a foreign object in the ocular environment and may lead to physiological as well as mechanical effects on the eye. When contact lenses are placed in the eye, the ocular anatomical structures behind and in front of the lenses are directly affected. This thesis presents a series of experiments that investigate the mechanical and physiological effects of the short-term use of contact lenses on anterior and posterior corneal topography, corneal thickness, the eyelids, tarsal conjunctiva and tear film surface quality. The experimental paradigm used in these studies was a repeated-measures, cross-over design in which subjects wore various types of contact lenses on different days, with the lenses varied in one or more key parameters (e.g. material or design). Both old and newer lens materials were investigated; soft and rigid lenses, high and low oxygen permeability materials, toric and spherical designs, high and low powers, and small and large diameters were all examined. To establish the natural variability in the ocular measurements used in the studies, each experiment also contained at least one “baseline” day on which an identical measurement protocol was followed with no contact lenses worn. In this way, changes associated with contact lens wear were considered in relation to those changes that occurred naturally during the 8-hour period of the experiment. In the first study, the regional distribution and magnitude of change in corneal thickness and topography were investigated in the anterior and posterior cornea after short-term use of soft contact lenses in 12 young adults using the Pentacam. Four different types of contact lenses (Silicone Hydrogel/Spherical/–3D, Silicone Hydrogel/Spherical/–7D, Silicone Hydrogel/Toric/–3D and HEMA/Toric/–3D), of different materials, designs and powers, were worn for 8 hours each on 4 different days. The natural diurnal changes in corneal thickness and curvature were measured on two separate days before any contact lens wear. Significant diurnal changes in corneal thickness and curvature within the duration of the study were observed, and these were taken into consideration when calculating the contact lens induced corneal changes. Corneal thickness changed significantly with lens wear, and the greatest corneal swelling was seen with the hydrogel (HEMA) toric lens, with noticeable regional swelling of the cornea beneath the stabilization zones, the thickest regions of the lenses. The anterior corneal surface generally showed a slight flattening with lens wear. All contact lenses resulted in central posterior corneal steepening, which correlated with the relative degree of corneal swelling. The corneal swelling induced by the silicone hydrogel contact lenses was typically less than the natural diurnal thinning of the cornea over this same period (i.e. net thinning). This highlights why it is important to consider the natural diurnal variations in corneal thickness observed from morning to afternoon to accurately interpret contact lens induced corneal swelling.
In the second experiment, the relative influence of lenses of different rigidity (polymethyl methacrylate – PMMA, rigid gas permeable – RGP and silicone hydrogel – SiHy) and diameters (9.5, 10.5 and 14.0 mm) on corneal thickness, topography, refractive power and wavefront error was investigated. Four different types of contact lenses (PMMA/9.5, RGP/9.5, RGP/10.5, SiHy/14.0) were worn by 14 young healthy adults for a period of 8 hours on 4 different days. There was a clear association between fluorescein fitting pattern characteristics (i.e. regions of minimum clearance in the fluorescein pattern) and the resulting corneal shape changes. PMMA lenses resulted in significant corneal swelling (more in the centre than periphery) along with anterior corneal steepening and posterior flattening. RGP lenses, on the other hand, caused less corneal swelling (more in the periphery than centre) along with the opposite effects on corneal curvature: anterior corneal flattening and posterior steepening. RGP lenses also resulted in a clinically and statistically significant decrease in corneal refractive power (ranging from 0.99 to 0.01 D), large enough to affect vision and require adjustment of the lens power. Wavefront analysis also showed a significant increase in higher order aberrations after PMMA lens wear, which may partly explain previous reports of "spectacle blur" following PMMA lens wear. We further explored corneal curvature, thickness and refractive changes with back surface toric and spherical RGP lenses in a group of 6 subjects with toric corneas. The lenses were worn for 8 hours and measurements were taken before and after lens wear, as in the previous experiments. Both lens types caused anterior corneal flattening and a decrease in corneal refractive power, but the changes were greater with the spherical lens. The spherical lens also caused a significant decrease in WTR astigmatism (WTR astigmatism defined as major axis within 30 degrees of horizontal). Both lenses caused slight posterior corneal steepening and corneal swelling, with a greater effect in the periphery compared to the central cornea. Eyelid position, lid-wiper and tarsal conjunctival staining were also measured in Experiment 2 after short-term use of the rigid and SiHy contact lenses. Digital photos of the external eyes were captured for lid position analysis. The lid-wiper region of the marginal conjunctiva was stained using fluorescein and lissamine green dyes, and digital photos were graded by an independent masked observer. A grading scale was developed in order to describe the tarsal conjunctival staining. A significant decrease in palpebral aperture height (blepharoptosis) was found after wearing the PMMA/9.5 and RGP/10.5 lenses. All three rigid contact lenses caused a significant increase in lid-wiper and tarsal staining after 8 hours of lens wear. There was also a significant diurnal increase in tarsal staining, even without contact lens wear. These findings highlight the need for better contact lens edge designs to minimise the interactions between the lid and contact lens edge during blinking, and for more lubricious contact lens surfaces to reduce ocular surface micro-trauma due to friction. Tear film surface quality (TFSQ) was measured using a high-speed videokeratoscopy technique in Experiment 2. TFSQ was worse with all the lenses (PMMA/9.5, RGP/9.5, RGP/10.5 and SiHy/14) compared to baseline in the afternoon (after 8 hours) during both normal and suppressed blinking conditions.
The reduction in TFSQ was similar with all the contact lenses used, irrespective of their material and diameter. An unusual pattern of change in TFSQ under suppressed blinking conditions was also found: TFSQ with a contact lens decreased until a certain time, after which it improved to a value even better than that of the bare eye. This is likely to be due to the tear film drying completely over the surface of the contact lens. The findings of this study also show that there is still scope for improvement in contact lens materials in terms of better wettability and hydrophilicity, in order to improve TFSQ and patient comfort. These experiments showed that a variety of changes can occur in the anterior eye as a result of the short-term use of a range of commonly used contact lens types. The greatest corneal changes occurred with lenses manufactured from older HEMA and PMMA materials, whereas modern SiHy and rigid gas permeable materials caused more subtle changes in corneal shape and thickness. All lenses caused signs of micro-trauma to the eyelid wiper and palpebral conjunctiva, although rigid lenses appeared to cause more significant changes. Tear film surface quality was also significantly reduced with all types of contact lenses. These short-term changes in the anterior eye are potential markers for further long-term changes, and the relative differences between lens types that we have identified indicate areas of contact lens design and manufacture that warrant further development.
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the above-60-year-old age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher field 3T MRI imaging. However, a quantification of the signal to noise ratio (SNR) gain at the bone–soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are prone to motion artefacts due to random movements of the subject's limbs. One of the artefacts observed is the step artefact, believed to arise from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of the SNR and contrast to noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner, and corrected using an aligning method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared with the 0.18 mm average deviation of CT-based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone–muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies caused by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm relative to the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
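For illustration, a minimal sketch of the two segmentation approaches compared, intensity thresholding and Canny edge detection, applied to a single 2D slice with scikit-image. The synthetic slice, the use of Otsu's method for threshold selection, and all parameters are stand-ins for the thesis's actual data and threshold-selection method.

```python
# Sketch of the two segmentation approaches compared: intensity thresholding
# and Canny edge detection on one 2D slice (scikit-image). Illustrative only.
import numpy as np
from skimage import filters, feature, morphology

# Synthetic stand-in for a CT/MRI slice: a bright "bone" disk on a darker
# soft-tissue background (hypothetical data; a real pipeline would load DICOM).
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:128, :128]
slice_2d = 100.0 + 900.0 * ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2)
slice_2d += rng.normal(0, 20, slice_2d.shape)

# 1) Intensity thresholding: voxels above a selected level form the bone mask
#    (Otsu's method stands in for the thesis's threshold-selection procedure).
level = filters.threshold_otsu(slice_2d)
bone_mask = slice_2d > level

# 2) Canny edge detection: delineate the bone contour, closing small gaps
#    before stacking 2D contours into a 3D model.
edges = feature.canny(slice_2d, sigma=2.0)
contour = morphology.binary_closing(edges, morphology.disk(3))
```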
Abstract:
A new system is described for estimating volume from a series of multiplanar 2D ultrasound images. Ultrasound images are captured using a personal computer video digitizing card, and an electromagnetic localization system is used to record the pose of the ultrasound images. The accuracy of the system was assessed by scanning four groups of ten cadaveric kidneys on four different ultrasound machines. Scan image planes were oriented either radially, in parallel, or slanted at 30° to the vertical. The cross-sectional images of the kidneys were traced using a mouse and the outline points transformed to 3D space using the Fastrak position and orientation data. Points on adjacent region-of-interest outlines were connected to form a triangle mesh and the volume of the kidneys estimated using the ellipsoid, planimetry, tetrahedral and ray tracing methods. There was little difference between the results for the different scan techniques or volume estimation algorithms, although, perhaps as expected, the ellipsoid results were the least precise. For radial scanning and ray tracing, the mean and standard deviation of the percentage errors for the four different machines were as follows: Hitachi EUB-240, −3.0 ± 2.7%; Tosbee RM3, −0.1 ± 2.3%; Hitachi EUB-415, 0.2 ± 2.3%; Acuson, 2.7 ± 2.3%.
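A tetrahedral method of the kind mentioned is commonly implemented by summing the signed volumes of tetrahedra formed by each mesh triangle and the origin; a minimal sketch, assuming a closed, consistently oriented triangle mesh (the system's exact algorithm may differ):

```python
# Sketch of a tetrahedral volume computation for a closed, consistently
# oriented triangle mesh: sum signed volumes of tetrahedra formed by each
# triangle and the origin. The paper's exact algorithm may differ.
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """vertices: (N, 3) float array; faces: (M, 3) int array of vertex indices."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    signed = np.einsum("ij,ij->i", a, np.cross(b, c)) / 6.0  # det(a, b, c) / 6 per face
    return abs(signed.sum())

# Quick check on a unit right tetrahedron (volume 1/6), faces oriented outward.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(mesh_volume(verts, faces))  # -> 0.1666...
```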
Abstract:
For over half a century, it has been known that the rate of morphological evolution appears to vary with the time frame of measurement. Rates of microevolutionary change, measured between successive generations, were found to be far higher than rates of macroevolutionary change inferred from the fossil record. More recently, it has been suggested that rates of molecular evolution are also time dependent, with the estimated rate depending on the timescale of measurement. This followed surprising observations that estimates of mutation rates, obtained in studies of pedigrees and laboratory mutation-accumulation lines, exceeded long-term substitution rates by an order of magnitude or more. Although a range of studies have provided evidence for such a pattern, the hypothesis remains relatively contentious. Furthermore, there is ongoing discussion about the factors that can cause molecular rate estimates to be dependent on time. Here we present an overview of our current understanding of time-dependent rates. We provide a summary of the evidence for time-dependent rates in animals, bacteria and viruses. We review the various biological and methodological factors that can cause rates to be time dependent, including the effects of natural selection, calibration errors, model misspecification and other artefacts. We also describe the challenges in calibrating estimates of molecular rates, particularly on the intermediate timescales that are critical for an accurate characterization of time-dependent rates. This has important consequences for the use of molecular-clock methods to estimate timescales of recent evolutionary events.
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
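A vertically translated exponential decay curve of the kind described can be written, in one natural parameterization (the paper's fitted form may differ), as

\[
r(t) = r_{\infty} + (r_0 - r_{\infty})\, e^{-\lambda t},
\]

where \(r_0\) is the high short-term mutation rate, \(r_{\infty}\) the long-term substitution rate towards which estimates decay, and \(\lambda\) sets how quickly the estimated rate falls with calibration age \(t\); such a curve is what would be used to correct molecular date estimates for rate decay.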
Abstract:
Despite recent methodological advances in inferring the time-scale of biological evolution from molecular data, the fundamental question of whether our substitution models are sufficiently well specified to accurately estimate branch-lengths has received little attention. I examine this implicit assumption of all molecular dating methods, on a vertebrate mitochondrial protein-coding dataset. Comparison with analyses in which the data are RY-coded (AG → R; CT → Y) suggests that even rates-across-sites maximum likelihood greatly under-compensates for multiple substitutions among the standard (ACGT) NT-coded data, which has been subject to greater phylogenetic signal erosion. Accordingly, the fossil record indicates that branch-lengths inferred from the NT-coded data translate into divergence time overestimates when calibrated from deeper in the tree. Intriguingly, RY-coding led to the opposite result. The underlying NT and RY substitution model misspecifications likely relate respectively to “hidden” rate heterogeneity and changes in substitution processes across the tree, for which I provide simulated examples. Given the magnitude of the inferred molecular dating errors, branch-length estimation biases may partly explain current conflicts with some palaeontological dating estimates.
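RY-coding itself is a simple recoding of purines and pyrimidines; a minimal sketch:

```python
# Sketch of RY-coding: purines (A, G) -> R, pyrimidines (C, T) -> Y.
RY = str.maketrans({"A": "R", "G": "R", "C": "Y", "T": "Y"})

def ry_code(seq: str) -> str:
    """Recode a nucleotide sequence into two-state purine/pyrimidine characters."""
    return seq.upper().translate(RY)

print(ry_code("ATGGCAT"))  # -> RYRRYRY
```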