144 results for Dentofacial deviation
Abstract:
Variable Speed Limits (VSL) are an Intelligent Transportation Systems (ITS) control tool that can enhance traffic safety and has the potential to contribute to traffic efficiency. This study presents the results of a calibration and operational analysis of a candidate VSL algorithm for high-flow conditions on an urban motorway in Queensland, Australia. The analysis was done using a framework consisting of a microscopic simulation model combined with a runtime API and a proposed efficiency index. The operational analysis includes impacts on the speed-flow curve, travel time, speed deviation, fuel consumption and emissions.
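The abstract does not define the proposed efficiency index, so the sketch below is only a rough illustration of how such an index could weigh the reported measures (travel time, speed deviation, fuel and emissions) against a no-control baseline; the function name, metric keys and weights are all invented, not the study's formulation.

```python
# Hypothetical sketch of a composite efficiency index comparing a VSL scenario
# against a no-control baseline. Metric names and weights are assumptions,
# not the index proposed in the study.

def efficiency_index(vsl, baseline, weights=(0.4, 0.2, 0.2, 0.2)):
    """Each argument is a dict with keys: travel_time, speed_std, fuel, emissions.
    Values below 1.0 indicate the VSL scenario improves on the baseline."""
    keys = ("travel_time", "speed_std", "fuel", "emissions")
    ratios = [vsl[k] / baseline[k] for k in keys]        # normalise to baseline
    return sum(w * r for w, r in zip(weights, ratios))   # weighted composite

# Example values as they might come from a microsimulation run
baseline = {"travel_time": 540.0, "speed_std": 12.0, "fuel": 85.0, "emissions": 210.0}
vsl_run = {"travel_time": 525.0, "speed_std": 9.5, "fuel": 83.0, "emissions": 205.0}
print(round(efficiency_index(vsl_run, baseline), 3))
```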
Abstract:
Two-stroke outboard boat engines using total-loss lubrication deposit a significant proportion of their lubricant and fuel directly into the water. The purpose of this work is to document the velocity and concentration field characteristics of a submerged swirling water jet emanating from a propeller in order to provide information on its fundamental characteristics. The properties of the jet were examined far enough downstream to be relevant to the eventual modelling of the mixing problem. Measurements of the velocity and concentration field were performed in a turbulent jet generated by a model boat propeller (0.02 m diameter) operating at 1500 rpm and 3000 rpm in a weak co-flow of 0.04 m/s. The measurements were carried out in the Zone of Established Flow up to 50 propeller diameters downstream of the propeller, which was placed in a glass-walled flume 0.4 m wide with a free-surface depth of 0.15 m. The development of the jet and scalar plume was compared to that of a classical free round jet. Further, results pertaining to the radial distribution, self-similarity, standard deviation growth, maximum value decay and integral fluxes of velocity and concentration were presented and fitted with empirical correlations. Furthermore, propeller-induced mixing and the pollutant source concentration from a two-stroke engine were estimated.
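The classical free round jet used as the reference above has an approximately Gaussian self-similar radial velocity profile, so the kind of empirical correlation mentioned can be fitted along the following lines; the data values are illustrative only, not the study's measurements.

```python
# Fit a Gaussian self-similar profile u(r) = u_c * exp(-(r/b)^2) to radial
# velocity data, as is commonly done for free round jets. Sample data invented.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_profile(r, u_c, b):
    """Centreline velocity u_c and half-width parameter b."""
    return u_c * np.exp(-(r / b) ** 2)

r = np.array([0.0, 0.01, 0.02, 0.03, 0.04, 0.05])   # radial position (m)
u = np.array([0.30, 0.27, 0.20, 0.12, 0.06, 0.03])  # mean axial velocity (m/s)

(u_c, b), _ = curve_fit(gaussian_profile, r, u, p0=(0.3, 0.02))
print(f"centreline velocity ~ {u_c:.3f} m/s, half-width ~ {b * 1000:.1f} mm")
```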
Abstract:
In this paper we consider the case of large cooperative communication systems where terminals use the slotted amplify-and-forward protocol to aid the source in its transmission. Using the perturbation expansion of resolvents and large deviation techniques, we obtain an expression for the Stieltjes transform of the asymptotic eigenvalue distribution of a sample covariance random matrix of the type HH†, where H is the channel matrix of the transmission model for the protocol we consider. We prove that the resulting expression is similar, in its quadratic-equation form, to the Stieltjes transform of the Marcenko-Pastur distribution.
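For reference, the quadratic-equation form alluded to above is standard for the Marcenko-Pastur law: for a sample covariance matrix with aspect ratio c and per-entry variance σ², its Stieltjes transform m(z) satisfies the following textbook relation (not the paper's derived expression for HH†).

```latex
% Stieltjes transform m(z) of the Marcenko-Pastur law, aspect ratio c, variance \sigma^2
\[
  c\,\sigma^{2} z\, m(z)^{2} \;-\; \bigl(\sigma^{2}(1-c) - z\bigr)\, m(z) \;+\; 1 \;=\; 0,
  \qquad\text{equivalently}\qquad
  m(z) \;=\; \frac{1}{\sigma^{2}\bigl(1 - c - c\,z\,m(z)\bigr) - z}.
\]
```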
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
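In rough notation (chosen here for illustration, not the paper's exact statement), the selection rule and the kind of oracle inequality described above can be written as follows, with F_1 ⊆ F_2 ⊆ … the nested models, R̂_n the empirical risk, R the true risk and pen_n(k) a tight upper bound on the estimation error within F_k; the inequality is understood up to the usual high-probability or expectation qualifiers.

```latex
% Penalized model selection over nested models F_1 \subseteq F_2 \subseteq \dots
% (\operatorname* requires amsmath)
\[
  \hat{k} \;=\; \operatorname*{arg\,min}_{k}\Bigl( \min_{f \in F_k} \hat{R}_n(f) \;+\; \mathrm{pen}_n(k) \Bigr),
  \qquad
  \hat{f} \;=\; \operatorname*{arg\,min}_{f \in F_{\hat{k}}} \hat{R}_n(f),
\]
\[
  R(\hat{f}) \;\le\; \min_{k}\Bigl( \inf_{f \in F_k} R(f) \;+\; C\,\mathrm{pen}_n(k) \Bigr).
\]
```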
Abstract:
Ocean processes are dynamic, complex, and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were continually provided by fixed sensors, e.g., moorings, or gathered from ships. Recently, an increase in the utilization of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path following accuracy and sampling resolution for a given region of interest, which addresses a pressing need among ocean scientists to efficiently and effectively collect high-value data. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together give informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximizing the information value along the path while minimizing the deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be repeatedly traversed to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show significant improvements in data resolution and path reliability compared to previously executed sampling paths used in the respective regions.
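As a rough illustration of the trade-off described, and not the authors' actual planner, a discretised score for a candidate closed-circuit path might blend an information-value reward with a penalty for expected current-induced deviation, e.g.:

```python
# Illustrative blend of information value and expected current-induced deviation
# for a candidate glider path. Field values, weights and the deviation proxy are
# assumptions, not the planner used in the paper.
import numpy as np

def path_score(waypoints, info_field, current_field, alpha=1.0, beta=2.0):
    """waypoints: (N, 2) array of grid indices along a candidate closed circuit.
    info_field: 2-D array of information value; current_field: 2-D array of
    expected cross-track current magnitude (proxy for deviation from the path)."""
    idx = (waypoints[:, 0], waypoints[:, 1])
    info = info_field[idx].sum()          # reward: information collected
    deviation = current_field[idx].sum()  # penalty: expected push off the path
    return alpha * info - beta * deviation

rng = np.random.default_rng(0)
info = rng.random((20, 20))
currents = rng.random((20, 20)) * 0.3
candidate = np.array([[5, 5], [5, 10], [10, 10], [10, 5], [5, 5]])
print(round(path_score(candidate, info, currents), 3))
```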
Abstract:
Background: The aims of this study were to determine the documentation of pharmacotherapy optimization goals in the discharge letters of patients with the principal diagnosis of chronic heart failure. Methods: A retrospective practice audit of 212 patients discharged to the care of their local general practitioner from general medical units of a large tertiary hospital. Details of recommendations regarding ongoing pharmacological and non-pharmacological management were reviewed. The doses of medications on discharge were noted, along with whether they met current guidelines recommending titration of angiotensin-converting enzyme inhibitors and beta-blockers. Ongoing arrangements for specialist follow-up were also reviewed. Results: The mean age of patients whose letters were reviewed was 78.4 years (standard deviation ± 8.6); 50% were men. Patients had an overall median of six comorbidities and eight regular medications on discharge. Mean length of stay for each admission was 6 days. Discharge letters were posted a median of 4 days after discharge, with 25% not posted at 10 days. No discharge letter was sent in 9.4% (20) of the cases. Only six letters (2.8%) contained any recommendation regarding future titration of angiotensin-converting enzyme inhibitors, and only 14 (6.6%) for beta-blockers. Recommendations for future non-pharmacological management (for example, diuretic action plans, regular weight monitoring and exercise plans) were not found in the letters in this audit. Conclusion: Hospital discharge is an opportunity to communicate management plans for treatment optimization effectively, and while this opportunity continues to be missed, implementation gaps in the management of cardiac failure will probably remain.
Abstract:
Background: Bioimpedance techniques provide a reliable method of assessing unilateral lymphedema in a clinical setting. Bioimpedance devices are traditionally used to assess body composition at a current frequency of 50 kHz. However, these devices are not transferable to the assessment of lymphedema, as the sensitivity of measuring the impedance of extracellular fluid is frequency-dependent. It has previously been shown that the best frequency to detect extracellular fluid is 0 kHz (or DC). However, measurement at this frequency is not possible in practice due to the high skin impedance at DC, and an estimate is usually determined from low-frequency measurements. This study investigated the efficacy of various low-frequency ranges for the detection of lymphedema. Methods and Results: Limb impedance was measured at 256 frequencies between 3 kHz and 1000 kHz for a sample control population, an arm lymphedema population, and a leg lymphedema population. Limb impedance was measured using the ImpediMed SFB7 and ImpediMed L-Dex® U400 with equipotential electrode placement on the wrists and ankles. The contralateral limb impedance ratio for arms and legs was used to calculate a lymphedema index (L-Dex) at each measurement frequency. The standard deviation of the limb impedance ratio in a healthy control population has been shown to increase with frequency for both the arm and the leg. Box-and-whisker plots of the spread of the control and lymphedema populations show good differentiation between the arm and leg L-Dex measured for lymphedema subjects and that measured for control subjects up to a frequency of about 30 kHz. Conclusions: It can be concluded that impedance measurements above a frequency of about 30 kHz have reduced sensitivity to extracellular fluid and are not reliable for the early detection of lymphedema.
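A minimal sketch of the frequency-wise quantity described above (the contralateral limb impedance ratio at each measurement frequency and its spread across a control group); the array shapes and values are synthetic, and this is not ImpediMed's proprietary L-Dex calculation.

```python
# Contralateral limb impedance ratio per measurement frequency, and its spread
# across a control group. Synthetic data; not the ImpediMed L-Dex formula.
import numpy as np

rng = np.random.default_rng(1)
freqs_khz = np.geomspace(3, 1000, 256)           # 256 frequencies, 3-1000 kHz
# impedance of the two limbs for 30 control subjects (ohms)
z_limb_a = rng.normal(300, 20, size=(30, 256))
z_limb_b = z_limb_a * rng.normal(1.0, 0.02 + 0.05 * freqs_khz / 1000, size=(30, 256))

ratio = z_limb_a / z_limb_b                      # contralateral impedance ratio
ratio_sd = ratio.std(axis=0)                     # spread across controls, per frequency
print("SD of ratio at 3 kHz vs 1000 kHz:", ratio_sd[0].round(4), ratio_sd[-1].round(4))
```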
Abstract:
Objectives: The purpose of this study was to describe the use, as well as the perceived effectiveness, of mainstream and complementary and alternative medicine (CAM) therapies in the treatment of lymphedema following breast or gynecological cancer. Further, the study assessed the relationship between the characteristics of lymphedema (including type, severity, stability, and duration) and the use of CAM and/or mainstream treatment. Methods: This was a cross-sectional study using a convenience sample of women with lymphedema following breast and gynecological cancers. A self-administered questionnaire was sent to 247 potentially eligible women. Of those returned (50%), 23 were ineligible and 6 were excluded due to the level of missing data. Results: In the previous 12 months, the majority of women (90%) had used mainstream treatments to treat their lymphedema, with massage being the most commonly used (86%). One in two women had used CAM to treat their lymphedema, and 98% of those using CAM were also using mainstream treatments. Over 27 types of CAM were reported, with use of a chi machine, vitamin E supplements, yoga, and meditation being the most commonly reported forms. The perceived effectiveness ratings (1–7, with 7 = completely effective) of mainstream (mean ± standard deviation (SD): 5.3 ± 1.5) and CAM therapies (mean ± SD: 5.2 ± 1.6) were considered high. Conclusions: These results demonstrate that mainstream and CAM treatment use is common, varied, and considered to be effective among women with lymphedema following breast or gynecological cancer. Furthermore, it highlights the immediate need for larger prospective studies assessing the inter-relationship between the use of mainstream and CAM therapies for treatment success.
Abstract:
There are many applications in aeronautical/aerospace engineering where some values of the design parameters or states cannot be provided or determined accurately. These values can be related to the geometry (wingspan, length, angles) and/or to operational flight conditions that vary due to the presence of uncertainty parameters (Mach, angle of attack, air density and temperature, etc.). These uncertain design parameters cannot be ignored in engineering design and must be taken into account in the optimisation task to produce more realistic and reliable solutions. In this paper, a robust/uncertainty design method with statistical constraints is introduced to produce a set of reliable solutions which have high performance and low sensitivity. The robust design concept, coupled with Multi-Objective Evolutionary Algorithms (MOEAs), is defined by applying two statistical sampling formulas: the mean and the variance/standard deviation associated with the optimisation fitness/objective functions. The methodology is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. It is implemented for two practical Unmanned Aerial System (UAS) design problems; the first case considers robust multi-objective (single-disciplinary: aerodynamics) design optimisation and the second considers robust multidisciplinary (aero-structural) design optimisation. Numerical results show that the solutions obtained by the robust design method with statistical constraints have more reliable performance and sensitivity in both aerodynamics and structures when compared to the baseline design.
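A minimal sketch of the statistical-sampling idea, in which a design's fitness is summarised by the mean and standard deviation of an objective evaluated over sampled uncertain flight conditions; the performance model and parameter ranges below are placeholders, not the UAS analyses used in the paper.

```python
# Robust fitness of a candidate design: mean and standard deviation of a
# performance metric over sampled uncertain conditions (Mach, angle of attack).
# The performance model is a placeholder, not the paper's aerodynamic analysis.
import numpy as np

def performance(design, mach, alpha_deg):
    span, chord = design
    return span * chord * np.cos(np.radians(alpha_deg)) / (1.0 + 0.5 * mach**2)

def robust_fitness(design, n_samples=200, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    mach = rng.normal(0.3, 0.03, n_samples)    # uncertain Mach number
    alpha = rng.normal(4.0, 1.0, n_samples)    # uncertain angle of attack (deg)
    values = performance(design, mach, alpha)
    return values.mean(), values.std()         # objectives: maximise mean, minimise spread

mean_perf, std_perf = robust_fitness((2.4, 0.35))
print(f"mean performance {mean_perf:.3f}, sensitivity (std) {std_perf:.3f}")
```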
Abstract:
Eccentric exercise is the conservative treatment of choice for mid-portion Achilles tendinopathy. While there is a growing body of evidence supporting the medium to long term efficacy of eccentric exercise in Achilles tendinopathy treatment, very few studies have investigated the short term response of the tendon to eccentric exercise. Moreover, the mechanisms through which tendinopathy symptom resolution occurs remain to be established. The primary purpose of this thesis was to investigate the acute adaptations of the Achilles tendon to, and the biomechanical characteristics of, the eccentric exercise protocol used for Achilles tendinopathy rehabilitation and a concentric equivalent. The research was conducted with an orientation towards exploring potential mechanisms through which eccentric exercise may bring about a resolution of tendinopathy symptoms. Specifically, the morphology of tendinopathic and normal Achilles tendons was monitored using high resolution sonography prior to and following eccentric and concentric exercise, to facilitate comparison between the treatment of choice and a similar alternative. To date, the only proposed mechanism through which eccentric exercise is thought to result in symptom resolution is the increased variability in motor output force observed during eccentric exercise. This thesis expanded upon prior work by investigating the variability in motor output force recorded during eccentric and concentric exercises, when performed at two different knee joint angles, by limbs with and without symptomatic tendinopathy. The methodological phase of the research focused on establishing the reliability of measures of tendon thickness, tendon echogenicity, electromyography (EMG) of the Triceps Surae and the standard deviation (SD) and power spectral density (PSD) of the vertical ground reaction force (VGRF). These analyses facilitated comparison between the error in the measurements and experimental differences identified as statistically significant, so that the importance and meaning of the experimental differences could be established. One potential limitation of monitoring the morphological response of the Achilles tendon to exercise loading is that the Achilles tendon is continually exposed to additional loading as participants complete the walking required to carry out their necessary daily tasks. The specific purpose of the last experiment in the methodological phase was to evaluate the effect of incidental walking activity on Achilles tendon morphology. The results of this study indicated that walking activity could decrease Achilles tendon thickness (negative diametral strain) and that the decrease in thickness was dependent on both the amount of walking completed and the proximity of walking activity to the sonographic examination. Thus, incidental walking activity was identified as a potentially confounding factor for future experiments which endeavoured to monitor changes in tendon thickness with exercise loading. In the experimental phase of this thesis the thickness of Achilles tendons was monitored prior to and following isolated eccentric and concentric exercise. The initial pilot study demonstrated that eccentric exercise resulted in a greater acute decrease in Achilles tendon thickness (greater diametral strain) compared to an equivalent concentric exercise, in participants with no history of Achilles tendon pain. This experiment was then expanded to incorporate participants with unilateral Achilles tendinopathy. 
The major finding of this experiment was that the acute decrease in Achilles tendon thickness observed following eccentric exercise was modified by the presence of tendinopathy, with a smaller decrease (less diametral strain) noted for tendinopathic compared to healthy control tendon. Based on in vitro evidence a decrease in tendon thickness is believed to reflect extrusion of fluid from the tendon with loading. This process would appear to be limited by the presence of pathology and is hypothesised to be a result of the changes in tendon structure associated with tendinopathy. Load induced fluid movement may be important to the maintenance of tendon homeostasis and structure as it has the potential to enhance molecular movement and stimulate tendon remodelling. On this basis eccentric exercise may be more beneficial to the tendon than concentric exercise. Finally, EMG and motor output force variability (SD and PSD of VGRF) were investigated while participants with and without tendinopathy performed the eccentric and concentric exercises. Although between condition differences were identified as statistically significant for a number of force variability parameters, the differences were not greater than the limits of agreement for repeated measures. Consequently the meaning and importance of these findings were questioned. Interestingly, the EMG amplitude of all three Triceps Surae muscles did not vary with knee joint angle during the performance of eccentric exercise. This raises questions pertaining to the functional importance of performing the eccentric exercise protocol at each of the two knee joint angles as it is currently prescribed. EMG amplitude was significantly greater during concentric compared to eccentric muscle actions. Differences in the muscle activation patterns may result in different stress distributions within the tendon and be related to the different diametral strain responses observed for eccentric and concentric muscle actions.
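A minimal sketch of how the force-variability measures mentioned (the SD and power spectral density of the vertical ground reaction force) can be computed from a force-plate signal; the synthetic signal, sampling rate and the use of Welch's method are assumptions, not necessarily the thesis's processing pipeline.

```python
# Standard deviation and power spectral density (PSD) of a vertical ground
# reaction force (VGRF) signal. Synthetic signal; Welch's method is one common
# PSD estimator, not necessarily the one used in the thesis.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
vgrf = 800 + 30 * np.sin(2 * np.pi * 1.0 * t) + np.random.default_rng(2).normal(0, 5, t.size)

force_sd = vgrf.std()                        # variability of motor output force
freqs, psd = welch(vgrf - vgrf.mean(), fs=fs, nperseg=2048)
dominant = freqs[np.argmax(psd)]
print(f"VGRF SD = {force_sd:.1f} N, dominant frequency ~ {dominant:.2f} Hz")
```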
Abstract:
Objective: To determine the test-retest reliability of measurements of thickness, fascicle length (Lf) and pennation angle (θ) of the vastus lateralis (VL) and gastrocnemius medialis (GM) muscles in older adults. Participants: Twenty-one healthy older adults (11 men and 10 women; average age 68.1 ± 5.2 years) participated in this study. Methods: Ultrasound images (probe frequency 10 MHz) of the VL at two sites (VL site 1 and 2) were obtained with participants seated with knee at 90° flexion. For GM measures, participants lay prone with ankle fixed at 15° dorsiflexion. Measures were taken on two separate occasions, 7 days apart (T1 and T2). Results: The ICCs (95% CI) were: VL site 1 thickness = 0.96 (0.90–0.98); VL site 2 thickness = 0.96 (0.90–0.98); VL θ = 0.87 (0.68–0.95); VL Lf = 0.80 (0.50–0.92); GM thickness = 0.97 (0.92–0.99); GM θ = 0.85 (0.62–0.94); and GM Lf = 0.90 (0.75–0.96). The 95% ratio limits of agreement (LOAs) for all measures, calculated by multiplying the standard deviation of the ratio of the results between T1 and T2 by 1.96, ranged from 10.59 to 38.01%. Conclusion: The ability of these tests to determine a real change in VL and GM muscle architecture is good on a group level but problematic on an individual level as the relatively large 95% ratio LOAs in the current study may encompass the changes in architecture observed in other training studies. Therefore, the current findings suggest that B-mode ultrasonography can be used with confidence by researchers when investigating changes in muscle architecture in groups of older adults, but its use is limited in showing changes in individuals over time.
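The ratio limits-of-agreement calculation described above is straightforward to reproduce; the sketch below uses invented test-retest thickness values, not the study's data.

```python
# 95% ratio limits of agreement between two test occasions, following the
# description in the abstract: SD of the T2/T1 ratio multiplied by 1.96,
# expressed as a percentage. The thickness values are invented for illustration.
import numpy as np

t1 = np.array([2.10, 1.95, 2.30, 2.05, 2.20, 1.88])  # muscle thickness at T1 (cm)
t2 = np.array([2.15, 1.90, 2.25, 2.10, 2.12, 1.95])  # muscle thickness at T2 (cm)

ratio = t2 / t1
loa_percent = 1.96 * ratio.std(ddof=1) * 100
print(f"mean ratio {ratio.mean():.3f}, 95% ratio LOA ~ {loa_percent:.1f}%")
```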
Abstract:
In 2003, the youth justice system in Scotland entered a new phase with the introduction of a pilot youth court. The processing of persistent 16 and 17 year old offenders (and serious 15 year old offenders) represented a stark deviation from a 'child-centred' and needs-oriented state apparatus for dealing with young offenders to one based on deeds and individual responsibility. This article, based on an evaluation funded by the Scottish Executive, is the first to provide a critical appraisal of this youth justice reform. It examines the views of the judiciary and young offenders and reveals that the pilot youth court in Scotland represents a punitive excursion that poses serious concerns for due process, human rights and net-widening.
Abstract:
Twin studies offer the opportunity to determine the relative contribution of genes versus environment in traits of interest. Here, we investigate the extent to which variance in brain structure is reduced in monozygous (MZ) twins with identical genetic make-up. We investigate whether using twins as compared to a control population reduces variability in a number of common magnetic resonance (MR) structural measures, and we investigate the location of areas under major genetic influences. This is fundamental to understanding the benefit of using twins in studies where structure is the phenotype of interest. Twenty-three pairs of healthy MZ twins were compared to matched control pairs. Volume, T2 and diffusion MR imaging were performed, as well as MR spectroscopy (MRS). Images were compared using (i) global measures of standard deviation and effect size, (ii) voxel-based analysis of similarity and (iii) intra-pair correlation. Global measures indicated a consistent increase in structural similarity in twins. The voxel-based and correlation analyses indicated a widespread pattern of increased similarity in twin pairs, particularly in frontal and temporal regions. The areas of increased similarity were most widespread for the diffusion trace and least widespread for T2. MRS showed a consistent reduction in metabolite variation that was significant for temporal lobe N-acetylaspartate (NAA). This study has shown the distribution and magnitude of reduced variability in brain volume, diffusion, T2 and metabolites in twins. The data suggest that evaluation of twins discordant for disease is indeed a valid way to attribute genetic or environmental influences to observed abnormalities in patients, since evidence is provided for the underlying assumption of decreased variability in twins.
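As a rough sketch of one of the similarity measures mentioned (intra-pair correlation of a structural measure, computed across twin pairs), using synthetic values rather than the study's MR data:

```python
# Intra-pair correlation: correlate twin 1 against twin 2 across pairs for a
# single structural measure. Synthetic values for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n_pairs = 23
shared = rng.normal(0, 1, n_pairs)            # component shared within a pair
twin1 = shared + rng.normal(0, 0.3, n_pairs)  # twin-specific variation
twin2 = shared + rng.normal(0, 0.3, n_pairs)

intra_pair_r = np.corrcoef(twin1, twin2)[0, 1]
print(f"intra-pair correlation ~ {intra_pair_r:.2f}")
```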
Abstract:
Purpose: James Clerk Maxwell is usually recognized as being the first, in 1854, to consider using inhomogeneous media in optical systems. However, some fifty years earlier Thomas Young, stimulated by his interest in the optics of the eye and accommodation, had already modeled some applications of gradient-index optics. These applications included using an axial gradient to provide spherical aberration-free optics and a spherical gradient to describe the optics of the atmosphere and the eye lens. We evaluated Young's contributions. Method: We attempted to derive Young's equations for axial and spherical refractive index gradients. Ray tracing was used to confirm the accuracy of the formulae. Results: We did not confirm Young's equation for the axial gradient to provide aberration-free optics, but derived a slightly different equation. We confirmed the correctness of his equations for the deviation of rays in a spherical gradient index and for the focal length of a lens with a nucleus of fixed index surrounded by a cortex of reducing index towards the edge. Young claimed that the equation for focal length applied to a lens with part of the constant-index nucleus of the sphere removed, such that the loss of focal length was a quarter of the thickness removed, but this is not strictly correct. Conclusion: Young's theoretical work in gradient-index optics received no acknowledgement from either his contemporaries or later authors. While his model of the eye lens is not an accurate physiological description of the human lens, with the index reducing least quickly at the edge, it represented a bold attempt to approximate the characteristics of the lens. Thomas Young's work deserves wider recognition.
Abstract:
Purpose. To devise and validate artist-rendered grading scales for contact lens complications. Methods. Each of eight tissue complications of contact lens wear (listed under 'Results') was painted by a skilled ophthalmic artist (Terry R. Tarrant) in five grades of severity: 0 (normal), 1 (trace), 2 (mild), 3 (moderate) and 4 (severe). A representative slit lamp photograph of a tissue response for each of the eight complications was shown to 404 contact lens practitioners who had never before used clinical grading scales. The practitioners were asked to grade each tissue response to the nearest 0.1 grade unit by interpolation. Results. The standard deviation (± s.d.) of the 404 responses for each tissue complication ranged from 0.4 to 0.7 grade units: corneal staining, 0.5; endothelial polymegethism, 0.7; epithelial microcysts, 0.5; endothelial blebs, 0.4; conjunctival hyperemia, 0.4; stromal neovascularization, 0.4; papillary conjunctivitis, 0.5; with stromal edema in the same range. The frequency distributions and best-fit normal curves were also plotted. The precision of grading (s.d. x 2) ranged from 0.8 to 1.4, with a mean precision of 1.0. Conclusions. Grading scales afford contact lens practitioners a method of quantifying the severity of adverse tissue responses to contact lens wear. It is noteworthy that the statistically verified precision of grading (1.0 scale unit) concurs precisely with the essential design feature of the grading scales that each grading step of 1.0 corresponds to a clinically significant difference in severity. Thus, as a general rule, a difference or change in grade of > 1.0 can be taken to be both clinically and statistically significant when using these grading scales. Trained observers are likely to achieve even greater grading precision. Supported by Hydron Limited.