989 results for Mandible deviation
Abstract:
Near infrared spectroscopy (NIRS) combined with multivariate analysis techniques was applied to assess the phenol content of European oak. NIRS data were first collected directly from solid heartwood surfaces: the spectra were recorded separately from the longitudinal radial and the transverse section surfaces by diffuse reflectance. The spectral data were then pretreated by several pre-processing procedures, such as multiplicative scatter correction, first derivative, second derivative and standard normal variate. The tannin contents of sawdust collected from the longitudinal radial and transverse section surfaces were determined by quantitative extraction with water/methanol (1:4, by vol). Total phenol contents in the tannin extracts were then measured by the Folin-Ciocalteu method, and the NIR data were correlated against the Folin-Ciocalteu results. Calibration models built with partial least squares regression displayed strong correlation between measured and predicted total phenol content, as expressed by a high coefficient of determination (r2) and a high ratio of performance to deviation (RPD), together with low calibration and prediction errors (RMSEC, RMSEP). The best calibration was obtained with second derivative spectra (r2 of 0.93 for the longitudinal radial plane and 0.91 for the transverse section plane). This study illustrates that the NIRS technique, used in conjunction with multivariate analysis, could provide a reliable, quick and non-destructive assessment of European oak heartwood extractives.
Abstract:
From a study of 3 large half-sib families of cattle, we describe linkage between DNA polymorphisms on bovine chromosome 7 and meat tenderness. Quantitative trait loci (QTL) for Longissimus lumborum peak force (LLPF) and Semitendinosus adhesion (STADH) were located on this map of DNA markers, which includes the calpastatin (CAST) and lysyl oxidase (LOX) genes. The LLPF QTL has a maximum lod score of 4.9 and an allele substitution effect of approximately 0.80 of a phenotypic standard deviation, and its peak is located over the CAST gene. The STADH QTL has a maximum lod score of 3.5 and an allele substitution effect of approximately 0.37 of a phenotypic standard deviation, and its peak is located over the LOX gene. This suggests 2 separate likelihood peaks on the chromosome. Further analyses of meat tenderness measures in the Longissimus lumborum, LLPF and LL compression (LLC), in which outlier individuals or kill groups are removed, demonstrate large shifts in the location of the LLPF QTL, as well as confirming that there are indeed 2 QTL on bovine chromosome 7. We found that both QTL are reflected in both LLPF and LLC measurements, suggesting that both components of tenderness, myofibrillar and connective tissue, are detected by both measurements in this muscle.
Abstract:
Soils with high levels of chloride and/or sodium in their subsurface layers are often referred to as having subsoil constraints (SSCs). There is growing evidence that SSCs affect wheat yields by increasing the lower limit of a crop's available soil water (CLL) and thus reducing the soil's plant-available water capacity (PAWC). This proposal was tested by simulation of 33 farmers' paddocks in south-western Queensland and north-western New South Wales. The simulated results accounted for 79% of observed variation in grain yield, with a root mean squared deviation (RMSD) of 0.50 t/ha. This result was as close as any achieved from sites without SSCs, thus providing strong support for the proposed mechanism that SSCs affect wheat yields by increasing the CLL and thus reducing the soil's PAWC. In order to reduce the need to measure CLL of every paddock or management zone, two additional approaches to simulating the effects of SSCs were tested. In the first approach the CLL of soils was predicted from the 0.3-0.5 m soil layer, which was taken as the reference CLL of a soil regardless of its level of SSCs, while the CLL values of soil layers below 0.5 m depth were calculated as a function of these soils' 0.3-0.5 m CLL values as well as of soil depth plus one of the SSC indices EC, Cl, ESP, or Na. The best estimates of subsoil CLL values were obtained when the effects of SSCs were described by an ESP-dependent function. In the second approach, depth-dependent CLL values were also derived from the CLL values of the 0.3-0.5 m soil layer. However, instead of using SSC indices to further modify CLL, the default values of the water-extraction coefficient (kl) of each depth layer were modified as a function of the SSC indices. The strength of this approach was evaluated on the basis of correlation of observed and simulated grain yields. In this approach the best estimates were obtained when the default kl values were multiplied by a Cl-determined function. 
The kl approach was also evaluated with respect to simulated soil moisture at anthesis and at grain maturity. Results using this approach were highly correlated with soil moisture results obtained from simulations based on the measured CLL values. This research provides strong evidence that the effects of SSCs on wheat yields are accounted for by the effects of these constraints on wheat CLL values. The study also produced two satisfactory methods for simulating the effects of SSCs on CLL and on grain yield. While Cl and ESP proved to be effective indices of SSCs, EC was not effective due to the confounding effect of the presence of gypsum in some of these soils. This study provides the tools necessary for investigating the effects of SSCs on wheat crop yields and natural resource management (NRM) issues such as runoff, recharge, and nutrient loss through simulation studies. It also facilitates investigation of suggested agronomic adaptations to SSCs.
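The second approach described above can be sketched in a few lines. The abstract does not give the fitted Cl-determined function, so the exponential form, the constant, and the layer values below are purely illustrative assumptions about how default kl values might be scaled down as subsoil chloride rises.

```python
import math

def modified_kl(kl_default, cl_mg_per_kg, c=0.0008):
    """Scale a soil layer's default water-extraction coefficient (kl)
    downward as subsoil chloride increases. The exponential form and
    constant c are assumptions for illustration, not the fitted function."""
    return kl_default * math.exp(-c * cl_mg_per_kg)

# illustrative layer table: (depth, default kl, Cl concentration in mg/kg)
layers = [
    ("0.3-0.5 m", 0.08, 100),
    ("0.5-0.9 m", 0.06, 600),
    ("0.9-1.5 m", 0.04, 1500),
]
adjusted = {depth: modified_kl(kl, cl) for depth, kl, cl in layers}
```

The appeal of this formulation is that only routinely measured SSC indices are needed per layer, avoiding direct CLL measurement in every paddock.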
Abstract:
This study examines leadership skills in a municipal organisation, reflecting managers' views on the skills required. The purpose was to identify the leadership skills most important now and in the future, as well as how well these skills are mastered. The study also examines the importance of change and the development needs of leadership skills. In addition, the effect of background variables on the evaluation of leadership skills was examined. A quantitative research method was used. The material was collected with a structured questionnaire from 324 managers of the city of Kotka and analysed with SPSS software, with factor analysis as the main method; means and standard deviations were also used to describe the results. Based on the results, the leadership skills most important now and in the future are associated with internet skills, work control, problem solving and human resource management. Managers expected the importance of leadership skills to grow in the future, mainly in software utilisation, language skills, communication skills and financial leadership. According to the managers, their strongest competence lies in internet skills; they also considered themselves to have a good command of skills related to employee know-how and manager networking. Nevertheless, significant development needs in leadership skills were identified, chiefly in software utilisation, work control, human resource management and problem solving. Notably, apart from software utilisation, the main improvement areas appeared in the leadership skills evaluated as most important. Position, municipal segment and sex explained most of the variation in the responses.
Abstract:
Grass (monocots) and non-grass (dicots) proportions in ruminant diets are important nutritionally because the non-grasses are usually higher in nutritive value, particularly protein, than the grasses, especially in tropical pastures. For ruminants grazing tropical pastures where the grasses are C-4 species and most non-grasses are C-3 species, the ratio of C-13/C-12 in diet and faeces, measured as delta C-13 parts per thousand, is proportional to dietary non-grass%. This paper describes the development of a faecal near infrared (NIR) spectroscopy calibration equation for predicting faecal delta C-13 from which dietary grass and non-grass proportions can be calculated. Calibration development used cattle faeces derived from diets containing only C-3 non-grass and C-4 grass components, and a series of expansion and validation steps was employed to develop robustness and predictive reliability. The final calibration equation contained 1637 samples and covered a faecal delta C-13 range of -27.65 to -12.27 parts per thousand. Calibration statistics were: standard error of calibration (SEC) of 0.78, standard error of cross-validation (SECV) of 0.80, standard deviation (SD) of reference values of 3.11 and R-2 of 0.94. Validation statistics for the final calibration equation applied to 60 samples were: standard error of prediction (SEP) of 0.87, bias of -0.15, R-2 of 0.92 and RPD of 3.16. The calibration equation was also tested on faeces from diets containing C-4 non-grass species or temperate C-3 grass species. Faecal delta C-13 predictions indicated that the spectral basis of the calibration was not related to C-13/C-12 ratios per se but to consistent differences between grasses and non-grasses in chemical composition, and that these differences were modified by photosynthetic pathway.
Thus, although the calibration equation could not be used to make valid faecal delta C-13 predictions when the diet contained either C-3 grass or C-4 non-grass, it could be used to make useful estimates of dietary non-grass proportions. It could also be utilised to make useful estimates of non-grass in mixed C-3 grass/non-grass diets by applying a modified formula to calculate non-grass from predicted faecal delta C-13. The development of a robust faecal-NIR calibration equation for estimating non-grass proportions in the diets of grazing cattle demonstrated a novel and useful application of NIR spectroscopy in agriculture.
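The dietary calculation underlying this approach is simple two-end-member isotope mixing; a sketch follows. The end-member delta C-13 values below are typical literature figures for C-4 grasses and C-3 non-grasses, not the calibration constants used in the study.

```python
def nongrass_fraction(delta_faecal, delta_c4_grass=-13.0, delta_c3_nongrass=-27.0):
    """Two-pool carbon-isotope mixing: fraction of C-3 non-grass in the diet.

    End-member delta C-13 values are typical literature figures (assumed
    here for illustration); the result is clamped to [0, 1]."""
    f = (delta_faecal - delta_c4_grass) / (delta_c3_nongrass - delta_c4_grass)
    return min(1.0, max(0.0, f))

# a faecal delta C-13 midway between the end-members implies a 50:50 diet
halfway = nongrass_fraction(-20.0)
```

In the NIR application, `delta_faecal` would be the spectroscopically predicted value rather than a mass-spectrometer measurement.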
Abstract:
The early detection of hearing deficits is important to a child's development. However, examining small children with behavioural methods is often difficult. Research with ERPs (event-related potentials), recorded with EEG (electroencephalography), does not require attention or action from the child. Especially in children's ERP research, it is essential that the duration of a recording session is not too long. A new, faster optimum paradigm has been developed to record MMN (mismatch negativity), in which ERPs to several sound features can be recorded in one recording session. This substantially shortens the time required for the experiment. So far, the new paradigm has been used in research with adults and school-aged children. This study examines whether MMN, LDN (late discriminative negativity) and P3a components can be recorded in two-year-olds with the new paradigm. The standard stimulus (p=0.50) was an 80 dB harmonic tone consisting of three harmonic frequencies (500 Hz, 1000 Hz and 1500 Hz) with a duration of 200 ms. The loudness deviants (p=0.067) were at a level of +6 dB or -6 dB compared to the standards. The frequency deviants (p=0.112) had a fundamental frequency of 550 or 454.4 Hz (small deviation), 625 or 400 Hz (medium deviation) or 750 or 333.3 Hz (large deviation). The duration deviants (p=0.112) had a duration of 175 ms (small deviation), 150 ms (medium deviation) or 100 ms (large deviation). The direction deviants (p=0.067) were presented from the left or right loudspeaker only. The gap deviant (p=0.067) included a 5-ms silent gap in the middle of the sound. Altogether 17 children participated in the experiment, and the data of 12 of them were used in the analysis. ERP components were observed for all deviant types. The MMN was significant for duration and gap deviants. The LDN was significant for the large duration deviant and all other deviant types. No significant P3a was observed.
These results indicate that the optimum paradigm can be used with two-year-olds. With this paradigm, data on several sound features can be recorded in a shorter time than with the previous paradigms used in ERP research.
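A stimulus sequence with the reported category probabilities can be assembled as sketched below. The probabilities are taken from the abstract; the sampling scheme and the no-repeated-deviant constraint are illustrative assumptions, not the paradigm's exact trial-ordering rules.

```python
import random

# Category probabilities as reported in the abstract; any remaining
# probability mass in the real paradigm belongs to further sub-variants.
probs = {
    "standard": 0.50,
    "frequency": 0.112,
    "duration": 0.112,
    "loudness": 0.067,
    "direction": 0.067,
    "gap": 0.067,
}

def make_sequence(n_trials, probs, seed=0):
    """Sample a trial sequence; as a common (assumed) constraint, the same
    deviant type is not allowed twice in a row."""
    rng = random.Random(seed)
    types = list(probs)
    weights = [probs[t] for t in types]
    seq, prev = [], None
    for _ in range(n_trials):
        t = rng.choices(types, weights)[0]
        while t != "standard" and t == prev:
            t = rng.choices(types, weights)[0]
        seq.append(t)
        prev = t
    return seq

seq = make_sequence(1000, probs)
```

Each trial label would then be mapped to the corresponding synthesized tone (standard, frequency deviant, gap deviant, and so on) at presentation time.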
Abstract:
Purpose: To examine the change in corneal thickness and posterior curvature following 8 hours of miniscleral contact lens wear. Methods: Scheimpflug imaging (Pentacam HR, Oculus) was captured before, and immediately following, 8 hours of miniscleral contact lens wear for 15 young (mean age 22 ± 3 years), healthy participants with normal corneae. Natural diurnal variations were considered by measuring baseline corneal changes obtained on a separate control day without contact lens wear. Results: Over the central 6 mm of the cornea, a small, but highly statistically significant amount of edema was observed following 8 hours of miniscleral lens wear, after accounting for normal diurnal fluctuations (mean ± standard deviation percentage swelling 1.70 ± 0.98%, p < 0.0001). Posterior corneal topography remained stable following lens wear (-0.01 ± 0.07 mm steepening over the central 6 mm, p = 0.60). The magnitude of posterior corneal topographical changes following lens wear did not correlate with the extent of lens-related corneal edema (r = -0.16, p = 0.57). Similarly, the initial central corneal vault (maximum post-lens tear layer depth) was not associated with corneal swelling following lens removal (r = 0.27, p = 0.33). Conclusions: While a small amount of corneal swelling was induced following 8 hours of miniscleral lens wear (on average <2%), modern high Dk miniscleral contact lenses that vault the cornea do not induce clinically significant corneal edema or hypoxic related posterior corneal curvature changes during short-term wear. Longer-term studies of compromised eyes (e.g. corneal ectasia) are still required to inform the optimum lens and fitting characteristics for safe scleral lens wear to minimize corneal hypoxia.
Abstract:
1. Many organisms inhabit strongly fluctuating environments but their demography and population dynamics are often analysed using deterministic models and elasticity analysis, where elasticity is defined as the proportional change in population growth rate caused by a proportional change in a vital rate. Deterministic analyses may not necessarily be informative because large variation in a vital rate with a small deterministic elasticity may affect the population growth rate more than a small change in a less variable vital rate having high deterministic elasticity. 2. We analyse a stochastic environment model of the red kangaroo (Macropus rufus), a species inhabiting an environment characterized by unpredictable and highly variable rainfall, and calculate the elasticity of the stochastic growth rate with respect to the mean and variability in vital rates. 3. Juvenile survival is the most variable vital rate but a proportional change in the mean adult survival rate has a much stronger effect on the stochastic growth rate. 4. Even though changes in average rainfall have a larger impact on population growth rate, increased variability in rainfall may still be important, also in long-lived species. The elasticity with respect to the standard deviation of rainfall is comparable to the mean elasticities of all vital rates except survival in age class 3, because increased variation in rainfall affects both the mean and variability of vital rates. 5. Red kangaroos are harvested and, under the current rainfall pattern, an annual harvest fraction of c. 20% would yield a stochastic growth rate of about unity. However, if average rainfall drops by more than c. 10%, any level of harvesting may be unsustainable, emphasizing the need for integrating climate change predictions in population management and increasing our understanding of how environmental stochasticity translates into population growth rate.
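The contrast in point 3 can be made concrete with a Monte-Carlo sketch: simulate the log stochastic growth rate of a small stage-structured model and estimate elasticities by proportionally perturbing mean vital rates. The 2-stage structure and all vital-rate values below are illustrative assumptions, not the kangaroo model's estimates.

```python
import numpy as np

def stochastic_log_growth(mean_s_juv, mean_s_ad, fecundity, sd_juv,
                          n_years=5000, seed=1):
    """Monte-Carlo log stochastic growth rate of a hypothetical 2-stage
    model in which only juvenile survival varies between years."""
    rng = np.random.default_rng(seed)
    v = np.array([0.5, 0.5])      # stage distribution (juveniles, adults)
    total = 0.0
    for _ in range(n_years):
        s_juv = float(np.clip(rng.normal(mean_s_juv, sd_juv), 0.0, 1.0))
        A = np.array([[0.0, fecundity],
                      [s_juv, mean_s_ad]])
        v = A @ v
        norm = v.sum()
        total += np.log(norm)      # accumulate log of yearly growth
        v = v / norm               # renormalize to avoid overflow
    return total / n_years

base = stochastic_log_growth(0.40, 0.85, 0.60, 0.15)
eps = 0.01  # 1% proportional perturbation of a mean vital rate
elast_adult = (stochastic_log_growth(0.40, 0.85 * (1 + eps), 0.60, 0.15) - base) / eps
elast_juv = (stochastic_log_growth(0.40 * (1 + eps), 0.85, 0.60, 0.15) - base) / eps
```

Using the same random seed for the base and perturbed runs makes the finite-difference elasticities stable; in this sketch, as in the abstract, the highly variable juvenile rate has the smaller mean elasticity.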
Abstract:
In dentistry, basic imaging techniques such as intraoral and panoramic radiography are in most cases the only imaging techniques required for the detection of pathology. Conventional intraoral radiographs provide images with sufficient information for most dental radiographic needs. Panoramic radiography produces a single image of both jaws, giving an excellent overview of oral hard tissues. Regardless of the technique, plain radiography has only a limited capability in the evaluation of three-dimensional (3D) relationships. Technological advances in radiological imaging have moved from two-dimensional (2D) projection radiography towards digital, 3D and interactive imaging applications. This has been achieved first by the use of conventional computed tomography (CT) and more recently by cone beam CT (CBCT). CBCT is a radiographic imaging method that allows accurate 3D imaging of hard tissues. CBCT has been used for dental and maxillofacial imaging for more than ten years and its availability and use are increasing continuously. However, at present, only best practice guidelines are available for its use, and the need for evidence-based guidelines on the use of CBCT in dentistry is widely recognized. We evaluated (i) retrospectively the use of CBCT in a dental practice, (ii) the accuracy and reproducibility of pre-implant linear measurements in CBCT and multislice CT (MSCT) in a cadaver study, (iii) prospectively the clinical reliability of CBCT as a preoperative imaging method for complicated impacted lower third molars, and (iv) the tissue and effective radiation doses and image quality of dental CBCT scanners in comparison with MSCT scanners in a phantom study. Using CBCT, subjective identification of anatomy and pathology relevant in dental practice can be readily achieved, but dental restorations may cause disturbing artefacts. CBCT examination offered additional radiographic information when compared with intraoral and panoramic radiographs. 
In terms of the accuracy and reliability of linear measurements in the posterior mandible, CBCT is comparable to MSCT. CBCT is a reliable means of determining the location of the inferior alveolar canal and its relationship to the roots of the lower third molar. CBCT scanners provided adequate image quality for dental and maxillofacial imaging while delivering considerably smaller effective doses to the patient than MSCT. The observed variations in patient dose and image quality emphasize the importance of optimizing the imaging parameters in both CBCT and MSCT.
Abstract:
The aim of the present study was to assess oral health and treatment needs among adult Iranians according to socio-demographic status, smoking, and oral hygiene, and to investigate the relationships between these determinants and oral health. Data for 4448 young adult (aged 18) and 8301 middle-aged (aged 35 to 44) Iranians were collected in 2002 as part of a national survey using the World Health Organization (WHO) criteria for sampling and clinical diagnoses, across 28 provinces by 33 calibrated examiners. Gender, age, place of residence, and level of education served as socio-demographic information, smoking as behavioural and modified plaque index (PI) as the biological risk indicator for oral hygiene. Number of teeth, decayed teeth (DT), filled teeth (FT), decayed, missing, filled teeth (DMFT), community periodontal index (CPI), and prosthodontic rehabilitation served as outcome variables of oral health. Mean number of DMFT was 4.3 (Standard deviation (SD) = 3.7) in young adults and 11.0 (SD = 6.4) among middle-aged individuals. Among young adults the D-component (DT = 70%), and among middle-aged individuals the M-component (60%) dominated in the DMFT index. Among young adults, visible plaque was found in nearly all subjects. Maximum (max) PI was associated with higher mean number of DT, and higher periodontal treatment needs. A healthy periodontium was a rare condition, with 8% of young adults and 1% of middle-aged individuals having a max CPI = 0. The majority of the CPI findings among young adults consisted of calculus (48%) and deepened periodontal pockets (21%). Respective values for middle-aged individuals were 40% and 53%. Having a deep pocket (max CPI = 4) was more likely among young adults with a low level of education (Odds ratio (OR) = 2.7, 95% Confidence interval (CI) = 1.9–4.0) than it was among well-educated individuals. 
Among middle-aged individuals, having calculus or a periodontal pocket was more likely in men (OR = 1.8, 95% CI = 1.6–2.0) and in illiterate subjects (OR = 6.3, 95% CI = 5.1–7.8) than it was for their counterparts. Among young adults, having 28 teeth was more (p < 0.05) prevalent among men (72% vs. 68% for women), urban residents (71% vs. 67% for rural residents), and those with a high level of education (73% vs. 60% for those with a low level). Among middle-aged individuals, having a functional dentition was associated with younger age (OR = 2.0, 95% CI = 1.7−2.5) and higher level of education (OR = 1.8, 95% CI = 1.6−2.1). Of middle-aged individuals, 2% of 35- to 39-year-olds and 5% of those aged 40 to 44 were edentulous. Among the dentate subjects (n = 7,925), prosthodontic rehabilitation was more prevalent (p < 0.001) among women, urban residents, and those with a high level of education than it was among their counterparts. Among those having 1 to 19 teeth, a removable denture was the most common type of prosthodontic rehabilitation. Middle-aged individuals lacking a functional dentition were more likely (OR = 6.0, 95% CI = 4.8−7.6) to have prosthodontic rehabilitation than were those having a functional dentition. In total, 81% of all reported being non-smokers, and 32% of men and 5% of women were current smokers. Heavy smokers were the most likely to have deepened periodontal pockets (max CPI ≥ 3, OR = 2.9, 95% CI = 1.8−4.7) and to have less than 20 teeth (OR = 2.3, 95% CI = 1.5−3.6). The findings indicate impaired oral health status in adult Iranians, particularly those of low socio-economic status and educational level. The high prevalence of dental plaque and calculus and considerable unmet treatment needs call for a preventive population strategy with special emphasis on the improvement of oral self-care and smoking cessation to tackle the underlying risk factors for oral diseases in the Iranian adult population.
Abstract:
Class II division 1 malocclusion occurs in 3.5 to 13 percent of 7- to 12-year-old children. It is the most common reason for orthodontic treatment in Finland. Correction is most commonly performed using headgear treatment. The aim of this study was to investigate the effects of cervical headgear treatment on dentition, facial skeletal and soft tissue growth, and upper airway structure, in children. 65 schoolchildren (36 boys and 29 girls) were studied. At the onset of treatment the mean age was 9.3 (range 6.6-12.4) years. All the children were consecutively referred to an orthodontist because of Class II division 1 malocclusion. The included children had a protrusive maxilla and an overjet of more than 2 mm (3 to 11 mm). The children were treated with a Kloehn-type cervical headgear as the only appliance until Class I first molar relationships were achieved. The essential features of the headgear were strong cervical pulling forces, a long upward-bent outer bow, and an expanded inner bow. Dental casts and lateral and posteroanterior cephalograms were taken before and after the treatment. The results were compared to a historical, cross-sectional Finnish cohort or to historical, age- and sex-matched normal Class I controls. Class I first molar relationships were achieved in all the treated children. The mean treatment time was 1.7 (range 0.3-3.1) years. Phase 2 treatments were needed in 52% of the children, most often because of excess overjet or overbite. The treatment decreased maxillary protrusion by inhibiting alveolar forward growth, while the rest of the maxilla and mandible followed normal growth. The palate rotated anteriorly downward. The expansion of the inner bow of the headgear induced widening of the maxilla, nasal cavity, and the upper and lower dental arches. Class II malocclusion was associated with a narrower oro- and hypopharyngeal space than in the Class I normal controls.
The treatment increased the retropalatal airway space, while the rest of the airway remained unaffected. The facial profile improved esthetically, while the facial convexity decreased. Facial soft tissues masked the facial skeletal convexity, and the soft tissue changes were smaller than skeletal changes. In conclusion, the headgear treatment with the expanded inner bow may be used as an easy and simple method for Class II correction in growing children.
Abstract:
C13H18N2O5S, Mr = 314.35, orthorhombic, P2₁2₁2₁ with a = 39.526 (4), b = 6.607 (2), c = 5.661 (2) Å, Z = 4, V = 1478.36 Å³, Dc = 1.412 Mg m⁻³, Cu Kα radiation. Final R = 0.073 for 1154 observed counter reflections. The sulphur atom is in a pseudo-equatorial position with respect to the dihydrouracil ring. The sugar pucker is predominantly O(1′)-exo, unlike the C(3′)-exo,C(4′)-endo observed for 2′,3′-O-isopropylideneuridine (ISPU). The five-membered dioxolane ring has C(7) displaced by 0.497 (7) Å from the best plane through atoms O(2′), C(2′), C(3′), O(3′), in contrast to ISPU where O(3′) shows the maximum deviation.
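The quoted displacement is the distance of an atom from the least-squares plane through a set of ring atoms; the calculation can be sketched generically as below. The coordinates used are invented for illustration and are not the published crystal coordinates.

```python
import numpy as np

def deviation_from_best_plane(ring_atoms, atom):
    """Distance of `atom` from the least-squares plane through `ring_atoms`.

    The plane normal is the singular vector belonging to the smallest
    singular value of the centred coordinate matrix."""
    pts = np.asarray(ring_atoms, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                       # unit normal of the best-fit plane
    return abs(float(np.dot(np.asarray(atom, float) - centroid, normal)))

# invented, roughly planar four-atom "ring" plus a displaced apex atom
ring = [(0.0, 0.0, 0.0), (1.4, 0.1, 0.0), (2.1, 1.3, -0.1), (0.7, 2.2, 0.1)]
apex = (1.0, 1.0, 0.5)
d = deviation_from_best_plane(ring, apex)
```

For a perfectly planar reference set the result reduces to the familiar point-to-plane distance.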
Abstract:
Interindividual variation in mean leukocyte telomere length (LTL) is associated with cancer and several age-associated diseases. We report here a genome-wide meta-analysis of 37,684 individuals with replication of selected variants in an additional 10,739 individuals. We identified seven loci, including five new loci, associated with mean LTL (P < 5 x 10(-8)). Five of the loci contain candidate genes (TERC, TERT, NAF1, OBFC1 and RTEL1) that are known to be involved in telomere biology. Lead SNPs at two loci (TERC and TERT) associate with several cancers and other diseases, including idiopathic pulmonary fibrosis. Moreover, a genetic risk score analysis combining lead variants at all 7 loci in 22,233 coronary artery disease cases and 64,762 controls showed an association of the alleles associated with shorter LTL with increased risk of coronary artery disease (21% (95% confidence interval, 5-35%) per standard deviation in LTL, P = 0.014). Our findings support a causal role of telomere-length variation in some age-related diseases.
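A genetic risk score of the kind used in the coronary artery disease analysis is a weighted count of risk alleles; a sketch follows. The SNP labels, effect sizes, and genotypes below are placeholders for illustration, not the published lead variants or weights.

```python
def genetic_risk_score(genotypes, weights):
    """Weighted allele score: sum over loci of risk-allele count x effect size.

    `genotypes` maps SNP name -> risk-allele count (0, 1 or 2);
    `weights` maps SNP name -> per-allele effect size. All values here
    are hypothetical placeholders."""
    return sum(genotypes[snp] * w for snp, w in weights.items())

weights = {"TERC_snp": 0.12, "TERT_snp": 0.09, "OBFC1_snp": 0.07}  # hypothetical
person = {"TERC_snp": 2, "TERT_snp": 1, "OBFC1_snp": 0}            # allele counts
score = genetic_risk_score(person, weights)
```

In the study's design, carriers of higher scores (more short-LTL alleles) would then be compared between case and control groups.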
Abstract:
In humans, who lack uricase, the final oxidation product of purine catabolism is uric acid (UA). The prevalence of hyperuricemia has been increasing around the world, accompanied by a rapid increase in obesity and diabetes. Since hyperuricemia was first described as being associated with hyperglycemia and hypertension by Kylin in 1923, there has been a growing interest in the association between elevated UA and other metabolic abnormalities of hyperglycemia, abdominal obesity, dyslipidemia, and hypertension. The direction of causality between hyperuricemia and metabolic disorders, however, is uncertain. The association of UA with metabolic abnormalities still needs to be delineated in population samples. Our overall aims were to study the prevalence of hyperuricemia and the metabolic factors clustering with hyperuricemia, to explore the dynamic changes in blood UA levels with the deterioration in glucose metabolism, and to estimate the predictive capability of UA in the development of diabetes. Four population-based surveys for diabetes and other non-communicable diseases were conducted in 1987, 1992, and 1998 in Mauritius, and in 2001-2002 in Qingdao, China. The Qingdao study comprised 1 288 Chinese men and 2 344 women aged 20 to 74, and the Mauritius study consisted of 3 784 Mauritian Indian and Mauritian Creole men and 4 442 women aged 25 to 74. In Mauritius, re-examinations were carried out in 1992 and/or 1998 for 1 941 men (1 409 Indians and 532 Creoles) and 2 318 non-pregnant women (1 645 Indians and 673 Creoles), free of diabetes, cardiovascular diseases, and gout at baseline examinations in 1987 or 1992, using the same study protocol. The questionnaire collected demographic details; physical examinations and standard 75 g oral glucose tolerance tests were performed in all cohorts. Fasting blood UA and lipid profiles were also determined.
The age-standardized prevalence in Chinese living in Qingdao was 25.3% for hyperuricemia (defined as fasting serum UA > 420 μmol/l in men and > 360 μmol/l in women) and 0.36% for gout in adults aged 20 to 74. Hyperuricemia was more prevalent in men than in women. A one standard deviation increase in UA concentration was associated with the clustering of metabolic risk factors for both men and women in all three ethnic groups. Waist circumference, body mass index, and serum triglycerides appeared to be independently associated with hyperuricemia in both sexes and in all ethnic groups except Chinese women, in whom triglycerides, high-density lipoprotein cholesterol, and total cholesterol were associated with hyperuricemia. Serum UA increased with increasing fasting plasma glucose levels up to a value of 7.0 mmol/l, but significantly decreased thereafter in mainland Chinese. An inverse relationship occurred between 2-h plasma glucose and serum UA when 2-h plasma glucose was higher than 8.0 mmol/l. In the prospective study in Mauritius, 337 (17.4%) men and 379 (16.4%) women developed diabetes during the follow-up. Elevated baseline UA levels were associated with a 1.14-fold increased risk of incident diabetes in Indian men and a 1.37-fold increase in Creole men, but no significant risk was observed in women. In conclusion, the prevalence of hyperuricemia was high in Chinese in Qingdao, blood UA was associated with the clustering of metabolic risk factors in Mauritian Indians, Mauritian Creoles, and Chinese living in Qingdao, and a high baseline UA level independently predicted the development of diabetes in Mauritian men. The clinical use of UA as a marker of hyperglycemia and other metabolic disorders needs to be further studied. Keywords: Uric acid, Hyperuricemia, Risk factors, Type 2 Diabetes, Incidence, Mauritius, Chinese
Abstract:
Objective: In Australian residential aged care facilities (RACFs), the use of certain classes of high-risk medication such as antipsychotics, potent analgesics, and sedatives is high. Here, we examined the prescribed medications and subsequent changes recommended by geriatricians during comprehensive geriatric consultations provided to residents of RACFs via videoconference. Design: This is a prospective observational study. Setting: Four RACFs in Queensland, Australia, are included. Participants: A total of 153 residents referred by general practitioners for comprehensive assessment by geriatricians delivered by video-consultation. Results: Residents’ mean (standard deviation, SD) age was 83.0 (8.1) years and 64.1% were female. They had multiple comorbidities (mean 6), high levels of dependency, and were prescribed a mean (SD) of 9.6 (4.2) regular medications. Ninety-one percent of patients were taking five or more medications daily. Of the total medications prescribed (n=1,469), geriatricians recommended withdrawal of 9.8% (n=145) and dose alteration of 3.5% (n=51). New medications were initiated in 47.7% (n=73) of patients. Of the 10.3% (n=151) of medications considered high risk, 17.2% were stopped and the dose was altered in 2.6%. Conclusion: There was a moderate prevalence of potentially inappropriate high-risk medications. However, geriatricians made relatively few changes, suggesting either that, on balance, prescription of these medications was appropriate or that, because of other factors, there was a reluctance to adjust medications. A structured medication review using an algorithm for withdrawing medications of high disutility might help optimize medications in frail patients. Further research, including a broader survey, is required to understand these dynamics.