993 results for 1995_03270825 TM-47 4501707


Relevance: 20.00%

Publisher:

Abstract:

Background: Although the non-operative management of closed humeral midshaft fractures has been advocated for years, the increasing popularity of operative intervention has left the optimal treatment choice unclear.

Objective: To compare the outcomes of operative and non-operative treatment of traumatic closed humeral midshaft fractures in adult patients.

Methods: A multicentre prospective comparative cohort study was conducted across 20 centres. Patients with AO type 12 A2, A3 and B2 fractures were treated with a functional brace or a retrograde-inserted unreamed humeral nail. Follow-up measurements were taken at 6, 12 and 52 weeks after the injury. The primary outcome was fracture healing after 1 year. Secondary outcomes included sub-items of the Constant score, general patient satisfaction, complications and cost-effectiveness parameters. Functions of the uninjured extremity were used as reference parameters. Intention-to-treat analysis was applied using t-tests, Fisher's exact tests, Mann-Whitney U-tests and adjusted analysis of variance (ANOVA).

Results: Forty-seven patients were included: 23 women and 24 men, with a mean age of 52.7 years (range 17-86 years). Of the 47 cases, 14 were treated non-operatively and 33 operatively. The follow-up rate at 1 year was 81%. After 1 year, 11 fractures (100%) had healed in the non-operative group and at least 24 fractures (≥89%) had healed in the operative group [1 non-union patient (4%) and no data for 2 patients (7%)]. There were no significant differences in pain, range of motion (ROM) of the shoulder and elbow, or return to work after 6 weeks, 12 weeks and 1 year. Although operatively treated patients showed significantly greater shoulder abduction strength (p = 0.036), elbow flexion strength (p = 0.021), functional hand positioning (p = 0.008) and return to recreational activities (p = 0.043) after 6 weeks, no statistically significant differences existed in any outcome measure at the 1-year follow-up.

Conclusions: Our findings indicate that the non-operative management of humeral midshaft fractures can be expected to yield similar functional outcomes and patient satisfaction at 1 year, despite an early benefit of operative treatment. If no radiological evidence of fracture healing exists in non-operatively treated patients during early follow-up, a switch to surgical treatment results in good functional outcomes and patient satisfaction.

Keywords: Humeral shaft fracture, Non-operative treatment, Functional brace, Operative treatment, Unreamed humeral nail (UHN), Prospective, Cohort study
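The between-group comparisons described above rest on standard tests (Welch's t-test, Mann-Whitney U, Fisher's exact). A minimal sketch with SciPy follows; the data are synthetic and the variable names hypothetical, intended only to illustrate the named tests, not to reproduce the study's analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Constant-score sub-item values for the two treatment arms
operative = rng.normal(75, 10, 33)
non_operative = rng.normal(72, 12, 14)

# Welch's t-test (no equal-variance assumption) for a continuous outcome
t_stat, t_p = stats.ttest_ind(operative, non_operative, equal_var=False)

# Mann-Whitney U-test for a non-normally distributed outcome
u_stat, u_p = stats.mannwhitneyu(operative, non_operative)

# Fisher's exact test on a 2x2 healing table (healed vs. not healed per arm)
odds_ratio, f_p = stats.fisher_exact([[24, 1], [11, 0]])
```

Each call returns a test statistic and a two-sided p-value; in an intention-to-treat analysis, patients are kept in the arm they were randomized or assigned to regardless of crossover.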

Relevance: 20.00%

Abstract:

In this study, we evaluated agreement among three generations of ActiGraph™ accelerometers in children and adolescents. Twenty-nine participants (mean age = 14.2 ± 3.0 years) completed two laboratory-based activity sessions, each lasting 60 min. During each session, participants concurrently wore three different models of the ActiGraph™ accelerometer (GT1M, GT3X, GT3X+). Agreement among the three models for vertical-axis counts, vector-magnitude counts, and time spent in moderate-to-vigorous physical activity (MVPA) was evaluated by calculating intraclass correlation coefficients and Bland-Altman plots. The intraclass correlation coefficients for total vertical-axis counts, total vector-magnitude counts, and estimated MVPA were 0.994 (95% CI = 0.989-0.996), 0.981 (95% CI = 0.969-0.989), and 0.996 (95% CI = 0.989-0.998), respectively. Inter-monitor differences for total vertical-axis and vector-magnitude counts ranged from 0.3% to 1.5%, while inter-monitor differences for estimated MVPA were equal to or close to zero. On the basis of these findings, we conclude that there is strong agreement between the GT1M, GT3X, and GT3X+ activity monitors, making it acceptable for researchers and practitioners to use different ActiGraph™ models within a given study.
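The Bland-Altman statistics used above (mean bias and 95% limits of agreement between two devices) can be sketched in a few lines of NumPy. The counts below are synthetic and the monitor offset is invented; this only illustrates the calculation, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical total vertical-axis counts from two monitor models worn together
gt1m = rng.normal(500_000, 50_000, 29)
gt3x = gt1m + rng.normal(2_000, 5_000, 29)  # small simulated inter-monitor offset

diff = gt3x - gt1m
bias = diff.mean()                      # mean inter-monitor difference
sd = diff.std(ddof=1)                   # sample SD of differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
pct_diff = 100 * abs(bias) / gt1m.mean()    # inter-monitor difference in %
```

A Bland-Altman plot would then show `diff` against the pairwise means, with horizontal lines at `bias` and the two limits of agreement.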

Relevance: 20.00%

Abstract:

Game playing contributes to the acquisition of required skills and competencies while supporting collaboration, communication and problem solving. This project introduced the board game Monopoly City™ to tie theoretical classroom learning to collaborative, play-based problem solving.

Relevance: 20.00%

Abstract:

Occupational burnout and health

Occupational burnout is assumed to be a negative consequence of chronic work stress. In this study, it was explored in the framework of occupational health psychology, which focuses on psychologically mediated processes between work and health. The objectives were to examine the overlap between burnout and ill health in relation to mental disorders, musculoskeletal disorders, and cardiovascular diseases, the three commonest disease groups causing work disability in Finland; to study whether burnout can be distinguished from ill health by its relation to work characteristics and work disability; and to determine the socio-demographic correlates of burnout at the population level. A nationally representative sample of the Finnish working population aged 30 to 64 years (n = 3151-3424) from the multidisciplinary epidemiological Health 2000 Study was used. Burnout was measured with the Maslach Burnout Inventory - General Survey. The diagnoses of common mental disorders were based on a standardized mental health interview (the Composite International Diagnostic Interview), and physical illnesses were determined in a comprehensive clinical health examination by a research physician. Medically certified sickness absences exceeding 9 work days during a 2-year period were extracted from a register of the Social Insurance Institution of Finland. Work stress was operationalized according to the job strain model. Gender, age, education, occupational status, and marital status were recorded as socio-demographic factors. Occupational burnout was related to an increased prevalence of depressive and anxiety disorders and alcohol dependence among both the men and the women. Burnout was also related to musculoskeletal disorders among the women and to cardiovascular diseases among the men, independently of socio-demographic factors, physical strenuousness of work, health behaviour, and depressive symptoms.
The odds of having at least one long, medically certified sickness absence were higher for employees with burnout than for their colleagues without burnout. For severe burnout, this association was independent of co-occurring common mental disorders and physical illnesses for both genders, as was also the case for mild burnout among the women. Among the men with absences, severe burnout was related to a greater number of absence days than among the women with absences. High job strain was associated with a higher occurrence of burnout and depressive disorders than low job strain was. Of these, the association between job strain and burnout was the stronger, and it persisted after control for socio-demographic factors, health behaviour, physical illnesses, and various indicators of mental health. In contrast, job strain was not related to depressive disorders after burnout was accounted for. Among the working population over 30 years of age, burnout was positively associated with age. There was also a tendency towards higher levels of burnout among the women with low educational attainment and occupational status and among the unmarried men. In conclusion, a considerable overlap was found between burnout, mental disorders, and physical illnesses. Still, burnout did not seem to be entirely redundant with respect to ill health. Burnout may be more strongly related to stressful work characteristics than depressive disorders are. In addition, burnout seems to be an independent risk factor for work disability, and it could possibly be used as a marker of health-impairing work stress. However, burnout may represent a different kind of risk factor for men and women, and this possibility needs to be taken into account in the promotion of occupational health.
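The sickness-absence findings above are expressed as odds: a simple illustration of how an odds ratio is computed from a 2x2 table follows. All counts here are made up for the example; they are not the study's figures.

```python
# Hypothetical 2x2 table: long sickness absence (yes / no) by burnout status
#                  absence  no absence
burnout    = [120, 280]
no_burnout = [300, 1500]

odds_with_burnout = burnout[0] / burnout[1]        # 120/280
odds_without      = no_burnout[0] / no_burnout[1]  # 300/1500
odds_ratio = odds_with_burnout / odds_without      # > 1: higher odds with burnout
```

With these invented counts the odds ratio is (120/280) / (300/1500) ≈ 2.14, i.e. roughly twice the odds of a long absence among employees with burnout. In the study itself, such associations were additionally adjusted for co-occurring disorders, which a raw 2x2 table cannot do.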

Relevance: 20.00%

Abstract:

The focus of this study was to examine the constructions of the educable subject of the lifelong learning (LLL) narrative in the narrative life histories of adult students at general upper secondary school for adults (GUSSA). In this study, lifelong learning has been defined as a cultural narrative on education, “a system of political thinking” that is not internally consistent but has contradictory themes embedded within it (Billig et al., 1988). As earlier research has shown and this study also confirms, the LLL narrative creates differences between those who are included and those who fall behind and are excluded from the learning-society ideal. Educability expresses socially constructed interpretations of who benefits from education, who should be educated, and how. The presupposition in this study has been that contradictions between the LLL narrative and the so-called traditional constructions of educability are likely to be constructed, as the former relies on an all-inclusive interpretation of educability and the latter on the meritocratic model of educating individuals based on their innate abilities. The school system continues to uphold the institutionalized ethos of educability that ranks students into the categories “bright”, “mediocre”, and “poor” (Räty & Snellman, 1998) on the basis of their abilities, including gender-related differences as well as differences based on social class. Traditional age-related norms also persist: for example, general upper secondary education is normatively completed in youth rather than in adulthood, and the formal learning context continues to outweigh both non-formal and informal learning. Moreover, in this study the construction of social differences in relation to educability, and thereby unequal access to education, has been examined in relation to age, social class, and gender.
The biographical work of the research participants forms a peephole that permits the examination of the dilemmatic nature of the constructions of educability in this study. Formal general upper secondary education in adulthood is situated on the border between the traditional and the LLL narratives on educability: participation in GUSSA inevitably means that one’s ability and competence as a student and learner become reassessed through the assessment criteria maintained by schools, whereas according to the principles of LLL everyone is educable; everyone is encouraged to learn throughout their lives regardless of age, social class, or gender. This study is situated in the field of adult education, the sociology of education, and social psychological research on educability, and it has also been informed by feminist studies. Moreover, this study contributes to narrative life history research by combining the structural analysis of narratives (Labov & Waletzky, 1997), i.e. mini-stories within a life history, with the analysis of the life histories as structural and thematic wholes and the creation of coherence in them, thus permitting both micro and macro analyses. In accounting for the discontinuity created by participation in general upper secondary school study in adulthood rather than normatively in youth, the GUSSA students construct coherence in relation to their ability and competence as students and learners. The seven case studies illuminate the social differences constructed in relation to educability, i.e. social class, gender, age, and the “new category of student and learner”. In the data of this study, i.e. 20 general upper secondary school adult graduates’ narrative life histories generated primarily through interviews, two main coherence patterns of the adult educable subject emerge. The first, performance-oriented pattern displays qualities that are closely related to the principles of LLL.
Contrary to the principles of lifewide learning, however, the documentation of one’s competence through formal qualifications outweighs non-formal and informal learning in preparation for future change and the competition for further education, professional careers, and higher social positions. The second, flexible learning pattern calls into question the status of formal, especially theoretical and academically oriented, education; inner development is seen as more important than such external signs of development as grades and certificates. Studying and learning are constructed as a hobby and as a means to a more satisfactory life, as opposed to a socially and culturally valued serious occupation leading to further education and career development. Consequently, as a curious, active, and independent learner, this educable but not readily employable subject is pushed to the periphery of lifelong learning. These two coherence patterns of the adult educable subject illuminate who is to be educated and how. The educable and readily employable LLL subject is to participate in formal education in order to achieve qualifications for working life, whereas the educable but not employable subject may utilize lifewide learning for her/his own pleasure. Key words: adult education, general upper secondary school for adults, educability, lifelong learning, narrative life history

Relevance: 20.00%

Abstract:

The purpose of this follow-up study is to analyse the stages of learning and teaching of children with special needs in pre-school and the first two grades of elementary school. The target group included 270 children with special needs. The three-year follow-up period for each child began during the pre-school year and continued until the spring of the second grade in elementary school. Various diagnoses were detected among children in the study group. The disorders were categorised into six classes: the developmentally delayed, children with language development disorder, children with emotional and behavioural disorders, children with attention deficit, children with other non-cognitive disorders, and children with extensive developmental disorders. The study's starting point was the situation in pre-school: how the children were placed in pre-school, and what kinds of support they were offered. The purpose of the study was to describe how children with special needs move from different types of groups in pre-school to the different types of classes in the first two grades of elementary school. I also examined how well the children with special needs succeeded in the first two grades of elementary school. An additional purpose was to find out what connections there may be between the paths taken by children with special needs when they move from pre-school to elementary school, the types of support they get, and how they succeed academically in elementary school. The data were gathered mainly by means of questionnaires. In addition, the children were studied by means of tests designed to estimate their academic skills at the end of the second grade. In analysing the data I used both quantitative and qualitative methods. Six paths were identified among the children in the study group, based on whether a child was in a group or a class given special teaching or in an ordinary group or class during pre-school and the first two grades of elementary school.
In this study, about 53% of the children with special needs moved from pre-school to a regular class in elementary school, and about 47% of the children received special education in elementary school. Among the ordinary groups (n = 69) in pre-school, the majority of children (73%) moved to a regular class in elementary school. Among the children receiving special education (n = 201) in pre-school, 46% moved to a regular class in elementary school. That path turned out to be the one followed by the greatest number of children. Only rarely did children move from an ordinary group in pre-school to a special education class in elementary school. Examination of the results according to the children's transition paths also connects with the viewpoint of integration and segregation. This study indicates that in pre-school special education groups, a significantly greater number of methods supporting children's development were used than in the conventional education groups. The difference was at its greatest in connection with the use of so-called special rehabilitation methods. A quite wide range of variation was observed in how the children succeeded in elementary school. Success in the tests designed to estimate the children's academic skills was poor for 31% of the children (n = 230) in the first-grade study group. For 69% of the children, however, success in the tests was at least satisfactory. In the second-grade study group, 34% of the children (n = 216) got through all three tests estimating academic skills acceptably. According to this study, a number of children with special needs require special support throughout pre-school and the first two grades of elementary school. The results show that if the children received special support during the pre-school year, a number were able to participate in regular education in elementary school. Keywords: a child with special needs, measures of support, transitions, achievements in school

Relevance: 20.00%

Abstract:

Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict the volume of blood collection so as to provide cellular blood components in a timely manner as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) poses a specific challenge for balancing supply with requests. Labour has been shown to be a primary cost-driver and should be managed efficiently. International comparisons of blood banking could identify inefficiencies and allow reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the departments of blood component preparation of the centres. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002. The data were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The focus of the comparisons was on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of the working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 among the centres (I).
Significant variation was also observed in the annual volume of produced red blood cells (RBCs) and PLTs. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by apheresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and showed similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour resources (and probably excess production equipment) to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) of all collections was low (III). Labour efficiency varied remarkably, from 25% to 100% (median 47%), when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and an overall lower efficiency level compared to labour only as the input. In cost efficiency only, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas the labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of the technical efficiency of component preparation departments revealed remarkable variation.
A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and the efficiency did not directly relate to production volume. Evaluation of the reasons for discarding components may offer a novel approach to study efficiency. DEA was proven applicable in analyses including various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
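DEA of the kind applied in this thesis can be sketched as one small linear program per department (DMU). The sketch below implements a standard input-oriented CCR envelopment model with `scipy.optimize.linprog`; the thesis does not specify its exact DEA formulation, and the department data here are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score (0 < theta <= 1) for each DMU.

    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    For DMU o: minimise theta subject to
        sum_j lam_j * x_j <= theta * x_o   (inputs of the reference mix)
        sum_j lam_j * y_j >= y_o           (outputs of the reference mix)
        lam >= 0
    """
    n = X.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                 # decision vars: [theta, lam_1..lam_n]
        A_in = np.c_[-X[o], X.T]                    # lam.x - theta*x_o <= 0
        A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]   # -lam.y <= -y_o
        A = np.vstack([A_in, A_out])
        b = np.r_[np.zeros(X.shape[1]), -Y[o]]
        res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical departments: input = annual working hours (scaled),
# outputs = RBC units and PLT units produced (scaled)
X = np.array([[100.0], [120.0], [200.0]])
Y = np.array([[1000.0, 300.0], [1000.0, 250.0], [1200.0, 300.0]])
eff = dea_ccr_input(X, Y)
```

With these invented numbers the first department is efficient (score 1.0) and the others score below 1; a score of, say, 0.6 reads as "the same output should be attainable with 60% of the observed inputs", i.e. a 40% savings potential.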

Relevance: 20.00%

Abstract:

The leading cause of death in the Western world continues to be coronary heart disease (CHD). At the root of the disease process is dyslipidemia, an aberration in the relative amounts of circulating blood lipids. Cholesterol builds up in the arterial wall, and following rupture of these plaques, myocardial infarction or stroke can occur. Heart disease runs in families, and a number of hereditary forms are known. Presently, however, the leading cause of adult dyslipidemia is overweight and obesity. This thesis presents an investigation of the molecular genetics of common, hereditary dyslipidemia and the tightly related condition of obesity. Familial combined hyperlipidemia (FCHL) is the most common hereditary dyslipidemia in man, with an estimated population prevalence of 1-6%. This complex disease is characterized by elevated levels of serum total cholesterol, triglycerides or both, and is observed in about 20% of individuals with premature CHD. Our group identified the disease to be associated with genetic variation in the USF1 transcription factor gene. USF1 has a key role in regulating other genes that control lipid and glucose metabolism as well as the inflammatory response, all central processes in the progression of atherosclerosis and CHD. The first two works of this thesis aimed at understanding how these USF1 variants result in increased disease risk. Among the many non-coding single-nucleotide polymorphisms (SNPs) associated with the disease, one was found to have a functional effect. The risk-enhancing allele of this SNP seems to abolish the ability of the important hormone insulin to induce the expression of USF1 in peripheral tissues. The resultant changes in the expression of numerous USF1 target genes over time probably enhance and accelerate the atherogenic processes. Dyslipidemias often represent an outcome of obesity, and in the final work of this thesis we wanted to address the metabolic pathways related to acquired obesity.
It is recognized that active processes in adipose tissue play an important role in the development of dyslipidemia, insulin resistance and other pathological conditions associated with obesity. To minimize the confounding effects of the genetic differences present in most human studies, we investigated a rare collection of identical twins who differed significantly in the amount of body fat. In the obese, but otherwise healthy, young adults, several notable changes were observed. In addition to chronic inflammation, the adipose tissue of the obese co-twins was characterized by a marked (47%) decrease in the amount of mitochondrial DNA (mtDNA), a change associated with mitochondrial dysfunction. The catabolism of branched-chain amino acids (BCAAs) was identified as the most down-regulated process in the obese co-twins. A concordant increase in the serum level of these insulin secretagogues was identified. This hyperaminoacidemia may provide the feedback signal from insulin-resistant adipose tissue to the pancreas to ensure an appropriately augmented secretory response. The down-regulation of BCAA catabolism correlated closely with liver fat accumulation and insulin. The single most up-regulated gene (5.9-fold) in the obese co-twins was osteopontin (SPP1), a cytokine involved in macrophage recruitment to adipose tissue. SPP1 is here implicated as an important player in the development of insulin resistance. These studies of exceptional study samples provide a better understanding of the underlying pathology in common dyslipidemias and other obesity-associated diseases, which is important for the future improvement of intervention strategies and treatments to combat atherosclerosis and coronary heart disease.

Relevance: 20.00%

Abstract:

The tumor suppressor p53 represents a paradigm for gene regulation. Its rapid induction in response to DNA damage conditions has been attributed to both increased half-life of p53 protein and also increased translation of p53 mRNA. Recent advances in our understanding of the post-transcriptional regulation of p53 include the discovery of internal ribosome entry sites (IRESs) within the p53 mRNA. These IRES elements regulate the translation of the full length as well as the N-terminally truncated isoform, p53/47. The p53/47 isoform is generated by alternative initiation at an internal AUG codon present within the p53 ORF. The aim of this review is to summarize the role of translational control mechanisms in regulating p53 functions. We discuss here in detail how diverse cellular stress pathways trigger alterations in the cap-dependent and cap-independent translation of p53 mRNA and how changes in the relative expression levels of p53 isoforms result in more differentiated p53 activity.

Relevance: 20.00%

Abstract:

Phase diagrams for the Tm2O3-H2O-CO2, Yb2O3-H2O-CO2 and Lu2O3-H2O-CO2 systems at 650 and 1300 bars have been investigated in the temperature range of 100-800°C. The phase diagrams are far more complex than those for the lighter lanthanides. The stable phases are Ln(OH)3, Ln2(CO3)3·3H2O (tengerite phase), orthorhombic LnOHCO3, hexagonal Ln2O2CO3, LnOOH and cubic Ln2O3. Ln(OH)3 is stable only at very low partial pressures of CO2. Additional phases stabilised are Ln2O(OH)2CO3 and Ln6(OH)4(CO3)7, which are absent in the lighter lanthanide systems. Other phases, isolated in the presence of minor alkali impurities, are Ln6O2(OH)8(CO3)3, Ln4(OH)6(CO3)3 and Ln12O7(OH)10(CO3)6. The chemical equilibria prevailing in these hydrothermal systems may be best explained on the basis of the four-fold classification of the lanthanides.

Relevance: 20.00%

Abstract:

Options for the integrated management of white blister (caused by Albugo candida) in Brassica crops include well-timed overhead irrigation, resistant cultivars, programs of weekly fungicide sprays, or strategic fungicide applications based on the disease risk prediction model Brassicaspot™. Initial systematic surveys of radish producers near Melbourne, Victoria, indicated that crops irrigated overhead in the morning (0800-1200 h) had a lower incidence of white blister than those irrigated overhead in the evening (2000-2400 h). A field trial was conducted from July to November 2008 on a broccoli crop located west of Melbourne to determine the efficacy and economics of different practices used for white blister control: modifying irrigation timing, growing a resistant cultivar, and timing spray applications based on Brassicaspot™. Growing the resistant cultivar 'Tyson' instead of the susceptible cultivar 'Ironman' reduced disease incidence on broccoli heads by 99%. Overhead irrigation at 0400 h instead of 2000 h reduced disease incidence by 58%. A weekly spray program or a spray regime based on either of two versions of the Brassicaspot™ model provided similar disease control, reducing disease incidence by 72 to 83%. However, use of the Brassicaspot™ models greatly reduced the number of sprays required for control, from 14 to one or two. An economic analysis showed that growing the more resistant cultivar increased farm profit per ha by 12%, choosing morning irrigation by 3%, and using the disease risk predictive models rather than weekly sprays by 15%. The disease risk predictive models were 4% more profitable than the unsprayed control.

Relevance: 20.00%

Abstract:

Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred for cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye decreased from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile, the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye improved by 0.85 logMAR (in decimal values, from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values, from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%.
The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with a coefficient of repeatability of ±0.24 logMAR, and eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and as spherical equivalents, and expressed as coefficients of repeatability. The coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14), but the difference between the visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for a change of spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
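The acuity notation and repeatability statistic used above can be sketched in a few lines: logMAR is the negative base-10 logarithm of the decimal (Snellen) acuity, and the Bland-Altman coefficient of repeatability is 1.96 times the standard deviation of test-retest differences. The example differences below are hypothetical.

```python
import math
import statistics

def decimal_to_logmar(decimal_acuity: float) -> float:
    """Convert decimal (Snellen) visual acuity to logMAR: -log10(acuity)."""
    return -math.log10(decimal_acuity)

def coefficient_of_repeatability(differences) -> float:
    """Bland-Altman coefficient of repeatability: 1.96 * SD of test-retest
    differences; ~95% of repeat measurements are expected to fall within it."""
    return 1.96 * statistics.stdev(differences)

# Decimal 1.0 corresponds to 0.0 logMAR; decimal 0.1 to 1.0 logMAR
worse_eye = decimal_to_logmar(0.1)

# Hypothetical test-retest logMAR differences for a few eyes
diffs = [0.10, -0.05, 0.00, 0.08, -0.10, 0.04]
cor = coefficient_of_repeatability(diffs)
```

Note that larger logMAR means worse vision, so a "decrease in vision" corresponds to an increase in the logMAR value.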


The outcome of the successfully resuscitated patient is mainly determined by the extent of hypoxic-ischemic cerebral injury, and hypothermia has multiple mechanisms of action in mitigating such injury. The present study was undertaken from 1997 to 2001 in Helsinki as part of the European multicenter study Hypothermia after cardiac arrest (HACA) to test the neuroprotective effect of therapeutic hypothermia in patients resuscitated from out-of-hospital ventricular fibrillation (VF) cardiac arrest (CA). The aim of this substudy was to examine the neurological and cardiological outcome of these patients, and especially to study and develop methods for predicting outcome in hypothermia-treated patients. A total of 275 patients were randomized to the HACA trial in Europe; in Helsinki, 70 patients were enrolled according to the inclusion criteria. Those randomized to hypothermia were actively cooled externally with a cooling device to a core temperature of 33 ± 1°C for 24 hours. Serum markers of ischemic neuronal injury, NSE and S-100B, were sampled at 24, 36, and 48 hours after CA. Somatosensory and brain stem auditory evoked potentials (SEPs and BAEPs) were recorded 24 to 28 hours after CA; 24-hour ambulatory electrocardiography recordings were performed three times during the first two weeks, and arrhythmias and heart rate variability (HRV) were analyzed from the tapes. The clinical outcome was assessed 3 and 6 months after CA. Neuropsychological examinations were performed on the conscious survivors 3 months after CA. Quantitative electroencephalography (Q-EEG) and auditory P300 event-related potentials were studied at the same time-point. Therapeutic hypothermia of 33°C for 24 hours led to an increased chance of good neurological outcome and survival after out-of-hospital VF CA. In the HACA study, 55% of hypothermia-treated patients and 39% of normothermia-treated patients reached a good neurological outcome at 6 months after CA (p=0.009). 
Use of therapeutic hypothermia was not associated with any increase in clinically significant arrhythmias. The levels of serum NSE, but not of S-100B, were lower in hypothermia- than in normothermia-treated patients. A decrease in NSE values between 24 and 48 hours was associated with good outcome at 6 months after CA. Decreasing levels of serum NSE, but not of S-100B, over time may indicate selective attenuation of delayed neuronal death by therapeutic hypothermia, and the time-course of serum NSE between 24 and 48 hours after CA may help in clinical decision-making. In SEP recordings, bilaterally absent N20 responses predicted permanent coma with a specificity of 100% in both treatment arms. Recording of BAEPs provided no additional benefit in outcome prediction. Preserved 24- to 48-hour HRV may be a predictor of favorable outcome in CA patients treated with hypothermia. At 3 months after CA, no differences appeared in any cognitive functions between the two groups: 67% of patients in the hypothermia group and 44% in the normothermia group were cognitively intact or had only very mild impairment. No significant differences emerged in any of the Q-EEG parameters between the two groups. The amplitude of the P300 potential was significantly higher in the hypothermia-treated group. These results give further support to the use of therapeutic hypothermia in patients with sudden out-of-hospital CA.


Pitfalls in the treatment of persons with dementia. Persons with dementia require high-quality health care, rehabilitation and sufficient social services to support their autonomy and to postpone permanent institutionalization. This study sought to investigate possible pitfalls in the care of patients with dementia: hip fracture rehabilitation, use of inappropriate or antipsychotic medication, and the social and medico-legal services offered to dementia caregiving families. Three different Finnish samples from the years 1999-2005 were used, with mean ages of 78 to 86 years. After hip fracture operation, a weight-bearing restriction, especially in the group of patients with dementia, was associated with a longer rehabilitation period (73.5 days vs. 45.5 days, p=0.03) and with the inability to learn to walk by six weeks (p<0.001). Almost half (44%) of the pre-surgery home-dwellers with dementia in our sample required permanent hospitalization after hip fracture. Potentially inappropriate drugs (PIDs) were used by 36.2% of nursing home and hospital patients. The most common PIDs in Finland were temazepam over 15 mg/day, oxybutynin, and dipyridamole. However, PID use failed to predict mortality or the use of health services. Nearly half (48.4%) of the nursing home and hospital patients with dementia used antipsychotic medication. The two-year mortality did not differ among users of conventional antipsychotics, users of atypical antipsychotics, and non-users (45.3% vs. 32.1% vs. 49.6%, p=0.195). The mean number of hospital admissions was highest among non-users (p=0.029). A high number of medications (HR 1.12, p<0.001) and the use of physical restraints (HR 1.72, p=0.034) predicted higher mortality at two years, while the use of atypical antipsychotics (HR 0.49, p=0.047) showed a protective effect, if any. 
The services most often offered to caregiving families of persons with Alzheimer's disease (AD) included financial support from the community (36%), technical devices (33%), physiotherapy (32%), and respite care in nursing homes (31%). The services most often needed included physiotherapy for the spouse with dementia (56%), financial support (50%), house cleaning (41%), and home respite (40%). Only a third of the caregivers were satisfied with these services, and 69% felt unable to influence the range of services offered. The use of legal guardians was quite rare (only 4.3%), while 37.8% used financial powers of attorney. Almost half (47.9%) of the couples expressed an unmet need for discussion with their doctor about medico-legal issues, while only 9.9% stated that their doctor had informed them of such matters. Although many practical methods already exist to develop the medical and social care of persons with AD, these patients and their families require better planning and tailoring of such services. In this way, society could offer these elderly persons a better quality of life while economizing on its financial resources. This study was supported by the Social Insurance Institution of Finland, and part of it was made in cooperation with the Central Union for the Welfare of the Aged, Finland.


Klinefelter syndrome (KS) is the most frequent karyotype disorder of male reproductive function. Since its original clinical description in 1942 and the identification of its chromosomal basis, 47,XXY, in 1959, the typical KS phenotype has become well recognized, but the mechanisms behind the testicular degeneration process have remained unresolved. This prospective study was undertaken to increase knowledge of testicular function in adolescent KS boys. It comprised a longitudinal follow-up of growth, pubertal development, and serum reproductive hormone levels in 14 prepubertal and pubertal KS boys. Each boy had a testicular biopsy that was analyzed with histomorphometric and immunohistochemical methods. The KS boys had sufficient testosterone levels to allow normal onset and progression of puberty. Their serum testosterone levels remained within the low-normal range throughout puberty, but from midpuberty onwards, findings such as a leveling-off of testosterone and insulin-like factor 3 (INSL3) concentrations, high gonadotropin levels, and exaggerated responses to gonadotropin-releasing hormone stimulation suggested diminished testosterone secretion. We also showed that the Leydig cell differentiation marker INSL3 may serve as a novel marker of the onset and normal progression of puberty in boys. In the KS boys the number of germ cells was already markedly lower at the onset of puberty. The pubertal activation of the pituitary-testicular axis accelerated germ cell depletion, and germ cell differentiation was at least partly blocked at the spermatogonium or early primary spermatocyte stage. The presence of germ cells correlated with serum reproductive hormone levels. The immature Sertoli cells were incapable of transforming into the adult type, and during puberty the degeneration of Sertoli cells increased markedly. The older KS boys displayed evident Leydig cell hyperplasia, as well as fibrosis and hyalinization of the interstitium and peritubular connective tissue. 
Altered immunoexpression of the androgen receptor (AR) suggested that a relative androgen deficiency develops at the testicular level in KS boys during puberty. The impact of genetic features of the supernumerary X chromosome on the KS phenotype was also studied. The present study suggests that the parental origin of the supernumerary X chromosome and the length of the CAG repeat of the AR gene influence pubertal development and testicular degeneration. The current study used several approaches to characterize the testicular degeneration process in the testes of adolescent KS boys and confirmed that this process accelerates at the onset of puberty. Although serum reproductive hormone levels indicated no hypogonadism during early puberty, the histological analyses showed an already markedly reduced fertility potential in prepubertal KS boys. Genetic features of the X chromosome affect the KS phenotype.