40 results for multidisciplinary care
Abstract:
Much of what we know about the long-term course and outcome of major depressive disorder (MDD) is based on studies of mostly inpatient tertiary-level cohorts and on samples predating the era of current antidepressants and maintenance therapies. In addition, studies investigating the comprehensive significance of comorbid Axis I and II disorders for the outcome of MDD are lacking. The present study forms part of the Vantaa Depression Study (VDS), a regionally representative, prospective and naturalistic cohort study of 269 secondary-level care psychiatric out- and inpatients (aged 20-59) with a new episode of DSM-IV MDD, followed up for up to five years (n=182) with a life-chart and semistructured interviews. The aim was to investigate the long-term outcome of MDD and the risk factors for poor recovery, recurrences, suicide attempts and diagnostic switch to bipolar disorder, as well as the association of a family history of different psychiatric disorders with outcome. The effects of comorbid disorders, together with various other predictors from different domains, on the outcome were comprehensively investigated. According to this study, the long-term outcome of MDD appears more variable when investigated among modern, community-treated, secondary-care outpatients than in previous, mostly inpatient studies. MDD was highly recurrent in these settings as well, but the recurrent episodes seemed shorter, and the outcome was unlikely to be uniformly chronic. Higher severity of MDD significantly predicted the number of recurrences and a longer time spent ill. In addition, longer episode duration, comorbid dysthymic disorder, cluster C personality disorders and social phobia predicted a worse outcome. The incidence rate of suicide attempts varied markedly depending on the level of depression, being 21-fold during major depressive episodes (MDEs) and 4-fold during partial remission compared with periods of full remission. Although a history of previous attempts and poor social support also indicated risk, time spent depressed was the central factor determining overall long-term risk. Switches to bipolar disorder occurred mainly to type II; switches to type I took place earlier, whereas switches to type II accumulated more gradually over time. Higher severity of MDD, comorbid social phobia, obsessive-compulsive disorder, and cluster B personality disorder features predicted the diagnostic switch. The majority of patients also had positive family histories, not exclusively of mood disorders but also of other mental disorders. A positive family history of severe mental disorders was associated with a significantly more adverse clinical outcome.
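The 21-fold and 4-fold figures above are incidence rate ratios: suicide attempts per unit of person-time spent in each illness state, divided by the corresponding rate during full remission. A minimal sketch of the calculation is given below; the person-time and event counts are invented (chosen only so that the ratios reproduce the reported 21-fold and 4-fold figures) and are not data from the VDS.

```python
# Hypothetical person-years and suicide attempts per illness state
# (illustrative numbers only, not from the Vantaa Depression Study).
person_years = {"full_remission": 300.0, "partial_remission": 150.0, "mde": 100.0}
attempts = {"full_remission": 3, "partial_remission": 6, "mde": 21}

# Incidence rate = events / person-time at risk in each state.
rates = {state: attempts[state] / person_years[state] for state in person_years}

# Incidence rate ratio (IRR) relative to full remission.
reference = rates["full_remission"]
for state, rate in rates.items():
    print(f"{state}: {rate:.3f} attempts/person-year, IRR {rate / reference:.1f}")
```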
Abstract:
This study is part of an ongoing collaborative bipolar research project, the Jorvi Bipolar Study (JoBS). The JoBS is run by the Department of Mental Health and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry, Jorvi Hospital, Helsinki University Central Hospital (HUCH), Espoo, Finland. It is a prospective, naturalistic cohort study of secondary-level care psychiatric in- and outpatients with a new episode of bipolar disorder (BD). The second report also included 269 major depressive disorder (MDD) patients from the Vantaa Depression Study (VDS), which was carried out in collaboration with the Department of Psychiatry of the Peijas Medical Care District. Using the Mood Disorder Questionnaire (MDQ), all in- and outpatients at the Department of Psychiatry at Jorvi Hospital with a possible new phase of DSM-IV BD were sought. Altogether, 1630 psychiatric patients were screened, and 490 were interviewed using a semistructured interview (SCID-I/P). The patients included in the cohort (n=191) had a current phase of BD at intake. The patients were evaluated at intake and at 6- and 18-month interviews. Based on this study, BD is poorly recognized even in psychiatric settings. Of the BD patients with acute worsening of illness, 39% had never been correctly diagnosed. The classic presentations of BD, with hospitalizations, manic episodes, and psychotic symptoms, led clinicians to the correct diagnosis of BD I in psychiatric care. Time elapsed in psychiatric care follow-up, but none of the clinical features, seemed to explain correct diagnosis of BD II, suggesting reliance on the cross-sectional presentation of illness. Even though BD II was clearly less often correctly diagnosed than BD I, few other differences between the two types of BD were detected. BD I and II patients appeared to differ little in clinical picture or comorbidity, and the prevalence of psychiatric comorbidity was strongly related to the current illness phase in both types. At the same time, the difference in outcome was clear: BD II patients spent about 40% more time depressed than BD I patients. Patterns of psychiatric comorbidity in BD and MDD differed somewhat qualitatively. Overall, MDD patients were likely to have more anxiety disorders and cluster A personality disorders, and bipolar patients more cluster B personality disorders. The adverse consequences of a missed or delayed diagnosis are potentially serious. Thus, these findings strongly support the value of screening for BD in psychiatric settings, especially among patients with major depression. Nevertheless, the diagnosis must be based on a clinical interview and follow-up of mood. Comorbidity, present in 59% of bipolar patients in a current phase, needs concomitant evaluation, follow-up, and treatment. To improve outcome in BD, the treatment of bipolar depression is a major challenge for clinicians.
Abstract:
The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and the subsequent need for eye care services in the Finnish adult population aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and the need for assistance, using the World Health Organization’s (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative, population-based comprehensive survey of health and functional capacity carried out in 2000 to 2001 in Finland. The study sample, representing the Finnish population aged 30 years and older, was drawn by two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center; if the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75–84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common in both sexes (OR 1.20, 95% CI 0.82 – 1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001). Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26 – 1.91 and OR 1.57, 95% CI 1.24 – 1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). More than half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1 – 0.25) were less likely to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids than blind people (VA < 0.1). Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of ADL, instrumental activities of daily living (IADL), and mobility limitations increased with decreasing VA (p < 0.001).
Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44 – 7.78). Limitations in IADL and in measured mobility were five times as likely (OR 4.82, 95% CI 2.38 – 9.76 and OR 5.37, 95% CI 2.44 – 7.78, respectively), and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67 – 9.63), as in persons with good VA. The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or those living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and for active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations, and even a slight decrease in VA was found to be associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases in order to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
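The odds ratios above come from models adjusted for sociodemographic and behavioral factors and chronic conditions, which are not detailed in this abstract. As a minimal sketch, the snippet below shows how an unadjusted odds ratio and its Wald 95% confidence interval would be computed from a simple 2×2 table; the counts are hypothetical, not Health 2000 data.

```python
import math

# Hypothetical 2x2 table (counts are illustrative only, not Health 2000 data):
# rows = visual impairment (VA <= 0.25) yes/no, columns = ADL disability yes/no.
a, b = 40, 60      # visually impaired: with / without ADL disability
c, d = 300, 2600   # good VA:           with / without ADL disability

# Unadjusted odds ratio and Wald 95% confidence interval on the log-odds scale.
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR {odds_ratio:.2f}, 95% CI {lower:.2f} - {upper:.2f}")
```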
Abstract:
The adequacy of anesthesia has been studied since the introduction of balanced general anesthesia. Commercial monitors based on electroencephalographic (EEG) signal analysis have been available for monitoring the hypnotic component of anesthesia since the beginning of the 1990s. Monitors measuring the depth of anesthesia assess the cortical function of the brain and have gained acceptance during surgical anesthesia with most of the anesthetic agents used. However, due to frequent artifacts, they are considered unsuitable for monitoring consciousness in intensive care patients. The assessment of analgesia is one of the cornerstones of general anesthesia. Prolonged surgical stress may lead to increased morbidity and delayed postoperative recovery. However, no validated monitoring method is currently available for evaluating analgesia during general anesthesia. Awareness during anesthesia is caused by an inadequate level of hypnosis. This rare but severe complication of general anesthesia may lead to marked emotional stress and possibly posttraumatic stress disorder. In the present series of studies, the incidence of awareness and recall during outpatient anesthesia was evaluated and compared with that in inpatient anesthesia. A total of 1500 outpatients and 2343 inpatients underwent a structured interview. Clear intraoperative recollections were rare, the incidence being 0.07% in outpatients and 0.13% in inpatients. No significant differences emerged between outpatients and inpatients. However, significantly smaller doses of sevoflurane were administered to outpatients with awareness than to those without recollections (p<0.05). EEG artifacts in 16 brain-dead organ donors were evaluated during organ harvest surgery in a prospective, open, nonselective study. The source of the frontotemporal biosignals in brain-dead subjects was studied, and the resistance of the bispectral index (BIS) and Entropy to signal artifacts was compared. The hypothesis was that in brain-dead subjects, most of the biosignals recorded from the forehead would consist of artifacts. The original EEG was recorded, and State Entropy (SE), Response Entropy (RE), and BIS were calculated and monitored during solid organ harvest. SE differed from zero (inactive EEG) in 28%, RE in 29%, and BIS in 68% of the total recording time (p<0.0001 for all). The median values during the operation were SE 0.0, RE 0.0, and BIS 3.0. In four of the 16 organ donors, the EEG was not inactive, and unphysiologically distributed, nonreactive rhythmic theta activity was present in the original EEG signal. After the results from subjects with persistent residual EEG activity were excluded, SE, RE, and BIS differed from zero in 17%, 18%, and 62% of the recorded time, respectively (p<0.0001 for all). Due to various artifacts, the highest readings in all indices were recorded without neuromuscular blockade. The main sources of artifacts were electrocauterization, electromyography (EMG), 50-Hz artifact, handling of the donor, ballistocardiography, and electrocardiography. In a prospective, randomized study of 26 patients, the ability of the Surgical Stress Index (SSI) to differentiate between two clinically different analgesic levels during shoulder surgery was evaluated. SSI values were lower in patients with an interscalene brachial plexus block than in patients without an additional plexus block. In all patients, anesthesia was maintained with desflurane, the concentration of which was targeted to maintain SE at 50.
Increased blood pressure or heart rate (HR), movement, and coughing were considered signs of intraoperative nociception and were treated with alfentanil. Photoplethysmographic waveforms were collected from the arm contralateral to the operated side, and SSI was calculated offline. Two minutes after skin incision, SSI was not increased in the brachial plexus block group and was lower (38 ± 13) than in the control group (58 ± 13, p<0.005). Among the controls, one minute prior to alfentanil administration, the SSI value was higher than during periods of adequate antinociception, 59 ± 11 vs. 39 ± 12 (p<0.01). The total cumulative need for alfentanil was higher in the controls (2.7 ± 1.2 mg) than in the brachial plexus block group (1.6 ± 0.5 mg, p=0.008). Tetanic stimulation to the ulnar region of the hand increased SSI significantly only among patients with a brachial plexus block not covering the site of stimulation. The prognostic value of EEG-derived indices was evaluated and compared with transcranial Doppler ultrasonography (TCD), serum neuron-specific enolase (NSE), and S-100B after cardiac arrest. Thirty patients resuscitated from out-of-hospital cardiac arrest and treated with induced mild hypothermia for 24 h were included. The original EEG signal was recorded, and the burst suppression ratio (BSR), RE, SE, and wavelet subband entropy (WSE) were calculated. Neurological outcome during the six-month period after arrest was assessed with the Glasgow-Pittsburgh Cerebral Performance Categories (CPC). Twenty patients had a CPC of 1-2, one patient had a CPC of 3, and nine patients died (CPC 5). BSR, RE, and SE differed between the good (CPC 1-2) and poor (CPC 3-5) outcome groups (p=0.011, p=0.011, and p=0.008, respectively) during the first 24 h after arrest. WSE was borderline higher in the good outcome group between 24 and 48 h after arrest (p=0.050). All patients with status epilepticus died, and their WSE values were lower (p=0.022). S-100B was lower in the good outcome group upon arrival at the intensive care unit (p=0.010). After hypothermia treatment, NSE and S-100B values were lower (p=0.002 for both) in the good outcome group. The pulsatility index was also lower in the good outcome group (p=0.004). In conclusion, the incidence of awareness in outpatient anesthesia did not differ from that in inpatient anesthesia; outpatients are not at increased risk of intraoperative awareness relative to inpatients undergoing general anesthesia. SE, RE, and BIS showed non-zero values that normally indicate cortical neuronal function, but in these subjects the values were mostly due to artifacts after the clinical diagnosis of brain death. Entropy was more resistant to artifacts than BIS. During general anesthesia and surgery, SSI values were lower in patients with an interscalene brachial plexus block covering the sites of nociceptive stimuli. In detecting nociceptive stimuli, SSI performed better than HR, blood pressure, or RE. BSR, RE, and SE differed between the good and poor neurological outcome groups during the first 24 h after cardiac arrest, and they may aid in differentiating patients with good neurological outcomes from those with poor outcomes after out-of-hospital cardiac arrest.
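State Entropy and Response Entropy are derived from the spectral entropy of the frontal EEG (RE also includes higher frequencies dominated by facial EMG). The exact commercial algorithms involve proprietary windowing and scaling, so the sketch below is only a simplified illustration of normalized spectral entropy on a synthetic signal, not the monitor's implementation.

```python
import numpy as np

def spectral_entropy(signal, fs, f_lo=0.8, f_hi=32.0):
    """Normalized spectral entropy of a signal over [f_lo, f_hi] Hz.

    Simplified illustration of the quantity underlying EEG entropy
    monitoring; the commercial State/Response Entropy algorithms use
    additional windowing and normalization not reproduced here.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    p = power[band] / power[band].sum()       # normalize spectrum to a probability distribution
    entropy = -np.sum(p * np.log(p))          # Shannon entropy of the spectrum
    return entropy / np.log(p.size)           # scale to [0, 1]

# Example: simulated 10-s EEG-like signals sampled at 100 Hz.
fs = 100
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
broadband = rng.standard_normal(t.size)                                   # noise-like signal
slow_wave = np.sin(2 * np.pi * 2 * t) + 0.05 * rng.standard_normal(t.size)  # slow-wave dominated
print(spectral_entropy(broadband, fs), spectral_entropy(slow_wave, fs))
```

A broadband, noise-like signal yields a value near 1, whereas a slow-wave-dominated signal yields a much lower value, which is the intuition behind entropy indices falling with deepening hypnosis and approaching zero for an inactive EEG.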
Abstract:
The Vantaa Primary Care Depression Study (PC-VDS) is a naturalistic and prospective cohort study of primary care patients with depressive disorders. It forms a collaborative research project between the Department of Mental Health and Alcohol Research of the National Public Health Institute and the Primary Health Care Organization of the City of Vantaa. The aim is to obtain a comprehensive view of clinically significant depression in primary care and to compare depressive patients in primary care and in secondary-level psychiatric care in terms of clinical characteristics. Consecutive patients (N=1111) in three primary care health centres were screened for depression with the PRIME-MD, and positive cases were interviewed by telephone. Cases with current depressive symptoms were diagnosed face-to-face with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I/P). A cohort of 137 patients with unipolar depressive disorders, comprising all patients with at least two depressive symptoms and clinically significant distress or disability, was recruited. The Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II), medical records, rating scales, the interview and a retrospective life-chart were used to obtain comprehensive cross-sectional and retrospective longitudinal information. For the investigation of suicidal behaviour, the Scale for Suicidal Ideation (SSI), patient records and the interview were used. The methodology was designed to be comparable to the Vantaa Depression Study (VDS) conducted in secondary-level psychiatric care. Patients with major depressive disorder (MDD) aged 20-59 from primary care in the PC-VDS (N=79) were compared with new psychiatric outpatients (N=223) and inpatients (N=46) in the VDS. The PC-VDS cohort was prospectively followed up at 3, 6 and 18 months; altogether 123 patients (90%) completed the follow-up. The duration of the index episode and the timing of relapses or recurrences were examined using a life-chart. The retrospective investigation revealed current MDD in most (66%) and lifetime MDD in nearly all (90%) cases of clinically significant depressive syndromes. Two thirds of the “subsyndromal” cases had a history of major depressive episode (MDE), although they were currently either in partial remission or in a potential prodromal phase. Recurrences and chronicity were common. The picture of depression was complicated by Axis I co-morbidity in 59%, Axis II co-morbidity in 52% and chronic Axis III disorders in 47%; only 12% had no co-morbidity. Within their lifetimes, one third (37%) had seriously considered suicide, and one sixth (17%) had attempted it. Suicidal behaviour clustered in patients with moderate to severe MDD, co-morbid personality disorders, and a history of treatment in psychiatric care. The majority had received treatment for depression, but suicidal ideation had mostly remained unrecognised. The comparison of patients with MDD in primary care to those in psychiatric care revealed that the majority of suicidal or psychotic patients were receiving psychiatric treatment, and that the patients with the most severe symptoms and functional limitations were hospitalized. In other clinical respects, patients with MDD in primary care were surprisingly similar to psychiatric outpatients. Mental health contacts earlier in the current MDE were common among primary care patients. The 18-month prospective investigation with life-chart methodology verified the chronic and recurrent nature of depression in primary care.
Only one-quarter of patients with MDD achieved and maintained full remission during the follow-up, while another quarter failed to remit at all. The remaining patients suffered either from residual symptoms or from recurrences. While severity of depression was the strongest predictor of recovery, the presence of co-morbid substance use disorders, chronic medical illness and cluster C personality disorders all contributed to an adverse outcome. In clinical decision-making, besides severity of depression and co-morbidity, a history of previous MDD should not be ignored by primary care doctors, as depression in primary care is usually severe enough to indicate at least follow-up and, for those with residual symptoms, an evaluation of their current treatment. Moreover, the recognition of suicidal behaviour among depressed patients should be improved. In order to improve the outcome of depression in primary care, its often chronic and recurrent nature should be taken into account in organizing care. According to the literature, chronic disease management programmes, with an enhanced role for case managers and greater integration of primary and specialist care, have been successful. Optimum ways of allocating resources between treatment providers as well as within health centres should be found.
Abstract:
This dissertation examined the probability of older people entering long-term institutional care and the factors behind it, using internationally unique register data. The questions studied were the associations of various diseases, socioeconomic factors, the presence of a spouse, and widowhood with entry into institutional care among Finns aged over 65. The study found that dementia, Parkinson's disease, stroke, depressive symptoms and other mental health problems, hip fracture, and diabetes increased older people's probability of entering institutional care by more than 50 percent after other diseases and sociodemographic factors were taken into account. High income reduced the probability of institutional care, whereas substandard housing (without washing facilities or central or electric heating) and severely substandard housing (without hot water, piped water, sewerage, or a flush toilet) increased the probability after other sociodemographic factors, diseases, and area of residence were taken into account. Living in an apartment building without an elevator was not associated with the probability of institutional care. For reasons that remained unclear, the probability of entering institutional care was higher among older people living in rented housing and lower among those living in detached houses and those who owned a car. Having a spouse considerably reduced, and becoming widowed considerably increased, the probability of institutional care. The probability was particularly high, more than threefold, within the first month after the spouse's death compared with those whose spouse was alive, and it declined as time passed from the spouse's death. The results were similar for men and women. Neither high income nor education protected against the risk of entering institutional care after the death of a spouse. The death of a spouse appears to increase the need for care, as there is no longer a spouse at home to provide support and take care of household chores. The need for institutional care decreases if and when widowed persons learn over time to live alone. On the other hand, the findings may also indicate that the frailest widowed persons, who cannot manage living alone, enter institutional care very soon after the spouse's death. The study included a total of more than 280,000 persons aged over 65, whose entry into long-term institutional care was followed from January 1998 to September 2003. Institutional care was defined as long-term care in health centers, hospitals, nursing homes, or similar units that lasted more than 90 days or was confirmed by a formal long-term care decision. The data used in the study were compiled from population registers, social and health care registers, and medication registers.
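The abstract reports adjusted increases in the probability of institutionalization but does not specify the statistical model. Purely as an illustration of how such an adjusted follow-up analysis might be set up, the sketch below fits a Cox proportional hazards model to synthetic register-style data using the lifelines package; all variable names and values are invented, and the actual study may well have used a different specification (for example, treating widowhood as a time-varying covariate).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated register-style cohort (purely synthetic; not the study data).
rng = np.random.default_rng(1)
n = 500
dementia = rng.binomial(1, 0.15, n)
widowed = rng.binomial(1, 0.25, n)
high_income = rng.binomial(1, 0.30, n)

# Hazard of entering long-term care rises with dementia and widowhood and
# falls with high income; follow-up censored at ~5.75 years (Jan 1998 - Sep 2003).
hazard = 0.05 * np.exp(1.0 * dementia + 0.6 * widowed - 0.4 * high_income)
time_to_care = rng.exponential(1.0 / hazard)
entered_care = (time_to_care <= 5.75).astype(int)
followup_years = np.minimum(time_to_care, 5.75)

df = pd.DataFrame({"followup_years": followup_years, "entered_care": entered_care,
                   "dementia": dementia, "widowed": widowed, "high_income": high_income})

# Adjusted hazard ratios from a Cox proportional hazards model.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="entered_care")
cph.print_summary()  # exp(coef) column: hazard ratios adjusted for the other covariates
```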
Abstract:
This study investigates the relationships between work stressors and organizational performance in terms of the quality of care provided by long-term care facilities. Work stressors are first examined in relation to the units' structural factors, resident characteristics, and unit specialization. The study is completed by an investigation into the associations of work stressors such as job demands or time pressure, role ambiguity, resident-related stress, and procedural injustice with organizational performance. The moderating effect of job control on the relationship between job demands and organizational performance is also examined. The study was carried out at the National Research and Development Centre for Welfare and Health (STAKES). Survey data were drawn from 1194 nursing employees in 107 residential-home and health-center inpatient units in 1999 and from 977 employees in 91 units in 2002. Information on unit resident characteristics and the quality of care was provided by the Resident Assessment Instrument (RAI). The results showed that large unit size and lower staffing levels were not consistently related to work stressors, whereas impairments in residents' physical functioning in particular created stressful working conditions for employees. However, unit specialization in dementia and psychiatric residents was found to buffer the effects that resident characteristics had on employee appraisals of work stressors, in that a high proportion of behavioral problems was related to less time pressure and fewer role conflicts for employees in specialized units. Unit specialization was also related to improved team climates and the organizational commitment of employees. Work stressors were associated with problems in care quality. Time pressure explained most of the differences between units in how the employees perceived the quality of the physical and psychosocial care they provide for the residents. A high level of job demands in the unit was also found to be related to increases in all of the clinical quality problems studied. High job control buffered the effects of job demands on the quality of care in terms of the use of restraints on elderly residents. Physical restraint and especially antipsychotic drug use were less prevalent in units that combined high job demands with high control for employees. In contrast, in high-strain units where heavy job demands coincided with a lack of control for employees, quality was poor in terms of the frequent use of physical restraints. In addition, procedural injustice was related to the frequent use of antianxiety or hypnotic drugs for elderly residents. The results suggest that both job control and procedural justice may have improved employees' abilities to cope when caring for elderly residents, resulting in better organizational performance.
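The buffering ("moderation") effect of job control on job demands described above is typically tested with an interaction term in a regression model. The sketch below illustrates this with statsmodels on synthetic unit-level data; the variable names and the simple OLS specification are assumptions made for illustration, not the study's actual (likely multilevel) analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic unit-level data (invented; not the STAKES/RAI data).
rng = np.random.default_rng(2)
n_units = 120
demands = rng.normal(0, 1, n_units)   # job demands / time pressure (standardized)
control = rng.normal(0, 1, n_units)   # job control (standardized)

# Restraint use worsens with demands, but less so when control is high (buffering).
restraint_use = (0.5 * demands - 0.3 * control
                 - 0.4 * demands * control
                 + rng.normal(0, 1, n_units))

df = pd.DataFrame({"demands": demands, "control": control,
                   "restraint_use": restraint_use})

# 'demands * control' expands to both main effects plus the interaction term;
# a significant negative interaction is consistent with a buffering effect.
model = smf.ols("restraint_use ~ demands * control", data=df).fit()
print(model.summary())
```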
Abstract:
In line with cultural psychology and developmental theory, a single-case approach is applied to construct knowledge on how children's interactions emerge interlinked with their historical, social, cultural, and material context. The study focuses on the negotiation of constraints and meaning construction among 2- to 3-year-old children, a preschool teacher, and the researcher in settings with water. Water as an element offers a special case of cultural canalization: adults selectively monitor and guide children's access to it. The work follows the socio-cultural tradition in psychology, particularly the co-constructivist theory of human development and the Network of Meanings perspective developed at the University of São Paulo. Valsiner's concepts of the Zone of Free Movement and the Zone of Promoted Action are applied together with studies in which interactions are seen as spaces of construction where the negotiation of constraints for actions, emotions, and conceptions occurs. The corpus was derived at a Finnish municipal day care centre. During a seven-month period, the children's actions were video recorded in small groups twice a month, with the teacher and the researcher present. Four sessions with two children were chosen for qualitative microanalysis; the analysis also addressed the transformations during the months covered by the study. Moreover, the data derivation was analyzed reflectively. The narrowed-down arenas for action were continuously negotiated among the participants both nonverbally and verbally. The adults' expectations and intentions were materialized in the arrangements of the setting, canalizing the possibilities for action. The children's co-regulated actions emerged in relation to the adults' presence, re-structuring attempts, and the constraints of the setting. The children co-constructed novel movements and meanings in relation to the initiatives and objects offered. Gestures, postures, and verbalizations emerged from initially random movements and came to carry specific meanings and functions; meaning construction became abbreviated. The participants attempted to make sense of the ambiguous (explicit and implicit) intentions and fuzzy boundaries of promoted and possible actions: individualized yet overlapping features were continuously negotiated by all the participants. Over the months, the children's actions increasingly corresponded to the adults' (re-defined) conceptions of 'water researchers' as an emerging group culture. Water became an instrument and a context for co-regulation. The study contributes to discussions on children as participants in cultural canalization and emphasizes the need to analyze, in early childhood education practices, the implicit and explicit constraint structures for action.
Abstract:
The resources of health systems are limited, and information on the performance of the health system is needed for decision-making. This study concerns the utilization of administrative registers in the context of health system performance evaluation. In order to address this issue, a multidisciplinary methodological framework for register-based data analysis is defined. Because the fixed structure of register-based data indirectly imposes constraints on the theoretical constructs, it is essential to elaborate the whole analytic process with respect to the data. The fundamental methodological concepts and theories are synthesized into a data-sensitive approach that helps to understand and overcome the problems likely to be encountered during a register-based data analysis process. Pragmatically useful health system performance monitoring should produce valid information about the volume of health problems, the use of services, and the effectiveness of the services provided. A conceptual model for hip fracture performance assessment is constructed, and the validity of Finnish registers as a data source for the performance assessment of hip fracture treatment is confirmed. Solutions to several pragmatic problems related to the development of a register-based hip fracture incidence surveillance system are proposed. The monitoring of the effectiveness of treatment is shown to be possible in terms of care episodes. Finally, an example is given of the justification of a more detailed performance indicator to be used in the profiling of providers. In conclusion, it is possible to produce useful and valid information on health system performance using Finnish register-based data. However, this seems to be far more complicated than is typically assumed. The perspectives given in this study provide a necessary basis for further work and help in the routine implementation of a hip fracture monitoring system in Finland.
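The abstract does not describe how register-based hip fracture incidence would actually be computed. As a generic, hedged sketch, the snippet below derives age- and sex-specific incidence rates per 100,000 person-years from invented discharge records and population counts; the column names and figures are illustrative only, not the Finnish register data.

```python
import pandas as pd

# Invented register-style inputs (not the Finnish register data):
# hospital discharge records with one row per hip fracture care episode,
# and population person-years by age group and sex for the same period.
fractures = pd.DataFrame({
    "age_group": ["70-79", "70-79", "80+", "80+", "80+"],
    "sex":       ["F",     "M",     "F",   "F",   "M"],
})
population = pd.DataFrame({
    "age_group": ["70-79", "70-79", "80+", "80+"],
    "sex":       ["F",     "M",     "F",   "M"],
    "person_years": [250_000, 180_000, 120_000, 60_000],
})

# Count episodes per stratum and convert to incidence per 100,000 person-years.
counts = fractures.value_counts(["age_group", "sex"]).rename("fractures").reset_index()
rates = population.merge(counts, on=["age_group", "sex"], how="left").fillna({"fractures": 0})
rates["per_100k"] = 100_000 * rates["fractures"] / rates["person_years"]
print(rates)
```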