57 results for Institutional Evaluation Performance


Relevance: 30.00%

Publisher:

Abstract:

PURPOSE: To compare the diagnostic performance of multi-detector CT arthrography (CTA) and 1.5-T MR arthrography (MRA) in detecting hyaline cartilage lesions of the shoulder, with arthroscopic correlation. PATIENTS AND METHODS: CTA and MRA prospectively obtained in 56 consecutive patients following the same arthrographic procedure were independently evaluated for glenohumeral cartilage lesions (modified Outerbridge grade ≥2 and grade 4) by two musculoskeletal radiologists. The cartilage surface was divided into 18 anatomical areas. Arthroscopy was taken as the reference standard. The diagnostic performance of CTA and MRA was compared using ROC analysis. Interobserver and intraobserver agreement was determined by κ statistics. RESULTS: Sensitivity and specificity of CTA varied from 46.4 to 82.4% and from 89.0 to 95.9%, respectively; sensitivity and specificity of MRA varied from 31.9 to 66.2% and from 91.1 to 97.5%, respectively. The diagnostic performance of CTA was statistically significantly better than that of MRA for both readers (all p ≤ 0.04). Interobserver agreement for the evaluation of cartilage lesions was substantial with CTA (κ = 0.63) and moderate with MRA (κ = 0.54). Intraobserver agreement was almost perfect with both CTA (κ = 0.94-0.95) and MRA (κ = 0.83-0.87). CONCLUSION: The diagnostic performance of CTA and MRA for the detection of glenohumeral cartilage lesions is moderate, although statistically significantly better with CTA. KEY POINTS: • CTA has moderate diagnostic performance for detecting glenohumeral cartilage substance loss. • MRA has moderate diagnostic performance for detecting glenohumeral cartilage substance loss. • CTA is more accurate than MRA for detecting cartilage substance loss.
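As an aside on the agreement statistics quoted above, the following minimal Python sketch shows how an interobserver Cohen's kappa can be computed for per-area lesion gradings; the two rating vectors are invented placeholders, not data from the study.

```python
# Hedged sketch: interobserver agreement (Cohen's kappa) for per-area
# cartilage gradings. The rating vectors are invented placeholders, NOT study data.
from sklearn.metrics import cohen_kappa_score

# One entry per anatomical area: 1 = lesion (Outerbridge grade >= 2), 0 = no lesion
reader_1 = [0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
reader_2 = [0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0]

kappa = cohen_kappa_score(reader_1, reader_2)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.61-0.80 is conventionally 'substantial' agreement
```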

Relevance: 30.00%

Publisher:

Abstract:

Background: Various outcome measures (Constant score, Simple Shoulder Test [SST]) have been used to assess outcome after shoulder treatment, although none has been accepted as the universal standard. Physicians lack an objective method to reliably assess the activity of their patients in dynamic conditions. Our purpose was to clinically validate the shoulder kinematic scores given by a portable movement analysis device, using the activities of daily living described in the SST as a reference. The secondary objective was to determine whether this device could be used to document the effectiveness of shoulder treatments (for glenohumeral osteoarthritis and rotator cuff disease) and detect early failures. Methods: A clinical trial including 34 patients and a control group of 31 subjects over an observation period of 1 year was set up. Evaluations were made at baseline and 3, 6, and 12 months after surgery by 2 independent observers. Miniature sensors (3-dimensional gyroscopes and accelerometers) allowed kinematic scores to be computed. They were compared with the regular outcome scores: SST; Disabilities of the Arm, Shoulder and Hand; American Shoulder and Elbow Surgeons; and Constant. Results: Good to excellent correlations (0.61-0.80) were found between kinematic and clinical scores. Significant differences were found at each follow-up in comparison with the baseline status for all the kinematic scores (P < .015). The kinematic scores were able to point out abnormal patient outcomes at the first postoperative follow-up. Conclusion: Kinematic scores add information to the regular outcome tools. They offer an effective way to measure the functional performance of patients with shoulder pathology and have the potential to detect early treatment failures. Level of evidence: Level II, Development of Diagnostic Criteria, Diagnostic Study. (C) 2011 Journal of Shoulder and Elbow Surgery Board of Trustees.
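For readers wanting to reproduce this kind of validation, the sketch below shows how a device-derived kinematic score can be correlated with a clinical score; the paired values are invented placeholders, not trial data.

```python
# Hedged sketch: correlating a device-derived kinematic score with a clinical
# outcome score (e.g. the SST). The paired values are invented placeholders.
import numpy as np
from scipy.stats import pearsonr

kinematic_score = np.array([55, 62, 70, 48, 80, 66, 74, 59])
clinical_score  = np.array([ 6,  7,  9,  5, 11,  8, 10,  7])  # e.g. SST, 0-12 scale

r, p = pearsonr(kinematic_score, clinical_score)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # the study reports correlations of 0.61-0.80
```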

Relevance: 30.00%

Publisher:

Abstract:

The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography (HPTLC), despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
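The probabilistic assignment of evidential value mentioned above is usually expressed as a likelihood ratio. The sketch below is a generic two-Gaussian illustration of that idea under assumed score distributions; it is not the model developed in the cited papers.

```python
# Hedged sketch: a likelihood-ratio style assessment of evidential value for a
# questioned-vs-reference ink comparison score. Generic two-Gaussian illustration,
# NOT the probabilistic model developed in the cited papers.
from scipy.stats import norm

# Assumed similarity-score distributions (placeholders): pairs of samples from
# the same ink vs pairs from different inks.
same_ink_scores = norm(loc=0.9, scale=0.05)
diff_ink_scores = norm(loc=0.6, scale=0.15)

observed_score = 0.85  # similarity between questioned and reference ink (placeholder)

lr = same_ink_scores.pdf(observed_score) / diff_ink_scores.pdf(observed_score)
print(f"Likelihood ratio: {lr:.1f}")  # > 1 supports the same-source proposition
```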

Relevance: 30.00%

Publisher:

Abstract:

In this paper we propose a highly accurate approximation procedure for ruin probabilities in the classical collective risk model, based on a quadrature/rational approximation procedure proposed in [2]. For a certain class of claim size distributions (which contains the completely monotone distributions) we give a theoretical justification for the method. We also show that, under weaker assumptions on the claim size distribution, the method may still perform reasonably well in some cases. This in particular provides an efficient alternative to a related method proposed in [3]. A number of numerical illustrations of the performance of this procedure are provided for both completely monotone and other types of random variables.
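For context, in the classical (Cramér-Lundberg) model with exponentially distributed claim sizes (a completely monotone case) the infinite-horizon ruin probability has a closed form that serves as a natural benchmark for approximation schemes. The sketch below evaluates it for assumed parameter values; it only illustrates that benchmark, not the quadrature/rational approximation proposed in the paper.

```python
# Hedged sketch: benchmark ruin probability in the classical collective risk
# model with exponential claims, where a closed form exists. Parameter values
# are assumed for illustration only.
import math

lam = 1.0   # Poisson claim arrival rate
mu = 1.0    # mean claim size (exponential claims)
c = 1.2     # premium rate (c > lam * mu, i.e. positive safety loading)

def ruin_prob_exponential(u):
    """Infinite-horizon ruin probability for exponential claim sizes:
    psi(u) = (lam*mu/c) * exp(-(1/mu - lam/c) * u)."""
    R = 1.0 / mu - lam / c          # adjustment (Lundberg) coefficient
    return (lam * mu / c) * math.exp(-R * u)

for u in (0.0, 1.0, 5.0, 10.0):
    print(f"u = {u:4.1f}  psi(u) = {ruin_prob_exponential(u):.5f}")
```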

Relevance: 30.00%

Publisher:

Abstract:

PURPOSE: To retrospectively assess the frequency of adverse events related to percutaneous preoperative portal vein embolization (PPVE). MATERIALS AND METHODS: The institutional review board did not require approval or patient informed consent for this study. The adverse events that occurred during PPVE or until planned hepatic surgery was performed or cancelled were retrospectively obtained from clinical, imaging, and laboratory data files in 188 patients (109 male and 79 female patients; mean age, 60 years; range, 16-78 years). Liver resection was planned for metastases (n = 137), hepatocarcinoma (n = 31), cholangiocarcinoma (n = 15), fibrolamellar hepatoma (n = 1), and benign disease (n = 4). PPVE was performed with a single-lumen 5-F catheter and a contralateral approach, with n-butyl cyanoacrylate mixed with iodized oil as the main embolic agent. The rate of complications in patients with cirrhosis was compared with that in patients without cirrhosis by using the chi-square test. RESULTS: Adverse events occurred in 24 (12.8%) of 188 patients, including 12 complications and 12 incidental imaging findings. Complications included thrombosis of the portal vein feeding the future remnant liver (n = 1); migration of emboli into the portal vein feeding the future remnant liver, which necessitated angioplasty (n = 2); hemoperitoneum (n = 1); rupture of a metastasis in the gallbladder (n = 1); transitory hemobilia (n = 1); and transient liver failure (n = 6). Incidental findings were migration of small emboli into nontargeted portal branches (n = 10) and subcapsular hematoma (n = 2). Among the 187 patients in whom PPVE was technically successful, there was a significant difference (P < .001) in the occurrence of liver failure after PPVE between patients with cirrhosis (five of 30) and those without (one of 157). Sixteen liver resections were cancelled owing to cancer progression (n = 12), insufficient hypertrophy of the nonembolized liver (n = 3), or complete portal thrombosis (n = 1). CONCLUSION: PPVE is a safe adjuvant technique for inducing hypertrophy when the initial liver reserve is insufficient. Post-PPVE transient liver failure is more common in patients with cirrhosis than in those without cirrhosis.
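As an illustration of the comparison reported above (liver failure in five of 30 patients with cirrhosis versus one of 157 without), the following sketch runs a chi-square test on the corresponding 2x2 table; with such small expected counts, Fisher's exact test is also shown as the more conservative check.

```python
# Hedged sketch: comparing post-PPVE liver-failure rates in patients with and
# without cirrhosis (5/30 vs 1/157, as reported in the abstract above).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

#                 liver failure, no failure
table = np.array([[5, 25],      # cirrhosis (n = 30)
                  [1, 156]])    # no cirrhosis (n = 157)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square p = {p:.4f}")

# With expected cell counts this small, Fisher's exact test is the safer choice.
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher exact p = {p_exact:.4f}")
```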

Relevance: 30.00%

Publisher:

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the characteristics of the available software. The number of drugs handled varies widely, and 8 programs offer the user the ability to add their own drug models. 10 computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top 2 software tools emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas 2 integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
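To illustrate the kind of a posteriori (Bayesian) adjustment these programs perform, the sketch below computes a maximum a posteriori clearance estimate from a single measured concentration under a one-compartment IV bolus model; the model, prior and numbers are assumptions for illustration and are not taken from any of the evaluated software.

```python
# Hedged sketch: Bayesian (MAP) individualisation of a PK parameter from a
# single measured concentration. Model, prior and values are illustrative
# assumptions, not those of any benchmarked TDM program.
import numpy as np
from scipy.optimize import minimize_scalar

dose = 1000.0   # mg, IV bolus
t_obs = 12.0    # h after dose
c_obs = 4.0     # mg/L, measured concentration
V = 30.0        # L, volume of distribution (assumed known here)

cl_pop, omega = 5.0, 0.3   # lognormal population prior on clearance (L/h, ~30% CV)
sigma = 0.5                # mg/L, residual (assay + model) error, additive Gaussian

def neg_log_posterior(cl):
    c_pred = (dose / V) * np.exp(-(cl / V) * t_obs)        # 1-compartment IV bolus
    prior = ((np.log(cl) - np.log(cl_pop)) / omega) ** 2   # lognormal prior term
    lik = ((c_obs - c_pred) / sigma) ** 2                   # residual-error term
    return 0.5 * (prior + lik)

res = minimize_scalar(neg_log_posterior, bounds=(0.5, 20.0), method="bounded")
print(f"MAP clearance estimate: {res.x:.2f} L/h (population value {cl_pop} L/h)")
```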

Relevance: 30.00%

Publisher:

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the characteristics of the available software. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. In addition, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage and automated report generation.


Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Tracheal intubation may be more difficult in morbidly obese (MO) patients than in the non-obese. The aim of this study was to evaluate clinically whether the use of the Video Intubation Unit (VIU), a video-optical intubation stylet, could improve the laryngoscopic view compared with the standard Macintosh laryngoscope in this specific population. METHODS: We studied 40 MO patients (body mass index >35 kg/m2) scheduled for bariatric surgery. Each patient had a conventional laryngoscopy and a VIU inspection. The laryngoscopic grades (LG) using the Cormack and Lehane scoring system were noted and compared. Thereafter, the patients were randomised to be intubated with one of the two techniques. In one group, the patients were intubated with the help of the VIU, and in the control group, tracheal intubation was performed conventionally. The duration of intubation, as well as the minimal SpO2 reached during the procedure, was measured. RESULTS: Patient characteristics were similar in both groups. Seventeen patients had a direct LG of 2 or 3 (no patient had a grade of 4). In these 17 patients, the LG systematically improved with the VIU and always reached grade 1 (P<0.0001). The intubation time was shorter in the VIU group, but the difference did not reach significance. There was no difference in post-intubation SpO2. CONCLUSION: In MO patients, the use of the VIU significantly improves visualisation of the larynx, thereby improving intubation conditions.

Relevance: 30.00%

Publisher:

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.
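The fixed-lid planar water-surface idea mentioned above can be illustrated with a minimal sketch: depth is the difference between an imposed planar water surface and the bed, and discharge across a section is then partitioned in proportion to a Manning-type conveyance. This is a generic reduced-complexity illustration with invented numbers, not the specific RC model applied to the Rio Parana.

```python
# Hedged sketch: the fixed-lid planar water-surface approximation used in
# reduced-complexity (RC) flow models. Depth = planar surface - bed, and unit
# discharge across a cross-section is partitioned by depth**(5/3)
# (Manning-type conveyance). Numbers are invented for illustration.
import numpy as np

bed = np.array([12.0, 10.5, 9.8, 9.5, 9.9, 10.8, 12.2])  # bed elevation (m)
water_surface = 13.0                                       # planar fixed lid (m)
total_q = 15000.0                                          # total discharge (m3/s)

depth = np.clip(water_surface - bed, 0.0, None)            # wet depth per cell
conveyance = depth ** (5.0 / 3.0)                          # relative conveyance
unit_q = total_q * conveyance / conveyance.sum()           # discharge per cell

for d, q in zip(depth, unit_q):
    print(f"depth = {d:4.1f} m   discharge share = {q:8.1f} m3/s")
```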

Relevance: 30.00%

Publisher:

Abstract:

INTRODUCTION: Video records are widely used to analyze performance in alpine skiing at professional or amateur level. Part of these analyses requires the labeling of movements (i.e. determining when specific events occur). Although differences among coaches, and differences for the same coach between different dates, are expected, they have never been quantified. Knowing these differences is essential to determine which parameters can be used reliably. This study aimed to quantify the precision and the repeatability of alpine skiing coaches of various levels, as has been done in other fields (Koo et al, 2005). METHODS: Software similar to commercial products was designed to allow video analyses. 15 coaches divided into 3 groups (5 amateur coaches (G1), 5 professional instructors (G2) and 5 semi-professional coaches (G3)) were enrolled. They were asked to label 15 timing parameters (TP) per curve according to the Swiss ski manual (Terribilini et al, 2001). TP included phases (initiation, steering I-II) and body and ski movements (e.g. rotation, weighting, extension, balance). Three video sequences sampled at 25 Hz were used and one curve per video was labeled. The first video was used to familiarize the analyzer with the software. The two other videos, corresponding to slalom and giant slalom, were considered for the analysis. G1 performed the analysis twice (A1 and A2) on different dates, and TP were randomized between both analyses. Reference TP were defined as the median of G2 and G3 at A1. Precision was defined as the RMS difference between individual TP and reference TP, whereas repeatability was calculated as the RMS difference between individual TP at A1 and at A2. RESULTS AND DISCUSSION: For G1, G2 and G3, a precision of +/-5.6, +/-3.0 and +/-2.0 frames, respectively, was obtained. These results, showing that G2 was more precise than G1 and G3 more precise than G2, were in accordance with the groups' levels. The repeatability for G1 was +/-3.1 frames. Furthermore, differences in precision among TP were observed for G2 and G3, with the largest difference of +/-5.9 frames for "body counter-rotation movement in steering phase II" and the smallest of 0.8 frames for "ski unweighting in initiation phase". CONCLUSION: This study quantified coaches' ability to label video records in terms of precision and repeatability. The best precision, obtained for G3, was +/-0.08 s, which corresponds to +/-6.5% of the curve cycle. Regarding repeatability, we obtained a result of +/-0.12 s for G1, corresponding to +/-12% of the curve cycle. The repeatability of G2 and G3 is expected to be lower than the precision of G1 and will be assessed soon. In conclusion, our results indicate that the labeling of video records is reliable for some TP, whereas caution is required for others. REFERENCES Koo S, Gold MD, Andriacchi TP. (2005). Osteoarthritis, 13, 782-789. Terribilini M, et al. (2001). Swiss Ski manual, 29-46. IASS, Lucerne.
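The precision and repeatability metrics used above are RMS differences between labeled timing parameters. A minimal sketch, with invented frame numbers, is given below.

```python
# Hedged sketch: precision as the RMS difference between a coach's labeled
# timing parameters (TP) and the reference, and repeatability as the RMS
# difference between the same coach's two sessions (A1, A2). Frame numbers
# are invented placeholders, not study data.
import numpy as np

reference = np.array([12, 40, 78, 102, 131])   # reference TP (frames, median of expert groups)
coach_a1  = np.array([14, 37, 80, 108, 128])   # coach's labels, first analysis
coach_a2  = np.array([11, 42, 76, 104, 133])   # coach's labels, second analysis

precision = np.sqrt(np.mean((coach_a1 - reference) ** 2))
repeatability = np.sqrt(np.mean((coach_a1 - coach_a2) ** 2))

fps = 25.0  # video sampled at 25 Hz, as in the study
print(f"precision     = {precision:.1f} frames ({precision / fps:.2f} s)")
print(f"repeatability = {repeatability:.1f} frames ({repeatability / fps:.2f} s)")
```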

Relevance: 30.00%

Publisher:

Abstract:

Syncope is a common clinical symptom, but its origin remains undetermined in up to 60% of patients admitted to an emergency department. The development of dedicated syncope clinics has considerably modified the evaluation of patients with unexplained syncope by directing them towards non-invasive investigation strategies such as the tilt test, carotid sinus massage and the hyperventilation test. However, there are few data in the literature on the real diagnostic performance of these functional tests. Our research analyses the data of the first 939 patients referred to the CHUV outpatient syncope clinic for the investigation of syncope of undetermined origin. The aims of this thesis are (1) to evaluate the diagnostic performance of the standardised management algorithm and of the different tests performed in our clinic and (2) to determine the common clinical characteristics of patients with a final diagnosis of arrhythmic or vasovagal syncope. Our work demonstrates that a standardised management algorithm based on non-invasive tests identifies the cause of two thirds of initially unexplained syncopes. It also shows that benign aetiologies, such as vasovagal or psychogenic syncope, account for half of the causes of syncope, whereas cardiac arrhythmias remain infrequent. Finally, it demonstrates that the absence of prodromal symptoms, particularly in elderly patients with functional limitation or a prolonged P-wave duration on the electrocardiogram, suggests an arrhythmic origin. This thesis will contribute to optimising our standardised management algorithm for syncope of undetermined origin and opens new research perspectives for the development of models based on clinical factors to predict the main causes of syncope.


Relevance: 30.00%

Publisher:

Abstract:

[Table of contents] Technology assessment in health care in the United States: an historical review / S. Perry. - The aims and methods of technology assessment / JH Glasser. - Evaluation des technologies de la santé / A. Griffiths. - Les données nécessaires pour l'évaluation des technologies médicales / R. Chrzanowski, F. Gutzwiller, F. Paccaud. - Economic issues in technology assessment / DR Lairson, JM Swint. - Two decades of experience in technology assessment: evaluating the safety, performance, and cost effectiveness of medical equipment / JJ Nobel. - Demography and technology assessment / H. Hansluwka. - Méthodes expérimentale et non expérimentale pour l'évaluation des innovations technologiques / R. Chrzanowski, F. Paccaud. - Skull radiography in head trauma: a successful case of technology assessment / NT Racoveanu. - Complications associées à l'anesthésie: une étude prospective en France / L. Tiret et al. - Impact de l'information publique sur les taux opératoires: le cas de l'hystérectomie / G. Domenighetti, P. Luraschi, A. Casabianca. - The clinical effectiveness of acupuncture for the relief of chronic pain / MS Patel, F. Gutzwiller, F. Paccaud, A. Marazzi. - Soins à domicile et hébergement à long terme: à la recherche d'un développement optimum / G. Tinturier. - Economic evaluation of six scenarios for the treatment of stones in the kidney and ureter by surgery or ESWL / MS Patel et al. - Technology assessment and medical practice / F. Gutzwiller. - Technology assessment and health policy / SJ Reiser. - Global programme on appropriate technology for health, its role and place within WHO / K. Staehr Johansen.

Relevance: 30.00%

Publisher:

Abstract:

The aim of this study was to develop an ambulatory system for three-dimensional (3D) knee kinematics evaluation, which can be used outside a laboratory during long-term monitoring. In order to show the efficacy of this ambulatory system, knee function was analysed using the system after an anterior cruciate ligament (ACL) lesion and after reconstructive surgery. The proposed system was composed of two 3D gyroscopes, fixed on the shank and on the thigh, and a portable data logger for signal recording. The measured parameters were the 3D mean range of motion (ROM), and the healthy knee was used as control. The precision of this system was first assessed using an ultrasound reference system, and the repeatability was also estimated. A clinical study was then performed on five unilateral ACL-deficient men (range: 19-36 years) prior to, and a year after, the surgery. The patients were evaluated with the IKDC score, and the kinematic measurements were carried out on a 30 m walking trial. The precision in comparison with the reference system was 4.4 degrees, 2.7 degrees and 4.2 degrees for flexion-extension, internal-external rotation, and abduction-adduction, respectively. The repeatability of the results for the three directions was 0.8 degrees, 0.7 degrees and 1.8 degrees. The averaged ROM of the five patients' healthy knees was 70.1 degrees (standard deviation (SD) 5.8 degrees), 24.0 degrees (SD 3.0 degrees) and 12.0 degrees (SD 6.3 degrees) for flexion-extension, internal-external rotation and abduction-adduction before surgery, and 76.5 degrees (SD 4.1 degrees), 21.7 degrees (SD 4.9 degrees) and 10.2 degrees (SD 4.6 degrees) 1 year following the reconstruction. The results for the pathologic knee were 64.5 degrees (SD 6.9 degrees), 20.6 degrees (SD 4.0 degrees) and 19.7 degrees (SD 8.2 degrees) during the first evaluation, and 72.3 degrees (SD 2.4 degrees), 25.8 degrees (SD 6.4 degrees) and 12.4 degrees (SD 2.3 degrees) during the second one. The performance of the system enabled us to detect knee function modifications in the sagittal and transverse planes. Prior to the reconstruction, the ROM of the injured knee was lower in flexion-extension and internal-external rotation in comparison with the contralateral knee. One year after the surgery, four patients were classified normal (A) and one almost normal (B) according to the IKDC score, and changes in the kinematics of the five patients remained: lower flexion-extension ROM and higher internal-external rotation ROM in comparison with the contralateral knee. The 3D kinematics was changed after an ACL lesion and remained altered one year after the surgery.
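A minimal sketch of how a joint range of motion can be derived from gyroscope signals is given below; the angular-rate signal is synthetic, and in practice drift correction and sensor-to-segment alignment are required, so this only illustrates the principle.

```python
# Hedged sketch: estimating a flexion-extension range of motion (ROM) from
# shank- and thigh-mounted gyroscope angular rates by integration. The signals
# are synthetic; real use needs drift correction and sensor alignment.
import numpy as np

fs = 200.0                                   # assumed sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)            # one synthetic gait cycle

# Synthetic angular rates (deg/s) about the knee flexion axis; the difference
# is the relative (joint) angular rate.
thigh_rate = 60.0 * np.sin(2 * np.pi * t)
shank_rate = -160.0 * np.sin(2 * np.pi * t)
joint_rate = thigh_rate - shank_rate

# Integrate the joint angular rate to obtain the joint angle, then ROM = max - min.
angle = np.cumsum(joint_rate) / fs
rom = angle.max() - angle.min()
print(f"Flexion-extension ROM: {rom:.1f} degrees")
```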