945 results for scoring system
Abstract:
OBJECTIVE To develop and evaluate a method for ultrasound guidance in performing the proximal paravertebral block for flank anaesthesia in cattle through a cadaveric study, followed by clinical application. STUDY DESIGN Prospective experimental cadaveric study and clinical case series. ANIMALS Previously frozen lumbar sections of cows without known spinal abnormalities were used. The clinical case group comprised ten animals for which a right flank laparotomy was indicated. METHODS Twenty cow cadavers were used to perform ultrasound-guided bilateral injections of 1.0 mL of dye (1% Toluidine Blue in 1% Borax) at the intervertebral foramen at the level of the T13, L1 and L2 spinal nerves. Distance and depth of injection, staining of the dorsal and ventral nerve branches, and deviation from the target were evaluated. The investigator's confidence regarding visualisation and expected success at staining the nerve was assessed. Ten clinical cases received ultrasound-guided proximal paravertebral anaesthesia. Analgesic success was evaluated using a 4-grade scoring system at 10 minutes after the injection and again during surgery. Categorical variables were described using frequencies and proportions. RESULTS Both dorsal and ventral branches of the spinal nerves T13, L1 or L2 were at least partially stained in 41% of injections, while in 77% of injections one of the branches was stained. Five out of ten clinical cases had satisfactory anaesthesia. There was no significant association between confidence at injection and either staining or analgesic success. CONCLUSION Results from the cadaveric and clinical studies suggest no significant improvement using ultrasound guidance to perform the proximal paravertebral block in cows compared with our previous clinical experience and with references in the literature using the blind method. CLINICAL RELEVANCE Further research should be conducted to improve the ultrasound-guided technique described in this study.
Abstract:
BACKGROUND We report on the design and implementation of a study protocol entitled Acupuncture randomised trial for post anaesthetic recovery and postoperative pain - a pilot study (ACUARP) designed to investigate the effectiveness of acupuncture therapy performed in the perioperative period on post anaesthetic recovery and postoperative pain. METHODS/DESIGN The study is designed as a randomised controlled pilot trial with three arms and partial double blinding. We will compare (a) press needle acupuncture, (b) no treatment and (c) press plaster acupressure in a standardised anaesthetic setting. Seventy-five patients scheduled for laparoscopic surgery to the uterus or ovaries will be allocated randomly to one of the three trial arms. The total observation period will begin one day before surgery and end on the second postoperative day. Twelve press needles and press plasters are to be administered preoperatively at seven acupuncture points. The primary outcome measure will be time from extubation to 'ready for discharge' from the post anaesthesia care unit (in minutes). The 'ready for discharge' end point will be assessed using three different scores: the Aldrete score, the Post Anaesthetic Discharge Scoring System and an In-House score. Secondary outcome measures will comprise pre-, intra- and postoperative variables (which are anxiety, pain, nausea and vomiting, concomitant medication). DISCUSSION The results of this study will provide information on whether acupuncture may improve patient post anaesthetic recovery. Comparing acupuncture with acupressure will provide insight into potential therapeutic differences between invasive and non-invasive acupuncture techniques. TRIAL REGISTRATION NCT01816386 (First received: 28 October 2012).
Abstract:
Inflammatory bowel disease (IBD) is a common condition in dogs, and a dysregulated innate immunity is believed to play a major role in its pathogenesis. S100A12 is an endogenous damage-associated molecular pattern molecule, which is involved in phagocyte activation and is increased in serum/fecal samples from dogs with IBD. S100A12 binds to the receptor of advanced glycation end products (RAGE), a pattern-recognition receptor, and results of studies in human patients with IBD and other conditions suggest a role of RAGE in chronic inflammation. Soluble RAGE (sRAGE), a decoy receptor for inflammatory proteins (e.g., S100A12) that appears to function as an anti-inflammatory molecule, was shown to be decreased in human IBD patients. This study aimed to evaluate serum sRAGE and serum/fecal S100A12 concentrations in dogs with IBD. Serum and fecal samples were collected from 20 dogs with IBD before and after initiation of medical treatment and from 15 healthy control dogs. Serum sRAGE and serum and fecal S100A12 concentrations were measured by ELISA, and were compared between dogs with IBD and healthy controls, and between dogs with a positive outcome (i.e., clinical remission, n=13) and those that were euthanized (n=6). The relationship of serum sRAGE concentrations with clinical disease activity (using the CIBDAI scoring system), serum and fecal S100A12 concentrations, and histologic disease severity (using a 4-point semi-quantitative grading system) was tested. Serum sRAGE concentrations were significantly lower in dogs with IBD than in healthy controls (p=0.0003), but were not correlated with the severity of histologic lesions (p=0.4241), the CIBDAI score before (p=0.0967) or after treatment (p=0.1067), the serum S100A12 concentration before (p=0.9214) and after treatment (p=0.4411), or with the individual outcome (p=0.4066). Clinical remission and the change in serum sRAGE concentration after treatment were not significantly associated (p=0.5727); however, serum sRAGE concentrations increased only in IBD dogs with complete clinical remission. Also, dogs that were euthanized had significantly higher fecal S100A12 concentrations than dogs that were alive at the end of the study (p=0.0124). This study showed that serum sRAGE concentrations are decreased in dogs diagnosed with IBD compared to healthy dogs, suggesting that sRAGE/RAGE may be involved in the pathogenesis of canine IBD. Lack of correlation between sRAGE and S100A12 concentrations is consistent with sRAGE functioning as a non-specific decoy receptor. Further studies need to evaluate the gastrointestinal mucosal expression of RAGE in healthy and diseased dogs, and also the formation of S100A12-RAGE complexes.
Abstract:
Histomorphological features of colorectal cancers (CRC) represent valuable prognostic indicators for clinical decision making. The invasive margin is a central feature for prognostication shaped by the complex processes governing tumor-host interaction. Assessment of the tumor border can be performed on standard paraffin sections and shows promise for integration into the diagnostic routine of gastrointestinal pathology. In aggressive CRC, an extensive dissection of host tissue is seen with loss of a clear tumor-host interface. This pattern, termed "infiltrative tumor border configuration" has been consistently associated with poor survival outcome and early disease recurrence of CRC-patients. In addition, infiltrative tumor growth is frequently associated with presence of adverse clinicopathological features and molecular alterations related to aggressive tumor behavior including BRAFV600 mutation. In contrast, a well-demarcated "pushing" tumor border is seen frequently in CRC-cases with low risk for nodal and distant metastasis. A pushing border is a feature frequently associated with mismatch-repair deficiency and can be used to identify patients for molecular testing. Consequently, assessment of the tumor border configuration as an additional prognostic factor is recommended by the AJCC/UICC to aid the TNM-classification. To promote the assessment of the tumor border configuration in standard practice, consensus criteria on the defining features and method of assessment need to be developed further and tested for inter-observer reproducibility. The development of a standardized quantitative scoring system may lay the basis for verification of the prognostic associations of the tumor growth pattern in multivariate analyses and clinical trials. This article provides a comprehensive review of the diagnostic features, clinicopathological associations, and molecular alterations associated with the tumor border configuration in early stage and advanced CRC.
Abstract:
OBJECTIVE To evaluate the correlation between clinical measures of disease activity and an ultrasound (US) scoring system for synovitis applied by many different ultrasonographers in a daily routine care setting within the Swiss registry for RA (SCQM), and to determine the sensitivity to change of this US score. METHODS One hundred and eight Swiss rheumatologists were trained in performing the Swiss Sonography in Arthritis and Rheumatism (SONAR) score. US B-mode and Power Doppler (PwD) scores were correlated with DAS28 and compared between the clinical categories in a cross-sectional cohort of patients. In patients with a second US (longitudinal cohort), we investigated whether change in the US score correlated with change in DAS and evaluated the responsiveness of both methods. RESULTS In the cross-sectional cohort of 536 patients, the correlation between the B-mode score and DAS28 was significant but modest (Pearson coefficient r = 0.41, P < 0.0001). The same was true for the PwD score (r = 0.41, P < 0.0001). In the longitudinal cohort of 183 patients we also found a significant correlation between change in the B-mode and PwD scores and change in DAS28 (r = 0.54, P < 0.0001 and r = 0.46, P < 0.0001, respectively). Both methods of evaluation (DAS and US) showed similar responsiveness according to the standardized response mean (SRM). CONCLUSIONS The SONAR score is practicable and was applied by many rheumatologists in daily routine care after initial training. It demonstrates significant correlations with both the level of, and change in, disease activity as measured by DAS. At the level of the individual patient, however, the US score shows many discrepancies, and overlapping results exist.
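For readers unfamiliar with the statistics named in this abstract, the following is a minimal sketch (not the SCQM analysis code) of how a Pearson correlation between score changes and the standardized response mean (SRM, mean change divided by the standard deviation of change) can be computed; the data arrays are hypothetical.

```python
# Illustrative only: Pearson correlation and SRM on hypothetical paired changes.
import numpy as np
from scipy import stats

def srm(change_scores):
    """Standardized response mean: mean change divided by SD of change."""
    change_scores = np.asarray(change_scores, dtype=float)
    return change_scores.mean() / change_scores.std(ddof=1)

# Hypothetical change scores for illustration, not study data
bmode_change = np.array([3, -1, 4, 2, 0, 5, -2, 3])
das28_change = np.array([0.8, -0.2, 1.1, 0.4, 0.1, 1.6, -0.5, 0.9])

r, p = stats.pearsonr(bmode_change, das28_change)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
print(f"SRM (B-mode) = {srm(bmode_change):.2f}, SRM (DAS28) = {srm(das28_change):.2f}")
```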
Abstract:
BACKGROUND Because computed tomography (CT) has advantages for visualizing the manifestation of necrosis and local complications, a series of scoring systems based on CT manifestations have been developed for assessing the clinical outcomes of acute pancreatitis (AP), including the CT severity index (CTSI) and the modified CTSI, among others. Although the internationally accepted CTSI has been used successfully to predict the overall mortality and disease severity of AP, recent literature has revealed its limitations. Using the Delphi method, we established a new scoring system based on retrocrural space involvement (RCSI) and compared its effectiveness at evaluating the mortality and severity of AP with that of the CTSI. METHODS We reviewed CT images of 257 patients with AP taken within 3-5 days of admission in 2012. The RCSI scoring system, which includes assessment of infectious conditions involving the retrocrural space and the adjacent pleural cavity, was established using the Delphi method. Two radiologists independently assessed the RCSI and CTSI scores. The predictive performance of the RCSI and CTSI scoring systems in evaluating the mortality and severity of AP was estimated using receiver operating characteristic (ROC) curves. PRINCIPAL FINDINGS The RCSI score accurately predicted mortality and disease severity. The area under the ROC curve for the RCSI versus the CTSI score was 0.962±0.011 versus 0.900±0.021 for predicting mortality, and 0.888±0.025 versus 0.904±0.020 for predicting the severity of AP. Applying ROC analysis to our data showed that an RCSI score of 4 was the best cutoff value, above which mortality could be identified. CONCLUSION The Delphi method was innovatively adopted to establish a scoring system to predict the clinical outcome of AP. The RCSI scoring system predicts the mortality of AP better than the CTSI system, and the severity of AP equally well.
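The kind of ROC analysis described (an area under the curve plus a "best" cutoff) is commonly computed as below; this is a generic sketch, not the authors' code, and the RCSI scores and mortality labels are hypothetical.

```python
# Illustrative sketch: AUC and a cutoff chosen by the Youden index.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rcsi_scores = np.array([1, 2, 5, 6, 3, 7, 2, 8, 4, 1])  # hypothetical RCSI scores
died        = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 0])  # hypothetical mortality labels

auc = roc_auc_score(died, rcsi_scores)
fpr, tpr, thresholds = roc_curve(died, rcsi_scores)
best_cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden's J = sensitivity + specificity - 1

print(f"AUC = {auc:.3f}, best cutoff ~ {best_cutoff}")
```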
Abstract:
INTRODUCTION Dexmedetomidine was shown in two European randomized double-blind double-dummy trials (PRODEX and MIDEX) to be non-inferior to propofol and midazolam in maintaining target sedation levels in mechanically ventilated intensive care unit (ICU) patients. Additionally, dexmedetomidine shortened the time to extubation versus both standard sedatives, suggesting that it may reduce ICU resource needs and thus lower ICU costs. Considering resource utilization data from these two trials, we performed a secondary, cost-minimization analysis assessing the economics of dexmedetomidine versus standard care sedation. METHODS The total ICU costs associated with each study sedative were calculated on the basis of total study sedative consumption and the number of days patients remained intubated, required non-invasive ventilation, or required ICU care without mechanical ventilation. The daily unit costs for these three consecutive ICU periods were set to decline toward discharge, reflecting the observed reduction in mean daily Therapeutic Intervention Scoring System (TISS) points between the periods. A number of additional sensitivity analyses were performed, including one in which the total ICU costs were based on the cumulative sum of daily TISS points over the ICU period, and two further scenarios with declining direct variable daily costs only. RESULTS Based on pooled data from both trials, sedation with dexmedetomidine resulted in lower total ICU costs than using the standard sedatives, with a difference of €2,656 in the median (interquartile range) total ICU costs, €11,864 (€7,070 to €23,457) versus €14,520 (€7,871 to €26,254), and of €1,649 in the mean total ICU costs. The median (mean) total ICU costs with dexmedetomidine compared with those of propofol or midazolam were €1,292 (€747) and €3,573 (€2,536) lower, respectively. The result was robust, indicating lower costs with dexmedetomidine in all sensitivity analyses, including those in which only direct variable ICU costs were considered. The likelihood of dexmedetomidine resulting in lower total ICU costs compared with pooled standard care was 91.0% (72.4% versus propofol and 98.0% versus midazolam). CONCLUSIONS From an economic point of view, dexmedetomidine appears to be a preferable option compared with standard sedatives for providing light to moderate ICU sedation exceeding 24 hours. The savings potential results primarily from the shorter time to extubation. TRIAL REGISTRATION ClinicalTrials.gov NCT00479661 (PRODEX), NCT00481312 (MIDEX).
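The cost-accounting logic in the methods (sedative cost plus declining daily unit costs over three consecutive ICU periods) can be sketched as follows; the euro rates and the example patient are hypothetical and are not the trial figures.

```python
# Rough sketch of the described cost structure, with assumed (hypothetical) daily rates.
def total_icu_cost(sedative_cost, days_intubated, days_niv, days_no_vent,
                   daily_costs=(1500.0, 1200.0, 900.0)):
    """Total ICU cost for one patient: sedative cost plus days in each of the
    three consecutive ICU periods, priced at declining daily rates."""
    return (sedative_cost
            + days_intubated * daily_costs[0]
            + days_niv       * daily_costs[1]
            + days_no_vent   * daily_costs[2])

# Hypothetical patient: 210 euro sedative, 3 days intubated, 1 day NIV, 2 days unventilated ICU care
print(total_icu_cost(210.0, 3, 1, 2))  # -> 7710.0
```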
Abstract:
BACKGROUND Cam-type femoroacetabular impingement (FAI) resulting from an abnormal nonspherical femoral head shape leads to chondrolabral damage and is considered a cause of early osteoarthritis. A previously developed experimental ovine FAI model induces a cam-type impingement that results in localized chondrolabral damage, replicating the patterns found in the human hip. Biochemical MRI modalities such as T2 and T2* may allow for evaluation of the cartilage biochemistry long before cartilage loss occurs and, for that reason, may be a worthwhile avenue of inquiry. QUESTIONS/PURPOSES We asked: (1) Does the histological grading of degenerated cartilage correlate with T2 or T2* values in this ovine FAI model? (2) How accurately can zones of degenerated cartilage be predicted with T2 or T2* MRI in this model? METHODS A cam-type FAI was induced in eight Swiss alpine sheep by performing a closing wedge intertrochanteric varus osteotomy. After ambulation of 10 to 14 weeks, the sheep were euthanized and a 3-T MRI of the hip was performed. T2 and T2* values were measured at six locations on the acetabulum and compared with the histological damage pattern using the Mankin score. This is an established histological scoring system to quantify cartilage degeneration. Both T2 and T2* values are determined by cartilage water content and its collagen fiber network. Of those, the T2* mapping is a more modern sequence with technical advantages (eg, shorter acquisition time). Correlation of the Mankin score and the T2 and T2* values, respectively, was evaluated using the Spearman's rank correlation coefficient. We used a hierarchical cluster analysis to calculate the positive and negative predictive values of T2 and T2* to predict advanced cartilage degeneration (Mankin ≥ 3). RESULTS We found a negative correlation between the Mankin score and both the T2 (p < 0.001, r = -0.79) and T2* values (p < 0.001, r = -0.90). For the T2 MRI technique, we found a positive predictive value of 100% (95% confidence interval [CI], 79%-100%) and a negative predictive value of 84% (95% CI, 67%-95%). For the T2* technique, we found a positive predictive value of 100% (95% CI, 79%-100%) and a negative predictive value of 94% (95% CI, 79%-99%). CONCLUSIONS T2 and T2* MRI modalities can reliably detect early cartilage degeneration in the experimental ovine FAI model. CLINICAL RELEVANCE T2 and T2* MRI modalities have the potential to allow for monitoring the natural course of osteoarthrosis noninvasively and to evaluate the results of surgical treatments targeted to joint preservation.
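The positive and negative predictive values reported here come from dichotomizing histology at Mankin ≥ 3 and testing how well an MRI threshold flags those zones. A minimal sketch of that calculation, with hypothetical Mankin scores, T2* values, and an assumed T2* cutoff (none of these are study data):

```python
# Illustrative PPV/NPV calculation for a threshold-based prediction of Mankin >= 3.
import numpy as np

mankin = np.array([0, 1, 4, 5, 2, 6, 1, 3, 0, 5])            # hypothetical Mankin scores
t2star = np.array([28, 26, 15, 13, 24, 12, 27, 17, 30, 14])  # hypothetical T2* values (ms)

degenerated = mankin >= 3      # advanced cartilage degeneration (ground truth)
predicted   = t2star < 20      # low T2* predicts degeneration (assumed cutoff)

ppv = (degenerated & predicted).sum() / predicted.sum()
npv = (~degenerated & ~predicted).sum() / (~predicted).sum()
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```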
Abstract:
BACKGROUND: This study focused on the descriptive analysis of cattle movements and farm-level parameters derived from cattle movements, which are considered to be generically suitable for risk-based surveillance systems in Switzerland for diseases where animal movements constitute an important risk pathway. METHODS: A framework was developed to select farms for surveillance based on a risk score summarizing five parameters. The proposed framework was validated using data from the bovine viral diarrhoea (BVD) surveillance programme in 2013. RESULTS: A cumulative score was calculated per farm, including the following parameters: the maximum monthly ingoing contact chain (in 2012), the average number of animals per incoming movement, use of mixed alpine pastures, and the number of weeks in 2012 in which a farm had movements registered. The final score for a farm depended on the distribution of the parameters. Different cut-offs (the 50th, 90th, 95th and 99th percentiles) were explored. The final scores ranged between 0 and 5. Validation of the scores against results from the 2013 BVD surveillance programme gave promising results for setting the cut-off for each of the five selected farm-level criteria at the 50th percentile. Restricting testing to farms with a score ≥ 2 would have resulted in the same number of detected BVD-positive farms as testing all farms, i.e., the outcome of the 2013 surveillance programme could have been reached with a smaller survey. CONCLUSIONS: The seasonality and time dependency of the activity of single farms in the networks require a careful assessment of the actual time period included to determine farm-level criteria. However, the selection of farms for risk-based surveillance can be optimized with the proposed scoring system. The system was validated using data from the BVD eradication programme. The proposed method is a promising framework for the selection of farms according to the risk of infection based on animal movements.
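A hedged sketch of the scoring idea described above: each farm receives one point per movement-derived parameter that exceeds the chosen percentile cut-off (a binary criterion such as alpine pasture use counts directly), and farms scoring ≥ 2 are prioritized for testing. The column names, data, and pandas implementation are illustrative assumptions, not the published framework.

```python
# Illustrative per-farm cumulative risk score with 50th-percentile cut-offs.
import pandas as pd

farms = pd.DataFrame({
    "farm_id":           ["A", "B", "C", "D"],
    "max_ingoing_chain": [12, 3, 25, 1],
    "animals_per_move":  [4.0, 1.5, 8.0, 1.0],
    "weeks_with_moves":  [30, 5, 45, 2],
    "alpine_pasture":    [1, 0, 1, 0],          # binary criterion
})

numeric = ["max_ingoing_chain", "animals_per_move", "weeks_with_moves"]
cutoffs = farms[numeric].quantile(0.5)           # 50th-percentile cut-offs

farms["risk_score"] = (farms[numeric] > cutoffs).sum(axis=1) + farms["alpine_pasture"]
selected = farms[farms["risk_score"] >= 2]       # farms prioritized for surveillance
print(selected[["farm_id", "risk_score"]])
```

In practice the cut-off percentile and the score threshold would be tuned against a reference surveillance outcome, as the abstract describes for the 2013 BVD programme.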
Abstract:
The main objective of this study was to develop and validate a computer-based statistical algorithm, based on a multivariable logistic model, that can be translated into a simple scoring system in order to ascertain stroke cases from hospital admission medical records data. This algorithm, the Risk Index Score (RISc), was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project. The validity of the RISc was evaluated by estimating the concordance between stroke ascertainment by the scoring system and stroke ascertainment by physician review of hospital admission records. The goal was to provide a rapid, simple, efficient, and accurate method to ascertain the incidence of stroke from routine hospital admission records for epidemiologic investigations.
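One common way a multivariable logistic model is "translated into a simple scoring system" is to scale and round its coefficients into integer points. The sketch below shows that generic idea only; the predictors, coefficients, and point values are hypothetical and are not the actual RISc.

```python
# Illustrative conversion of logistic-regression coefficients into integer points.
# Hypothetical fitted log-odds coefficients for admission-record variables:
coefficients = {
    "hemiparesis":        1.6,
    "speech_disturbance": 1.1,
    "age_over_70":        0.7,
    "prior_stroke":       0.5,
}

smallest = min(abs(b) for b in coefficients.values())
points = {var: round(b / smallest) for var, b in coefficients.items()}
print(points)  # {'hemiparesis': 3, 'speech_disturbance': 2, 'age_over_70': 1, 'prior_stroke': 1}

def risk_score(patient):
    """Sum the points for every variable documented in the admission record."""
    return sum(points[var] for var, present in patient.items() if present)

print(risk_score({"hemiparesis": True, "speech_disturbance": False,
                  "age_over_70": True, "prior_stroke": False}))  # -> 4
```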
Abstract:
Background. Racial disparities in healthcare span such areas as access, outcomes after procedures, and patient satisfaction. Previous work suggested that minorities experience less healthcare and worse survival rates. In adult orthotopic liver transplantation (OLT) mixed results have been reported, with some studies showing African-American recipients having poor survival compared to Caucasians, and others finding no such discrepancy. Purpose. This study's purpose was to analyze the most recent United Network for Organ Sharing (UNOS) data, both before and after the implementation of the Model for End-Stage Liver Disease (MELD)/Pediatric End-Stage Liver Disease (PELD) scoring system, to determine whether minority racial groups still experience poor outcomes after OLT. Methods. The UNOS dataset for 1992-2001 (Era I) and 2002-2007 (Era II) was used. Patient survival rates for each era and for adult and pediatric recipients were analyzed with adjustment. A separate multivariate analysis was performed on African-American adult patients in Era II in order to identify unique predictors of poor patient survival. Results. The overall study included 66,118 OLT recipients. The majority were Caucasian (78%), followed by Hispanics (13%) and African-Americans (9%). Hispanic and African-American adults were more likely to be female, to have hepatitis C, to be in the intensive care unit (ICU) or ventilated at the time of OLT, to have a MELD score ≥ 23, to have a lower education level, and to have public insurance when compared to Caucasian adults (all p-values < 0.05). Hispanic and African-American pediatric recipients were more likely to have public insurance and less likely to receive a living donor OLT than were Caucasian pediatric OLT recipients (p < 0.05). There was no difference in the likelihood of having a PELD score ≥ 21 among racial groups (p > 0.40). African-American adults in Era I and Era II had worse patient survival rates than both Caucasians and Hispanics (pair-wise p-values < 0.05). This same disparity was seen for pediatric recipients in Era I, but not in Era II. Multivariate analysis of African-American recipients revealed no unique predictors of patient death. Conclusions. African-American race is still a predictor of poor outcome after adult OLT, even after adjustment for multiple clinical, demographic, and liver disease severity variables. Although African-American and Hispanic subgroups share many characteristics previously thought to increase the risk of post-OLT death, only African-American patients have poor survival rates when compared to Caucasians.
Abstract:
Background. Necrotizing pneumonia is generally considered a rare complication of pneumococcal pneumonia in adults. We systematically studied the incidence of necrotizing changes in adult patients with pneumococcal pneumonia, and examined the severity of infection, the role of the causative serotype and the association with bacteremia. Methods. We used a database of all pneumococcal infections identified at our medical center between 2000 and 2010. Original readings of chest X-rays (CXR) and computed tomography (CT) were noted. All images were then reread independently by two radiologists. The severity of disease was assessed using the SMART-COP scoring system. Results. There were 351 cases of pneumococcal pneumonia. Necrosis was reported in none of the original CXR readings and in 6 of 136 (4.4%) CTs. With re-reading, 8 of 351 (2.3%) CXRs and 15 of 136 (11.0%) CTs had necrotizing changes. Overall, these changes were found in 23 of 351 (6.6%, 95% CI 4.0-9.1) patients. The incidence of bacteremia and the admitting SMART-COP scores were similar in patients with and without necrosis (P=1.00 and P=0.32, respectively). Type 3 pneumococcus was more commonly isolated from patients with than from patients without necrotizing pneumonia (P=0.05), but a total of 10 serotypes were identified among the 16 cases in which the organism was available for typing. Conclusions. Necrotizing changes in the lungs were seen in 6.6% (95% CI 4.0-9.1) of a large series of adults with pneumococcal pneumonia. Patients with necrosis were not more likely to have bacteremia or more severe disease. Type 3 pneumococcus was commonly implicated, but 9 other serotypes were also identified.
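The reported 6.6% (95% CI 4.0-9.1) is consistent with a simple normal-approximation confidence interval for 23 necrotizing cases out of 351 pneumonias; a short worked check of that arithmetic:

```python
# Normal-approximation 95% CI for a proportion (23/351), reproducing the reported figures.
import math

cases, n = 23, 351
p = cases / n
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{100*p:.1f}% (95% CI {100*(p - half_width):.1f} - {100*(p + half_width):.1f})")
# -> 6.6% (95% CI 4.0 - 9.1)
```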
Abstract:
Background: Obesity is a major health problem in the United States that has reached epidemic proportions. With most U.S. adults spending the majority of their waking hours at work, the influence of the workplace environment on obesity is gaining in importance. Recent research implicates worksites as providing an 'obesogenic' environment because they encourage overeating and reduce the opportunity for physical activity. Objective: The aims of this study were to describe the nutrition and physical activity environment of Texas Medical Center (TMC) hospitals participating in the Shape Up Houston evaluation study, to develop a scoring system to quantify the environmental data collected with the Environmental Assessment Tool (EAT) survey, and to assess the inter-observer reliability of the EAT survey. Methods: A survey instrument adapted from the Environmental Assessment Tool (EAT), developed by Dejoy DM et al. in 2008 to measure hospital environmental support for nutrition and physical activity, was used for this study. The inter-observer reliability of the EAT survey was measured, and total percent agreement scores were computed. Most responses on the EAT survey are dichotomous (yes/no); these responses were coded '0' for 'no' and '1' for 'yes'. A summative scoring system was developed to quantify these responses, and each hospital was given a score for each scale and subscale on the EAT survey in addition to a total score. All analyses were conducted using Stata 11 software. Results: High inter-observer reliability was observed with the EAT survey; percent agreement scores ranged from 94.4% to 100%. Only 2 of the 5 hospitals had a fitness facility onsite, and scores for exercise programs and outdoor facilities available to hospital employees ranged from 0% to 62% and from 0% to 37.5%, respectively. Healthy-eating scores for hospital cafeterias ranged from 42% to 92% across hospitals, while healthy-vending scores ranged from 0% to 40%. The total TMC 'healthy hospital' score was 49%. Conclusion: The EAT survey is a reliable instrument for measuring the physical activity and nutrition support environment of hospital worksites. The results showed large variability among the TMC hospitals in the existing physical activity and nutrition support environment. This study proposes cost-effective policy changes that can increase environmental support for healthy eating and active living among TMC hospital employees.
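A small sketch of the summative scoring and inter-observer agreement logic described for the EAT survey: dichotomous items coded 1/0, a subscale expressed as a percentage of achievable points, and percent agreement between two raters. The item responses below are hypothetical, not study data.

```python
# Illustrative EAT-style scoring: yes/no items coded 1/0, percent scores, percent agreement.
def code(responses):
    """Code dichotomous survey responses: 'yes' -> 1, anything else -> 0."""
    return [1 if r.lower() == "yes" else 0 for r in responses]

def subscale_percent(responses):
    """Subscale score as a percentage of achievable points."""
    items = code(responses)
    return 100 * sum(items) / len(items)

def percent_agreement(rater_a, rater_b):
    """Total percent agreement between two observers' coded responses."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

cafeteria_a = ["yes", "no", "yes", "yes", "no", "yes"]   # hypothetical observer 1
cafeteria_b = ["yes", "no", "yes", "yes", "yes", "yes"]  # hypothetical observer 2

print(f"healthy-eating subscale: {subscale_percent(cafeteria_a):.0f}%")
print(f"inter-observer agreement: {percent_agreement(code(cafeteria_a), code(cafeteria_b)):.1f}%")
```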
Abstract:
Daphnia was collected from five subarctic ponds which differed greatly in their DOC contents and, consequently, their underwater light (UV) climates. Irrespective of which Daphnia species was present, and contrary to expectations, the ponds with the lowest DOC concentrations (highest UV radiation levels) contained Daphnia with the highest eicosapentaenoic acid (EPA) concentrations. In addition, EPA concentrations in these Daphnia generally decreased in concert with seasonally increasing DOC concentrations. Daphnia from three of the ponds was also tested for its tolerance to solar ultraviolet radiation (UVR) with respect to survival. Daphnia pulex from the clear water pond showed, by far, the best UV-tolerance, followed by D. longispina from the moderately humic and D. longispina from the very humic pond. In addition, we measured sublethal parameters related to UV-damage such as the degree to which the gut of Daphnia appeared green (as a measure of their ability to digest algae), and whether their guts appeared damaged. We developed a simple, noninvasive scoring system to quantify the proportion of the gut in which digestive processes were presumably active. This method allowed repeated measurement of the same animals over the course of the experiment. We demonstrated, for the first time, that sublethal damage of the gut precedes mortality caused by exposure to UVR. In a parallel set of experiments we fed UV-exposed and non-exposed algae to UV-exposed and non-exposed daphnids. UVR pretreatment of algae enhanced the negative effects of exposure to natural solar UV-irradiation in Daphnia. These UV-related effects were generally not specific to the species of Daphnia.
Abstract:
In people with a cervical spinal cord injury, upper limb function is affected to a greater or lesser extent, depending primarily on the level and severity of the lesion. The deficit in upper limb function reduces the autonomy and independence of these persons in the execution of Activities of Daily Living. In the clinical setting, assessment of upper limb function is performed mainly with clinical scales. Some of them rate the level of dependence or independence in performing Activities of Daily Living, for example the Barthel Index and the FIM (Functional Independence Measure). Other scales, such as the Jebsen-Taylor Hand Function test, measure upper limb function in terms of dexterity and the ability to perform specific functional tasks. These scales are general, i.e., they can be applied to different populations of subjects and to different pathologies. Other scales, however, have been developed specifically to assess a particular pathology, with the aim of making the functional evaluations more sensitive to change. One example is the Spinal Cord Independence Measure (SCIM), developed for people with spinal cord injury. Clinical scales are standardized measurement instruments, valid for use in the clinical setting because they have been validated in large patient samples. Nevertheless, they usually carry a high degree of subjectivity that depends mainly on the person scoring the test. Another aspect to take into account is that the scales are sensitive chiefly to gross changes in health status or upper limb function, so that subtle changes in the subject may go undetected. In addition, their scoring systems sometimes saturate, so that improvements above a certain threshold are not detected. These limitations mean that clinical scales are not sufficient, by themselves, to evaluate upper limb motor strategies during the execution of functional movements; measurement instruments are therefore needed that provide objectivity, complement the clinical assessments and, at the same time, address the limitations of the scales. Biomechanical studies are examples of objective methods in which various technologies can be used to collect information from the subjects. Kinematic movement analyses are one type of such studies. Using optoelectronic, inertial or electromagnetic technology, these studies provide objective information about the movement performed by the subjects during the execution of specific tasks. These measurement systems produce large quantities of data that lack an immediate interpretation; the data must be processed and reduced to a set of variables with, a priori, a simpler interpretation for use in clinical practice. These were the main motivations of this research. 
The main objective was to propose a set of kinematic indices that objectively assess upper limb function, and to validate the proposed indices in populations with spinal cord injury for use as assessment tools in the clinical setting. This dissertation is framed within a research project, HYPER (Hybrid Neuroprosthetic and Neurorobotic Devices for Functional Compensation and Rehabilitation of Motor Disorders, grant CSD2009-00067 CONSOLIDER INGENIO 2010). Within this project, research is conducted on the development of biomechanical models to determine the biomechanical requirements and movement patterns of the upper limbs in healthy subjects and in people with spinal cord injury, as well as on the proposal of new functional assessment instruments in the field of upper limb rehabilitation.