951 results for Methods validation
Abstract:
Background: A recent microarray study identified a set of genes whose combined expression patterns were predictive of poor outcome in a cohort of adult adrenocortical tumors (ACTs). The difference between the expression values of the DLGAP5 and PINK1 genes, measured by qRT-PCR, was the best molecular predictor of recurrence and malignancy. Among the adrenocortical carcinomas, the combined expression of the BUB1B and PINK1 genes was the most reliable predictor of overall survival. The prognostic and molecular heterogeneity of ACTs raises the need to study the applicability of these molecular markers in other cohorts. Objective: To validate the combined expression of BUB1B, DLGAP5, and PINK1 as an outcome predictor in ACTs from a Brazilian cohort of adult and pediatric patients. Patients and methods: BUB1B, DLGAP5, and PINK1 expression was assessed by quantitative PCR in 53 ACTs from 52 patients - 24 pediatric and 28 adult (one pediatric patient presented a bilateral asynchronous ACT). Results: DLGAP5-PINK1 and BUB1B-PINK1 were strong predictors of disease-free survival and overall survival, respectively, among adult patients with ACT. In the pediatric cohort, these molecular predictors were only marginally associated with disease-free survival and were not associated with overall survival. Conclusion: This study confirms the prognostic value of the combined expression of the BUB1B, DLGAP5, and PINK1 genes in a Brazilian group of adult ACTs. For pediatric ACTs, other molecular predictors of outcome are required.
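To make the combined-expression idea above concrete, the following minimal Python sketch computes a two-gene difference score and dichotomizes it against a cut-off. The expression values, the reduction of the gene pair to a simple subtraction, and the threshold are all hypothetical illustrations and are not values or procedures taken from the study.

    # Hypothetical illustration: a two-gene difference score used as a binary
    # risk classifier. Expression values and the cut-off are made-up examples.

    def two_gene_score(expr_gene_a: float, expr_gene_b: float) -> float:
        """Difference between normalized expression values (e.g. DLGAP5 - PINK1)."""
        return expr_gene_a - expr_gene_b

    def classify_risk(score: float, cutoff: float) -> str:
        """Dichotomize the combined score into a high/low risk group."""
        return "high risk" if score > cutoff else "low risk"

    if __name__ == "__main__":
        patients = {"P01": (7.2, 4.1), "P02": (3.8, 5.0), "P03": (6.5, 6.4)}
        cutoff = 1.0  # assumed threshold, chosen only for illustration
        for pid, (dlgap5, pink1) in patients.items():
            score = two_gene_score(dlgap5, pink1)
            print(pid, round(score, 2), classify_risk(score, cutoff))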
Abstract:
Background: It has been shown that different symptoms or symptom combinations of neuropathic pain (NeP) may correspond to different mechanistic backgrounds and respond differently to treatment. The Neuropathic Pain Symptom Inventory (NPSI) is able to detect distinct clusters of symptoms (i.e. dimensions) with a putative common mechanistic background. The present study describes the psychometric validation of the Portuguese version (PV) of the NPSI. Methods: Patients were seen in two consecutive visits, three to four weeks apart. They were asked to: (i) rate their mean pain intensity in the last 24 hours on an 11-point (0-10) numerical scale; (ii) complete the PV-NPSI; (iii) provide the list of pain medications and doses currently in use. The VAS and the Global Impression of Change (GIC) were completed at the second visit. Results: The PV-NPSI underwent test-retest reliability analysis, factor analysis, and analysis of sensitivity to change between the two visits. The PV-NPSI was reliable in this setting, with good intra-class correlation for all items. The factor analysis showed that the PV-NPSI assessed different components of neuropathic pain, and five distinct factors were found. The PV-NPSI was adequate to evaluate patients with neuropathic pain and to detect clusters of NeP symptoms. Conclusions: The psychometric properties of the PV-NPSI render it adequate to evaluate patients with both central and peripheral neuropathic pain syndromes and to detect clusters of NeP symptoms.
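As an illustration of the kind of factor analysis mentioned above, the sketch below fits a five-factor model to a random placeholder item matrix with scikit-learn; the data, number of items, and interpretation are invented and are not NPSI responses or the authors' analysis pipeline.

    # Hedged sketch of grouping questionnaire items into symptom dimensions by
    # exploratory factor analysis. The item matrix is random placeholder data;
    # five factors mirror the number reported in the abstract above.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    items = rng.normal(size=(120, 10))   # 120 hypothetical patients x 10 items

    fa = FactorAnalysis(n_components=5, random_state=0)
    fa.fit(items)

    # Loadings (factors x items): items loading strongly on the same factor
    # would be interpreted as one symptom cluster (dimension).
    print(np.round(fa.components_, 2))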
Abstract:
PURPOSE: To compare the direct and indirect radiographic methods for assessing the gray levels of biomaterials using the Digora for Windows and Adobe Photoshop CS2 systems. METHODS: Specimens of biomaterials were made following the manufacturers' instructions and placed on phosphor storage plates (PSP) and on radiographic film for subsequent gray level assessment using the direct and indirect radiographic methods, respectively. The radiographic density of each biomaterial was analyzed using Adobe Photoshop CS2 and Digora for Windows software. RESULTS: The distribution of gray levels found using the direct and indirect methods suggests that higher exposure times are associated with lower reproducibility between groups. CONCLUSION: The indirect method is a feasible alternative to the direct method for assessing the radiographic gray levels of biomaterials, insofar as significant reproducibility was observed between groups for exposure times of 0.2 to 0.5 seconds.
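For readers unfamiliar with gray-level readings, the following sketch shows one generic way to compute the mean gray value of a rectangular region of interest in a digitized radiograph with Python; the file name, coordinates, and library choice are assumptions, not the procedure used with Digora for Windows or Adobe Photoshop CS2.

    # Illustrative only: mean gray level of a rectangular region of interest
    # in a digitized radiograph, analogous to a density reading.

    import numpy as np
    from PIL import Image

    def mean_gray_level(path: str, box: tuple[int, int, int, int]) -> float:
        """Mean 8-bit gray value (0-255) inside the region of interest 'box'."""
        img = Image.open(path).convert("L")      # force 8-bit grayscale
        roi = np.asarray(img.crop(box), dtype=float)
        return float(roi.mean())

    # Example call (hypothetical file and coordinates):
    # print(mean_gray_level("specimen_02.tif", (100, 100, 160, 160)))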
Abstract:
Background. The surgical treatment of dysfunctional hips is a demanding procedure for the patient and a costly therapy for public health. Hip resurfacing techniques seem to hold the promise of various advantages over traditional total hip replacement (THR), with particular attention to young and active patients. Although the lesson provided in the past by many branches of engineering is that success in designing competitive products can be achieved only by predicting the possible scenarios of failure, to date implant quality is poorly addressed pre-clinically; revision therefore remains the only reliable, albeit delayed, end point for assessment. The aim of the present work was to model the musculoskeletal system so as to develop a protocol for predicting failure of hip resurfacing prostheses. Methods. Preliminary studies validated the technique for the generation of subject-specific finite element (FE) models of long bones from Computed Tomography data. The proposed protocol consisted of the numerical analysis of the prosthesis biomechanics through deterministic and statistical studies, so as to assess the risk of biomechanical failure under the different operative conditions the implant might face in a population of interest during various activities of daily living. Physiological conditions were defined, including the variability of the anatomy, bone densitometry, surgical uncertainties, and published boundary conditions at the hip. The protocol was tested by analysing a successful design on the market and a new prototype of a resurfacing prosthesis. Results. The intrinsic accuracy of the models in predicting bone stress (RMSE < 10%) was in line with the current state of the art in this field. The accuracy of the predictions of the bone-prosthesis contact mechanics was also excellent (< 0.001 mm). The sensitivity of the model predictions to uncertainties in the modelling parameters was below 8.4%. The analysis of the successful design showed very good agreement with published retrospective studies. The geometry optimisation of the new prototype led to a final design with a low risk of failure. The statistical analysis confirmed the minimal risk of the optimised design over the entire population of interest. The performance of the optimised design showed a significant improvement with respect to the first prototype (+35%). Limitations. In the authors' opinion, the major limitation of this study concerns the boundary conditions. The muscular forces and the hip joint reaction were derived from the few data available in the literature, which can be considered significant but hardly representative of the entire variability of boundary conditions the implant might face over the patient population. This moved the focus of the research towards modelling the musculoskeletal system; the ongoing activity is to develop subject-specific musculoskeletal models of the lower limb from medical images. Conclusions. The developed protocol was able to accurately predict known clinical outcomes when applied to a well-established device and to support the design optimisation phase, providing important information on critical characteristics of the patients, when applied to a new prosthesis. The presented approach is general enough to allow the extension of the protocol to a large set of orthopaedic scenarios with minor changes. Hence, a failure mode analysis criterion can be considered a suitable tool in developing new orthopaedic devices.
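As a side note on the accuracy figure quoted above (RMSE < 10%), the short Python sketch below shows one common way to express RMSE as a percentage of the measured range; the strain values are fabricated and the normalization choice is an assumption, not necessarily the one used in the thesis.

    # Minimal sketch of an RMSE-as-percentage accuracy metric, applied to
    # hypothetical FE-predicted versus strain-gauge-measured values.

    import numpy as np

    def rmse_percent(measured: np.ndarray, predicted: np.ndarray) -> float:
        rmse = np.sqrt(np.mean((predicted - measured) ** 2))
        return 100.0 * rmse / (measured.max() - measured.min())

    measured  = np.array([820.0, 1010.0, 640.0, 1210.0, 905.0])   # microstrain
    predicted = np.array([790.0,  980.0, 700.0, 1165.0, 940.0])
    print(f"RMSE = {rmse_percent(measured, predicted):.1f}% of the measured range")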
Abstract:
Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics, aiming to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers, baropodometric insoles, etc. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of this work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows.
Chapter 1. Description of the physical principles underlying the functioning of an FP and of how these principles are used to build force transducers, such as strain gauges and piezoelectric transducers. The two categories of FPs, three- and six-component, are then described, together with signal acquisition (hardware structure) and signal calibration. Finally, a brief description of the use of FPs in HMA, for balance or gait analysis.
Chapter 2. Description of inverse dynamics, the most common method used in HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without very invasive techniques; consequently, they can only be estimated using indirect techniques such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis.
Chapter 3. State of the art in FP calibration. The selected literature is divided into sections, each describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; systems and procedures for the construction of an FP. A calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail; this system was the starting point for the new system presented in this thesis.
Chapter 4. Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure required to perform the calibration process correctly. The characteristics of the algorithm were optimized through a simulation approach, whose results are presented. In addition, the different versions of the device are described.
Chapter 5. Experimental validation of the new system, achieved by testing it on four commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the center of pressure of an applied force. The new system can estimate local and global calibration matrices; through these, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed, which compensates for the non-linear behaviour of the FP due to the bending of its upper plate. The experimental results are presented.
Chapter 6. Influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach.
Chapter 7. Conclusions of the thesis: the need for FP calibration and the consequent enhancement in the quality of kinetic data.
Appendix: Calibration of the LC used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
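To illustrate the two quantities at the core of the protocol, the sketch below applies a hypothetical 6x6 calibration matrix to raw force-platform outputs and computes the centre of pressure from the calibrated loads; the numbers are invented and the COP formula assumes the reference origin lies on the top-plate surface, so this is not the thesis algorithm.

    # Simplified sketch: 6x6 calibration matrix applied to raw FP outputs,
    # then centre of pressure (COP) from the calibrated forces and moments.
    # Forces in N, moments in N*m; origin assumed on the top-plate surface.

    import numpy as np

    C = np.eye(6) + 0.01 * np.ones((6, 6))                 # hypothetical calibration matrix
    raw = np.array([12.0, -3.0, 640.0, 4.5, -6.2, 0.8])    # [Fx, Fy, Fz, Mx, My, Mz]

    fx, fy, fz, mx, my, mz = C @ raw                       # calibrated loads
    cop_x = -my / fz                                       # COP coordinates on the plate
    cop_y =  mx / fz
    print(f"COP = ({cop_x * 1000:.1f} mm, {cop_y * 1000:.1f} mm)")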
Abstract:
Magnetic resonance imaging (MRI) is currently precluded for patients bearing active implantable medical devices (AIMDs). The great advantages of this diagnostic modality, together with the increasing number of people benefiting from implantable devices, in particular pacemakers (PMs) and cardioverter-defibrillators (ICDs), are prompting the scientific community to study the possibility of extending MRI to implanted patients as well. The MRI-induced specific absorption rate (SAR) and the consequent heating of biological tissues is one of the major concerns that makes patients bearing metallic structures contraindicated for MRI scans. To date, both in-vivo and in-vitro studies have demonstrated the potentially dangerous temperature increase caused by the radiofrequency (RF) field generated during MRI procedures in the tissues surrounding thin metallic implants. On the other hand, the technical evolution of MRI scanners and of AIMDs, together with published data on the lack of adverse events, has reopened interest in this field and suggests that, under given conditions, MRI can be safely performed also in implanted patients. With a better understanding of the hazards of performing MRI scans on implanted patients, as well as the development of MRI-safe devices, we may soon enter an era where this imaging modality can be more widely used to assist in the appropriate diagnosis of patients with devices. In this study, both experimental measurements and numerical analyses were performed. The aim of the study was to systematically investigate the effects of the MRI RF field on implantable devices and to identify the elements that play a major role in the induced heating. Furthermore, we aimed at developing a realistic numerical model able to simulate the interactions between an RF coil for MRI and biological tissues implanted with a PM, and to predict the induced SAR as a function of the particular path of the PM lead. The methods developed and validated during the PhD program led to the design of an experimental framework for the accurate measurement of PM lead heating induced by MRI systems. In addition, numerical models based on Finite-Difference Time-Domain (FDTD) simulations were validated to obtain a general tool for investigating the large number of parameters and factors involved in this complex phenomenon. The results obtained demonstrated that MRI-induced heating of metallic implants is a real risk that represents a contraindication to extending MRI scans to patients bearing a PM, an ICD, or other thin metallic objects. On the other hand, both experimental data and numerical results show that, under particular conditions, MRI procedures might be considered reasonably safe also for an implanted patient. The complexity and the large number of variables involved make it difficult to define a unique set of such conditions: when the benefits of an MRI investigation cannot be obtained using other imaging techniques, the possibility of performing the scan should not be immediately excluded, but careful consideration is always needed.
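For orientation, the sketch below evaluates the textbook local-SAR definition (SAR = sigma * |E|^2 / rho, with E taken as an RMS value) for rough, illustrative tissue parameters; it is not the dosimetric model developed in this work.

    # Back-of-the-envelope local SAR from the RMS electric field. The tissue
    # parameters are rough textbook-style numbers used only for illustration.

    def local_sar(e_rms: float, sigma: float, rho: float) -> float:
        """Local SAR in W/kg from RMS electric field (V/m), tissue
        conductivity sigma (S/m) and mass density rho (kg/m^3)."""
        return sigma * e_rms ** 2 / rho

    print(local_sar(e_rms=60.0, sigma=0.7, rho=1050.0), "W/kg")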
Abstract:
The research performed during the PhD course aimed to assess innovative applications of near-infrared reflectance spectroscopy (NIR) in the beer production chain. The purpose is to measure by NIR the "malting quality" (MQ) parameter of barley, to monitor the malting process, and to determine whether a certain type of barley is suitable for the production of beer and spirits. Moreover, NIR is applied to monitor the brewing process. First of all, it was possible to check the quality of the raw materials, such as barley, maize and barley malt, using a rapid, non-destructive and reliable method with a low prediction error. The most interesting result obtained at this level was that the repeatability of the NIR calibration models developed was comparable with that of the reference method. Moreover, for malt, new kinds of validation were used in order to estimate the real predictive power of the proposed calibration models and to understand long-term effects. Furthermore, the precision of all the calibration models developed for malt evaluation was estimated and statistically compared with the reference methods, with good results. Then, new calibration models were developed for monitoring the malting process, measuring the moisture content and other malt quality parameters during germination. Moreover, it was possible to obtain by NIR an estimate of the "malting quality" (MQ) of barley and to predict whether its germination will be rapid and uniform and whether a certain type of barley is suitable for the production of beer and spirits. Finally, the NIR technique was applied to monitor the brewing process, using correlations between NIR spectra of beer and analytical parameters, and to assess beer quality. These innovative results are potentially very useful for the actors involved in the beer production chain, especially the calibration models suitable for the control of the malting process and for the assessment of the "malting quality" of barley, which should be investigated further in future studies.
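As a generic illustration of an NIR calibration workflow of the kind described above, the sketch below fits a partial least squares (PLS) regression to simulated spectra and reports a cross-validated prediction error; the data, number of latent variables, and error metric are assumptions, not the thesis models.

    # Hedged sketch of a PLS calibration validated by cross-validation.
    # Spectra and reference values are simulated, so the numbers mean nothing.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.normal(size=(80, 200))               # 80 samples x 200 NIR wavelengths
    y = X[:, 10] * 0.8 + X[:, 50] * 0.3 + rng.normal(scale=0.1, size=80)  # reference assay

    pls = PLSRegression(n_components=5)
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    print(f"RMSECV = {rmsecv:.3f} (same units as the reference method)")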
Abstract:
The research activity characterizing the present thesis was mainly centered on the design, development and validation of methodologies for the estimation of stationary and time-varying connectivity between different regions of the human brain during specific complex cognitive tasks. Such activity involved two main aspects: i) the development of a stable, consistent and reproducible procedure for functional connectivity estimation with a high impact on the neuroscience field, and ii) its application to real data from healthy volunteers eliciting specific cognitive processes (attention and memory). In particular, the methodological issues addressed in the present thesis consisted of developing an approach, applicable in the neuroscience field, able to: i) include all the cerebral sources in the connectivity estimation process; ii) accurately describe the temporal evolution of connectivity networks; iii) assess the significance of connectivity patterns; iv) consistently describe relevant properties of brain networks. The advances provided in this thesis made it possible to identify quantifiable descriptors of cognitive processes during a high-resolution EEG experiment involving subjects performing complex cognitive tasks.
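By way of a much simpler example than the multivariate estimators developed in the thesis, the sketch below computes magnitude-squared coherence between two simulated EEG channels as a pairwise connectivity measure; the signal parameters and frequency band are arbitrary choices for illustration only.

    # Simple pairwise connectivity illustration: magnitude-squared coherence
    # between two simulated EEG channels sharing a 10 Hz component.

    import numpy as np
    from scipy.signal import coherence

    fs = 256                                    # sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(2)
    source = np.sin(2 * np.pi * 10 * t)         # common alpha-band component
    ch1 = source + 0.5 * rng.normal(size=t.size)
    ch2 = 0.8 * source + 0.5 * rng.normal(size=t.size)

    f, coh = coherence(ch1, ch2, fs=fs, nperseg=512)
    alpha = (f >= 8) & (f <= 12)
    print(f"Mean alpha-band coherence: {coh[alpha].mean():.2f}")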
Abstract:
This thesis reports the evolution of methods for the analysis of techno-social systems, through the various research experiences directly undertaken. The first case presented is a study based on data mining of a word-association dataset named Human Brain Cloud: its validation is addressed and, also through non-trivial modeling, a better understanding of language properties is presented. Then, a real complex-system experiment is introduced: the WideNoise experiment, in the context of the EveryAware European project. The project and the course of the experiment are illustrated and the data analysis is presented. Next, the Experimental Tribe platform for social computation is introduced. It has been conceived to help researchers in the implementation of web experiments, and also aims to catalyze the cumulative growth of experimental methodologies and the standardization of the tools cited above. In the last part, three other research experiences that have already taken place on the Experimental Tribe platform are discussed in detail, from the design of the experiment to the analysis of the results and, eventually, to the modeling of the systems involved. The experiments are: CityRace, about the measurement of human traffic-facing strategies; laPENSOcosì, aiming to unveil the structure of political opinion; and AirProbe, implemented again in the EveryAware project framework, which consisted of monitoring the shift in air-quality opinion of a community informed about local air pollution. In the end, the evolution of methods for investigating techno-social systems emerges, together with the opportunities and threats offered by this new scientific path.
Abstract:
Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. Therefore, (Q)SAR model validation is essential to ensure the predictivity of future models on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving the use of such models in real-world scenarios as an alternative testing method. However, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow allows the built and validated models to be applied to large amounts of unseen data, and the performance of the different validation approaches to be compared. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important to evaluate the performance of (Q)SAR models, but does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, aid the chemist in better understanding patterns and regularities and in relating the observations to established scientific knowledge. Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
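A minimal sketch of the comparison between the two validation schemes discussed above, run on a synthetic dataset rather than real (Q)SAR descriptors; the model, split sizes, and accuracy metric are placeholders chosen only to show the mechanics of the comparison.

    # k-fold cross-validation versus a single external (hold-out) test set,
    # illustrated on synthetic classification data.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = make_classification(n_samples=500, n_features=30, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)

    # k-fold CV: every compound is predicted once while held out of training.
    cv_acc = cross_val_score(model, X, y, cv=10)
    print(f"10-fold CV accuracy: {cv_acc.mean():.3f} +/- {cv_acc.std():.3f}")

    # External test set: a single random split, typically more variance per run.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    print(f"External test set accuracy: {model.fit(X_tr, y_tr).score(X_te, y_te):.3f}")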
Abstract:
Dementia involves deterioration, often progressive, of a person's cognitive status. Those who suffer from dementia present alterations at the cognitive, behavioral and motor levels, for example performing obsessive, repetitive gestures without a precise purpose. The condition of patients suffering from dementia is clinically assessed by means of specific scales, and information about behavior is collected by interviewing caregivers, such as family members, nurses, or the treating physician. These assessments often prove inaccurate, may be heavily influenced by subjective considerations, and are time-consuming. There is therefore a need for objective methods to assess patients' motor behavior and its pathological alterations; wearable inertial sensors may represent a viable option for this purpose. The main objective of this thesis project was to define and implement software for an objective, sensor-based assessment of the circadian motor pattern in patients suffering from dementia hospitalized in a long-term care unit, capable of highlighting differences in the disease symptoms that affect motor behavior, as described in the clinical setting. The secondary objective was to verify pre- and post-intervention changes in the motor patterns of a subgroup of patients, following the administration of an experimental intervention program based on physical exercise.
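One plausible, purely hypothetical way to summarise a circadian motor pattern from a wearable inertial sensor is the mean acceleration magnitude per hour of the day, as in the sketch below; the signal is random placeholder data and the summary statistic is an assumption, not the software developed in the thesis.

    # Hypothetical hourly activity profile from one simulated day of tri-axial
    # accelerometer data (low sampling rate chosen to keep the example light).

    import numpy as np

    fs = 10                                        # sampling rate, Hz
    hours = 24
    rng = np.random.default_rng(3)
    acc = rng.normal(size=(hours * 3600 * fs, 3))  # simulated tri-axial signal

    magnitude = np.linalg.norm(acc, axis=1)
    per_hour = magnitude.reshape(hours, -1).mean(axis=1)   # 24-hour motor profile
    print("Most active hour of the day:", int(per_hour.argmax()))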
Abstract:
We investigated whether human articular chondrocytes can be labeled efficiently and for the long term with a green fluorescent protein (GFP) lentivirus, and whether the viral transduction would influence cell proliferation and tissue-forming capacity. The method was then applied to track goat articular chondrocytes after autologous implantation in cartilage defects. Expression of GFP in transduced chondrocytes was detected cytofluorimetrically and immunohistochemically. The chondrogenic capacity of chondrocytes was assessed by Safranin-O staining, immunostaining for type II collagen, and glycosaminoglycan content. Human articular chondrocytes were efficiently transduced with the GFP lentivirus (73.4 +/- 0.5% at passage 1) and maintained the expression of GFP up to 22 weeks of in vitro culture after transduction. Upon implantation in nude mice, 12 weeks after transduction, the percentage of labeled cells (73.6 +/- 3.3%) was similar to the initial one. Importantly, viral transduction of chondrocytes did not affect the cell proliferation rate, chondrogenic differentiation, or tissue-forming capacity, either in vitro or in vivo. Goat articular chondrocytes were also efficiently transduced with the GFP lentivirus (78.3 +/- 3.2%) and maintained the expression of GFP in the reparative tissue after orthotopic implantation. This study demonstrates the feasibility of efficient and relatively long-term labeling of human chondrocytes for co-culture and integration studies, and indicates the potential of this stable labeling technique for tracking animal chondrocytes in cartilage repair studies.
An examination chair to measure internal rotation of the hip in routine settings: a validation study
Abstract:
OBJECTIVE: To determine the performance of a newly developed examination chair as compared with the clinical standard of assessing internal rotation (IR) of the flexed hip with a goniometer.
METHODS: The examination chair allowed measurement of IR in a sitting position simultaneously in both hips, with hips and knees flexed 90 degrees, lower legs hanging unsupported, and a standardized load of 5 kg applied to both ankles using a bilateral pulley system. Clinical assessment of IR was performed in the supine position with hips and knees flexed 90 degrees using a goniometer. Within the framework of a population-based inception cohort study, we calculated inter-observer agreement in two samples of 84 and 64 consecutive, unselected young asymptomatic males using intra-class correlation coefficients (ICC), and determined the correlation between IR assessed with the examination chair and by clinical assessment.
RESULTS: Inter-observer agreement was excellent for the examination chair (ICC right hip, 0.92, 95% confidence interval [CI] 0.89-0.95; ICC left hip, 0.90, 95% CI 0.86-0.94), and considerably higher than that seen with clinical assessment (ICC right hip, 0.65, 95% CI 0.49-0.77; ICC left hip, 0.69, 95% CI 0.54-0.80, P for difference in ICC between examination chair and clinical assessment
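For reference, the agreement statistic reported above can be computed as a two-way random-effects, single-measure intraclass correlation, ICC(2,1); the sketch below implements that standard formula on fabricated ratings from two observers, so the resulting value has no relation to the study's figures.

    # ICC(2,1) from a subjects x raters matrix of ratings (fabricated data).

    import numpy as np

    def icc_2_1(x: np.ndarray) -> float:
        """Two-way random-effects, absolute-agreement, single-measure ICC.
        Rows = subjects, columns = raters."""
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)
        col_means = x.mean(axis=0)
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)          # subjects
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)          # raters
        sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                               # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    ratings = np.array([[30, 32], [25, 27], [40, 38], [35, 36], [28, 30], [22, 25]], dtype=float)
    print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")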
Abstract:
In most pathology laboratories worldwide, formalin-fixed, paraffin-embedded (FFPE) samples are the only tissue specimens available for routine diagnostics. Although commercial kits for diagnostic molecular pathology testing are becoming available, most of the current diagnostic tests are laboratory-based assays. Thus, there is a need for standardized procedures in molecular pathology, starting from the extraction of nucleic acids. To evaluate the current methods for extracting nucleic acids from FFPE tissues, 13 European laboratories, participating in the European FP6 program IMPACTS (www.impactsnetwork.eu), isolated nucleic acids from four diagnostic FFPE tissues using their routine methods, followed by quality assessment. The DNA-extraction protocols ranged from homemade protocols to commercial kits. Except for one homemade protocol, the majority gave comparable results in terms of the quality of the extracted DNA, as measured by the ability to amplify control gene fragments of different sizes by PCR. For array applications or tests that require an accurately determined DNA input, we recommend using silica-based adsorption columns for DNA recovery. For RNA extractions, the best results were obtained using chromatography-column-based commercial kits, which yielded the highest quantity and the best assayable RNA. Quality testing using RT-PCR gave successful amplification of 200-250 bp PCR products from most tested tissues. Modifications of the proteinase K digestion time led to better results, even when commercial kits were used. The results of the study emphasize the need for quality control of the nucleic acid extracts with standardized methods to prevent false-negative results and to allow data comparison among different diagnostic laboratories.
Abstract:
The advantages, limitations and potential applications of available methods for studying erosion of enamel and dentine are reviewed. Special emphasis is placed on the influence of histological differences between the dental hard tissues and on the stage of the erosive lesion. No method is suitable for all stages of the lesion. Factors determining the applicability of the methods are: the surface condition of the specimen, the type of experimental model, the nature of the lesion, the need for longitudinal measurements, and the type of outcome. The most suitable and most widely used methods are chemical analyses of mineral release and enamel surface hardness for early erosion, and surface profilometry and microradiography for advanced erosion. Morphological changes in eroded dental tissue have usually been characterised by scanning electron microscopy. Novel methods have also been used, but little is known of their potential and limitations. Therefore, there is a need for their further development, evaluation, consolidation and, in particular, validation.