816 results for Gold standard.
Abstract:
OBJECTIVE: The discipline of clinical neuropsychiatry currently provides specialised services for a number of conditions that cross the traditional boundaries of neurology and psychiatry, including non-epileptic attack disorder. Neurophysiological investigations have an important role within neuropsychiatry services, with video-electroencephalography (EEG) telemetry being the gold standard investigation for the differential diagnosis between epileptic seizures and non-epileptic attacks. This article reviews existing evidence on best practices for neurophysiology investigations, with a focus on safety measures for video-EEG telemetry. METHODS: We conducted a systematic literature review using the PubMed database in order to identify the scientific literature on best practices when using neurophysiological investigations in patients with suspected epileptic seizures or non-epileptic attacks. RESULTS: Specific measures need to be implemented for video-EEG telemetry to be safely and effectively carried out by neuropsychiatry services. A confirmed diagnosis of non-epileptic attack disorder following video-EEG telemetry carried out within neuropsychiatry units has the inherent advantage of allowing diagnosis communication and implementation of treatment strategies in a timely fashion, potentially improving clinical outcomes and cost-effectiveness significantly. CONCLUSION: The identified recommendations set the stage for the development of standardised guidelines to enable neuropsychiatry services to implement streamlined and evidence-based care pathways.
Abstract:
Motivation: In any macromolecular polyprotic system (for example protein, DNA or RNA), the isoelectric point, commonly referred to as the pI, can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge, and thus the electrophoretic mobility, of the ampholyte sums to zero. Different modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Therefore, accurate theoretical prediction of pI would expedite such analyses. While pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset, and their resulting performance will strongly depend on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction.
Contact: yperez@ebi.ac.uk Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
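The iterative pI calculation benchmarked above can be sketched as a bisection on the net-charge curve. The pKa values below are one illustrative (EMBOSS-like) basis set, not necessarily the one pIR uses, and the sequence handling is simplified (free termini, no modified residues):

```python
# Sketch of an iterative (bisection) isoelectric-point calculation.
# The pKa set below is illustrative (EMBOSS-like); real tools such as
# pIR let you choose among several pKa "basis sets".
PKA_POS = {"Nterm": 8.6, "K": 10.8, "R": 12.5, "H": 6.5}   # protonated below pKa
PKA_NEG = {"Cterm": 3.6, "D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1}

def net_charge(seq, ph):
    counts = {aa: seq.count(aa) for aa in "KRHDECY"}
    counts["Nterm"] = counts["Cterm"] = 1
    pos = sum(counts.get(g, 0) / (1 + 10 ** (ph - pka)) for g, pka in PKA_POS.items())
    neg = sum(counts.get(g, 0) / (1 + 10 ** (pka - ph)) for g, pka in PKA_NEG.items())
    return pos - neg

def isoelectric_point(seq, tol=1e-4):
    # Net charge decreases monotonically with pH, so bisect until it crosses zero.
    lo, hi = 0.0, 14.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid          # still net positive: pI lies at higher pH
        else:
            hi = mid
    return (lo + hi) / 2
```

An acidic peptide (many D/E) lands at low pH and a basic one (many K/R) at high pH, which is the behaviour the benchmarked methods differ on only in their pKa basis sets.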
Abstract:
It is important to help researchers find valuable papers from a large literature collection. To this end, many graph-based ranking algorithms have been proposed. However, most of these algorithms suffer from the problem of ranking bias. Ranking bias hurts the usefulness of a ranking algorithm because it returns a ranking list with an undesirable time distribution. This paper is a focused study on how to alleviate ranking bias by leveraging the heterogeneous network structure of the literature collection. We propose a new graph-based ranking algorithm, MutualRank, that integrates mutual reinforcement relationships among networks of papers, researchers, and venues to achieve a more synthetic, accurate, and less-biased ranking than previous methods. MutualRank provides a unified model that involves both intra- and inter-network information for ranking papers, researchers, and venues simultaneously. We use the ACL Anthology Network as the benchmark data set and construct the gold standard from computer linguistics course websites of well-known universities and two well-known textbooks. The experimental results show that MutualRank greatly outperforms the state-of-the-art competitors, including PageRank, HITS, CoRank, Future Rank, and P-Rank, in ranking papers, both in improving ranking effectiveness and in alleviating ranking bias. The rankings of researchers and venues produced by MutualRank are also quite reasonable.
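As context for the comparison, the PageRank baseline named above can be sketched with plain power iteration; the citation graph below is a toy assumption, not the ACL Anthology Network, and MutualRank's mutual-reinforcement machinery is not reproduced here:

```python
import numpy as np

# Power-iteration PageRank, the classic baseline MutualRank is compared
# against. adj[i, j] = 1 means node i links to (cites) node j.
def pagerank(adj, damping=0.85, tol=1e-10):
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                      # dangling nodes: avoid division by zero
    M = adj / out                          # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (M.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Toy citation graph: paper 0 is cited by papers 1 and 2,
# so it should receive the highest score.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)
```

This baseline scores purely by link structure, which is exactly what produces the time bias the paper targets: older papers accumulate citations regardless of current relevance.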
An investigation of primary human cell sources and clinical scaffolds for articular cartilage repair
Abstract:
Damage to articular cartilage of the knee can be debilitating because it lacks the capacity to repair itself and can progress to degenerative disorders such as osteoarthritis. The current gold standard for treating cartilage defects is autologous chondrocyte implantation (ACI). However, one of the major limitations of ACI is the use of chondrocytes, which dedifferentiate when grown in vitro and lose their phenotype. It is not clear whether the dedifferentiated chondrocytes can fully redifferentiate upon in vivo transplantation. Studies have suggested that undifferentiated mesenchymal stem or stromal cells (MSCs) from bone marrow (BM) and adipose tissue (AT) can undergo chondrogenic differentiation. Therefore, the main aim of this thesis was to examine BM and AT as cell sources for chondrogenesis using clinical scaffolds. Initially, freshly isolated cells were compared with culture-expanded MSCs from BM and AT in Chondro-Gide®, Alpha Chondro Shield® and Hyalofast™. MSCs were shown to grow better in the three scaffolds than freshly isolated cells. BM MSCs in Chondro-Gide® were shown to have increased deposition of cartilage-specific extracellular matrix (ECM) compared to AT MSCs. Further, this thesis sought to examine whether CD271-selected MSCs from AT were more chondrogenic than MSCs selected on the basis of plastic adherence (PA). It was shown that CD271+ MSCs may have superior chondrogenic properties in vitro and in vivo in terms of ECM deposition. The repair tissue seen after CD271+ MSC transplantation combined with Alpha Chondro Shield® was also less vascularised than that seen after transplantation with PA MSCs in the same scaffold, suggesting antiangiogenic activity. Since articular cartilage is an avascular tissue, CD271+ MSCs may be a better suited cell type than PA MSCs. Hence, this study has increased the current understanding of how different cell-scaffold combinations may best be used to promote articular cartilage repair.
Abstract:
Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on the quantitation, diagnosis and clinical management of lung tumors. However, PET images collected at discrete bins can be significantly affected by noise, as there are lower activity counts in each gated bin unless the total PET acquisition time is prolonged, so gating methods should be combined with imaging-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in CT images and identifies and localizes lung tumors in PET images; (2) developing and comparing different registration algorithms that process all the information within the entire respiratory cycle and integrate the tumor data from the different gated bins into a single reference bin. Four registration/integration algorithms (Centroid Based, Intensity Based, Rigid Body and Optical Flow registration) were compared, as well as two registration schemes (Direct Scheme and Successive Scheme). Validation was demonstrated by conducting experiments with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged using a GE PET/CT system. Experiments were conducted on simulated tumors of different sizes and at different noise levels. Static tumors without respiratory motion were used as the gold standard; quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level and computation time. Comparing the tumors before and after correction, the corrected tumor activity values and tumor volumes were closer to those of the static (gold standard) tumors. Higher correlation values and lower noise were also achieved after applying the correction algorithms. With this method, a good compromise between short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
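The Centroid Based registration/integration scheme can be sketched as follows, assuming integer-voxel shifts (real implementations interpolate sub-voxel) and synthetic bins rather than phantom data:

```python
import numpy as np

# Minimal sketch of centroid-based registration/integration of gated PET
# bins: shift every bin so its intensity-weighted centroid matches that of
# a reference bin, then sum the aligned bins into a single image.
def centroid(img):
    idx = np.indices(img.shape).reshape(img.ndim, -1)
    w = img.ravel()
    return idx @ w / w.sum()               # intensity-weighted centroid

def integrate_bins(bins, ref=0):
    target = centroid(bins[ref])
    out = np.zeros_like(bins[ref], dtype=float)
    for b in bins:
        d = np.round(target - centroid(b)).astype(int)
        out += np.roll(b, tuple(d), axis=tuple(range(b.ndim)))
    return out
```

Summing the aligned bins recovers the counts spread across the respiratory cycle, which is why the corrected activity values approach those of the static gold standard.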
Abstract:
Three-dimensional (3-D) imaging is vital in computer-assisted surgical planning, including minimally invasive surgery, targeted drug delivery, and tumor resection. Selective Internal Radiation Therapy (SIRT) is a liver-directed radiation therapy for the treatment of liver cancer. Accurate calculation of the anatomical liver and tumor volumes is essential for the determination of the tumor-to-normal-liver ratio and for the calculation of the dose of Y-90 microspheres that will result in a high concentration of the radiation in the tumor region as compared to nearby healthy tissue. Present manual techniques for segmentation of the liver from Computed Tomography (CT) tend to be tedious and greatly dependent on the skill of the technician/doctor performing the task.

This dissertation presents the development and implementation of a fully integrated algorithm for 3-D liver and tumor segmentation from tri-phase CT that yields highly accurate estimations of the respective volumes of the liver and tumor(s). The algorithm as designed requires minimal human intervention without compromising the accuracy of the segmentation results. Embedded within this algorithm is an effective method for extracting the blood vessels that feed the tumor(s) in order to plan the appropriate treatment effectively.

Segmentation of the liver led to an accuracy in excess of 95% in estimating liver volumes in 20 datasets in comparison to the manual gold-standard volumes. In a similar comparison, tumor segmentation exhibited an accuracy of 86% in estimating tumor volume(s). Qualitative results of the blood vessel segmentation algorithm demonstrated its effectiveness in extracting and rendering the vasculature structure of the liver. Results of the parallel computing process, using a single workstation, showed a 78% gain. Statistical analysis carried out to determine whether manual initialization has any impact on accuracy showed that the results are independent of user initialization.

The dissertation thus provides a complete 3-D solution for liver cancer treatment planning, with the opportunity to extract, visualize and quantify the statistics needed for liver cancer treatment. Since SIRT requires highly accurate calculation of the liver and tumor volumes, this new method provides an effective and computationally efficient process that meets such challenging clinical requirements.
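The volume-accuracy comparison against the manual gold standard can be sketched as below; the percent-error metric follows the abstract's description, while the Dice overlap is a common companion metric added here as an assumption (the dissertation does not name it):

```python
import numpy as np

# Hedged sketch of volume-agreement metrics for a segmentation evaluated
# against a manual gold-standard mask. Masks here are synthetic examples,
# not CT data.
def volume_error(auto_mask, gold_mask, voxel_ml=1.0):
    va, vg = auto_mask.sum() * voxel_ml, gold_mask.sum() * voxel_ml
    return abs(va - vg) / vg * 100.0       # percent error vs gold standard

def dice(auto_mask, gold_mask):
    inter = np.logical_and(auto_mask, gold_mask).sum()
    return 2.0 * inter / (auto_mask.sum() + gold_mask.sum())
```

Note that matching volumes (low percent error) does not guarantee spatial overlap, which is why an overlap metric is usually reported alongside.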
Abstract:
For children with intractable seizures, surgical removal of epileptic foci, if identifiable and feasible, can be an effective way to reduce or eliminate seizures. The success of this type of surgery strongly hinges upon the ability to identify and demarcate those epileptic foci. The ultimate goal of this research project is to develop an effective technology for detection of unique in vivo pathophysiological characteristics of epileptic cortex and, subsequently, to use this technology to guide epilepsy surgery intraoperatively. In this PhD dissertation, the feasibility of using optical spectroscopy to identify unique in vivo pathophysiological characteristics of epileptic cortex was evaluated and demonstrated using data collected from children undergoing epilepsy surgery.

In this first in vivo human study, static diffuse reflectance and fluorescence spectra were measured from the epileptic cortex, defined by intraoperative ECoG, and its surrounding tissue in pediatric patients undergoing epilepsy surgery. When feasible, biopsy samples were taken from the investigated sites for subsequent histological analysis. Using the histological data as the gold standard, the spectral data were analyzed with statistical tools. The results of the analysis show that static diffuse reflectance spectroscopy, and its combination with static fluorescence spectroscopy, can be used to effectively differentiate between epileptic cortex with histopathological abnormalities and normal cortex in vivo with a high degree of accuracy.

To maximize the efficiency of optical spectroscopy in detecting and localizing epileptic cortex intraoperatively, the static system was upgraded to investigate histopathological abnormalities deep within the epileptic cortex, as well as to detect unique temporal pathophysiological characteristics of epileptic cortex. Detection of deep abnormalities within the epileptic cortex prompted a redesign of the fiberoptic probe. A mechanical probe holder was also designed and constructed to maintain the probe contact pressure and contact point during the time-dependent measurements. The dynamic diffuse reflectance spectroscopy system was used to characterize in vivo pediatric epileptic cortex. The results of the study show that some unique wavelength-dependent temporal characteristics (e.g., multiple horizontal bands in the correlation coefficient map γ(λref = 800 nm, λcomp, t)) can be found in the time-dependent recordings of diffuse reflectance spectra from epileptic cortex defined by ECoG.
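The correlation coefficient map γ(λref, λcomp, t) described above can be sketched, for a single time window, as the Pearson correlation between the reflectance time course at the reference wavelength and at every other wavelength (the full map repeats this over sliding windows); the data here are synthetic, not intraoperative spectra:

```python
import numpy as np

# Sketch of one column of a correlation-coefficient map
# gamma(lambda_ref, lambda_comp, t): Pearson correlation between the
# reflectance time course at a reference wavelength and at every other
# wavelength, within one time window.
def correlation_map(spectra, ref_idx):
    # spectra: (n_times, n_wavelengths) dynamic diffuse reflectance
    ref_c = spectra[:, ref_idx] - spectra[:, ref_idx].mean()
    comp_c = spectra - spectra.mean(axis=0)
    num = comp_c.T @ ref_c
    den = np.sqrt((comp_c ** 2).sum(axis=0) * (ref_c ** 2).sum())
    return num / den                       # one gamma value per wavelength
```

Wavelength bands whose time courses decouple from the 800 nm reference show up as the horizontal bands of low correlation mentioned in the abstract.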
Abstract:
OBJECTIVE: To evaluate the validity of hemoglobin A1C (A1C) as a diagnostic tool for type 2 diabetes and to determine the most appropriate A1C cutoff point for diagnosis in a sample of Haitian-Americans. SUBJECTS AND METHODS: Subjects (n = 128) were recruited from Miami-Dade and Broward counties, FL. Receiver operating characteristic (ROC) analysis was run in order to measure the sensitivity and specificity of A1C for detecting diabetes at different cutoff points. RESULTS: The area under the ROC curve was 0.86 using fasting plasma glucose ≥ 7.0 mmol/L as the gold standard. An A1C cutoff point of 6.26% had a sensitivity of 80% and a specificity of 74%, whereas an A1C cutoff point of 6.50% (recommended by the American Diabetes Association, ADA) had a sensitivity of 73% and a specificity of 89%. CONCLUSIONS: A1C is a reliable alternative to fasting plasma glucose for detecting diabetes in this sample of Haitian-Americans. A cutoff point of 6.26% was the optimal value to detect type 2 diabetes.
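One common way to pick an optimal cutoff such as 6.26% is to maximize Youden's J (sensitivity + specificity - 1) over candidate thresholds; the abstract does not state its criterion, so this is a hedged sketch on synthetic data:

```python
import numpy as np

# Hedged sketch of cutoff selection from ROC-style data: compute
# sensitivity/specificity at each candidate cutoff and keep the one
# maximizing Youden's J. Data below are illustrative, not the study's.
def best_cutoff(a1c, has_diabetes):
    a1c, y = np.asarray(a1c), np.asarray(has_diabetes, dtype=bool)
    best = None
    for c in np.unique(a1c):
        pred = a1c >= c                    # test positive at/above cutoff
        sens = (pred & y).sum() / y.sum()
        spec = (~pred & ~y).sum() / (~y).sum()
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best
```

The ROC curve itself is simply the (1 - specificity, sensitivity) pairs traced as the cutoff sweeps over all observed A1C values.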
Abstract:
Inaccurate diagnosis of vulvovaginitis leads to inadequate treatments that damage women's health. Objective: to evaluate the effectiveness of the available methods for diagnosing vulvovaginitis. Method: a cross-sectional study was performed with 200 women complaining of vaginal discharge. Vaginal smears were collected for microbiological testing, with the Gram stain method taken as the gold standard. The performance of the available methods for diagnosing vaginal discharge was assessed in terms of sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). Data were entered into GraphPad Prism 6 for statistical analysis. Results: wet mount for vaginal candidiasis: sensitivity = 31%, specificity = 97%, PPV = 54%, NPV = 93%, accuracy = 91%. Wet mount for bacterial vaginosis: sensitivity = 80%, specificity = 95%, PPV = 80%, NPV = 95%, accuracy = 92%. Syndromic approach for bacterial vaginosis: sensitivity = 95%, specificity = 43%, PPV = 30%, NPV = 97%, accuracy = 54%. Syndromic approach for vaginal candidiasis: sensitivity = 75%, specificity = 91%, PPV = 26%, NPV = 98%, accuracy = 90%. Pap smear for vaginal candidiasis: sensitivity = 68%, specificity = 98%, PPV = 86%, NPV = 96%, accuracy = 96%. Pap smear for bacterial vaginosis: sensitivity = 75%, specificity = 100%, PPV = 100%, NPV = 94%, accuracy = 95%. There was only one reported case of vaginal trichomoniasis, diagnosed by oncological cytology and wet mount and confirmed by Gram stain; the syndromic approach diagnosed it as bacterial vaginosis. From the data generated, and with support from the world literature, the vulvovaginitis protocol of the Maternidade Escola Januário Cicco was constructed. Conclusion: Pap smear and wet mount showed, respectively, low and very low sensitivity for vaginal candidiasis. The syndromic approach presented very low specificity and accuracy for bacterial vaginosis, which implies a large number of patients being misdiagnosed or treated incorrectly.
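All of the metrics reported above derive from a 2x2 table of each test against the gold standard (Gram stain); a minimal sketch, with illustrative counts rather than the study's data:

```python
# Hedged sketch of diagnostic-accuracy metrics from a 2x2 confusion
# matrix against a gold standard. The counts passed in are illustrative.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),     # fraction of true cases detected
        "specificity": tn / (tn + fp),     # fraction of non-cases cleared
        "ppv": tp / (tp + fp),             # probability a positive result is real
        "npv": tn / (tn + fn),             # probability a negative result is real
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Because PPV and NPV depend on disease prevalence in the sample, a test can combine high sensitivity with a low PPV, exactly the pattern the syndromic approach shows here.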
Abstract:
Chronic hepatitis C is the leading cause of chronic liver disease, of progression to advanced-stage hepatocellular carcinoma (HCC) and of liver-related death. It evolves progressively over 20 to 30 years, at rates that vary with virus, host and behavioural factors. This study evaluated the impact of hepatitis C on the lives of patients treated at a referral Hepatology service of the University Hospital Onofre Lopes (Liver Study Group) from May 1995 to December 2013. A retrospective evaluation of 10,304 records was performed in order to build a cohort of patients with hepatitis C in which every individual had the diagnosis confirmed by the gold-standard molecular biology test. Data were obtained directly from patient charts and recorded in a previously built Excel spreadsheet, following a coding scheme for the study variables, which comprise individual data and the prognostic factors for the progression of chronic hepatitis C defined in the literature. The Research Ethics Committee approved the project. Associations between variables were tested with the Chi-square and Fisher's exact tests; for the multivariate analysis, the binomial logistic regression method was used. For both tests, significance was set at p < 0.05 with 95% confidence. The results showed that the prevalence of chronic hepatitis C in the service (NEF) was 4.96%. The prevalence of cirrhosis due to hepatitis C was 13.7%. The prevalence of diabetes in patients with hepatitis C was 8.78%, and in cirrhotic patients with hepatitis C it was 38.0%. The prevalence of HCC was 5.45%. The clinical follow-up discontinuation rate was 67.5%. Mortality was 4.10% in confirmed cases without cirrhosis and 32.1% in cirrhotic patients. The factors associated with the development of cirrhosis were genotype 1 (p = 0.0015) and bilirubin > 1.3 mg% (p = 0.0017). Factors associated with mortality were age over 35 years, treatment abandonment, diabetes, insulin use, AST > 60 IU, ALT > 60 IU, high total bilirubin, prolonged prothrombin time (TAP), high INR, low albumin, cirrhosis and hepatocellular carcinoma. The occurrence of diabetes mellitus increased the mortality of patients with hepatitis C six-fold. Variables associated with the diagnosis of cirrhosis were being a blood donor (odds ratio 0.24, p = 0.044) and being a professional athlete (odds ratio 0.18, p = 0.35). It is reasonable to reconsider the currently proposed screening models for chronic hepatitis C. Cirrhosis and diabetes modify the clinical course of patients with chronic hepatitis C, making the disease more deadly. Being a blood donor or a professional athlete, however, appeared to be a protective factor that reduces the risk of cirrhosis, independent of alcohol consumption. Public policies ensuring more efficient access, reception and resolution are needed for this population.
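Figures such as "odds ratio 0.24" above come from 2x2 exposure/outcome tables (here, donor status versus cirrhosis); a minimal sketch with a Wald confidence interval, using illustrative counts rather than the cohort's data:

```python
import math

# Hedged sketch of an odds-ratio computation with a Wald 95% CI.
# a: exposed with outcome, b: exposed without,
# c: unexposed with outcome, d: unexposed without. Counts are illustrative.
def odds_ratio(a, b, c, d):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

An OR below 1 (as for blood donors) indicates a protective association; the multivariate logistic regression in the study adjusts such ORs for the other covariates.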
Abstract:
Over the last two decades, heart transplantation has evolved into the gold standard for the treatment of end-stage heart failure. It remains an extremely complex procedure, and two aspects are fundamental to its success: proper preservation and the method of transport. Two preservation-and-transport methods exist: the first relies on the traditional method of myocardial protection and on transporting the heart in refrigerated containers, while the second, more innovative one relies on the Organ Care System Heart, a device specifically designed to hold the heart and keep it in an active, normothermic physiological state, simulating the normal conditions found inside the human body. The Organ Care System Heart technology allows a completely different approach from traditional methods: it not only preserves and transports the heart, but also allows continuous ex-vivo monitoring of its function, from the moment the heart is removed from the donor's chest until implantation in the recipient. The main reason research invests heavily in improving organ-protection methods is the possibility of reducing the risk of cold ischemia, the condition in which an organ is deprived of blood supply, the lack of which causes progressively more severe and irreversible damage and consequent loss of function. In the case of the heart, cold-ischemia damage is significantly reduced by the Organ Care System Heart, with consequent benefits in terms of longer transport times, better organ optimization and, more generally, better patient outcomes.
Abstract:
To investigate the loads borne by the body in everyday life, a simple method for estimating ground reaction forces (GRFs) is needed. This study presents a model to estimate the time course of GRFs during running, starting from accelerations measured by inertial sensors. The two subjects who took part in the experiment ran at 4 different preset speeds wearing five sensors: one on the pelvis, two on the tibiae (right and left) and two on the feet (right and left). From the data obtained, a model was developed that estimates the time course of the (vertical and anteroposterior) GRFs and the contact and flight times of each step from the axial tibial acceleration. The reaction forces are estimated with a model based on contact and flight times, combined with a model that predicts the presence and the magnitude of impact peaks by exploiting two negative peaks identified in the axial tibial accelerations. Two force platforms were used as the gold standard to assess the quality of the estimates. The model correctly predicts the presence of the impact peak in 85% of cases, with an error on its magnitude between 6% and 9%. Peak vertical GRFs are approximated with an error between 1% and 5%, and anteroposterior GRFs with an error between 8% and 14% of the max-min range of the signal.
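A well-known contact/flight-time GRF model in the spirit of the one described is the sine-wave model of Morin et al. (2005), which represents the vertical GRF during contact as a half sine whose peak follows from impulse-momentum balance; the thesis's actual model may differ, so this is a sketch under that assumption:

```python
import math

# Hedged sketch of a contact/flight-time vertical GRF model (the classic
# sine-wave model): over a full stride the average force must support
# body weight, so the half-sine peak scales with the flight/contact ratio.
G = 9.81  # gravitational acceleration, m/s^2

def peak_vertical_grf(mass_kg, t_contact, t_flight):
    return mass_kg * G * (math.pi / 2) * (t_flight / t_contact + 1)

def vertical_grf(t, mass_kg, t_contact, t_flight):
    if not 0 <= t <= t_contact:
        return 0.0                          # foot off the ground: no force
    fmax = peak_vertical_grf(mass_kg, t_contact, t_flight)
    return fmax * math.sin(math.pi * t / t_contact)
```

With contact and flight times extracted from the tibial acceleration signal, such a model yields a full force-time curve from wearable sensors alone, which is what the force platforms are then used to validate.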
Abstract:
Among currently recognized bone diseases, osteoporosis plays a leading role, given its global prevalence and the multifactorial nature of its causes. It is characterized by a quantitative decrease in bone mass and by qualitative alterations of the micro-architecture of bone tissue, with a consequent increase in bone fragility and fracture risk. In medical science, X-ray imaging, tomographic imaging in particular, has offered excellent support for bone characterization for decades; specifically, micro-tomography, currently regarded as the "gold standard" thanks to its high spatial resolution, provides valuable information on the trabecular and cortical structure of the tissue. However, micro-CT is only applicable in vitro, so the aim of this thesis is to verify whether, and how, a different imaging technique, cone-beam CT (which is applicable in vivo), can provide comparable results despite its lower spatial resolution. Processing of the tomographic images, aimed at analyzing the most important morphostructural parameters of bone tissue, requires segmenting them with an ad hoc threshold. The results obtained in this thesis, carried out at the Medical Technology Laboratory of the Istituto Ortopedico Rizzoli in Bologna, show a good correlation between the two techniques when "ideal" samples are analyzed, i.e., small portions of a single type of bone tissue (trabecular or cortical) embedded in PMMA, and a fixed threshold is used for image segmentation. Conversely, in "real" cases (human vertebrae scanned in air) the same correlation does not hold, and in particular the use of a fixed threshold for image segmentation must be ruled out.
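The fixed-threshold segmentation step at issue above can be sketched as below, together with one standard morphometric index (bone volume fraction, BV/TV); neither the threshold value nor the specific index is taken from the thesis, both are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of fixed-threshold bone segmentation and one standard
# morphometric parameter. The gray values and threshold are synthetic.
def segment_bone(volume, threshold):
    return volume >= threshold             # binary bone mask

def bone_volume_fraction(bone_mask):
    return bone_mask.sum() / bone_mask.size   # BV/TV: bone voxels / total voxels
```

The thesis's negative result for "real" cases can be read directly off this sketch: if scanner, field of view or surrounding medium shift the gray-value distribution, a fixed `threshold` no longer isolates the same tissue, so the derived indices stop correlating between modalities.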
Abstract:
The work presented in this thesis was carried out at the Department of Computer Science, University of Oxford, during my period abroad in the Computational Biology Group. Its aim was to develop a mathematical model of the action potential of human cardiac Purkinje cells. These cells belong to the electrical conduction system of the heart and are considered very important in the genesis of arrhythmias. The model, written in Matlab, was designed using the Population of Models technique, an innovative approach to cellular modelling recently developed by the Computational Biology Group itself. The model was developed in three stages:
• First, a new mathematical model of the human cardiac Purkinje cell was developed, taking into account the previous models available in the literature and the most recent publications on the electrophysiological characteristics of human cardiac Purkinje cells. This model was built starting from the current gold standard of human ventricular cardiac modelling, the model published by T. O'Hara and Y. Rudy in 2011, modifying both its specific ionic currents and its internal cellular structure.
• The model thus designed was then used as the base model for building a population of 3000 models, by varying selected model parameters within a specific range. The resulting population was calibrated against experimental data from human Purkinje cells; after the calibration process, a population of 76 models remained.
• From the remaining population, a new mean-value model was derived, which reproduces the main characteristics of the action potential of a human cardiac Purkinje cell and is representative of the experimental dataset used in the calibration process.
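The Population of Models workflow in the steps above (scale parameters of a base model, evaluate a biomarker per sample, keep only models inside the experimental range) can be sketched with a toy surrogate; the real base model is the O'Hara-Rudy-derived Purkinje model in Matlab, which is not reproduced here, so every quantity below is an illustrative assumption:

```python
import random

# Hedged sketch of the Population of Models workflow. The "model" is a
# toy surrogate biomarker, not the actual Purkinje action-potential model.
random.seed(0)

def toy_apd90(scalings):
    # Toy surrogate: action potential duration (ms) shrinks as the "IKr"
    # conductance scaling grows, and stretches with "ICaL".
    return 300.0 / scalings["IKr"] + 20.0 * scalings["ICaL"]

def build_population(n, lo=0.5, hi=2.0):
    # Sample conductance scalings uniformly within a range around the base model.
    return [{"IKr": random.uniform(lo, hi), "ICaL": random.uniform(lo, hi)}
            for _ in range(n)]

def calibrate(population, apd_range=(250.0, 400.0)):
    # Keep only models whose biomarker falls inside the experimental range.
    return [p for p in population
            if apd_range[0] <= toy_apd90(p) <= apd_range[1]]

pop = build_population(3000)
kept = calibrate(pop)
```

The same pruning step is what reduced the thesis's population from 3000 candidate models to the 76 consistent with the human Purkinje recordings.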