890 results for Scale validation process
Abstract:
Development of novel implants in orthopaedic trauma surgery is based on limited datasets of cadaver trials or artificial bone models. A method has been developed whereby implants can be constructed in an evidence-based manner, founded on a large anatomical database consisting of more than 2,000 bone datasets extracted from CT scans. The aim of this study was the development and clinical application of an anatomically pre-contoured plate for the treatment of distal fibular fractures based on this anatomical database. Forty-eight Caucasian and Asian bone models (left and right) from the database were used for the preliminary optimization process and validation of the fibula plate. The implant was constructed to fit bilaterally in a lateral position on the fibula. A biomechanical comparison of the designed implant with the current gold standard in the treatment of distal fibular fractures (a locking one-third tubular plate) was then conducted. Finally, a clinical surveillance study was performed to evaluate the degree of implant fit achieved. The results showed that with a virtual anatomical database it was possible to design a fibula plate with an optimized fit for a large proportion of the population. Biomechanical testing showed the novel fibula plate to be superior to one-third tubular plates in four-point bending tests. The clinical application showed a very high degree of primary implant fit; only in a small minority of cases was further intra-operative implant bending necessary. Therefore, the goal of developing an implant for the treatment of distal fibular fractures based on the evidence of a large anatomical database was attained. Biomechanical testing showed good results regarding stability, and the clinical application confirmed the high degree of anatomical fit.
Abstract:
Biodegradable polymer nanoparticles have the properties necessary to address many of the issues associated with current drug delivery techniques, including targeted and controlled delivery. A novel drug delivery vehicle is proposed consisting of a poly(lactic acid) (PLA) nanoparticle core with a functionalized, mesoporous silica shell. In this study, the production of PLA nanoparticles by solvent displacement is investigated in both batch and continuous modes, and the effects of various system parameters are examined. Using Pluronic F-127 as the stabilization agent throughout the study, PLA nanoparticles with diameters ranging from 200 to 250 nm are produced through solvent displacement using two different methods: dropwise addition and an impinging jet mixer. The impinging jet mixer allows for easy scale-up of particle production. The concentration of surfactant and the volume of quench solution are found to have minimal impact on particle diameter; however, the concentration of PLA is found to significantly affect the mean diameter and polydispersity. In addition, the stability of the PLA nanoparticles is observed to increase as residual THF is evaporated. Lastly, the isolated PLA nanoparticles are coated with a silica shell using the Stöber process. It is found that functionalizing the silica with a phosphonic silane in the presence of excess Pluronic F-127 decreases coalescence of the particles during the coating process. Future work should fine-tune the PLA nanoparticle synthesis process by examining the effect of other system parameters and by synthesizing mesoporous silica shells.
Abstract:
In chronic haemodialysis patients, anaemia is a frequent finding associated with high therapeutic costs and further expenses resulting from serial laboratory measurements. HemoHue HH1 (HemoHue Ltd) is a novel tool consisting of a visual scale for the noninvasive assessment of anaemia by matching the coloration of the conjunctiva to a calibrated hue scale. The aim of the study was to investigate the usefulness of HemoHue in estimating individual haemoglobin concentrations and binary treatment outcomes in haemodialysis patients. A prospective blinded study with 80 haemodialysis patients was performed, comparing the visual haemoglobin assessment with the standard laboratory measurement. Each patient's haemoglobin concentration was estimated by seven different medical and nonmedical observers with variable degrees of clinical experience on two different occasions. The estimated population mean was close to the measured one (11.06 ± 1.67 versus 11.32 ± 1.23 g/dL, P < 0.0005). A learning effect could be detected. However, relative errors in individual estimates reached up to 50%. Insufficient performance in predicting binary outcomes (ROC AUC: 0.72 to 0.78) and poor interrater reliability (kappa < 0.6) further characterised this method.
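The interrater reliability reported above can be quantified with Cohen's kappa. The sketch below is a minimal illustration, assuming two observers making binary anaemia calls on the same patients; the data and the binary framing are invented, not taken from the study.

```python
# Minimal sketch (assumed data): Cohen's kappa for two raters classifying
# patients as "anaemic" / "not anaemic", one way to quantify the kind of
# interrater reliability reported above (kappa < 0.6 = at best moderate).
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same set of categorical labels."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.union1d(rater_a, rater_b)
    # Observed agreement: fraction of cases where both raters agree.
    p_o = np.mean(rater_a == rater_b)
    # Chance agreement: sum over labels of the product of marginal proportions.
    p_e = sum(np.mean(rater_a == l) * np.mean(rater_b == l) for l in labels)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical binary calls (1 = anaemic) by two observers on 10 patients.
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```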
Abstract:
Context: Daytime sleepiness in kidney transplant recipients has emerged as a potential predictor of impaired adherence to the immunosuppressive medication regimen. Thus there is a need to assess daytime sleepiness in clinical practice and transplant registries. Objective: To evaluate the validity of a single-item measure of daytime sleepiness integrated in the Swiss Transplant Cohort Study (STCS), using the American Educational Research Association framework. Methods: Using a cross-sectional design, we enrolled a convenience sample of 926 home-dwelling kidney transplant recipients (median age, 59.69 years; 25%-75% quartile [Q25-Q75], 50.27-59.69; 63% men; median time since transplant, 9.42 years [Q25-Q75, 4.93-15.85]). Daytime sleepiness was assessed by using a single item from the STCS and the 8 items of the validated Epworth Sleepiness Scale. Receiver operating characteristic (ROC) curve analysis was used to determine the cutoff for the STCS daytime sleepiness item against the Epworth Sleepiness Scale score. Results: Based on the ROC curve analysis, a score greater than 4 on the STCS daytime sleepiness item is recommended to detect daytime sleepiness. Content validity was high, as all expert reviews were unanimous. Concurrent validity was moderate (Spearman ϱ, 0.531; P < .001), and convergent validity with depression and poor sleep quality, although low, was significant (ϱ, 0.235; P < .001 and ϱ, 0.318; P = .002, respectively). Regarding group-difference validity, kidney transplant recipients with moderate, severe, and extremely severe depressive symptom scores had 3.4, 4.3, and 5.9 times higher odds of having daytime sleepiness, respectively, compared with recipients without depressive symptoms. Conclusion: The accumulated evidence supports the validity of the STCS daytime sleepiness item as a simple screening scale for daytime sleepiness.
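As a rough illustration of the kind of ROC-based cutoff selection described above, the sketch below picks the threshold that maximises Youden's J against a dichotomised reference scale. Whether the STCS analysis used this particular criterion is an assumption, and all data are simulated.

```python
# Minimal sketch (assumed approach): choosing a cutoff for a single-item score
# against a dichotomised reference via ROC analysis and Youden's J.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical data: a 0-10 single-item score and a reference "sleepy" status,
# loosely mimicking an Epworth-based dichotomisation.
item_score = rng.integers(0, 11, size=200)
reference_sleepy = (item_score + rng.normal(0, 3, size=200) > 7).astype(int)

fpr, tpr, thresholds = roc_curve(reference_sleepy, item_score)
youden_j = tpr - fpr
best = np.argmax(youden_j)                     # threshold maximising sens + spec - 1
print(f"AUC = {roc_auc_score(reference_sleepy, item_score):.2f}")
print(f"suggested cutoff: item score >= {thresholds[best]:.0f} "
      f"(sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f})")
```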
Abstract:
OBJECTIVES: To develop and evaluate a short form of the 24-item Geriatric Pain Measure (GPM) for use in community-dwelling older adults. DESIGN: Derivation and validation of a 12-item version of the GPM in a European and an independent U.S. sample of community-dwelling older adults. SETTING: Three community-dwelling sites in London, United Kingdom; Hamburg, Germany; and Solothurn, Switzerland; and two ambulatory geriatric clinics in Los Angeles, California. PARTICIPANTS: European sample: 1,059 community-dwelling older persons from the three European sites; validation sample: 50 persons from the Los Angeles ambulatory geriatric clinics. MEASUREMENTS: Multidimensional questionnaire including self-reported demographic and clinical information. RESULTS: Based on item-to-total scale correlations in the European sample, 11 of 24 GPM items were selected for inclusion in the short form. One additional item (pain-related sleep problems) was included based on clinical relevance. In the validation sample, the Cronbach alpha of the GPM-12 was 0.92 (individual subscale range, 0.77-0.92), and the Pearson correlation coefficient (r) between the GPM-12 and the original GPM was 0.98. The correlation between the GPM-12 and the McGill Pain Questionnaire was 0.63 (P < .001), similar to the correlation between the original GPM and the McGill Pain Questionnaire (Pearson r = 0.63; P < .001). Exploratory factor analysis indicated that the GPM-12 covers three subfactors (pain intensity, pain with ambulation, disengagement because of pain). CONCLUSION: The GPM-12 demonstrated good validity and reliability in these European and U.S. populations of older adults. Despite its brevity, the GPM-12 captures the multidimensional nature of pain in three subscales. The self-administered GPM-12 may be useful in the clinical assessment and management of pain and in pain-related research in older persons.
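The statistics above (item-to-total correlations, Cronbach alpha) are generic and can be reproduced on any item-response matrix. The sketch below is a minimal illustration on an invented 12-item data set; it is not the GPM-12 data.

```python
# Minimal sketch: Cronbach's alpha and item-to-total correlations of the kind
# used to derive and check a short-form scale. The response matrix is invented.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def item_total_correlations(items):
    """Correlation of each item with the total of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))                        # shared "pain" factor
responses = latent + rng.normal(0, 0.8, size=(300, 12))   # 12 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
print("item-total r:", np.round(item_total_correlations(responses), 2))
```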
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Handling outliers is achieved by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to automatically estimate it. We present our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
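To make the first-stage idea concrete, the sketch below estimates a scale plus rigid transform between two corresponding point sets and iteratively refits on the best-fitting fraction of pairs, a least-trimmed-squares style loop. It assumes known correspondences and invented data, and it illustrates only the trimming idea, not the full three-stage DPDM pipeline.

```python
# Minimal sketch (assumed correspondences): similarity (scale + rigid) fit
# with LTS-style trimming of the worst-fitting point pairs.
import numpy as np

def similarity_fit(X, Y):
    """Umeyama-style fit: Y ~ s * R @ X + t for corresponding point sets."""
    mu_x, mu_y = X.mean(0), Y.mean(0)
    Xc, Yc = X - mu_x, Y - mu_y
    U, D, Vt = np.linalg.svd(Yc.T @ Xc / len(X))
    S = np.eye(3)
    S[2, 2] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # enforce rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (Xc ** 2).sum(axis=1).mean()
    t = mu_y - s * R @ mu_x
    return s, R, t

def trimmed_similarity_fit(X, Y, outlier_rate=0.2, iters=10):
    keep = np.arange(len(X))
    for _ in range(iters):
        s, R, t = similarity_fit(X[keep], Y[keep])
        residuals = np.linalg.norm(Y - (s * X @ R.T + t), axis=1)
        n_keep = int(round((1.0 - outlier_rate) * len(X)))
        keep = np.argsort(residuals)[:n_keep]     # refit on best-fitting pairs
    return s, R, t

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Y = 1.3 * X @ R_true.T + np.array([5.0, -2.0, 1.0])
Y[:10] += rng.normal(0, 5, size=(10, 3))          # simulated outliers
s, R, t = trimmed_similarity_fit(X, Y, outlier_rate=0.2)
print(f"recovered scale = {s:.2f}")               # should be close to 1.3
```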
Abstract:
BACKGROUND: Intracoronary application of BM-derived cells for the treatment of acute myocardial infarction (AMI) is currently being studied intensively. Simultaneously, strict legal requirements surround the production of cells for clinical studies. Thus, good manufacturing practice (GMP)-compliant collection and preparation of BM for patients with AMI was established by the Cytonet group. METHODS: In addition to the fulfillment of standard GMP requirements, including a manufacturing license, validation of the preparation process and the final product was performed. Whole blood (n = 6) and BM (n = 3) validation samples were processed under GMP conditions by gelafundin or hydroxyethyl starch sedimentation in order to reduce erythrocytes/platelets and volume and to achieve specifications defined in advance. Special attention was paid to free potassium (<6 mmol/L), rheologically relevant cellular characteristics (hematocrit <0.45, platelets <450 x 10^6/mL), and the sterility of the final product. RESULTS: The data were reviewed and GMP compliance was confirmed by the German authorities (Paul-Ehrlich Institute). Forty-five BM cell preparations for clinical use were carried out following the validated methodology and standards. Additionally, three selections of CD34+ BM cells for infusion were performed. All specification limits were met. DISCUSSION: In conclusion, preparation of BM cells for intracoronary application is feasible under GMP conditions. As the results of sterility testing may not be available at the time of intracoronary application, the highest possible standards to avoid bacterial and other contamination have to be applied. The increased expense of the GMP-compliant process can be justified by higher safety for patients and better control of the final product.
Abstract:
Polycarbonate (PC) is an important engineering thermoplastic that is currently produced on a large industrial scale using bisphenol A and monomers such as phosgene. Since phosgene is highly toxic, a non-phosgene approach using diphenyl carbonate (DPC) as an alternative monomer, as developed by Asahi Corporation of Japan, is a significantly more environmentally friendly route. Other advantages include the use of CO2 instead of CO as raw material and the elimination of major waste water production. However, for the production of DPC to be economically viable, reactive-distillation units are needed to obtain the necessary yields by shifting the reaction equilibrium toward the desired products and separating the products at the point where the equilibrium reaction occurs. In the field of chemical reaction engineering, many reactions suffer from low equilibrium constants. The main goal of this research is to determine the optimal process needed to shift these reactions by using appropriate control strategies for the reactive distillation system. An extensive dynamic mathematical model has been developed to help investigate different control and processing strategies for the reactive distillation units in order to increase the production of DPC. The high-fidelity dynamic models include extensive thermodynamic and reaction-kinetics models while incorporating the necessary mass and energy balances of the various stages of the reactive distillation units. The study presented in this document shows the possibility of producing DPC via one reactive distillation column instead of the conventional two, with a production rate of 16.75 tons/h corresponding to starting reactant feeds of 74.69 tons/h of phenol and 35.75 tons/h of dimethyl carbonate. This represents a threefold increase over the projected production rate given in the literature based on a two-column configuration. In addition, the purity of the DPC produced could reach levels as high as 99.5% with the effective use of controls. These studies are based on simulations carried out using high-fidelity dynamic models.
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given the aggressive governmental policies for the reduction of fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, wind turbine sizes have grown exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies toward massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information regarding actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indeterminacy. The attainable modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. For this aim, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric based on the so-called Subspace Realization Theory is proposed, adapted to stochastic, non-stationary, and time-varying systems, as is the case of a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged into a numerical engine formulated as a search algorithm for model updating, known as the Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations guided by a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
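For reference, the Modal Assurance Criterion mentioned above compares two mode shapes as MAC = |φ_a^H φ_e|^2 / ((φ_a^H φ_a)(φ_e^H φ_e)). The sketch below computes it for invented mode-shape vectors; it illustrates only the metric itself, not the ASA updating engine.

```python
# Minimal sketch: Modal Assurance Criterion between "analytical" and
# "identified" mode shapes. Mode-shape data are invented for illustration.
import numpy as np

def mac(phi_a, phi_e):
    """MAC between two mode-shape vectors (possibly complex)."""
    num = np.abs(np.vdot(phi_a, phi_e)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_e, phi_e).real
    return num / den

def mac_matrix(Phi_a, Phi_e):
    """MAC matrix between the columns of two mode-shape matrices."""
    return np.array([[mac(Phi_a[:, i], Phi_e[:, j])
                      for j in range(Phi_e.shape[1])]
                     for i in range(Phi_a.shape[1])])

# Two hypothetical 6-DOF mode shapes and slightly perturbed "identified" ones.
Phi_model = np.array([[1, 1], [2, 1], [3, 0], [4, -1], [5, -2], [6, -3]], float)
Phi_ident = Phi_model + 0.05 * np.random.default_rng(3).normal(size=Phi_model.shape)
print(np.round(mac_matrix(Phi_model, Phi_ident), 3))   # near-identity matrix
```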
Abstract:
Breaking synoptic-scale Rossby waves (RWB) at the tropopause level are central to the daily weather evolution in the extratropics and the subtropics. RWB leads to pronounced meridional transport of heat, moisture, momentum, and chemical constituents. RWB events are manifest as elongated and narrow structures in the tropopause-level potential vorticity (PV) field. A feature-based validation approach is used to assess the representation of Northern Hemisphere RWB in present-day climate simulations carried out with the ECHAM5-HAM climate model at three different resolutions (T42L19, T63L31, and T106L31) against the ERA-40 reanalysis data set. An objective identification algorithm extracts RWB events from the isentropic PV field and allows quantifying the frequency of occurrence of RWB. The biases in the frequency of RWB are then compared to biases in the time-mean tropopause-level jet wind speeds. The ECHAM5-HAM model captures the location of the RWB frequency maxima in the Northern Hemisphere at all three resolutions. However, at coarse resolution (T42L19) the overall frequency of RWB, i.e. the frequency averaged over all seasons and the entire hemisphere, is underestimated by 28%. The higher-resolution simulations capture the overall frequency of RWB much better, with a minor difference between T63L31 and T106L31 (frequency errors of −3.5% and 6%, respectively). The number of large-size RWB events is significantly underestimated by the T42L19 experiment and well represented in the T106L31 simulation. On the local scale, however, significant differences to ERA-40 are found in the higher-resolution simulations. These differences are regionally confined and vary with the season. The most striking difference between T106L31 and ERA-40 is that ECHAM5-HAM overestimates the frequency of RWB in the subtropical Atlantic in all seasons except for spring. This bias maximum is accompanied by an equatorward extension of the subtropical westerlies.
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches, and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and that avoids over-channelization, and so produces more realistic extents. The choices of the datasets and the algorithms are open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
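As background for the spreading step, the sketch below implements the original Holmgren multiple-flow-direction weighting that the improved Flow-R algorithm builds on (the less DEM-sensitive modification described above is not reproduced here). The cell size, exponent, and small DEM are assumptions for illustration.

```python
# Minimal sketch: original Holmgren-style multiple-flow-direction weights for
# one DEM cell: p_i = tan(beta_i)^x / sum_j tan(beta_j)^x over downslope
# neighbours. Cell size and exponent x are illustrative assumptions.
import numpy as np

def holmgren_weights(dem, row, col, cell_size=10.0, x=4.0):
    """Fraction of flow passed from (row, col) to each of its 8 neighbours."""
    z0 = dem[row, col]
    weights = np.zeros((3, 3))
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            r, c = row + dr, col + dc
            if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                continue
            dist = cell_size * np.hypot(dr, dc)
            tan_beta = (z0 - dem[r, c]) / dist
            if tan_beta > 0:                      # downslope neighbours only
                weights[dr + 1, dc + 1] = tan_beta ** x
    total = weights.sum()
    return weights / total if total > 0 else weights

dem = np.array([[12.0, 11.0, 10.0],
                [11.0,  9.5,  8.0],
                [10.0,  8.5,  7.0]])
print(np.round(holmgren_weights(dem, 1, 1), 2))   # flow split toward lower cells
```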
Abstract:
The current state of health and biomedicine includes an enormous number of heterogeneous data 'silos', collected for different purposes and represented differently, that are presently impossible to share or analyze in toto. The greatest challenge for large-scale and meaningful analyses of health-related data is to achieve a uniform data representation for data extracted from heterogeneous source representations. Based upon an analysis and categorization of heterogeneities, a process for achieving comparable data content by using a uniform terminological representation is developed. This process addresses the types of representational heterogeneities that commonly arise in healthcare data integration problems. Specifically, this process uses a reference terminology, and associated "maps", to transform heterogeneous data to a standard representation for comparability and secondary use. Capturing the quality and precision of the "maps" between local terms and reference terminology concepts enhances the meaning of the aggregated data, empowering end users with better-informed queries for subsequent analyses. A data integration case study in the domain of pediatric asthma illustrates the development and use of a reference terminology for creating comparable data from heterogeneous source representations. The contribution of this research is a generalized process for the integration of data from heterogeneous source representations, and this process can be applied and extended to other problems where heterogeneous data need to be merged.
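A minimal sketch of the mapping idea described above follows: local codes from two hypothetical source systems are transformed to a shared reference concept while preserving the quality of each map, so that later queries can filter on mapping precision. All codes and identifiers are invented for illustration.

```python
# Minimal sketch (hypothetical codes): local-term to reference-terminology
# mapping that carries a quality attribute alongside the target concept.
from dataclasses import dataclass

@dataclass
class ConceptMap:
    local_code: str
    reference_concept: str      # identifier in the reference terminology
    quality: str                # e.g. "exact", "broader", "narrower"

# Hypothetical maps from two source systems to one reference concept.
maps = {
    ("siteA", "ASTHMA-PED"): ConceptMap("ASTHMA-PED", "REF:0001", "exact"),
    ("siteB", "J45.9"):      ConceptMap("J45.9",      "REF:0001", "broader"),
}

def to_reference(source, local_code):
    """Return (reference concept, map quality) for a local term, if mapped."""
    m = maps.get((source, local_code))
    return (m.reference_concept, m.quality) if m else (None, "unmapped")

print(to_reference("siteA", "ASTHMA-PED"))
print(to_reference("siteB", "J45.9"))
```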
Abstract:
Tropical wetlands are estimated to represent about 50% of natural wetland methane (CH4) emissions and explain a large fraction of the observed CH4 variability on timescales ranging from glacial-interglacial cycles to the currently observed year-to-year variability. Despite their importance, however, tropical wetlands are poorly represented in global models aiming to predict global CH4 emissions. This publication documents a first step in the development of a process-based model of CH4 emissions from tropical floodplains for global applications. For this purpose, the LPX-Bern Dynamic Global Vegetation Model (LPX hereafter) was slightly modified to represent floodplain hydrology, vegetation, and associated CH4 emissions. The extent of tropical floodplains was prescribed using output from the spatially explicit hydrology model PCR-GLOBWB. We introduced new plant functional types (PFTs) that explicitly represent floodplain vegetation. The PFT parameterizations were evaluated against available remote-sensing data sets (GLC2000 land cover and MODIS Net Primary Productivity). Simulated CH4 flux densities were evaluated against field observations and regional flux inventories. Simulated CH4 emissions at the Amazon Basin scale were compared to model simulations performed in the WETCHIMP intercomparison project. We found that LPX reproduces the average magnitude of observed net CH4 flux densities for the Amazon Basin. However, the model does not reproduce the variability between sites or between years within a site. Unfortunately, site information is too limited to confirm or disprove some model features. At the Amazon Basin scale, our results underline the large uncertainty in the magnitude of wetland CH4 emissions. Sensitivity analyses gave insights into the main drivers of floodplain CH4 emissions and their associated uncertainties. In particular, uncertainties in floodplain extent (i.e., differences between GLC2000 and PCR-GLOBWB output) modulate the simulated emissions by a factor of about 2. Our best estimates, using PCR-GLOBWB in combination with GLC2000, lead to simulated Amazon-integrated emissions of 44.4 ± 4.8 Tg yr⁻¹. Additionally, the LPX emissions are highly sensitive to vegetation distribution. Two simulations with the same mean PFT cover, but different spatial distributions of grasslands within the basin, modulated emissions by about 20%. Correcting the LPX-simulated NPP using MODIS reduces the Amazon emissions by 11.3%. Finally, due to an intrinsic limitation of LPX in accounting for seasonality in floodplain extent, the model failed to reproduce the full dynamics of CH4 emissions, but we propose solutions to this issue. The interannual variability (IAV) of the emissions increases by 90% if the IAV in floodplain extent is accounted for, but still remains lower than in most of the WETCHIMP models. While our model includes more mechanisms specific to tropical floodplains, we were unable to reduce the uncertainty in the magnitude of wetland CH4 emissions of the Amazon Basin. Our results helped identify and prioritize directions towards more accurate estimates of tropical CH4 emissions, and they stress the need for more research to constrain floodplain CH4 emissions and their temporal variability, even before including other fundamental mechanisms such as floating macrophytes or lateral water fluxes.
Abstract:
BACKGROUND AND PURPOSE: The DRAGON score predicts functional outcome in the hyperacute phase of intravenous thrombolysis treatment of ischemic stroke patients. We aimed to validate the score in a large multicenter cohort in the anterior and posterior circulation. METHODS: Prospectively collected data of consecutive ischemic stroke patients who received intravenous thrombolysis in 12 stroke centers were merged (n = 5471). We excluded patients lacking the data necessary to calculate the score and patients with missing 3-month modified Rankin Scale scores. The final cohort comprised 4519 eligible patients. We assessed the performance of the DRAGON score with the area under the receiver operating characteristic curve in the whole cohort for both good (modified Rankin Scale score, 0-2) and miserable (modified Rankin Scale score, 5-6) outcomes. RESULTS: The area under the receiver operating characteristic curve was 0.84 (0.82-0.85) for miserable outcome and 0.82 (0.80-0.83) for good outcome. Proportions of patients with good outcome were 96%, 93%, 78%, and 0% for 0 to 1, 2, 3, and 8 to 10 score points, respectively. Proportions of patients with miserable outcome were 0%, 2%, 4%, 89%, and 97% for 0 to 1, 2, 3, 8, and 9 to 10 points, respectively. When tested separately for the anterior and posterior circulation, there was no difference in performance (P=0.55); areas under the receiver operating characteristic curve were 0.84 (0.83-0.86) and 0.82 (0.78-0.87), respectively. No sex-related difference in performance was observed (P=0.25). CONCLUSIONS: The DRAGON score showed very good performance in the large merged cohort in both anterior and posterior circulation strokes. The DRAGON score provides rapid estimation of patient prognosis and supports clinical decision-making in the hyperacute phase of stroke care (e.g., when invasive add-on strategies are considered).
Abstract:
BACKGROUND & AIMS: Standardized instruments are needed to assess the activity of eosinophilic esophagitis (EoE) and to provide endpoints for clinical trials and observational studies. We aimed to develop and validate a patient-reported outcome (PRO) instrument and score, based on items that could account for variations in patients' assessments of disease severity. We also evaluated relationships between patients' assessment of disease severity and EoE-associated endoscopic, histologic, and laboratory findings. METHODS: We collected information from 186 patients with EoE in Switzerland and the US (69.4% male; median age, 43 years) via surveys (n = 135), focus groups (n = 27), and semi-structured interviews (n = 24). Items were generated for the instruments to assess biologic activity based on physician input. Linear regression was used to quantify the extent to which variations in patient-reported disease characteristics could account for variations in patients' assessment of EoE severity. The PRO instrument was used prospectively in 153 adult patients with EoE (72.5% male; median age, 38 years) and validated in an independent group of 120 patients with EoE (60.8% male; median age, 40.5 years). RESULTS: Seven PRO factors that assess characteristics of dysphagia, behavioral adaptations to living with dysphagia, and pain while swallowing accounted for 67% of the variation in patients' assessment of disease severity. Based on statistical considerations and patient input, a 7-day recall period was selected. Highly active EoE, based on endoscopic and histologic findings, was associated with an increase in patient-assessed disease severity. In the validation study, the mean difference between patient assessment of EoE severity and the PRO score was 0.13 (on a scale from 0 to 10). CONCLUSIONS: We developed and validated an EoE scoring system based on 7 PRO items that assesses symptoms over a 7-day recall period. ClinicalTrials.gov number: NCT00939263.
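The "accounted for 67% of the variation" statement above corresponds to the R² of a linear regression of patient-assessed severity on the PRO items. The sketch below reproduces that kind of calculation on invented data; the items, weights, and sample are not from the study.

```python
# Minimal sketch (invented data): share of variance in a global severity
# rating explained by several symptom items, via the R^2 of an OLS fit.
import numpy as np

rng = np.random.default_rng(4)
n = 150
items = rng.normal(size=(n, 7))                    # 7 PRO items (standardised)
severity = (items @ np.array([0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.2])
            + rng.normal(0, 1.0, size=n))          # global severity rating

X = np.column_stack([np.ones(n), items])           # add intercept column
beta, *_ = np.linalg.lstsq(X, severity, rcond=None)
residuals = severity - X @ beta
r2 = 1.0 - residuals.var() / severity.var()
print(f"R^2 = {r2:.2f}  (share of severity variance explained by the items)")
```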