30 results for Point Data
Abstract:
A patient-specific surface model of the proximal femur plays an important role in planning and supporting various computer-assisted surgical procedures, including total hip replacement, hip resurfacing, and osteotomy of the proximal femur. The common approach to deriving 3D models of the proximal femur is to use imaging techniques such as computed tomography (CT) or magnetic resonance imaging (MRI). However, the high logistic effort, the additional radiation exposure (for CT imaging), and the large quantity of data to be acquired and processed limit their practicality. In this paper, we present an integrated approach using a multi-level point distribution model (ML-PDM) to reconstruct a patient-specific model of the proximal femur from intra-operatively available sparse data. Results of experiments performed on dry cadaveric bones using dozens of 3D points are presented, as well as experiments using a limited number of 2D X-ray images, which demonstrate the promising accuracy of the present approach.
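The core of a point distribution model is a statistical shape model: a mean shape plus principal modes of variation learned from training shapes, which is then fitted to the sparse intra-operative points. The sketch below is a minimal single-level PDM, not the authors' ML-PDM; the training shapes, landmark count, and sparse-point indices are synthetic assumptions.

```python
# Minimal single-level point distribution model (PDM) sketch -- NOT the paper's
# ML-PDM, only an illustration of the underlying statistical shape model idea.
# Training shapes, landmark count, and sparse-point indices are synthetic.
import numpy as np

rng = np.random.default_rng(0)

n_shapes, n_landmarks = 40, 200          # hypothetical training set
mean_shape = rng.normal(size=n_landmarks * 3)
training = mean_shape + 0.05 * rng.normal(size=(n_shapes, n_landmarks * 3))

# Build the PDM: mean shape + principal modes of variation (PCA via SVD).
mean = training.mean(axis=0)
U, s, Vt = np.linalg.svd(training - mean, full_matrices=False)
n_modes = 10
modes = Vt[:n_modes]                      # (n_modes, 3 * n_landmarks)

# Intra-operatively only a sparse subset of digitized points is available.
sparse_idx = rng.choice(n_landmarks, size=30, replace=False)
cols = np.ravel([[3 * i, 3 * i + 1, 3 * i + 2] for i in sparse_idx])
observed = training[0, cols] + 0.01 * rng.normal(size=cols.size)

# Least-squares estimate of the mode weights from the sparse points,
# then reconstruction of the full (dense) surface model.
A = modes[:, cols].T                      # (3 * n_sparse, n_modes)
b, *_ = np.linalg.lstsq(A, observed - mean[cols], rcond=None)
reconstruction = mean + b @ modes

print("RMS landmark error:",
      np.sqrt(np.mean((reconstruction - training[0]) ** 2)))
```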
Abstract:
BACKGROUND: Electrical stimulation of the P6 acupuncture point reduces the incidence of postoperative nausea and vomiting (PONV). Neuromuscular blockade during general anesthesia can be monitored with electrical peripheral nerve stimulation at the wrist. The authors tested the effect of neuromuscular monitoring over the P6 acupuncture point on the reduction of PONV. METHODS: In this prospective, double-blinded, randomized controlled trial, the authors investigated, with institutional review board approval and informed consent, 220 women undergoing elective laparoscopic surgery anesthetized with fentanyl, sevoflurane, and rocuronium. During anesthesia, neuromuscular blockade was monitored by a conventional nerve stimulator at a frequency of 1 Hz over the ulnar nerve (n = 110, control group) or over the median nerve (n = 110, P6 group), the latter stimulating the P6 acupuncture point at the same time. The authors evaluated the incidence of nausea and vomiting during the first 24 h. RESULTS: No differences in demographic and morphometric data were found between the two groups. The 24-h incidence of PONV was 45% in the P6 acupuncture group versus 61% in the control group (P = 0.022). Nausea decreased from 56% in the control group to 40% in the P6 group (P = 0.022), but emesis decreased only from 28% to 23% (P = 0.439). Nausea decreased substantially during the first 6 h of the observation period (P = 0.009). Fewer subjects in the acupuncture group required ondansetron as rescue therapy (27% vs. 39%; P = 0.086). CONCLUSION: Intraoperative P6 acupuncture point stimulation with a conventional nerve stimulator during surgery significantly reduced the incidence of PONV over 24 h. The efficacy of P6 stimulation is similar to that of commonly used antiemetic drugs in the prevention of PONV.
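As a rough check of the primary result, the sketch below reruns the 24-h PONV comparison as a two-proportion z-test; the exact event counts (50/110 vs. 67/110) are assumptions inferred from the reported 45% and 61%.

```python
# Rough check of the reported 24-h PONV comparison (45% vs 61%, n = 110 per arm).
# The exact event counts are assumed (50/110 vs 67/110); the abstract reports
# only the percentages and the P value.
from statsmodels.stats.proportion import proportions_ztest

events = [50, 67]        # assumed PONV cases: P6 group, control group
n_obs = [110, 110]
stat, p_value = proportions_ztest(events, n_obs)
print(f"z = {stat:.2f}, two-sided p = {p_value:.3f}")   # p is approximately 0.02
```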
Abstract:
Aprotinin is widely used in cardiac surgery to reduce postoperative bleeding and the need for blood transfusion. Controversy exists regarding the influence of aprotinin on renal function and its effect on the incidence of perioperative myocardial infarction (MI) and cerebrovascular incidents (CVI). In the present study, we analyzed the incidence of these adverse events in patients who underwent coronary artery bypass grafting (CABG) surgery under full-dose aprotinin and compared the data with those recently reported by Mangano et al [2006]. For 751 consecutive patients undergoing CABG surgery under full-dose aprotinin (>4 million kallikrein-inhibitor units), we analyzed in-hospital data on renal dysfunction or failure, MI (defined as creatine kinase-myocardial band > 60 IU/L), and CVI (defined as persistent or transient neurological symptoms and/or a positive computed tomographic scan). Average age was 67.0 +/- 9.9 years, and patient pre- and perioperative characteristics were similar to those in the Society of Thoracic Surgeons database. The mortality (2.8%) and the incidence of renal failure (5.2%) were within the range of reported results. The incidence rates of MI (8% versus 16%; P < .01) and CVI (2% versus 6%; P < .01), however, were significantly lower than those reported by Mangano et al. Thus, the data from our single-center experience do not confirm the recently reported negative effect of full-dose aprotinin on the incidence of MI and CVI. Therefore, aprotinin may still remain a valid option to reduce postoperative bleeding, especially because of the increased use of aggressive fibrinolytic therapy following percutaneous transluminal coronary angioplasty.
Abstract:
BACKGROUND: Bleeding is a frequent complication during surgery. The intraoperative administration of blood products, including packed red blood cells, platelets, and fresh frozen plasma (FFP), is often life saving. Complications of blood transfusions contribute considerably to perioperative costs, and blood product resources are limited. Consequently, strategies to optimize the decision to transfuse are needed. Bleeding during surgery is a dynamic process and may result in major blood loss and coagulopathy due to dilution and consumption. The indication for transfusion should be based on reliable coagulation studies. While hemoglobin levels and platelet counts are available within 15 minutes, standard coagulation studies require one hour. Therefore, the decision to administer FFP has to be made in the absence of any data. Point-of-care testing of prothrombin time ensures that one major parameter of coagulation is available in the operating theatre within minutes. It is fast, easy to perform, inexpensive, and may enable physicians to rationally determine the need for FFP. METHODS/DESIGN: The objective of the POC-OP trial is to determine the effectiveness of point-of-care prothrombin time testing to reduce the administration of FFP. It is a patient- and assessor-blinded, single-center, randomized controlled parallel-group trial in 220 patients aged between 18 and 90 years undergoing major surgery (any type, except cardiac surgery and liver transplantation) with an estimated blood loss during surgery exceeding 20% of the calculated total blood volume or a requirement for FFP according to the judgment of the physicians in charge. Patients are randomized to usual care plus point-of-care prothrombin time testing or to usual care alone without point-of-care testing. The primary outcome is the relative risk of receiving any FFP perioperatively. The inclusion of 110 patients per group will yield more than 80% power to detect a clinically relevant relative risk of 0.60 for receiving FFP in the experimental group as compared with the control group. DISCUSSION: Point-of-care prothrombin time testing in the operating theatre may reduce the administration of FFP considerably, which in turn may decrease the costs and complications usually associated with the administration of blood products. TRIAL REGISTRATION: NCT00656396.
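The sample-size statement can be reproduced approximately with a standard two-proportion power calculation; the sketch below assumes a control-group FFP rate of 50%, which the abstract does not report.

```python
# Power sketch for the POC-OP sample-size statement: 110 patients per group,
# relative risk 0.60 for receiving FFP. The control-group FFP rate (0.5) is an
# assumption -- the abstract does not report it.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control = 0.50                    # assumed FFP rate with usual care
p_poc = 0.60 * p_control            # relative risk 0.60 in the POC arm

effect = proportion_effectsize(p_poc, p_control)   # Cohen's h
power = NormalIndPower().solve_power(effect_size=abs(effect),
                                     nobs1=110, alpha=0.05, ratio=1.0)
print(f"power = {power:.2f}")       # > 0.80, consistent with the abstract
```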
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but will mainly increase the accuracy of the representation of these processes. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction, and in time, because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies, the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are a priori selected as the analysis entity and considered as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g., in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural "atomic" analysis entity of EEG and ERP data is the scalp electric field.
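A small simulation makes the redundancy argument concrete: if the recorded channels are mixtures of a few underlying sources, adding electrodes adds correlated measurements rather than new degrees of freedom. The sketch below uses synthetic data and a random mixing matrix as stand-ins for real EEG and a leadfield.

```python
# Illustration of spatial redundancy in dense-array EEG: channels are mixtures
# of a few underlying sources (volume conduction), so adding electrodes adds
# correlated measurements rather than new degrees of freedom. All numbers here
# are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_samples = 5, 2000
sources = rng.normal(size=(n_sources, n_samples))

for n_channels in (32, 64, 128):
    mixing = rng.normal(size=(n_channels, n_sources))   # stand-in "leadfield"
    eeg = mixing @ sources + 0.1 * rng.normal(size=(n_channels, n_samples))

    # Eigenvalue spectrum of the channel covariance: the number of dominant
    # components stays near n_sources regardless of the channel count.
    eigvals = np.linalg.eigvalsh(np.cov(eeg))[::-1]
    explained = eigvals[:n_sources].sum() / eigvals.sum()
    print(f"{n_channels:3d} channels: top {n_sources} components explain "
          f"{explained:.1%} of the variance")
```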
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points, comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
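A minimal sketch of the data-generating process described above (occasion-specific survival drawn from a logit-normal distribution, constant capture probability) is given below; the particular values of t, u, p, μ, and σ are illustrative, not those of the study design.

```python
# Sketch of the data-generating process for the single-group CJS random-effects
# model: occasion-specific survival S_j with logit(S_j) ~ Normal(mu, sigma^2)
# and a common capture probability p. The values of t, u, p, mu and sigma are
# illustrative assumptions, not the study's design points.
import numpy as np

rng = np.random.default_rng(2)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

t, u = 10, 100            # occasions and new releases per occasion (assumed)
p = 0.6                   # capture probability
mu, sigma = 1.0, 0.5      # mean and SD of logit(S)

S = expit(rng.normal(mu, sigma, size=t - 1))   # survival between occasions

capture_histories = []
for release_occ in range(t - 1):
    for _ in range(u):
        history = np.zeros(t, dtype=int)
        history[release_occ] = 1               # marked and released
        alive = True
        for j in range(release_occ, t - 1):
            alive = alive and (rng.random() < S[j])
            if alive and rng.random() < p:
                history[j + 1] = 1             # recaptured
        capture_histories.append(history)

capture_histories = np.array(capture_histories)
print("simulated releases:", capture_histories.shape[0])
print("mean recaptures per animal:", (capture_histories.sum(axis=1) - 1).mean())
```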
Abstract:
The physics program of the NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) experiment at the CERN SPS consists of three subjects. In the first stage of data taking (2007-2009), measurements of hadron production in hadron-nucleus interactions needed for neutrino (T2K) and cosmic-ray (Pierre Auger and KASCADE) experiments will be performed. In the second stage (2009-2010), hadron production in proton-proton and proton-nucleus interactions, needed as reference data for a better understanding of nucleus-nucleus reactions, will be studied. In the third stage (2009-2013), the energy dependence of hadron production properties will be measured in p+p and p+Pb interactions and in nucleus-nucleus collisions, with the aim of identifying the properties of the onset of deconfinement and finding evidence for the critical point of strongly interacting matter. The NA61 experiment was approved at CERN in June 2007. The first pilot run was performed during October 2007. Calibrations of all detector components have been performed successfully and preliminary uncorrected spectra have been obtained. High quality of track reconstruction and particle identification, similar to NA49, has been achieved. The data and new detailed simulations confirm that the NA61 detector acceptance and particle identification capabilities cover the phase space required by the T2K experiment. This document reports on the progress made in the calibration and analysis of the 2007 data.
Abstract:
BACKGROUND: There is ongoing debate on the optimal drug-eluting stent (DES) in diabetic patients with coronary artery disease. Biodegradable polymer drug-eluting stents (BP-DES) may potentially improve clinical outcomes in these high-risk patients. We sought to compare long-term outcomes in patients with diabetes treated with biodegradable polymer DES vs. durable polymer sirolimus-eluting stents (SES). METHODS: We pooled individual patient-level data from 3 randomized clinical trials (ISAR-TEST 3, ISAR-TEST 4 and LEADERS) comparing biodegradable polymer DES with durable polymer SES. Clinical outcomes out to 4 years were assessed. The primary end point was the composite of cardiac death, myocardial infarction and target-lesion revascularization. Secondary end points were target-lesion revascularization and definite or probable stent thrombosis. RESULTS: Of 1094 patients with diabetes included in the present analysis, 657 received biodegradable polymer DES and 437 durable polymer SES. At 4 years, the incidence of the primary end point was similar with BP-DES versus SES (hazard ratio=0.95, 95% CI=0.74-1.21, P=0.67). Target-lesion revascularization was also comparable between the groups (hazard ratio=0.89, 95% CI=0.65-1.22, P=0.47). Definite or probable stent thrombosis was significantly reduced among patients treated with BP-DES (hazard ratio=0.52, 95% CI=0.28-0.96, P=0.04), a difference driven by significantly lower stent thrombosis rates with BP-DES between 1 and 4 years (hazard ratio=0.15, 95% CI=0.03-0.70, P=0.02). CONCLUSIONS: In patients with diabetes, biodegradable polymer DES, compared to durable polymer SES, were associated with comparable overall clinical outcomes during follow-up to 4 years. Rates of stent thrombosis were significantly lower with BP-DES.
Abstract:
BACKGROUND: Monitoring of HIV viral load in patients on combination antiretroviral therapy (ART) is not generally available in resource-limited settings. We examined the cost-effectiveness of qualitative point-of-care viral load tests (POC-VL) in sub-Saharan Africa. DESIGN: Mathematical model based on longitudinal data from the Gugulethu and Khayelitsha township ART programmes in Cape Town, South Africa. METHODS: Cohorts of patients on ART monitored by POC-VL, CD4 cell count, or clinically were simulated. Scenario A considered only the more accurate detection of treatment failure with POC-VL, scenario B also considered the effect on HIV transmission, and scenario C further assumed that the risk of virologic failure is halved with POC-VL due to improved adherence. We estimated the change in costs per quality-adjusted life-year gained (incremental cost-effectiveness ratios, ICERs) of POC-VL compared with CD4 and clinical monitoring. RESULTS: POC-VL tests with detection limits below 1000 copies/ml increased costs due to unnecessary switches to second-line ART, without improving survival. Assuming POC-VL unit costs between US$5 and US$20 and detection limits between 1000 and 10,000 copies/ml, the ICER of POC-VL was US$4010-US$9230 compared with clinical monitoring and US$5960-US$25,540 compared with CD4 cell count monitoring. In scenario B, the corresponding ICERs were US$2450-US$5830 and US$2230-US$10,380. In scenario C, the ICER ranged between US$960 and US$2500 compared with clinical monitoring and between cost-saving and US$2460 compared with CD4 monitoring. CONCLUSION: The cost-effectiveness of POC-VL for monitoring ART is improved by a higher detection limit, by taking the reduction in new HIV infections into account, and by assuming that failure of first-line ART is reduced due to targeted adherence counselling.
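For reference, the ICER used throughout is simply the incremental cost divided by the incremental QALYs relative to the comparator strategy; the numbers in the sketch below are placeholders, not outputs of the study's model.

```python
# Incremental cost-effectiveness ratio (ICER) as used in the abstract:
# extra cost per quality-adjusted life-year (QALY) gained relative to a
# comparator strategy. The numbers below are placeholders, not the study's.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Return incremental cost per QALY gained (new vs. old strategy)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient example: POC-VL monitoring vs. clinical monitoring.
value = icer(cost_new=3200, qaly_new=8.1, cost_old=2400, qaly_old=7.9)
print(f"ICER = US${value:,.0f} per QALY gained")
```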
Abstract:
This article presents a new response time measure of evaluations, the Evaluative Movement Assessment (EMA). Two properties are verified for the first time in a response time measure: (a) mapping of multiple attitude objects onto a single scale, and (b) centering that scale around a neutral point. Property (a) has implications when self-report and response time measures of attitudes have a low correlation. A study using EMA as an indirect measure revealed a low correlation with self-reported attitudes when the correlation reflected between-subjects differences in preferences for one attitude object over a second. Previously, this result may have been interpreted as dissociation between the two measures. However, when correlations from the same data reflected within-subject preference rank orders between multiple attitude objects, they were substantial (average r = .64). This result suggests that the presence of low correlations between self-report and response time measures in previous studies may be a reflection of methodological aspects of the response time measurement techniques. Property (b) has implications for exploring theoretical questions that require assessing whether an evaluation is positive or negative (e.g., prejudice), because it allows such classifications in response time measurement to be made for the first time.
Abstract:
BACKGROUND & AIMS: Recently, genetic variations in MICA (lead single nucleotide polymorphism [SNP] rs2596542) were identified by a genome-wide association study (GWAS) to be associated with hepatitis C virus (HCV)-related hepatocellular carcinoma (HCC) in Japanese patients. In the present study, we sought to determine whether this SNP is predictive of HCC development in the Caucasian population as well. METHODS: An extended region around rs2596542 was genotyped in 1924 HCV-infected patients from the Swiss Hepatitis C Cohort Study (SCCS). Pair-wise correlation between key SNPs was calculated in both the Japanese and European populations (HapMap3: CEU and JPT). RESULTS: To our surprise, the minor allele A of rs2596542 in proximity to MICA appeared to have a protective impact on HCC development in Caucasians, which represents an inverse association compared to the one observed in the Japanese population. Detailed fine-mapping analyses revealed a new SNP in HCP5 (rs2244546), upstream of MICA, as a strong predictor of HCV-related HCC in the SCCS (univariable p=0.027; multivariable p=0.0002, odds ratio=3.96, 95% confidence interval=1.90-8.27). This newly identified SNP had a similarly directed effect on HCC in both the Caucasian and Japanese populations, suggesting that rs2244546 may better tag a putative true variant than the originally identified SNPs. CONCLUSIONS: Our data confirm the MICA/HCP5 region as a susceptibility locus for HCV-related HCC and identify rs2244546 in HCP5 as a novel tagging SNP. In addition, our data exemplify the need for conducting meta-analyses of cohorts of different ethnicities in order to fine-map GWAS signals.
Abstract:
BACKGROUND: Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS: This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores, the Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and the GCS, Age, and Pressure (GAP) score, using data from the UK Trauma Audit and Research Network. First, an assessment of discrimination, using the area under the receiver operating characteristic (ROC) curve, and of calibration, comparing mortality rates with those originally published, was performed. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS: A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for the GAP score (95% confidence interval, 86.7-87.7) and 86.8% for the MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low-, medium-, and high-risk GAP categories, respectively. In the low- and medium-risk groups, these rates were almost double those previously published. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consistent with the results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity and specificity, as well as areas under the ROC curve. CONCLUSION: We found both scores to be valid triage tools to stratify emergency department patients according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs better differentiate risk classification and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE: Prognostic study, level II.
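The score-performance metrics reported above follow directly from a 2x2 cross-tabulation of risk category against 30-day mortality; the sketch below shows the computation with made-up counts, not the TARN data.

```python
# Prognostic test metrics used in the validation study: sensitivity,
# specificity and likelihood ratios from a 2x2 table of score category
# (high risk vs. not) against 30-day mortality. The counts below are made up
# for illustration; they are not the TARN data.
def prognostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)          # likelihood ratio of a positive result
    lr_neg = (1 - sens) / spec          # likelihood ratio of a negative result
    return sens, spec, lr_pos, lr_neg

sens, spec, lr_pos, lr_neg = prognostic_metrics(tp=900, fp=600, fn=4574, tn=73733)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```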
Abstract:
This article gives details of our proposal to replace ordinary chiral SU(3)_L × SU(3)_R perturbation theory χPT_3 by three-flavor chiral-scale perturbation theory χPT_σ. In χPT_σ, amplitudes are expanded at low energies and small u, d, s quark masses about an infrared fixed point α_IR of three-flavor QCD. At α_IR, the quark condensate ⟨q̄q⟩_vac ≠ 0 induces nine Nambu-Goldstone bosons: π, K, η, and a 0^{++} QCD dilaton σ. Physically, σ appears as the f_0(500) resonance, a pole at a complex mass with real part ≲ m_K. The ΔI = 1/2 rule for nonleptonic K decays is then a consequence of χPT_σ, with a K_S σ coupling fixed by data for γγ → ππ and K_S → γγ. We estimate R_IR ≈ 5 for the nonperturbative Drell-Yan ratio R = σ(e+e− → hadrons)/σ(e+e− → μ+μ−) at α_IR and show that, in the many-color limit, σ/f_0 becomes a narrow qq̄ state with planar-gluon corrections. Rules for the order of terms in χPT_σ loop expansions are derived in Appendix A and extended in Appendix B to include inverse-power Li-Pagels singularities due to external operators. This relates to an observation that, for γγ channels, partial conservation of the dilatation current is not equivalent to σ-pole dominance.
Abstract:
BACKGROUND: The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), lateral cephalograms from a machine with a 1.5 m SMD, and 3D models from cone-beam computed tomography (CBCT) data. METHODS: Twenty-one dry human skulls were used. Lateral cephalograms were taken using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken with a 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4-week interval. Direct physical measurements with a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS: Nasion-Point A was significantly different from the gold standard in all methods. More statistically significant differences were found for the measurements on the 3 m SMD cephalograms than for the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than that of the other methods. LIMITATIONS: Dry human skulls without soft tissues were used. Therefore, the results have to be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS: 3D measurements resulted in better observer agreement. The accuracy of measurements based on CBCT and the 1.5 m SMD cephalogram was better than that of the 3 m SMD cephalogram. These findings demonstrate the accuracy and reliability of linear measurements based on 3D CBCT data compared with 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
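The accuracy analysis against the gold standard amounts to a paired comparison per measurement and modality; the sketch below illustrates this with simulated calliper and cephalogram values, not the study's measurements.

```python
# Sketch of the accuracy comparison against the gold standard (digital calliper):
# a paired t-test of one linear measurement per skull between an imaging method
# and the direct physical measurement. The data are simulated, not the study's.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_skulls = 21
calliper = rng.normal(70.0, 5.0, n_skulls)                  # simulated gold standard (mm)
cephalogram_3m = calliper + rng.normal(0.8, 0.6, n_skulls)  # simulated systematic bias

t_stat, p_value = ttest_rel(cephalogram_3m, calliper)
bias = np.mean(cephalogram_3m - calliper)
print(f"mean difference = {bias:.2f} mm, p = {p_value:.4f}")
```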
Abstract:
Syndromic surveillance (SyS) systems currently exploit various sources of health-related data, most of which are collected for purposes other than surveillance (e.g. economic). Several European SyS systems use data collected during meat inspection for syndromic surveillance of animal health, as some diseases may be more easily detected post-mortem than at their point of origin or during the ante-mortem inspection upon arrival at the slaughterhouse. In this paper we use simulation to evaluate the performance of a quasi-Poisson regression (also known as improved Farrington) algorithm for the detection of disease outbreaks during post-mortem inspection of slaughtered animals. When the algorithm was parameterized based on retrospective analyses of 6 years of historic data, the probability of detection was satisfactory for large (range 83-445 cases) outbreaks but poor for small (range 20-177 cases) outbreaks. Varying the amount of historical data used to fit the algorithm can help increase the probability of detection for small outbreaks. However, while the use of a 0.975 quantile generated a low false-positive rate, in most cases more than 50% of outbreak cases had already occurred at the time of detection. The high variance observed in the whole-carcass condemnation time series, and the lack of flexibility in the temporal distribution of simulated outbreaks resulting from the low (monthly) reporting frequency, constitute major challenges for the early detection of outbreaks in the livestock population based on meat inspection data. Reporting frequency should be increased in the future to improve the timeliness of the SyS system, while increased sensitivity may be achieved by integrating meat inspection data into a multivariate system that simultaneously evaluates multiple sources of data on livestock health.
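A simplified sketch of the detection step is given below: fit an overdispersed (quasi-)Poisson GLM to the historic monthly counts and raise an alarm when a new count exceeds an upper bound derived from the 0.975 quantile. The simulated data, the seasonal covariates, and the normal-approximation threshold are simplifications, not the published improved Farrington implementation.

```python
# Simplified outbreak-detection sketch in the spirit of the quasi-Poisson
# (improved Farrington) approach: fit an overdispersed Poisson GLM to historic
# monthly condemnation counts and flag a new count that exceeds an upper
# threshold based on the 0.975 quantile. Data, covariates and the
# normal-approximation threshold are assumptions, not the published algorithm.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(4)

months = np.arange(72)                               # 6 years of monthly data
seasonal = np.column_stack([np.ones_like(months, dtype=float),
                            np.sin(2 * np.pi * months / 12),
                            np.cos(2 * np.pi * months / 12)])
baseline = np.exp(3.0 + 0.4 * seasonal[:, 1])
counts = rng.poisson(baseline * rng.gamma(5.0, 0.2, size=months.size))  # overdispersed

# Quasi-Poisson fit: Poisson GLM with the scale estimated from Pearson chi-square.
fit = sm.GLM(counts, seasonal, family=sm.families.Poisson()).fit(scale="X2")

# Expected count and upper detection threshold for the next month.
x_new = np.array([[1.0, np.sin(2 * np.pi * 72 / 12), np.cos(2 * np.pi * 72 / 12)]])
mu_hat = fit.predict(x_new)[0]
threshold = mu_hat + norm.ppf(0.975) * np.sqrt(fit.scale * mu_hat)

new_count = 45                                        # hypothetical observation
print(f"expected={mu_hat:.1f}, threshold={threshold:.1f}, "
      f"alarm={new_count > threshold}")
```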