938 results for "Point method"


Relevance: 30.00%

Abstract:

There is no accepted way of measuring prothrombin time without time loss for patients undergoing major surgery who are at risk of intraoperative dilution and consumption coagulopathy due to bleeding and volume replacement with crystalloids or colloids. In these situations, decisions to transfuse fresh frozen plasma and procoagulatory drugs have to rely on clinical judgment. Point-of-care devices are considerably faster than the standard laboratory methods. In this study we assessed the accuracy of a point-of-care (PoC) device measuring prothrombin time compared with the standard laboratory method. Patients undergoing major surgery and intensive care unit patients were included. PoC prothrombin time was measured with the CoaguChek XS Plus (Roche Diagnostics, Switzerland). PoC and reference tests were performed independently and interpreted under blinded conditions. Using a cut-off prothrombin time of 50%, we calculated diagnostic accuracy measures, plotted a receiver operating characteristic (ROC) curve and tested for equivalence between the two methods. PoC sensitivity and specificity were 95% (95% CI 77%, 100%) and 95% (95% CI 91%, 98%), respectively. The negative likelihood ratio was 0.05 (95% CI 0.01, 0.32). The positive likelihood ratio was 19.57 (95% CI 10.62, 36.06). The area under the ROC curve was 0.988. Equivalence between the two methods was confirmed. The CoaguChek XS Plus is rapid and highly accurate compared with the reference test. These findings suggest that PoC testing will be useful for monitoring intraoperative prothrombin time when coagulopathy is suspected and could lead to a more rational use of expensive and limited blood bank resources.
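For orientation, the accuracy measures quoted above follow directly from a 2x2 table at the 50% cut-off. Below is a minimal sketch of that calculation; the counts are hypothetical (chosen only to land near the published sensitivity and specificity), not the study's data.

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity and likelihood ratios from 2x2 table counts."""
    sens = tp / (tp + fn)            # true positive rate
    spec = tn / (tn + fp)            # true negative rate
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),    # positive likelihood ratio
        "LR-": (1 - sens) / spec,    # negative likelihood ratio
    }

# Hypothetical counts, not taken from the paper
print(diagnostic_accuracy(tp=19, fp=10, fn=1, tn=200))
# sensitivity 0.95, specificity ~0.95, LR+ ~20, LR- ~0.05 -- the same order as reported
```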

Relevance: 30.00%

Abstract:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function of the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to re-establishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.

CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
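The core operation is easiest to see in the frequency domain: divide out the measured system transfer function and apply the Hann-windowed "virtual PET" transfer function. The sketch below is a deliberately idealized illustration of that idea with synthetic Gaussian point spread functions and hypothetical parameters; it does not reproduce the paper's phantom-derived PSF parameterization.

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Centred n x n Gaussian PSF, normalised to unit sum (stand-in for a measured PSF)."""
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def hann_mtf(n, cutoff):
    """Radial Hann window used as the virtual system's modulation transfer function."""
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    fr = np.sqrt(fx ** 2 + fy ** 2)
    mtf = 0.5 * (1.0 + np.cos(np.pi * fr / cutoff))
    mtf[fr > cutoff] = 0.0          # suppress frequencies above the critical frequency
    return mtf

def transconvolve(img, psf_system, mtf_virtual, eps=1e-3):
    """Map an image from a real system (psf_system) onto the virtual system (mtf_virtual)."""
    h_sys = np.fft.fft2(np.fft.ifftshift(psf_system))
    t = np.zeros_like(h_sys)
    ok = np.abs(h_sys) > eps        # only invert where the system PSF carries signal
    t[ok] = mtf_virtual[ok] / h_sys[ok]
    return np.real(np.fft.ifft2(np.fft.fft2(img) * t))

# Toy check: one object imaged by two "scanners" with different PSFs...
n = 128
obj = np.zeros((n, n)); obj[54:74, 54:74] = 1.0
blur = lambda im, s: np.real(np.fft.ifft2(np.fft.fft2(im) *
                                          np.fft.fft2(np.fft.ifftshift(gaussian_psf(n, s)))))
img_a, img_b = blur(obj, 2.0), blur(obj, 3.0)
mtf_v = hann_mtf(n, cutoff=0.12)
# ...after Transconvolution both carry the same, defined partial volume effect.
va = transconvolve(img_a, gaussian_psf(n, 2.0), mtf_v)
vb = transconvolve(img_b, gaussian_psf(n, 3.0), mtf_v)
print(float(np.max(np.abs(va - vb))))   # small residual difference between the two
```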

Relevance: 30.00%

Abstract:

A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch and bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff.

The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member, where value was determined relative to that type of staff's ability to perform the job function of an RN (i.e., value for eight hours RN = 8 points, LVN = 6 points); and (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints.

Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
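A minimal sketch of this kind of staffing model is shown below, using SciPy's mixed-integer linear programming solver as a stand-in for the dissertation's branch-and-bound implementation. All units, staff values, penalties and acuity totals are hypothetical and only illustrate the structure of the objective and constraints described above.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical shift data (illustrative only): 2 units, 2 staff types.
staff   = ["RN", "LVN"]
value   = {"RN": 8.0, "LVN": 6.0}     # acuity points each can cover per 8-h shift
penalty = {"RN": 1.0, "LVN": 0.9}     # objective weights expressing allocation priority
acuity  = [52.0, 38.0]                # acuity points required on each unit
min_rn  = [3, 2]                      # minimum RNs required on each unit
avail   = {"RN": 9, "LVN": 6}         # staff available this shift

n_u, n_s = len(acuity), len(staff)
idx = lambda s, u: s * n_u + u        # x[s, u] flattened into a vector

c = np.array([penalty[staff[s]] for s in range(n_s) for u in range(n_u)])

rows, lb, ub = [], [], []
for u in range(n_u):                  # demand: cover the unit's acuity points
    row = np.zeros(n_s * n_u)
    for s in range(n_s):
        row[idx(s, u)] = value[staff[s]]
    rows.append(row); lb.append(acuity[u]); ub.append(np.inf)
for u in range(n_u):                  # demand: minimum RN coverage on each unit
    row = np.zeros(n_s * n_u); row[idx(0, u)] = 1.0
    rows.append(row); lb.append(min_rn[u]); ub.append(np.inf)
for s in range(n_s):                  # supply: cannot assign more staff than available
    row = np.zeros(n_s * n_u)
    for u in range(n_u):
        row[idx(s, u)] = 1.0
    rows.append(row); lb.append(0.0); ub.append(avail[staff[s]])

res = milp(c=c,
           constraints=LinearConstraint(np.array(rows), lb, ub),
           integrality=np.ones(n_s * n_u),          # whole staff members only
           bounds=Bounds(0, np.inf))
print(res.x.reshape(n_s, n_u))                      # rows: RN, LVN; columns: unit 1, unit 2
```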

Relevance: 30.00%

Abstract:

We derive explicit lower and upper bounds for the probability generating functional of a stationary locally stable Gibbs point process, which can be applied to summary statistics such as the F function. For pairwise interaction processes we obtain further estimates for the G and K functions, the intensity, and higher-order correlation functions. The proof of the main result is based on Stein's method for Poisson point process approximation.
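For orientation, the objects involved can be written as follows; these are the standard definitions, and the paper's notation and the exact form of its bounds may differ.

```latex
% Probability generating functional of a point process \Phi, for u: R^d -> [0,1]:
G[u] \;=\; \mathbb{E}\Big[\prod_{x \in \Phi} u(x)\Big].
% The empty-space function F follows by plugging in an indicator function:
F(r) \;=\; \mathbb{P}\big(\Phi \cap B(o,r) \neq \emptyset\big)
      \;=\; 1 - G\big[\,1 - \mathbf{1}_{B(o,r)}\,\big],
% so explicit lower and upper bounds on G translate directly into bounds on F.
```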

Relevance: 30.00%

Abstract:

PURPOSE: Currently, the diagnosis of pedicle screw (PS) loosening is based on a subjectively assessed halo sign, that is, a radiolucent line wider than 1 mm around the implant in plain radiographs. We aimed to develop and validate a quantitative method for diagnosing PS loosening on radiographs. METHODS: Between 11/2004 and 1/2010, 36 consecutive patients treated with thoraco-lumbar spine fusion with PS instrumentation without PS loosening were compared with 37 other patients who developed clinically manifest PS loosening. Three different angles were measured and compared regarding their capability to discriminate the loosened PS over the postoperative course. Inter-observer agreement was tested and a receiver operating characteristic (ROC) curve analysis was performed. RESULTS: The angle measured between the PS axis and the cranial endplate was significantly different between the early and all later postoperative images. The Spearman correlation coefficient for the measurements of two observers at each postoperative time point ranged from 0.89 at 2 weeks to 0.94 at 2 months and 1 year postoperatively. An angle change of 1.9° between the immediate postoperative and 6-month postoperative images was 75% sensitive and 89% specific for the identification of loosened screws (AUC = 0.82). DISCUSSION: The angle between the PS axis and the cranial endplate changed consistently in the presence of PS loosening. A change of this angle of at least 2° had a relatively high sensitivity and specificity for diagnosing screw loosening.

Relevance: 30.00%

Abstract:

This paper addresses two major topics concerning the role of expectations in the formation of reference points. First, we show that when expectations are present, they have a significant impact on reference point formation. Second, we find that decision-makers employ expected values when forming reference points (integrated mechanism) as opposed to single possible outcomes (segregated mechanism). Despite the importance of reference points in prospect theory, to date, there is no standard method of examining these. We develop a new experimental design that employs an indirect approach and extends an existing direct approach. Our findings are consistent across the two approaches.
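One compact way to state the contrast tested here, for a prospect with outcomes x_i occurring with probabilities p_i, is sketched below; this is an illustrative formalization, not necessarily the paper's own.

```latex
% Integrated mechanism: the reference point is the expected value of the prospect
r_{\text{int}} \;=\; \mathbb{E}[X] \;=\; \sum_i p_i\, x_i,
% Segregated mechanism: reference points are anchored on single possible outcomes
r_{\text{seg}} \;\in\; \{x_1, x_2, \dots, x_n\}.
```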

Relevance: 30.00%

Abstract:

We explore a method developed in statistical physics which has been argued to have exponentially small finite-volume effects, in order to determine the critical temperature $T_c$ of pure SU(3) gauge theory close to the continuum limit. The method allows us to estimate the critical coupling $\beta_c$ of the Wilson action for temporal extents up to $N_\tau \sim 20$ with $\lesssim 0.1\%$ uncertainties. Making use of the scale-setting parameters $r_0$ and $\sqrt{t_0}$ in the same range of $\beta$-values, these results lead to the independent continuum extrapolations $T_c\,r_0 = 0.7457(45)$ and $T_c\,\sqrt{t_0} = 0.2489(14)$, with the latter originating from a more convincing fit. Inserting a conversion of $r_0$ from literature (unfortunately with much larger errors) yields $T_c/\Lambda_{\overline{\mathrm{MS}}} = 1.24(10)$.
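The continuum numbers follow from the standard lattice relation between temperature, lattice spacing and temporal extent, which the abstract leaves implicit; schematically:

```latex
T \;=\; \frac{1}{a(\beta)\,N_\tau}
\quad\Longrightarrow\quad
T_c\, r_0 \;=\; \frac{r_0/a(\beta_c)}{N_\tau}, \qquad
T_c\, \sqrt{t_0} \;=\; \frac{\sqrt{t_0}/a(\beta_c)}{N_\tau},
% evaluated at the critical couplings \beta_c(N_\tau) and extrapolated to a -> 0.
```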

Relevance: 30.00%

Abstract:

Taking Carnap’s classic exposition as a starting point, this paper develops a pragmatic account of the method of explication, defends it against a range of challenges and proposes a detailed recipe for the practice of explicating. It is then argued that confusions are involved in characterizing explications as definitions, and in advocating precising definitions as an alternative to explications. Explication is better characterized as conceptual re-engineering for theoretical purposes, in contrast to conceptual re-engineering for other purposes and improving exactness for purely practical reasons. Finally, three limitations which call for further development of the method of explication are discussed.

Relevance: 30.00%

Abstract:

The difficulties of applying the Hartree-Fock method to many-body problems are illustrated by treating helium's electrons up to the point where tractability vanishes. Second, the problem of applying Hartree-Fock methods to the helium atom's electrons, when they are constrained to remain on a sphere, is revisited. The 6-dimensional total energy operator is reduced to a 2-dimensional one, and the application of that 2-dimensional operator in the Hartree-Fock mode is discussed.
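For reference, the closed-shell Hartree-Fock energy for helium (both electrons in one spatial orbital φ, atomic units) has the textbook form below; the paper's reduction to a 2-dimensional operator for the on-sphere problem is not reproduced here.

```latex
E_{\mathrm{HF}}[\varphi]
  \;=\; 2\,\langle \varphi \,|\, \hat{h} \,|\, \varphi \rangle
   \;+\; \iint \frac{|\varphi(\mathbf{r}_1)|^2\,|\varphi(\mathbf{r}_2)|^2}
                    {|\mathbf{r}_1-\mathbf{r}_2|}\, d\mathbf{r}_1\, d\mathbf{r}_2,
\qquad
\hat{h} \;=\; -\tfrac{1}{2}\nabla^2 \;-\; \frac{2}{r}.
```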

Relevance: 30.00%

Abstract:

Purpose. Fluorophotometry is a well-validated method for assessing corneal permeability in human subjects. However, with the growing importance of basic science animal research in ophthalmology, fluorophotometry's use in animals must be further evaluated. The purpose of this study was to evaluate corneal epithelial permeability following desiccating stress using the modified Fluorotron Master™.

Methods. Corneal permeability was evaluated prior to and after subjecting 6-8 week old C57BL/6 mice to experimental dry eye (EDE) for 2 and 5 days (n=9/time point). Untreated mice served as controls. Ten microliters of 0.001% sodium fluorescein (NaF) were instilled topically into each mouse's left eye to create an eye bath and left to permeate for 3 minutes. The eye bath was followed by a generous wash with buffered saline solution (BSS) and alignment with the Fluorotron Master™. Seven corneal scans using the Fluorotron Master were performed during 15 minutes (1st post-wash scans), followed by a second wash using BSS and another set of five corneal scans (2nd post-wash scans) during the next 15 minutes. Corneal permeability was calculated using data from the FM™ Mouse software.

Results. Comparing the post-wash #1 scans and the post-wash #2 scans within each group using a repeated-measures design, there was a statistical difference in corneal fluorescein permeability of the post-wash #1 scans after 5 days (1160.21±108.26 vs. 1000.47±75.56 ng/mL, P<0.016 for the untreated vs. 5-day comparison [0.008]), but not after only 2 days of EDE compared to untreated mice (1115.64±118.94 vs. 1000.47±75.56 ng/mL, P>0.016 for the untreated vs. 2-day comparison [0.050]). There was no statistical difference between the 2-day and 5-day post-wash #1 scans (P=0.299). The post-wash #2 scans demonstrated that EDE caused significant NaF retention at both 2 and 5 days of EDE compared to baseline, untreated controls (1017.92±116.25 and 1015.40±120.68 vs. 528.22±127.85 ng/mL, P<0.05 [0.0001 for both]). There was no statistical difference between the 2-day and 5-day post-wash #2 scans (P=0.503). The comparison between the untreated post-wash #1 and post-wash #2 scans using a paired t-test showed a significant difference between the two sets of scans (P=0.000). There was also a significant difference for the 2-day and 5-day comparisons (P=0.010 and 0.002, respectively).

Conclusion. Desiccating stress increases permeability of the corneal epithelium to NaF and increases NaF retention in the corneal stroma. The Fluorotron Master is a useful and sensitive tool to evaluate corneal permeability in murine dry eye and will be useful for evaluating the effectiveness of dry eye treatments in animal-model drug trials.

Relevance: 30.00%

Abstract:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises the problem of multiple testing and will give false-positive results. Although this problem can be dealt with effectively through several approaches, such as Bonferroni correction, permutation testing and false discovery rates, patterns of the joint effects of several genes, each with a weak effect, might not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset among big data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we took two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method, and then we performed a feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed a chi-square test to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of a small subset with one SNP, two SNPs or a 3-SNP subset based on the best 100 composite 2-SNPs can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion to evaluate the classification ability of a function, can be used on imbalanced data without modifying the original dataset, in contrast to classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to traditional LDA in this study.

From our results we can see that the best test HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be not less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing disease among cases can reach 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study through the chi-square test shows that there are no significant SNPs detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. Study results in WTCCC can only detect two significant SNPs that are associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or an efficient computing system, neither of which could be accomplished in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
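A minimal sketch of the HMSS criterion described above (the harmonic mean of sensitivity and specificity, used here as a class-imbalance-robust alternative to raw accuracy) is shown below; the confusion-matrix counts are hypothetical.

```python
def hmss(sensitivity: float, specificity: float) -> float:
    """Harmonic mean of sensitivity and specificity (HMSS)."""
    if sensitivity + specificity == 0:
        return 0.0
    return 2.0 * sensitivity * specificity / (sensitivity + specificity)

# Hypothetical confusion-matrix counts for one candidate SNP subset
tp, fn, tn, fp = 42, 18, 120, 30
sens = tp / (tp + fn)          # 0.70
spec = tn / (tn + fp)          # 0.80
print(hmss(sens, spec))        # ~0.747: balances both error types, unlike raw accuracy
```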

Relevance: 30.00%

Abstract:

Background: Research into methods for recovery from exercise-induced fatigue is a popular topic in sports medicine, kinesiology and physical therapy. However, both the quantity and quality of studies are limited, and a clear solution for recovery is lacking. An analysis of the statistical methods in the existing literature on performance recovery can enhance the quality of research and provide some guidance for future studies. Methods: A literature review was performed using the SCOPUS, SPORTDiscus, MEDLINE, CINAHL, Cochrane Library and Science Citation Index Expanded databases to extract studies on performance recovery from exercise in human beings. Original studies and their statistical analyses for recovery methods, including active recovery, cryotherapy/contrast therapy, massage therapy, diet/ergogenics, and rehydration, were examined. Results: The review produces a Research Design and Statistical Method Analysis Summary. Conclusion: Research design and statistical methods can be improved by following the guidance of the Research Design and Statistical Method Analysis Summary. This summary table lists potential issues and suggested solutions, such as sample size calculation, consideration of sport-specific and research design issues, selection of populations and measurement markers, statistical methods for different analytical requirements, equality of variance and normality of data, post hoc analyses, and effect size calculation.

Relevance: 30.00%

Abstract:

This investigation compares two different methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS) and the National Ambulatory Medical Care Survey (NAMCS) as the sources of utilization. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician and drug prescriptions.

The 1995 hospital and physician cost of epilepsy is estimated to be $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost.

Utilization. The utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM (−$79 million) and the exclusion of admissions attributed to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM. The latter suggests NHDS errors in the attribution of an admission to the principal diagnosis.

The $29 million variance in inpatient physician utilization is the result of different per-day-of-care physician visit rates: 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions.

The remaining ambulatory resource utilization variance is $7 million. Of this amount, $22 million is the result of an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight.

Unit cost. The resource cost variation is $200 million: inpatient is $22 million and ambulatory is $178 million. The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM.

The ambulatory cost variance is $178 million, composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million. Both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation.

Conclusion. Both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important additional information required to precisely estimate the cost of an illness is absent.

The PBMC&BM is a superior method for identifying resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited due to the small number of patients that are followed.
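The reported differences decompose arithmetically as follows (all figures taken from the abstract, in millions of 1995 dollars):

```latex
\begin{aligned}
1{,}058 - 722 &= 336 \;=\; \underbrace{136}_{\text{utilization}} + \underbrace{200}_{\text{unit cost}},\\
136 &= \underbrace{129}_{\text{inpatient}} + \underbrace{7}_{\text{ambulatory}},
  \qquad 129 = \underbrace{100}_{\text{hospital}} + \underbrace{29}_{\text{physician}},\\
200 &= \underbrace{22}_{\text{inpatient}} + \underbrace{178}_{\text{ambulatory}},
  \qquad 178 = \underbrace{97}_{\text{physician visits}} + \underbrace{81}_{\text{ancillaries}}.
\end{aligned}
```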

Relevance: 30.00%

Abstract:

The normal boiling point is a fundamental thermo-physical property, which is important in describing the transition between the vapor and liquid phases. A reliable method for predicting it is of great importance, especially for compounds with no experimental data available. In this work, an improved second-order group contribution method for determining the normal boiling point of organic compounds was developed using experimental data for 632 organic compounds; it is based on the Joback first-order functional groups, with some modifications and additional functional groups. It can distinguish most structural isomers and stereoisomers, including the structural, cis- and trans-isomers of organic compounds. First- and second-order contributions are given for hydrocarbons and hydrocarbon derivatives containing carbon, hydrogen, oxygen, nitrogen, sulfur, fluorine, chlorine and bromine atoms. The fminsearch routine from MATLAB is used in this study to select an optimal collection of functional groups (65 functional groups) and subsequently to develop the model; this is a direct search method that uses the simplex search method of Lagarias et al. The results of the new method are compared with several currently used methods and are shown to be far more accurate and reliable. The average absolute deviation of the normal boiling point predictions for the 632 organic compounds is 4.4350 K, and the average absolute relative deviation is 1.1047%, which is of adequate accuracy for many practical applications.
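A minimal sketch of a first-order group-contribution estimate, in the spirit of the original Joback method that this work modifies, is shown below. The constant and the two group values are the commonly tabulated Joback numbers (approximate); the paper's refitted first- and second-order contributions and its 65-group set are not reproduced here.

```python
JOBACK_TB = {          # K per group occurrence (first-order Joback values, approximate)
    "-CH3": 23.58,
    ">CH2": 22.88,
}

def boiling_point_estimate(groups: dict[str, int], base: float = 198.2) -> float:
    """Normal boiling point estimate Tb = base + sum(n_i * dTb_i), in kelvin."""
    return base + sum(n * JOBACK_TB[g] for g, n in groups.items())

# n-octane: 2 x -CH3, 6 x >CH2
tb = boiling_point_estimate({"-CH3": 2, ">CH2": 6})
print(f"{tb:.1f} K")   # ~382.6 K vs. the experimental ~398.8 K -- the kind of error
                       # that refitting and second-order corrections aim to reduce
```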

Relevance: 30.00%

Abstract:

This paper assesses the along-strike variation of active bedrock fault scarps using long-range terrestrial laser scanning (t-LiDAR) data in order to determine the distribution behaviour of scarp height and subsequently calculate long-term throw rates. Five faults on Crete which display spectacular limestone fault scarps have been studied using high-resolution digital elevation model (HRDEM) data. We scanned several hundred square metres of the fault system, including the footwall, fault scarp and hanging wall of the investigated fault segment. The vertical displacement and the dip of the scarp were extracted every metre along the strike of the detected fault segment based on the processed HRDEM. The scarp variability was analysed using statistical and morphological methods. The analysis was done in a geographical information system (GIS) environment. Results show a normal distribution for the scanned fault scarp's vertical displacement. Based on this, the mean value of height was chosen to define the authentic vertical displacement. Consequently, the scarp can be divided into areas above, below and within the range of the mean (within one standard deviation), allowing the modifications of vertical displacement to be quantified. The fault segment can therefore be subdivided into areas which are influenced by external modification such as erosion and sedimentation processes. Moreover, to describe and measure the variability of vertical displacement along the strike of the fault, the semi-variance was calculated with the variogram method. This method is used to determine how much influence the external processes have had on the vertical displacement. By combining morphological and statistical results, the fault can be subdivided into areas with high external influences and areas with authentic fault scarps, which have little or no external influence. This subdivision is necessary for long-term throw-rate calculations, because without it the calculated rates would be misleading and the activity of a fault would be incorrectly assessed, with significant implications for seismic hazard assessment, since fault slip-rate data govern earthquake recurrence. Furthermore, by using this workflow, areas with minimal external influences can be determined, not only for throw-rate calculations, but also for selecting sample sites for absolute dating techniques such as cosmogenic nuclide dating. The main outcomes of this study include: i) there is no direct correlation between the fault's mean vertical displacement and dip (R² less than 0.31); ii) without subdividing the scanned scarp into areas with differing amounts of external influence, the along-strike variability of vertical displacement is ±35%; iii) when the scanned scarp is subdivided, the variation of the vertical displacement of the authentic scarp (exposed by earthquakes only) is in a range of about ±6% (the exact range varies from fault to fault, between 7 and 12%); iv) the calculated long-term throw rates (since 13 ka) for four scarps in Crete, using the authentic vertical displacement, are 0.35 ± 0.04 mm/yr at Kastelli 1, 0.31 ± 0.01 mm/yr at Kastelli 2, 0.85 ± 0.06 mm/yr at the Asomatos fault (Sellia) and 0.55 ± 0.05 mm/yr at the Lastros fault.
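A minimal sketch of the long-term throw-rate calculation implied above (rate = authentic vertical displacement / scarp exposure age, with simple first-order uncertainty propagation) is given below. The displacement and age values are hypothetical, chosen only to land in the same range as the reported Kastelli 1 rate; they are not the paper's measurements.

```python
def throw_rate(displacement_m: float, displacement_err_m: float,
               age_ka: float, age_err_ka: float) -> tuple[float, float]:
    """Return (rate, error) in mm/yr for a scarp exposed over a given age in ka."""
    rate = displacement_m * 1000.0 / (age_ka * 1000.0)      # m/ka is numerically mm/yr
    rel_err = ((displacement_err_m / displacement_m) ** 2 +
               (age_err_ka / age_ka) ** 2) ** 0.5            # quadrature sum of relative errors
    return rate, rate * rel_err

# e.g. a 4.6 +/- 0.4 m authentic scarp exposed since ~13 +/- 1 ka
print(throw_rate(4.6, 0.4, 13.0, 1.0))   # ~(0.35 mm/yr, +/- 0.04 mm/yr)
```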