987 results for "Benchmark results"
Abstract:
Previous authors have suggested a higher likelihood for industry-sponsored (IS) studies to have positive outcomes than non-IS studies, though the influence of publication bias was believed to be a likely confounder. We attempted to control for the latter using a prepublication database to compare the primary outcome of recent trials based on sponsorship. We used the "advanced search" feature of the clinicaltrials.gov website to identify recently completed phase III studies involving the implementation of a pharmaceutical agent or device for which primary data were available. Studies were categorized as either National Institutes of Health (NIH) sponsored or IS. Results were labeled "favorable" if they favored the intervention under investigation or "unfavorable" if the intervention fared worse than standard medical treatment. We also performed an independent literature search to identify cardiovascular trials as a case example and again categorized them into IS versus NIH sponsored. A total of 226 studies sponsored by NIH were found. When these were compared with the latest 226 IS studies, IS studies were almost 4 times more likely to report a positive outcome (odds ratio [OR] 3.90, 95% confidence interval [CI] 2.6087 to 5.9680, p <0.0001). As a case example of a specialty, we also identified 25 NIH-sponsored and 215 IS cardiovascular trials, with most focusing on hypertension therapy (31.6%) and anticoagulation (17.9%). IS studies were 7 times more likely to report favorable outcomes (OR 7.54, 95% CI 2.19 to 25.94, p = 0.0014). They were also considerably less likely to report unfavorable outcomes (OR 0.11, 95% CI 0.04 to 0.26, p <0.0001). In conclusion, the outcomes of large clinical studies, especially cardiovascular trials, differ considerably on the basis of their funding source, and publication bias appears to have limited influence on these findings.
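For readers who want to reproduce the statistic, the short sketch below shows how an odds ratio and its 95% confidence interval can be computed from a 2x2 sponsorship-by-outcome table; the counts used here are hypothetical placeholders, not the study's data.

import math

# Hypothetical 2x2 table (placeholder counts, NOT the study's data):
# rows = sponsorship (IS, NIH), columns = outcome (favorable, unfavorable)
a, b = 180, 46    # industry-sponsored: favorable, unfavorable
c, d = 110, 116   # NIH-sponsored:      favorable, unfavorable

odds_ratio = (a * d) / (b * c)

# 95% CI from the standard error of the log odds ratio (Woolf method)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")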
Abstract:
X-ray mammography has been the gold standard for breast imaging for decades, despite the significant limitations posed by two-dimensional (2D) image acquisition. Difficulty in diagnosing lesions close to the chest wall and axilla, a high degree of structural overlap, and patient discomfort due to compression are only some of these limitations. To overcome these drawbacks, three-dimensional (3D) breast imaging modalities have been developed, including dual-modality single photon emission computed tomography (SPECT) and computed tomography (CT) systems. This thesis focuses on the development and integration of the next generation of such a device for dedicated breast imaging. The goals of this dissertation work are to: [1] understand and characterize any effects of fully 3D trajectories on reconstructed image scatter correction, absorbed dose, and Hounsfield Unit accuracy, and [2] design, develop, and implement the fully flexible, third-generation hybrid SPECT-CT system, in which each subsystem is capable of traversing complex 3D orbits about a pendant breast volume without interference from the other. Such a system would overcome artifacts resulting from incompletely sampled divergent cone beam imaging schemes and allow imaging closer to the chest wall, which other systems currently under research and development elsewhere cannot achieve.
The dependence of x-ray scatter radiation on object shape, size, material composition, and the CT acquisition trajectory was investigated with a well-established beam stop array (BSA) scatter correction method. While the 2D scatter-to-primary ratio (SPR) was the main metric used to characterize total system scatter, a new metric called 'normalized scatter contribution' was developed to compare the results of scatter correction on 3D reconstructed volumes. Scatter estimation studies were undertaken with a sinusoidal saddle (±15° polar tilt) orbit and a traditional circular (AZOR) orbit. Clinical studies acquiring data for scatter correction were used to evaluate the 2D SPR on a small set of patients scanned with the AZOR orbit. Clinical SPR results showed a clear dependence of scatter on breast composition and glandular tissue distribution, and were otherwise consistent with the overall phantom-based size and density measurements. Additionally, SPR dependence on the acquisition trajectory was observed, with 2D scatter increasing as the polar tilt angle of the system increased.
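As an illustration of the beam-stop-array idea described above, the following minimal sketch (synthetic images, assumed stop positions, and a crude zeroth-order interpolation; not the thesis' implementation) estimates a 2D scatter-to-primary ratio from an open-field projection and a BSA projection:

import numpy as np

# Synthetic placeholder data (not from the thesis): a flat primary fluence plus a
# smooth scatter background, and the pixel positions of the beam-stop shadows.
rng = np.random.default_rng(1)
primary = rng.normal(1000.0, 10.0, size=(256, 256))
scatter = np.full((256, 256), 150.0)
open_field = primary + scatter                      # projection without the BSA
stop_centers = [(64, 64), (64, 192), (192, 64), (192, 192)]

# Behind each beam stop the primary beam is blocked, so the measured signal there
# is (ideally) scatter only; a simple average acts as a zeroth-order interpolation.
scatter_samples = np.array([scatter[r, c] + rng.normal(0.0, 5.0) for r, c in stop_centers])
scatter_estimate = scatter_samples.mean()

# Primary is the open-field signal minus the interpolated scatter; the 2D SPR follows.
primary_estimate = open_field.mean() - scatter_estimate
spr = scatter_estimate / primary_estimate
print(f"Estimated 2D scatter-to-primary ratio: {spr:.2f}")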
The dose delivered by any imaging system is of primary importance from the patient's point of view, and therefore trajectory-related differences in the dose distribution in a target volume were evaluated. Monte Carlo simulations as well as physical measurements using radiochromic film were undertaken for the saddle and AZOR orbits. Results illustrated that both orbits deliver a comparable dose to the target volume and differ only slightly in its distribution within the volume. Simulations and measurements showed similar results, and all measured dose values were within the 6 mGy dose limit of standard screening mammography, which is used as a benchmark for dose comparisons.
Hounsfield Units (HU) are used clinically to differentiate tissue types in a reconstructed CT image, and therefore the HU accuracy of a system is very important, especially when using non-traditional trajectories. Uniform phantoms filled with various uniform-density fluids were used to investigate differences in HU accuracy between the saddle and AZOR orbits. Results illustrate the considerably better performance of the saddle orbit, especially close to the chest and nipple regions of what would clinically be a pendant breast volume. The AZOR orbit causes shading artifacts near the nipple, due to insufficient sampling, rendering a major portion of the scanned phantom unusable, whereas the saddle orbit performs exceptionally well and provides a tighter distribution of HU values in reconstructed volumes.
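For context, Hounsfield Units follow the standard CT definition below (a generic textbook definition quoted only for orientation, not a result of the thesis), in which water maps to 0 HU and air to approximately -1000 HU:

\mathrm{HU} = 1000 \times \frac{\mu - \mu_{\text{water}}}{\mu_{\text{water}} - \mu_{\text{air}}}

where \mu is the reconstructed linear attenuation coefficient of the voxel.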
Finally, the third generation, fully-suspended SPECT-CT system was designed in and developed in our lab. A novel mechanical method using a linear motor was developed for tilting the CT system. A new x-ray source and a custom made 40 x 30 cm2 detector were integrated on to this system. The SPECT system was nested, in the center of the gantry, orthogonal to the CT source-detector pair. The SPECT system tilts on a goniometer, and the newly developed CT tilting mechanism allows ±15° maximum polar tilting of the CT system. The entire gantry is mounted on a rotation stage, allowing complex arbitrary trajectories for each system, without interference from the other, while having a common field of view. This hybrid system shows potential to be used clinically as a diagnostic tool for dedicated breast imaging.
Abstract:
OBJECTIVE: To assess potential diagnostic and practice barriers to successful management of massive postpartum hemorrhage (PPH), emphasizing recognition and management of contributing coagulation disorders. STUDY DESIGN: A quantitative survey was conducted to assess practice patterns of US obstetrician-gynecologists in managing massive PPH, including assessment of coagulation. RESULTS: Nearly all (98%) of the 50 obstetrician-gynecologists participating in the survey reported having encountered at least one patient with "massive" PPH in the past 5 years. Approximately half (52%) reported having previously discovered an underlying bleeding disorder in a patient with PPH, with disseminated intravascular coagulation (88%, n=23/26) being identified more often than von Willebrand disease (73%, n=19/26). All reported having used methylergonovine and packed red blood cells in managing massive PPH, while 90% reported performing a hysterectomy. A drop in blood pressure and ongoing visible bleeding were the most commonly accepted indications for rechecking a "stat" complete blood count and coagulation studies, respectively, in patients with PPH; however, 4% of respondents reported that they would not routinely order coagulation studies. Forty-two percent reported having never consulted a hematologist for massive PPH. CONCLUSION: The survey findings highlight potential areas for improved practice in managing massive PPH, including earlier and more consistent assessment, monitoring of coagulation studies, and consultation with a hematologist.
Abstract:
CONCLUSION: Radiation dose reduction, while preserving image quality, could easily be implemented with this approach. Furthermore, the availability of a dosimetric data archive provides immediate feedback on the implemented optimization strategies. Background: JCI Standards and European legislation (EURATOM 59/2013) require the implementation of patient radiation protection programs in diagnostic radiology. The aim of this study is to demonstrate the possibility of reducing patients' radiation exposure without decreasing image quality, through a multidisciplinary team (MT) that analyzes dosimetric data from diagnostic examinations. Evaluation: Data from CT examinations performed with two different scanners (Siemens Definition™ and GE LightSpeed Ultra™) between November and December 2013 are considered. The CT scanners are configured to automatically send images to the DoseWatch© software, which stores output parameters (e.g., kVp, mAs, pitch) and exposure data (e.g., CTDIvol, DLP, SSDE). Data are analyzed and discussed by an MT composed of medical physicists and radiologists to identify protocols showing critical dosimetric values and then suggest possible improvement actions to be implemented. Furthermore, the large amount of available data allows monitoring of the diagnostic protocols currently in use and identification of different statistical populations for each of them. Discussion: We identified critical values of average CTDIvol for head and facial bones examinations (61.8 mGy over 151 scans and 61.6 mGy over 72 scans, respectively), performed with the GE LightSpeed™ CT. Statistical analysis allowed us to identify two different populations for head scans, one of which accounted for only 10% of the total number of scans and corresponded to lower exposure values. The MT adopted this protocol as the standard. Moreover, constant monitoring of output parameters allowed us to identify unusual values in facial bones exams, due to changes made during maintenance service, which the team promptly flagged for correction. This resulted in a substantial dose saving in average CTDIvol values of approximately 15% and 50% for head and facial bones exams, respectively. Diagnostic image quality was deemed suitable for clinical use by the radiologists.
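As a rough illustration of this kind of dose-registry review (hypothetical column names, values, and local reference levels; not the actual DoseWatch export format or the study's data), a per-protocol summary and flagging step could look like the following:

import pandas as pd

# Hypothetical dose-registry export: one row per CT scan (placeholder values)
records = pd.DataFrame({
    "protocol": ["head", "head", "head", "facial_bones", "facial_bones"],
    "ctdi_vol_mGy": [62.1, 60.9, 34.5, 61.0, 62.3],
})

# Per-protocol summary statistics, as a multidisciplinary team might review them
summary = records.groupby("protocol")["ctdi_vol_mGy"].agg(["count", "mean", "std"])
print(summary)

# Flag protocols whose average CTDIvol exceeds a locally agreed reference level
reference_level_mGy = {"head": 60.0, "facial_bones": 35.0}
for protocol, row in summary.iterrows():
    if row["mean"] > reference_level_mGy.get(protocol, float("inf")):
        print(f"Review suggested: {protocol} mean CTDIvol {row['mean']:.1f} mGy")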
Abstract:
Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials, which are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally derived quantities are valuable for benchmarking first-principles theoretical predictions of harmonic and non-harmonic thermal behaviors using perturbation theory, ab initio molecular dynamics, or Monte Carlo simulations. We illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperature. Results agree well with previous data in the literature and provide an efficient approach to estimate anharmonic effects in materials.
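For reference, the harmonic part of such an analysis is typically evaluated from the measured phonon density of states g(ε) (normalized to unity per atom) using the standard expression below; this textbook relation is quoted only for orientation and is not necessarily the exact form derived in the paper:

S_{\text{harm}}(T) = 3 k_B \int_0^{\infty} g(\varepsilon)\,\big[(n_\varepsilon + 1)\ln(n_\varepsilon + 1) - n_\varepsilon \ln n_\varepsilon\big]\, d\varepsilon,
\qquad
n_\varepsilon = \frac{1}{e^{\varepsilon / k_B T} - 1}.

The dilational and anharmonic contributions can then be isolated by comparing this harmonic baseline with quantities obtained from the thermal expansion coefficient and the total thermodynamic functions.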
Abstract:
The survey was made available online to library faculty, staff, and student workers. Participation in the survey was completely voluntary, and each individual question was entirely optional. In accordance with UMD policy, responses were treated as confidential. Categories with fewer than five responses were considered identifiable under U.S. Department of Education guidelines and were not included in this report. Those who participated in the survey represent a significant portion of the Libraries' community.
Abstract:
The neutron multidetector DéMoN has been used to investigate the symmetric splitting dynamics in the reactions 58,64Ni + 208Pb with excitation energies ranging from 65 to 186 MeV for the composite system. An analysis based on the new backtracing technique has been applied to the neutron data to determine the two-dimensional correlations between the initial thermal energy of the parent composite system (E_th^CN) and the total neutron multiplicity (ν_tot), and between the pre- and post-scission neutron multiplicities (ν_pre and ν_post, respectively). The shape of the ν_pre distribution indicates the possible coexistence of fast-fission and fusion-fission for the system 58Ni + 208Pb (E_beam = 8.86 A MeV). The analysis of the neutron multiplicities in the framework of the combined dynamical statistical model (CDSM) gives a reduced friction coefficient β = 23 (+25/−12) × 10^21 s^-1, above the one-body dissipation limit. The corresponding fission time is τ_f = 40 (+46/−20) × 10^-21 s. © 1999 Elsevier Science B.V. All rights reserved.
Abstract:
This work is concerned with the development of a numerical scheme capable of producing accurate simulations of sound propagation in the presence of a mean flow field. The method is based on the concept of variable decomposition, which leads to two separate sets of equations. These equations are the linearised Euler equations and the Reynolds-averaged Navier–Stokes equations. This paper concentrates on the development of numerical schemes for the linearised Euler equations that leads to a computational aeroacoustics (CAA) code. The resulting CAA code is a non-diffusive, time- and space-staggered finite volume code for the acoustic perturbation, and it is validated against analytic results for pure 1D sound propagation and 2D benchmark problems involving sound scattering from a cylindrical obstacle. Predictions are also given for the case of prescribed source sound propagation in a laminar boundary layer as an illustration of the effects of mean convection. Copyright © 1999 John Wiley & Sons, Ltd.
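For orientation, one standard form of the linearised Euler equations is given below for the simplest case of a uniform mean flow (constant \bar{\rho}, \bar{\mathbf{u}}, \bar{p}); the variable decomposition used in the paper, about a RANS mean flow, carries additional mean-gradient terms:

\frac{\partial \rho'}{\partial t} + \bar{\mathbf{u}}\cdot\nabla\rho' + \bar{\rho}\,\nabla\cdot\mathbf{u}' = 0,
\qquad
\frac{\partial \mathbf{u}'}{\partial t} + (\bar{\mathbf{u}}\cdot\nabla)\mathbf{u}' + \frac{1}{\bar{\rho}}\nabla p' = 0,
\qquad
\frac{\partial p'}{\partial t} + \bar{\mathbf{u}}\cdot\nabla p' + \gamma\bar{p}\,\nabla\cdot\mathbf{u}' = 0,

where the primed quantities are the acoustic perturbations and \gamma is the ratio of specific heats.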
Abstract:
In this paper the results obtained from the parallelisation of some 3D industrial electromagnetic Finite Element codes within the ESPRIT Europort 2 project PARTEL are presented. The basic guidelines for the parallelisation procedure, based on the Bulk Synchronous Parallel approach, are presented and the encouraging results obtained in terms of speed-up on some selected test cases of practical design significance are outlined and discussed.
Abstract:
Computational modelling of dynamic fluid–structure interaction (DFSI) is a considerable challenge. Our approach to this class of problems involves the use of a single software framework for all the phenomena involved, employing finite volume methods on unstructured meshes in three dimensions. This method enables time- and space-accurate calculations in a consistent manner. One key application of DFSI simulation is the analysis of the onset of flutter in aircraft wings, where the work of Yates et al. [Measured and Calculated Subsonic and Transonic Flutter Characteristics of a 45° Sweptback Wing Planform in Air and Freon-12 in the Langley Transonic Dynamics Tunnel. NASA Technical Note D-1616, 1963] on the AGARD 445.6 wing planform still provides the most comprehensive benchmark data available. This paper presents the results of a significant effort to model the onset of flutter for the AGARD 445.6 wing planform geometry. A series of key issues needs to be addressed for this computational approach:
• The advantage of using a single mesh, in order to eliminate numerical problems when applying boundary conditions at the fluid–structure interface, is counteracted by the challenge of generating a suitably high-quality mesh in both the fluid and structural domains.
• The computational effort for this DFSI procedure, in terms of run time and memory requirements, is very significant. Practical simulations require even finer meshes and shorter time steps, requiring parallel implementation for operation on large, high-performance parallel systems.
• The consistency and completeness of the AGARD data in the public domain is inadequate for use in the validation of DFSI codes when predicting the onset of flutter.
Abstract:
An unstructured cell-centred finite volume method for modelling viscoelastic flow is presented. The method is applied to the flow through a planar channel and the 4:1 planar contraction for creeping flow of an Oldroyd-B fluid. The results are presented for a range of Weissenberg numbers. In the case of the planar channel, results are compared with analytical solutions. For the 4:1 planar contraction benchmark problem the convection terms in the constitutive equations are approximated using both first- and second-order differencing schemes to compare the techniques, and the effect of mesh refinement on the solution is investigated. This is the first time that a fully unstructured, cell-centred finite volume technique has been used to model the Oldroyd-B fluid for the test cases presented in this paper.
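For reference, the Oldroyd-B constitutive model referred to above is commonly written in the standard form below, with relaxation time \lambda_1, retardation time \lambda_2, total zero-shear viscosity \eta_0, rate-of-strain tensor \dot{\boldsymbol{\gamma}} = \mathbf{L} + \mathbf{L}^{\mathsf T}, and \mathbf{L} = \nabla\mathbf{u} with L_{ij} = \partial u_i/\partial x_j; the paper's non-dimensionalisation may differ, and the Weissenberg number is \lambda_1 multiplied by a characteristic shear rate:

\boldsymbol{\tau} + \lambda_1\,\overset{\nabla}{\boldsymbol{\tau}} = \eta_0\left(\dot{\boldsymbol{\gamma}} + \lambda_2\,\overset{\nabla}{\dot{\boldsymbol{\gamma}}}\right),
\qquad
\overset{\nabla}{\mathbf{A}} = \frac{\partial \mathbf{A}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{A} - \mathbf{L}\cdot\mathbf{A} - \mathbf{A}\cdot\mathbf{L}^{\mathsf T},

where \overset{\nabla}{(\cdot)} denotes the upper-convected derivative.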
Abstract:
A numerical scheme for coupling temperature and concentration fields in a general solidification model is presented. A key feature of this scheme is the explicit time stepping used in solving the governing thermal and solute conservation equations. This explicit approach results in a local, point-by-point coupling scheme for the temperature and concentration and avoids the multi-level iteration required by implicit time-stepping schemes. The proposed scheme is validated by predicting the concentration field in a benchmark solidification problem. Results compare well with an available similarity solution. The simplicity of the proposed explicit scheme allows for the incorporation of complex microscale models into a general solidification model. This is demonstrated by investigating the role of dendrite coarsening on the concentration field in the solidification benchmark problem.
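The sketch below illustrates, in one dimension and with a made-up lever-rule micro-model and placeholder material parameters (not the scheme, micro-model, or data of the paper), why an explicit update couples the temperature and concentration fields point by point without any multi-level iteration:

import numpy as np

# Illustrative 1D explicit update for coupled temperature T and concentration C.
nx, dx, dt = 101, 1e-4, 1e-4
alpha, D = 1e-5, 1e-9                    # thermal / solute diffusivities (m^2/s)
Tm, m_liq, k_part = 933.0, 300.0, 0.14   # placeholder melting point, liquidus slope, partition coefficient
L_over_cp = 95.0                         # latent heat / specific heat (K)

T = np.full(nx, 930.0); T[0] = 800.0     # initial melt temperature with a chilled end
C = np.full(nx, 0.02)                    # initial solute concentration (weight fraction)
fs = np.zeros(nx)                        # solid fraction

def laplacian(a):
    lap = np.zeros_like(a)
    lap[1:-1] = (a[2:] - 2*a[1:-1] + a[:-2]) / dx**2
    return lap

for step in range(2000):
    # Explicit diffusion updates: each node uses only already-known values
    T_star = T + dt * alpha * laplacian(T)
    C_new = C + dt * D * laplacian(C)
    # Local, point-by-point coupling: lever-rule solid fraction from nodal T and C
    T_liq = Tm - m_liq * C_new
    T_sol = Tm - m_liq * C_new / k_part
    fs_new = np.clip((T_liq - T_star) / np.maximum(T_liq - T_sol, 1e-9), 0.0, 1.0)
    T = T_star + L_over_cp * (fs_new - fs)   # latent-heat release, node by node
    C, fs = C_new, fs_new
    T[0] = 800.0                             # hold the chill temperature

print(f"Solid fraction at the chilled end after {step + 1} steps: {fs[0]:.2f}")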
Abstract:
Software metrics are the key tool in software quality management. In this paper, we propose to use support vector machines for regression applied to software metrics to predict software quality. In experiments we compare this method with other regression techniques such as Multivariate Linear Regression, Conjunctive Rule, and Locally Weighted Regression. Results on the benchmark dataset MIS, using mean absolute error and correlation coefficient as regression performance measures, indicate that support vector machine regression is a promising technique for software quality prediction. In addition, our investigation of PCA-based metrics extraction shows that using the first few Principal Components (PC) we can still obtain relatively good performance.
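A minimal sketch of this kind of pipeline is shown below, using scikit-learn with synthetic stand-in data; the MIS dataset, the exact preprocessing, and the comparison methods from the paper are not reproduced here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for a software-metrics table (rows = modules,
# columns = metrics such as LOC, cyclomatic complexity, ...):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))
y = X[:, 0] * 2.0 + X[:, 1] + rng.normal(scale=0.5, size=200)  # e.g. change counts

# SVR on a few principal components, mirroring the PCA-based extraction idea
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])

mae = mean_absolute_error(y[150:], pred)
corr = np.corrcoef(y[150:], pred)[0, 1]
print(f"MAE = {mae:.3f}, correlation coefficient = {corr:.3f}")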