948 results for diagnostic technique and procedure


Relevance:

100.00%

Publisher:

Abstract:

Background. We describe the development, reliability and applications of the Diagnostic Interview for Psychoses (DIP), a comprehensive interview schedule for psychotic disorders. Method. The DIP is intended for use by interviewers with a clinical background and was designed to occupy the middle ground between fully structured, lay-administered schedules and semi-structured, psychiatrist-administered interviews. It encompasses four main domains: (a) demographic data; (b) social functioning and disability; (c) a diagnostic module comprising symptoms, signs and past-history ratings; and (d) patterns of service utilization and patient-perceived need for services. It generates diagnoses according to several sets of criteria using the OPCRIT computerized diagnostic algorithm and can be administered either on-screen or in a hard-copy format. Results. The DIP proved easy to use and was well accepted in the field. For the diagnostic module, inter-rater reliability was assessed on 20 cases rated by 24 clinicians: good reliability was demonstrated for both ICD-10 and DSM-III-R diagnoses. Seven cases were interviewed 2-11 weeks apart to determine test-retest reliability, with pairwise agreement of 0.8-1.0 for most items. Diagnostic validity was assessed in 10 cases, interviewed with the DIP and using the SCAN as 'gold standard': in nine cases clinical diagnoses were in agreement. Conclusions. The DIP is suitable for use in large-scale epidemiological studies of psychotic disorders, as well as in smaller studies where time is at a premium. While the diagnostic module stands on its own, the full DIP schedule, covering demography, social functioning and service utilization, makes it a versatile multi-purpose tool.
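
As a rough illustration of the agreement statistics quoted above, the sketch below computes pairwise agreement and Cohen's kappa for two rating occasions; the item ratings are hypothetical, and the DIP's actual items and scoring are defined by the schedule itself.

```python
# Minimal sketch: test-retest agreement statistics of the kind reported
# above. The item ratings below are hypothetical; the DIP's actual item
# set and scoring are defined by the schedule itself.
from collections import Counter

def pairwise_agreement(test, retest):
    """Proportion of items rated identically on both occasions."""
    assert len(test) == len(retest)
    return sum(a == b for a, b in zip(test, retest)) / len(test)

def cohens_kappa(test, retest):
    """Chance-corrected agreement between two rating occasions."""
    n = len(test)
    po = pairwise_agreement(test, retest)
    c1, c2 = Counter(test), Counter(retest)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical presence/absence ratings for 10 symptom items
test   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
retest = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(pairwise_agreement(test, retest))       # 0.9
print(round(cohens_kappa(test, retest), 2))   # 0.8
```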

Relevance:

100.00%

Publisher:

Abstract:

We consider the problems of computing the power and exponential moments E[X^s] and E[e^{tX}] of square Gaussian random matrices X = A + BWC for positive integer s and real t, where W is a standard normal random vector and A, B, C are appropriately dimensioned constant matrices. We solve the problems by a matrix product scalarization technique and interpret the solutions in system-theoretic terms. The results of the paper are applicable to Bayesian prediction in multivariate autoregressive time series and mean-reverting diffusion processes.
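
The paper's scalarization technique computes these moments exactly; purely as an illustration of the objects involved, a Monte Carlo estimate can be sketched as below, under the assumption (not stated in the abstract) that W has i.i.d. standard normal entries.

```python
# Monte Carlo sketch of the moments E[X^s] and E[exp(tX)] for
# X = A + B W C. Treating W as a matrix of i.i.d. N(0,1) entries is an
# assumption here; the paper's scalarization technique computes these
# moments exactly rather than by sampling.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.eye(2)
C = 0.2 * np.eye(2)
s, t, n_samples = 3, 1.0, 20_000

power_sum = np.zeros((2, 2))
exp_sum = np.zeros((2, 2))
for _ in range(n_samples):
    W = rng.standard_normal((2, 2))
    X = A + B @ W @ C
    power_sum += np.linalg.matrix_power(X, s)
    exp_sum += expm(t * X)

print("E[X^s]  ~\n", power_sum / n_samples)
print("E[e^tX] ~\n", exp_sum / n_samples)
```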

Relevance:

100.00%

Publisher:

Abstract:

Distortion or deprivation of vision during an early 'critical' period of visual development can result in permanent visual impairment, which indicates the need to identify and treat visually at-risk individuals early. A significant difficulty in this respect is that conventional, subjective methods of visual acuity determination are ineffective before approximately three years of age. In laboratory studies, infant visual function has been quantified precisely using objective methods based on visual evoked potentials (VEP), preferential looking (PL) and optokinetic nystagmus (OKN), but clinical assessment of infant vision has presented a particular difficulty. An initial aim of this study was to evaluate the relative clinical merits of the three techniques. Clinical derivatives were devised; the OKN method proved unsuitable, but the PL and VEP methods were evaluated in a pilot study. Most infants participating in the study had known ocular and/or neurological abnormalities, but a few normal infants were included for comparison. The study suggested that the PL method was the more clinically appropriate for the objective assessment of infant acuity.

A study of normal visual development from birth to one year was subsequently conducted. Observations included cycloplegic refraction, ophthalmoscopy and preferential looking visual acuity assessment using horizontally and vertically oriented square-wave gratings. The aims of the work were to investigate the efficiency and sensitivity of the technique and to study possible correlates of visual development. The success rate of the PL method varied with age: 87% of newborns and 98% of infants attending follow-up successfully completed at least one acuity test. Below two months of age, monocular acuities were difficult to secure; infants were most testable around six months. The results produced were similar to published data obtained using the acuity card procedure, and slightly lower than, but comparable with, acuity data derived using extended PL methods. Acuity development was not impaired in infants found to have retinal haemorrhages as newborns. A significant relationship was found between newborn binocular acuity and anisometropia, but not with other refractive findings. No strong or consistent correlations between grating acuity and refraction were found for three-, six- or twelve-month-olds. Improvements in acuity and decreases in levels of hyperopia over the first week of life were suggestive of recovery from minor birth trauma.

The refractive data were analysed separately to investigate the natural history of refraction in normal infants. Most newborns (80%) were hyperopic; significant astigmatism was found in 86% and significant anisometropia in 22%. No significant alteration in spherical equivalent refraction was noted between birth and three months; a significant reduction in hyperopia was evident by six months, and this trend continued until one year. Observations on the astigmatic component of the refractive error revealed a rather erratic series of changes which would be worthy of further investigation, since a repeat refraction study suggested difficulties in obtaining stable measurements in newborns. Astigmatism tended to decrease between birth and three months, increased significantly from three to six months and decreased significantly from six to twelve months. A constant decrease in the degree of anisometropia was evident throughout the first year. These findings have implications for the correction of infantile refractive error.

Relevance:

100.00%

Publisher:

Abstract:

Algae are a potential new biomass source for energy production, but there is limited information on their pyrolysis behaviour and kinetics. The main aim of this thesis is to investigate the pyrolytic behaviour and kinetics of Chlorella vulgaris, a green microalga. Under pyrolysis conditions, this microalga shows capabilities comparable to terrestrial biomass for energy and chemicals production, and evidence from a preliminary pyrolysis run in an intermediate pilot-scale reactor supports the applicability of these microalgae in existing pyrolysis reactors. Thermal decomposition of Chlorella vulgaris occurs over a wide temperature range (200-550°C) through multi-step reactions. To evaluate the kinetic parameters of the pyrolysis process, two approaches, isothermal and non-isothermal experiments, are applied in this work. A newly developed Pyrolysis-Mass Spectrometry (Py-MS) technique shows potential for isothermal measurements, with a short run time and a small sample-size requirement; the equipment and procedure are assessed through kinetic evaluation of the thermal decomposition of polyethylene and lignocellulose-derived materials (cellulose, hemicellulose, and lignin). For the non-isothermal experiments, a Thermogravimetry-Mass Spectrometry (TG-MS) technique is used. Evolved gas analysis provides information on the evolution of volatiles, and these data lead to a multi-component model. The kinetic triplets (apparent activation energy, pre-exponential factor, and apparent reaction order) from the isothermal experiments are 57 kJ/mol, 5.32 (logA, min-1), 1.21-1.45; 9 kJ/mol, 1.75 (logA, min-1), 1.45; and 40 kJ/mol, 3.88 (logA, min-1), 1.45-1.15 for the low, middle and high temperature regions, respectively. The kinetic parameters from the non-isothermal experiments vary with the different fractions in the algal biomass: apparent activation energies range over 73-207 kJ/mol, pre-exponential factors over 5-16 (logA, min-1), and apparent reaction orders over 1.32-2.00. The kinetic procedures reported in this thesis can be applied to other kinds of biomass and algae in future work.
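
As a minimal sketch of how such a kinetic triplet is used, the code below evaluates an n-th order Arrhenius rate law with the low-temperature isothermal values quoted above; the temperature and conversion grid are illustrative choices, not values from the thesis.

```python
# Sketch: evaluating an n-th order Arrhenius rate law
#   d(alpha)/dt = A * exp(-Ea / (R*T)) * (1 - alpha)^n
# using the low-temperature kinetic triplet reported above. The
# temperature chosen is illustrative, not a value from the thesis.
import math

R = 8.314          # gas constant, J/(mol·K)
Ea = 57e3          # apparent activation energy, J/mol
logA = 5.32        # pre-exponential factor, log10(A / min^-1)
n = 1.3            # apparent reaction order (mid-range of 1.21-1.45)

A = 10 ** logA     # min^-1
T = 250 + 273.15   # K, a temperature inside the 200-550 °C window

k = A * math.exp(-Ea / (R * T))          # rate constant, min^-1
for alpha in (0.0, 0.5, 0.9):            # degree of conversion
    rate = k * (1 - alpha) ** n          # d(alpha)/dt, min^-1
    print(f"alpha={alpha:.1f}  rate={rate:.3e} min^-1")
```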

Relevance:

100.00%

Publisher:

Abstract:

Albumin in tears is used as a diagnostic marker of ocular insult and inflammation, but whether its presence in tears is responsive or part of an adaptive reaction remains unresolved. A review of the literature on tear albumin concentration emphasizes that variables such as collection method, stimulus, assay technique, and disease state influence the quoted values to different extents. The influence of assay technique is negligible in comparison to variation in sampling conditions. Ocular disease increases albumin concentrations, but not in a specific manner. The literature review also highlighted that little systematic research has been carried out on the daily cycle of tear albumin levels. In order to remedy this shortcoming, we investigated variations in tear albumin concentration during the waking day. The concentration of albumin in 400 tear samples collected from 13 subjects was assessed at 2-hourly intervals throughout the waking day. The highest daytime albumin concentrations were obtained within 10 minutes of waking, with a mean concentration of 50 ± 22 µg/ml. Albumin levels were at their lowest, but most consistent, 2-6 hours post-waking. This pattern was followed by a progressive increase in albumin concentration during the latter part of the day. Although individual subject-to-subject concentration differences were observed, this distinctive pattern of diurnal variation was found in all subjects. The results presented suggest a regulated, not random, pattern of variation within the period of study. © 2013 Elsevier Inc. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.

Relevance:

100.00%

Publisher:

Abstract:

Significant improvements have been made in estimating gross primary production (GPP), ecosystem respiration (R), and net ecosystem production (NEP) from diel, "free-water" changes in dissolved oxygen (DO). Here we evaluate some of the assumptions and uncertainties that are still embedded in the technique and provide guidelines on how to estimate reliable metabolic rates from high-frequency sonde data. True whole-system estimates are often not obtained because measurements reflect an unknown zone of influence that varies over space and time. A minimum logging frequency of 30 min was sufficient to capture metabolism at the daily time scale; higher sampling frequencies capture additional pattern in the DO data, primarily related to physical mixing. Causes behind the often large daily variability are discussed and evaluated for an oligotrophic and a eutrophic lake. Despite a 3-fold higher day-to-day variability in absolute GPP rates in the eutrophic lake, both lakes required at least 3 sonde days per week for GPP estimates to be within 20% of the weekly average. A sensitivity analysis evaluated uncertainties associated with DO measurements, piston velocity (k), and the assumption that daytime R equals nighttime R. In low-productivity lakes, uncertainty in DO measurements and piston velocity strongly impacts R but has no effect on GPP or NEP. Failing to account for higher R during the day underestimates both R and GPP but has no effect on NEP. Finally, we provide suggestions for future research to improve the technique.
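
A bare-bones sketch of the bookkeeping behind such free-water estimates is given below; the gas-exchange formulation is the standard one-station model, and all numbers are illustrative assumptions rather than values from the study.

```python
# Sketch of free-water metabolism accounting from diel DO data:
#   dDO/dt = NEP + F,   F = k * (DOsat - DO) / z_mix
# At night NEP = -R (GPP = 0); daytime NEP plus R gives GPP.
# All numbers are illustrative assumptions, not values from the study.
import numpy as np

dt = 0.5                                  # 30-min logging interval, h
t = np.arange(0, 24, dt)                  # one sonde day
is_day = (t >= 6) & (t < 18)              # crude photoperiod flag

# Synthetic DO series: photosynthetic rise by day, slow decline from respiration
rng = np.random.default_rng(1)
do = 8 + 1.5 * np.sin((t - 6) / 12 * np.pi) * is_day - 0.02 * t
do += rng.normal(0, 0.02, t.size)         # sonde noise
do_sat = np.full_like(do, 9.0)            # saturation DO, mg/L (assumed constant)

k, z_mix = 0.5, 3.0                       # piston velocity (m/h), mixed depth (m)

flux = k * (do_sat - do) / z_mix          # atmospheric exchange, mg O2/L/h
nep = np.gradient(do, dt) - flux          # volumetric NEP, mg O2/L/h

r_hourly = -nep[~is_day].mean()           # hourly R from nighttime intervals
R = r_hourly * 24                         # daily R, assuming daytime R = nighttime R
GPP = nep[is_day].sum() * dt + r_hourly * is_day.sum() * dt
NEP = GPP - R
print(f"GPP={GPP:.2f}  R={R:.2f}  NEP={NEP:.2f}  (mg O2 L^-1 d^-1)")
```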

Relevance:

100.00%

Publisher:

Abstract:

Background:

Knowing the scope of neurosurgical disease at Mbarara Hospital is critical for infrastructure planning, education, and training. In this study, we aim to evaluate neurosurgical outcomes and identify predictors of mortality in order to support more effective interventions and inform future research efforts at Mbarara Hospital.

Methods:

This is a retrospective chart review including patients of all ages with a neurosurgical disease or injury presenting to Mbarara Regional Referral Hospital (MRRH) between January 2012 and September 2015. Descriptive statistics were presented. Univariate analysis was used to obtain odds ratios for mortality with 95% confidence intervals, and predictors of mortality were determined using a multivariate logistic regression model.

Results:

A total of 1876 charts were reviewed. Of these, 1854 had complete data and were included in the analysis. The overall mortality rate was 12.75%; the mortality rate was 9.72% among patients who underwent a neurosurgical procedure and 13.68% among those who did not. Over 50% of patients were between 19 and 40 years old, and the majority were male (76.10%). The overall median length of stay was 5 days. Of all neurosurgical admissions, 87% were trauma patients. Compared with mild head injury, patients with closed head injury and intracranial hematoma were 5 times (95% CI: 3.77, 8.26) and 2.5 times (95% CI: 1.64, 3.98) more likely to die, respectively. Undergoing a procedure and diagnostic imaging were independent negative predictors of mortality (P < 0.05), while age, ICU admission, and admission GCS were positive predictors of mortality (P < 0.05).
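
For illustration, the arithmetic behind a univariate odds ratio and its 95% confidence interval can be sketched as follows; the 2×2 counts are hypothetical, not figures from this study.

```python
# Sketch: odds ratio and 95% CI from a 2x2 table (died / survived versus
# an exposure). The counts are hypothetical, not figures from this study.
import math

a, b = 60, 340    # exposed:   died, survived
c, d = 25, 575    # unexposed: died, survived

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of ln(OR)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI: {ci_lo:.2f}, {ci_hi:.2f})")
```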

Conclusions:

The majority of hospital admissions were TBI patients, with RTIs being the most common mechanism of injury. Age, ICU admission, admission GCS, diagnostic imaging, and undergoing surgery were independent predictors of mortality. Going forward, further exploration of patient characteristics is necessary to fully describe mortality outcomes and to implement resource-appropriate interventions that ultimately improve morbidity and mortality.

Relevance:

100.00%

Publisher:

Abstract:

Computed tomography (CT) is a valuable technology to the healthcare enterprise as evidenced by the more than 70 million CT exams performed every year. As a result, CT has become the largest contributor to population doses amongst all medical imaging modalities that utilize man-made ionizing radiation. Acknowledging the fact that ionizing radiation poses a health risk, there exists the need to strike a balance between diagnostic benefit and radiation dose. Thus, to ensure that CT scanners are optimally used in the clinic, an understanding and characterization of image quality and radiation dose are essential.

The state of the art in both image quality characterization and radiation dose estimation in CT depends on phantom-based measurements reflective of systems and protocols. For image quality characterization, measurements are performed on inserts embedded in static phantoms and the results are ascribed to clinical CT images. However, the key objective for image quality assessment should be its quantification in clinical images; that is the only characterization of image quality that clinically matters, as it is most directly related to the actual quality of clinical images. Moreover, for dose estimation, phantom-based dose metrics, such as the CT dose index (CTDI) and size-specific dose estimates (SSDE), are measured by the scanner and referenced as indicators of radiation exposure. However, CTDI and SSDE are surrogates for dose, rather than dose per se.
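
As a concrete example of how one such surrogate is constructed, SSDE rescales the phantom-based CTDIvol by a size-dependent conversion factor; the sketch below uses the exponential fit to the conversion-factor table of AAPM Report 204 (32 cm body phantom), whose coefficients should be treated as approximate.

```python
# Sketch: size-specific dose estimate (SSDE) from CTDIvol and patient
# effective diameter, using the exponential fit to the conversion-factor
# table in AAPM Report 204 (32 cm body phantom). Coefficients are quoted
# from that report and should be treated as approximate.
import math

def ssde_body(ctdi_vol_mgy: float, effective_diameter_cm: float) -> float:
    f = 3.704369 * math.exp(-0.03671937 * effective_diameter_cm)
    return f * ctdi_vol_mgy

# A 25 cm effective-diameter patient scanned at CTDIvol = 10 mGy
print(round(ssde_body(10.0, 25.0), 1))  # ~14.8 mGy: above the scanner-reported CTDIvol
```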

Currently there are several software packages that track the CTDI and SSDE associated with individual CT examinations. This is primarily the result of two pressures: regulatory bodies and governments pushing clinics and hospitals to monitor the radiation exposure of individuals in our society, and the personal concerns of patients who are curious about the health risks associated with the ionizing radiation they receive during their diagnostic procedures.

An idea that resonates with clinical imaging physicists is that patients come to the clinic to acquire quality images so they can receive a proper diagnosis, not to be exposed to ionizing radiation. Thus, while it is important to monitor the dose to patients undergoing CT examinations, it is equally important, if not more so, to monitor the image quality of the clinical images generated by the CT scanners throughout the hospital.

The purposes of the work presented in this thesis are threefold: (1) to develop and validate a fully automated technique to measure spatial resolution in clinical CT images, (2) to develop and validate a fully automated technique to measure image contrast in clinical CT images, and (3) to develop a fully automated technique to estimate radiation dose (not surrogates for dose) from a variety of clinical CT protocols.

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE To use a unique multicomponent administrative data set assembled at a large academic teaching hospital to examine the risk of percutaneous blood and body fluid (BBF) exposures occurring in operating rooms. DESIGN A 10-year retrospective cohort design. SETTING A single large academic teaching hospital. PARTICIPANTS All surgical procedures (n=333,073) performed in 2001-2010 as well as 2,113 reported BBF exposures were analyzed. METHODS Crude exposure rates were calculated; Poisson regression was used to analyze risk factors and account for procedure duration. BBF exposures involving suture needles were examined separately from those involving other device types to examine possible differences in risk factors. RESULTS The overall rate of reported BBF exposures was 6.3 per 1,000 surgical procedures (2.9 per 1,000 surgical hours). BBF exposure rates increased with estimated patient blood loss (17.7 exposures per 1,000 procedures with 501-1,000 cc blood loss and 26.4 exposures per 1,000 procedures with >1,000 cc blood loss), number of personnel working in the surgical field during the procedure (34.4 exposures per 1,000 procedures having ≥15 personnel ever in the field), and procedure duration (14.3 exposures per 1,000 procedures lasting 4 to <6 hours, 27.1 exposures per 1,000 procedures lasting ≥6 hours). Regression results showed associations were generally stronger for suture needle-related exposures. CONCLUSIONS Results largely support other studies found in the literature. However, additional research should investigate differences in risk factors for BBF exposures associated with suture needles and those associated with all other device types. Infect. Control Hosp. Epidemiol. 2015;37(1):80-87.
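
A minimal sketch of the modelling approach named in the methods, Poisson regression with procedure duration as an exposure offset, might look like the following; the data are synthetic and the covariate is a stand-in.

```python
# Sketch: Poisson regression of exposure counts with log(procedure-hours)
# as an offset, the approach named in the abstract. All data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
hours = rng.uniform(1, 8, n)                    # procedure duration
blood_loss = rng.integers(0, 2, n)              # 1 if >1,000 cc (synthetic flag)
rate = 0.003 * np.exp(0.9 * blood_loss)         # true exposures per hour
y = rng.poisson(rate * hours)                   # observed exposure counts

X = sm.add_constant(blood_loss.astype(float))
model = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(hours))
res = model.fit()
print(np.exp(res.params))                       # baseline rate and rate ratio
```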

Relevance:

100.00%

Publisher:

Abstract:

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize radiation dose for CT examinations. The key to dose optimization is determining the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics for characterizing the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose-monitoring system and image-based noise-addition software for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.

With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
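
A simplified sketch of the estimation step such a pipeline performs is shown below: match the patient to the closest library phantom, then scale a precomputed organ dose coefficient by the exam's CTDIvol. The phantom entries, coefficients, and matching rule are illustrative stand-ins for the thesis's models.

```python
# Sketch of the organ-dose-monitoring arithmetic: match a patient to the
# closest library phantom, then estimate organ dose as
#   dose_organ = h_organ (mGy/mGy) * CTDIvol (mGy).
# The phantom entries and coefficients are illustrative stand-ins, not
# values from the thesis.
LIBRARY = [
    # (phantom id, effective diameter cm, {organ: dose coefficient})
    ("adult_S", 22.0, {"liver": 1.35, "lungs": 1.20}),
    ("adult_M", 28.0, {"liver": 1.10, "lungs": 0.98}),
    ("adult_L", 34.0, {"liver": 0.88, "lungs": 0.80}),
]

def organ_dose(patient_diameter_cm: float, ctdi_vol_mgy: float) -> dict:
    phantom = min(LIBRARY, key=lambda p: abs(p[1] - patient_diameter_cm))
    return {organ: h * ctdi_vol_mgy for organ, h in phantom[2].items()}

print(organ_dose(27.0, 12.0))   # e.g. {'liver': 13.2, 'lungs': 11.76}
```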

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
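
The noise-addition step in (1) can be sketched under a simple quantum-noise model: scaling dose by a fraction f scales noise standard deviation by 1/sqrt(f), so zero-mean noise of standard deviation sigma0*sqrt(1/f - 1) is injected into the full-dose image. The toy version below uses white Gaussian noise and ignores the CT noise texture that a validated technique must reproduce.

```python
# Minimal sketch of image-based noise addition for simulated dose
# reduction: under a quantum-noise model, scaling dose by a fraction f
# scales noise std by 1/sqrt(f), so the extra noise to inject has
#   sigma_add = sigma0 * sqrt(1/f - 1).
# This toy version uses white Gaussian noise; a validated technique must
# also reproduce the CT noise texture (correlation), ignored here.
import numpy as np

def simulate_reduced_dose(image: np.ndarray, sigma0: float, f: float,
                          seed: int = 0) -> np.ndarray:
    """image: CT slice in HU; sigma0: noise std at full dose; f: dose fraction."""
    sigma_add = sigma0 * np.sqrt(1.0 / f - 1.0)
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, sigma_add, image.shape)

full_dose = np.zeros((64, 64)) + 40.0       # toy uniform soft-tissue region, HU
half_dose = simulate_reduced_dose(full_dose, sigma0=12.0, f=0.5)
print(round(half_dose.std(), 1))             # ~12.0: the added std, sigma0*sqrt(1/f-1)
```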

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.

Relevance:

100.00%

Publisher:

Abstract:

The absence of a rapid, low-cost, and highly sensitive biodetection platform has hindered the implementation of next-generation, cheap, early-stage clinical or home-based point-of-care diagnostics. Label-free optical biosensing with high sensitivity, throughput, compactness, and low cost plays an important role in resolving these diagnostic challenges and pushes the detection limit down to the single molecule. Optical nanostructures, specifically resonant waveguide grating (RWG) and nano-ribbon-cavity-based biodetection, are promising in this context.

The main element of this dissertation is the design, fabrication, and characterization of RWG sensors for different spectral regions (e.g., visible, near-infrared) for use in label-free optical biosensing, and the exploration of different RWG parameters to maximize sensitivity and increase detection accuracy. Design and fabrication of a waveguide-embedded resonant nano-cavity are also studied. Multi-parametric analyses were performed using a customized optical simulator to understand the operational principle of these sensors and, more importantly, the relationship between the physical design parameters and sensor sensitivities. Silicon nitride (SixNy) is a useful waveguide material because of its wide transparency across the whole infrared, visible, and part of the UV spectrum, and its comparatively higher refractive index than the glass substrate. SixNy-based RWGs on glass substrates are designed and fabricated using both electron-beam lithography and low-cost nano-imprint lithography techniques. A chromium-hard-mask-aided nano-fabrication technique is developed for making very high aspect ratio optical nanostructures on glass substrates; an aspect ratio of 10 for very narrow (~60 nm wide) grating lines is achieved, the highest presented so far. The fabricated RWG sensors are characterized for both bulk (183.3 nm/RIU) and surface sensitivity (0.21 nm/nm-layer), and then used for the successful detection of Immunoglobulin G (IgG) antibodies and antigen (~1 µg/ml), both in buffer and in serum.

Widely used optical biosensors such as surface plasmon resonance and optical microcavities are limited in separating the bulk response from surface binding events, which is crucial for ultralow-level biosensing in the presence of thermal or other perturbations. An RWG-based dual-resonance approach is proposed and verified by controlled experiments for separating the bulk and surface responses. The dual-resonance approach gives a sensitivity ratio of 9.4, whereas the competing polarization-based approach offers only 2.5. The improved performance of the dual-resonance approach would help reduce the probability of false readings in precise bioassay experiments where thermal variations are likely, such as in portable diagnostics.
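
The dual-resonance idea reduces to a 2×2 linear system: each resonance shift is a weighted sum of the bulk index change and the adlayer growth, so two resonances with sufficiently different bulk-to-surface sensitivity ratios allow both unknowns to be recovered. In the sketch below, only the 183.3 nm/RIU bulk sensitivity comes from the abstract; the remaining values are hypothetical.

```python
# Sketch of the dual-resonance separation: each resonance shift is modeled
# as a linear mix of bulk refractive-index change (dn) and surface adlayer
# growth (dt):
#   dlam_i = Sb_i * dn + Ss_i * dt,   i = 1, 2
# Two resonances with different Sb/Ss ratios make the 2x2 system solvable.
# Sensitivities for resonance 2 and both shifts are hypothetical; only the
# 183.3 nm/RIU bulk sensitivity appears in the abstract.
import numpy as np

S = np.array([[183.3, 0.21],     # resonance 1: bulk (nm/RIU), surface (nm/nm)
              [ 60.0, 0.08]])    # resonance 2: hypothetical sensitivities
dlam = np.array([0.40, 0.13])    # measured resonance shifts, nm (hypothetical)

dn, dt = np.linalg.solve(S, dlam)
print(f"bulk index change dn = {dn:.2e} RIU, adlayer growth dt = {dt:.2f} nm")
```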

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: In the 21st century, endoscopic study of the small intestine has undergone a revolution with capsule endoscopy and balloon-assisted enteroscopy. The difficulties and morbidity associated with intraoperative enteroscopy (IOE), the gold standard in the 20th century, have relegated this technique to a secondary role. AIMS: To evaluate the current role and assess the diagnostic and therapeutic value of intraoperative enteroscopy in patients with obscure gastrointestinal bleeding. PATIENTS AND METHODS: We conducted a retrospective study of 19 patients (11 males; mean age: 66.5 ± 15.3 years) submitted to 21 IOE procedures for obscure GI bleeding. Capsule endoscopy and double-balloon enteroscopy had been performed in 10 and 5 patients, respectively. RESULTS: With intraoperative enteroscopy, a small-bowel bleeding lesion was identified in 79% of patients and a gastrointestinal bleeding lesion in 94%. Small-bowel findings included angiodysplasia (n = 6), ulcers (n = 4), small-bowel Dieulafoy's lesion (n = 2), bleeding from anastomotic vessels (n = 1), multiple cavernous hemangiomas (n = 1), and bleeding ectopic jejunal varices (n = 1). Agreement between capsule endoscopy and intraoperative enteroscopy was 70%. Endoscopic and/or surgical treatment was used in 77.8% of the patients with a positive finding on intraoperative enteroscopy, with a rebleeding rate of 21.4% over a mean 21-month follow-up period. Procedure-related mortality and postoperative complication rates were 5% and 21%, respectively. CONCLUSIONS: Intraoperative enteroscopy remains a valuable tool in selected patients with obscure GI bleeding, achieving a high diagnostic yield and allowing endoscopic and/or surgical treatment in most of them. However, as an invasive procedure with relevant mortality and morbidity, a precise indication for its use is indispensable.

Relevance:

100.00%

Publisher:

Abstract:

Background: False-positive blood culture findings may lead to falsely increased morbidity figures and increased hospital costs. Method: The survey was conducted as a retrospective-prospective study and included 239 preterm infants (born before 37 weeks of gestation) who were treated in the Neonatal Intensive Care Unit (NICU) of the Institute for Child and Youth Health Care of Vojvodina during one year (January 1st, 2012 to December 31st, 2012). The retrospective part of the study focused on examining the incidence of neonatal sepsis and determining risk factors. In the prospective part of the study, infants were subdivided into two groups: Group 1 comprised infants hospitalized in the NICU during the first 6 months of the study, whose blood cultures were taken by the 'clean technique' without checklists for the procedure; Group 2 comprised neonates hospitalized in the NICU during the last 6 months of the study, whose blood cultures were taken by the 'sterile technique' with checklists for the procedure. Results: The main risk factors for sepsis were prelabor rupture of membranes, low gestational age, low birth weight, mechanical ventilation, umbilical venous catheter placement, and abdominal drainage. Staphylococcus aureus and coagulase-negative Staphylococcus were the microorganisms most frequently isolated from false-positive blood samples. Conclusions: Education of employees, use of checklists and sterile sets for blood sampling, permanent monitoring of false-positive blood cultures, and regular routine monthly reports are crucial for successful reduction of contamination rates.