876 results for Risk Assessment Methods


Relevance:

90.00%

Abstract:

This paper constitutes a summary of the consensus documents agreed at the First European Workshop on Implant Dentistry University Education, held in Prague on 19-22 June 2008. Implant dentistry is becoming an increasingly important treatment alternative for the restoration of missing teeth as patients' expectations and demands increase. Furthermore, implant-related complications such as peri-implantitis are presenting more frequently in the dental surgery. This consensus paper recommends that implant dentistry should be an integral part of the undergraduate curriculum. Whilst few schools will achieve student competence in the surgical placement of implants, this should not preclude the inclusion of the fundamental principles of implant dentistry in the undergraduate curriculum, such as the evidence base for their use, indications and contraindications, and treatment of the complications that may arise. The consensus paper sets out the rationale for the introduction of implant dentistry into the dental curriculum and the knowledge base for an undergraduate programme in the subject. It lists the competencies that might be sought, without expectations of surgical placement of implants at this stage, and the assessment methods that might be employed. The paper also addresses the competencies and educational pathways for postgraduate education in implant dentistry.

Relevance:

90.00%

Abstract:

Soil conservation technologies that fit well at the local scale and are acceptable to land users are increasingly needed. To achieve this at the smallholder farm level, an understanding is needed of specific erosion processes and indicators, of land users' knowledge, and of their willingness, ability and possibilities to respond to the respective problems when deciding on control options. This study was carried out to assess local erosion and the performance of previously introduced conservation terraces from both technological and land users' points of view. The study was conducted from July to August 2008 in the Angereb watershed, on 58 farm plots from three selected case-study catchments. Participatory erosion assessment and evaluation were implemented along with direct field measurement procedures. Our focus was to involve the land users in the action research to explore with them the effectiveness of existing conservation measures against the erosion hazard. Terrace characteristics were measured and evaluated against the terrace implementation guideline of Hurni (1986). The long-term consequences of seasonal erosion indicators were often neither known nor noticed by farmers. The cause-and-effect relationships between the erosion indicators and conservation measures revealed the limitations and gaps to be addressed towards sustainable erosion control strategies. Erosion control was observed to be less effective, and participants believed the gaps resulted from a lack of genuine participation by land users. The results of both the local erosion observations and the assessment of conservation efficacy from different aspects show the need to promote approaches in which erosion evaluation and the planning of interventions are carried out by the farmers themselves. This paper describes the importance of involving the human factor in empirical erosion assessment methods for sustainable soil conservation.

Relevance:

90.00%

Abstract:

The aim of this study was to determine the reliability of the conditioned pain modulation (CPM) paradigm assessed by an objective electrophysiological method, the nociceptive withdrawal reflex (NWR), and psychophysical measures, using hypothetical sample sizes for future studies as analytical goals. Thirty-four healthy volunteers participated in two identical experimental sessions, separated by 1 to 3 weeks. In each session, the cold pressor test (CPT) was used to induce CPM, and the NWR thresholds, electrical pain detection thresholds and pain intensity ratings after suprathreshold electrical stimulation were assessed before and during CPT. CPM was consistently detected by all methods, and the electrophysiological measures did not introduce additional variation to the assessment. In particular, 99% of the trials resulted in higher NWR thresholds during CPT, with an average increase of 3.4 mA (p<0.001). Similarly, 96% of the trials resulted in higher electrical pain detection thresholds during CPT, with an average increase of 2.2 mA (p<0.001). Pain intensity ratings after suprathreshold electrical stimulation were reduced during CPT in 84% of the trials, displaying an average decrease of 1.5 points on a numeric rating scale (p<0.001). Under these experimental conditions, CPM reliability was acceptable for all assessment methods in terms of sample sizes for potential experiments. The presented results are encouraging with regard to the use of CPM as an assessment tool in experimental and clinical pain. Trial registration: ClinicalTrials.gov NCT01636440.
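The analytical goal here, translating an observed CPM effect into sample sizes for future studies, follows the usual paired-design power calculation. A minimal sketch is given below; the 3.4 mA mean threshold increase comes from the abstract, while the change-score SD and the alpha/power settings are hypothetical values chosen only for illustration.

```python
import math
from scipy.stats import norm

def paired_sample_size(mean_change, sd_change, alpha=0.05, power=0.80):
    """Participants needed to detect a mean within-subject change in a
    paired design (two-sided test, normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) * sd_change / mean_change) ** 2)

# Reported mean NWR-threshold increase during CPT: 3.4 mA.
# The change-score SD of 4.0 mA is a hypothetical value for illustration.
print(paired_sample_size(mean_change=3.4, sd_change=4.0))  # -> 11
```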

Relevance:

90.00%

Abstract:

Background. Over half of children in the United States under age five spend 32 hours a week in child care facilities, where they consume approximately 33-50% of their food intake. Objectives. The aim of this research was to identify the effects of state nutrition policies on the provision of food in child care centers. Subjects. Eleven directors or their designees from ten randomly selected licensed child care centers in Travis County, Texas were interviewed. Centers included both nonprofit and for-profit centers, with enrollments ranging from 19 to 82. Methods. Centers were selected using a web-based list of licensed child care providers in the Austin area. One-on-one interviews were conducted in person with center directors using a standard set of questions developed from previous pilot work. Interview items included demographic data, questions about state policies regarding the provision of foods in centers, effects of policies on child care center budgets and foods offered, and changes in the provision of food. All interviews were audiotaped and transcribed, and themes were identified using standard qualitative techniques. Results. Four of the centers provided both meals and snacks, four provided snacks only, and two did not provide any food. Directors of centers that provided food were more likely to report adherence to the Minimum Standards than directors of centers that did not. In general, center directors reported that the regulations were loosely enforced. In contrast, center directors were more concerned about a local city-county regulation that required food permits and new standards for kitchens. Most of these local regulations were cost-prohibitive and, as a result, centers had changed the types of foods provided, offering less fresh produce and more prepackaged items. Although implementation of local regulations had reduced the provision of fruits and vegetables to children, no adjustments were reported in the allocation of resources, tuition costs or care of the children. Conclusions. Qualitative data from a small sample of child care directors indicate that the implementation of, and accountability for, food- and nutrition-related guidelines for centers are sporadic and uncoordinated, and can have unforeseen effects on the provision of food. A quantitative survey and dietary assessment should be conducted to verify these findings in a larger and more representative sample.

Relevance:

90.00%

Abstract:

Background: Reliability or validity studies are important for the evaluation of measurement error in dietary assessment methods. An approach to validation known as the method of triads uses triangulation techniques to calculate the validity coefficient of a food-frequency questionnaire (FFQ). Objective: To assess the validity of FFQ estimates of carotenoid and vitamin E intake against serum biomarker measurements and weighed food records (WFRs), by applying the method of triads. Design: The study population was a sub-sample of adult participants in a randomised controlled trial of beta-carotene and sunscreen in the prevention of skin cancer. Dietary intake was assessed by a self-administered FFQ and a WFR. Non-fasting blood samples were collected and plasma analysed for five carotenoids (alpha-carotene, beta-carotene, beta-cryptoxanthin, lutein, lycopene) and vitamin E. Correlation coefficients were calculated between each pair of dietary methods, and the validity coefficient was calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Results: The validity coefficients of the FFQ were highest for alpha-carotene (0.85) and lycopene (0.62), followed by beta-carotene (0.55) and total carotenoids (0.55), while the lowest validity coefficient was for lutein (0.19). The method of triads could not be used for beta-cryptoxanthin and vitamin E, as one of the three underlying correlations was negative. Conclusions: Results were similar to other studies of validity using biomarkers and the method of triads. For many dietary factors, the upper limit of the validity coefficients was less than 0.5, and therefore only strong relationships between dietary exposure and disease will be detected.
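For readers unfamiliar with the method of triads: with three measures of the same intake, the FFQ (Q), the weighed food record (R) and the biomarker (M), the validity coefficient of the FFQ is estimated from the three pairwise correlations as sqrt((r_QR x r_QM) / r_RM). A minimal sketch, with hypothetical correlation values:

```python
import math

def triad_validity(r_qr, r_qm, r_rm):
    """Validity coefficient of the FFQ (Q) from its correlations with the
    weighed food record (R) and the biomarker (M). Not applicable when
    the ratio is non-positive, e.g. if one correlation is negative."""
    ratio = (r_qr * r_qm) / r_rm
    if ratio <= 0:
        raise ValueError("method of triads not applicable")
    return math.sqrt(min(ratio, 1.0))  # cap at 1 for Heywood cases

# Hypothetical correlations, for illustration only:
print(round(triad_validity(r_qr=0.45, r_qm=0.35, r_rm=0.40), 2))  # -> 0.63
```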

Relevance:

90.00%

Abstract:

Background and Objectives: This paper reports on historical changes in assessment, culminating in the experience of one discipline with negotiated student feedback that has helped design and modify assessment to cater for the requirements of both students and teachers. The standard of assessment required to pass Obstetrics and Gynaecology in the four-year graduate entry program in the School of Medicine at The University of Queensland, Brisbane, Australia has become less formalised and more collaborative. Changes in assessment in this discipline over the last 20 years reflect the development of an understanding of the educational principles associated with adult teaching and learning. Assessment has evolved from being teacher-focussed, with questionable reliability and validity and an emphasis on outcomes, to being focussed on learning and the student. Multiple-choice examinations, combined with a collaborative approach to the reliability and validity of questions and answers and a debrief or feedback session, have been found to provide an assessment format that is an acceptable measure of learning for both teachers and students. Changes in assessment reflect a collaborative process between teachers and students based on principles of adult learning and involving negotiated student feedback. Our experience with this form of negotiated outcome for assessment is presented, together with suggestions for improvement, and is contrasted with assessment methods used in this department over the last 20 years. Change and refinement will continue as medical programs strive to meet the learning needs of students and to deliver assessment outcomes that are acceptable to their teachers.

Relevance:

90.00%

Abstract:

Purpose: This pilot study explored the feasibility and effectiveness of an Internet-based telerehabilitation application for the assessment of motor speech disorders in adults with acquired neurological impairment. Method: Using a counterbalanced, repeated measures research design, 2 speech-language pathologists assessed 19 speakers with dysarthria on a battery of perceptual assessments. The assessments included a 19-item version of the Frenchay Dysarthria Assessment (FDA; P. Enderby, 1983), the Assessment of Intelligibility of Dysarthric Speech (K. M. Yorkston & D. R. Beukelman, 1981), perceptual analysis of a speech sample, and an overall rating of severity of the dysarthria. One assessment was conducted in the traditional face-to-face manner, whereas the other assessment was conducted using an online, custom-built telerehabilitation application. This application enabled real-time videoconferencing at 128 kb/s and the transfer of store-and-forward audio and video data between the speaker and speech-language pathologist sites. The assessment methods were compared using the J. M. Bland and D. G. Altman (1986, 1999) limits-of-agreement method and the percentage level of agreement between the 2 methods. Results: Measurements of severity of dysarthria, percentage intelligibility in sentences, and most perceptual ratings made in the telerehabilitation environment were found to fall within the clinically acceptable criteria. However, several ratings on the FDA were not comparable between the environments, and explanations for these results were explored. Conclusions: The online assessment of motor speech disorders using an Internet-based telerehabilitation system is feasible. This study suggests that with additional refinement of the technology and assessment protocols, reliable assessment of motor speech disorders over the Internet is possible. Future research methods are outlined.
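The Bland and Altman limits-of-agreement method cited here quantifies agreement between two measurement methods as the mean of the paired differences (the bias) plus or minus 1.96 times their standard deviation. A minimal sketch of the computation, with hypothetical ratings:

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman analysis: bias and 95% limits of agreement."""
    d = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # SD of the paired differences
    return bias, (bias - half_width, bias + half_width)

# Hypothetical severity ratings: face-to-face vs. telerehabilitation
face_to_face = [3, 2, 4, 3, 1, 2, 4]
online = [3, 3, 4, 2, 1, 2, 3]
bias, (low, high) = limits_of_agreement(face_to_face, online)
print(f"bias={bias:.2f}, 95% LoA=({low:.2f}, {high:.2f})")
```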

Relevance:

90.00%

Abstract:

PURPOSE. To establish an alternative method, sequential and diameter response analysis (SDRA), to determine dynamic retinal vessel responses and their time course during serial stimulation, compared with the established method of averaged diameter responses and standard static assessment. METHODS. SDRA focuses on individual time and diameter responses, taking into account the fluctuation in baseline diameter, providing improved insight into reaction patterns when compared with established methods as delivered by retinal vessel analyzer (RVA) software. SDRA patterns were developed with measurements from 78 healthy nonsmokers and subsequently validated in a group of 21 otherwise healthy smokers. Fundus photography and retinal vessel responses were assessed by RVA, intraocular pressure by contact tonometry, and blood pressure by sphygmomanometry. RESULTS. Compared with the RVA software method, SDRA demonstrated a marked difference in retinal vessel responses to flickering light (P < 0.05). As a validation of that finding, SDRA showed a strong relation between baseline retinal vessel diameter and subsequent dilatory response in both healthy subjects and smokers (P < 0.001). The RVA software was unable to detect this difference or to find a difference in retinal vessel arteriovenous ratio between smokers and nonsmokers (P = 0.243). However, SDRA revealed that smokers' vessels showed both an increased level of arterial baseline diameter fluctuation before flicker stimulation (P = 0.005) and an increased stiffness of retinal arterioles (P = 0.035) compared with those in nonsmokers. These differences were unrelated to intraocular pressure or systemic blood pressure. CONCLUSIONS. SDRA shows promise as a tool for the assessment of vessel physiology. Further studies are needed to explore its application in patients with vascular diseases.
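The abstract does not spell out SDRA's computation; the sketch below only illustrates the general idea it describes, measuring each dilatory response against the fluctuating baseline immediately preceding that stimulation rather than against a session-wide average. All names, window lengths, and data values are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def sequential_responses(diameter, onsets, base_win=30, resp_win=30):
    """For each flicker onset, express peak dilation relative to the
    baseline directly preceding that stimulation (illustrative reading
    of SDRA, not the authors' exact algorithm)."""
    results = []
    for t in onsets:
        baseline = diameter[t - base_win:t].mean()
        peak = diameter[t:t + resp_win].max()
        results.append(100.0 * (peak - baseline) / baseline)
    return results

# Synthetic diameter trace with three flicker-evoked dilations:
rng = np.random.default_rng(0)
trace = 120 + rng.normal(0, 1.0, 400)
for onset, gain in [(100, 6), (200, 5), (300, 7)]:
    trace[onset:onset + 30] += gain
print([round(r, 1) for r in sequential_responses(trace, [100, 200, 300])])
```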

Relevance:

90.00%

Abstract:

This paper reports findings of a two-year study concerning the development and implementation of a general-purpose computer-based assessment (CBA) system at a UK university. Data gathering took place over a period of nineteen months, involving a number of formative and summative assessments. Approximately 1,000 students, drawn from undergraduate courses, were involved in the exercise. The techniques used in gathering data included questionnaires, observation, interviews and an analysis of student scores in both conventional examinations and computer-based assessments. Comparisons with conventional assessment methods suggest that the use of CBA techniques may improve the overall performance of students. However, it is clear that the technique must not be seen as a "quick fix" for problems such as rising student numbers. If one accepts that current systems test only a relatively narrow range of skills, then the hasty implementation of CBA systems will result in a distorted and inaccurate view of student performance. In turn, this may serve to reduce the overall quality of courses and, ultimately, detract from the student learning experience. On the other hand, if one adopts a considered and methodical approach to computer-based assessment, positive benefits might include increased efficiency and quality, leading to improved student learning.

Relevance:

90.00%

Abstract:

This research is focused on the optimisation of resource utilisation in wireless mobile networks with consideration of users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems.

A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation property is content independent.

Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined and the results are compared with existing scheduling methods in terms of achievable fairness, efficiency and correlation.

Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions, and the shape of the QoE distribution amongst the users for different scheduling policies, have been demonstrated in the context of LTE.

Finally, work on interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user's data (e.g. video traffic), while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fairness-efficiency spectrum. The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and each of the WiFi access points involved. The performance of non-seamless and user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
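The abstract characterises Pause Intensity only as a joint, content-independent measure of pause duration and pause frequency. The sketch below shows one plausible formulation, the paused-time ratio multiplied by the pause rate; the dissertation's exact normalization is not given here, so treat this as an assumption rather than the paper's definition.

```python
def pause_intensity(pause_durations, session_seconds):
    """One plausible formulation: fraction of session time spent paused,
    multiplied by pause frequency (pauses per second). The dissertation's
    exact normalization may differ; this is an illustrative assumption."""
    duration_ratio = sum(pause_durations) / session_seconds  # in [0, 1]
    frequency = len(pause_durations) / session_seconds       # pauses/s
    return duration_ratio * frequency

# A 60 s streaming session with three stalls (hypothetical playback trace):
print(pause_intensity(pause_durations=[2.0, 1.5, 3.0], session_seconds=60.0))
```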

Relevance:

90.00%

Abstract:

Risk management in healthcare represents a group of various complex actions implemented to improve the quality of healthcare services and guarantee patient safety. Risks cannot be eliminated, but they can be controlled with different risk assessment methods derived from industrial applications; among these, Failure Mode, Effects and Criticality Analysis (FMECA) is a widely used methodology. The main purpose of this work is the analysis of failure modes of the Home Care (HC) service provided by the local healthcare unit of Naples (ASL NA1), to focus attention on human and non-human factors according to the organizational framework selected by the WHO. © Springer International Publishing Switzerland 2014.
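FMECA is typically operationalized by scoring each failure mode for severity, occurrence, and detectability and ranking by the product of the three, the Risk Priority Number (RPN). The sketch below illustrates that ranking step; the failure modes listed are invented examples, not findings of the ASL NA1 analysis.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk Priority Number = severity x occurrence x detectability."""
        return self.severity * self.occurrence * self.detectability

# Invented home-care failure modes, for illustration only:
modes = [
    FailureMode("wrong drug dosage administered at home", 9, 3, 6),
    FailureMode("missed scheduled home visit", 5, 5, 3),
    FailureMode("incomplete transfer of patient records", 4, 6, 7),
]
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{mode.name}: RPN = {mode.rpn}")
```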

Relevance:

90.00%

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that have implemented the aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection [FBP] vs. Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as in the previous study, a number of different mathematical observer models were implemented to assess which would yield image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality, such as contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
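As a concrete point of reference for the two ends of that model spectrum, the sketch below computes the simple CNR from signal and background ROIs and the non-prewhitening matched-filter detectability index d' = (s.s)/sqrt(s.K.s), where s is the expected signal and K the noise covariance. Array shapes and data values are hypothetical.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: ROI mean difference over background noise."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

def dprime_npw(signal, noise_samples):
    """Non-prewhitening matched-filter detectability index:
    d' = (s.s) / sqrt(s.K.s), with K estimated from noise-only samples."""
    s = np.ravel(signal)
    K = np.cov(np.stack([np.ravel(n) for n in noise_samples]), rowvar=False)
    return (s @ s) / np.sqrt(s @ K @ s)

# Hypothetical 1-D signal profile and white-noise samples:
rng = np.random.default_rng(1)
s = np.exp(-((np.arange(16) - 8.0) ** 2) / 8.0)
noise = [rng.normal(0, 1, 16) for _ in range(500)]
print(dprime_npw(s, noise))  # for white unit-variance noise, approx sqrt(s.s)
```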

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
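The image-subtraction technique mentioned here isolates quantum noise by subtracting two repeated scans of the same phantom, which cancels the identical deterministic structure; the per-image noise is then the standard deviation of the difference divided by sqrt(2). A minimal sketch, with synthetic data standing in for CT images:

```python
import numpy as np

def quantum_noise(scan_a, scan_b):
    """Noise estimate from two repeated scans of the same object: the fixed
    structure cancels in the difference, and each image contributes equally
    to its variance, hence the sqrt(2) normalization."""
    diff = scan_a.astype(float) - scan_b.astype(float)
    return diff.std(ddof=1) / np.sqrt(2)

rng = np.random.default_rng(2)
structure = rng.normal(50, 20, (64, 64))          # fixed phantom texture
scan1 = structure + rng.normal(0, 10, (64, 64))   # quantum noise, sigma = 10
scan2 = structure + rng.normal(0, 10, (64, 64))
print(quantum_noise(scan1, scan2))  # approx 10, independent of the texture
```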

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
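For context, the conventional rectangular-ROI NPS estimate that the irregular-ROI method generalizes is the ensemble average of the squared Fourier amplitudes of mean-subtracted noise ROIs, scaled by pixel area over ROI dimensions. A minimal sketch of that baseline computation (the dissertation's irregular-ROI extension is not reproduced here):

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.5):
    """2-D noise power spectrum from an ensemble of square noise ROIs:
    NPS = (dx*dy / (Nx*Ny)) * <|FFT2(roi - mean(roi))|^2>."""
    rois = np.stack([r - r.mean() for r in noise_rois]).astype(float)
    ny, nx = rois.shape[1:]
    spectra = np.abs(np.fft.fft2(rois)) ** 2   # FFT over the last two axes
    return (pixel_mm ** 2 / (nx * ny)) * spectra.mean(axis=0)

# White-noise ROIs (sigma = 10) stand in for scanner noise:
rng = np.random.default_rng(3)
rois = [rng.normal(0, 10, (32, 32)) for _ in range(50)]
nps = nps_2d(rois)
# Sanity check: the NPS integrated over frequency space equals the variance.
print(nps.sum() / (32 * 0.5) ** 2)  # approx 100
```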

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
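The abstract describes the lesion models only at the level of size, shape, contrast, and edge profile. One simple analytical form containing exactly those ingredients, a spherically symmetric contrast profile with a sigmoid edge, is sketched below as an illustration; the functional form and parameter names are assumptions, not the dissertation's actual models.

```python
import numpy as np

def lesion_profile(r_mm, radius_mm=5.0, contrast_hu=-15.0, edge_mm=0.8):
    """Radial contrast profile: full contrast inside the radius, zero far
    outside, with a sigmoid edge of width ~edge_mm (illustrative form)."""
    return contrast_hu / (1.0 + np.exp((r_mm - radius_mm) / edge_mm))

def voxelize(shape=(32, 32, 32), voxel_mm=0.6, **params):
    """Voxelized lesion centered in a grid, ready to insert into an image."""
    zz, yy, xx = np.indices(shape)
    cz, cy, cx = (np.array(shape) - 1) / 2.0
    r = voxel_mm * np.sqrt((zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2)
    return lesion_profile(r, **params)

lesion = voxelize()
print(lesion[16, 16, 16])  # approx -15 HU at the lesion center
```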

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

90.00%

Abstract:

Aim: To explore how pregnant women experience fetal movements in late pregnancy. Specific aims were: to study women's experiences during the time prior to receiving news that their unborn baby had died in utero (I), to investigate women's descriptions of fetal movements (II), to investigate the association between the magnitude of fetal movements and the level of prenatal attachment (III), and to study women's experiences of using two different self-assessment methods (IV). Methods: Interviews, questionnaires, and observations were used. Results: The participants experienced a premonition, based on a lack of fetal movements, that something had happened to their unborn baby. The overall theme “something is wrong” describes the women's insight that the baby's life was threatened (I). Fetal movements sorted into the domain “powerful movements” were perceived in late pregnancy by 96% of the participants (II). Perceiving frequent fetal movements on at least three occasions per 24 hours was associated with higher scores of prenatal attachment on all three subscales of the PAI-R. Of the 456 participants, the majority (55%) reported an average number of occasions of frequent fetal movements during the current gestational week, 26% reported several occasions, and 18% reported few occasions (III). Only one of the 40 participants did not find at least one method for monitoring fetal movements suitable. Fifteen of the 39 participants reported a preference for the mindfetalness method and five for the count-to-ten method. The women described the observation of the movements as a safe and reassuring moment for communication with their unborn baby (IV). Conclusion: In full-term and uncomplicated pregnancies, women usually perceive fetal movements as powerful. Furthermore, women in late pregnancy who reported frequent fetal movements on several occasions during a 24-hour period seem to have a high level of prenatal attachment. Women who used self-assessment methods for monitoring fetal movements felt calm and relaxed when observing the movements of their babies, and showed high compliance with both self-assessment methods. Women who had experienced a stillbirth in late pregnancy described having had a premonition before they were told that their baby had died in utero.

Relevance:

90.00%

Abstract:

Evidence suggests that health benefits are associated with consuming recommended amounts of fruits and vegetables (F&V), yet standardised assessment methods to measure F&V intake are lacking. The current review aims to identify methods to assess F&V intake among children and adults in pan-European studies and to inform the development of the DEDIPAC (DEterminants of DIet and Physical Activity) toolbox of methods suitable for use in future European studies. A literature search was conducted using three electronic databases and by hand-searching reference lists. English-language studies of any design which assessed F&V intake were included in the review, provided they involved two or more European countries and healthy, free-living children or adults. The review identified fifty-one pan-European studies which assessed F&V intake. The FFQ was the most commonly used method (n = 42), followed by 24 h recall (n = 11) and diet records/diet history (n = 7). Differences existed between the identified methods; for example, in the number of F&V items on the FFQ and in whether potatoes/legumes were classified as vegetables. In total, eight validated instruments were identified which assessed F&V intake among adults, adolescents or children. The current review indicates that an agreed classification of F&V is needed in order to standardise intake data more effectively between European countries. Validated methods used in pan-European populations encompassing a range of European regions were identified. These methods should be considered for use in future studies focused on evaluating the intake of F&V.