900 results for Task based language learning
Abstract:
Topic management by non-native speakers (NNSs) during informal conversations has received comparatively little attention from researchers, and receives surprisingly little attention in second language learning and teaching. This article reports on one of the topic management strategies employed by international students during informal, social interactions with native-speaker peers, exploring the process of maintaining topic continuity following temporary suspensions of topics. The concept of side sequences is employed to illustrate the nature of different types of topic suspension, as well as the process of jointly negotiating a return to the topic. Extracts from the conversations show that such sequences were not exclusively occasioned by language difficulties, and that the non-native speaker participants were able to effect successful returns to the main topic of the conversations.
Abstract:
The realization of the Semantic Web is constrained by a knowledge acquisition bottleneck, i.e. the problem of how to add RDF mark-up to the millions of ordinary web pages that already exist. Information Extraction (IE) has been proposed as a solution to the annotation bottleneck. In the task-based evaluation reported here, we compared the performance of users without access to annotation, users working with annotations which had been produced from manually constructed knowledge bases, and users working with annotations augmented using IE. We looked at retrieval performance, overlap between retrieved items and the two sets of annotations, and usage of annotation options. Automatically generated annotations were found to add value to the browsing experience in the scenario investigated. Copyright 2005 ACM.
Abstract:
The introduction of languages, especially English, into the primary curriculum around the world has been one of the major language-in-education policy developments in recent years. In countries where English has been compulsory for a number of years, the question arises as to what extent the numerous and well-documented challenges faced by the initial implementation of early language learning policies have been overcome and whether new challenges have arisen as policies have become consolidated. This article therefore focuses on South Korea, where English has been compulsory in primary school since 1997. The issues raised by the introduction of English into the primary curriculum are reviewed and the current situation in South Korea is investigated. The results of a mixed methods study using survey data from 125 Korean primary school teachers, together with data from a small-scale case study of one teacher, are presented. The study shows that, while some of the initial problems caused by the introduction of early language learning appear to have been addressed, other challenges persist. Moreover, the data reveal the emergence of a number of new challenges faced by primary school teachers of English as they seek to implement government policy. © 2013 Taylor & Francis.
Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, etc. Most existing exploratory approaches cannot analyse these datasets because of the large number of molecules and the high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods, in which log-transformations are used at certain steps of the expectation-maximisation (EM) based parameter learning process, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants on both synthetic data and an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem that has received little attention in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, using an appropriate noise model for each type of data, in order to visualise mixed-type data in a single plot. We call this model generalised GTM (GGTM). We also propose to extend the GGTM model to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models on both synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use KL divergence and nearest-neighbour classification error as quality metrics in order to determine the separation between classes. We demonstrate the efficacy of these proposed models on both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
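To illustrate the kind of log-transformation that keeps EM-based training tractable in high dimensions, the sketch below computes GTM-style responsibilities entirely in the log domain via log-sum-exp. It is a generic, minimal illustration under an assumed isotropic Gaussian noise model; the exact steps at which the thesis applies log-transformations, and its parameter names, are not reproduced here.

```python
# Minimal sketch of a log-domain E-step that keeps responsibility computation
# numerically stable in high dimensions (a log-sum-exp variant of a GTM/GMM
# E-step; not the authors' exact formulation).
import numpy as np
from scipy.special import logsumexp

def log_responsibilities(X, centres, beta):
    """E-step of an isotropic-Gaussian mixture (GTM-like) in the log domain.
    X: (N, D) data; centres: (K, D) mapped latent-grid centres; beta: inverse variance."""
    N, D = X.shape
    # Log of the (unnormalised) Gaussian likelihood of each point under each centre.
    sq_dist = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)      # (N, K)
    log_lik = 0.5 * D * np.log(beta / (2 * np.pi)) - 0.5 * beta * sq_dist
    # log R_nk = log_lik_nk - logsumexp_k(log_lik_nk): stable even when D is large
    return log_lik - logsumexp(log_lik, axis=1, keepdims=True)

# Toy usage: 500 points in 1,000 dimensions, 16 latent-grid centres.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1000))
centres = rng.normal(size=(16, 1000))
R = np.exp(log_responsibilities(X, centres, beta=1.0))
print(R.shape, np.allclose(R.sum(axis=1), 1.0))   # (500, 16) True
```

Computed naively, the per-component likelihoods here would underflow to zero because each is a product of 1,000 univariate densities; working in the log domain avoids that.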
Abstract:
The introduction of languages into the primary curriculum has been the major development in language-in-education policy around the world over the last 20-25 years. In the vast majority of countries the language taught is English and it is being taught at an ever-earlier age. A relatively large amount of research has been carried out in Asia into teaching English to young learners (TEYL) from the point of view of language policy and planning and of policy implementation, especially in terms of the gap between policy and practice caused by the introduction of new methodologies such as communicative language teaching. However, to date far less research has been carried out into the situation in Europe, particularly concerning the attitudes of those most closely involved in policy implementation - the teachers themselves. This chapter examines the attitudes of teachers in six European countries (Italy, Latvia, Macedonia, Poland, Spain and Ukraine), uncovering the challenges they face and the changes they would like to see enacted to improve English language learning and teaching in their countries. The implications for policy, planning and teacher education are also discussed.
Abstract:
Wikis are quickly emerging as a new corporate medium for communication and collaboration. They allow dispersed groups of collaborators to asynchronously engage in persistent conversations, the result of which is stored on a common server as a single, shared truth. To gauge the enterprise value of wikis, the authors draw on Media Choice Theories (MCTs) as an evaluation framework. MCTs reveal core capabilities of communication media and their fit with the communication task. Based on the evaluation, the authors argue that wikis are equivalent or superior to existing asynchronous communication media in key characteristics. They further argue that wiki technology challenges some of the beliefs held by existing media choice theories, as wikis introduce media characteristics not previously envisioned. The authors thus predict a promising future for wiki use in enterprises.
Abstract:
This study had two purposes: (a) to develop a theoretical framework integrating and synthesizing findings of prior research regarding stress and burnout among critical care nurses (CCRNs), and (b) to validate the theoretical framework with an empirical study to assure a theory/research-based teaching-learning process for graduate courses preparing nursing clinical specialists and administrators. The methods used to test the theoretical framework included: (a) adopting instruments with reported validity, (b) conducting a pilot study, (c) revising instruments using results of the pilot study and following concurrence of a panel of experts, and (d) establishing correlations within predetermined parameters. The reliability of the tool was determined through the use of Cronbach's Alpha Coefficient, with a resulting range from .68 to .88 for all measures. The findings supported all the research hypotheses. Correlations were established at r = .23 for statistically significant alphas at the .01 level and r = .16 for alphas at the .05 level. The conclusions indicated three areas of strong correlation among the theoretical variables: (a) work environment stressor antecedents and specific stressor events were correlated significantly with subjective work stress and burnout; (b) subjective work stress (perceived work-related stress) was a function of the work environment stressor antecedents and specific stressor events; and (c) emotional exhaustion, the first phase of burnout, was confirmed to be related to stressor antecedents and specific stressor events. This dimension was found to be a function of the work environment stressor antecedents, modified by the individual characteristics of work and non-work related social support, non-work daily stress, and the number of hours worked per week. The implications of the study for nursing graduate curricula, nursing practice and nursing education were discussed. Recommendations for further research were enumerated.
Abstract:
The role that gender plays with respect to language learning in the classroom is ripe for investigation. Some educators and researchers maintain that females possess superior language skills. This author argues that ideas regarding female language superiority are suspect and may encourage discriminatory pedagogy for women as well as men.
Abstract:
Graph reduction machines are a traditional technique for implementing functional programming languages. They run programs by transforming graphs through the successive application of reduction rules. Web service composition enables the creation of new web services from existing ones. BPEL is a workflow-based language for creating web service compositions and is the industrial and academic standard for this kind of language. As it is designed to compose web services, the use of BPEL in a scenario where multiple technologies need to be used is problematic: when operations other than web services need to be performed to implement the business logic of a company, part of the work is done on an ad hoc basis. Allowing heterogeneous operations to be part of the same workflow may help to improve the implementation of business processes in a principled way. This work uses a simple variation of the BPEL language for creating compositions containing not only web service operations but also big data tasks or user-defined operations. We define an extensible graph reduction machine that allows the evaluation of BPEL programs and implement this machine as a proof of concept. We present some experimental results.
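As a rough illustration of how a graph reduction machine evaluates a workflow, the sketch below reduces a tiny composition graph by repeatedly applying reduction rules to nodes whose arguments have already been reduced. The node types, rules, and the stand-in for a web service invocation are illustrative assumptions, not the BPEL variant or machine described in the abstract.

```python
# Minimal sketch of a graph reduction machine over a workflow graph.
# Node names, rule names, and the toy workflow are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                      # e.g. "invoke", "concat", "const"
    args: list = field(default_factory=list)
    value: object = None         # populated once the node is reduced
    reduced: bool = False

# Reduction rules: each rule rewrites one node, given its already-reduced arguments.
RULES = {
    "const":  lambda node, args: node.value,                  # literals reduce to themselves
    "concat": lambda node, args: "".join(map(str, args)),     # a user-defined operation
    "invoke": lambda node, args: f"<result of {args[0]}>",    # stand-in for a web service call
}

def reduce_graph(root: Node):
    """Repeatedly apply reduction rules until the root node has a value."""
    def step(node: Node):
        if node.reduced:
            return node.value
        arg_values = [step(a) for a in node.args]   # reduce sub-graphs first (innermost strategy)
        node.value = RULES[node.op](node, arg_values)
        node.reduced = True
        return node.value
    return step(root)

# A toy composition: invoke a service, then combine its result with a constant.
greeting = Node("const", value="hello, ")
service = Node("invoke", args=[Node("const", value="orderService")])
workflow = Node("concat", args=[greeting, service])
print(reduce_graph(workflow))    # hello, <result of orderService>
```

Extending such a machine amounts to registering new reduction rules, which is one way heterogeneous operations (web services, big data tasks, user-defined code) can coexist in a single workflow graph.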
Abstract:
We live in a world inherently influenced by technology, in which education is immersed in realities made possible by the support of digital technologies such as electronic mobile devices. Thus, the general aim of this study is to map and analyse the influence of mobile devices on teaching, especially with reference to learning the English language. The specific aims are to investigate how the use of mobile devices is present in the research participants' practices, to consider whether such use is, according to the students, beneficial to English language learning, and to map how the use of mobile devices favours the normalisation stage, taken in this research as a complex process. The theoretical background of this study includes the premises of the Paradigm of Complexity, especially concerning the acquisition of a second language, the precepts of Normalisation, which refers to the total integration of digital technologies into the English teaching and learning process in such a way that they become invisible, and theories of language learning mediated by computers and mobile devices. Methodologically, this is a qualitative ethnographic study whose context is a language institute located in the Triângulo Mineiro region. In addition to students from five groups in the institution, two teachers and an administrative assistant participated in the research. Data were collected through an online questionnaire, learning reports produced by students, and interviews with teachers and administrative staff. The analyses indicate that mobile devices are present in the daily practices of English learners, but that such use is, in most cases, prompted by the teacher's encouragement. Moreover, despite expressing positive views on the role of digital technologies in the process of English teaching and learning, students and teachers display a gap between what they say and what they do regarding the learning contexts they consider valid. Additionally, the use of mobile devices in the English learning process is not yet normalised, because the integration of technology into teaching is still governed by traditional uses of technology. I conclude that the use of mobile devices in the English learning process is still not normalised because, even though students use their mobile devices every day, they generally do not perceive the affordances of such use as opportunities to learn English.
Abstract:
The intervention research proposed here was based on Cultural-Historical Theory, grounded in the laws and logic of historical-dialectical materialism. Accordingly, we sought to design a research process in which all participants shared responsibility for the process. In the field of continuing teacher education, dualistic and paradoxical processes have typically been found as a result of the adopted training models, which are characterized by individualistic processes. The teacher training work sought to overcome this dualism and to promote the unveiling of the contradictions with regard to teaching models. As a hypothesis, we expected that, immersed in this process, teachers would recognize such contradictions, and that this recognition would turn the contradictions into the driving force of change in teaching practice, realizing the teaching-learning-development triad as the basis of praxis. Aiming to develop a continuing education process that contributes to teachers' professional development, we sought to answer the following research question: how, and in what ways, did the changes in teachers who participated in the Didactic-Formative Intervention process raise the quality of their teaching practices? In this context, the objective of the research was to develop a Didactic-Formative Intervention process from the perspective of Cultural-Historical Theory with high school teachers, in order to theorize about the changes in teachers' pedagogical practices and to apprehend the aspects that transform the essence of teaching practice. The research involved two high school teachers from a public school in Uberlândia-MG. The training meetings took place at the school through a collective study group between 2013 and 2015. Two interconnected procedures were used: classroom observations and theoretical-methodological training, both for diagnosis and for process evaluation; the second has a formative dimension and a didactic dimension (a double meaning), serving both to train the teacher didactically and to elaborate didactic procedures. The collected data were analyzed following the assumptions of the method: analysis by units and attention to processuality. As results, teachers showed changes in their teaching practices regarding the organization of pedagogical work and came to design their educational actions based on students' learning and development. Continuous diagnosis during classes, work with systems of concepts and their conceptual links, and problematization as a teaching method can be pointed out as meaningful changes in their praxis. Regarding the training activities that emerged from the materials compiled and analyzed throughout the process, the following can be emphasized: forming a collective study group for teachers' continuing education within the school; diagnostics; development of practical activities; strengthening relationships among participants; choosing scientific material in direct relation to participants' needs; and promoting conditions that enable the emergence of contradictions between teachers' pedagogical practice and teaching from the perspective of Cultural-Historical Theory. This research aimed to develop and design teacher training processes that increase the quality of teachers' lives and ways of teaching in Brazilian public schools.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
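As a concrete, minimal illustration of these three components, the following sketch simulates a detection task, applies a non-prewhitening (NPW) matched-filter observer, and reports a detectability index d′. The lesion size, contrast, and noise level are illustrative assumptions rather than values used in the dissertation.

```python
# Minimal sketch of a task-based detectability calculation with a
# non-prewhitening (NPW) matched-filter observer on simulated ROIs.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 64, 500

# (1) Imaging task: detect a small blurred disc (the "lesion") in a noisy ROI.
y, x = np.mgrid[:n, :n]
lesion = 20.0 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 4.0 ** 2))

def make_roi(signal_present: bool) -> np.ndarray:
    noise = rng.normal(0.0, 15.0, size=(n, n))        # white quantum-like noise
    return lesion + noise if signal_present else noise

# (2) Observer: NPW matched filter, template = expected signal.
template = lesion.ravel()
def observer_score(roi: np.ndarray) -> float:
    return float(template @ roi.ravel())

# (3) Performance: detectability index d' from the two score distributions.
scores_sp = np.array([observer_score(make_roi(True)) for _ in range(trials)])
scores_sa = np.array([observer_score(make_roi(False)) for _ in range(trials)])
d_prime = (scores_sp.mean() - scores_sa.mean()) / np.sqrt(
    0.5 * (scores_sp.var() + scores_sa.var()))
print(f"NPW detectability index d' = {d_prime:.2f}")
```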
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
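For comparison with the NPW sketch above, the following sketch estimates detectability with a channelized Hotelling observer using Laguerre-Gauss channels, one member of the observer model family named here. The channel parameters and the simulated task are illustrative assumptions, not the specific configuration used in the study.

```python
# Minimal sketch of a channelized Hotelling observer (CHO) with Laguerre-Gauss
# channels, estimated from ensembles of signal-present and signal-absent ROIs.
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(n_pix, n_channels=6, width=15.0):
    """Rotationally symmetric Laguerre-Gauss channels, one column per channel."""
    yy, xx = np.mgrid[:n_pix, :n_pix] - (n_pix - 1) / 2.0
    g = 2.0 * np.pi * (xx ** 2 + yy ** 2) / width ** 2
    chans = [np.exp(-g / 2.0) * eval_laguerre(p, g) for p in range(n_channels)]
    return np.stack([c.ravel() for c in chans], axis=1)          # (n_pix^2, n_channels)

def cho_dprime(rois_sp, rois_sa, U):
    """Hotelling detectability in channel space from two ROI ensembles."""
    v_sp = rois_sp.reshape(len(rois_sp), -1) @ U                 # channel outputs
    v_sa = rois_sa.reshape(len(rois_sa), -1) @ U
    dv = v_sp.mean(axis=0) - v_sa.mean(axis=0)
    S = 0.5 * (np.cov(v_sp, rowvar=False) + np.cov(v_sa, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Toy usage: detect a blurred disc in white noise (stands in for phantom ROIs).
rng = np.random.default_rng(0)
n, trials = 64, 400
yy, xx = np.mgrid[:n, :n]
signal = 15.0 * np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / (2 * 5.0 ** 2))
rois_sa = rng.normal(0.0, 20.0, size=(trials, n, n))
rois_sp = rng.normal(0.0, 20.0, size=(trials, n, n)) + signal
U = lg_channels(n)
print(f"CHO d' = {cho_dprime(rois_sp, rois_sa, U):.2f}")
```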
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
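For readers unfamiliar with NPS estimation, the following sketch shows the conventional ensemble estimate from repeated scans using rectangular ROIs; the irregular-ROI method developed in the dissertation is not reproduced here, and the pixel size and noise values are illustrative.

```python
# Minimal sketch of noise power spectrum (NPS) estimation from repeated scans,
# using rectangular ROIs only (not the dissertation's irregular-ROI method).
import numpy as np

def nps_2d(rois, pixel_size_mm=0.5):
    """Ensemble NPS from a stack of ROIs taken from repeat scans of the same object."""
    rois = np.asarray(rois, dtype=float)
    noise = rois - rois.mean(axis=0)                  # remove the deterministic background
    n_roi, ny, nx = noise.shape
    dft = np.fft.fft2(noise, axes=(1, 2))
    nps = (np.abs(dft) ** 2).mean(axis=0)             # ensemble average of |DFT|^2
    nps *= pixel_size_mm ** 2 / (nx * ny)             # scale to units of HU^2 * mm^2
    return np.fft.fftshift(nps)

# Toy usage with synthetic white noise (stands in for 50 repeat scans of one ROI).
rng = np.random.default_rng(1)
rois = rng.normal(0.0, 10.0, size=(50, 64, 64))
nps = nps_2d(rois)
print(nps.shape, nps.mean())
```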
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
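The sketch below gives a toy version of such a lesion model: a radially symmetric profile parameterized by size, contrast, and edge width, voxelized and added to a background ROI to form a hybrid image. The functional form and parameter values are illustrative assumptions, not the specific models developed in this work.

```python
# Minimal sketch of an analytical lesion model (size, contrast, edge profile)
# voxelized and inserted into an image ROI to form a "hybrid" image.
import numpy as np

def lesion_profile(shape, center, radius_mm, contrast_hu, edge_mm, pixel_mm):
    """Radially symmetric lesion with a sigmoid edge profile."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1]) * pixel_mm     # radius in mm
    edge = 1.0 / (1.0 + np.exp((r - radius_mm) / edge_mm))      # ~1 inside, ~0 outside
    return contrast_hu * edge

def insert_lesion(patient_roi, lesion):
    """Additive insertion of the voxelized lesion into a patient ROI."""
    return patient_roi + lesion

# Toy usage: a 10 mm, -15 HU lesion with a 1 mm edge, on a synthetic background.
background = np.full((128, 128), 60.0) + np.random.default_rng(2).normal(0, 12, (128, 128))
lesion = lesion_profile((128, 128), (64, 64), radius_mm=5.0,
                        contrast_hu=-15.0, edge_mm=1.0, pixel_mm=0.7)
hybrid = insert_lesion(background, lesion)
print(hybrid[64, 64] - background[64, 64])   # ~ -15 HU at the lesion centre
```

Because the inserted lesion is generated analytically, its size, contrast, and location are known exactly, which is what allows hybrid images to serve as ground truth for detectability and estimability studies.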
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others' goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus the responsibility of the imaging community to optimize the radiation dose used in CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function is explicitly modeled.
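A highly simplified sketch of a convolution-style organ dose estimate under TCM is given below: the z-axis tube-current profile is convolved with a longitudinal spread kernel and averaged over the organ's slice range. The kernel shape, organ extent, and dose coefficient are illustrative assumptions, not the validated models of this chapter.

```python
# Minimal sketch of a convolution-style organ dose estimate under tube current
# modulation (TCM). All numerical values below are illustrative placeholders.
import numpy as np

def organ_dose_tcm(tube_current_mA, organ_slices, dose_kernel, dose_per_mAs):
    """Convolve the z-axis mA profile with a longitudinal spread kernel and
    average the resulting 'radiation field' over the slices occupied by the organ."""
    field = np.convolve(tube_current_mA, dose_kernel, mode="same")
    return dose_per_mAs * field[organ_slices].mean()

# Toy usage: a sinusoidal TCM profile over 200 slices, a hypothetical organ
# spanning slices 80-120, and a normalized triangular kernel for scatter spread.
z = np.arange(200)
mA_profile = 150.0 + 50.0 * np.sin(z / 15.0)
kernel = np.bartlett(21)
kernel /= kernel.sum()
organ = slice(80, 120)
print(f"estimated organ dose ~ {organ_dose_tcm(mA_profile, organ, kernel, 0.02):.2f} mGy")
```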
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
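One common image-based approach, sketched below under stated assumptions, is to histogram local standard deviations within a soft-tissue HU window and take the histogram peak as a global noise level. This is offered only as a generic illustration; it is not necessarily the method developed in Chapter 6, and the HU window and kernel size are arbitrary choices.

```python
# Minimal sketch of estimating quantum noise directly from a clinical image via
# the peak of the local-standard-deviation histogram over soft-tissue pixels.
import numpy as np
from scipy.ndimage import uniform_filter

def global_noise_level(image_hu, hu_window=(0, 100), kernel=5):
    """Peak of the local-SD histogram over soft-tissue pixels (in HU)."""
    mean = uniform_filter(image_hu, kernel)
    mean_sq = uniform_filter(image_hu ** 2, kernel)
    local_sd = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
    mask = (image_hu > hu_window[0]) & (image_hu < hu_window[1])
    hist, edges = np.histogram(local_sd[mask], bins=200)
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])

# Toy usage on a synthetic "soft tissue" image with 12 HU noise.
rng = np.random.default_rng(4)
img = 50.0 + rng.normal(0.0, 12.0, size=(512, 512))
print(f"estimated noise ~ {global_noise_level(img):.1f} HU")
```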
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
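As a minimal illustration of step (1), the sketch below adds zero-mean Gaussian noise to a full-dose image so that the total variance matches a target dose fraction. It assumes quantum noise variance scales inversely with dose and ignores the spatial noise correlation that a projection-domain simulation would capture; the noise level and dose fraction are illustrative.

```python
# Minimal sketch of image-based noise addition for simulating a reduced-dose scan.
import numpy as np

def simulate_reduced_dose(image_hu, full_dose_noise_sd, dose_fraction, rng=None):
    """Return a synthetic image at `dose_fraction` of the acquired dose."""
    rng = rng or np.random.default_rng()
    # Target variance at reduced dose: sigma_full^2 / f; add only the missing part.
    added_sd = full_dose_noise_sd * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, added_sd, size=image_hu.shape)

# Toy usage: simulate a half-dose version of a synthetic 100%-dose image.
rng = np.random.default_rng(3)
full_dose = 40.0 + rng.normal(0.0, 10.0, size=(256, 256))     # sigma ~ 10 HU at full dose
half_dose = simulate_reduced_dose(full_dose, full_dose_noise_sd=10.0,
                                  dose_fraction=0.5, rng=rng)
print(half_dose.std())   # ~ 14 HU, i.e. 10 * sqrt(1 / 0.5)
```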
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
This chapter investigates the significance of specialized journals for the development of modern language teaching. It begins by explaining the development of language journals up to the point at which language teaching reform really took off with the emergence of the so-called Reform Movement in the 1880s. The principal journal for this movement was Phonetische studien [Phonetic Studies] founded in 1888 and renamed Die neueren Sprachen [Modern languages] in 1894. The style of the early issues of this journal allows modern readers an insight into the discourse practices of that community of language scholars and teachers, the opportunity to hear its characteristic ‘voice’ and recreate the means by which modern foreign language teaching became an independent discipline.