8 results for popularity

at Duke University


Relevance: 10.00%

Abstract:

The concept of focal therapy is rapidly evolving and gaining popularity from both physician and patient perspectives. We review the rationale, candidate selection, and results of the first clinical studies of focal cryoablation for selected patients with low-volume, low- to low-moderate-risk prostate cancer as an alternative to whole-gland treatment. In spite of improved understanding of the tumor biology of early stage disease, we currently have limited tools to select appropriate patients with low- to low-moderate-risk unifocal or unilateral prostate cancer who may be amenable to focal therapy. From a technical standpoint, a number of ablative treatment options for focal therapy are available, with cryoablation having the most clinical experience. Recently, several reports have been published from single- and multi-institutional studies that discuss focal therapy as a reasonable balance between cancer control and quality-of-life outcomes. Retrospective pathologic data from large prostatectomy series, however, do not clearly reveal valid and reproducible criteria for selecting appropriate candidates for focal cryoablation, because of the complexity of tumorigenesis in early stage disease. At this time, a more feasible option remains hemiablation of the prostate, with reasonable certainty about the absence of clinically significant cancer lesion(s) on the contralateral side of the prostate based on three-dimensional transperineal prostate biopsy mapping studies. Minimally invasive, parenchyma-preserving cryoablation can be considered a potentially feasible option in the treatment armamentarium of early stage, localized prostate cancer in appropriately selected candidates. There is a need to further test this technique in randomized, multicenter clinical trials.

Relevance: 10.00%

Abstract:

BACKGROUND: Writing plays a central role in the communication of scientific ideas and is therefore a key aspect of researcher education, ultimately determining the success and long-term sustainability of research careers. Despite the growing popularity of e-learning, we are not aware of any existing study comparing on-line vs. traditional classroom-based methods for teaching scientific writing. METHODS: Forty-eight participants from medical, nursing, and physiotherapy backgrounds from the US and Brazil were randomly assigned to two groups (n = 24 per group): an on-line writing workshop group (on-line group), in which participants used virtual communication, Google Docs, and standard writing templates, and a standard writing guidance training group (standard group), in which participants received standard instruction without the aid of virtual communication and writing templates. Two outcomes were evaluated: manuscript quality, assessed using scores on the Six Subgroup Quality Scale (SSQS) as the primary outcome measure, and satisfaction, rated on a Likert scale. To control for observer variability, inter-observer reliability was assessed using Fleiss's kappa. A post-hoc analysis comparing rates of communication between mentors and participants was performed. Nonparametric tests were used to assess intervention efficacy. RESULTS: Excellent inter-observer reliability among three reviewers was found, with an Intraclass Correlation Coefficient (ICC) agreement of 0.931882 and ICC consistency of 0.932485. The on-line group had better overall manuscript quality (p = 0.0017; SSQSavg score 75.3 +/- 14.21, ranging from 37 to 94) than the standard group (47.27 +/- 14.64, ranging from 20 to 72). Participant satisfaction was higher in the on-line group (4.3 +/- 0.73) than in the standard group (3.09 +/- 1.11) (p = 0.001). The standard group also had fewer communication events than the on-line group (0.91 +/- 0.81 vs. 2.05 +/- 1.23; p = 0.0219). CONCLUSION: Our protocol for on-line scientific writing instruction was superior to standard face-to-face instruction in terms of writing quality and student satisfaction. Future studies should evaluate the protocol's efficacy in larger longitudinal cohorts involving participants with different native languages.

Relevance: 10.00%

Abstract:

Very long-term memory for popular music was investigated. Older and younger adults listened to 20-sec excerpts of popular songs drawn from across the 20th century. The subjects gave emotionality and preference ratings and tried to name the title, artist, and year of popularity for each excerpt. They also performed a cued memory test for the lyrics. The older adults' emotionality ratings were highest for songs from their youth; they remembered more about these songs, as well. However, the stimuli failed to cue many autobiographical memories of specific events. Further analyses revealed that the older adults were less likely than the younger adults to retrieve multiple attributes of a song together (i.e., title and artist) and that there was a significant positive correlation between emotion and memory, especially for the older adults. These results have implications for research on long-term memory, as well as on the relationship between emotion and memory.

Relevance: 10.00%

Abstract:

The global value chain (GVC) concept has gained popularity as a way to analyze the international expansion and geographical fragmentation of contemporary supply chains and value creation and capture therein. It has been used broadly in academic publications that examine a wide range of global industries, and by many of the international organizations concerned with economic development. This note highlights some of the main features of GVC analysis and discusses the relationship between the core concepts of governance and upgrading. The key dynamics of contemporary global supply chains and their implications for global production and trade are illustrated by: (1) the consolidation of global value chains and the new geography of value creation and capture, with an emphasis on China; (2) the key roles of global supermarkets and private standards in agri-food supply chains; and (3) how the recent economic crisis contributes to shifting end markets and the regionalization of value chains. It concludes with a discussion of the future direction of GVC analysis and a potential collaboration with supply chain researchers. © 2012 Institute for Supply Management, Inc.

Relevance: 10.00%

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included simple metrics of image quality, such as contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched-filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched-filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
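
To make the comparison concrete, here is a minimal Python sketch of the two classes of metrics contrasted above: a simple CNR and a spatial-domain non-prewhitening matched-filter detectability index. It is a generic illustration of the standard formulations, not the dissertation's implementation, and all names (signal_template, noise_samples, etc.) are illustrative.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: mean contrast over background noise."""
    contrast = signal_roi.mean() - background_roi.mean()
    return contrast / background_roi.std(ddof=1)

def npw_dprime(signal_template, noise_samples):
    """Spatial-domain non-prewhitening (NPW) matched-filter detectability.

    signal_template: 2D expected signal (mean signal-present image minus
        mean signal-absent image).
    noise_samples: (n, H, W) stack of signal-absent noise realizations.
    d' = (s^T s) / sqrt(s^T K s); the denominator is estimated as the
    standard deviation of the template's response to each noise sample.
    """
    s = signal_template.ravel()
    responses = noise_samples.reshape(len(noise_samples), -1) @ s
    return (s @ s) / responses.std(ddof=1)

# Illustrative use: a 20 HU Gaussian blob in white noise
rng = np.random.default_rng(0)
yy, xx = np.indices((64, 64))
blob = 20.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 4.0 ** 2))
noise = rng.normal(0.0, 10.0, size=(200, 64, 64))
print(npw_dprime(blob, noise))
```

Unlike CNR, the NPW index weights every pixel by the expected signal profile, which is one reason such models track human performance across reconstruction algorithms better than a single contrast-over-noise ratio.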

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, such phantoms may not be fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that for FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
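
The image subtraction technique lends itself to a short sketch. The following Python example shows the standard form of the method, assuming two repeated scans of the same phantom are available (variable names are illustrative):

```python
import numpy as np

def quantum_noise(scan_a, scan_b, roi_mask):
    """Per-image quantum noise via repeated-scan subtraction.

    Subtracting two independent scans of the same phantom cancels the
    deterministic background (uniform or textured); the difference image
    contains only noise, with twice the variance of a single image.
    scan_a, scan_b: 2D arrays of the same slice from two repeated scans.
    roi_mask: boolean array selecting the measurement region.
    """
    diff = scan_a.astype(float) - scan_b.astype(float)
    return diff[roi_mask].std(ddof=1) / np.sqrt(2.0)
```

Because the phantom structure cancels in the difference image, this measurement isolates quantum noise even on textured backgrounds, which is what makes the FBP vs. SAFIRE comparison above possible.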

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing image quality of iterative algorithms.
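
The dissertation's irregular-ROI NPS estimator is not reproduced here, but the conventional rectangular-ROI ensemble estimate that it generalizes can be sketched as follows, assuming a stack of noise-only ROIs obtained from repeated scans:

```python
import numpy as np

def nps_2d(noise_rois, pixel_size_mm):
    """Ensemble 2D noise power spectrum (NPS) from square noise-only ROIs.

    noise_rois: (n, N, N) stack of noise ROIs, e.g., obtained from
        scan-pair subtraction (already scaled by 1/sqrt(2)).
    pixel_size_mm: pixel spacing in mm; NPS units are HU^2 * mm^2.
    NPS(fx, fy) = (dx * dy / (Nx * Ny)) * <|DFT2{roi - mean(roi)}|^2>
    """
    n, ny, nx = noise_rois.shape
    detrended = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)
    power = np.abs(np.fft.fftshift(np.fft.fft2(detrended), axes=(1, 2))) ** 2
    nps = power.mean(axis=0) * (pixel_size_mm ** 2) / (nx * ny)
    freqs = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_mm))  # cycles/mm
    return freqs, nps
```

Averaging the squared DFT magnitude over the 50-scan ensemble is what yields stable noise statistics; the irregular-ROI method was needed because locally non-stationary noise near edges cannot be captured with large rectangular ROIs.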

To move beyond assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
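
As a concrete, hypothetical illustration of such an analytical lesion model, the sketch below parameterizes a circular lesion by size, contrast, and edge profile and adds it to a patient ROI to form a hybrid image. The actual models in this work also captured shape and, in a later study, were inserted into raw projection data rather than images, so this is a deliberate simplification.

```python
import numpy as np

def lesion_model(shape, center, radius_px, contrast_hu, edge_width_px):
    """Analytical lesion model: a disk with a logistic (sigmoidal) edge.

    A simplified, hypothetical instance of the morphology models described
    above: size (radius_px), contrast (contrast_hu), and edge profile
    (edge_width_px) are explicit parameters; the shape is fixed to a disk
    for brevity. Returns an image to add to a patient ROI, yielding a
    "hybrid" image whose ground truth is known exactly.
    """
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return contrast_hu / (1.0 + np.exp((r - radius_px) / edge_width_px))

# Hypothetical usage (patient_roi is an illustrative name, not from the
# dissertation's code): insert a subtle -15 HU lesion at pixel (64, 64).
# hybrid_roi = patient_roi + lesion_model(patient_roi.shape, (64, 64), 8.0, -15.0, 1.5)
```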

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance: 10.00%

Abstract:

Few symbols of 1950s-1960s America remain as central to our contemporary conception of Cold War culture as the iconic ranch-style suburban home. While the house took center stage in the Nixon/Khrushchev kitchen debates as a symbol of modern efficiency and capitalist values, its popularity depended largely upon its obvious appropriation of vernacular architecture from the 19th century, those California haciendas and Texas dogtrots that dotted the American west. Contractors like William Levitt modernized the historical common houses, hermetically sealing their porous construction, all while using the ranch-style roots of the dwelling to galvanize a myth of an indigenous American culture. At a moment of intense occupational bureaucracy, political uncertainty and atomized social life, the rancher gave a self-identifying white consumer base reason to believe they could master their own plot in the expansive frontier. Only one example of America’s mid-century love affair with commodified vernacular forms, the ranch-style home represents a broad effort on the part of corporate and governmental interest groups to transform the vernacular into a style that expresses a distinctly homogenous vision of American culture. “Other than a Citizen” begins with an anatomy of that transformation, and then turns to the work of four poets who sought to reclaim the vernacular from that process of standardization and use it to countermand the containment-era strategies of Cold War America.

In four chapters, I trace references to common speech and verbal expressivity in the poetry and poetic theory of Charles Olson, Robert Duncan, LeRoi Jones/Amiri Baraka and Gwendolyn Brooks, against the historical backdrop of the Free-Speech Movement and the rise of mass-culture. When poets frame nonliterary speech within the literary page, they encounter the inability of writing to capture the vital ephemerality of verbal expression. Rather than treat this limitation as an impediment, the writers in my study use the poem to dramatize the fugitivity of speech, emphasizing it as a disruptive counterpoint to the technologies of capture. Where critics such as Houston Baker interpret the vernacular strictly in terms of resistance, I take a cue from the poets and argue that the vernacular, rooted etymologically at the intersection of domestic security and enslaved margin, represents a gestalt form, capable at once of establishing centralized power and sparking minor protest. My argument also expands upon Michael North’s exploration of the influence of minstrelsy and regionalism on the development of modernist literary technique in The Dialect of Modernism. As he focuses on writers from the early 20th century, I account for the next generation, whose America was not a culturally inferior collection of immigrants but an imperial power, replete with economic, political and artistic dominance. Instead of settling for an essentially American idiom, the poets in my study saw in the vernacular not phonetic misspellings, slang terminology and fragmented syntax, but the potential to provoke and thereby frame a more ethical mode of social life, straining against the regimentation of citizenship.

My attention to the vernacular argues for an alignment among writers who have been segregated by the assumption that race and aesthetics are mutually exclusive categories. In reading these writers alongside one another, “Other than a Citizen” shows how the avant-garde concepts of projective poetics and composition by field develop out of an interest in black expressivity. Conversely, I trace black radicalism and its emphasis on sociality back to the communalism practiced at the experimental arts college in Black Mountain, North Carolina, where Olson and Duncan taught. In pressing for this connection, my work reveals the racial politics embedded within the speech-based aesthetics of the postwar era, while foregrounding the aesthetic dimension of militant protest.

Not unlike today, the popular rhetoric of the Cold War insists that to be a citizen involves defending one’s status as a rightful member of an exclusionary nation. To be other than a citizen, as the poets in my study make clear, begins with eschewing the false certainty that accompanies categorical nominalization. In promoting a model of mutually dependent participation, these poets lay the groundwork for an alternative model of civic belonging, where volition and reciprocity replace compliance and self-sufficiency. In reading their lines, we become all the more aware of the cracks that run the length of our load-bearing walls.

Relevance: 10.00%

Abstract:

This dissertation examines the publication history of a single work: John Calvin’s 1552 Quatre sermons de M. Jehan Calvin traictans des matières fort utiles pour nostre temps, avec briefve exposition du Pseaume lxxxvii. Overlooked both for its contribution to Calvin’s wider corpus and for its surprising popularity in English translation, successive editions of Quatre sermons display how Calvin’s argument against the behavior of so-called “Nicodemites” was adapted to various purposes unrelated to refuting religious dissimulation. The present study contributes to research on Calvin’s anti-Nicodemism by highlighting the fruitfulness of focusing on a discrete work and its reception. Borrowing a term (“Newter”) from John Field’s 1579 translation of Quatre sermons, this study’s title adumbrates its argument. English translators capitalized on the intrinsic malleability of a nameless and faceless opponent, the Nicodemite, and the adaptability of Quatre sermons’ genre as a collection of sermons to reshape (or, if you will, disfigure) both Calvin’s original foes and his case against them to advance various new agendas. Yet they were not the first to use the reformer’s sermons this way. They could have learned this from Calvin himself.

My examination of Quatre sermons opens by setting the work in the context of Calvin’s other writings and his political situation (Introduction, chapters one and two). Calvin’s unrelenting literary assault on French Nicodemism over three decades has long been recognized for its consistency and negativity. Yet scholars have tended to neglect how Calvin’s polemic against religious dissimulation could exhibit significant flexibility according to the needs of his context. Whereas Calvin’s preface promises simply to revisit his previous argument against participation in the Mass, his approach to Nicodemism in Quatre sermons seems adapted to accomplish goals beyond decrying false worship, offering a carefully-crafted apology for Calvin’s pastoral authority directed at his political situation. Repeatedly emphasizing God’s purpose to bless his children through the ministry of a rightly-ordered church, Quatre sermons marks a shift in Calvin’s anti-Nicodemite rhetoric away from purely negative critique, stressing instead God’s provision of spiritual nurture via political exile. Read in light of Calvin’s 1552 context, two audiences emerge: sermons ostensibly targeting believers in France who hid their faith also appear especially designed to silence Calvin’s foes in Geneva.

The remainder of the study examines the reception of Quatre sermons in the rapidly shifting religious and social contexts of Marian and Elizabethan England, where it appeared in more unique editions than any of Calvin’s writings besides the Institutio and the reformer’s 1542/45 Genevan Catechism. Calvin’s anti-Nicodemism has not been examined for its distinct contribution to the overall English reception of his thought. Five English versions of Quatre sermons appeared between 1553 and 1584—four of these under a Protestant queen, a situation quite different from the French context Calvin addressed. After situating Calvin’s position within the currents of Tudor Protestant anti-Nicodemism (chapter three), I place each of the five translations in its particular context, investigating prefaces, appendices, marginalia, and translation methods to discover how and why individuals used Quatre sermons (chapters four to six). Like Calvin in 1552, those who brought Quatre sermons to English readers were not primarily concerned with Nicodemism. Rather, the malleability of Calvin’s Nicodemite as polemical opponent and the flexibility of Quatre sermons as a sequence of discrete, interrelated parts made it popular with those eager to press Calvin into the service of a variety of diverse goals he could not have imagined, including turning his anti-Nicodemism against fellow members of the English church.

Relevance: 10.00%

Abstract:

BACKGROUND: In light of evidence showing reduced criminal recidivism and cost savings, adult drug treatment courts (DTCs) have grown in popularity. However, the potential spillover benefits to family members are understudied. OBJECTIVES: To examine: (1) the overlap between parents who were convicted of a substance-related offense and their children's involvement with child protective services (CPS); and (2) whether parental participation in an adult drug treatment court program reduces children's risk for CPS involvement. METHODS: Administrative data from North Carolina courts, birth records, and social services were linked at the child level. First, children of parents convicted of a substance-related offense were matched to (a) children of parents convicted of a nonsubstance-related offense and (b) children of parents not convicted of any offense. Second, we compared children of parents who completed a DTC program with children of parents who were referred but did not enroll, who enrolled for <90 days but did not complete, and who enrolled for 90+ days but did not complete. Multivariate logistic regression was used to model group differences in the odds of being reported to CPS in the 1 to 3 years following parental criminal conviction or, alternatively, referral to a DTC program. RESULTS: Children of parents convicted of a substance-related offense were at greater risk of CPS involvement than children whose parents were not convicted of any charge, but DTC participation did not mitigate this risk. CONCLUSION/IMPORTANCE: The role of specialty courts as a strategy for reducing children's risk of maltreatment should be further explored.