457 results for artifact


Abstract:

Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
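To make the decoupling concrete, the sketch below illustrates the general idea of a synthesis engine whose control flow (the GMoE) never touches domain concepts and instead delegates them to a swappable DSK extension. All class and method names are hypothetical illustrations, not the dissertation's actual interfaces.

```python
from abc import ABC, abstractmethod

class DomainSpecificKnowledge(ABC):
    """Swappable extension holding all domain semantics (hypothetical interface)."""

    @abstractmethod
    def analyze(self, model_change):
        """Map a runtime model change to domain-level operations."""

    @abstractmethod
    def to_script(self, operations):
        """Render domain operations as a script for the next lower DSVM layer."""

class GenericSynthesisEngine:
    """Domain-agnostic model of execution (GMoE): the control flow never
    references domain concepts, only the DSK extension it was given."""

    def __init__(self, dsk: DomainSpecificKnowledge):
        self.dsk = dsk

    def synthesize(self, model_change):
        operations = self.dsk.analyze(model_change)   # domain reasoning delegated
        return self.dsk.to_script(operations)         # script for the lower layer

# Instantiating SEs for two domains would reuse the same GMoE, e.g.:
#   comm_se = GenericSynthesisEngine(CommunicationDSK())   # CML/CVM domain
#   grid_se = GenericSynthesisEngine(MicrogridDSK())       # microgrid domain
```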

Abstract:

This dissertation develops a new mathematical approach that overcomes the effect of a data-processing phenomenon known as "histogram binning," inherent to flow cytometry data. A real-time procedure is introduced to demonstrate the effectiveness and fast implementation of the approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in histogram form is extended in its dynamic range to improve its analysis and interpretation, and (2) the inevitable dynamic range extension introduces an unwelcome side effect, the binning effect, which skews the statistics of the data, undermining as a consequence the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field contended with this dilemma for many years, resorting either to hardware approaches, which are rather costly and carry inherent calibration and noise effects, or to software techniques based on filtering out the binning effect, which do not successfully preserve the statistical content of the original data. The mathematical approach introduced in this dissertation is appealing enough that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that allows researchers in the field of flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, with the same mathematical foundation, proof of the origin of this inherent artifact is provided. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect faced at the experimental assessment level, providing a data platform that preserves its statistical content. In addition, a novel method for accumulating the log-transformed data was developed. This method uses the properties of transformations of statistical distributions to accumulate the output histogram in a non-integer, multi-channel fashion. Although the mathematics of this new mapping technique seems intricate, the concise nature of the derivations allows for an implementation procedure that lends itself to real-time implementation using lookup tables, a task that is also introduced in this dissertation.
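As an illustration of accumulating log-transformed data in a non-integer, multi-channel fashion via lookup tables, the sketch below precomputes the fractional log-axis position of each linear channel and splits every count between the two neighbouring output channels. It assumes a 1024-channel linear input mapped onto a 256-channel, four-decade log axis, and is a generic illustration of the idea rather than the patented method described above.

```python
import numpy as np

def build_log_lut(n_in=1024, n_out=256, decades=4.0):
    """Precompute, for each linear input channel, the (fractional) position
    it maps to on the logarithmic output axis."""
    lin = np.arange(1, n_in + 1, dtype=float)                 # avoid log(0)
    pos = np.log10(lin / lin[-1]) / decades * n_out + n_out
    return np.clip(pos, 0.0, n_out - 1e-9)

def accumulate_log_histogram(counts, lut, n_out=256):
    """Accumulate a log-scaled histogram in a non-integer, two-channel fashion:
    each input count is shared between the two nearest output channels so the
    statistics are not distorted by picket-fence (binning) artifacts."""
    hist = np.zeros(n_out)
    lo = np.floor(lut).astype(int)
    frac = lut - lo
    np.add.at(hist, lo, counts * (1.0 - frac))                # weight to lower channel
    np.add.at(hist, np.minimum(lo + 1, n_out - 1), counts * frac)  # and to upper channel
    return hist

counts = np.random.poisson(50.0, size=1024)   # hypothetical per-channel event counts
log_hist = accumulate_log_histogram(counts, build_log_lut())
```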

Abstract:

This research studies the use of greeting cards, here understood as a literacy practice widely used in the society of the United States. In American culture, these cards become sources of information and memory about people's life cycles, their experiences and their bonds of sociability, enabled by the senses that image and word comprise. The main purpose of this work is to describe how this literacy practice occurs in American society. Theoretically, this research is based on studies of literacy (BARTON; HAMILTON, 1998; BAYHAM, 1995; HAMILTON, 2000; STREET, 1981, 1984, 1985, 1993, 2003), the contributions of social semiotics associated with systemic-functional grammar (HALLIDAY; HASAN, 1978, 1985; HALLIDAY, 1994; HALLIDAY; MATTHIESSEN, 2004), and the grammar of visual design (KRESS; LEITE-GARCIA; VAN LEEUWEN, 1997, 2004, 2006; KRESS; MATTHIESSEN, 2004). Methodologically, it is a study within the qualitative, interpretative paradigm that adopts ethnographic tools in data generation. From this perspective, it makes use of "looking and asking" techniques (ERICKSON, 1986, p. 119), complemented by the technique of "registering" proposed by Paz (2008). The corpus comprises 104 printed cards provided by users of this cultural artifact, from which 24 were selected, and 11 e-cards extracted from the internet, as well as verbalizations obtained by applying a questionnaire with open questions designed to gather information about the perceptions and actions of card users with respect to this literacy practice. Data analysis reveals cultural, economic and social aspects of this practice and supports the belief that the literacy practice of using printed greeting cards, despite the existence of virtual alternatives, is still very fruitful in American society. The study also shows that card users position themselves and construct identities that are expressed in verbal and visual interaction in order to achieve the desired effect. As a result, it is understood that greeting cards are not unintentional, but loaded with ideology and power relations, among other aspects that are constitutive of them.

Abstract:

Portuguese language textbooks, according to what is prescribed in the official documents for education, have been organized around discursive genres imported from diverse spheres of human activity. The advert, a genre of wide social circulation, spread from the advertising sphere to schools and came to be approached by these collections as both an object and a tool for teaching. This research therefore deals with the approach to ads in Portuguese language textbooks. These discursive practices matter for the impact or appeal they exert over (new) consumers, among them high school students; for their representation in the capitalist system, which guides our relationships and social practices; and for the mix of languages that composes them, since they encapsulate the spirit of our time, being, par excellence, verbal-visual genres. To understand the treatment given to these advertising pieces, based on the questions and commentaries related to them, two collections approved by the Programa Nacional do Livro Didático (Textbook National Program, PNLD 2012), among those most used by public high schools in Natal/RN, were selected. Grounded in Applied Linguistics, with its mestizo, nomadic and inter/transdisciplinary identity (MOITA LOPES, 2009), this study falls within the discursive chain of the interpretive tradition of the historical-cultural approach (FREITAS, 2010) and takes the Bakhtin Circle and its dialogical conception of language as inescapable partners. The data from the collections show that the genre can be approached as concrete utterance, as linguistic artifact, or as a hybrid, in activities with and without questions, with a predominance of occurrences in the portion of the volume devoted to the study of grammar. In the chapters on literature and on text production/interpretation, its insertion is incipient or absent. Such a distribution has implications for the multiliteracies (ROJO, 2012) of the student citizen, since the lack or abundance of critical reading proposals for this genre, which demand from the student the exercise of knowledge necessary for the construction of linguistic and social meanings, can be responsible for guiding the main consumers of the works under review toward more conscious (material and cultural) consumption. The approaches to the genre seem to indicate a gradual transition that such materials have undergone, that is, from a focus on clauses to a focus on utterances, or from the approach as linguistic artifact to the hybrid and the concrete utterance, in search of overcoming the traditional tendency to exploit formal aspects of the language to the detriment of enunciative ones, and of coming into harmony with the guidelines and parameters for teaching in contemporary times, bringing school duties closer to rights in life.

Abstract:

This dissertation argues that the book as we know it will not cease to exist. It is, in a sense, a manifesto praising the artifact of words within the scope of literature and scientific culture. The work takes Umberto Eco and Jean-Claude Carrière's book Não contem com o fim do livro (2010) as its cognitive operator. It presents a brief overview of the evolution of informational supports and narrates a history of the book as constructed on complex bases; it also highlights the permanent and current status of the book, drawing on the concept of the contemporary proposed by the Italian philosopher Giorgio Agamben, as opposed to the ephemeral character of technological informational supports; moreover, it takes the book as a tool for the learning of science and culture, as a school for life, as put by Edgar Morin when referring to the novel in some of his works on education; and it presents, as supporting evidence, two interviews with book-loving scholars from the Federal University of Rio Grande do Norte. Thinkers of science such as Edgar Morin, Maria da Conceição de Almeida, Ilya Prigogine, Giorgio Agamben, Pierre Lévy, Umberto Eco and Roger Chartier, among others, are used as theoretical references. The dissertation places itself at the interface between literature, complexity and education.

Abstract:

The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known result but not as well exemplified. Resolution in the 2D case is revisited using a simple geometric approach based on the angular aperture distribution and the properties of the Radon transform. Analytically, it is shown that if an interface has dips contained within the angular aperture limits at all points, it is correctly imaged in the tomogram. Inversion of synthetic data confirms this result and also shows that isolated artifacts may be present when the dip is near the illumination limit. In the inverse sense, however, if an interface is interpretable from a tomogram, even an approximately horizontal interface, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region it is diffusely imaged in the tomogram, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly can also be an artifact. Jointly, these results state the dilemma of ill-posed inverse problems: the absence of any guarantee of correspondence to the true distribution. The limitations due to illumination may not be solved by the use of mathematical constraints. It is shown that crosswell tomograms derived using sparsity constraints, with both Discrete Cosine Transform and Daubechies bases, basically reproduce the same features seen in tomograms obtained with the classic smoothness constraint. Interpretation must always be done taking into consideration the a priori information and the particular limitations due to illumination. An example of interpreting a real data survey in this context is also presented.
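The geometric criterion stated above (a dip can only be imaged correctly if it lies within the angular aperture) can be checked directly from the acquisition geometry. The sketch below assumes straight rays between source and receiver positions in two vertical wells; the depths and well separation are illustrative values, not data from the survey discussed here.

```python
import numpy as np

def angular_aperture(src_depths, rec_depths, well_separation):
    """Ray angles (degrees from horizontal) for straight rays between all
    source/receiver pairs in a crosswell geometry."""
    dz = np.subtract.outer(rec_depths, src_depths)        # vertical offsets
    angles = np.degrees(np.arctan2(dz, well_separation))
    return angles.min(), angles.max()

def dip_is_illuminated(dip_deg, src_depths, rec_depths, well_separation):
    """An interface dip lying outside the angular aperture spanned by the
    acquisition geometry cannot be resolved in the tomogram."""
    amin, amax = angular_aperture(src_depths, rec_depths, well_separation)
    return amin <= dip_deg <= amax

# Example: 100 m well separation, sources/receivers every 5 m down to 200 m.
z = np.arange(0.0, 200.0, 5.0)
print(angular_aperture(z, z, 100.0))            # roughly (-63, +63) degrees
print(dip_is_illuminated(85.0, z, z, 100.0))    # near-vertical dip: False
```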


Abstract:

A heat loop suitable for the study of thermal fouling and its relationship to corrosion processes was designed, constructed and tested. The design adopted was an improvement over those used by investigators such as Hopkins and the Heat Transfer Research Institute, in that very low levels of fouling could be detected accurately, the heat transfer surface could be readily removed for examination, and the chemistry of the environment could be carefully monitored and controlled. In addition, an indirect method of electrical heating of the heat transfer surface was employed to eliminate the magnetic and electric effects that result when direct resistance heating is applied to a test section. The loop was tested using a 316 stainless steel test section and a suspension of ferric oxide in water, in an attempt to duplicate the results obtained by Hopkins. Two types of thermal fouling resistance versus time curves were obtained. (i) An asymptotic fouling curve, similar to the fouling behaviour described by Kern and Seaton and other investigators, was the most frequent type obtained: thermal fouling occurred at a steadily decreasing rate before reaching a final asymptotic value. (ii) If an asymptotically fouled tube was cooled with rapid circulation for periods of up to eight hours at zero heat flux, and heating was then restarted, fouling recommenced at a high linear rate. The fouling results obtained were similar to, and in agreement with, the fouling behaviour reported previously by Hopkins, and it was possible to duplicate the previous results quite closely. This supports Hopkins' contention that the fouling observed was due to a crevice corrosion process and not an artifact of that heat loop which might have caused electrical and magnetic effects influencing the fouling. The effects of Reynolds number and heat flux on the asymptotic fouling resistance have been determined, and a single experiment to study the effect of oxygen concentration has been carried out. The ferric oxide concentration for most of the fouling trials was standardized at 2400 ppm, and the ranges of Reynolds number and heat flux for the study were 11000-29500 and 89-121 kW/m², respectively.
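The asymptotic behaviour in case (i) is classically described by the Kern and Seaton model, Rf(t) = Rf_inf * (1 - exp(-t/tau)). The sketch below fits that form to fouling-resistance measurements; the data values are hypothetical placeholders for illustration, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def kern_seaton(t, rf_inf, tau):
    """Asymptotic fouling model of Kern and Seaton:
    Rf(t) = Rf_inf * (1 - exp(-t / tau))."""
    return rf_inf * (1.0 - np.exp(-t / tau))

# Hypothetical fouling-resistance data (time in h, Rf in m^2.K/kW), for illustration only.
t_hours = np.array([0, 2, 4, 8, 16, 24, 36, 48], dtype=float)
rf_meas = np.array([0.0, 0.021, 0.038, 0.061, 0.082, 0.090, 0.094, 0.095])

(rf_inf, tau), _ = curve_fit(kern_seaton, t_hours, rf_meas, p0=(0.1, 10.0))
print(f"asymptotic fouling resistance ~ {rf_inf:.3f}, time constant ~ {tau:.1f} h")
```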

Abstract:

Police is Dead is an historiographic analysis whose objective is to change the terms by which contemporary humanist scholarship assesses the phenomenon currently termed neoliberalism. It proceeds by building an archeology of legal thought in the United States that spans the nineteenth and twentieth centuries. My approach assumes that the decline of certain paradigms of political consciousness set historical conditions that enable the emergence of what is to follow. The particular historical form of political consciousness I seek to reintroduce to the present is what I call “police:” a counter-liberal way of understanding social relations that I claim has particular visibility within a legal archive, but that has been largely ignored by humanist theory on account of two tendencies: first, an over-valuation of liberalism as Western history’s master signifier; and second, inconsistent and selective attention to law as a cultural artifact. The first part of my dissertation reconstructs an anatomy of police through close studies of court opinions, legal treatises, and legal scholarship. I focus in particular on juridical descriptions of intimate relationality—which police configured as a public phenomenon—and slave society apologetics, which projected the notion of community as an affective and embodied structure. The second part of this dissertation demonstrates that the dissolution of police was critical to emergence of a paradigm I call economism: an originally progressive economic framework for understanding social relations that I argue developed at the nexus of law and economics at the turn of the twentieth century. Economism is a way of understanding sociality that collapses ontological distinctions between formally distinct political subjects—i.e., the state, the individual, the collective—by reducing them to the perspective of economic force. Insofar as it was taken up and reoriented by neoliberal theory, this paradigm has become a hegemonic form of political consciousness. This project concludes by encouraging a disarticulation of economism—insofar as it is a form of knowledge—from neoliberalism as its contemporary doctrinal manifestation. I suggest that this is one way progressive scholarship can think about moving forward in the development of economic knowledge, rather than desiring to move backwards to a time before the rise of neoliberalism. Disciplinarily, I aim to show that understanding the legal historiography informing our present moment is crucial to this task.

Abstract:

In current low-dose-rate brachytherapy practice, the evaluation of the dose in the prostate is governed by the protocol defined by Task Group 43 (TG-43) of the American Association of Physicists in Medicine. This task group assumes a homogeneous, water-based patient of uniform density and neglects the changes in photon attenuation caused by the brachytherapy sources themselves. With these simplifications, dose calculations are easily performed using a single equation given in the protocol. Although this task group has contributed to standardizing brachytherapy treatments across hospitals, it does not adequately describe the real dose distribution in the patient. The current TG-186 report gives recommendations for studying more realistic dose distributions. The goal of this thesis is to apply these TG-186 recommendations in order to obtain a more realistic description of the dose in the prostate. To do so, two sets of patient images are acquired simultaneously with a dual-energy CT (DECT) scanner. The metal artifacts present in these images, caused by the iodine sources, are corrected using a metal artifact reduction algorithm for DECT developed in this work. A Monte Carlo study can then be carried out correctly once the image has been segmented into the different human tissues. This segmentation is performed by evaluating the effective atomic number and electron density of each voxel, through a stoichiometric calibration specific to DECT, and assigning to it the tissue with similar physical parameters. The results show differences in the dose distribution when the TG-43 protocol dose is compared with that obtained under the TG-186 recommendations.
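For reference, the simplified TG-43 calculation mentioned above reduces, in its 1D (point-source) form, to D(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r) with r0 = 1 cm. The sketch below implements that formula; the tabulated radial dose function and anisotropy values are illustrative placeholders rather than consensus data for any particular seed model.

```python
import numpy as np

def tg43_point_source_dose_rate(r_cm, air_kerma_strength, dose_rate_constant,
                                g_table, phi_an_table):
    """TG-43 1D (point-source) formalism:
        D(r) = S_K * Lambda * (r0 / r)^2 * g(r) * phi_an(r),   r0 = 1 cm.
    g_table and phi_an_table are (radius, value) arrays of tabulated data."""
    r0 = 1.0
    g = np.interp(r_cm, g_table[0], g_table[1])                # radial dose function
    phi = np.interp(r_cm, phi_an_table[0], phi_an_table[1])    # anisotropy factor
    return air_kerma_strength * dose_rate_constant * (r0 / r_cm) ** 2 * g * phi

# Illustrative (non-consensus) numbers in the style of an I-125 seed:
g_tab = (np.array([0.5, 1.0, 2.0, 5.0]), np.array([1.04, 1.00, 0.87, 0.50]))
phi_tab = (np.array([0.5, 1.0, 2.0, 5.0]), np.array([0.95, 0.94, 0.93, 0.92]))
print(tg43_point_source_dose_rate(2.0, 0.5, 0.965, g_tab, phi_tab))  # cGy/h
```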


Abstract:

Extensive use of fossil fuels is leading to increasing CO2 concentrations in the atmosphere and causes changes in the carbonate chemistry of the oceans, which represent a major sink for anthropogenic CO2. As a result, the surface pH of the oceans is expected to decrease by ca. 0.4 units by the year 2100, a major change with potentially negative consequences for some marine species. Because of their carbonate skeleton, sea urchins and their larval stages are regarded as likely to be among the more sensitive taxa. In order to investigate the sensitivity of pre-feeding (2 days post-fertilization) and feeding (4 and 7 days post-fertilization) pluteus larvae, we raised Strongylocentrotus purpuratus embryos in control (pH 8.1 and pCO2 41 Pa, i.e. 399 µatm) and CO2-acidified seawater with a pH of 7.7 (pCO2 134 Pa, i.e. 1318 µatm) and investigated growth, calcification and survival. At three time points (day 2, day 4 and day 7 post-fertilization), we measured the expression of 26 representative genes important for metabolism, calcification and ion regulation using RT-qPCR. After one week of development, we observed a significant difference in growth. Maximum differences in size were detected at day 4 (ca. 10% reduction in body length). A comparison of gene expression patterns using PCA and ANOSIM clearly distinguished between the different age groups (two-way ANOSIM: Global R = 1), while acidification effects were less pronounced (Global R = 0.518). Significant differences in gene expression patterns (ANOSIM R = 0.938; SIMPER: 4.3% difference) were also detected at day 4, leading to the hypothesis that differences between CO2 treatments could reflect expression patterns seen in control experiments with younger larvae, and thus a developmental artifact rather than a direct CO2 effect. We found an up-regulation of metabolic genes (between 10 and 20% in ATP-synthase, citrate synthase, pyruvate kinase and thiolase at day 4) and a down-regulation of calcification-related genes (between 23 and 36% in msp130, SM30B and SM50 at day 4). Ion regulation was mainly impacted by up-regulation of Na+/K+-ATPase at day 4 (15%) and down-regulation of NHE3 at day 4 (45%). We conclude that in studies in which a stressor induces an alteration in the speed of development, it is crucial to employ experimental designs with high time resolution in order to correct for developmental artifacts. This helps prevent misinterpretation of stressor effects on organism physiology.

Abstract:

In radiotherapy planning, computed tomography (CT) images are used to quantify the electron density of tissues and provide spatial anatomical information. Treatment planning systems use these data to calculate the expected spatial distribution of absorbed dose in a patient. CT imaging is complicated by the presence of metal implants which cause increased image noise, produce artifacts throughout the image and can exceed the available range of CT number values within the implant, perturbing electron density estimates in the image. Furthermore, current dose calculation algorithms do not accurately model radiation transport at metal-tissue interfaces. Combined, these issues adversely affect the accuracy of dose calculations in the vicinity of metal implants. As the number of patients with orthopedic and dental implants grows, so does the need to deliver safe and effective radiotherapy treatments in the presence of implants. The Medical Physics group at the Cancer Centre of Southeastern Ontario and Queen's University has developed a Cobalt-60 CT system that is relatively insensitive to metal artifacts due to the high energy, nearly monoenergetic Cobalt-60 photon beam. Kilovoltage CT (kVCT) images, including images corrected using a commercial metal artifact reduction tool, were compared to Cobalt-60 CT images throughout the treatment planning process, from initial imaging through to dose calculation. An effective metal artifact reduction algorithm was also implemented for the Cobalt-60 CT system. Electron density maps derived from the same kVCT and Cobalt-60 CT images indicated the impact of image artifacts on estimates of photon attenuation for treatment planning applications. Measurements showed that truncation of CT number data in kVCT images produced significant mischaracterization of the electron density of metals. Dose measurements downstream of metal inserts in a water phantom were compared to dose data calculated using CT images from kVCT and Cobalt-60 systems with and without artifact correction. The superior accuracy of electron density data derived from Cobalt-60 images compared to kVCT images produced calculated dose with far better agreement with measured results. These results indicated that dose calculation errors from metal image artifacts are primarily due to misrepresentation of electron density within metals rather than artifacts surrounding the implants.
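To illustrate why truncated CT numbers inside metal corrupt electron density estimates, the sketch below applies a generic two-segment CT-number-to-relative-electron-density calibration; the breakpoints and slopes are illustrative assumptions, not those of any particular scanner or of the Cobalt-60 system described above.

```python
import numpy as np

def hu_to_relative_electron_density(hu):
    """Map CT numbers (HU) to electron density relative to water using a
    simple two-segment calibration (illustrative coefficients only).
    Truncated/saturated HU values inside metal implants fall on the upper
    segment and badly underestimate the true electron density."""
    hu = np.asarray(hu, dtype=float)
    red = np.where(hu <= 0,
                   1.0 + hu / 1000.0,     # air/lung/soft-tissue segment
                   1.0 + hu / 1950.0)     # soft-tissue/bone segment
    return np.clip(red, 0.0, None)

print(hu_to_relative_electron_density([-1000, 0, 1000, 3071]))
# ~[0.0, 1.0, 1.51, 2.58]; a titanium implant (relative electron density ~3.7)
# cannot be represented once the kVCT scanner saturates near 3071 HU.
```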

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Abstract:

Background - Image blurring in Full Field Digital Mammography (FFDM) is reported to be a problem within many UK breast screening units, resulting in a significant proportion of technical repeats/recalls. Our study investigates monitors of differing pixel resolution and whether there is a difference in blurring detection between a 2.3 MP technical review monitor and a 5 MP standard reporting monitor. Methods - Simulation software was created to induce different magnitudes of blur on 20 artifact-free FFDM screening images. The resulting 120 blurred and non-blurred images were randomized, displayed on the 2.3 MP and 5 MP monitors, and reviewed by 28 trained observers. Monitors were calibrated to the DICOM Grayscale Standard Display Function. A t-test was used to determine whether significant differences exist in blurring detection between the monitors. Results - The blurring detection rate on the 2.3 MP monitor for 0.2, 0.4, 0.6, 0.8 and 1 mm blur was 46, 59, 66, 77 and 78% respectively, and on the 5 MP monitor 44, 70, 83, 96 and 98%. All the non-motion images were identified correctly. A statistically significant difference (p < 0.01) in the blurring detection rate between the two monitors was demonstrated. Conclusions - Given the results of this study, and knowing that monitors as low as 1 MP are used in clinical practice, we speculate that technical recall/repeat rates due to blurring could be reduced if higher resolution monitors were used for technical review at the time of imaging. Further work is needed to determine the minimum monitor specification for visual blurring detection.
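The study's own blur-simulation software is not described here; the sketch below shows one plausible way such blur could be induced, using a Gaussian filter whose width corresponds to a motion distance in millimetres under an assumed 0.085 mm detector pixel pitch. Both the filter choice and the pixel pitch are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_blur(image, blur_mm, pixel_pitch_mm=0.085):
    """Apply a Gaussian blur whose width corresponds to a given motion
    distance in mm (illustrative stand-in for the study's simulation
    software; the 0.085 mm detector pixel pitch is an assumption)."""
    sigma_px = blur_mm / pixel_pitch_mm
    return gaussian_filter(image, sigma=sigma_px)

# Generate the five blur magnitudes used in the observer study from one sharp image.
sharp = np.random.rand(512, 512)    # placeholder for an artifact-free FFDM image
blurred_set = {b: simulate_blur(sharp, b) for b in (0.2, 0.4, 0.6, 0.8, 1.0)}
```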