774 results for Observers
Abstract:
In 2009, South American military spending reached a total of $51.8 billion, a fifty percent increase over 2000 expenditures. The five-year moving average of arms transfers to South America was 150 percent higher for 2005 to 2009 than for 2000 to 2004.[1] These figures and others have led some observers to conclude that Latin America is engaged in an arms race. Other factors, however, account for Latin America's large military expenditure. Among them: several countries have undertaken long-delayed modernization efforts, recently made possible by six years of consistent regional growth;[2] a generational shift is at hand, and armed forces are beginning to shed the stigma of association with past dictatorial regimes;[3] and countries are pursuing specific individual strategies rather than reacting to purchases made by neighbors. For example, Brazil wants to attain greater control of its Amazon rainforests and offshore territories, Colombia's spending is a response to internal threats, and Chile is continuing a modernization process begun in the 1990s.[4] Concerns remain, however: Venezuela continues to demonstrate poor democratic governance and a lack of transparency; neighbor-state relations between Colombia and Venezuela, Peru and Chile, and Bolivia and Paraguay must all continue to be monitored; and Brazil's military purchases, although legitimate, will likely result in a large accumulation of equipment.[5] These concerns can best be addressed by strengthening, and garnering greater participation in, transparent procurement mechanisms.[6] The United States can do its part by supporting Latin American efforts to embrace the transparency process.
_________________
[1] Bromley, Mark, "An Arms Race in Our Hemisphere? Discussing the Trends and Implications of Military Expenditures in South America," Brookings Institution Conference, Washington, D.C., June 3, 2010, transcript, pp. 12-13 and 16.
[2] Robledo, Marcos, "The Rearmament Debate: A Chilean Perspective," PowerPoint presentation, slide 18, 2010 Western Hemisphere Security Colloquium, Miami, Florida, May 25-26, 2010.
[3] Yopo, Boris, "¿Carrera Armamentista en la Región?" La Tercera, November 2, 2009, http://www.latercera.com/contenido/895_197084_9.shtml, accessed October 8, 2010.
[4] Walser, Ray, "An Arms Race in Our Hemisphere? Discussing the Trends and Implications of Military Expenditures in South America," Brookings Institution Conference, Washington, D.C., June 3, 2010, transcript, pp. 49-50 and 53-54.
[5] Ibid., Guevara, Iñigo, p. 22.
[6] Ibid., Bromley, Mark, pp. 18-19.
Abstract:
A comprehensive, broadly accepted vegetation classification is important for ecosystem management, particularly for planning and monitoring. South Florida vegetation classification systems that are currently in use were largely arrived at subjectively and intuitively with the involvement of experienced botanical observers and ecologists, but with little support in terms of quantitative field data. The need to develop a field data-driven classification of South Florida vegetation that builds on the ecological organization has been recognized by the National Park Service and vegetation practitioners in the region. The present work, funded by the National Park Service Inventory and Monitoring Program - South Florida/Caribbean Network (SFCN), covers the first stage of a larger project whose goal is to apply extant vegetation data to test, and revise as necessary, an existing, widely used classification (Rutchey et al. 2006). The objectives of the first phase of the project were (1) to identify useful existing datasets, (2) to collect these data and compile them into a geodatabase, (3) to conduct an initial classification analysis of marsh sites, and (4) to design a strategy for augmenting existing information from poorly represented landscapes in order to develop a more comprehensive south Florida classification.
Abstract:
The current study applied classic cognitive capacity models to examine the effect of cognitive load on deception. The study also examined whether the manipulation of cognitive load would magnify differences between liars and truth-tellers. In the first study, 87 participants engaged in videotaped interviews while being either deceptive or truthful about a target event. Some participants engaged in a concurrent secondary task while being interviewed, and performance on the secondary task was measured. As expected, truth-tellers performed better on secondary task items than liars, as evidenced by higher accuracy rates. These results confirm the long-held assumption that being deceptive is more cognitively demanding than being truthful. In the second part of the study, the videotaped interviews of both liars and truth-tellers were shown to 69 observers. After watching the interviews, observers were asked to make a veracity judgment for each participant. Observers made more accurate veracity judgments when viewing participants who engaged in a concurrent secondary task than when viewing those who did not. Observers also indicated that participants who engaged in a concurrent secondary task appeared to think harder than participants who did not. This study provides evidence that engaging in deception is more cognitively demanding than telling the truth. As hypothesized, having participants engage in a concurrent secondary task magnified the differences between liars and truth-tellers, which led to more accurate veracity judgments by a second group of observers. The implications for deception detection are discussed.
Abstract:
As long as governmental institutions have existed, efforts have been undertaken to reform them. This research examines a particular strategy, coercive controls, exercised through a particular instrument, executive orders, by a singular reformer, the president of the United States. The presidents studied (Johnson, Nixon, Ford, Carter, Reagan, Bush, and Clinton) are those whose campaigns for office were characterized, to varying degrees, as against the Washington bureaucracy and for executive reform. Executive order issuance is assessed through an examination of key factors for each president, including political party affiliation, level of political capital, and legislative experience. A classification typology is used to identify the topical dimensions and levels of coerciveness. The portrayal of the federal government is analyzed through examination of public, media, and presidential attention. The results show that executive orders are significant management tools for the president and represent an important component of the transition plans of incoming administrations. The findings indicate that, while executive orders have not increased in the aggregate, they have become more intrusive and significant. First, examination of political party affiliation, political capital, and legislative experience reveals a strong relationship between executive orders and previous executive experience, specifically for presidents who served as state governors before winning national election. Presidents Carter, Reagan, and Clinton (all former governors) have the highest percentage of executive orders focusing on the federal bureaucracy, and the highest percentage of forceful orders were issued by former governors (41.0%) as compared with presidents who had not served as governors (19.9%). Second, political party affiliation is an important, but not significant, predictor of the use of executive orders. Third, management strategies that provide the president with the greatest level of autonomy, such as executive orders, redefine the concept of presidential power and autonomous action. Interviews with elite government officials and political observers support the idea that executive orders can provide the president with a successful management strategy, requiring less expenditure of political resources, posing less risk to political capital, and offering a way of achieving objectives without depending on an unresponsive Congress.
Abstract:
The detection and diagnosis of faults, i.e., determining how, where, and why failures occur, has been an important area of study ever since machines began to replace human labor. However, no technique studied to date solves the problem definitively. Differences among dynamic systems, whether linear or nonlinear, time-varying or time-invariant, with physical or analytical redundancy, hamper research toward a single, general solution. In this paper, a technique for fault detection and diagnosis (FDD) in dynamic systems is presented that uses state observers in conjunction with other tools to create a hybrid FDD scheme. A modified state observer is used to generate a residual that enables both the detection and the diagnosis of faults. A bank of fault signatures is created using statistical tools, and finally an approach based on the mean squared error (MSE) supports the analysis of the fault-diagnosis behavior even in the presence of noise. The methodology is then applied to an educational coupled-tank plant and to another plant with industrial instrumentation in order to validate the system.
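The observer-plus-MSE scheme described above can be illustrated with a minimal sketch. The discrete-time two-tank model, the observer gain, and the fault-signature bank below are assumed placeholders, not the thesis implementation; the sketch only shows how a Luenberger-type observer produces a residual and how MSE matching against stored signatures yields a diagnosis.

```python
import numpy as np

# Assumed two-tank discrete-time model and hand-picked observer gain (illustrative only).
A = np.array([[0.95, 0.05], [0.00, 0.90]])
B = np.array([[0.10], [0.00]])
C = np.array([[0.0, 1.0]])            # only the second tank level is measured
L = np.array([[0.4], [0.6]])          # observer gain

def run_observer(u, y):
    """Return the output residual r[k] = y[k] - C x_hat[k] for input/output sequences."""
    x_hat = np.zeros((2, 1))
    r = np.zeros(len(y))
    for k in range(len(y)):
        r[k] = y[k] - (C @ x_hat).item()
        x_hat = A @ x_hat + B * u[k] + L * r[k]   # prediction plus output-error correction
    return r

def diagnose(residual, signature_bank):
    """Pick the stored fault signature with the smallest mean squared error to the residual."""
    mse = {name: float(np.mean((residual - sig) ** 2)) for name, sig in signature_bank.items()}
    return min(mse, key=mse.get), mse
```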
Abstract:
This research studies the body language of the players of Ciranda (a traditional Brazilian circle dance), more specifically that of Lia de Itamaracá. Our interest is to observe how this body dances, communicates, and writes itself in time and space, establishing relations that complement one another and remain under construction. In the circular formation, in the energy transmitted by the touch of hands, in the power of the song, in a circle that can be seen from many places and from different angles, and holding onto the particularities of its playing/dancing/observing subjects, we ask: who are these player bodies and how do they build the circles of Ciranda? Throughout the research we were guided by the phenomenological method and, from it, used the concepts of the lived world and the sensible world. Our interest in this manifestation also lies in the body that dances and inserts itself into artistic expression, producing meaning and opening itself to knowledge through experience. We therefore assume a conception of the body grounded in the Merleau-Pontian phenomenological approach, in contrast to the body as a fragmented being, as posited by Cartesian theory. In this perspective, we understand the body in its relations with the cultural, social, economic, and artistic issues that constitute it, that is, in the relations that helped us better understand the body as it is. The main objective of this research is thus to present reflections on the player body, mainly that of Lia de Itamaracá, and on how this body dances, communicates, and writes itself in time and space, establishing relations that complement one another and remain under construction. Such a statement leads us to ask, for example: what mobilizes these subjects in this dance? We understand that elements such as the ever-changing place, the costumes, the musicality, and the changes in the movement of the player bodies in each new circle are factors that drive a permanent reconfiguration of the Ciranda dance today. Finally, we note that this investigation is motivated by the wide reach that Ciranda has achieved in Brazil, especially in the Northeast, and by the scarcity of references and records of this manifestation in academic circles. The research shows that, because of this spread, the nuances of the player bodies have become ever more diverse, and that the lack of lived experience on Itamaracá island (PE), the dance's place of origin, has distanced it from its original, communal character, turning it more and more into a dance of other stages and squares.
Abstract:
Automation of managed pressure drilling (MPD) enhances the safety and increases the efficiency of drilling, and this drives the development of controllers and observers for MPD. The objective is to maintain the bottom-hole pressure (BHP) within the pressure window formed by the reservoir pressure and the fracture pressure, and to reject kicks. Practical MPD automation solutions must address the nonlinearities and uncertainties caused by variations in mud flow rate, choke opening, friction factor, mud density, etc. It is also desired that, if pressure constraints are violated, the controller take appropriate action to reject the ensuing kick. These objectives are addressed by developing two controllers: a gain-switching robust controller and a nonlinear model predictive controller (NMPC). The robust gain-switching controller is designed using the H∞ loop-shaping technique and implemented using high-gain bumpless transfer and a 2D look-up table. Six candidate controllers were designed so that they preserve robustness and performance over different choke openings and flow rates. It is demonstrated that uniform performance is maintained under different operating conditions and that the controllers are able to reject kicks using pressure control and to maintain the BHP during drill-pipe extension. The NMPC was designed to regulate the BHP and to contain the outlet flow rate within a tunable threshold. The important feature of this controller is that it can reject kicks without requiring any switching, so there is no scope for chattering due to switching between pressure and flow control. This is achieved by exploiting the constraint-handling capability of NMPC. An active-set method was used for computing the control inputs. It is demonstrated that the NMPC is able to contain kicks and maintain the BHP during drill-pipe extension.
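As a rough illustration of the gain-switching side, the sketch below selects one of six pre-designed controllers from a 2D look-up table indexed by choke opening and flow rate, and re-initialises the incoming controller so the control signal stays continuous at the switch. The grid points, gains, and simple PI structure are hypothetical simplifications of the H∞ loop-shaping controllers and high-gain bumpless transfer described above.

```python
import numpy as np

# Hypothetical operating-point breakpoints and candidate-controller gains (placeholders).
choke_grid = np.array([10.0, 40.0, 70.0])        # % choke opening
flow_grid = np.array([1000.0, 2000.0])           # L/min mud flow rate
gain_table = np.array([[0, 1], [2, 3], [4, 5]])  # controller index per operating region
controllers = [(2.0, 0.5), (1.5, 0.4), (1.2, 0.3),
               (1.0, 0.25), (0.8, 0.2), (0.6, 0.15)]   # (Kp, Ki) pairs

def select_controller(choke, flow):
    """Nearest-neighbour look-up of the candidate controller for the current operating point."""
    i = int(np.argmin(np.abs(choke_grid - choke)))
    j = int(np.argmin(np.abs(flow_grid - flow)))
    return int(gain_table[i, j])

def bumpless_integrator(u_prev, error, kp_new, ki_new):
    """Re-initialise the incoming PI integrator so its output equals u_prev at switch time."""
    return (u_prev - kp_new * error) / ki_new
```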
Abstract:
Perception of simultaneity and temporal order is studied with simultaneity judgment (SJ) and temporal-order judgment (TOJ) tasks. In the former, observers report whether presentation of two stimuli was subjectively simultaneous; in the latter, they report which stimulus was subjectively presented first. SJ and TOJ tasks typically give discrepant results, which has prompted the view that performance is mediated by different processes in each task. We examined these discrepancies through a model that yields psychometric functions whose parameters characterize the timing, decisional, and response processes involved in SJ and TOJ tasks. We analyzed 12 data sets from published studies in which both tasks had been used in within-subjects designs, all of which had reported differences in performance across tasks. Fitting the model jointly to data from both tasks, we tested the hypothesis that common timing processes sustain simultaneity and temporal order judgments, with differences in performance arising from task-dependent decisional and response processes. The results supported this hypothesis, also showing that model psychometric functions account for aspects of SJ and TOJ data that classical analyses overlook. Implications for research on perception of simultaneity and temporal order are discussed.
Abstract:
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
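The kind of independent-channels model these routines fit can be sketched as follows. The parameterisation here (exponential latency rates lam_t and lam_r, net onset shift tau, resolution half-width delta) is an assumed simplification for illustration; the published MATLAB and R routines include additional decisional and response parameters and differ in detail.

```python
import numpy as np

def _cdf_diff(w, lam_t, lam_r):
    """CDF of E_r - E_t, where E_r ~ Exp(lam_r) and E_t ~ Exp(lam_t) are arrival latencies."""
    w = np.asarray(w, dtype=float)
    pos = 1.0 - (lam_t / (lam_t + lam_r)) * np.exp(-lam_r * np.clip(w, 0, None))
    neg = (lam_r / (lam_t + lam_r)) * np.exp(lam_t * np.clip(w, None, 0))
    return np.where(w >= 0, pos, neg)

def sj3_probabilities(soa, lam_t=1/40.0, lam_r=1/40.0, tau=0.0, delta=50.0):
    """Return P('test first'), P('simultaneous'), P('reference first') at each SOA (ms).
    Positive SOA means the reference stimulus is presented after the test stimulus."""
    shift = soa + tau                                   # location of D = shift + (E_r - E_t)
    p_ref_first = _cdf_diff(-delta - shift, lam_t, lam_r)        # D < -delta
    p_test_first = 1.0 - _cdf_diff(delta - shift, lam_t, lam_r)  # D >  delta
    p_simultaneous = 1.0 - p_ref_first - p_test_first            # |D| <= delta
    return p_test_first, p_simultaneous, p_ref_first
```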
Abstract:
Morgan, Dillenburger, Raphael, and Solomon have shown that observers can use different response strategies when unsure of their answer, and, thus, they can voluntarily shift the location of the psychometric function estimated with the method of single stimuli (MSS; sometimes also referred to as the single-interval, two-alternative method). They wondered whether MSS could distinguish response bias from a true perceptual effect that would also shift the location of the psychometric function. We demonstrate theoretically that the inability to distinguish response bias from perceptual effects is an inherent shortcoming of MSS, although a three-response format including also an "undecided" response option may solve the problem under restrictive assumptions whose validity cannot be tested with MSS data. We also show that a proper two-alternative forced-choice (2AFC) task with the three-response format is free of all these problems so that bias and perceptual effects can easily be separated out. The use of a three-response 2AFC format is essential to eliminate a confound (response bias) in studies of perceptual effects and, hence, to eliminate a threat to the internal validity of research in this area.
Abstract:
Research on the perception of temporal order uses either temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks, in both of which two stimuli are presented with some temporal delay and observers must judge the order of presentation. Results generally differ across tasks, raising concerns about whether they measure the same processes. We present a model including sensory and decisional parameters that places these tasks in a common framework that allows studying their implications on observed performance. TOJ tasks imply specific decisional components that explain the discrepancy of results obtained with TOJ and SJ tasks. The model is also tested against published data on audiovisual temporal-order judgments, and the fit is satisfactory, although model parameters are more accurately estimated with SJ tasks. Measures of latent point of subjective simultaneity and latent sensitivity are defined that are invariant across tasks by isolating the sensory parameters governing observed performance, whereas decisional parameters vary across tasks and account for observed differences across them. Our analyses concur with other evidence advising against the use of TOJ tasks in research on perception of temporal order.
Abstract:
This dissertation presents a study and experimental research on asymmetric coding of stereoscopic video. A review of 3D technologies, video formats, and coding is first presented, and then particular emphasis is given to asymmetric coding of 3D content and to performance evaluation methods, based on subjective measures, for methods using asymmetric coding. The research objective was defined as an extension of the current concept of asymmetric coding for stereo video. To achieve this objective, the first step consists in defining regions in the spatial dimension of the auxiliary view with different perceptual relevance within the stereo pair, identified by a binary mask. These regions are then encoded with better quality (lower quantisation) for the most relevant ones and worse quality (higher quantisation) for those with lower perceptual relevance. The actual estimation of the relevance of a given region is based on a measure of disparity given by the absolute difference between views. To allow encoding of a stereo sequence using this method, a reference H.264/MVC encoder (JM) was modified to accept additional configuration parameters and inputs. The final encoder remains standard compliant. In order to show the viability of the method, subjective assessment tests were performed over a wide range of objective qualities of the auxiliary view. The results of these tests support three main conclusions. First, it is shown that the proposed method can be more efficient than traditional asymmetric coding when encoding stereo video at higher qualities/rates. The method can also be used to extend the threshold at which uniform asymmetric coding methods start to have an impact on the subjective quality perceived by the observers. Finally, the issue of eye dominance is addressed. Results from stereo still images displayed over a short period of time showed that it has little or no impact on the proposed method.
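The region-selection step can be sketched as follows. This is illustrative only, not the modified JM encoder: a per-macroblock relevance mask is derived from the absolute luminance difference between views (a crude disparity/occlusion cue) and mapped to quantisation offsets; the threshold and QP deltas are hypothetical.

```python
import numpy as np

def relevance_mask(left_y, right_y, block=16, threshold=12.0):
    """Per-macroblock boolean mask: True where the inter-view absolute difference is high,
    i.e. the region is treated as perceptually relevant within the stereo pair."""
    diff = np.abs(left_y.astype(float) - right_y.astype(float))
    h, w = diff.shape
    hb, wb = h // block, w // block
    block_means = diff[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))
    return block_means > threshold

def qp_map(base_qp, mask, qp_relevant=-2, qp_irrelevant=+6):
    """Lower QP (better quality) in relevant blocks of the auxiliary view, raise it elsewhere."""
    return np.where(mask, base_qp + qp_relevant, base_qp + qp_irrelevant)
```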
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared with other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
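The two observer-model families named above can be illustrated in simplified form. The channel definitions, the reliance on sample ROIs, and all parameters below are assumptions for illustration, not the dissertation's implementation; the sketch only shows how a non-prewhitening matched filter and a channelized Hotelling observer turn signal-present and signal-absent ROIs into a detectability index.

```python
import numpy as np

def npw_dprime(signal_rois, noise_rois):
    """Non-prewhitening matched filter: template = mean signal-minus-background difference."""
    s = signal_rois.mean(axis=0) - noise_rois.mean(axis=0)       # expected signal template
    t = s.ravel()
    scores_sig = signal_rois.reshape(len(signal_rois), -1) @ t   # template scores, signal present
    scores_noi = noise_rois.reshape(len(noise_rois), -1) @ t     # template scores, signal absent
    pooled_var = 0.5 * (scores_sig.var(ddof=1) + scores_noi.var(ddof=1))
    return float((scores_sig.mean() - scores_noi.mean()) / np.sqrt(pooled_var))

def gaussian_channels(shape, widths=(2.0, 4.0, 8.0, 16.0)):
    """Radially symmetric Gaussian channels (a simple stand-in for Gabor/Laguerre-Gauss channels)."""
    y, x = np.indices(shape)
    r2 = (y - shape[0] / 2) ** 2 + (x - shape[1] / 2) ** 2
    return np.stack([np.exp(-r2 / (2 * w ** 2)).ravel() for w in widths], axis=1)

def cho_dprime(signal_rois, noise_rois):
    """Channelized Hotelling observer computed in the reduced channel space."""
    U = gaussian_channels(signal_rois.shape[1:])
    v_sig = signal_rois.reshape(len(signal_rois), -1) @ U        # channel outputs, signal present
    v_noi = noise_rois.reshape(len(noise_rois), -1) @ U          # channel outputs, signal absent
    ds = v_sig.mean(axis=0) - v_noi.mean(axis=0)
    K = 0.5 * (np.cov(v_sig, rowvar=False) + np.cov(v_noi, rowvar=False))
    return float(np.sqrt(ds @ np.linalg.solve(K, ds)))
```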
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared with the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact of background texture on image quality when iterative reconstruction algorithms are used.
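The image subtraction step can be sketched as follows; the helper name and ROI handling are assumptions for illustration, not the dissertation's code. Subtracting two repeated acquisitions of the same phantom cancels the deterministic background (including texture), and dividing the standard deviation of the difference by the square root of two recovers the single-image quantum noise.

```python
import numpy as np

def quantum_noise_std(scan_a, scan_b, roi):
    """scan_a, scan_b: repeated acquisitions of the same phantom (2D arrays in HU);
    roi: boolean mask of the region in which to estimate the noise."""
    diff = scan_a.astype(float) - scan_b.astype(float)   # deterministic background cancels
    return diff[roi].std(ddof=1) / np.sqrt(2.0)          # scale back to single-image noise
```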
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared with FBP, SAFIRE images had on average 60% less noise in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
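For reference, a conventional NPS estimate from square noise-only ROIs can be sketched as below; the dissertation's contribution extends this to irregularly shaped ROIs, and that extension is not reproduced here. The function name and the periodogram normalisation convention are stated as assumptions.

```python
import numpy as np

def nps_2d(noise_rois, pixel_size_mm):
    """noise_rois: stack of N square noise-only ROIs (N, n, n), e.g. from subtracted repeat scans.
    Returns the ensemble-averaged 2D NPS in HU^2 * mm^2 (fftshifted, DC at the centre)."""
    n = noise_rois.shape[-1]
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)        # remove DC / mean offset
    periodograms = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))) ** 2
    return periodograms.mean(axis=0) * (pixel_size_mm ** 2) / (n * n)
```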
To move beyond just assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized, using a genetic algorithm, to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture (a simplified sketch follows). Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared with FBP) was higher when measured in a uniform phantom than in the textured phantoms.
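A much-simplified clustered lumpy background generator is sketched below: a Poisson number of cluster centres, a Poisson number of blobs per cluster, and Gaussian blob profiles scattered around each centre. The parameter values are placeholders, and the blob profile is simplified relative to the published framework and to the genetically optimized liver textures used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def clustered_lumpy_background(n=256, mean_clusters=40, mean_blobs=20,
                               cluster_sigma=12.0, blob_sigma=3.0, amplitude=10.0):
    """Simplified clustered lumpy background on an n x n grid."""
    img = np.zeros((n, n))
    yy, xx = np.indices((n, n))
    # Poisson number of cluster centres, uniformly placed over the image
    for cy, cx in rng.uniform(0, n, size=(rng.poisson(mean_clusters), 2)):
        # Poisson number of blobs per cluster, scattered around the centre
        for dy, dx in rng.normal(0.0, cluster_sigma, size=(rng.poisson(mean_blobs), 2)):
            img += amplitude * np.exp(-((yy - cy - dy) ** 2 + (xx - cx - dx) ** 2)
                                      / (2.0 * blob_sigma ** 2))
    return img
```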
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
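The kind of analytical lesion model described above can be illustrated as a sphere with a given diameter, contrast, and sigmoid edge profile, voxelised onto the image grid so it can be added to a patient ROI to form a hybrid image. The parameterisation below is hypothetical, and the dissertation's models describe lesion shape more generally than the simple sphere used here.

```python
import numpy as np

def lesion_volume(shape, center_vox, diameter_mm, contrast_hu, voxel_mm, edge_mm=1.0):
    """Voxelise a spherical lesion with a sigmoid edge profile.
    shape: output volume shape (z, y, x); center_vox: lesion centre in voxel coordinates."""
    z, y, x = np.indices(shape).astype(float)
    r = np.sqrt((z - center_vox[0]) ** 2 +
                (y - center_vox[1]) ** 2 +
                (x - center_vox[2]) ** 2) * voxel_mm       # radial distance in mm
    radius = diameter_mm / 2.0
    # Full contrast inside the lesion, smooth roll-off of width ~edge_mm at the boundary
    return contrast_hu / (1.0 + np.exp((r - radius) / (edge_mm / 4.0)))

# Illustrative usage: add the model to a patient ROI to create a "hybrid" image.
# hybrid_roi = patient_roi + lesion_volume(patient_roi.shape, (16, 32, 32),
#                                          diameter_mm=8, contrast_hu=-15, voxel_mm=0.7)
```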
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared with FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared with the standard-of-care dose.
In conclusion, this dissertation provides the scientific community with a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Cleaner shrimp (Decapoda) regularly interact with conspecifics and client reef fish, both of which appear colourful and finely patterned to human observers. However, whether cleaner shrimp can perceive the colour patterns of conspecifics and clients is unknown, because cleaner shrimp visual capabilities are unstudied. We quantified spectral sensitivity and temporal resolution using electroretinography (ERG), and spatial resolution using both morphological (inter-ommatidial angle) and behavioural (optomotor) methods in three cleaner shrimp species: Lysmata amboinensis, Ancylomenes pedersoni and Urocaridella antonbruunii. In all three species, we found strong evidence for only a single spectral sensitivity peak of (mean ± s.e.m.) 518 ± 5, 518 ± 2 and 533 ± 3 nm, respectively. Temporal resolution in dark-adapted eyes was 39 ± 1.3, 36 ± 0.6 and 34 ± 1.3 Hz. Spatial resolution was 9.9 ± 0.3, 8.3 ± 0.1 and 11 ± 0.5 deg, respectively, which is low compared with other compound eyes of similar size. Assuming monochromacy, we present approximations of cleaner shrimp perception of both conspecifics and clients, and show that cleaner shrimp visual capabilities are sufficient to detect the outlines of large stimuli, but not to detect the colour patterns of conspecifics or clients, even over short distances. Thus, conspecific viewers have probably not played a role in the evolution of cleaner shrimp appearance; rather, further studies should investigate whether cleaner shrimp colour patterns have evolved to be viewed by client reef fish, many of which possess tri- and tetra-chromatic colour vision and relatively high spatial acuity.
Abstract:
This paper introduces two new datasets on national-level elections from 1975 to 2004. The data are grouped into two separate datasets, the Quality of Elections Data and the Data on International Election Monitoring. Together, these datasets provide original information on elections, election observation, and election quality, and will enable researchers to study a variety of research questions. The datasets will be publicly available and are maintained at a project website.