791 results for Luenberger observers


Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents a study and experimental research on asymmetric coding of stereoscopic video. A review of 3D technologies, video formats and coding is first presented, and then particular emphasis is given to asymmetric coding of 3D content and to subjective performance evaluation of asymmetric coding methods. The research objective was defined as an extension of the current concept of asymmetric coding for stereo video. To achieve this objective, the first step consists in defining regions in the spatial dimension of the auxiliary view with different perceptual relevance within the stereo pair, which are identified by a binary mask. The most relevant regions are then encoded with better quality (lower quantisation) and those with lower perceptual relevance with worse quality (higher quantisation). The actual estimation of the relevance of a given region is based on a measure of disparity according to the absolute difference between views. To allow encoding of a stereo sequence using this method, a reference H.264/MVC encoder (JM) was modified to accept additional configuration parameters and inputs. The final encoder is still standard compliant. In order to show the viability of the method, subjective assessment tests were performed over a wide range of objective qualities of the auxiliary view. The results of these tests support three main conclusions. First, it is shown that the proposed method can be more efficient than traditional asymmetric coding when encoding stereo video at higher qualities/rates. The method can also be used to extend the threshold at which uniform asymmetric coding methods start to have an impact on the subjective quality perceived by the observers. Finally, the issue of eye dominance is addressed: results from stereo still images displayed over a short period of time showed that it has little or no impact on the proposed method.
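The relevance-mask step described above can be sketched as follows; the block size and threshold are illustrative assumptions rather than values from the dissertation (a minimal numpy sketch of a blockwise inter-view absolute-difference mask):

```python
import numpy as np

def relevance_mask(left, right, block=16, threshold=10.0):
    """Blockwise binary mask over the auxiliary view: blocks whose mean
    absolute inter-view difference (a crude disparity/relevance proxy)
    exceeds `threshold` are marked as perceptually relevant."""
    h, w = left.shape
    diff = np.abs(left.astype(np.float64) - right.astype(np.float64))
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = diff[by * block:(by + 1) * block,
                       bx * block:(bx + 1) * block]
            mask[by, bx] = blk.mean() > threshold
    return mask

# Toy example: two 64x64 "views" that differ only in the top-left quadrant
rng = np.random.default_rng(0)
left = rng.uniform(0, 255, (64, 64))
right = left.copy()
right[:32, :32] += 40.0  # strong inter-view difference here
mask = relevance_mask(left, right)
```

An encoder would then apply a lower quantisation parameter to blocks where the mask is true and a higher one elsewhere.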


China has achieved significant progress in economic and social development since the implementation of the reform and opening-up policy in 1978. However, the rapid pace of economic growth in China has also resulted in high energy consumption and serious environmental problems, which hinder the sustainability of China's economic growth. This paper provides a framework for measuring eco-efficiency with CO2 emissions in Chinese manufacturing industries. We introduce a global Malmquist-Luenberger productivity index (GMLPI) that can handle undesirable outputs within Data Envelopment Analysis (DEA). The study suggests that after regulations imposed by the Chinese government, in the last stage of the analysis (2011–2012), the contemporaneous frontier shifts towards the global technology frontier in the direction of more desirable outputs and fewer undesirable outputs, i.e. producing less CO2 emissions, but the GMLPI drops slightly. This indicates that the Chinese government needs to implement further policy regulations in order to maintain productivity growth while reducing CO2 emissions.
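The building block of a Malmquist-Luenberger index is a directional distance function, which DEA evaluates as a linear program for each decision-making unit (DMU). Below is a minimal sketch with toy data, one input, one good output and one bad output (CO2); the GMLPI itself additionally pools all periods into a global frontier and takes ratios of (1 + distance) terms across periods:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs with one input x, one good output y, one bad output b
x = np.array([2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 2.0, 2.5, 3.0])
b = np.array([1.0, 1.5, 2.5, 4.0])
n = len(x)

def directional_distance(o):
    """Directional distance for DMU o with direction g = (0, y_o, b_o):
    expand the good output and contract the bad output proportionally.
    beta = 0 means the DMU lies on the frontier."""
    gy, gb = y[o], b[o]
    # Decision vector z = [lambda_1..lambda_n, beta]; maximise beta
    c = np.zeros(n + 1)
    c[-1] = -1.0                            # linprog minimises, so use -beta
    A_ub = np.vstack([
        np.append(-y, gy),                  # sum(l_j y_j) >= y_o + beta*gy
        np.append(x, 0.0),                  # sum(l_j x_j) <= x_o
    ])
    b_ub = np.array([-y[o], x[o]])
    A_eq = np.append(b, gb).reshape(1, -1)  # sum(l_j b_j) = b_o - beta*gb
    b_eq = np.array([b[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[-1]

betas = [directional_distance(o) for o in range(n)]
```

Efficient DMUs obtain beta = 0; inefficient ones a positive beta, indicating how far good output can be expanded and CO2 contracted simultaneously.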


X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
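As a rough illustration of how such an observer model yields a detectability index, the sketch below applies a non-prewhitening matched filter to simulated white-noise ROIs (real studies use ensembles of reconstructed CT images; the blob signal and noise level here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated ensembles: 200 signal-present / 200 signal-absent 32x32 ROIs.
# The "signal" is a faint Gaussian blob in white noise, standing in for a
# low-contrast lesion.
yy, xx = np.mgrid[:32, :32]
signal = 2.0 * np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 4.0 ** 2))
noise_sd = 5.0
sp = signal + rng.normal(0, noise_sd, (200, 32, 32))   # signal-present
sa = rng.normal(0, noise_sd, (200, 32, 32))            # signal-absent

# Non-prewhitening matched filter: the template is the expected signal
t = signal.ravel()
lam_sp = sp.reshape(200, -1) @ t    # decision variable per image
lam_sa = sa.reshape(200, -1) @ t

# Detectability index from the two decision-variable distributions
d_prime = (lam_sp.mean() - lam_sa.mean()) / np.sqrt(
    0.5 * (lam_sp.var(ddof=1) + lam_sa.var(ddof=1)))
```

Channelized Hotelling observers follow the same pattern but first project each ROI onto a small set of channel responses and prewhiten with the channel covariance.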

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
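For regular square ROIs, the standard ensemble NPS estimator looks as follows (the irregular-ROI method developed in this work is more involved and is not reproduced here):

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """2D noise power spectrum from an ensemble of square, noise-only ROIs
    (standard square-ROI estimator):
    NPS(u, v) = dx*dy / (Nx*Ny) * <|DFT(roi - mean)|^2>."""
    rois = np.asarray(rois, dtype=np.float64)
    n, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # detrend each ROI
    ps = np.abs(np.fft.fft2(rois)) ** 2
    return (pixel_mm ** 2 / (nx * ny)) * ps.mean(axis=0)

# Sanity check with white noise: the NPS is flat and, by Parseval's
# theorem, integrates back to the pixel variance (here sigma^2 = 100)
rng = np.random.default_rng(2)
rois = rng.normal(0.0, 10.0, (500, 64, 64))
nps = nps_2d(rois, pixel_mm=0.5)
var_est = nps.sum() / (64 * 0.5) ** 2   # sum(NPS) * df_u * df_v
```

With 50 repeated scans, subtracting pairs of images (as in the previous study) or subtracting the ensemble mean yields the noise-only ROIs this estimator expects.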

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
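A minimal sketch of such an analytical lesion model is given below, assuming a radially symmetric shape with a sigmoid edge profile; this functional form is a hypothetical stand-in for the dissertation's more general models, which also parameterize shape and texture:

```python
import numpy as np

def lesion_model(shape, center, radius_px, contrast_hu, edge_px=1.5):
    """Radially symmetric lesion with a sigmoid edge profile (hypothetical
    functional form).  Returns an additive HU map: ~contrast_hu inside the
    radius, falling smoothly to 0 over ~edge_px pixels at the boundary."""
    zz = np.indices(shape).astype(np.float64)
    r = np.sqrt(sum((zz[i] - c) ** 2 for i, c in enumerate(center)))
    return contrast_hu / (1.0 + np.exp((r - radius_px) / edge_px))

# "Hybrid" image: insert a subtle -15 HU lesion into a synthetic background
rng = np.random.default_rng(3)
background = rng.normal(50.0, 10.0, (64, 64))   # stand-in liver ROI
lesion = lesion_model((64, 64), (32, 32), radius_px=8, contrast_hu=-15.0)
hybrid = background + lesion
```

Because the inserted model is analytical, the ground-truth size, contrast, edge profile, and location of the lesion in the hybrid image are known exactly.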

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.


Cleaner shrimp (Decapoda) regularly interact with conspecifics and client reef fish, both of which appear colourful and finely patterned to human observers. However, whether cleaner shrimp can perceive the colour patterns of conspecifics and clients is unknown, because cleaner shrimp visual capabilities are unstudied. We quantified spectral sensitivity and temporal resolution using electroretinography (ERG), and spatial resolution using both morphological (inter-ommatidial angle) and behavioural (optomotor) methods in three cleaner shrimp species: Lysmata amboinensis, Ancylomenes pedersoni and Urocaridella antonbruunii. In all three species, we found strong evidence for only a single spectral sensitivity peak of (mean ± s.e.m.) 518 ± 5, 518 ± 2 and 533 ± 3 nm, respectively. Temporal resolution in dark-adapted eyes was 39 ± 1.3, 36 ± 0.6 and 34 ± 1.3 Hz. Spatial resolution was 9.9 ± 0.3, 8.3 ± 0.1 and 11 ± 0.5 deg, respectively, which is low compared with other compound eyes of similar size. Assuming monochromacy, we present approximations of cleaner shrimp perception of both conspecifics and clients, and show that cleaner shrimp visual capabilities are sufficient to detect the outlines of large stimuli, but not to detect the colour patterns of conspecifics or clients, even over short distances. Thus, conspecific viewers have probably not played a role in the evolution of cleaner shrimp appearance; rather, further studies should investigate whether cleaner shrimp colour patterns have evolved to be viewed by client reef fish, many of which possess tri- and tetra-chromatic colour vision and relatively high spatial acuity.


This paper introduces two new datasets on national-level elections from 1975 to 2004. The data are grouped into two separate datasets, the Quality of Elections Data and the Data on International Election Monitoring. Together these datasets provide original information on elections, election observation and election quality, and will enable researchers to study a variety of research questions. The datasets will be publicly available and are maintained at a project website.


My dissertation investigates twin financial interventions—urban development and emergency management—in a single small town. Once a thriving city drawing blacks as blue-collar workers during the Great Migration, Benton Harbor, Michigan has suffered from waves of out-migration, debt, and alleged poor management. Benton Harbor’s emphasis on high-end economic development to attract white-collar workers and tourism, amidst the poverty, unemployment, and disenfranchisement of black residents, highlights an extreme case of American urban inequality. At the same time, many bystanders and representative observers argue that this urban redevelopment scheme and the city’s takeover by the state represent Benton Harbor residents’ only hope for a better life. I interviewed 44 key players and observers in local politics and development, attended 20 public meetings, conducted three months of observations, and collected extensive archival data. Examining Benton Harbor’s time under emergency management and its luxury golf course development as two exemplars of a larger relationship, I find that the top-down processes allegedly intended to alleviate Benton Harbor’s inequality actually reproduce and deepen the city’s problems. I propose that the beneficiaries of both plans constitute a white urban regime active in Benton Harbor. I show how the white urban regime serves its interests by operating an extraction machine in the city, which serves to reproduce local poverty and wealth by directing resources toward the white urban regime and away from the city.


In this chapter, Ó hAdhmaill argues that responses to the global economic crisis which emerged in 2008 reflected a dominant ideological discourse, with ‘austerity’ being a tool in a wider agenda to reassert neoliberalist thinking in the global economy and welfare provision in the richer countries. In Ireland, North and South, however, the experience of, and responses to, the crisis and ‘austerity’ were different, reflecting different social, economic, and political contexts and influences, as well as different levels of democratic control. Ó hAdhmaill outlines some of these differences and argues that, while democratic control in smaller jurisdictions may be limited by the ‘real rulers’ of the world, global capital, people still have ‘agency’ and do not have to be mere passive observers of unfolding events.


The violent merger of two carbon-oxygen white dwarfs has been proposed as a viable progenitor for some Type Ia supernovae. However, it has been argued that the strong ejecta asymmetries produced by this model might be inconsistent with the low degree of polarization typically observed in Type Ia supernova explosions. Here, we test this claim by carrying out a spectropolarimetric analysis for the model proposed by Pakmor et al. for an explosion triggered during the merger of a 1.1 and 0.9 M⊙ carbon-oxygen white dwarf binary system. Owing to the asymmetries of the ejecta, the polarization signal varies significantly with viewing angle. We find that polarization levels for observers in the equatorial plane are modest (≲1 per cent) and show clear evidence for a dominant axis, as a consequence of the ejecta symmetry about the orbital plane. In contrast, orientations out of the plane are associated with higher degrees of polarization and departures from a dominant axis. While the particular model studied here gives a good match to highly polarized events such as SN 2004dt, it has difficulties in reproducing the low polarization levels commonly observed in normal Type Ia supernovae. Specifically, we find that significant asymmetries in the element distribution result in a wealth of strong polarization features that are not observed in the majority of currently available spectropolarimetric data of Type Ia supernovae. Future studies will map out the parameter space of the merger scenario to investigate if alternative models can provide better agreement with observations.
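For reference, the degree and angle of linear polarization follow directly from the Stokes parameters; a “dominant axis” corresponds to the (Q, U) measurements at different wavelengths clustering along a single line in the Q-U plane. A minimal sketch:

```python
import numpy as np

def polarization(I, Q, U):
    """Degree and angle of linear polarization from Stokes parameters."""
    p = np.sqrt(Q**2 + U**2) / I              # fractional polarization
    chi = 0.5 * np.degrees(np.arctan2(U, Q))  # polarization angle (deg)
    return p, chi

# A 0.8 per cent polarized signal aligned with the Q axis...
p, chi = polarization(I=1.0, Q=0.008, U=0.0)
# ...and the same degree of polarization along the U axis (45 deg)
p2, chi2 = polarization(I=1.0, Q=0.0, U=0.008)
```

The numbers here are illustrative; the modest equatorial levels quoted in the text correspond to p below about 1 per cent.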


Automatic detection systems do not perform as well as human observers, even on simple detection tasks. A potential solution to this problem is training vision systems on appropriate regions of interest (ROIs), in contrast to training on predefined and arbitrarily selected regions. Here we focus on detecting pedestrians in static scenes. Our aim is to answer the following question: can automatic vision systems for pedestrian detection be improved by training them on perceptually-defined ROIs?


Thesis (Ph.D.)--University of Washington, 2016-08


Purpose: To evaluate whether physical measures of noise predict image quality at high and low noise levels. Method: Twenty-four images were acquired on a DR system using a Pehamed DIGRAD phantom at three kVp settings (60, 70 and 81) across a range of mAs values. The image acquisition setup consisted of 14 cm of PMMA slabs with the phantom placed in the middle at 120 cm SID. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated for each of the images using ImageJ software, and 14 observers performed image scoring. Images were scored according to the observer's evaluation of objects visualized within the phantom. Results: The R2 values of the non-linear relationship between object visibility score and CNR (60 kVp R2 = 0.902; 70 kVp R2 = 0.913; 81 kVp R2 = 0.757) demonstrate a better fit for all three kVp settings than the linear R2 values. As CNR increases for all kVp settings, object visibility also increases. The largest increase in SNR at low exposure values (up to 2 mGy) is observed at 60 kVp, when compared with 70 or 81 kVp; the CNR response to exposure is similar. Pearson r was calculated to assess the correlation between score, object visibility, SNR and CNR. None of the correlations reached statistical significance (p > 0.01). Conclusion: For object visibility and SNR, tube potential variations may play a role in object visibility. Higher-energy X-ray beam settings give lower SNR but higher object visibility. Object visibility and CNR at all three tube potentials are similar, resulting in a strong positive relationship between CNR and object visibility score. At low doses, radiographic noise does not have a strong influence on object visibility scores, because objects could still be identified in noisy images.
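An ROI-based SNR/CNR computation of the kind typically performed in ImageJ can be sketched as below; the ROI positions and sizes are illustrative assumptions, not those of the study:

```python
import numpy as np

def snr_cnr(image, signal_roi, background_roi):
    """SNR and CNR from two rectangular ROIs, mirroring a common ImageJ
    workflow (ROI mean and standard deviation of pixel values)."""
    sig = image[signal_roi]
    bkg = image[background_roi]
    snr = sig.mean() / bkg.std(ddof=1)
    cnr = (sig.mean() - bkg.mean()) / bkg.std(ddof=1)
    return snr, cnr

# Synthetic test image: uniform background with one contrasting object
rng = np.random.default_rng(4)
img = rng.normal(100.0, 5.0, (128, 128))
img[40:60, 40:60] += 20.0   # the test object
snr, cnr = snr_cnr(img,
                   (slice(40, 60), slice(40, 60)),     # object ROI
                   (slice(80, 120), slice(80, 120)))   # background ROI
```

With object contrast 20 and noise SD 5, the measured CNR comes out near 4 and the SNR near 24, as expected from the construction.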


There are few professions in which visual acuity is as important as it is to radiologists. The diagnostic decision making process is composed of a number of events (detection or observation, interpretation and reporting), where the detection phase is subject to a number of physical and psychological phenomena that are critical to the process. Visual acuity is one phenomenon that has often been overlooked, and there is very little research assessing the impact of reduced visual acuity on diagnostic performance. The aim of this study was to investigate the impact of reduced visual acuity on an observer’s ability to detect simulated nodules in an anthropomorphic chest phantom.


Purpose - In this study we aim to validate a method to assess reduced visual function and observer performance concurrently within a nodule detection task. Materials and methods - Three consultant radiologists completed a nodule detection task under three conditions: without visual defocus (0.00 dioptres; D), and with two different magnitudes of visual defocus (−1.00 D and −2.00 D). Defocus was applied with lenses, and visual function was assessed prior to each image evaluation. Observers evaluated the same cases on each occasion; these comprised 50 abnormal cases containing 1–4 simulated nodules (5, 8, 10 and 12 mm spherical diameter, 100 HU) placed within a phantom, and 25 normal cases (images containing no nodules). Data were collected under the free-response paradigm and analysed using RJafroc. A difference in nodule detection performance would be considered significant at p < 0.05. Results - All observers had acceptable visual function prior to beginning the nodule detection task. Visual acuity was reduced to an unacceptable level for two observers when defocussed to −1.00 D and for one observer when defocussed to −2.00 D. Stereoacuity was unacceptable for one observer when defocussed to −2.00 D. Despite unsatisfactory visual function in the presence of defocus, we were unable to find a statistically significant difference in nodule detection performance (F(2,4) = 3.55, p = 0.130). Conclusion - A method to assess visual function and observer performance concurrently is proposed. In this pilot evaluation we were unable to detect any difference in nodule detection performance when using lenses to reduce visual function.


Background - Image blurring in Full Field Digital Mammography (FFDM) is reported to be a problem within many UK breast screening units, resulting in a significant proportion of technical repeats/recalls. Our study investigates monitors of differing pixel resolution, and whether there is a difference in blurring detection between a 2.3 MP technical review monitor and a 5 MP standard reporting monitor. Methods - Simulation software was created to induce different magnitudes of blur on 20 artifact-free FFDM screening images. 120 blurred and non-blurred images were randomized and displayed on the 2.3 and 5 MP monitors; they were reviewed by 28 trained observers. Monitors were calibrated to the DICOM Grayscale Standard Display Function. A t-test was used to determine whether significant differences exist in blurring detection between the monitors. Results - The blurring detection rate on the 2.3 MP monitor for 0.2, 0.4, 0.6, 0.8 and 1 mm blur was 46, 59, 66, 77 and 78% respectively, and on the 5 MP monitor 44, 70, 83, 96 and 98%. All the non-motion images were identified correctly. A statistically significant difference (p < 0.01) in the blurring detection rate between the two monitors was demonstrated. Conclusions - Given the results of this study, and knowing that monitors as low as 1 MP are used in clinical practice, we speculate that technical recall/repeat rates because of blurring could be reduced if higher-resolution monitors are used for technical review at the time of imaging. Further work is needed to determine the minimum monitor specification for visual blurring detection.
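Blur induction of this kind can be approximated with a Gaussian filter, as sketched below; using a Gaussian rather than a motion kernel, and the mapping from the quoted blur magnitudes in millimetres to a filter width, are assumptions of this sketch, not details of the study's own simulation software:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_blur(image, blur_mm, pixel_mm=0.1):
    """Induce motion-like blurring by Gaussian filtering, with the blur
    magnitude in mm converted to a sigma in pixels (assumed mapping)."""
    return gaussian_filter(image, sigma=blur_mm / pixel_mm)

# Sharp test pattern: a vertical step edge; blurring lowers the maximum
# pixel-to-pixel gradient across the edge
img = np.zeros((64, 64))
img[:, 32:] = 1.0
blurred = simulate_blur(img, blur_mm=0.4, pixel_mm=0.1)
edge_sharp = np.abs(np.diff(img[32])).max()
edge_blurred = np.abs(np.diff(blurred[32])).max()
```

Applying a set of such magnitudes to artifact-free images, then randomizing blurred and non-blurred cases, reproduces the structure of the observer experiment described above.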


The Marine Strategy Framework Directive (2008/56/EC) (MSFD) requires that the European Commission (by 15 July 2010) should lay down criteria and methodological standards to allow consistency in approach in evaluating the extent to which Good Environmental Status (GES) is being achieved. ICES and JRC were contracted to provide scientific support for the Commission in meeting this obligation. A total of 10 reports have been prepared relating to the descriptors of GES listed in Annex I of the Directive. Eight reports have been prepared by groups of independent experts coordinated by JRC and ICES in response to this contract. In addition, reports for two descriptors (Contaminants in fish and other seafood and Marine Litter) were written by expert groups coordinated by DG SANCO and IFREMER respectively. A Task Group was established for each of the qualitative Descriptors. Each Task Group consisted of selected experts providing experience related to the four marine regions (the Baltic Sea, the North-east Atlantic, the Mediterranean Sea and the Black Sea) and an appropriate scope of relevant scientific expertise. Observers from the Regional Seas Conventions were also invited to each Task Group to help ensure the inclusion of relevant work by those Conventions. This is the report of Task Group 8 Contaminants and pollution effects.