948 results for Frequency Modulated Signals, Parameter Estimation, Signal-to-Noise Ratio, Simulations


Relevance:

100.00%

Publisher:

Abstract:

The relationship between facial shape and attractiveness has been extensively studied, yet few studies have investigated the underlying biological factors of an attractive face. Many researchers have proposed a link between female attractiveness and sex hormones, but there is little empirical evidence in support of this assumption. In the present study we investigated the relationship between circulating sex hormones and attractiveness. We created prototypes by separately averaging photographs of 15 women with high and low levels of testosterone, estradiol, and testosterone-to-estradiol ratio, respectively. An independent set of facial images was then shape transformed toward these prototypes. We paired the resulting images in such a way that one face depicted a female with a high hormone level and the other a low hormone level. Fifty participants were asked to choose the more attractive face of each pair. We found that a low testosterone-to-estradiol ratio and low testosterone were positively associated with female facial attractiveness. There was no preference for faces with high estradiol levels. In an additional experiment with 36 participants we confirmed that a low testosterone-to-estradiol ratio plays a larger role than low testosterone alone. These results provide empirical evidence that an attractive female face is shaped by interacting effects of testosterone and estradiol.

Abstract:

AIM To compare image quality and diagnostic confidence of 100 kVp CT pulmonary angiography (CTPA) in patients with body weights (BWs) below and above 100 kg. MATERIALS AND METHODS The present retrospective study comprised 216 patients (BWs of 75-99 kg, 114 patients; 100-125 kg, 88 patients; >125 kg, 14 patients), who received 100 kVp CTPA to exclude pulmonary embolism. The attenuation was measured and the contrast-to-noise ratio (CNR) was calculated in the pulmonary trunk. Size-specific dose estimates (SSDEs) were evaluated. Three blinded radiologists rated subjective image quality and diagnostic confidence. Results between the BW groups and between three body mass index (BMI) groups (BMI < 25 kg/m², BMI = 25-29.9 kg/m², and BMI ≥ 30 kg/m², i.e., normal-weight, overweight, and obese patients) were compared using the Kruskal-Wallis test. RESULTS Vessel attenuation was higher and SSDE was lower in the 75-99 kg group than at higher BWs (p-values between <0.001 and 0.03), with no difference between the 100-125 kg and >125 kg groups (p = 0.892 and 1). Subjective image quality and diagnostic confidence did not differ among the BW groups (p = 0.225 and 1). CNR was lower (p < 0.006) in obese patients than in normal-weight or overweight subjects. Diagnostic confidence did not differ among the BMI groups (p = 0.105). CONCLUSION CTPA at 100 kVp tube voltage can be used in patients weighing up to 125 kg with no significant deterioration of subjective image quality and confidence. The applicability of 100 kVp in the 125-150 kg BW range needs further testing in larger collectives.
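The CNR in the pulmonary trunk reduces to two ROI measurements. A minimal sketch of the calculation; the HU values and ROI sizes below are illustrative assumptions, not data from the study:

```python
import numpy as np

def cnr(roi_vessel, roi_background):
    """Contrast-to-noise ratio: (mean vessel HU - mean background HU) / background noise SD."""
    contrast = np.mean(roi_vessel) - np.mean(roi_background)
    noise = np.std(roi_background, ddof=1)
    return contrast / noise

# Hypothetical HU samples: enhanced pulmonary trunk vs. paraspinal muscle background
rng = np.random.default_rng(0)
vessel = rng.normal(350.0, 20.0, size=500)   # contrast-enhanced vessel, HU
muscle = rng.normal(50.0, 20.0, size=500)    # background tissue, HU
value = cnr(vessel, muscle)
```

With these assumed means and noise, the CNR comes out near (350 − 50)/20 = 15; a lower tube voltage raises vessel attenuation (the numerator) but also the noise (the denominator), which is why the two must be reported together.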

Abstract:

OBJECTIVES To find a threshold body weight (BW) below 100 kg above which computed tomography pulmonary angiography (CTPA) using reduced radiation and a reduced contrast material (CM) dose provides significantly impaired quality and diagnostic confidence compared with standard-dose CTPA. METHODS In this prospectively randomised study of 501 patients with suspected pulmonary embolism and BW <100 kg, 246 were allocated into the low-dose group (80 kVp, 75 ml CM) and 255 into the normal-dose group (100 kVp, 100 ml CM). Contrast-to-noise ratio (CNR) in the pulmonary trunk was calculated. Two blinded chest radiologists independently evaluated subjective image quality and diagnostic confidence. Data were compared between the normal-dose and low-dose groups in five BW subgroups. RESULTS Vessel attenuation did not differ between the normal-dose and low-dose groups within each BW subgroup (P = 1.0). The CNR was higher with the normal-dose compared with the low-dose protocol (P < 0.006) in all BW subgroups except for the 90-99 kg subgroup (P = 0.812). Subjective image quality and diagnostic confidence did not differ between CT protocols in all subgroups (P between 0.960 and 1.0). CONCLUSIONS Subjective image quality and diagnostic confidence with 80 kVp CTPA is not different from normal-dose protocol in any BW group up to 100 kg. KEY POINTS • 80 kVp CTPA is safe in patients weighing <100 kg • Reduced radiation and iodine dose still provide high vessel attenuation • Image quality and diagnostic confidence with low-dose CTPA is good • Diagnostic confidence does not deteriorate in obese patients weighing <100 kg.

Abstract:

INTRODUCTION Apical surgery is an important treatment option for teeth with post-treatment periodontitis. Although apical surgery involves root-end resection, no morphometric data are yet available about root-end resection and its impact on the root-to-crown ratio (RCR). The present study assessed the length of apicectomy and calculated the loss of root length and changes of RCR after apical surgery. METHODS In a prospective clinical study, cone-beam computed tomography scans were taken preoperatively and postoperatively. From these images, the crown and root lengths of 61 roots (54 teeth in 47 patients) were measured before and after apical surgery. Data were collected relative to the cementoenamel junction (CEJ) as well as to the crestal bone level (CBL). One observer took all measurements twice (to calculate the intraobserver variability), and the means were used for further analysis. The following parameters were assessed for all treated teeth as well as for specific tooth groups: length of root-end resection and percentage change of root length, preoperative and postoperative RCRs, and percentage change of RCR after apical surgery. RESULTS The mean length of root-end resection was 3.58 ± 1.43 mm (relative to the CBL). This amounted to a loss of 33.2% of clinical and 26% of anatomic root length. There was an overall significant difference between the tooth groups (P < .05). There was also a statistically significant difference comparing mandibular and maxillary teeth (P < .05), but not for incisors/canines versus premolars/molars (P = .125). The mean preoperative and postoperative RCRs (relative to CEJ) were 1.83 and 1.35, respectively (P < .001). With regard to the CBL reference, the mean preoperative and postoperative RCRs were 1.08 and 0.71 (CBL), respectively (P < .001). The calculated changes of RCR after apical surgery were 24.8% relative to CEJ and 33.3% relative to CBL (P < .001). 
Across the different tooth groups, the mean RCR was not significantly different (P = .244 for CEJ and 0.114 for CBL). CONCLUSIONS This CBCT-based study demonstrated that the RCR is significantly changed after root-end resection in apical surgery irrespective of the clinical (CBL) or anatomic (CEJ) reference levels. The lowest, and thus clinically most critical, postoperative RCR was observed in maxillary incisors. Future clinical studies need to show the impact of resection length and RCR changes on the outcome of apical surgery.
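The reported RCR changes follow from simple ratios of the measured lengths. A sketch with hypothetical crown and root lengths; only the 3.58 mm mean resection length is taken from the study:

```python
def rcr(root_len_mm, crown_len_mm):
    """Root-to-crown ratio: root length divided by crown length."""
    return root_len_mm / crown_len_mm

def after_resection(root_len_mm, crown_len_mm, resection_mm):
    """Pre- and postoperative RCR and the percentage change caused by apicectomy."""
    pre = rcr(root_len_mm, crown_len_mm)
    post = rcr(root_len_mm - resection_mm, crown_len_mm)
    change_pct = 100.0 * (pre - post) / pre   # algebraically equals resection/root length
    return pre, post, change_pct

# Hypothetical incisor: 13 mm root, 11 mm crown, mean 3.58 mm root-end resection
pre, post, change = after_resection(13.0, 11.0, 3.58)
```

Because the crown length is unchanged, the percentage drop in RCR equals the percentage of root length removed, which is why the study reports larger RCR changes relative to the (shorter) clinical root measured from the crestal bone level.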

Abstract:

This study deals with the mineralogical variability of siliceous and zeolitic sediments, porcellanites, and cherts at small intervals in the continuously cored sequence of Deep Sea Drilling Project Site 462. Skeletal opal is preserved down to a maximum burial depth of 390 meters (middle Eocene). Below this level, the tests are totally dissolved or replaced and filled by opal-CT, quartz, clinoptilolite, and calcite. Etching of opaline tests does not increase continuously with deeper burial. Opal solution accompanied by a conspicuous formation of authigenic clinoptilolite has a local maximum in Core 16 (150 m). A causal relationship with the lower Miocene hiatus at this level is highly probable. Oligocene to Cenomanian sediments represent an intermediate stage of silica diagenesis: the opal-CT/quartz ratios of the silicified rocks are frequently greater than 1, and quartz filling pores or replacing foraminifer tests is more widespread than quartz which converted from an opal-CT precursor. As at other sites, there is a marked discontinuity of the transitions from biogenic opal via opal-CT to quartz with increasing depth of burial. Layers with unaltered opal-A alternate with porcellanite beds; the intensity of the opal-CT-to-quartz transformation changes very rapidly from horizon to horizon and obviously is not correlated with lithologic parameters. The silica for authigenic clinoptilolite was derived from biogenic opal and decaying volcanic components.

Abstract:

In a fusion power plant, plasma-facing and first-wall materials experience particularly hostile conditions, being exposed to high particle and neutron fluxes and large thermal loads. As a consequence of these varied and complex working conditions, the study, development, and design of these materials is one of the most important challenges to have emerged in recent years for the scientific community in the field of materials for energy applications. Owing to its low erosion rate, high sputtering resistance, high thermal conductivity, very high melting point, low tritium retention, and low long-term radioactive disposal footprint, tungsten is a leading candidate as a first-wall material and as a possible advanced structural material in magnetic and inertial confinement fusion. However, tungsten's lifetime is controlled by several factors, such as its thermo-mechanical surface response, the possibility of melting, and failure by helium accumulation. Its mechanical response, and in particular its brittleness and low fracture toughness (mostly associated with intergranular failure and bulk plasticity), therefore limits its applications and must be investigated. The plastic behavior of body-centered cubic (bcc) refractory metals such as tungsten is governed by the kink-pair-mediated, thermally activated motion of ½⟨111⟩ screw dislocations at the atomistic scale, and by ensembles and interactions of dislocations at larger scales. Modeling this complex behavior requires methods capable of rigorously resolving each relevant scale. The work presented in this thesis proposes a multiscale model that gives engineering-level answers to the technical specifications required for the use of tungsten in fusion energy reactors, supported by the rigorous physics underlying extensive atomistic simulations.
First, the static and dynamic properties of screw dislocations in five interatomic potentials for tungsten are compared to determine which of them ensure the greatest physical fidelity and computational efficiency. The large strain rates associated with molecular dynamics techniques make the dislocation mobility functions obtained that way unsuitable for the subsequent steps of the multiscale model. In this work we therefore propose two alternative methods to obtain the dislocation mobility functions: a kinetic Monte Carlo model and analytical expressions. The set of parameters needed to formulate the kinetic Monte Carlo model and the analytical mobility law is calculated atomistically. These parameters include, but are not limited to, the formation enthalpies and energies of kink pairs as a function of stress, the kink-pair width, and the non-Schmid effects characteristic of bcc metals (both the twinning/antitwinning asymmetry and non-glide stresses). The resulting relation between dislocation velocity, applied stress, and temperature is then used as the flow rule in a dislocation-based crystal plasticity model. The model's predicted temperature dependence of the yield strength is validated against existing uniaxial tensile tests on single-crystal tungsten, with excellent agreement between simulations and measurements. The model is then extended to a set of crystallographic orientations uniformly distributed in the standard triangle, and the yield strength is computed as a function of strain rate and temperature. Finally, with the aim of predicting a more ductile response of tungsten under a wider variety of loading states, biaxial tensile tests are simulated for some of the crystallographic orientations already studied, providing the yield surface as a function of temperature.
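The analytical mobility law mentioned above is typically an Arrhenius expression with a stress-dependent kink-pair activation enthalpy. A sketch of one standard (Kocks-type) parameterization; the constants below are order-of-magnitude illustrations, not the thesis's fitted values:

```python
import math

def kink_pair_enthalpy(tau, dH0=1.7, tau_p=2.0e9, p=0.86, q=1.69):
    """Stress-dependent kink-pair activation enthalpy (eV), Kocks-type form:
    dH(tau) = dH0 * (1 - (tau/tau_p)**p)**q, with tau_p the Peierls stress (Pa)."""
    x = min(max(tau / tau_p, 0.0), 1.0)
    return dH0 * (1.0 - x**p) ** q

def screw_velocity(tau, T, v0=1.0e3, kB=8.617e-5):
    """Arrhenius mobility law for a 1/2<111> screw dislocation (m/s):
    v(tau, T) = v0 * exp(-dH(tau) / (kB * T))."""
    return v0 * math.exp(-kink_pair_enthalpy(tau) / (kB * T))

v_cold = screw_velocity(5.0e8, 300.0)   # 0.5 GPa at 300 K
v_hot = screw_velocity(5.0e8, 600.0)    # same stress, higher temperature
```

The activation enthalpy decreases with stress and the exponential decreases with inverse temperature, which reproduces the qualitative trend used in the crystal plasticity flow rule: faster glide at higher stress and temperature.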

Abstract:

Natural gas hydrates are clathrates in which water molecules form a crystalline framework that includes and is stabilized by natural gas (mainly methane) at appropriate conditions of high pressures and low temperatures. The conditions for the formation of gas hydrates are met within continental margin sediments below water depths greater than about 500 m where the supply of methane is sufficient to stabilize the gas hydrate. Observations on DSDP Leg 11 suggested the presence of gas hydrates in sediments of the Blake Outer Ridge. Leg 76 coring and sampling confirm that, indeed, gas hydrates are present there. Geochemical evidence for gas hydrates in sediment of the Blake Outer Ridge includes (1) high concentrations of methane, (2) a sediment sample with thin, matlike layers of white crystals that released a volume of gas twenty times greater than its volume of pore fluid, (3) a molecular distribution of hydrocarbon gases that excluded hydrocarbons larger than isobutane, (4) results from pressure core barrel experiments, and (5) pore-fluid chemistry. The molecular composition of the hydrocarbons in these gas hydrates and the isotopic composition of the methane indicate that the gas is derived mainly from microbiological processes operating on the organic matter within the sediment. Although gas hydrates apparently are widespread on the Blake Outer Ridge, they probably are not of great economic significance as a potential unconventional energy resource or as an impermeable cap for trapping upwardly migrating gas at Site 533.

Abstract:

Techniques for improving the signal-to-clutter ratio of an ultra-wideband SAR designed to detect small mine-like objects in the surface of the ground were investigated. In particular, images were collected using different bistatic antenna configurations in an attempt to decorrelate the clutter with respect to the targets. The images were converted to a reference depression angle, summed, and then converted to ground coordinates. The resulting target strengths were then compared with the amplitude distribution of the ground clutter to show the improvement obtained. While some improvement was demonstrated, this was for the relatively easy scenario of targets on the surface partially obscured by grass. Detection based on thresholding the raw RF signal (the bipolar response) rather than the envelope (baseband I² + Q²) was also considered to further enhance target-to-clutter ratios.
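The distinction between the bipolar RF response and the baseband envelope can be sketched numerically: the envelope √(I² + Q²) recovers the return amplitude, while the raw RF signal oscillates through zero even at the target peak. The signal model below is purely illustrative:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
amp = np.exp(-((t - 0.5) ** 2) / 0.02)   # hypothetical target return strength
phase = 2 * np.pi * 40 * t               # carrier phase
i_ch = amp * np.cos(phase)               # in-phase channel
q_ch = amp * np.sin(phase)               # quadrature channel

envelope = np.hypot(i_ch, q_ch)          # baseband envelope: sqrt(I^2 + Q^2)
bipolar = i_ch                           # raw RF signal: bipolar, crosses zero
```

Thresholding `bipolar` keeps the sign information of the return (which the paper exploits), whereas thresholding `envelope` discards it; the envelope is non-negative by construction.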

Abstract:

This paper is concerned with assessing the interference rejection capabilities of linear and circular arrays of dipoles that can be part of a base station of a code-division multiple-access cellular communication system. The performance criterion for signal-to-interference ratio (SIR) improvement employed in this paper is the spatial interference suppression coefficient. We first derive an expression for this figure of merit and then analyze and compare the SIR performance of the two types of arrays. For a linear array, we quantitatively assess the degradation in SIR performance as we move from the array broadside to the array end-fire direction. In addition, the effect of mutual coupling is taken into account.
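One way to see why SIR performance degrades from broadside to end-fire is to compare how much of the visible region the main beam of a uniform linear array occupies in each case. This is a generic array-factor computation, not the paper's suppression-coefficient expression; element count and spacing are assumptions:

```python
import numpy as np

def array_factor(theta, theta0, n_elem=8, d_over_lambda=0.5):
    """Normalized array factor of a uniform linear array steered to theta0
    (theta measured from the array axis)."""
    k_d = 2 * np.pi * d_over_lambda
    n = np.arange(n_elem)
    psi = k_d * (np.cos(theta)[:, None] - np.cos(theta0)) * n
    return np.abs(np.exp(1j * psi).sum(axis=1)) / n_elem

theta = np.linspace(1e-3, np.pi - 1e-3, 4000)
af_broadside = array_factor(theta, np.pi / 2)   # steered to broadside
af_endfire = array_factor(theta, 0.0)           # steered to end-fire

# Fraction of the visible region within 3 dB of the peak: a rough proxy for
# how much angular space the main beam (and thus unsuppressed interference)
# occupies in each steering direction.
frac_bs = np.mean(af_broadside > 1 / np.sqrt(2))
frac_ef = np.mean(af_endfire > 1 / np.sqrt(2))
```

The end-fire beam is markedly broader than the broadside beam for the same aperture, which is consistent with the degradation in spatial interference suppression the paper quantifies.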

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection – FBP vs. Advanced Modeled Iterative Reconstruction – ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
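A white-noise version of the non-prewhitening matched filter observer can be sketched as follows: correlate each image with the known signal template and form a detectability index from the two test-statistic distributions. The data here are simulated; the dissertation's implementation (and the channelized Hotelling family) is more elaborate:

```python
import numpy as np

def npw_dprime(signal_present, signal_absent, template):
    """Non-prewhitening matched filter observer: the test statistic is the
    inner product of each image with the expected signal template; d' is the
    separation of the two statistic distributions in pooled-SD units."""
    tp = np.tensordot(signal_present, template, axes=([1, 2], [0, 1]))
    ta = np.tensordot(signal_absent, template, axes=([1, 2], [0, 1]))
    pooled = np.sqrt(0.5 * (tp.var(ddof=1) + ta.var(ddof=1)))
    return (tp.mean() - ta.mean()) / pooled

rng = np.random.default_rng(1)
yy, xx = np.mgrid[-16:16, -16:16]
template = 5.0 * np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))   # hypothetical lesion profile
present = rng.normal(0.0, 10.0, size=(200, 32, 32)) + template
absent = rng.normal(0.0, 10.0, size=(200, 32, 32))
d = npw_dprime(present, absent, template)
```

For white noise this estimate converges to the analytic value ‖template‖/σ; unlike CNR, it accounts for the spatial extent and shape of the signal, which is one reason such observers track human performance better across reconstruction algorithms.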

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
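The image subtraction technique works because the fixed phantom background cancels in the difference of two repeated scans, leaving only (uncorrelated) quantum noise scaled by √2. A minimal sketch with simulated data; the texture and noise levels are assumptions:

```python
import numpy as np

def quantum_noise_sd(scan_a, scan_b):
    """Noise from two repeated scans of the same phantom: anatomy/texture
    cancels in the difference, whose SD is sqrt(2) times the quantum noise SD."""
    diff = scan_a - scan_b
    return diff.std(ddof=1) / np.sqrt(2.0)

rng = np.random.default_rng(2)
texture = rng.normal(0.0, 30.0, size=(256, 256))     # fixed background texture (HU)
sigma = 12.0                                         # true quantum noise SD (HU)
scan_a = texture + rng.normal(0.0, sigma, size=texture.shape)
scan_b = texture + rng.normal(0.0, sigma, size=texture.shape)
est = quantum_noise_sd(scan_a, scan_b)
```

A naive SD of a single textured image would lump the (much larger) texture variation in with the quantum noise, which is exactly the confound the subtraction removes.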

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
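In outline, an ensemble NPS estimate is the average squared DFT of mean-subtracted noise ROIs, scaled by pixel area. A sketch for square ROIs (the study's irregular-ROI method is more involved; pixel size and noise level are assumptions):

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.5):
    """Ensemble 2D noise power spectrum:
    NPS(u, v) = (dx*dy / (Nx*Ny)) * <|DFT(roi - mean)|^2> over the ROI ensemble."""
    n, ny, nx = noise_rois.shape
    dfts = np.fft.fft2(noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True))
    return (pixel_mm**2 / (nx * ny)) * np.mean(np.abs(dfts) ** 2, axis=0)

rng = np.random.default_rng(3)
rois = rng.normal(0.0, 10.0, size=(100, 64, 64))   # simulated white-noise ROIs
nps = nps_2d(rois)

# Sanity check via Parseval: integrating the NPS over frequency (bin width
# 1/(N*dx) in each axis) recovers the pixel variance.
var_from_nps = nps.sum() * (1.0 / (64 * 0.5)) ** 2
```

For white noise the NPS is flat and its integral equals the variance (100 here); for SAFIRE-like noise the same estimator reveals the shift of power across frequencies that a single variance number hides.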

To move beyond just assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
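The clustered lumpy background idea can be sketched with Gaussian lumps scattered around random cluster centers. The published CLB framework uses an asymmetric exponential lump profile with parameters fitted (here, by the genetic algorithm) to patient texture; everything below, including all parameter values, is illustrative:

```python
import numpy as np

def clustered_lumpy_background(size=128, n_clusters=20, lumps_per_cluster=15,
                               cluster_sigma=8.0, lump_sigma=2.0, amp=10.0, seed=4):
    """Minimal clustered-lumpy-background sketch: uniformly random cluster
    centers, each surrounded by a Poisson number of Gaussian 'lumps' whose
    positions scatter around the center with SD cluster_sigma."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    img = np.zeros((size, size))
    for _ in range(n_clusters):
        cy, cx = rng.uniform(0, size, 2)
        for _ in range(rng.poisson(lumps_per_cluster)):
            ly = cy + rng.normal(0.0, cluster_sigma)
            lx = cx + rng.normal(0.0, cluster_sigma)
            img += amp * np.exp(-((xx - lx) ** 2 + (yy - ly) ** 2)
                                / (2 * lump_sigma ** 2))
    return img

bg = clustered_lumpy_background()
```

The two-level randomness (clusters, then lumps within clusters) is what gives the texture its patchy, liver-like correlation structure, as opposed to the single-level "lumpy background" model.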

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
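An analytical lesion model of the kind described, with size, contrast, and edge profile as explicit parameters, can be sketched as a radially symmetric "designer nodule". This is an illustrative functional form only; the dissertation's models are fit to real lesion morphology:

```python
import numpy as np

def lesion_model(size=64, radius=10.0, contrast=-15.0, edge_n=1.5):
    """Radially symmetric analytical lesion:
    l(r) = contrast * (1 - (r/R)^2)^edge_n for r < R, zero outside.
    radius sets size, contrast the peak HU difference, edge_n the edge sharpness."""
    yy, xx = np.mgrid[0:size, 0:size] - size / 2.0
    r = np.hypot(xx, yy)
    profile = np.clip(1.0 - (r / radius) ** 2, 0.0, None) ** edge_n
    return contrast * profile

lesion = lesion_model()
```

Because the model is an explicit equation, it can be voxelized at any resolution and inserted into patient data with its ground-truth size, contrast, and location known exactly, which is what makes the hybrid-image approach to detectability assessment possible.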

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.


Resumo:

This work examines the construction of water-circulation models for recovery boilers. The main goal is to develop a spreadsheet template for the simulation model that makes it simple and fast to process recovery-boiler heat-flux data and transfer it into the Apros 6 simulation software. A further aim is to automate the work stages as far as possible, making the water-circulation calculation simpler, more uniform, and more accurate. This is made possible by Excel macros and the new features of Apros 6. Apros 6 can now use SCL command files, which enable smooth data transfer between Apros and Excel. Processing the data used in water-circulation calculation has previously been laborious, and its accuracy has largely depended on the modeler. This thesis makes use of the newest, more realistic CFD models of recovery boilers, which allow more accurate heat-flux distributions to be created for the boiler's heat-transfer surfaces than before; this change improves the accuracy of the water-circulation calculation. In the experimental part of the work, the new Excel calculation tool and the new heat-flux values are tested in practice. An existing Apros water-circulation model is updated with the new heat-flux values, and its structure is modified to improve accuracy. The performance of the new model is also tested at 115% capacity to study how the water-circulation circuit responds to a higher heat load. These three cases are compared with one another, and the differences in their water-steam circuits are examined.
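The thesis's Excel-to-Apros transfer is workbook-specific, but the mechanism, generating a command file from a tabulated heat-flux export, can be sketched as below. The CSV columns and the `#SET node.HEAT_FLOW` command format are purely illustrative assumptions, not real Apros 6 SCL syntax, which depends on the model's module and attribute names:

```python
import csv
import io

# Hypothetical heat-flux table exported from the Excel workbook:
# one row per heat-transfer surface node, flux in kW/m^2.
csv_text = """node,heat_flux_kw_m2
FURNACE_WALL_01,185.2
FURNACE_WALL_02,172.9
BOILER_BANK_01,96.4
"""

def make_scl_lines(csv_file):
    """Turn a heat-flux table into SCL-style assignment commands.
    The command format here is a placeholder, not real Apros syntax."""
    rows = csv.DictReader(csv_file)
    return [f'#SET {r["node"]}.HEAT_FLOW {float(r["heat_flux_kw_m2"])}'
            for r in rows]

lines = make_scl_lines(io.StringIO(csv_text))
```

Writing these lines to a file that the simulator executes is what removes the error-prone manual copying step the thesis describes.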


Resumo:

Voice acoustic analysis is becoming more and more useful in the diagnosis of voice disorders and laryngological pathologies; the ease of recording a voice signal is an advantage over other, invasive techniques. This paper presents a statistical analysis of a set of voice parameters, namely jitter, shimmer, and HNR, over four groups of subjects with dysphonia, functional dysphonia, hyperfunctional dysphonia, and psychogenic dysphonia, plus a control group. No statistically significant differences were found among the pathological groups, but clear tendencies can be seen between the pathological groups and the control group. These tendencies indicate that the parameters are good features for an intelligent diagnosis system, especially jitter and shimmer measured over different tones and vowels.
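The perturbation measures themselves have simple definitions; a minimal sketch of local jitter and shimmer computed from cycle-to-cycle period and peak-amplitude tracks (HNR requires a harmonic/noise decomposition and is omitted; the numbers are toy data):

```python
import numpy as np

def jitter_local(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, relative to the mean period."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / p.mean()

def shimmer_local(amplitudes):
    """Local shimmer (%): the same measure applied to peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(a))) / a.mean()

# Toy cycle-to-cycle measurements (periods in seconds, amplitudes arbitrary)
periods = [0.0100, 0.0102, 0.0099, 0.0101]
amps = [1.00, 0.97, 1.02, 0.99]
j, s = jitter_local(periods), shimmer_local(amps)
```

In practice the period and amplitude tracks come from a pitch-marking stage on the recorded vowel, which is where most of the implementation effort lies.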


Resumo:

2-Aminothiazole covalently attached to a silica gel surface was prepared in order to obtain an adsorbent for Hg(II) ions with the following characteristics: good sorption capacity, chemical stability under the conditions of use, and, especially, high selectivity. The accumulation voltammetry of mercury(II) was investigated at a carbon paste electrode chemically modified with silica gel functionalized with 2-aminothiazole (SIAMT-CPE). Repetitive cyclic voltammograms of a mercury(II) solution in the potential range -0.2 to +0.6 V versus Ag/AgCl (0.02 mol L-1 KNO3; v = 20 mV s-1) show two peaks, one at about 0.1 V and the other at 0.205 V. The anodic peak at 0.205 V is well defined and does not change during cycling; it was therefore investigated further for analytical purposes using differential pulse anodic stripping voltammetry in different supporting electrolytes. The mercury response was evaluated with respect to pH, electrode composition, preconcentration time, mercury concentration, cleaning solution, possible interferences, and other variables. The precision of six determinations (n = 6) of 0.02 and 0.20 mg L-1 Hg(II) was 4.1% and 3.5% (relative standard deviation), respectively. The detection limit was estimated as 0.10 μg L-1 mercury(II) from a 3:1 current-to-noise ratio, after optimization of the various parameters involved and at the highest possible analyser sensitivity.
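The precision and detection-limit figures follow standard formulas: the relative standard deviation of replicate determinations, and a detection limit from the 3:1 signal-to-noise criterion as three times the baseline noise divided by the calibration slope. A minimal sketch with illustrative data (not the paper's measurements):

```python
import numpy as np

def detection_limit(conc, peak_current, blank_noise):
    """Detection limit from a 3:1 signal-to-noise criterion:
    LOD = 3 * baseline noise / calibration-curve slope."""
    slope, _intercept = np.polyfit(conc, peak_current, 1)
    return 3.0 * blank_noise / slope

def rsd_percent(x):
    """Relative standard deviation (%) of replicate determinations."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Illustrative linear calibration (conc in ug/L, peak current in nA)
conc = np.array([0.0, 1.0, 2.0, 3.0])
current = np.array([0.0, 2.0, 4.0, 6.0])   # slope = 2 nA per ug/L
lod = detection_limit(conc, current, blank_noise=0.2)
prec = rsd_percent([9.0, 10.0, 11.0])
```

Maximizing the calibration slope, i.e. the analyser sensitivity, is exactly why optimizing the preconcentration parameters lowers the achievable detection limit.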


Resumo:

Sonar signal processing comprises a large number of signal processing algorithms implementing functions such as Target Detection, Localisation, Classification, Tracking, and Parameter Estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, which are primarily meant for stationary signals. The signals received by sonar sensors, however, are often non-stationary, so processing methods capable of handling non-stationarity will fare better than Fourier-transform-based methods. Time-frequency methods (TFMs) are among the best DSP tools for non-stationary signal processing, allowing signals to be analyzed in the time and frequency domains simultaneously. Apart from the STFT, however, TFMs have largely been limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in speech processing, image processing, and biomedical applications, but few in sonar processing. A structured effort to fill this lacuna by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution, and the Ambiguity Function, and their potential for implementing five major sonar functions has been demonstrated with very promising results. What this thesis conclusively brings out is that there is no one best TFM for all applications, but there is one best TFM for each application. Accordingly, the TFM has to be adapted and tailored in many ways to develop specific algorithms for each application.
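Of the TFMs named, the STFT is the simplest to sketch. A NumPy-only magnitude STFT applied to a nonstationary chirp shows the basic idea: the spectral peak moves upward as the frame index advances (all parameters here are illustrative):

```python
import numpy as np

def stft_mag(x, win_len=128, hop=32):
    """Magnitude STFT via a sliding Hann window; returns a
    (frequency, time) array of spectral magnitudes."""
    w = np.hanning(win_len)
    frames = [x[i:i + win_len] * w
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# Nonstationary test signal: linear chirp, 50 -> 200 Hz over 1 s at fs = 1 kHz
fs = 1000.0
t = np.arange(int(fs)) / fs
x = np.cos(2 * np.pi * (50.0 * t + 75.0 * t ** 2))  # f(t) = 50 + 150 t
S = stft_mag(x)
peak_first, peak_last = np.argmax(S[:, 0]), np.argmax(S[:, -1])
```

The quadratic TFMs (Wigner-Ville, Ambiguity Function) trade this simple linear picture for sharper time-frequency concentration at the cost of cross-terms, which is one reason the best TFM is application-dependent.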


Resumo:

This thesis covers various aspects of the modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, with linear models and Gaussian errors. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach parallel to the classical setup. The thesis mainly studies estimation and prediction in a signal-plus-noise model, where both the signal and the noise are assumed to follow models with symmetric stable innovations.

The thesis opens with motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite-variance models are discussed extensively in the second chapter, which also surveys existing theory and methods for infinite-variance models.

The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Both the signal and the noise are treated as stationary processes with infinite-variance innovations. Semi-infinite, doubly infinite, and asymmetric signal-extraction filters are derived from a minimum-dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed, and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance.

Parameter estimation for autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, using higher-order Yule-Walker-type estimation based on the auto-covariation function; the methods are illustrated by simulation and by application to sea-surface temperature data. The number of Yule-Walker equations is increased, and an ordinary least squares estimate of the autoregressive parameters is proposed. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using the singular value decomposition.

The fifth chapter introduces the partial auto-covariation function as a tool for stable time series analysis in cases where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. The Durbin-Levinson algorithm is generalized to infinite-variance models in terms of the partial auto-covariation function, and a new information criterion is introduced for consistent order estimation of stable autoregressive models.

Chapter six explores the application of these techniques in signal processing, specifically frequency estimation for sinusoidal signals observed in a symmetric stable noisy environment. A parametric spectrum analysis and a frequency estimate based on the power transfer function are introduced, with the power transfer function estimated via the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal; a modified version of the proposed information criterion is used for this purpose.
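The higher-order Yule-Walker idea, stacking more lag equations than unknowns and solving by ordinary least squares, can be sketched as follows. Sample autocovariances stand in for the auto-covariation function the thesis uses for infinite-variance innovations, so this is a finite-variance illustration only:

```python
import numpy as np

def hoyw_ar(x, p, extra=8):
    """Higher-order Yule-Walker AR(p) estimate: use lags 1..p+extra of
    r(k) = sum_j a_j r(k - j) and solve the overdetermined linear
    system for the coefficients a_j by ordinary least squares."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n, m = len(x), p + extra
    # Sample autocovariances r(0..m)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(m + 1)])
    # Row k of the system: [r(k-1), ..., r(k-p)] a = r(k), k = 1..m
    R = np.array([[r[abs(k - j)] for j in range(1, p + 1)]
                  for k in range(1, m + 1)])
    a, *_ = np.linalg.lstsq(R, r[1:m + 1], rcond=None)
    return a

# Recover a known AR(2): x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + e_t
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(2, len(x)):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + e[t]
a_hat = hoyw_ar(x, p=2)
```

When the stacked matrix is near-singular, as the thesis notes for the auto-covariation case, the least squares step is replaced by a truncated singular value decomposition.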