993 results for RAY-TRACING ALGORITHM
Abstract:
To maximise data output from single-shot astronomical images, the rejection of cosmic rays is important. We present the results of a benchmark trial comparing various cosmic ray rejection algorithms. The procedures assess relative performances and characteristics of the processes in cosmic ray detection, rates of false detections of true objects, and the quality of image cleaning and reconstruction. The cosmic ray rejection algorithms developed by Rhoads (2000, PASP, 112, 703), van Dokkum (2001, PASP, 113, 1420), Pych (2004, PASP, 116, 148), and the IRAF task xzap by Dickinson are tested using both simulated and real data. It is found that detection efficiency is independent of the density of cosmic rays in an image, being more strongly affected by the density of real objects in the field. As expected, spurious detections and alterations to real data in the cleaning process are also significantly increased by high object densities. We find Rhoads' linear filtering method to produce the best performance in the detection of cosmic ray events; however, the popular van Dokkum algorithm exhibits the highest overall performance in terms of detection and cleaning.
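The abstract does not spell out any of the benchmarked algorithms, but the family they belong to can be illustrated with a toy median-clip detector: flag any pixel that stands far above the median of its local neighbourhood. This is only a hedged sketch (pure Python, illustrative threshold), not Rhoads' linear filter or van Dokkum's Laplacian method:

```python
from statistics import median

def flag_cosmic_rays(img, k=5.0, noise=1.0):
    """Flag pixels exceeding the median of their 3x3 neighbourhood
    by more than k times the expected noise (toy median-clip detector)."""
    rows, cols = len(img), len(img[0])
    flagged = set()
    for r in range(rows):
        for c in range(cols):
            neigh = [img[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))
                     if (rr, cc) != (r, c)]
            if img[r][c] - median(neigh) > k * noise:
                flagged.add((r, c))
    return flagged

# A flat 5x5 frame with one single-pixel "cosmic ray" hit:
frame = [[10.0] * 5 for _ in range(5)]
frame[2][3] = 60.0
print(flag_cosmic_rays(frame))  # -> {(2, 3)}
```

Real detectors must also avoid flagging the cores of true point sources, which is exactly why the benchmark measures false detections as a function of object density.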
Abstract:
We report the formation and structural properties of co-crystals containing gemfibrozil and hydroxy derivatives of t-butylamine H2NC(CH3)3-n(CH2OH)n, with n=0, 1, 2 and 3. In each case, a 1:1 co-crystal is formed, with transfer of a proton from the carboxylic acid group of gemfibrozil to the amino group of the t-butylamine derivative. All of the co-crystal materials prepared are polycrystalline powders, and do not contain single crystals of suitable size and/or quality for single crystal X-ray diffraction studies. Structure determination of these materials has been carried out directly from powder X-ray diffraction data, using the direct-space Genetic Algorithm technique for structure solution followed by Rietveld refinement. The structural chemistry of this series of co-crystal materials reveals well-defined structural trends within the first three members of the family (n=0, 1, 2), but significantly contrasting structural properties for the member with n=3.
Abstract:
Aims. Long gamma-ray bursts (LGRBs) are associated with the deaths of massive stars and might therefore be a potentially powerful tool for tracing cosmic star formation. However, especially at low redshifts (z < 1.5), LGRBs seem to prefer particular types of environment. Our aim is to study the host galaxies of a complete sample of bright LGRBs to investigate the effect of the environment on GRB formation. Methods. We studied host galaxy spectra of the Swift/BAT6 complete sample of 14 z < 1 bright LGRBs. We used the detected nebular emission lines to measure the dust extinction, star formation rate (SFR), and nebular metallicity (Z) of the hosts and supplemented the data set with previously measured stellar masses M_*. The distributions of the obtained properties and their interrelations (e.g. mass-metallicity and SFR-M_* relations) are compared to samples of field star-forming galaxies. Results. We find that LGRB hosts at z < 1 have on average lower SFRs than if they were direct star formation tracers. By directly comparing metallicity distributions of LGRB hosts and star-forming galaxies, we find a good match between the two populations up to 12 + log(O/H) ~ 8.4-8.5, after which the paucity of metal-rich LGRB hosts becomes apparent. The LGRB host galaxies of our complete sample are consistent with the mass-metallicity relation at similar mean redshift and stellar masses. The cutoff against high metallicities (and high masses) can explain the low SFR values of LGRB hosts. We find a hint of an increased incidence of starburst galaxies in the Swift/BAT6 z < 1 sample with respect to that of a field star-forming population. Given that the SFRs are low on average, the latter is ascribed to low stellar masses.
Nevertheless, the limits on the completeness and metallicity availability of current surveys, coupled with the limited number of LGRB host galaxies, prevent us from investigating more quantitatively whether the starburst incidence is as expected after taking into account the high-metallicity aversion of LGRB host galaxies.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection [FBP] vs. Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
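For reference, CNR — the simple metric found above to correlate poorly with human performance — is conventionally computed from ROI statistics. A minimal sketch, assuming the usual mean-difference-over-background-noise definition (the toy pixel values are illustrative, not study data):

```python
from statistics import mean, stdev

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: absolute ROI-mean difference divided
    by the sample standard deviation of the background ROI."""
    return abs(mean(signal_roi) - mean(background_roi)) / stdev(background_roi)

# Toy pixel samples (HU) from a lesion ROI and a background ROI:
lesion = [105.0, 103.0, 104.0, 106.0]
background = [100.0, 102.0, 98.0, 100.0]
print(round(cnr(lesion, background), 2))  # -> 2.76
```

One weakness of CNR, consistent with the finding above, is that it ignores noise correlations and spatial frequency content, which the channelized Hotelling and non-prewhitening observers explicitly model.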
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
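The image subtraction idea can be sketched as follows: subtracting two repeated scans cancels the deterministic phantom content, and the standard deviation of the difference image, divided by sqrt(2), estimates the quantum noise (assuming independent, equal-variance noise in the two acquisitions). Synthetic 1-D data only, as a self-contained illustration:

```python
import random

# Two repeated "scans" of the same phantom differ only in quantum noise,
# so Var(scan1 - scan2) = 2 * sigma_noise**2 for independent noise.
random.seed(0)
true_sigma = 5.0
n = 20000
scan1 = [100.0 + random.gauss(0, true_sigma) for _ in range(n)]
scan2 = [100.0 + random.gauss(0, true_sigma) for _ in range(n)]

diff = [a - b for a, b in zip(scan1, scan2)]
m = sum(diff) / n
var = sum((d - m) ** 2 for d in diff) / (n - 1)
sigma_est = (var / 2.0) ** 0.5  # halve the variance, then take the root
print(round(sigma_est, 2))
```

For iterative reconstructions the noise is object-dependent, so in practice this subtraction must be done per phantom (uniform vs. textured), which is exactly what exposed the 44% noise increase reported above.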
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
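The analytical lesion models described above (size, shape, contrast, edge profile) can be illustrated with a radially symmetric toy model: a flat core whose contrast falls off through a sigmoid edge, voxelized and added to an image to form a "hybrid". All parameters and the specific functional form here are hypothetical, not the dissertation's actual models:

```python
import math

def lesion_value(r, contrast, radius, edge_width):
    """Radially symmetric lesion: flat core with a sigmoid edge profile."""
    return contrast / (1.0 + math.exp((r - radius) / edge_width))

def insert_lesion(img, cx, cy, contrast, radius, edge_width):
    """Voxelize the analytical model and add it to a 2-D image in place,
    producing a 'hybrid' image with exactly known ground truth."""
    for y in range(len(img)):
        for x in range(len(img[0])):
            r = math.hypot(x - cx, y - cy)
            img[y][x] += lesion_value(r, contrast, radius, edge_width)
    return img

# Insert a subtle -15 HU lesion into a flat 9x9 background:
hybrid = insert_lesion([[0.0] * 9 for _ in range(9)],
                       cx=4, cy=4, contrast=-15.0, radius=2.0, edge_width=0.5)
```

The key advantage noted in the abstract carries over even to this sketch: because the lesion is an equation, its true location, size, and contrast are known exactly when assessing detectability or estimability.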
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
The protein folding problem has been one of the most challenging subjects in biological physics due to its complexity. Energy landscape theory based on statistical mechanics provides a thermodynamic interpretation of the protein folding process. We have been working to answer fundamental questions about protein-protein and protein-water interactions, which are very important for describing the energy landscape surface of proteins correctly. First, we present a new method for computing protein-protein interaction potentials of solvated proteins directly from SAXS data. An ensemble of proteins was modeled by Metropolis Monte Carlo and Molecular Dynamics simulations, and the global X-ray scattering of the whole model ensemble was computed at each snapshot of the simulation. The interaction potential model was optimized and iterated by a Levenberg-Marquardt algorithm. Second, we report that terahertz spectroscopy directly probes hydration dynamics around proteins and determines the size of the dynamical hydration shell. We also present the sequence and pH-dependence of the hydration shell and the effect of hydrophobicity. In addition, kinetic terahertz absorption (KITA) spectroscopy is introduced to study the refolding kinetics of ubiquitin and its mutants. KITA results are compared to small angle X-ray scattering, tryptophan fluorescence, and circular dichroism results. We propose that KITA monitors the rearrangement of hydrogen bonding during secondary structure formation. Finally, we present the development of the automated single molecule operating system (ASMOS) for a high-throughput single molecule detector, which levitates a single protein molecule in a 10 µm diameter droplet by laser guidance. I have also performed supporting calculations and simulations with my own program codes.
Abstract:
The discovery of scaling relations between the mass of the SMBH and some key physical properties of the host galaxy suggests that the growth of the SMBH and that of the galaxy are coupled, with the AGN activity and the star-formation (SF) processes influencing each other. Although the mechanisms of this co-evolution are still a matter of debate, all scenarios agree that a key phase of the co-evolution is represented by the obscured accretion phase. This phase of the co-evolution is the least studied, mostly due to the challenge of detecting and recognizing such obscured AGN. My thesis aims at investigating the AGN-galaxy co-evolution paradigm by identifying and studying AGN in the obscured accretion phase. The study of obscured AGN is key for our understanding of the feedback processes and of the mutual influence of the SF and the AGN activity. Moreover, these obscured and elusive AGN are needed to explain the X-ray background spectrum and to reconcile the measurements and the theoretical prediction of the BH accretion rate density. In this thesis, we first investigate the synergies between IR and X-ray missions in detecting and characterizing AGN, with a particular focus on the most obscured ones. We exploited UV/optical emission lines to select high-redshift obscured AGN at the cosmic noon, where the highest SFR density and BH accretion rate density are expected. We provide X-ray spectral analysis and UV-to-far-IR SED-fitting. We show that our samples host a significant fraction of very obscured sources; many of these are highly accreting. Finally, we perform a thorough investigation of a galaxy at z~5 with unusual and peculiar features, which led us to identify a second, extremely young population of stars and hidden AGN activity.
Abstract:
At the center of galaxy clusters, a dramatic interplay known as feedback cycle occurs between the hot intracluster medium (ICM) and the active galactic nucleus (AGN) of the central galaxy. The footprints of this interplay are evident from X-ray observations of the ICM, where X-ray cavities and shock fronts are associated with radio lobe emission tracing energetic AGN outbursts. While such jet activity reduces the efficiency of the hot gas to cool to lower temperatures, residual cooling can generate warm and cold gas clouds around the central galaxy. The condensed gas parcels can ultimately reach the core of the galaxy and be accreted by the AGN. This picture is the result of tremendous advances over the last three decades. Yet, a deeper understanding of the details of how the heating–cooling regulation is achieved and maintained is still missing. In this Thesis, we delve into key aspects of the feedback cycle. To this end, we leverage high-resolution (sub-arcsecond), multifrequency observations (mainly X-ray and radio) of several top-level facilities (e.g., Chandra, JVLA, VLBA, LOFAR). First, we investigate which conditions trigger a feedback response to gas cooling, by studying the properties of clusters where feedback is just about to start. Then, we focus on the details of how the AGN–ICM interaction progresses by examining cavity and shock heating in the cluster RBS797, an exemplary case of the jet feedback paradigm. Furthermore, we explore the importance of shock heating and the coupling of distinct jet power regimes (i.e., FRII, FRI and FR0 radio galaxies) to the environment. Ultimately, as heating models rely on the connection between the direct evidence (the jets) and the smoking gun (the X-ray cavities) of feedback, we examine the cases in which these two are dramatically misaligned.
Abstract:
Lipidic mixtures present a particular phase change profile highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail in the correct description of the phase behavior, and their inadequacy increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure to depict the SLE of fatty binary mixtures presenting solid solutions, namely the Crystal-T algorithm. Considering the non-ideality of both liquid and solid phases, this algorithm is aimed at determining the temperatures at which the first and the last crystal of the mixture melt. The evaluation is focused on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess Gibbs energy based equations, and the group contribution UNIFAC model for the calculation of the activity coefficients of both liquid and solid phases. Very low deviations between theoretical and experimental data evidenced the strength of the algorithm, contributing to the enlargement of the scope of SLE modeling.
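The abstract does not give the Crystal-T equations, but the kind of calculation involved can be illustrated with the textbook ideal-solubility relation, ln(x·γ) = -(ΔH_fus/R)(1/T - 1/T_m), solved for the liquidus temperature by bisection. Setting γ = 1 keeps the sketch self-contained; the actual algorithm uses UNIFAC activity coefficients for both phases and handles solid solutions, which this toy does not:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def liquidus_T(x, dH_fus, T_m, gamma=1.0, lo=150.0, hi=None):
    """Solve ln(x*gamma) = -(dH_fus/R) * (1/T - 1/T_m) for the
    temperature T at which the first crystal melts, by bisection.
    x: liquid mole fraction; dH_fus in J/mol; T_m, T in kelvin."""
    if hi is None:
        hi = T_m  # the liquidus of a dilute component lies below its T_m
    f = lambda T: math.log(x * gamma) + (dH_fus / R) * (1.0 / T - 1.0 / T_m)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid  # root is in the lower half-interval
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Bisection is used instead of the closed-form solution because once γ depends on composition and temperature (as with UNIFAC), no closed form exists.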
Abstract:
Although MRI is utilized for planning the resection of soft-tissue tumors, it is not always capable of differentiating benign from malignant lesions. The risk of local recurrence of soft-tissue sarcomas is increased when biopsies are performed before resection and by inadequate resections. PET associated with computed tomography using fluorodeoxyglucose labeled with fluorine-18 ((18)F-FDG PET/CT) may help differentiate between benign and malignant tumors, thus avoiding inadequate resections and making prior biopsies unnecessary. The purpose of this study was to evaluate the usefulness of (18)F-FDG PET/CT in differentiating benign from malignant solid soft-tissue lesions. Patients with solid lesions of the limbs or abdominal wall detected by MRI were submitted to (18)F-FDG PET/CT. The maximum standardized uptake value (SUVmax) cutoff was determined to differentiate malignant from benign tumors. Regardless of the (18)F-FDG PET/CT results, all patients underwent biopsy and surgery. MRI was performed in 54 patients, and 10 patients were excluded because of purely lipomatous or cystic lesions. (18)F-FDG PET/CT was performed in the remaining 44 patients. Histopathology revealed 26 (59%) benign and 18 (41%) malignant soft-tissue lesions. A significant difference in SUVmax was observed between benign and malignant soft-tissue lesions. The SUVmax cutoff of 3.0 differentiated malignant from benign lesions with 100% sensitivity, 83.3% specificity, 89.6% accuracy, 78.3% positive predictive value, and 100% negative predictive value. (18)F-FDG PET/CT seems to be able to differentiate benign from malignant soft-tissue lesions with good accuracy and very high negative predictive value. Incorporating (18)F-FDG PET/CT into the diagnostic algorithm of these patients may prevent inadequate resections and unnecessary biopsies.
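The reported sensitivity and specificity at the SUVmax cutoff of 3.0 follow from the usual confusion-matrix definitions. A sketch with illustrative values (not the study's 44-patient cohort):

```python
def sens_spec(suv_values, is_malignant, cutoff=3.0):
    """Classify lesions as malignant when SUVmax >= cutoff and compare
    against histopathology to obtain sensitivity and specificity."""
    tp = sum(1 for s, m in zip(suv_values, is_malignant) if s >= cutoff and m)
    fn = sum(1 for s, m in zip(suv_values, is_malignant) if s < cutoff and m)
    tn = sum(1 for s, m in zip(suv_values, is_malignant) if s < cutoff and not m)
    fp = sum(1 for s, m in zip(suv_values, is_malignant) if s >= cutoff and not m)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: (SUVmax, malignant on histopathology?)
suv = [1.2, 2.5, 3.4, 5.0, 6.1, 2.8, 3.1, 0.9]
mal = [False, False, True, True, True, False, False, False]
sens, spec = sens_spec(suv, mal)
print(sens, spec)  # -> 1.0 0.8
```

The study's 100% sensitivity with imperfect specificity is the pattern one wants when the clinical goal is a high negative predictive value, i.e., safely ruling out malignancy.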
Abstract:
In this work, the energy response functions of a CdTe detector were obtained by Monte Carlo (MC) simulation in the energy range from 5 to 160 keV, using the PENELOPE code. In the response calculations the carrier transport features and the detector resolution were included. The computed energy response function was validated through comparison with experimental results obtained with (241)Am and (152)Eu sources. In order to investigate the influence of the correction by the detector response at the diagnostic energy range, x-ray spectra were measured using a CdTe detector (model XR-100T, Amptek) and then corrected by the energy response of the detector using the stripping procedure. Results showed that the CdTe detector exhibits good energy response at low energies (below 40 keV), showing only small distortions in the measured spectra. For energies below about 80 keV, the contribution of the escape of Cd- and Te-K x-rays produces significant distortions in the measured x-ray spectra. For higher energies, the most important corrections are the detector efficiency and the carrier trapping effects. The results showed that, after correction by the energy response, the measured spectra are in good agreement with those provided by a theoretical model from the literature. Finally, our results showed that detailed knowledge of the response function and a proper correction procedure are fundamental for achieving more accurate spectra from which quality parameters (i.e., half-value layer and homogeneity coefficient) can be determined.
Abstract:
X-ray fluorescence (XRF) is a fast, low-cost, nondestructive, and truly multielement analytical technique. The objectives of this study are to quantify the amount of Na(+) and K(+) in samples of table salt (refined, marine, and light) and to compare three different methodologies of quantification using XRF. A fundamental parameter method revealed difficulties in accurately quantifying lighter elements (Z < 22). A univariate methodology based on peak area calibration is an attractive alternative, even though additional steps of data manipulation might consume some time. Quantifications were performed with good correlations for both Na (r = 0.974) and K (r = 0.992). A partial least-squares (PLS) regression method with five latent variables was very fast. Na(+) quantifications provided calibration errors lower than 16% and a correlation of 0.995. Of great concern was the observation of high Na(+) levels in low-sodium salts. The presented application may be performed in a fast and multielement fashion, in accordance with Green Chemistry specifications.
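The univariate peak-area methodology amounts to an ordinary least-squares calibration line relating measured peak area to known analyte content, then inverting it for unknown samples. A minimal sketch with hypothetical numbers (the paper's calibration data are not given in the abstract):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a univariate calibration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical calibration: Na peak areas vs. known Na content (wt%)
areas = [120.0, 245.0, 360.0, 482.0]
content = [10.0, 20.0, 30.0, 40.0]
slope, intercept = fit_line(areas, content)
predict = lambda area: slope * area + intercept
```

The extra "data manipulation" the abstract mentions (background subtraction, peak integration) happens before this step; the PLS alternative instead regresses the concentrations on whole spectral regions.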
Abstract:
Diagnostic imaging techniques play an important role in assessing the exact location, cause, and extent of a nerve lesion, thus allowing clinicians to diagnose and manage more effectively a variety of pathological conditions, such as entrapment syndromes, traumatic injuries, and space-occupying lesions. Ultrasound and nuclear magnetic resonance imaging are becoming useful methods for this purpose, but they still lack spatial resolution. In this regard, recent phase contrast x-ray imaging experiments of peripheral nerve allowed the visualization of each nerve fiber surrounded by its myelin sheath as clearly as optical microscopy. In the present study, we attempted to produce high-resolution x-ray phase contrast images of a human sciatic nerve by using synchrotron radiation propagation-based imaging. The images showed high contrast and high spatial resolution, allowing clear identification of each fascicle structure and surrounding connective tissue. The outstanding result is the detection of such structures by phase contrast x-ray tomography of a thick human sciatic nerve section. This may further enable the identification of diverse pathological patterns, such as Wallerian degeneration, hypertrophic neuropathy, inflammatory infiltration, leprosy neuropathy and amyloid deposits. To the best of our knowledge, this is the first successful phase contrast x-ray imaging experiment of a human peripheral nerve sample. Our long-term goal is to develop peripheral nerve imaging methods that could supersede biopsy procedures.
Abstract:
PURPOSE: To compare the Full Threshold (FT) and SITA Standard (SS) strategies in glaucomatous patients undergoing automated perimetry for the first time. METHODS: Thirty-one glaucomatous patients who had never undergone perimetry underwent automated perimetry (Humphrey, program 30-2) with both FT and SS on the same day, with an interval of at least 15 minutes. The order of the examinations was randomized, and only one eye per patient was analyzed. Three analyses were performed: a) all the examinations, regardless of the order of application; b) only the first examinations; c) only the second examinations. In order to calculate the sensitivity of both strategies, the following criteria were used to define abnormality: glaucoma hemifield test (GHT) outside normal limits, pattern standard deviation (PSD) <5%, or a cluster of 3 adjacent points with p<5% at the pattern deviation probability plot. RESULTS: When the results of all examinations were analyzed regardless of the order in which they were performed, the number of depressed points with p<0.5% in the pattern deviation probability map was significantly greater with SS (p=0.037), and the sensitivities were 87.1% for SS and 77.4% for FT (p=0.506). When only the first examinations were compared, there were no statistically significant differences regarding the number of depressed points, but the sensitivity of SS (100%) was significantly greater than that obtained with FT (70.6%) (p=0.048). When only the second examinations were compared, there were no statistically significant differences regarding the number of depressed points, and the sensitivities of SS (76.5%) and FT (85.7%) did not differ significantly (p=0.664). CONCLUSION: SS may have a higher sensitivity than FT in glaucomatous patients undergoing automated perimetry for the first time. However, this difference tends to disappear in subsequent examinations.
Abstract:
The reactions of meso-1,2-bis(phenylsulfinyl)ethane (meso-bpse) with Ph2SnCl2, 2-phenyl-1,3-dithiane trans-1-trans-3-dioxide (pdtd) with n-Bu2SnCl2 and 1,2-cis-bis-(phenylsulfinyl)ethene (rac-,cis-cbpse) with Ph2SnCl2, in 1:1 molar ratio, yielded [{Ph2SnCl2(meso-bpse)}n], [{n-Bu2SnCl2(pdtd)}2] and [{Ph2SnCl2(rac,cis-cbpse)}x] (x = 2 or n), respectively. All adducts were studied by IR, Mössbauer and 119Sn NMR spectroscopic methods, elemental analysis and single crystal X-ray diffractometry. The X-ray crystal structure of [{Ph2SnCl2(meso-bpse)}n] revealed the occurrence of infinite chains in which the tin(IV) atoms appear in a distorted octahedral geometry with Cl atoms in cis and Ph groups in trans positions. The X-ray crystal structure of [{n-Bu2SnCl2(pdtd)}2] revealed discrete centrosymmetric dimeric species in which the tin(IV) atoms possess a distorted octahedral geometry with bridging disulfoxides in cis and n-butyl moieties in trans positions. The spectroscopic data indicated that the adduct containing the rac,cis-cbpse ligand can be dimeric or polymeric. The X-ray structural analysis of the free rac-,cis-cbpse sulfoxide revealed that the crystals belong to the C2/c space group.
Abstract:
A practical method for the structural assignment of 3,4-O-benzylidene-D-ribono-1,5-lactones and analogues using conventional NMR techniques and NOESY measurements in solution is described. 2-O-Acyl-3,4-O-benzylidene-D-ribono-1,5-lactones were prepared in good yields by acylation of Zinner’s lactone with acyl chlorides under mildly basic conditions. Structural determination of 2-O-(4-nitrobenzoyl)-3,4-O-benzylidene-D-ribono-1,5-lactone was achieved by single crystal x-ray diffraction, which supports the results based on spectroscopic data.