959 results for lithic artifacts
Abstract:
We present a case in which multi-phase post-mortem computed tomography angiography (PMCTA) induced a hemorrhagic pericardial effusion during the venous phase of angiography. Post-mortem non-contrast CT (PMCT) suggested the presence of a ruptured aortic dissection. This diagnosis was confirmed by PMCTA after pressure-controlled arterial injection of contrast. During the second phase of multi-phase PMCTA, contrast leakage from the inferior vena cava into the pericardial sac was noted. Autopsy confirmed the post-mortem nature of this vascular tear. This case teaches an important lesson: it underlines the necessity of critically analyzing PMCT and PMCTA images in order to distinguish between artifacts, true pathologies, and iatrogenic findings. In cases with ambiguous findings such as the one reported here, correlation of imaging findings with autopsy is essential.
Abstract:
Heinrich layers of the glacial North Atlantic record abrupt, widespread iceberg rafting of detrital carbonate and other lithic material at the extreme-cold culminations of Bond climate cycles. Both internal (glaciologic) and external (climate) forcings have been proposed. Here we suggest an explanation for the iceberg release that encompasses external climate forcing, on the basis of a new glaciological process recently witnessed along the Antarctic Peninsula: rapid disintegrations of fringing ice shelves induced by climate-controlled meltwater infilling of surface crevasses. We postulate that peripheral ice shelves, formed along the eastern Canadian seaboard during extreme cold conditions, would be vulnerable to sudden climate-driven disintegration during any climate amelioration. Ice-shelf disintegration would then be the source of Heinrich event icebergs.
Abstract:
Source materials such as fine art, over-sized or fragile maps, and delicate artifacts have traditionally been digitally converted through the use of controlled lighting and high-resolution scanners and camera backs. In addition, the capture of items such as general and special collections bound monographs has recently grown, both through consortial efforts like the Internet Archive's Open Content Alliance and locally at the individual institution level. These projects, in turn, have introduced increasingly higher-resolution consumer-grade digital single lens reflex cameras, or "DSLRs," as a significant part of the general cultural heritage digital conversion workflow. Central to the authors' discussion is the fact that both camera backs and DSLRs commonly share the ability to capture native raw file formats. Because these formats include such advantages as access to an image's raw mosaic sensor data within their architecture, many institutions choose raw for initial capture due to its high bit-level and unprocessed nature. However, to date these same raw formats, so important to many at the point of capture, have yet to be considered "archival" within most published still imaging standards, if they are considered at all. In many workflows, raw files are deleted after more traditionally "archival" uncompressed TIFF or JPEG 2000 files have been derived downstream from their raw source formats [1][2]. As a result, the authors examine the nature of raw anew and consider the basic questions: Should raw files be retained? What might their role be? Might they in fact form a new archival format space? Included in the discussion is a survey of assorted raw file types and their attributes. Also addressed are various sustainability issues as they pertain to archival formats, with special emphasis on both raw's positive and negative characteristics as they apply to archival practices. Current common archival workflows are compared with possible raw-based ones, in the context of each approach's differing levels of usable captured image data, various preservation virtues, and the divergent ideas of strictly fixed renditions versus the potential for improved renditions over time. Special attention is given to the DNG raw format through a detailed inspection of a number of its structural components and the roles they play in the format's latest specification. Finally, an evaluation is drawn of both proprietary raw formats in general and DNG in particular as possible alternative archival formats for still imaging.
Abstract:
BACKGROUND: To investigate whether non-rigid image registration reduces motion artifacts in triggered and non-triggered diffusion tensor imaging (DTI) of native kidneys. A secondary aim was to determine whether improvements through registration allow for omitting respiratory triggering. METHODS: Twenty volunteers underwent coronal DTI of the kidneys with nine b-values (10–700 s/mm²) at 3 Tesla. Image registration was performed using a multimodal non-rigid registration algorithm. Data processing yielded the apparent diffusion coefficient (ADC), the contribution of perfusion (FP), and the fractional anisotropy (FA). To compare data stability, the root mean square error (RMSE) of the fitting and the standard deviations within the regions of interest (SDROI) were evaluated. RESULTS: RMSEs decreased significantly after registration for both triggered and non-triggered scans (P < 0.05). SDROI for ADC, FA, and FP were significantly lower after registration in both medulla and cortex of triggered scans (P < 0.01). Similarly, the SDROI of FA and FP decreased significantly in non-triggered scans after registration (P < 0.05). RMSEs were significantly lower in triggered than in non-triggered scans, both with and without registration (P < 0.05). CONCLUSION: Respiratory motion correction by registration of individual echo-planar images leads to clearly reduced signal variations in renal DTI for both triggered and, particularly, non-triggered scans. Secondarily, the results suggest that respiratory triggering still seems advantageous. J. Magn. Reson. Imaging 2014. © 2014 Wiley Periodicals, Inc.
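The diffusion quantities above come from fitting a signal model across the nine b-values; purely for orientation, here is a minimal, self-contained sketch of such a fit (an IVIM-style bi-exponential on synthetic data; all parameter values and names are illustrative assumptions, not the study's pipeline):

```python
# Sketch: fit a bi-exponential IVIM-style model over multiple b-values to
# recover an ADC and a perfusion fraction FP, and report the fit RMSE.
# Synthetic, illustrative data only.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, fp, adc, d_star):
    # Perfusion (fast, d_star) and diffusion (slow, adc) compartments.
    return s0 * (fp * np.exp(-b * d_star) + (1 - fp) * np.exp(-b * adc))

b = np.array([10, 50, 100, 200, 300, 400, 500, 600, 700], dtype=float)  # s/mm^2
signal = ivim(b, 1.0, 0.2, 2.0e-3, 2.0e-2) + np.random.normal(0, 0.005, b.size)

popt, _ = curve_fit(ivim, b, signal, p0=[1.0, 0.1, 1.5e-3, 1.0e-2],
                    bounds=([0, 0, 0, 0], [2, 1, 5e-3, 1e-1]))
rmse = np.sqrt(np.mean((signal - ivim(b, *popt)) ** 2))  # fit-quality metric, as above
print(f"ADC={popt[2]:.2e} mm^2/s, FP={popt[1]:.2f}, RMSE={rmse:.4f}")
```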
Abstract:
To investigate the effect of metal implants in proton radiotherapy, dose distributions of different, clinically relevant treatment plans were measured in an anthropomorphic phantom and compared to treatment planning predictions. The anthropomorphic phantom, which is sliced into four segments in the cranio-caudal direction, is composed of tissue-equivalent materials and contains a titanium implant in a vertebral body in the cervical region. GafChromic® films were laid between the segments to measure the delivered 2D dose. Three different four-field plans were then applied: a Single-Field Uniform Dose (SFUD) plan, both with and without artifact correction, and an Intensity-Modulated Proton Therapy (IMPT) plan with the artifacts corrected. For corrections, the artifacts were manually outlined and their Hounsfield Units manually set to an average value for soft tissue. Results show a surprisingly good agreement between prescribed and delivered dose distributions when artifacts have been corrected, with more than 97% and 98% of points fulfilling the gamma criterion of 3%/3 mm for the SFUD and IMPT plans, respectively. In contrast, without artifact correction, up to 18% of measured points fail the gamma criterion of 3%/3 mm for the SFUD plan. These measurements indicate that manually correcting the reconstruction artifacts resulting from metal implants substantially improves the accuracy of the calculated dose distribution.
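For reference, the 3%/3 mm gamma criterion used above is the standard gamma index (after Low et al.): a measured point $\mathbf{r}_m$ passes if $\gamma(\mathbf{r}_m) \le 1$, where

$$
\gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c} \sqrt{\frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^2}{\Delta d^2} + \frac{\bigl(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\bigr)^2}{\Delta D^2}},
$$

with $D_c$ the calculated and $D_m$ the measured dose, $\Delta d = 3\,\mathrm{mm}$ the distance-to-agreement tolerance, and $\Delta D = 3\%$ the dose-difference tolerance.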
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, and participating media, in an elegant and unified framework. However, MCPT is a sampling-based approach and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, aiming for an optimal trade-off between minimizing residual noise artifacts and preserving edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest per-pixel rMSE, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies by ensuring that we sample densely only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
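As a toy illustration of the iterative scheme described above, consider noisy per-pixel Monte Carlo estimates of a 1D signal, a small filterbank of isotropic Gaussians, and extra samples routed to wherever the residual error estimate is largest (the signal, the crude MSE surrogate, and the schedule are all illustrative assumptions, not the thesis implementation):

```python
# Sketch: alternate per-pixel filter selection (reconstruction) with
# error-proportional sample allocation (adaptive sampling) on a 1D toy.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 256)) ** 2   # stand-in for the ideal image
counts = np.full(256, 4)                              # samples per pixel
sums = truth * counts + rng.normal(0, 1, 256) * np.sqrt(counts)
sigmas = (0.5, 1.0, 2.0, 4.0)                         # Gaussian filterbank

for _ in range(8):
    mean, var = sums / counts, 1.0 / counts           # per-pixel estimate and variance
    candidates = [gaussian_filter1d(mean, s) for s in sigmas]
    # Crude per-pixel MSE surrogate: filtered variance plus a squared-bias proxy.
    mses = [var / (2 * s) + (c - mean) ** 2 for s, c in zip(sigmas, candidates)]
    best = np.argmin(mses, axis=0)
    recon = np.choose(best, candidates)               # per-pixel best filter
    residual = np.min(mses, axis=0)
    extra = rng.multinomial(256 * 4, residual / residual.sum())  # error-driven batch
    sums += truth * extra + rng.normal(0, 1, 256) * np.sqrt(extra)
    counts += extra

print("final mean abs error:", np.abs(recon - truth).mean())
```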
Abstract:
We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
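Gradient-domain renderers ultimately reconstruct the final image from a noisy primal estimate and its sampled gradients, typically via a screened Poisson solve; here is a minimal, self-contained sketch on synthetic data (the least-squares formulation and forward-difference operators are illustrative assumptions, not the paper's implementation):

```python
# Sketch: screened Poisson reconstruction, min alpha*|I-P|^2 + |grad I - G|^2,
# combining a noisy primal image P with less noisy gradient estimates (Gx, Gy).
import numpy as np
from scipy.sparse import diags, eye, identity, kron, vstack
from scipy.sparse.linalg import lsqr

h = w = 32

def grad_op(n):
    return diags([-1, 1], [0, 1], shape=(n - 1, n))  # forward differences

Dx = kron(eye(h), grad_op(w))   # d/dx on the flattened image
Dy = kron(grad_op(h), eye(w))   # d/dy on the flattened image

truth = np.fromfunction(lambda y, x: np.sin(x / 4) + (y > 16), (h, w))
P = (truth + np.random.normal(0, 0.3, (h, w))).ravel()            # noisy primal
Gx = Dx @ truth.ravel() + np.random.normal(0, 0.05, Dx.shape[0])  # gradient estimates
Gy = Dy @ truth.ravel() + np.random.normal(0, 0.05, Dy.shape[0])

alpha = 0.1  # how much to trust the primal image relative to the gradients
A = vstack([alpha * identity(h * w), Dx, Dy])
b = np.concatenate([alpha * P, Gx, Gy])
I = lsqr(A, b)[0].reshape(h, w)  # least-squares screened Poisson solve
print("primal error:", np.abs(P.reshape(h, w) - truth).mean(),
      "recon error:", np.abs(I - truth).mean())
```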
Abstract:
Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
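A loose, illustrative sketch of the idea: pixels are repeatedly replaced by a robustly weighted neighborhood mean while a temperature parameter is annealed downward, so smoothing becomes progressively more edge-preserving (the Welsch-style weight, cooling schedule, and wrap-around borders are assumptions, not the paper's method):

```python
# Sketch: denoising by deterministic annealing with a robust neighbor weight.
import numpy as np

def anneal_denoise(img, t0=0.5, cooling=0.7, steps=6):
    out = img.astype(float).copy()
    t = t0
    for _ in range(steps):
        # 4-neighborhood via circular shifts (borders wrap around in this toy).
        neighbors = [np.roll(out, s, axis=a) for a in (0, 1) for s in (-1, 1)]
        acc, wsum = np.zeros_like(out), np.zeros_like(out)
        for nb in neighbors:
            w = np.exp(-((nb - out) ** 2) / (2 * t ** 2))  # robust (Welsch-style) weight
            acc += w * nb
            wsum += w
        out = (acc + out) / (wsum + 1)  # include the center pixel with unit weight
        t *= cooling                    # anneal: lower temperature, sharper weights
    return out

# Synthetic blocky "image" plus Gaussian noise.
noisy = np.kron(np.eye(8), np.ones((8, 8))) + np.random.normal(0, 0.2, (64, 64))
clean = anneal_denoise(noisy)
```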
Abstract:
Procurement of fresh tissue of prostate cancer is critical for biobanking and for the generation of xenograft models, an important preclinical step towards new therapeutic strategies in advanced prostate cancer. However, handling of fresh radical prostatectomy specimens has been notoriously challenging, given the distinctive physical properties of prostate tissue and the difficulty of identifying cancer foci on gross examination. Here, we have developed a novel approach using ceramic foam plates for processing freshly cut whole mount sections from radical prostatectomy specimens without compromising further diagnostic assessment. Forty-nine radical prostatectomy specimens were processed and sectioned from the apex to the base in whole mount slices. Putative carcinoma foci were morphologically verified by frozen section analysis. The fresh whole mount slices were then laid between two ceramic foam plates and fixed overnight. To test tissue preservation after this procedure, formalin-fixed and paraffin-embedded whole mount sections were stained with hematoxylin and eosin (H&E) and analyzed by immunohistochemistry and by fluorescence and silver in situ hybridization (FISH and SISH, respectively). There were no morphological artifacts on H&E-stained whole mount sections from slices that had been fixed between two plates of ceramic foam, and the histological architecture was fully retained. The quality of immunohistochemistry, FISH, and SISH was excellent. Fixing whole mount tissue slices between ceramic foam plates after frozen section examination is an excellent method for processing fresh radical prostatectomy specimens, allowing for precise identification and collection of fresh tumor tissue without compromising further diagnostic analysis.
Abstract:
Excavations of Neolithic (4000–3500 BC) and Late Bronze Age (1200–800 BC) wetland sites on the northern Alpine periphery have produced astonishing and detailed information about the life and human environment of prehistoric societies. It is even possible to reconstruct settlement histories and settlement dynamics, which suggest a high degree of mobility during the Neolithic. Archaeological finds, such as pottery, show local typological developments in addition to foreign influences. Furthermore, exogenous lithic forms indicate far-reaching interaction. Many hundreds of bronze artefacts are recorded from the Late Bronze Age settlements, demonstrating that some wetland sites were centres of bronzework production. Exogenous forms of bronzework are relatively rare in the wetland settlements during the Late Bronze Age. However, the products of the lake-settlements can be found widely across central Europe, indicating their continued involvement in interregional exchange partnerships. Potential motivations and dynamics of the relationships between sites and other regions of Europe will be detailed using case studies focussing on the settlements of Seedorf Lobsigensee (BE), Concise (VD), and Sutz-Lattrigen Hauptstation innen (BE), together with an initial assessment of intra-site connectivity through Network Analysis of sites within the region of Lake Neuchâtel, Lake Biel, and Lake Murten.
Abstract:
Pre-clinical studies using murine models are critical for understanding the pathophysiological mechanisms underlying immune-mediated disorders such as eosinophilic esophagitis (EoE). In this study, an optical coherence tomography (OCT) system capable of providing three-dimensional images with axial and transverse resolutions of 5 µm and 10 µm, respectively, was utilized to obtain esophageal images from a murine model of EoE-like disease ex vivo. Structural changes in the esophagus of wild-type (Tslpr(+/+)) and mutant (Tslpr(-/-)) mice with EoE-like disease were quantitatively evaluated, and food impaction sites in the esophagus of diseased mice were monitored using OCT. Here, the capability of OCT as a label-free imaging tool devoid of tissue-processing artifacts to effectively characterize murine models of EoE-like disease has been demonstrated.
Abstract:
Paper 1: Pilot study of Swiss firms
Abstract: Using a fixed effects approach, we investigate whether the presence of specific individuals on Swiss firms' boards affects firm performance and the policy choices firms make. We find evidence for a substantial impact of these directors' presence on their firms. Moreover, the director effects are correlated across policies and performance measures but uncorrelated with the directors' backgrounds. We find these results interesting but conclude that they should be substantiated on a dataset that is larger and better understood by researchers. Also, further tests are required to rule out methodological concerns.

Paper 2: Evidence from the S&P 1,500
Abstract: We ask whether directors on corporate boards contribute to firm performance as individuals. From the universe of S&P 1,500 firms since 1996, we track 2,062 directors who serve on multiple boards over extended periods of time. Our initial findings suggest that the presence of these directors is associated with substantial performance shifts (director fixed effects). Closer examination shows that these effects are statistical artifacts, and we conclude that directors are largely fungible. Moreover, we contribute to the discussion of the fixed effects method. In particular, we highlight that the selection of the randomization method is pivotal when generating placebo benchmarks.

Paper 3: Robustness, statistical power, and important directors
Abstract: This article provides a better understanding of Senn's (2014) findings: the outcome that individual directors are unrelated to firm performance proves robust against different estimation models and testing strategies. By looking at CEOs, the statistical power of the placebo benchmarking test is evaluated. We find that only the stronger tests are able to detect CEO fixed effects; however, these tests are not suitable for analyzing directors. The suitable tests would detect director effects if the interquartile range of the true effects amounted to 3 percentage points of ROA. As Senn (2014) finds no such effects for outside directors in general, we focus on groups of particularly important directors (e.g., COBs, non-busy directors, successful directors). Overall, our evidence suggests that the members of these groups are not individually associated with firm performance either. Thus, we confirm that individual directors are largely fungible. If an individual does have an effect on performance, it is of small magnitude.
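To illustrate the placebo-benchmark logic these papers turn on, here is a self-contained toy: a simulated panel with no true director effect, where the spread of estimated director fixed effects is compared against the spread obtained after shuffling director identities (sizes, names, and the IQR summary are illustrative assumptions, not the papers' estimation setup):

```python
# Sketch: placebo benchmark for person fixed effects on simulated panel data.
import numpy as np

rng = np.random.default_rng(1)
n_dirs, n_obs = 50, 3000
director = rng.integers(0, n_dirs, n_obs)
roa = rng.normal(0, 0.05, n_obs)  # performance with no true director effect

def effect_iqr(ids):
    # Director "fixed effects" as group means; summarize their spread by the IQR.
    means = np.array([roa[ids == d].mean() for d in range(n_dirs)])
    q75, q25 = np.percentile(means, [75, 25])
    return q75 - q25

observed = effect_iqr(director)
placebos = [effect_iqr(rng.permutation(director)) for _ in range(500)]
p_value = np.mean([p >= observed for p in placebos])
print(f"IQR of director effects: {observed:.4f}, placebo p-value: {p_value:.2f}")
```

Because the simulated effects are pure noise, the observed spread should look like a typical draw from the placebo distribution; a real director effect would push the observed IQR into the placebo tail.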
Abstract:
PURPOSE The goal of this study was to investigate whether different computed tomography (CT) energy levels could supply additional information for the differentiation of dental materials in forensic investigations. METHODS Nine commonly used restorative dental materials were investigated. A total of 75 human third molars were filled with the restorative dental materials and then scanned using the forensic reference phantom in single-source mode. The mean Hounsfield unit values and standard deviations (SDs) of each material were calculated at 120, 80, and 140 kVp. RESULTS Most of the dental materials could be differentiated at 120 kVp. We found that greater X-ray density of a material resulted in higher SDs and that the material volume could influence the measurements. CONCLUSION Differentiation of dental materials in CT was possible in many cases using single-energy CT scans at 120 kVp. Because of the number of dental restorative materials available, scanner and scan parameter dependence, and CT imaging artifacts, identification (in contrast to differentiation) was problematic.
Abstract:
OBJECTIVE We sought to evaluate the feasibility of k-t parallel imaging for accelerated 4D flow MRI in the hepatic vascular system by investigating the impact of different acceleration factors. MATERIALS AND METHODS k-t GRAPPA-accelerated 4D flow MRI of the liver vasculature was evaluated in 16 healthy volunteers at 3T with acceleration factors R = 3, R = 5, and R = 8 (2.0 × 2.5 × 2.4 mm³, TR = 82 ms), and R = 5 (TR = 41 ms); GRAPPA R = 2 was used as the reference standard. Qualitative flow analysis included grading of 3D streamlines and time-resolved particle traces. Quantitative evaluation assessed velocities, net flow, and wall shear stress (WSS). RESULTS Significant scan time savings were realized for all acceleration factors compared to standard GRAPPA R = 2 (21–71%) (p < 0.001). Quantification of velocities and net flow offered similar results for k-t GRAPPA R = 3 and R = 5 compared to standard GRAPPA R = 2. Significantly increased leakage artifacts and noise were seen between standard GRAPPA R = 2 and k-t GRAPPA R = 8 (p < 0.001), with significant underestimation of peak velocities and WSS of up to 31% in the hepatic arterial system (p < 0.05). WSS was significantly underestimated by up to 13% in all vessels of the portal venous system for k-t GRAPPA R = 5, while significantly higher values were observed for the same acceleration with higher temporal resolution in two veins (p < 0.05). CONCLUSION k-t acceleration of 4D flow MRI is feasible for liver hemodynamic assessment with acceleration factors R = 3 and R = 5, resulting in a scan time reduction of at least 40% with quantitation of liver hemodynamics similar to GRAPPA R = 2.
Abstract:
Code clone detection helps connect developers across projects, if done on a large scale. The cornerstones that allow clone detection to work at scale are: (1) bad hashing, (2) lightweight parsing using regular expressions, and (3) MapReduce pipelines. Bad hashing means determining whether or not two artifacts are similar by checking whether their hashes are identical; we show a bad hashing scheme that works well on source code (see the sketch below). Lightweight parsing using regular expressions is our technique for obtaining entire parse trees from regular expressions, robustly and efficiently; we detail the algorithm and implementation of one such regular expression engine. MapReduce pipelines are a way of expressing a computation such that it can be parallelized automatically and simply; we detail the design and implementation of one such MapReduce pipeline that is efficient and debuggable. We show a clone detector that combines these cornerstones to detect code clones across all projects, across all versions of each project.
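A minimal sketch of the bad-hashing idea (the normalization rules below are illustrative, not the thesis's actual scheme): normalize a fragment so superficial differences vanish, then call two fragments clones exactly when their hashes match.

```python
# Sketch: "bad hashing" for clone detection via normalization + exact hash match.
import hashlib
import re

def bad_hash(code: str) -> str:
    tokens = re.findall(r"[A-Za-z_]\w*|\S", code)
    # Map every identifier to a placeholder so renamed clones collide.
    normalized = " ".join("ID" if re.match(r"[A-Za-z_]", t) else t for t in tokens)
    return hashlib.sha1(normalized.encode()).hexdigest()

a = "total = total + price"
b = "sum   = sum   + cost"          # identifiers renamed, structure identical
print(bad_hash(a) == bad_hash(b))   # True: reported as clones by hash equality
```

Exact hash equality also makes the comparison embarrassingly parallel, which is presumably why it pairs well with a MapReduce pipeline: candidate clones can be grouped by hash key in a single shuffle.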