95 results for ARTIFACTS
Abstract:
Online reputation management deals with monitoring and influencing the online record of a person, an organization or a product. The Social Web offers increasingly simple ways to publish and disseminate personal or opinionated information, which can rapidly have a disastrous influence on the online reputation of some of the entities. This dissertation can be split into three parts: In the first part, possible fuzzy clustering applications for the Social Semantic Web are investigated. The second part explores promising Social Semantic Web elements for organizational applications, while in the third part the former two parts are brought together and a fuzzy online reputation analysis framework is introduced and evaluated. The entire PhD thesis is based on literature reviews as well as on argumentative-deductive analyses. The possible applications of Social Semantic Web elements within organizations have been researched using a scenario and an additional case study together with two ancillary case studies based on qualitative interviews. For the conception and implementation of the online reputation analysis application, a conceptual framework was developed. Employing test installations and prototyping, the essential parts of the framework have been implemented. By following a design science research approach, this PhD thesis has created two artifacts: a framework and a prototype as proof of concept. Both artifacts hinge on two core elements: a (cluster analysis-based) translation of tags used in the Social Web into a computer-understandable fuzzy grassroots ontology for the Semantic Web, and a (Topic Maps-based) knowledge representation system, which facilitates a natural interaction with the fuzzy grassroots ontology. This is beneficial to the identification of unknown but essential Web data that could not be realized through conventional online reputation analysis. The inherent structure of natural language supports humans not only in communication but also in the perception of the world. Fuzziness is a promising tool for transforming those human perceptions into computer artifacts. Through fuzzy grassroots ontologies, the Social Semantic Web becomes more natural and can thus streamline online reputation management.
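As an illustration of the cluster-analysis step mentioned above, the following minimal sketch shows how fuzzy c-means could assign Social Web tags graded memberships in several concepts, which is the kind of soft assignment a fuzzy grassroots ontology builds on. The tag names, co-occurrence vectors, and cluster count are illustrative assumptions, not the thesis's actual data or implementation.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns cluster centers and a membership matrix
    in which each sample (tag) belongs to every cluster with a degree in [0, 1]."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1 per tag
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (dist ** (2 / (m - 1)))           # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Hypothetical tag co-occurrence vectors (rows = tags, columns = documents).
tags = ["phone", "smartphone", "camera", "battery", "service"]
X = np.array([[3, 1, 0, 2], [4, 2, 1, 3], [1, 4, 3, 0], [0, 1, 4, 2], [2, 0, 1, 4]], float)
centers, U = fuzzy_c_means(X, n_clusters=2)
for tag, memberships in zip(tags, U):
    print(tag, np.round(memberships, 2))            # graded concept membership per tag
```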
Abstract:
Time series of geocenter coordinates were determined with data of two global navigation satellite systems (GNSSs), namely the U.S. GPS (Global Positioning System) and the Russian GLONASS (Global’naya Nawigatsionnaya Sputnikowaya Sistema). The data were recorded in the years 2008–2011 by a global network of 92 permanently observing GPS/GLONASS receivers. Two types of daily solutions were generated independently for each GNSS, one including the estimation of geocenter coordinates and one without these parameters. A fair agreement for GPS and GLONASS was found in the geocenter x- and y-coordinate series. Our tests, however, clearly reveal artifacts in the z-component determined with the GLONASS data. Large periodic excursions in the GLONASS geocenter z-coordinates of about 40 cm peak-to-peak are related to the maximum elevation angles of the Sun above/below the orbital planes of the satellite system and thus have a period of about 4 months (a third of a year). A detailed analysis revealed that the artifacts are almost uniquely governed by the differences of the estimates of direct solar radiation pressure (SRP) in the two solution series (with and without geocenter estimation). A simple formula is derived, describing the relation between the geocenter z-coordinate and the corresponding parameter of the SRP. The effect can be explained by first-order perturbation theory of celestial mechanics. The theory also predicts a heavy impact on the GNSS-derived geocenter if once-per-revolution SRP parameters are estimated in the direction of the satellite’s solar panel axis. Specific experiments using GPS observations revealed that this is indeed the case. Although the main focus of this article is on GNSS, the theory developed is applicable to all satellite observing techniques. We applied the theory to satellite laser ranging (SLR) solutions using LAGEOS. It turns out that the correlation between geocenter and SRP parameters is not a critical issue for the SLR solutions. The reasons are threefold: the direct SRP is about a factor of 30–40 smaller for typical geodetic SLR satellites than for GNSS satellites, allowing, in most cases, SRP parameters not to be solved for (ruling out the correlation between these parameters and the geocenter coordinates); the orbital arc length of 7 days (which is typically used in SLR analysis) contains more than 50 revolutions of the LAGEOS satellites as compared to about two revolutions of GNSS satellites for the daily arcs used in GNSS analysis; and the orbit geometry is not as critical for LAGEOS as for GNSS satellites, because the elevation angle of the Sun w.r.t. the orbital plane usually changes significantly over 7 days.
Abstract:
Synaesthesia denotes a condition of remarkable individual differences in experience, characterized by specific additional experiences in response to normal sensory input. Synaesthesia (i) seems to run in families, which suggests a genetic component, (ii) is associated with marked structural and functional neural differences, and (iii) is usually reported to exist from early childhood. Hence, synaesthesia is generally regarded as a congenital phenomenon. However, most synaesthetic experiences are triggered by cultural artifacts (e.g., letters, musical sounds). Evidence exists to suggest that synaesthetic experiences are triggered by the conceptual representation of their inducer stimuli. Cases have been identified in which the specific synaesthetic associations are related to prior experiences, and large-scale studies show that grapheme-color associations in synaesthesia are not completely random. Hence, a learning component is inherently involved in the development of specific synaesthetic associations. Researchers have hypothesized that associative learning is the critical mechanism. Recently, it has become of scientific and public interest whether synaesthetic experiences may be acquired by means of associative training procedures and whether the gains of these trainings are associated with cognitive benefits similar to those of genuine synaesthetic experiences. In order to shed light on these issues and to inform synaesthesia researchers and the generally interested public alike, we provide a comprehensive literature review on developmental aspects of synaesthesia and specific training procedures in non-synaesthetes. In light of a clear working definition of synaesthesia, we come to the conclusion that synaesthesia can potentially be learned by appropriate training.
Abstract:
Currently, the contributions of Starlette, Stella, and AJISAI are not taken into account when defining the International Terrestrial Reference Frame (ITRF), despite the large amount of data collected over a long time span. Consequently, the SLR-derived parameters and the SLR part of the ITRF are almost exclusively defined by LAGEOS-1 and LAGEOS-2. We investigate the potential of combining the observations to several SLR satellites with different orbital characteristics. Ten years of SLR data are homogeneously processed using the development version 5.3 of the Bernese GNSS Software. Special emphasis is put on orbit parameterization and on the impact of LEO data on the estimation of the geocenter coordinates, Earth rotation parameters, Earth gravity field coefficients, and the station coordinates in one common adjustment procedure. We find that the parameters derived from the multi-satellite solutions are of better quality than those obtained in single-satellite solutions or solutions based on the two LAGEOS satellites. A spectral analysis of the SLR network scale w.r.t. SLRF2008 shows that artifacts related to orbit perturbations in the LAGEOS-1/2 solutions, i.e., periods related to the draconitic years of the LAGEOS satellites, are greatly reduced in the combined solutions.
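As a hedged illustration of the spectral analysis mentioned above, the sketch below computes a simple amplitude spectrum of a synthetic daily network-scale series and reports its dominant periods; the series and the injected signals (roughly the LAGEOS-1/2 draconitic years of about 560 and 222 days) are assumptions for demonstration only, not the study's data.

```python
import numpy as np

# Hypothetical daily network-scale series (ppb) over ten years with two injected
# periodic signals near the approximate LAGEOS-1/2 draconitic years (~560/222 days).
days = np.arange(3650)
scale = (0.3 * np.sin(2 * np.pi * days / 560)
         + 0.2 * np.sin(2 * np.pi * days / 222)
         + 0.1 * np.random.default_rng(0).normal(size=days.size))

spec = np.abs(np.fft.rfft(scale - scale.mean()))
freqs = np.fft.rfftfreq(days.size, d=1.0)            # cycles per day
periods = 1.0 / freqs[1:]                            # skip the zero frequency
# Periods near the injected signals dominate the amplitude spectrum.
for p, a in sorted(zip(periods, spec[1:]), key=lambda t: -t[1])[:3]:
    print(f"period ~{p:6.1f} days, amplitude {a:.1f}")
```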
Abstract:
BACKGROUND Multidetector computed tomography (MDCT) may be useful to identify patients with patent foramen ovale (PFO). The aim of this study was to analyze whether an MDCT performed before pulmonary vein isolation reliably detects a PFO that may be used for access to the left atrium. METHODS AND RESULTS In 79 consecutive patients, who were referred for catheter ablation of symptomatic paroxysmal or persistent atrial fibrillation (AF), the presence of a PFO was explored by MDCT and transesophageal echocardiography (TEE). TEE was considered the gold standard, and the quality of TEE was good in all patients. In 16 patients (20.3%), MDCT could not be used for analysis because of artifacts, mainly due to AF. On TEE, a PFO was found in 15 (23.8%) of the 63 patients with usable MDCT. MDCT detected six PFO, of which four were present on TEE. This corresponded to a sensitivity of 26.7%, a specificity of 95.8%, a negative predictive value of 80.7%, and a positive predictive value of 66.7%. The area under the receiver operating characteristic curve of MDCT for the detection of PFO was 0.613 (95% confidence interval 0.493-0.732). CONCLUSIONS MDCT may detect a PFO before pulmonary vein isolation. However, the presence of AF may lead to artifacts on MDCT impeding a meaningful analysis. Furthermore, in this study the sensitivity and positive predictive value of MDCT were low, and MDCT was therefore not a reliable screening tool for the detection of PFO.
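The reported accuracy figures follow from the 2x2 table implied by the stated counts (63 analyzable patients, 15 PFO on TEE, 6 PFO calls on MDCT of which 4 were confirmed); the short check below reproduces them. The derived cell counts are an inference from the abstract, not separately reported values.

```python
# 2x2 table implied by the abstract: 63 analyzable patients, 15 PFO on TEE,
# 6 PFO calls on MDCT of which 4 were confirmed by TEE.
tp, fp = 4, 2                          # MDCT-positive: confirmed / not confirmed on TEE
fn, tn = 15 - tp, 63 - 15 - fp         # 11 missed PFO, 46 true negatives

sensitivity = tp / (tp + fn)           # 4 / 15  = 26.7 %
specificity = tn / (tn + fp)           # 46 / 48 = 95.8 %
ppv = tp / (tp + fp)                   # 4 / 6   = 66.7 %
npv = tn / (tn + fn)                   # 46 / 57 = 80.7 %
print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
```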
Abstract:
In order to analyze software systems, it is necessary to model them. Static software models are commonly imported by parsing source code and related data. Unfortunately, building custom parsers for most programming languages is a non-trivial endeavour. This poses a major bottleneck for analyzing software systems programmed in languages for which importers do not already exist. Luckily, initial software models do not require detailed parsers, so it is possible to start analysis with a coarse-grained importer, which is then gradually refined. In this paper we propose an approach to "agile modeling" that exploits island grammars to extract initial coarse-grained models, parser combinators to enable gradual refinement of model importers, and various heuristics to recognize language structure, keywords and other language artifacts.
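As a hedged sketch of the island-grammar idea (not the paper's actual importer), the snippet below recognizes only the constructs of interest for a coarse-grained model, here class and method headers, and treats everything else as "water" to be skipped; the patterns and the toy input are illustrative assumptions.

```python
import re

# Islands we care about for an initial coarse-grained model: class headers and
# method headers. Everything else is treated as "water" and skipped.
ISLANDS = [
    ("class",  re.compile(r"class\s+(\w+)")),
    ("method", re.compile(r"(\w+)\s*\([^)]*\)\s*\{")),
]

def parse_islands(source):
    """Scan the source once; emit coarse-grained model entities, ignore the rest."""
    entities = []
    pos = 0
    while pos < len(source):
        for kind, pattern in ISLANDS:
            match = pattern.match(source, pos)
            if match:
                entities.append((kind, match.group(1)))
                pos = match.end()
                break
        else:
            pos += 1                     # "water": advance one character and retry
    return entities

toy_source = "class Point { int x; getX() { return x; } /* noise */ setX(v) { x = v; } }"
print(parse_islands(toy_source))
# [('class', 'Point'), ('method', 'getX'), ('method', 'setX')]
```

The importer could then be refined gradually by adding further island patterns (fields, inheritance clauses, annotations) without ever writing a full grammar for the language.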
Abstract:
A close-to-native structure of bulk biological specimens can be imaged by cryo-electron microscopy of vitreous sections (CEMOVIS). In some cases, structural information can be combined with X-ray data, leading to atomic resolution in situ. However, CEMOVIS is not routinely used. The two critical steps consist of producing a frozen section ribbon of a few millimeters in length and transferring the ribbon onto an electron microscopy grid. During these steps, the first sections of the ribbon are wrapped around an eyelash (unwrapping is frequent). When a ribbon is sufficiently attached to the eyelash, the operator must guide the nascent ribbon. Steady hands are required; shaking or overstretching may break the ribbon. In that case, the ribbon immediately wraps around itself or flies away and thereby becomes unusable. Micromanipulators for eyelashes and grids, as well as ionizers to attach section ribbons to grids, have been proposed. The rate of successful ribbon collection, however, has remained low for most operators. Here we present a setup composed of two micromanipulators. One of the micromanipulators guides an electrically conductive fiber to which the ribbon sticks with unprecedented efficiency in comparison to a non-conductive eyelash. The second micromanipulator positions the grid beneath the newly formed section ribbon, and with the help of an ionizer the ribbon is attached to the grid. Although manipulations are greatly facilitated, sectioning artifacts remain; the likelihood of obtaining high-quality sections, however, is significantly increased due to the large number of sections that can be produced with the reported tool.
Abstract:
Previous studies have either exclusively used annual tree-ring data or have combined tree-ring series with other, lower temporal resolution proxy series. Both approaches can lead to significant uncertainties, as tree-rings may underestimate the amplitude of past temperature variations, and the validity of non-annual records cannot be clearly assessed. In this study, we assembled 45 published Northern Hemisphere (NH) temperature proxy records covering the past millennium, each of which satisfied 3 essential criteria: the series must be of annual resolution, span at least a thousand years, and represent an explicit temperature signal. Suitable climate archives included ice cores, varved lake sediments, tree-rings and speleothems. We reconstructed the average annual land temperature series for the NH over the last millennium by applying 3 different reconstruction techniques: (1) principal components (PC) plus second-order autoregressive model (AR2), (2) composite plus scale (CPS) and (3) regularized errors-in-variables approach (EIV). Our reconstruction is in excellent agreement with 6 climate model simulations (including the first 5 models derived from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and an earth system model of intermediate complexity (LOVECLIM)), showing similar temperatures at multi-decadal timescales; however, all simulations appear to underestimate the temperature during the Medieval Warm Period (MWP). A comparison with other NH reconstructions shows that our results are consistent with earlier studies. These results indicate that well-validated annual proxy series should be used to minimize proxy-based artifacts, and that these proxy series contain sufficient information to reconstruct the low-frequency climate variability over the past millennium.
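As a hedged illustration of one of the listed techniques, the sketch below shows the composite-plus-scale (CPS) idea: standardized proxy series are averaged into a composite, which is then rescaled to the mean and variance of an instrumental target over a calibration window. The synthetic proxies, target, and calibration period are placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1000, 2001)
# Stand-in "instrumental" temperature and 45 noisy proxy series derived from it.
target = 0.3 * np.sin((years - 1000) / 80) + 0.1 * rng.normal(size=years.size)
proxies = np.stack([target + 0.5 * rng.normal(size=years.size) for _ in range(45)])

# Composite: average the standardized proxy series.
standardized = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)
composite = standardized.mean(axis=0)

# Scale: match mean and variance of the target over the calibration window (here 1850-2000).
cal = years >= 1850
scaled = (composite - composite[cal].mean()) / composite[cal].std()
reconstruction = scaled * target[cal].std() + target[cal].mean()
print("calibration correlation:", np.corrcoef(reconstruction[cal], target[cal])[0, 1])
```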
Abstract:
We present a case in which multi-phase post-mortem computed tomography angiography (PMCTA) induced a hemorrhagic pericardial effusion during the venous phase of angiography. Post-mortem non-contrast CT (PMCT) suggested the presence of a ruptured aortic dissection. This diagnosis was confirmed by PMCTA after pressure-controlled arterial injection of contrast. During the second phase of multi-phase PMCTA, contrast leakage from the inferior vena cava into the pericardial sac was noted. Autopsy confirmed the post-mortem nature of this vascular tear. This case teaches us an important lesson: it underlines the necessity to critically analyze PMCT and PMCTA images in order to distinguish between artifacts, true pathologies and iatrogenic findings. In cases with ambiguous findings, such as the case reported here, correlation of imaging findings with autopsy is essential.
Abstract:
BACKGROUND: To investigate whether non-rigid image registration reduces motion artifacts in triggered and non-triggered diffusion tensor imaging (DTI) of native kidneys. A secondary aim was to determine whether improvements through registration allow for omitting respiratory triggering. METHODS: Twenty volunteers underwent coronal DTI of the kidneys with nine b-values (10-700 s/mm2) at 3 Tesla. Image registration was performed using a multimodal non-rigid registration algorithm. Data processing yielded the apparent diffusion coefficient (ADC), the contribution of perfusion (FP), and the fractional anisotropy (FA). For comparison of the data stability, the root mean square error (RMSE) of the fitting and the standard deviations within the regions of interest (SDROI) were evaluated. RESULTS: RMSEs decreased significantly after registration for triggered as well as for non-triggered scans (P < 0.05). SDROI values for ADC, FA, and FP were significantly lower after registration in both medulla and cortex of triggered scans (P < 0.01). Similarly, the SDROI of FA and FP decreased significantly in non-triggered scans after registration (P < 0.05). RMSEs were significantly lower in triggered than in non-triggered scans, both with and without registration (P < 0.05). CONCLUSION: Respiratory motion correction by registration of individual echo-planar images leads to clearly reduced signal variations in renal DTI for both triggered and, in particular, non-triggered scans. Secondarily, the results suggest that respiratory triggering still seems advantageous.
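The abstract does not state the fit model, but diffusion signals acquired over a range of b-values are commonly described by an IVIM-type bi-exponential decay that yields the ADC and the perfusion contribution FP; the sketch below fits such a model to synthetic data and reports the RMSE used here as a stability measure. The model choice, b-values, and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, fp, d_fast, adc):
    """IVIM-type bi-exponential: perfusion fraction fp decays fast, tissue water with ADC."""
    return s0 * (fp * np.exp(-b * d_fast) + (1 - fp) * np.exp(-b * adc))

b_values = np.array([10, 50, 100, 150, 200, 300, 400, 550, 700], float)   # s/mm^2
true = dict(s0=1.0, fp=0.2, d_fast=0.02, adc=0.0022)                      # plausible renal values
signal = ivim(b_values, **true) + np.random.default_rng(2).normal(0, 0.01, b_values.size)

popt, _ = curve_fit(ivim, b_values, signal,
                    p0=[1.0, 0.15, 0.01, 0.002],
                    bounds=([0, 0, 0.003, 0.0005], [2, 0.5, 0.1, 0.004]))
rmse = np.sqrt(np.mean((ivim(b_values, *popt) - signal) ** 2))
print("fitted ADC %.4f mm^2/s, FP %.2f, RMSE %.4f" % (popt[3], popt[1], rmse))
```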
Abstract:
To investigate the effect of metal implants in proton radiotherapy, dose distributions of different, clinically relevant treatment plans were measured in an anthropomorphic phantom and compared to treatment planning predictions. The anthropomorphic phantom, which is sliced into four segments in the cranio-caudal direction, is composed of tissue-equivalent materials and contains a titanium implant in a vertebral body in the cervical region. GafChromic® films were laid between the different segments to measure the 2D delivered dose. Three different four-field plans were then applied: a Single-Field-Uniform-Dose (SFUD) plan, both with and without artifact correction implemented, and an Intensity-Modulated-Proton-Therapy (IMPT) plan with the artifacts corrected. For the corrections, the artifacts were manually outlined and the Hounsfield Units manually set to an average value for soft tissue. The results show a surprisingly good agreement between prescribed and delivered dose distributions when the artifacts have been corrected, with more than 97% and 98% of points fulfilling the gamma criterion of 3%/3 mm for the SFUD and IMPT plans, respectively. In contrast, without artifact correction, up to 18% of the measured points fail the gamma criterion of 3%/3 mm for the SFUD plan. These measurements indicate that correcting manually for the reconstruction artifacts resulting from metal implants substantially improves the accuracy of the calculated dose distribution.
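As a hedged sketch of the 3%/3 mm gamma evaluation referred to above: for every measured point, the gamma value is the minimum combined dose-difference/distance metric with respect to the reference distribution, and the point passes if gamma is at most 1. The dose maps, grid spacing, and global normalization below are illustrative assumptions, not the study's data.

```python
import numpy as np

def gamma_pass_rate(measured, reference, spacing_mm, dose_crit=0.03, dist_mm=3.0):
    """Brute-force global gamma analysis on 2D dose maps defined on the same grid."""
    ny, nx = reference.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    d_max = reference.max()
    gammas = np.empty_like(measured)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dose2 = ((reference - measured[iy, ix]) / (dose_crit * d_max)) ** 2
            gammas[iy, ix] = np.sqrt(np.min(dist2 / dist_mm ** 2 + dose2))
    return np.mean(gammas <= 1.0)

rng = np.random.default_rng(3)
plan = np.outer(np.hanning(40), np.hanning(40))          # stand-in planned dose
film = plan * (1 + 0.02 * rng.normal(size=plan.shape))   # stand-in measured film dose
print(f"gamma 3%/3mm pass rate: {gamma_pass_rate(film, plan, spacing_mm=1.0):.1%}")
```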
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies by ensuring that we sample densely only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
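The following schematic sketch mimics the iterative sampling/reconstruction loop described above with a trivial stand-in "renderer", a small Gaussian filterbank, and heuristic per-pixel error estimates that steer additional samples; it only illustrates the coupling of the two steps, and every numeric choice is an assumption rather than the thesis's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
truth = np.outer(np.hanning(64), np.hanning(64))       # stand-in for the true image

def render(samples_per_pixel):
    """Toy 'renderer': noisy pixel estimates whose noise shrinks with the sample count."""
    return truth + rng.normal(0, 0.2, truth.shape) / np.sqrt(samples_per_pixel)

spp = np.full(truth.shape, 4.0)                         # initial uniform sampling
image = render(spp)
for _ in range(4):                                      # alternate reconstruction and sampling
    # Reconstruction: pick, per pixel, the filter from a small Gaussian filterbank
    # that minimizes a heuristic error estimate (bias proxy + residual variance proxy).
    sigmas = (0.5, 1.0, 2.0)
    candidates = [gaussian_filter(image, s) for s in sigmas]
    errors = [(c - image) ** 2 + (0.2 ** 2) / (spp * (1 + s))
              for c, s in zip(candidates, sigmas)]
    best = np.argmin(errors, axis=0)
    recon = np.choose(best, candidates)
    rmse_est = np.choose(best, errors) / (recon ** 2 + 1e-3)    # relative MSE estimate
    # Sampling: distribute a fixed budget proportionally to the residual relative error.
    spp += 4.0 * truth.size * rmse_est / rmse_est.sum()
    image = render(spp)
print("mean spp:", spp.mean(), " final MSE:", np.mean((recon - truth) ** 2))
```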
Abstract:
We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
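Gradient-domain renderers typically recombine the sampled primal image with the sampled gradient images by solving a screened Poisson problem; the hedged sketch below performs this reconstruction step for a small synthetic example with a plain gradient-descent solver. It illustrates only the reconstruction, not the paper's Metropolis sampling, weighting strategy, or kernel alignment, and all inputs are synthetic assumptions.

```python
import numpy as np

def fwd_diff(img, axis):
    """Periodic forward difference along an axis."""
    return np.roll(img, -1, axis=axis) - img

def screened_poisson(primal, gx, gy, alpha=0.2, n_iter=500, step=0.2):
    """Reconstruct an image whose finite differences match (gx, gy) while staying
    close to the primal estimate, by gradient descent on the energy
    alpha*||I - primal||^2 + ||Dx I - gx||^2 + ||Dy I - gy||^2."""
    img = primal.copy()
    for _ in range(n_iter):
        rx = fwd_diff(img, 1) - gx                        # gradient residuals
        ry = fwd_diff(img, 0) - gy
        div = (rx - np.roll(rx, 1, axis=1)) + (ry - np.roll(ry, 1, axis=0))
        img -= step * (alpha * (img - primal) - div)      # descend the energy
    return img

rng = np.random.default_rng(5)
truth = np.outer(np.linspace(0, 1, 32), np.ones(32))      # smooth ramp stand-in image
primal = truth + 0.1 * rng.normal(size=truth.shape)       # noisy "primal" estimate
gx, gy = fwd_diff(truth, 1), fwd_diff(truth, 0)           # (nearly) noise-free gradients
recon = screened_poisson(primal, gx, gy)
print("MSE primal %.5f -> reconstruction %.5f"
      % (np.mean((primal - truth) ** 2), np.mean((recon - truth) ** 2)))
```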
Abstract:
Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
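The abstract does not detail the algorithm, so the following sketch only conveys the general flavor of annealing-style denoising with a robust (Lorentzian) influence function whose scale is gradually lowered; it is an assumption-laden stand-in for illustration, not the paper's method.

```python
import numpy as np

def anneal_denoise(noisy, t_start=0.5, t_end=0.05, n_steps=20, step=0.2):
    """Illustrative annealing-style denoiser: smooth toward robustly weighted
    neighbor averages while the 'temperature' (robust scale) is lowered."""
    img = noisy.copy()
    for t in np.geomspace(t_start, t_end, n_steps):
        # Differences to the four axis-aligned neighbors.
        diffs = [np.roll(img, s, axis=a) - img for a in (0, 1) for s in (-1, 1)]
        # Lorentzian (robust) weights: large differences (edges) get little influence.
        weights = [1.0 / (1.0 + (d / t) ** 2) for d in diffs]
        update = sum(w * d for w, d in zip(weights, diffs)) / len(diffs)
        img += step * update
    return img

rng = np.random.default_rng(6)
# Piecewise-constant stand-in for a "synthetic" image, plus additive noise.
truth = np.kron(rng.random((8, 8)) > 0.5, np.ones((8, 8))).astype(float)
noisy = truth + 0.1 * rng.normal(size=truth.shape)
denoised = anneal_denoise(noisy)
print("MSE noisy %.4f -> denoised %.4f"
      % (np.mean((noisy - truth) ** 2), np.mean((denoised - truth) ** 2)))
```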
Abstract:
Procurement of fresh tissue of prostate cancer is critical for biobanking and for the generation of xenograft models as an important preclinical step towards new therapeutic strategies in advanced prostate cancer. However, handling of fresh radical prostatectomy specimens has been notoriously challenging, given the distinctive physical properties of prostate tissue and the difficulty of identifying cancer foci on gross examination. Here, we have developed a novel approach using ceramic foam plates for processing freshly cut whole mount sections from radical prostatectomy specimens without compromising further diagnostic assessment. Forty-nine radical prostatectomy specimens were processed and sectioned from the apex to the base in whole mount slices. Putative carcinoma foci were morphologically verified by frozen section analysis. The fresh whole mount slices were then laid between two ceramic foam plates and fixed overnight. To test tissue preservation after this procedure, formalin-fixed and paraffin-embedded whole mount sections were stained with hematoxylin and eosin (H&E) and analyzed by immunohistochemistry, fluorescence, and silver in situ hybridization (FISH and SISH, respectively). There were no morphological artifacts on H&E stained whole mount sections from slices that had been fixed between two plates of ceramic foam, and the histological architecture was fully retained. The quality of immunohistochemistry, FISH, and SISH was excellent. Fixing whole mount tissue slices between ceramic foam plates after frozen section examination is an excellent method for processing fresh radical prostatectomy specimens, allowing for precise identification and collection of fresh tumor tissue without compromising further diagnostic analysis.