869 results for artifacts
Abstract:
Combined EEG/fMRI recordings offer a promising opportunity to detect brain areas with altered BOLD signal during interictal epileptic discharges (IEDs). These areas are likely to represent the irritative zone, which is itself a reflection of the epileptogenic zone. This paper reports on the imaging findings obtained using independent component analysis (ICA) to continuously quantify epileptiform activity in simultaneously acquired EEG and fMRI. Using ICA-derived factors that code for epileptic activity takes into account that epileptic activity fluctuates continuously, with each spike differing in amplitude, duration, and possibly topography, and that it includes subthreshold epileptic activity besides clear IEDs; this may thus increase the sensitivity and statistical power of combined EEG/fMRI in epilepsy. Twenty patients with different types of focal and generalized epilepsy syndromes were investigated. ICA separated epileptiform activity from normal physiological brain activity and artifacts. In 16/20 patients, BOLD correlates of epileptic activity matched the EEG sources, the clinical semiology, and, if present, the structural lesions. In clinically equivocal cases, the BOLD correlates helped attribute the proper diagnosis of the underlying epilepsy syndrome. Furthermore, in one patient with temporal lobe epilepsy, BOLD correlates of rhythmic delta activity could be employed to delineate the affected hippocampus. Compared to BOLD correlates of manually identified IEDs, the sensitivity was improved from 50% (10/20) to 80% (16/20). The ICA EEG/fMRI approach is a safe, non-invasive, and easily applicable technique that can be used to identify regions with altered hemodynamic effects related to IEDs as well as intermittent rhythmic discharges in different types of epilepsy.
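As an illustration of the general approach (a minimal sketch, not the authors' actual pipeline), multichannel EEG can be unmixed with ICA, and the rectified time course of a component judged epileptiform can be convolved with a hemodynamic response function to serve as a continuous fMRI regressor. The data, sampling rate, component index, and TR below are all hypothetical placeholders:

```python
# Minimal sketch, assuming synthetic data: unmix EEG with FastICA and
# build a continuous HRF-convolved regressor from one component.
import numpy as np
from scipy.stats import gamma
from sklearn.decomposition import FastICA

fs = 250.0                                   # EEG sampling rate (assumed)
eeg = np.random.randn(60 * int(fs), 32)      # placeholder: 60 s, 32 channels

ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(eeg)             # (n_samples, n_components)

# Suppose component k was identified as epileptiform from its topography
# and spectrum; its rectified activity is a continuous measure of
# epileptiform activity, including subthreshold events.
k = 0
activity = np.abs(sources[:, k])

# Convolve with a canonical double-gamma HRF and downsample to the TR.
t = np.arange(0, 30, 1 / fs)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
regressor = np.convolve(activity, hrf)[: len(activity)]
TR = 3.0                                     # fMRI repetition time (assumed)
regressor_per_volume = regressor[:: int(fs * TR)]
```

Such a regressor would then enter a standard GLM of the fMRI time series in place of the usual spike-onset stick functions.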
Abstract:
Personal photographs permeate our lives from the moment we are born, as they define who we are within our familial group and local communities. Archived in family albums or framed on living room walls, they continue on after our death as mnemonic artifacts referencing our gendered, raced, and ethnic identities. This dissertation examines salient instances of what women “do” with personal photographs, not only as authors and subjects but also as collectors, archivists, and family and cultural historians. This project seeks to contribute to a more productive, complex discourse about how women form relationships with and engage in the conventions and practices of personal photography. In the first part of this dissertation I revisit developments in the history of personal photography, including the advertising campaigns of the Kodak and Agfa Girls and the development of albums such as the Stammbuch and its predecessor, the carte-de-visite, that demonstrate how personal photography has functioned as a gendered activity that references family unity, sentimentalism for the past, and self-representation within normative familial and dominant cultural groups, thus suggesting its importance as a cultural practice of identity formation. The second and primary section of the dissertation expands on the critical analyses of Gillian Rose, Patricia Holland, and Nancy Martha West, who propose that personal photography, marketed to and taken on by women, double-exposes their gendered identities. Drawing on work by critics such as Deborah Willis, bell hooks, and Abigail Solomon-Godeau, I examine how the reconfiguration, recontextualization, and relocation of personal photographs in the respective work of Christine Saari, Fern Logan, and Katie Knight interrogate and complicate gendered, raced, and ethnic identities and cultural attitudes about them. In the final section of the dissertation I briefly examine select examples of how emerging digital spaces on the Internet function as a site for personal photography, one that reinscribes traditional cultural formations while also offering women new opportunities for the display and audiencing of identities outside the family.
Abstract:
The objective of modern transmission electron microscopy (TEM) in life science is to observe biological structures in a state as close as possible to that in the living organism. TEM samples have to be thin and have to be examined in vacuum; therefore, only solid samples can be investigated. The most common way to prepare samples for TEM is to subject them to chemical fixation, staining, dehydration, and embedding in a resin (all of these steps introduce considerable artifacts) before investigation. An alternative is to immobilize samples by cooling. High pressure freezing is so far the only approach that can vitrify (solidify water without ice crystal formation) bulk biological samples about 200 micrometers thick. This method leads to improved ultrastructural preservation. After high pressure freezing, samples have to be subjected to follow-up procedures such as freeze-substitution and embedding. The samples can also be cut into frozen hydrated sections and analyzed in a cryo-TEM. High pressure freezing is also a good and practicable approach for immunocytochemistry.
Abstract:
Transmission electron microscopy has provided most of what is known about the ultrastructural organization of tissues, cells, and organelles. Due to tremendous advances in crystallography and magnetic resonance imaging, almost any protein can now be modeled at atomic resolution. To fully understand the workings of biological "nanomachines" it is necessary to obtain images of intact macromolecular assemblies in situ. Although the resolving power of electron microscopes is on the atomic scale, in biological samples artifacts introduced by aldehyde fixation, dehydration, and staining, as well as section thickness, reduce it to a few nanometers. Cryofixation by high pressure freezing circumvents many of these artifacts, since it allows vitrifying biological samples of about 200 μm in thickness and immobilizes complex macromolecular assemblies in their native state in situ. To exploit the perfect structural preservation of frozen hydrated sections, sophisticated instruments are needed, e.g., high voltage electron microscopes equipped with precise goniometers that work at low temperature, and digital cameras of high sensitivity and high pixel count. With them, it is possible to generate high resolution tomograms, i.e., 3D views of subcellular structures. This review describes the theory and applications of the high pressure cryofixation methodology and compares its results with those of conventional procedures. Moreover, recent findings will be discussed showing that molecular models of proteins can be fitted into the organellar ultrastructure depicted in images of frozen hydrated sections. High pressure freezing of tissue is the basis that may lead to precise models of macromolecular assemblies in situ, and thus to a better understanding of the function of complex cellular structures.
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be represented as a continuous analytic function that respects the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high order polynomial interpolations, which do not fulfill all of the features mentioned above, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter controls the behavior of the interpolation function. Second, the interpolated data have to be re-distributed onto the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high order polynomial interpolation functions may produce significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
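The abstract does not reproduce the algorithm itself; as a hedged stand-in, the sketch below uses SciPy's PCHIP interpolator, a shape-preserving cubic Hermite scheme, to illustrate how Hermitian interpolation avoids the over- and undershoot that ordinary polynomial splines can introduce into positively defined data such as CT values. It deliberately omits the paper's integral-conservation and re-gridding steps, which are specific to the proposed method:

```python
# Minimal sketch: shape-preserving cubic Hermite interpolation (PCHIP)
# versus an ordinary cubic spline on a step-like, non-negative profile.
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.0])   # positively defined data

x_fine = np.linspace(0, 4, 401)
spline = CubicSpline(x, y)(x_fine)        # plain cubic: may over-/undershoot
pchip = PchipInterpolator(x, y)(x_fine)   # Hermite-type: stays in data range

print("cubic spline min:", spline.min())  # can be negative (unphysical)
print("pchip min:", pchip.min())          # >= 0 by construction
```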
Abstract:
Free space optical (FSO) communication links can experience extreme signal degradation due to atmospheric-turbulence-induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Spreading of the laser beam and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how this affects crucial performance parameters like bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. In addition, it is desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser after propagation through atmospheric turbulence in order to investigate the evolution of the distribution as the aperture diameter is increased. The simulated data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes, from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated data distribution for all aperture sizes studied from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit to the distribution for point-like apertures, and the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of received power from a Gaussian laser beam that has been adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate scintillation regime is investigated. The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how this affects the BER. For the 10 km link, due to the non-reciprocal nature of the propagation path, the optimal beam to transmit is unknown. These results show that, for non-reciprocal paths, a low-order level of AO complexity provides a better estimate of the optimal beam to transmit than a higher order. For the 20 km link distance it was found that, although the improvement was minimal, all AO complexity levels provided an equivalent improvement in BER, and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Results for both simulated and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures, the coherence time increases as the communication link distance is increased. We conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture for increasing link distance.
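For reference, the two candidate models compared here have standard closed forms in the scintillation literature (quoted from that literature, not reproduced from this thesis). With $I$ the normalized received irradiance, the gamma-gamma PDF is

$$p_{\Gamma\Gamma}(I)=\frac{2\,(\alpha\beta)^{(\alpha+\beta)/2}}{\Gamma(\alpha)\,\Gamma(\beta)}\,I^{\frac{\alpha+\beta}{2}-1}\,K_{\alpha-\beta}\!\bigl(2\sqrt{\alpha\beta I}\bigr),\qquad I>0,$$

where $K_\nu$ is the modified Bessel function of the second kind and $\alpha$, $\beta$ parameterize the large- and small-scale scintillation, while the lognormal PDF is

$$p_{\mathrm{LN}}(I)=\frac{1}{I\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(\ln I-\mu)^{2}}{2\sigma^{2}}\right),\qquad I>0.$$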
Abstract:
The field of archaeology and that of metallurgy appear to be widely separated and in no way related. Work done in recent years, however, tends to show that, in many ways, the metallurgist can supplement and enhance the information gained by the archaeologist, at least in regard to those objects which have been made of metal.
Abstract:
A distinguishing feature of the discipline of archaeology is its reliance upon sensory-dependent investigation. As perceived by all of the senses, the felt environment is a unique area of archaeological knowledge. It is generally accepted that the emergence of industrial processes in the recent past has been accompanied by unprecedented sonic extremes. The work of environmental historians has provided ample evidence that the introduction of much of this unwanted sound, or "noise," was an area of contestation. More recent research in the history of sound has called for more nuanced distinctions than the noisy/quiet dichotomy. Acoustic archaeology tends to focus upon the reconstruction of sound-producing instruments and spaces, with a primary goal of ascertaining intentionality. Most archaeoacoustic research is focused on learning more about the sonic world of people within prehistoric timeframes, while some research has been done on historic sites. In this thesis, by way of a meditation on industrial sound and the physical remains of the Quincy Mining Company blacksmith shop (Hancock, MI) in particular, I argue for an acceptance and inclusion of sound as an artifact in and of itself. I introduce the concept of an individual sound-form, or sonifact: a reproducible, repeatable, representable physical entity created by tangible, perhaps even visible, host-artifacts. A sonifact is a sound that endures through time with negligible variability. Through the piecing together of historical and archaeological evidence, in this thesis I present a plausible sonifactual assemblage at the blacksmith shop in April 1916 as it may have been experienced by an individual traversing the vicinity on foot: an 'historic soundwalk.' The sensory apprehension of abandoned industrial sites is multi-faceted. In this thesis I hope to make the case for an acceptance of sound as a primary heritage value when thinking about the industrial past, and also for an increased awareness and acceptance of sound and listening as a primary mode of perception.
Abstract:
The purpose of this thesis is to analyze the evolution of an early 20th century mining system in Spitsbergen as applied by the Boston-based Arctic Coal Company (ACC). This analysis will address the following questions: Did the system evolve in a linear, technology-based fashion? Or was the progression more a product of interactions and negotiations with the natural and human landscapes present during the time of occupation? Answers to these questions will be sought through a review of historical records and the material residues identified during the 2008 field examination on Spitsbergen. The Arctic Coal Company’s flagship mine, ACC Mine No. 1, will serve as the focus for this analysis. The mine was the company’s largest undertaking during its occupation of Longyear Valley and today exhibits a large collection of related features and artifacts. The study will emphasize the material record within an analysis of the technical, environmental, and social influences that guided the course of the mining system. The intent of this thesis is to reach a better understanding of how a particular resource extraction industry took root in the Arctic.
Abstract:
Intestinal intraepithelial lymphocytes (IEL) are specialized subsets of T cells with distinct functional capacities. While some IEL subsets are circulating, others, such as CD8alphaalpha TCRalphabeta IEL, are believed to represent non-circulating resident T cell subsets [Sim, G.K., Intraepithelial lymphocytes and the immune system. Adv. Immunol., 1995. 58: 297-343.]. Current methods to obtain enriched preparations of intraepithelial lymphocytes are mostly based on Percoll density gradient or magnetic bead-based technologies [Lundqvist, C., et al., Isolation of functionally active intraepithelial lymphocytes and enterocytes from human small and large intestine. J. Immunol. Methods, 1992. 152(2): 253-263.]. However, these techniques are hampered by a generally low yield of isolated cells and by potential artifacts due to the interference of the isolation procedure with subsequent functional assays, in particular when antibodies against cell surface markers are required. Here we describe a new method for obtaining relatively pure populations of intestinal IEL (55-75%) at a high yield (>85%) by elutriation centrifugation. This technique is equally suited for the isolation and enrichment of intraepithelial lymphocytes of both mouse and human origin. Time requirements for fractionating cell suspensions by elutriation centrifugation are comparable to those of Percoll- or MACS-based isolation procedures. Hence, the substantially higher yield and the consistently robust enrichment of intraepithelial lymphocytes, together with the gentle treatment of the cells during elutriation, which does not interfere with subsequent functional assays, are important aspects in favor of using this elegant technology to obtain unmanipulated, unbiased populations of intestinal intraepithelial lymphocytes and, if desired, also pure epithelial cells.
Abstract:
Despite the widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using data on small-scale SARs of Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm² to 100 m², any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m² plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm² and up to 140 species in 100 m², which appear to be the highest documented values on these scales. For SARs constructed from nested-plot average-value data, the regular power function generally was the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. The relative fit of the latter two models increased significantly relative to the respective best model when the single-value or random-sampling method was applied; however, the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
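A minimal sketch of this kind of model comparison, with invented data and two of the five function types (the power and Michaelis-Menten forms, written as they are commonly defined in the SAR literature); the remaining models would be fitted analogously:

```python
# Minimal sketch, assuming invented data: fit two SAR models with
# non-linear regression and compare residual sums of squares.
import numpy as np
from scipy.optimize import curve_fit

area = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])    # m^2
species = np.array([2.0, 5.0, 11.0, 24.0, 48.0, 90.0, 140.0])  # richness

def power(A, c, z):                  # classic power SAR: S = c * A**z
    return c * A**z

def michaelis_menten(A, smax, b):    # saturating SAR: S = smax*A/(b + A)
    return smax * A / (b + A)

for name, f, p0 in [("power", power, (50.0, 0.3)),
                    ("Michaelis-Menten", michaelis_menten, (150.0, 1.0))]:
    params, _ = curve_fit(f, area, species, p0=p0, maxfev=10000)
    rss = np.sum((species - f(area, *params)) ** 2)
    print(f"{name}: params={params.round(3)}, RSS={rss:.1f}")
```

On data of this roughly power-law shape, the power function yields the lower RSS, echoing the finding above that it is generally the best of the compared models.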
Abstract:
Femoroacetabular impingement (FAI) is due to an anatomical disproportion between the proximal femur and the acetabulum, which causes premature wear of the joint surfaces. An operation is often necessary in order to relieve symptoms such as limited movement and pain, as well as to prevent or slow down the degenerative process. The result depends on the preoperative status of the joint, with poor results in advanced arthritis of the hip joint. This explains the need for an accurate diagnosis in order to recognize early stages of damage to the joint. The diagnosis of FAI includes clinical examination, X-ray examination, and magnetic resonance imaging (MRI). The standard radiographic examination for FAI is carried out using two X-ray images: an anterior-posterior view of the pelvis and a lateral view of the proximal femur, such as the cross-table lateral or Lauenstein projection. Positioning criteria must be adhered to in order to avoid distortion artifacts. MRI permits an examination of the pelvis in three planes and should also include radially planned sequences for improved representation of peripheral structures, such as the labrum and the peripheral cartilage. The use of contrast medium for a direct MR arthrogram has proved to be advantageous, particularly for the representation of labrum damage. The data with respect to cartilage imaging are still unclear. Further developments in technology, such as biochemically sensitive MRI applications, will be able to improve the diagnostic imaging of the pelvis in the near future.
Abstract:
INTRODUCTION: Cartilage defects are common pathologies, and surgical cartilage repair shows promising results. In its postoperative evaluation, the magnetic resonance observation of cartilage repair tissue (MOCART) score, which uses different variables to describe the constitution of the cartilage repair tissue and the surrounding structures, is widely used. High-field magnetic resonance imaging (MRI) and 3-dimensional (3D) isotropic sequences may combine ideal preconditions to enhance the diagnostic performance of cartilage imaging. The aim of this study was to introduce an improved 3D MOCART score using the possibilities of an isotropic 3D true fast imaging with steady-state precession (True-FISP) sequence in the postoperative evaluation of patients after matrix-associated autologous chondrocyte transplantation (MACT), as well as to compare the results to the conventional 2D MOCART score using standard MR sequences. MATERIAL AND METHODS: The study was approved by the local ethics commission. One hundred consecutive MR scans in 60 patients at standard follow-up intervals of 1, 3, 6, 12, 24, and 60 months after MACT of the knee joint were prospectively included. The mean follow-up interval of this cross-sectional evaluation was 21.4 +/- 20.6 months; the mean age of the patients was 35.8 +/- 9.4 years. MRI was performed on a 3.0 Tesla unit. All variables of the standard 2D MOCART score were part of the new 3D MOCART score. Furthermore, additional variables and options were included with the aims of using the capabilities of isotropic MRI, incorporating the results of recent studies, and adapting to the needs of patients and physicians in routine clinical examination. A proton-density turbo spin-echo sequence, a T2-weighted dual fast spin-echo (dual-FSE) sequence, and a T1-weighted turbo inversion recovery magnitude (TIRM) sequence were used to assess the standard 2D MOCART score; an isotropic 3D-TrueFISP sequence was acquired to evaluate the new 3D MOCART score. All 9 variables of the 2D MOCART score were compared with the corresponding variables obtained by the 3D MOCART score using the Pearson correlation coefficient; additionally, the subjective quality and possible artifacts of the MR sequences were analyzed. RESULTS: The correlation between the standard 2D MOCART score and the new 3D MOCART score was highly significant (P < 0.001) for the 8 variables "defect fill," "cartilage interface," "surface," "adhesions," "structure," "signal intensity," "subchondral lamina," and "effusion," with Pearson coefficients between 0.566 and 0.932. The variable "bone marrow edema" correlated significantly (P < 0.05; Pearson coefficient: 0.257). The subjective quality of the 3 standard MR sequences was comparable to that of the isotropic 3D-TrueFISP sequence. Artifacts were more frequently visible within the 3D-TrueFISP sequence. CONCLUSION: In the routine clinical follow-up after cartilage repair, the 3D MOCART score, assessed with only 1 high-resolution isotropic MR sequence, provides information comparable to the standard 2D MOCART score. Hence, the new 3D MOCART score has the potential to combine the information of the standard 2D MOCART score with the possible advantages of isotropic 3D MRI at high field. A clear limitation of the 3D-TrueFISP sequence was the high number of artifacts. Future studies have to prove the clinical benefits of a 3D MOCART score.
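The variable-by-variable comparison reduces to Pearson correlations on paired scores; a minimal sketch with invented placeholder gradings (not study data):

```python
# Minimal sketch: Pearson correlation between one MOCART variable as
# graded from the 2D protocol and from the 3D-TrueFISP protocol.
import numpy as np
from scipy.stats import pearsonr

grade_2d = np.array([4, 3, 4, 2, 1, 3, 4, 2, 3, 4])   # invented scores
grade_3d = np.array([4, 3, 3, 2, 1, 3, 4, 2, 4, 4])

r, p = pearsonr(grade_2d, grade_3d)
print(f"Pearson r = {r:.3f}, P = {p:.4f}")
```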
Abstract:
Recent brain imaging work has expanded our understanding of the mechanisms of perceptual, cognitive, and motor functions in human subjects, but research into the cerebral control of emotional and motivational function is at a much earlier stage. Important concepts and theories of emotion are briefly introduced, as are research designs and multimodal approaches to answering the central questions in the field. We provide a detailed inspection of the methodological and technical challenges in assessing the cerebral correlates of emotional activation, perception, learning, memory, and emotional regulation behavior in healthy humans. fMRI is particularly challenging in structures such as the amygdala, as it is affected by susceptibility-related signal loss, image distortion, physiological and motion artifacts, and colocalized resting state networks (RSNs). We review how these problems can be mitigated by using optimized echo-planar imaging (EPI) parameters, alternative MR sequences, and correction schemes. High-quality data can be acquired rapidly in these problematic regions with gradient-compensated multiecho EPI or high-resolution EPI with parallel imaging and optimum gradient directions, combined with distortion correction. Although neuroimaging studies of emotion encounter many difficulties regarding the limitations of measurement precision, research design, and strategies for validating neuropsychological emotion constructs, considerable improvement in data quality and sensitivity to subtle effects can be achieved. The methods outlined offer the prospect for fMRI studies of emotion to provide more sensitive, reliable, and representative models of measurement that systematically relate the dynamics of emotional regulation behavior to topographically distinct patterns of activity in the brain. This will provide additional information as an aid to the assessment, categorization, and treatment of patients with emotional and personality disorders.
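As one concrete example of the multiecho strategy mentioned above (a common weighting scheme from the multiecho EPI literature, not necessarily the one used in this work), the echoes can be combined with weights proportional to TE * exp(-TE/T2*):

```python
# Schematic sketch, assuming invented echo times and a nominal T2*:
# combine multiecho EPI volumes with TE-weighted averaging to recover
# signal in dropout-prone regions such as the amygdala.
import numpy as np

TEs = np.array([12.0, 28.0, 44.0])          # echo times in ms (assumed)
T2star = 30.0                               # nominal tissue T2* in ms
echoes = np.random.rand(3, 64, 64, 30)      # placeholder echo volumes

w = TEs * np.exp(-TEs / T2star)             # BOLD-sensitivity weighting
w /= w.sum()
combined = np.tensordot(w, echoes, axes=1)  # weighted sum over echo axis
```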
Abstract:
Few real software systems are built completely from scratch nowadays. Instead, systems are built iteratively and incrementally, while integrating and interacting with components from many other systems. Adaptation, reconfiguration, and evolution are normal, ongoing processes throughout the lifecycle of a software system. Nevertheless, the platforms, tools, and environments we use to develop software are still largely based on an outmoded model that presupposes that software systems are closed and will not significantly evolve after deployment. We claim that in order to enable effective and graceful evolution of modern software systems, we must make these systems more amenable to change by (i) providing explicit, first-class models of software artifacts, change, and history at the level of the platform, (ii) continuously analysing static and dynamic evolution to track emergent properties, and (iii) closing the gap between the domain model and the developers' view of the evolving system. We outline our vision of dynamic, evolving software systems and identify the research challenges to realizing this vision.
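One way to picture "explicit, first-class models of software artifacts, change, and history" (an illustrative reading, not the authors' system) is to reify each change as a plain data object that the platform itself can store, query, and analyse:

```python
# Illustrative sketch: change and history as first-class, queryable values.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass(frozen=True)
class Change:
    """One atomic change to a software artifact."""
    artifact: str       # e.g. a fully qualified class or method name
    kind: str           # "add", "modify", or "remove"
    author: str
    timestamp: datetime

@dataclass
class History:
    """An explicit model of a system's evolution, open to analysis."""
    changes: List[Change] = field(default_factory=list)

    def record(self, change: Change) -> None:
        self.changes.append(change)

    def touched(self, artifact: str) -> List[Change]:
        return [c for c in self.changes if c.artifact == artifact]

history = History()
history.record(Change("Parser.parse", "modify", "alice", datetime.now()))
print(len(history.touched("Parser.parse")))  # evolution becomes data
```

With history as data, the continuous static and dynamic analyses the authors call for become ordinary queries over the model.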