980 results for Mapping time
Abstract:
As the elastic response of cell membranes to mechanical stimuli plays a key role in various cellular processes, novel biophysical strategies to quantify the elasticity of native membranes under physiological conditions at the nanometer scale are gaining interest. To investigate the elastic response of apical membranes, elasticity maps of native membrane sheets, isolated from MDCK II (Madin-Darby canine kidney strain II) epithelial cells, were recorded by local indentation with an atomic force microscope (AFM). To exclude the effect of the underlying substrate on membrane indentation, a highly ordered, gold-coated porous array with a pore diameter of 1.2 μm was used to support the apical membranes. Overlays of fluorescence and AFM images show that intact apical membrane sheets are attached to the poly-D-lysine-coated porous substrate. Force-indentation measurements reveal an extremely soft elastic membrane response when the membrane is indented at the center of a pore, in contrast to the hard repulsion on the adjacent rim, which was used to define the exact contact point. A linear dependence of force on indentation (-dF/dh) up to 100 nm penetration depth enabled us to define an apparent membrane spring constant (kapp) as the slope of a linear fit, with a stiffness value of for the native apical membrane in PBS. A correlation between fluorescence intensity and kapp is also reported. Time-dependent hysteresis observed with native membranes is explained by a viscoelastic solid model of a spring connected to a Kelvin-Voigt solid with a time constant of 0.04 s. No hysteresis was observed with chemically fixed membranes. A combined linear and nonlinear elastic response is suggested to relate the experimental force-indentation curves to the elastic modulus and the membrane thickness. Membrane bending dominates the linear elastic indentation at low loads, whereas stretching dominates the nonlinear elastic response at higher loads. The membrane elastic response was controlled either by stiffening with chemical fixatives or by softening with F-actin disrupters. Overall, the presented setup is ideally suited to study the interactions of the apical membrane with the underlying cytoskeleton by means of force-indentation elasticity maps combined with fluorescence imaging.
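For orientation, the two analysis steps described above (a linear fit over the quasi-linear indentation regime and a spring-plus-Kelvin-Voigt creep model for the hysteresis) can be written down compactly. The following is a minimal sketch, not the authors' analysis code; array names and the stiffness values are illustrative assumptions.

```python
# Minimal sketch: apparent spring constant from a force-indentation curve and
# the creep response of a spring in series with a Kelvin-Voigt element.
# Array names and parameter values are illustrative, not from the paper.
import numpy as np

def apparent_spring_constant(indentation_m, force_N, max_depth_m=100e-9):
    """Slope of a linear fit F = k_app * h over the quasi-linear regime."""
    mask = indentation_m <= max_depth_m
    k_app, _ = np.polyfit(indentation_m[mask], force_N[mask], 1)
    return k_app  # N/m

def creep_response(t_s, F0_N, k1=5e-3, k2=5e-3, tau_s=0.04):
    """Indentation under a constant load F0 for a spring (k1) in series with a
    Kelvin-Voigt solid (spring k2 parallel to a dashpot), time constant tau."""
    return F0_N * (1.0 / k1 + (1.0 / k2) * (1.0 - np.exp(-t_s / tau_s)))
```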
Abstract:
Several countries have acquired, over the past decades, large volumes of area-covering airborne electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for efficient, large-scale groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III survey (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration, and data calibration constitute the proper approach for providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity, and deliver high-resolution results. We further use the final, most reliable resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area and can be further used to estimate aquifer volumes (i.e., the potential amount of groundwater resources) as well as to support hydrogeological flow model predictions. In addition, we investigated the impact of the AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, compared with having only a ground-based TEM dataset and/or only borehole data.
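One attraction of the voxel-based model mentioned above is that volume estimates follow directly from it. Below is a minimal sketch assuming each voxel carries a hydrogeological class label; the class codes and voxel dimensions are illustrative and not those of the Spiritwood model.

```python
# Minimal sketch: estimating aquifer volume from a 3D voxel model in which
# each voxel carries a lithological/hydrogeological class label.
# The class codes and voxel size below are illustrative assumptions.
import numpy as np

def aquifer_volume(voxel_classes, aquifer_codes=(1, 2), voxel_size_m=(100.0, 100.0, 5.0)):
    """Count voxels labelled as aquifer material and multiply by voxel volume."""
    dx, dy, dz = voxel_size_m
    n_aquifer = np.isin(voxel_classes, aquifer_codes).sum()
    return n_aquifer * dx * dy * dz  # m^3
```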
Abstract:
Time series are ubiquitous. The acquisition and processing of continuously measured data is present in all areas of the natural sciences, medicine, and finance. The enormous growth of recorded data volumes, whether from automated monitoring systems or integrated sensors, calls for exceptionally fast algorithms in theory and practice. This thesis therefore deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries, or the unsupervised extraction of prototypical building blocks in time series make heavy use of these alignments, which motivates the need for fast implementations. The thesis is organized into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for segmenting data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It comprises a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, which can be deployed on any parallel hardware with support for fast Fourier transforms. Furthermore, we give a SIMT-compatible implementation of the UCR suite's lower-bound cascade for the efficient computation of local alignment scores under dynamic time warping. Both CUDA implementations enable computations one to two orders of magnitude faster than established methods.

Second, we examine two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization. On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance against trends on the measurement axis and uniform scaling on the time axis. In addition, an extension of GEM to multi-shape segmentation is discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series. Building on this, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated. Furthermore, memory-efficient representations and group-compatible extensions of elastic measures are discussed.
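As a point of reference for the z-normalized Euclidean distance scheme mentioned above, here is a minimal single-threaded sketch that computes the distance between a query and every subsequence using an FFT-based sliding dot product (MASS-style). It is not the CUDA implementation from the thesis, and the variable names are illustrative.

```python
# Minimal sketch of the FFT-based scheme for z-normalized Euclidean distances
# between a query and every subsequence of a time series.
import numpy as np
from scipy.signal import fftconvolve

def znorm_distance_profile(series, query):
    series = np.asarray(series, dtype=float)
    query = np.asarray(query, dtype=float)
    n, m = len(series), len(query)
    q = (query - query.mean()) / query.std()            # z-normalize the query
    # Sliding dot products of q with all length-m subsequences via FFT convolution
    qt = fftconvolve(series, q[::-1], mode="valid")      # length n - m + 1
    # Rolling mean and standard deviation of the subsequences via cumulative sums
    csum = np.cumsum(np.insert(series, 0, 0.0))
    csum2 = np.cumsum(np.insert(series ** 2, 0, 0.0))
    mu = (csum[m:] - csum[:-m]) / m
    var = (csum2[m:] - csum2[:-m]) / m - mu ** 2
    sigma = np.sqrt(np.maximum(var, 1e-12))
    dist2 = 2.0 * (m - qt / sigma)                        # squared z-normalized distance
    return np.sqrt(np.maximum(dist2, 0.0))
```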
Abstract:
Purpose: To prospectively determine on T2 cartilage maps the effect of unloading during a clinical magnetic resonance (MR) examination in the postoperative follow-up of patients after matrix-associated autologous chondrocyte transplantation (MACT) of the knee joint. Materials and Methods: Ethical approval for this study was provided by the local ethics commission, and written informed consent was obtained. Thirty patients (mean age, 35.4 years +/- 10.5) with a mean postoperative follow-up period of 29.1 months +/- 24.4 were enrolled. A multiecho spin-echo T2-weighted sequence was performed at the beginning (early unloading) and end (late unloading) of the MR examination, with an interval of 45 minutes. Mean and zonal region of interest T2 measurements were obtained in control cartilage and cartilage repair tissue. Statistical analysis of variance was performed. Results: The change in T2 values of control cartilage (early unloading, 50.2 msec +/- 8.4; late unloading, 51.3 msec +/- 8.5) was less pronounced than the change in T2 values of cartilage repair tissue (early unloading, 51.8 msec +/- 11.7; late unloading, 56.1 msec +/- 14.4) (P = .024). The difference between control cartilage and cartilage repair tissue was not significant for early unloading (P = .314) but was significant for late unloading (P = .036). Zonal T2 measurements revealed a higher dependency on unloading for the superficial cartilage layer. Conclusion: Our results suggest that T2 relaxation can be used to assess early and late unloading values of articular cartilage in a clinical setting and that the time point of the quantitative T2 measurement affects the differentiation between native and abnormal articular cartilage. (c) RSNA, 2010.
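For context, T2 maps from a multi-echo spin-echo acquisition are commonly obtained by a voxel-wise monoexponential fit. The following is a minimal sketch of such a fit; echo times and array shapes are placeholder assumptions rather than the study's protocol.

```python
# Minimal sketch: voxel-wise T2 estimation from a multi-echo spin-echo series
# via a log-linear fit of S(TE) = S0 * exp(-TE / T2).
import numpy as np

def t2_map(echo_images, echo_times_ms):
    """echo_images: ndarray of shape (n_echoes, ny, nx); returns T2 in ms per voxel."""
    n_echoes = echo_images.shape[0]
    log_s = np.log(np.clip(echo_images, 1e-6, None)).reshape(n_echoes, -1)
    te = np.asarray(echo_times_ms, dtype=float)
    design = np.column_stack([te, np.ones_like(te)])      # [TE, 1] regressors
    slope, _ = np.linalg.lstsq(design, log_s, rcond=None)[0]
    with np.errstate(divide="ignore"):
        t2 = np.where(slope < 0, -1.0 / slope, np.inf)    # T2 = -1 / slope
    return t2.reshape(echo_images.shape[1:])
```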
Abstract:
To assess, compare and correlate quantitative T2 and T2* relaxation time measurements of intervertebral discs (IVDs) in patients suffering from low back pain, with respect to the IVD degeneration as assessed by the morphological Pfirrmann Score. Special focus was on the spatial variation of T2 and T2* between the annulus fibrosus (AF) and the nucleus pulposus (NP).
Abstract:
Granger causality (GC) is a statistical technique used to estimate temporal associations in multivariate time series. Many applications and extensions of GC have been proposed since its formulation by Granger in 1969. Here we control for potentially mediating or confounding associations between time series in the context of event-related electrocorticographic (ECoG) time series. A pruning approach to remove spurious connections and simultaneously reduce the required number of estimations to fit the effective connectivity graph is proposed. Additionally, we consider the potential of adjusted GC applied to independent components as a method to explore temporal relationships between underlying source signals. Both approaches overcome limitations encountered when estimating many parameters in multivariate time-series data, an increasingly common predicament in today's brain mapping studies.
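For illustration, the core of a GC estimate for a pair of time series reduces to comparing a restricted autoregressive model (the target's own past) with a full model (own past plus the candidate driver's past). The sketch below shows this in its simplest bivariate form; model-order selection, conditioning on other channels, and significance testing are omitted.

```python
# Minimal sketch of bivariate Granger causality: does x help predict y beyond
# y's own past? GC is the log ratio of residual variances of the restricted
# and full autoregressive models.
import numpy as np

def granger_causality(x, y, order=5):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Lag matrices: row t holds the `order` most recent past samples at time t
    lags_y = np.column_stack([y[order - k - 1:n - k - 1] for k in range(order)])
    lags_x = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    target = y[order:]
    ones = np.ones((len(target), 1))
    restricted = np.hstack([ones, lags_y])                 # y's past only
    full = np.hstack([ones, lags_y, lags_x])               # plus x's past
    res_r = target - restricted @ np.linalg.lstsq(restricted, target, rcond=None)[0]
    res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())
```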
Abstract:
As the performance gap between microprocessors and memory continues to widen, main memory accesses incur long latencies that become a limiting factor for system performance. Previous studies show that main memory access streams contain significant locality and that SDRAM devices provide parallelism through multiple banks and channels. This locality and parallelism have not been thoroughly exploited by conventional memory controllers. In this thesis, SDRAM address mapping techniques and memory access reordering mechanisms are studied and applied to memory controller design with the goal of reducing the observed main memory access latency. The proposed bit-reversal address mapping attempts to distribute main memory accesses evenly across the SDRAM address space to enable bank parallelism. As memory accesses to distinct banks are interleaved, the access latencies are partially hidden and therefore reduced. By taking cache conflict misses into consideration, bit-reversal address mapping is able to direct potential row conflicts to different banks, further improving performance. The proposed burst scheduling is a novel access reordering mechanism that creates bursts by clustering accesses directed to the same rows of the same banks. Subject to a threshold, reads are allowed to preempt writes, and qualified writes are piggybacked at the end of the bursts. A sophisticated access scheduler selects accesses based on priorities and interleaves accesses to maximize SDRAM data bus utilization. Consequently, burst scheduling reduces the row conflict rate, increasing and exploiting the available row locality. Using revised SimpleScalar and M5 simulators, both techniques are evaluated and compared with existing academic and industrial solutions. With SPEC CPU2000 benchmarks, bit-reversal reduces execution time by 14% on average over traditional page-interleaving address mapping. Burst scheduling also achieves a 15% reduction in execution time over conventional in-order bank scheduling. Working constructively together, bit-reversal and burst scheduling achieve a 19% speedup across the simulated benchmarks.
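To make the mapping concrete, the sketch below shows one way a bit-reversal address mapping can be realized: the address bits above the column offset are reversed before the bank index is extracted, so that addresses differing only in high-order bits (such as cache-conflicting blocks) select different banks instead of conflicting rows in the same bank. This is an illustrative reading of the scheme, not the thesis implementation, and the bit-field widths are placeholders.

```python
# Minimal sketch of bit-reversal address mapping with illustrative field widths.
def reverse_bits(value, width):
    out = 0
    for _ in range(width):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

def map_address(addr, column_bits=10, bank_bits=2, row_bits=14):
    column = addr & ((1 << column_bits) - 1)
    upper = addr >> column_bits                       # row + bank field
    upper_rev = reverse_bits(upper, row_bits + bank_bits)
    # After reversal, the originally high-order bits end up selecting the bank,
    # scattering otherwise row-conflicting accesses across different banks.
    bank = upper_rev & ((1 << bank_bits) - 1)
    row = upper_rev >> bank_bits
    return row, bank, column
```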
Abstract:
The delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) technique has shown promising results in pilot clinical studies of early osteoarthritis. Currently, its broader acceptance is limited by the long scan time and the need for postprocessing to calculate the T1 maps. A fast T1 mapping technique based on two spoiled gradient echo images was implemented. In phantom studies, an appropriate flip angle combination optimized for a center T1 of 756 to 955 ms yielded excellent agreement with T1 measured using the inversion recovery technique in the range of 200 to 900 ms, the range of interest in normal and diseased cartilage. In vivo validation was performed by serially imaging 26 hips using the inversion recovery and the fast two-angle T1 mapping techniques (center T1 of 756 ms). Excellent correlation was seen, with a Pearson correlation coefficient R2 of 0.74, and Bland-Altman plots demonstrated no systematic bias.
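For context, two-point T1 estimation from spoiled gradient echo data is commonly based on the linearized SPGR signal equation (DESPOT1-style): S/sin(a) = E1 * S/tan(a) + c with E1 = exp(-TR/T1), so two flip angles fix the slope E1 and hence T1. The sketch below illustrates that idea; the flip angles and TR are placeholders, not the protocol of the study.

```python
# Minimal sketch of two-flip-angle T1 estimation from spoiled gradient echo signals.
import numpy as np

def t1_two_angle(s1, s2, alpha1_deg, alpha2_deg, tr_ms):
    a1, a2 = np.deg2rad(alpha1_deg), np.deg2rad(alpha2_deg)
    x1, y1 = s1 / np.tan(a1), s1 / np.sin(a1)
    x2, y2 = s2 / np.tan(a2), s2 / np.sin(a2)
    e1 = (y2 - y1) / (x2 - x1)        # slope of the linearized SPGR equation
    return -tr_ms / np.log(e1)        # T1 in ms (valid for 0 < e1 < 1)

# Example usage with placeholder values: signals 100 and 60 at 5 and 25 degrees, TR 10 ms
# print(t1_two_angle(100.0, 60.0, 5, 25, 10.0))
```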
Abstract:
Larger body parts are somatotopically represented in the primary motor cortex (M1), while smaller body parts, such as the fingers, have partially overlapping representations. The principles that govern the overlapping organization of M1 remain unclear. We used transcranial magnetic stimulation (TMS) to examine the cortical encoding of thumb movements in M1 of healthy humans. We performed M1 mapping of the probability of inducing a thumb movement in a particular direction and used low intensity TMS to disturb a voluntary thumb movement in the same direction during a reaction time task. With both techniques we found spatially segregated representations of the direction of TMS-induced thumb movements, thumb flexion and extension being best separated. Furthermore, the cortical regions corresponding to activation of a thumb muscle differ, depending on whether the muscle functions as agonist or as antagonist for flexion or extension. In addition, we found in the reaction time experiment that the direction of a movement is processed in M1 before the muscles participating in it are activated. It thus appears that one of the organizing principles for the human corticospinal motor system is based on a spatially segregated representation of movement directions and that the representation of individual somatic structures, such as the hand muscles, overlap.
Abstract:
PURPOSE: To determine the feasibility of using a high-resolution isotropic three-dimensional (3D) fast T1 mapping sequence for delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) to assess osteoarthritis in the hip. MATERIALS AND METHODS: T1 maps of the hip were acquired using both low- and high-resolution techniques following the administration of 0.2 mmol/kg Gd-DTPA(2-) in 35 patients. Both T1 maps were generated from two separate spoiled GRE images. The high-resolution T1 map was reconstructed in the anatomic plane equivalent to that of the low-resolution map. T1 values from equivalent anatomic regions containing femoral and acetabular cartilages were measured on the low- and high-resolution maps and compared using regression analysis. RESULTS: In vivo T1 measurements showed a statistically significant correlation between the low- and high-resolution acquisitions at 1.5 Tesla (R2 = 0.958, P < 0.001). These results demonstrate the feasibility of using a fast two-angle T1 mapping (F2T1) sequence with isotropic spatial resolution (0.8 x 0.8 x 0.8 mm) for quantitative assessment of the biochemical status of articular cartilage in the hip. CONCLUSION: The high-resolution 3D F2T1 sequence provides accurate T1 measurements in femoral and acetabular cartilages of the hip, which enables the biochemical assessment of articular cartilage in any plane through the joint. It is a powerful tool for researchers and clinicians to acquire high-resolution data in a reasonable scan time (< 30 min).
Abstract:
Within the scope of a comprehensive assessment of the degree of soil erosion in Switzerland, common methods have been used in the past, including test plot measurements, artificial rainfall simulation, and erosion modelling. In addition, mapping guidelines for all visible erosion features have been developed since the 1970s and are being successfully applied in many research and soil conservation projects. Erosion damage has been continuously mapped over a period of 9 years in a test region in the central Bernese plateau. In 2005, two additional study areas were added. The present paper assesses the data gathered and provides a comparison of the three study areas within a period of one year (from October 2005 to October 2006), focusing on the on-site impacts of soil erosion. During this period, about 11 erosive rainfall events occurred. Average soil loss rates mapped at the three study sites amounted to 0.7 t/ha, 1.2 t/ha, and 2.3 t/ha, respectively. About one fourth of the total arable land showed visible erosion damage. Maximum soil losses of about 70 t/ha occurred on individual farm plots. Average soil erosion patterns are widely used to underline the severity of an erosion problem (e.g., impacts on water bodies). But since severe rainfall events, wheel tracks, headlands, and other "singularities" often cause high erosion rates, analysis of extreme erosion patterns, such as maximum values, leads to a more differentiated understanding and to appropriate conclusions for the planning and design of soil protection measures. The study contains an assessment of soil erosion in Switzerland, emphasizing questions of extent, frequency, and severity. At the same time, the effects of different types of land management are investigated in the field, aiming at the development of meaningful impact indicators of (un-)sustainable agriculture/soil erosion risk as well as the validation of erosion models. The results illustrate that conservation agriculture, including no-till, strip tillage, and in-mulch seeding, plays an essential role in reducing soil loss as compared to conventional tillage.
Abstract:
For broadcasting purposes, mixed reality, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for mixed reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core issue for obtaining realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a possibility to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation, and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible, and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is efficiently performed on the graphics processing unit (GPU) using an environment model and the current ToF camera image. Automatic extraction and tracking of dynamic scene content are thereby performed, and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. This paper gives an overview of the whole system approach, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content, and dynamic object tracking for content planning.
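As an illustration of the keying step, a minimal depth-keying test compares the live ToF depth image against the static environment model. The threshold and array names below are assumptions, and the actual system performs this per-pixel test on the GPU rather than on the CPU as sketched here.

```python
# Minimal sketch of depth keying: a foreground mask is obtained by comparing
# the live ToF depth image against a background depth map (environment model).
import numpy as np

def depth_key(live_depth_m, background_depth_m, threshold_m=0.05):
    """Pixels significantly closer to the camera than the background model
    are classified as dynamic foreground content."""
    valid = live_depth_m > 0                                   # 0 = no measurement
    return valid & (live_depth_m < background_depth_m - threshold_m)
```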
Abstract:
OBJECTIVES We sought to analyze the time course of atrial fibrillation (AF) episodes before and after circular plus linear left atrial ablation, and the percentage of patients with complete freedom from AF after ablation, by using serial seven-day electrocardiograms (ECGs). BACKGROUND The curative treatment of AF targets the pathophysiological cornerstones of AF (i.e., the initiating triggers and/or the perpetuation of AF). The pathophysiological complexity of both may not result in an "all-or-nothing" response but may instead modify the number and duration of AF episodes. METHODS In patients with highly symptomatic AF, circular plus linear ablation lesions were placed around the left and right pulmonary veins, between the two circles, and from the left circle to the mitral annulus using an electroanatomic mapping system. Repeated continuous 7-day ECGs recorded before and after catheter ablation were used for rhythm follow-up. RESULTS In 100 patients with paroxysmal (n = 80) and persistent (n = 20) AF, the relative duration of time spent in AF decreased significantly over time (35 +/- 37% before ablation, 26 +/- 41% directly after ablation, and 10 +/- 22% after 12 months). Freedom from AF increased stepwise in patients with paroxysmal AF and after 12 months reached 88% or 74%, depending on whether the 24-h ECG or the 7-day ECG was used. Complete pulmonary vein isolation was demonstrated in <20% of the circular lesions. CONCLUSIONS The results obtained in patients with AF treated with circular plus linear left atrial lesions strongly indicate that substrate modification is the main underlying pathophysiologic mechanism and that it results in a delayed rather than an immediate cure.
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches, and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and thus produces more realistic extents. The choices of datasets and algorithms are left to the user, which makes the model adaptable to various applications and levels of dataset availability. Among the possible datasets, the DEM is the only one that is strictly required for both source area delineation and propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
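For reference, the sketch below implements the original Holmgren multiple-flow-direction weighting on which the new spreading algorithm builds; the Flow-R modifications are not reproduced here, and the exponent value is an illustrative assumption. Each downslope neighbour receives a fraction of the flow proportional to (tan beta)^x, where x controls how divergent the spreading is.

```python
# Minimal sketch of Holmgren's multiple-flow-direction weighting on a 3x3 DEM window.
import numpy as np

def holmgren_weights(dem_window_3x3, cellsize, exponent=4.0):
    """dem_window_3x3: ndarray of elevations around the central cell; returns
    the fraction of flow passed to each of the 8 neighbours (centre weight is 0)."""
    centre = dem_window_3x3[1, 1]
    # Horizontal distances to the 8 neighbours (diagonals are sqrt(2) longer)
    dist = cellsize * np.array([[2 ** 0.5, 1.0, 2 ** 0.5],
                                [1.0,      1.0, 1.0],
                                [2 ** 0.5, 1.0, 2 ** 0.5]])
    tan_beta = (centre - dem_window_3x3) / dist        # positive = downslope
    downslope = np.maximum(tan_beta, 0.0)
    weights = downslope ** exponent
    weights[1, 1] = 0.0
    total = weights.sum()
    return weights / total if total > 0 else weights   # a pit gets no outflow
```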
Abstract:
Inappropriate response tendencies may be stopped via a specific fronto/basal ganglia/primary motor cortical network. We sought to characterize the functional role of two regions in this putative stopping network, the right inferior frontal gyrus (IFG) and the primary motor cortex (M1), using electrocorticography from subdural electrodes in four patients while they performed a stop-signal task. On each trial, a motor response was initiated, and on a minority of trials a stop signal instructed the patient to try to stop the response. For each patient, there was a greater right IFG response in the beta frequency band (approximately 16 Hz) for successful versus unsuccessful stop trials. This finding adds to evidence for a functional network for stopping because changes in beta frequency activity have also been observed in the basal ganglia in association with behavioral stopping. In addition, the right IFG response occurred 100-250 ms after the stop signal, a time range consistent with a putative inhibitory control process rather than with stop-signal processing or feedback regarding success. A downstream target of inhibitory control is M1. In each patient, there was alpha/beta band desynchronization in M1 for stop trials. However, the degree of desynchronization in M1 was less for successfully than unsuccessfully stopped trials. This reduced desynchronization on successful stop trials could relate to increased GABA inhibition in M1. Together with other findings, the results suggest that behavioral stopping is implemented via synchronized activity in the beta frequency band in a right IFG/basal ganglia network, with downstream effects on M1.