978 results for High-resolution EEG


Relevance: 90.00%

Abstract:

The screening of testosterone (T) misuse for doping control is based on the urinary steroid profile, including T, its precursors and metabolites. Modifications of individual levels and of the ratios between those metabolites are indicators of T misuse. In the context of screening analysis, the most discriminant criterion known to date is the ratio of T glucuronide (TG) to epitestosterone glucuronide (EG) (TG/EG). Following the World Anti-Doping Agency (WADA) recommendations, T misuse is suspected when the ratio reaches or exceeds 4. While this marker remains very sensitive and specific, it suffers from large inter-individual variability, strongly influenced by enzyme polymorphisms. Moreover, the use of low doses or topical administration forms makes the screening of endogenous steroids difficult, and the resulting detection window no longer matches doping practices. As reference limits are estimated from population studies, which encompass inter-individual and inter-ethnic variability, new strategies including individual threshold monitoring and alternative biomarkers have been proposed to detect T misuse. The purpose of this study was to evaluate the potential of ultra-high pressure liquid chromatography (UHPLC) coupled with a new-generation high-resolution quadrupole time-of-flight mass spectrometer (QTOF-MS) to investigate steroid metabolism after transdermal and oral T administration. An approach was developed to quantify 12 targeted urinary steroids as direct glucuro- and sulfo-conjugated metabolites, preserving the phase II metabolism information that reflects genetic and environmental influences. The UHPLC-QTOF-MS(E) platform was applied to clinical study samples from 19 healthy male volunteers with different genotypes for the UGT2B17 enzyme responsible for the glucuroconjugation of T. Based on reference population ranges, none of the traditional markers of T misuse could detect doping after topical administration of T, while the detection window was short after oral testosterone undecanoate (TU) ingestion. The detection ability of the 12 targeted steroids was therefore evaluated using individual thresholds following both transdermal and oral administration. Other relevant biomarkers and minor metabolites were studied as complementary information to the steroid profile, including sulfoconjugated analytes and hydroxy forms of glucuroconjugated metabolites. While sulfoconjugated steroids may provide helpful screening information for individuals with a homozygous UGT2B17 deletion, hydroxy-glucuroconjugated analytes could extend the detection window of oral TU doping.
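
As an illustration of the screening criterion described above, the following minimal Python sketch applies the TG/EG decision rule with the WADA threshold of 4. The function name and example concentrations are hypothetical, and a real screening pipeline would follow a suspicious result with confirmatory analysis rather than a simple ratio test.

# Minimal sketch of the TG/EG screening criterion described above.
# Concentrations are assumed to be in the same units (e.g., ng/mL);
# the threshold of 4 follows the WADA guidance cited in the abstract.

def suspicious_tg_eg_ratio(tg: float, eg: float, threshold: float = 4.0) -> bool:
    """Return True if the T-glucuronide / epitestosterone-glucuronide
    ratio reaches or exceeds the screening threshold."""
    if eg <= 0:
        # An unmeasurable EG concentration requires follow-up rather than a ratio test.
        raise ValueError("EG concentration must be positive to compute the ratio")
    return tg / eg >= threshold

# Example: a sample with TG = 60 ng/mL and EG = 12 ng/mL gives TG/EG = 5,
# which would trigger confirmatory analysis.
print(suspicious_tg_eg_ratio(60.0, 12.0))  # True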

Relevance: 90.00%

Abstract:

Although fetal anatomy can be adequately viewed in modern multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted as super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. The reconstruction is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has largely favored Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-gradient techniques have been applied for their optimization. In preliminary work, it was shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal data and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms in the presence of residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization relative to the data-fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
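
For readers unfamiliar with the model referred to above, the reconstruction is commonly posed as a Total-Variation-regularized inverse problem of the following generic form (a standard formulation from the SR literature, not necessarily the exact energy minimized in this work):

\hat{X} = \arg\min_{X} \sum_{k} \left\| D_k B_k M_k X - Y_k \right\|_2^2 + \lambda \, \mathrm{TV}(X), \qquad \mathrm{TV}(X) = \int_{\Omega} |\nabla X| \, \mathrm{d}x,

where the Y_k are the acquired low-resolution slice stacks, M_k the estimated slice motions, B_k the slice blur, D_k the downsampling operators, X the high-resolution volume to recover, and \lambda the regularization weight whose automatic selection is discussed above; both terms are convex, which is what the fast convex optimization techniques mentioned in the abstract exploit.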

Relevance: 90.00%

Abstract:

Allocentric spatial memory, the memory for locations coded in relation to objects comprising our environment, is a fundamental component of episodic memory and is dependent on the integrity of the hippocampal formation in adulthood. Previous research from different laboratories reported that basic allocentric spatial memory abilities are reliably observed in children after 2 years of age. Based on work performed in monkeys and rats, we had proposed that the functional maturation of direct entorhinal cortex projections to the CA1 field of the hippocampus might underlie the emergence of basic allocentric spatial memory. We also proposed that the protracted development of the dentate gyrus and its projections to the CA3 field of the hippocampus might underlie the development of high-resolution allocentric spatial memory capacities, based on the essential contribution of these structures to the process known as pattern separation. Here, we present an experiment designed to assess the development of spatial pattern separation capacities and its impact on allocentric spatial memory performance in children from 18 to 48 months of age. We found that: (1) allocentric spatial memory performance improved with age, (2) as compared to younger children, a greater number of children older than 36 months advanced to the final stage requiring the highest degree of spatial resolution, and (3) children that failed at different stages exhibited difficulties in discriminating locations that required higher spatial resolution abilities. These results are consistent with the hypothesis that improvements in human spatial memory performance might be linked to improvements in pattern separation capacities.

Relevance: 90.00%

Abstract:

PURPOSE: To assess the prevalence of PRPH2 mutations in autosomal dominant retinitis pigmentosa (adRP), to report 6 novel mutations, to characterize the biochemical features of a recurrent novel mutation, and to study the clinical features of adRP patients. DESIGN: Retrospective clinical and molecular genetic study. METHODS: Clinical investigations included visual field testing, fundus examination, high-resolution spectral-domain optical coherence tomography (OCT), fundus autofluorescence imaging, and electroretinogram (ERG) recording. PRPH2 was screened by Sanger sequencing in a cohort of 310 French families with adRP. Peripherin-2 protein was produced in yeast and analyzed by Western blot. RESULTS: We identified 15 mutations, including 6 novel and 9 previously reported changes, in 32 families, accounting for a prevalence of 10.3% in this adRP population. We showed that a new recurrent p.Leu254Gln mutation leads to protein aggregation, suggesting abnormal folding. The clinical severity of the disease in the examined patients was moderate, with 78% of eyes retaining a visual acuity of 0.5 to 1.0 and 52% of eyes retaining more than 50% of the visual field. Some patients characteristically showed vitelliform deposits or macular involvement. In some families, pericentral RP or macular dystrophy was found in certain members, while widespread RP was present in other members of the same families. CONCLUSIONS: Mutations in PRPH2 account for 10.3% of adRP in the French population, which is higher than previously reported (0%-8%). This makes PRPH2 the second most frequent adRP gene after RHO in our series. PRPH2 mutations cause highly variable phenotypes and moderate forms of adRP, including mild cases, which could be underdiagnosed.

Relevance: 90.00%

Abstract:

In this doctoral thesis, a tomographic STED microscopy technique for 3D super-resolution imaging was developed and used to observe bone remodeling processes. To improve upon existing methods, we used a tomographic approach based on a commercially available stimulated emission depletion (STED) microscope. A region of interest (ROI) was observed at two oblique angles: one in the standard inverted configuration from below (bottom view) and another from the side (side view) via a micro-mirror positioned close to the ROI. The two views were reconstructed into a final tomogram. The technique, named tomographic STED microscopy, achieved an axial resolution of approximately 70 nm on microtubule structures in a fixed biological specimen. High-resolution imaging of osteoclasts (OCs) actively resorbing bone was achieved by creating an optically transparent coating on a microscope coverglass that imitates a fractured bone surface. 2D super-resolution STED microscopy on this bone-mimicking layer showed approximately 60 nm lateral resolution on a resorption-associated organelle, allowing these structures to be imaged with super-resolution microscopy for the first time. The tomographic STED microscopy technique was further applied to study the resorption mechanisms of OCs cultured on the bone coating. It revealed specific actin cytoskeleton structures, comet tails, some of which faced upwards and others downwards. In our view, this indicates that the actin cytoskeleton is involved in vesicular exocytosis and endocytosis during bone resorption. The application of tomographic STED microscopy in bone biology demonstrates that 3D super-resolution techniques can provide new insights into biological 3D nano-structures beyond the diffraction limit when the optical constraints of super-resolution imaging are carefully taken into account.

Relevance: 90.00%

Abstract:

Understanding complex biological processes requires sophisticated experimental and computational approaches. Recent advances in functional genomics strategies now provide powerful tools for collecting data on the interconnectivity of genes, proteins and small molecules, with the aim of studying the organizational principles of their cellular networks. Integrating this knowledge within a systems biology framework would allow the prediction of new functions for genes that remain uncharacterized to date. To make such predictions at the genomic scale in the yeast Saccharomyces cerevisiae, we developed an innovative strategy that combines high-throughput interactome screening of protein-protein interactions, in silico prediction of gene function, and validation of these predictions by high-throughput lipidomics. First, we performed a large-scale screen of protein-protein interactions using protein-fragment complementation. This method detects in vivo interactions between proteins expressed from their natural promoters. Moreover, no bias against membrane interactions was evident with this method, in contrast to other existing techniques for detecting protein-protein interactions. Consequently, we discovered several new interactions and increased the coverage of a lipid homeostasis interactome whose understanding remains incomplete to date. We then applied a machine learning algorithm to identify eight uncharacterized genes with a potential role in lipid metabolism. Finally, we investigated whether these genes, and a distinct group of transcriptional regulators not previously implicated with lipids, play a role in lipid homeostasis. To this end, we analyzed the lipidomes of deletion mutants of selected genes. To examine a large number of strains, we developed a high-throughput platform for high-content lipidomic screening of yeast mutant libraries. This platform consists of high-resolution Orbitrap mass spectrometry and a dedicated data-processing framework supporting lipid phenotyping of hundreds of Saccharomyces cerevisiae mutants. The lipidomics experiments confirmed the functional predictions by demonstrating differences in the lipid metabolic phenotypes of deletion mutants lacking the genes YBR141C and YJR015W, known for their involvement in lipid metabolism. An altered lipid phenotype was also observed for a deletion mutant of the transcription factor KAR4, which had not previously been linked to lipid metabolism. Together, these results demonstrate that a process integrating the acquisition of new molecular interactions, computational prediction of gene functions, and an innovative high-throughput lipidomics platform is an important addition to existing systems biology methodologies. Developments in functional genomics methodologies and lipidomics technologies thus provide new means to study the biological networks of higher eukaryotes, including mammals. Consequently, the strategy presented here has potential applications in more complex organisms.
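
The in silico prediction step can be illustrated, at its simplest, as guilt-by-association over the protein-protein interaction network: an uncharacterized gene is scored by the fraction of its interaction partners already annotated to lipid metabolism. The Python sketch below is a generic illustration only, not the learning algorithm actually used in the thesis; the network, the YXX-style gene names and the annotation set are invented.

# Minimal guilt-by-association sketch: rank uncharacterized genes by the
# fraction of their protein-protein interaction partners annotated to lipid
# metabolism. Purely illustrative: the network and annotations below are
# invented, and the thesis used a more sophisticated learning algorithm on
# the experimentally measured interactome.

# Hypothetical undirected PPI network as an adjacency list.
ppi = {
    "YXX001W": {"FAA1", "OLE1", "ELO2"},
    "YXX002C": {"ACT1", "CDC19", "OLE1"},
    "YXX003W": {"ACT1", "TUB1"},
}

# Hypothetical set of genes already annotated to lipid metabolism.
lipid_annotated = {"FAA1", "OLE1", "ELO2"}

def lipid_score(gene: str) -> float:
    """Fraction of a gene's interaction partners annotated to lipid metabolism."""
    partners = ppi.get(gene, set())
    return len(partners & lipid_annotated) / len(partners) if partners else 0.0

for gene in sorted(ppi, key=lipid_score, reverse=True):
    print(gene, round(lipid_score(gene), 2))   # YXX001W 1.0, YXX002C 0.33, YXX003W 0.0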

Relevance: 90.00%

Abstract:

An improved color video super-resolution technique using kernel regression and fuzzy enhancement is presented in this paper. A high-resolution frame is computed from a set of low-resolution video frames by kernel regression using an adaptive Gaussian kernel. A fuzzy smoothing filter is proposed to enhance the regression output. The proposed technique is a low-cost software solution for resolution enhancement of color video in multimedia applications. Its performance is evaluated on several color videos and found to be better than that of other techniques in producing high-quality, high-resolution color video.
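
To make the regression step concrete, here is a minimal single-channel, single-frame Python sketch of Nadaraya-Watson kernel regression with a fixed Gaussian kernel. The published technique additionally adapts the kernel, fuses several video frames, and applies the fuzzy smoothing filter, all of which are omitted here.

# Minimal sketch of Gaussian kernel regression (Nadaraya-Watson) for 2x
# upscaling of a single-channel frame. Fixed bandwidth and a single frame
# only; the adaptive kernel, multi-frame fusion and fuzzy enhancement of
# the paper are not reproduced.

import numpy as np

def kernel_regression_upscale(lr: np.ndarray, scale: int = 2, sigma: float = 0.6,
                              radius: int = 2) -> np.ndarray:
    h, w = lr.shape
    hr = np.zeros((h * scale, w * scale), dtype=np.float64)
    ys, xs = np.arange(h), np.arange(w)
    for i in range(h * scale):
        for j in range(w * scale):
            # Position of the HR pixel expressed in LR coordinates.
            y, x = i / scale, j / scale
            y0, x0 = int(round(y)), int(round(x))
            ywin = ys[max(0, y0 - radius): y0 + radius + 1]
            xwin = xs[max(0, x0 - radius): x0 + radius + 1]
            yy, xx = np.meshgrid(ywin, xwin, indexing="ij")
            # Gaussian weights based on distance from the target position.
            w2 = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
            hr[i, j] = np.sum(w2 * lr[yy, xx]) / np.sum(w2)
    return hr

lr_frame = np.random.rand(32, 32)      # stand-in for one low-resolution frame
hr_frame = kernel_regression_upscale(lr_frame)
print(hr_frame.shape)                  # (64, 64)

In the adaptive variant, the bandwidth and shape of the Gaussian would be steered by local image structure rather than held fixed.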

Relevance: 90.00%

Abstract:

In this paper, a new directionally adaptive, learning-based, single-image super-resolution method using a multiple-direction wavelet transform, called directionlets, is presented. The method uses directionlets to effectively capture directional features and to extract edge information along different directions from a set of available high-resolution training images. This information serves as the training set for super-resolving a low-resolution input image: the directionlet coefficients at finer scales of its high-resolution version are learned locally from the training set, and the inverse directionlet transform recovers the super-resolved high-resolution image. Simulation results show that the proposed approach outperforms standard interpolation techniques such as cubic spline interpolation, as well as standard wavelet-based learning, both visually and in terms of mean squared error (MSE). The method also gives good results on aliased images.
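
The evaluation reported above (comparison with cubic spline interpolation in terms of MSE) can be reproduced with a simple protocol: downsample a reference image, upscale it again with cubic interpolation, and measure the error against the reference. The Python sketch below shows only this baseline and metric, not the directionlet learning itself; the random test image is a placeholder.

# Baseline and metric for the comparison mentioned above: upscale a
# downsampled image with cubic interpolation and measure MSE / PSNR
# against the reference. Only the evaluation protocol is shown here.

import numpy as np
from scipy.ndimage import zoom

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 1.0) -> float:
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

hr_reference = np.random.rand(128, 128)   # stand-in for a ground-truth image
lr = zoom(hr_reference, 0.5, order=3)     # simulated low-resolution input
cubic_sr = zoom(lr, 2.0, order=3)         # cubic-interpolated upscaling

print("MSE :", mse(cubic_sr, hr_reference))
print("PSNR:", psnr(cubic_sr, hr_reference))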

Relevance: 90.00%

Abstract:

The super-resolution problem is an inverse problem and refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image corresponding to an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform or DCT. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this approach is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it performs better than conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the image size but also reduces the degradations incurred during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. Because the conventional directionlet transform is computationally complex, the lifting scheme is used for its implementation. The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby computation time. The quality of the super-resolved image depends on the type of wavelet basis used, so a study is conducted to determine the effect of different wavelets on the single-image super-resolution method. Finally, the new method, developed for greyscale images, is extended to colour images and noisy images.
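
The lifting scheme mentioned above can be illustrated in its simplest form: a one-level Haar transform implemented as split, predict and update steps with perfect reconstruction. The directionlet lifting used in the thesis applies analogous steps along skewed lattice directions, which is not reproduced in this Python sketch.

# One-level Haar wavelet transform written as lifting steps (split,
# predict, update) and its inverse. This is the simplest illustration of
# the lifting scheme referred to above.

import numpy as np

def haar_lifting_forward(x: np.ndarray):
    even, odd = x[0::2].astype(np.float64), x[1::2].astype(np.float64)  # split
    detail = odd - even                 # predict: odd samples from even neighbours
    approx = even + detail / 2.0        # update: preserve the running average
    return approx, detail

def haar_lifting_inverse(approx: np.ndarray, detail: np.ndarray) -> np.ndarray:
    even = approx - detail / 2.0        # undo update
    odd = detail + even                 # undo predict
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd        # merge
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0])   # even-length test signal
a, d = haar_lifting_forward(signal)
print(np.allclose(haar_lifting_inverse(a, d), signal))  # True: perfect reconstruction

Because each lifting step is computed in place from previously computed values, the scheme needs fewer operations and less memory than a direct filter-bank implementation, which is the computational advantage exploited in the thesis.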

Relevance: 90.00%

Abstract:

High-density, uniform GaN nanodot arrays with controllable size have been synthesized using template-assisted selective growth. GaN nanodots with average diameters of 40 nm, 80 nm and 120 nm were selectively grown by metalorganic chemical vapor deposition (MOCVD) on a nano-patterned SiO2/GaN template. The nanoporous SiO2 on the GaN surface was created by inductively coupled plasma (ICP) etching using an anodic aluminum oxide (AAO) template as a mask. The selective regrowth results in highly crystalline GaN nanodots, as confirmed by high-resolution transmission electron microscopy. The narrow size distribution and uniform spatial positioning of the nanoscale dots offer potential advantages over self-assembled dots grown in the Stranski–Krastanow mode.

Relevance: 90.00%

Abstract:

Tropical cyclones have been investigated in a T159 version of the MPI ECHAM5 climate model using a novel technique to diagnose the evolution of the three-dimensional vorticity structure of tropical cyclones, including their full life cycle from weak initial vortex to possible extra-tropical transition. Results have been compared with reanalyses (ERA40 and JRA25) and observed tropical storms in the Northern Hemisphere for the period 1978-1999. There is no indication of any trend in the number or intensity of tropical storms during this period in ECHAM5 or in the reanalyses, but there are distinct inter-annual variations. The storms simulated by ECHAM5 are realistic in both space and time, but the model, and even more so the reanalyses, underestimate the intensities of the most intense storms (in terms of their maximum wind speeds). There is an indication of a response to ENSO, with a smaller number of Atlantic storms during El Niño, in agreement with previous studies. The global divergent circulation responds to El Niño by setting up a large-scale convergent flow centered over the central Pacific, with enhanced subsidence over the tropical Atlantic. At the same time there is an increase in vertical wind shear in the region of the tropical Atlantic where tropical storms normally develop. There is good correspondence between the model and ERA40, except that the divergent circulation is somewhat stronger in the model. The model underestimates storms in the Atlantic but tends to overestimate them in the western Pacific and the North Indian Ocean. It is suggested that the overestimation of storms in the Pacific is related to an overly strong response to tropical Pacific SST anomalies. The overestimation in the North Indian Ocean is likely due to an overprediction of the intensity of monsoon depressions, which are then classified as intense tropical storms. Nevertheless, the overall results are encouraging and will contribute to increased confidence in simulating intense tropical storms with high-resolution climate models.
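
As a rough illustration of feature-based storm identification of the kind referred to above, the Python sketch below finds local low-level vorticity maxima above a threshold and links them between successive time steps by nearest neighbour. The threshold, search radius and synthetic fields are illustrative only; the study's diagnostic additionally follows the full three-dimensional vorticity structure over the storm life cycle.

# Generic sketch of vorticity-feature tracking: find grid points where
# low-level relative vorticity exceeds a threshold and link maxima between
# successive time steps by nearest neighbour within a search radius.
# Threshold, radius and the synthetic fields are illustrative choices.

import numpy as np

def find_maxima(vort: np.ndarray, threshold: float = 5e-5):
    """Return (i, j) grid indices of local vorticity maxima above a threshold."""
    maxima = []
    for i in range(1, vort.shape[0] - 1):
        for j in range(1, vort.shape[1] - 1):
            window = vort[i - 1:i + 2, j - 1:j + 2]
            if vort[i, j] >= threshold and vort[i, j] == window.max():
                maxima.append((i, j))
    return maxima

def link_tracks(maxima_t0, maxima_t1, max_dist: float = 5.0):
    """Pair each maximum at t0 with the nearest maximum at t1 within max_dist grid points."""
    links = []
    for p in maxima_t0:
        if not maxima_t1:
            break
        q = min(maxima_t1, key=lambda r: np.hypot(r[0] - p[0], r[1] - p[1]))
        if np.hypot(q[0] - p[0], q[1] - p[1]) <= max_dist:
            links.append((p, q))
    return links

# Synthetic low-level vorticity snapshots with one feature moving north-west.
field_t0 = np.zeros((50, 50)); field_t0[20, 30] = 1e-4
field_t1 = np.zeros((50, 50)); field_t1[22, 28] = 1e-4
print(link_tracks(find_maxima(field_t0), find_maxima(field_t1)))  # [((20, 30), (22, 28))]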

Relevance: 90.00%

Abstract:

A high-resolution regional atmosphere model is used to investigate the sensitivity of the North Atlantic storm track to the spatial and temporal resolution of the sea surface temperature (SST) data used as a lower boundary condition. The model is run over an unusually large domain covering all of the North Atlantic and Europe, and is shown to produce a very good simulation of the observed storm track structure. The model is forced at the lateral boundaries with 15–20 years of data from the ERA-40 reanalysis, and at the lower boundary by SST data of differing resolution. The impacts of increasing spatial and temporal resolution are assessed separately, and in both cases increasing the resolution leads to subtle but significant changes in the storm track. In some, but not all, cases these changes act to reduce the small storm track biases seen in the model when it is forced with low-resolution SSTs. In addition, there are several clear mesoscale responses to increased spatial SST resolution, with surface heat fluxes and convective precipitation increasing by 10–20% along the Gulf Stream SST gradient.

Relevance: 90.00%

Abstract:

The intraseasonal variability (ISV) of the Indian summer monsoon is dominated by a 30–50 day oscillation between "active" and "break" events of enhanced and reduced rainfall over the subcontinent, respectively. These organized convective events form in the equatorial Indian Ocean and propagate north to India. Coupled atmosphere–ocean processes are thought to play a key role in the intensity and propagation of these events. A high-resolution, coupled atmosphere–mixed-layer-ocean model, HadKPP, is assembled. HadKPP comprises the Hadley Centre Atmospheric Model (HadAM3) and the K Profile Parameterization (KPP) mixed-layer ocean model. Following studies showing that fine upper-ocean vertical resolution and sub-diurnal coupling frequencies improve the simulation of ISV in SSTs, KPP is run at 1 m vertical resolution near the surface, and the atmosphere and ocean are coupled every three hours. HadKPP accurately simulates the 30–50 day ISV in rainfall and SSTs over India and the Bay of Bengal, respectively, but suffers from low ISV on the equator. This is because the HadAM3 convection scheme produces limited ISV in surface fluxes. HadKPP demonstrates little of the observed northward propagation of intraseasonal events, producing instead a standing oscillation. The lack of equatorial ISV in convection in HadAM3 constrains the ability of KPP to produce equatorial SST anomalies, which further weakens the ISV of convection. It is concluded that while atmosphere–ocean interactions are undoubtedly essential to an accurate simulation of ISV, they are not a panacea for model deficiencies. In regions where the atmospheric forcing is adequate, such as the Bay of Bengal, KPP produces SST anomalies that are comparable to the Tropical Rainfall Measuring Mission Microwave Imager (TMI) SST analyses in both their magnitude and their timing with respect to rainfall anomalies over India. HadKPP also displays a much-improved phase relationship between rainfall and SSTs compared with a HadAM3 ensemble forced by observed SSTs, when both are evaluated against observations. Coupling to mixed-layer models such as KPP has the potential to improve operational predictions of ISV, particularly when the persistence time of SST anomalies is shorter than the forecast lead time.
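
The motivation for 1 m near-surface resolution and three-hourly coupling can be illustrated with a back-of-the-envelope slab calculation: the diurnal SST response scales inversely with the depth over which the surface flux is mixed, and it vanishes if the flux is exchanged only once per day. The Python sketch below is a toy illustration with an idealized 200 W m-2 diurnal flux anomaly, not the KPP mixed-layer physics.

# Toy slab-ocean calculation illustrating why a thin near-surface layer and
# sub-diurnal coupling matter for diurnal SST variability. A single slab of
# depth h is forced by an idealized sinusoidal net-flux anomaly.

import numpy as np

rho, cp = 1025.0, 3990.0          # seawater density (kg m-3) and heat capacity (J kg-1 K-1)
q0 = 200.0                        # amplitude of the diurnal net-flux anomaly (W m-2), idealized
day = 86400.0
omega = 2.0 * np.pi / day

def diurnal_sst_amplitude(h: float, coupling_step: float) -> float:
    """Half the peak-to-trough SST response of a slab of depth h (m) when the
    flux anomaly is sampled once per coupling_step (s) and held constant."""
    t = np.arange(0.0, 5.0 * day, coupling_step)
    flux = q0 * np.sin(omega * t)                      # flux seen by the slab at each coupling time
    temp = np.cumsum(flux * coupling_step) / (rho * cp * h)
    last_day = temp[t >= 4.0 * day]
    return 0.5 * (last_day.max() - last_day.min())

for h in (1.0, 10.0):
    print(f"h = {h:4.1f} m: 3-hourly coupling -> {diurnal_sst_amplitude(h, 3 * 3600):.2f} K, "
          f"daily coupling -> {diurnal_sst_amplitude(h, day):.2f} K")

With a 1 m layer and three-hourly coupling the slab swings by several tenths of a kelvin over the day, while a 10 m layer damps the signal by an order of magnitude and once-daily coupling misses it entirely, which is the qualitative point behind the HadKPP configuration.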

Relevance: 90.00%

Abstract:

Projections of future global sea level depend on reliable estimates of changes in the size of polar ice sheets. Calculating this directly from global general circulation models (GCMs) is unreliable because the coarse resolution of 100 km or more is unable to capture narrow ablation zones, and ice dynamics is not usually taken into account in GCMs. To overcome these problems a high-resolution (20 km) dynamic ice sheet model has been coupled to the third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3). A novel feature is the use of two-way coupling, so that climate changes in the GCM drive ice mass changes in the ice sheet model that, in turn, can alter the future climate through changes in orography, surface albedo, and freshwater input to the model ocean. At the start of the main experiment the atmospheric carbon dioxide concentration was increased to 4 times the preindustrial level and held constant for 3000 yr. By the end of this period the Greenland ice sheet is almost completely ablated and has made a direct contribution of approximately 7 m to global average sea level, causing a peak rate of sea level rise of 5 mm yr⁻¹ early in the simulation. The effect of ice sheet depletion on global and regional climate has been examined and it was found that apart from the sea level rise, the long-term effect on global climate is small. However, there are some significant regional climate changes that appear to have reduced the rate at which the ice sheet ablates.
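
The approximately 7 m contribution quoted above can be checked with a back-of-the-envelope conversion of ice volume to sea-level equivalent. The volume and ocean-area values in the Python sketch below are standard rounded figures, not numbers taken from the paper, and the calculation ignores bedrock adjustment and changes in ocean area.

# Back-of-the-envelope check of the ~7 m sea-level contribution: convert an
# approximate Greenland ice volume into a rise spread over the global ocean.
# The inputs are standard rounded values, not taken from the paper.

ice_volume = 2.9e15                   # m^3, approximate Greenland ice sheet volume
rho_ice, rho_water = 917.0, 1000.0    # kg m-3 (ice and fresh meltwater)
ocean_area = 3.6e14                   # m^2, global ocean surface area

meltwater_volume = ice_volume * rho_ice / rho_water
sea_level_rise = meltwater_volume / ocean_area
print(f"Sea-level equivalent: {sea_level_rise:.1f} m")   # about 7.4 m, of order the ~7 m quoted above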

Relevance: 90.00%

Abstract:

The High Resolution Dynamics Limb Sounder is described, with particular reference to the atmospheric measurements to be made and the rationale behind the measurement strategy. The demands this strategy places on the filters to be used in the instrument, and the designs to which this leads, are described. A second set of filters at an intermediate image plane, used to reduce "ghost imaging", is discussed together with their required spectral properties. A method is described for combining the spectral characteristics of the primary and secondary filters in each channel with the spectral responses of the detectors and other optical elements to obtain the system spectral response, weighted appropriately for the Planck function and atmospheric limb absorption. This method is used to demonstrate whether the out-of-band spectral blocking requirement for a channel is being met, and an example calculation shows how the blocking is built up for a representative channel. Finally, the techniques used to produce filters of the necessary sub-millimetre sizes, together with the testing methods and procedures used to assess environmental durability and establish space-flight quality, are discussed.
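
The combination method can be sketched as follows: the system response of a channel is the product of the primary filter, intermediate-image-plane filter, detector and other optical transmittances, and the out-of-band blocking is judged after weighting that response by the Planck radiance (the full treatment also weights by atmospheric limb absorption, omitted here). In the Python sketch below, the Gaussian and flat component shapes, the 12 micron channel centre and the 250 K temperature are hypothetical stand-ins for the measured profiles.

# Sketch of the spectral-response combination described above: multiply the
# component transmittances, weight by the Planck radiance, and report the
# out-of-band fraction of the weighted response. Component shapes and the
# channel centre are hypothetical.

import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def planck(wl_m: np.ndarray, T: float) -> np.ndarray:
    """Spectral radiance B(lambda, T) in W m-2 sr-1 m-1."""
    return (2.0 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * kB * T))

wl_um = np.linspace(5.0, 20.0, 3001)       # wavelength grid (micrometres)
wl_m = wl_um * 1e-6

def gauss(centre_um: float, width_um: float) -> np.ndarray:
    return np.exp(-0.5 * ((wl_um - centre_um) / width_um) ** 2)

primary = 0.8 * gauss(12.0, 0.3) + 1e-4    # bandpass filter plus small out-of-band leakage
secondary = 0.9 * gauss(12.0, 0.5) + 1e-3  # intermediate-image-plane blocking filter
detector = 0.6 * np.ones_like(wl_um)       # broadband detector response

system = primary * secondary * detector    # combined system spectral response
weight = planck(wl_m, 250.0)               # Planck weighting at a representative 250 K

dwl = wl_m[1] - wl_m[0]
in_band = np.abs(wl_um - 12.0) < 1.0
signal = np.sum((system * weight)[in_band]) * dwl
total = np.sum(system * weight) * dwl
print(f"Out-of-band fraction of the weighted response: {1.0 - signal / total:.2e}")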