930 results for Protein Array Analysis -- methods
Abstract:
Working memory (WM) is not a unitary construct. There are distinct processes involved in encoding information, maintaining it on-line, and using it to guide responses. The anatomical configurations of these processes are more accurately analyzed as functionally connected networks than as collections of individual regions. In the current study we analyzed event-related functional magnetic resonance imaging (fMRI) data from a Sternberg Item Recognition Paradigm WM task using a multivariate analysis method that allowed the linking of functional networks to temporally separated WM epochs. The length of the delay epochs was varied to optimize isolation of the hemodynamic response (HDR) for each task epoch. All extracted functional networks displayed statistically significant sensitivity to delay length. Novel information extracted from these networks that was not apparent in the univariate analysis of these data included involvement of the hippocampus in encoding/probe, and decreases in BOLD signal in the superior temporal gyrus (STG), along with default-mode regions, during encoding/delay. The bilateral hippocampal activity during encoding/delay fits with theoretical models of WM in which memoranda held across the short term are activated long-term memory representations. The BOLD signal decreases in the STG were unexpected, and may reflect repetition suppression effects invoked by internal repetition of letter stimuli. Thus, analysis methods focusing on how network dynamics relate to experimental conditions allowed extraction of novel information not apparent in univariate analyses, and are particularly recommended for WM experiments for which task epochs cannot be randomized.
Abstract:
Objective: This work investigates the nature of the comprehension impairment in Wernicke's aphasia, by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke's aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. Methods: We examined analysis of basic acoustic stimuli in Wernicke's aphasia participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Results: Participants with Wernicke's aphasia showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both frequency and dynamic modulation detection correlated significantly with auditory comprehension abilities in the Wernicke's aphasia participants. Conclusion: These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in Wernicke's aphasia, which may make a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing.
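The abstract mentions criterion-free, adaptive threshold measures without detailing the procedure. The sketch below is a generic 2-down/1-up adaptive staircase run on a simulated listener, only to illustrate how such a threshold track converges; the step size, trial count, and simulated psychometric function are hypothetical and not taken from the study.

# Illustrative sketch only: a 2-down/1-up adaptive staircase on a simulated
# listener, the kind of criterion-free threshold tracking the abstract refers
# to. All parameter values below are hypothetical, not the study's.
import numpy as np

rng = np.random.default_rng(0)

def simulated_listener(depth, true_threshold=2.0, slope=1.5):
    """Probability of a correct 2AFC response at a given FM depth (arbitrary units)."""
    p = 0.5 + 0.5 / (1.0 + np.exp(-slope * (depth - true_threshold)))
    return rng.random() < p

depth = 8.0            # starting FM depth, deliberately easy (hypothetical units)
step = 1.0             # step size
n_down = 2             # 2-down/1-up tracks ~70.7% correct
correct_streak = 0
reversals, last_direction = [], None

for trial in range(80):
    if simulated_listener(depth):
        correct_streak += 1
        if correct_streak == n_down:
            correct_streak = 0
            direction = "down"
            depth = max(depth - step, 0.1)
        else:
            continue                      # correct but no step yet
    else:
        correct_streak = 0
        direction = "up"
        depth += step
    if last_direction and direction != last_direction:
        reversals.append(depth)           # record level at each reversal
    last_direction = direction

# Threshold estimate: mean of the last few reversal points
threshold = np.mean(reversals[-6:]) if reversals else float("nan")
print(f"Estimated detection threshold: {threshold:.2f}")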
Abstract:
A new tri-functional ligand, (iBu)2NCOCH2SO2CH2CON(iBu)2 (L), was prepared and characterized. The coordination chemistry of this ligand with uranyl nitrate was studied by IR, 1H NMR, ES-MS, TG and elemental analysis methods. The structure of the compound [UO2(NO3)2L] was determined by single-crystal X-ray diffraction techniques. In the structure the uranium(VI) ion is surrounded by eight oxygen atoms in a hexagonal bipyramidal geometry. Four oxygen atoms from two nitrate groups and two oxygen atoms from the ligand form a planar hexagon. The ligand acts as a bidentate chelate and bonds through both carbamoyl groups to the uranyl nitrate. An ES-MS spectrum shows that the complex retains this bonding in solution. The compound displayed vibronically coupled fluorescence emission.
Abstract:
Neurovascular coupling in response to stimulation of the rat barrel cortex was investigated using concurrent multichannel electrophysiology and laser Doppler flowmetry. The data were used to build a linear dynamic model relating neural activity to blood flow. Local field potential time series were subject to current source density analysis, and the time series of a layer IV sink of the barrel cortex was used as the input to the model. The model output was the time series of the changes in regional cerebral blood flow (CBF). We show that this model can provide an excellent fit to the CBF responses for stimulus durations of up to 16 s. The structure of the model consisted of two coupled components representing vascular dilation and constriction. The complex temporal characteristics of the CBF time series were reproduced by the relatively simple balance of these two components. We show that the impulse response obtained under the 16-s duration stimulation condition generalised to provide a good prediction of the data from the shorter duration stimulation conditions. Furthermore, by optimising three out of the total of nine model parameters, the variability in the data can be well accounted for over a wide range of stimulus conditions. By establishing linearity, classic system analysis methods can be used to generate and explore a range of equivalent model structures (e.g., feed-forward or feedback) to guide the experimental investigation of the control of vascular dilation and constriction following stimulation.
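As an illustration of the linear impulse-response approach described above, the sketch below convolves a boxcar "neural" input with a kernel built from a difference of two gamma functions, standing in for the dilatory and constrictive components, and fits the kernel parameters by nonlinear least squares. The kernel form, parameter values, and data are hypothetical and do not reproduce the paper's actual model structure.

# Generic sketch of a linear impulse-response (convolution) model relating a
# neural input time series to a CBF response. The kernel is a difference of
# two gamma functions standing in for dilation and constriction; the study's
# actual coupled-component model and parameters are not reproduced here.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import gamma

dt = 0.1                                  # sample interval (s), hypothetical
t = np.arange(0, 25, dt)                  # kernel support

def kernel(params, t):
    a1, b1, a2, b2, w = params            # shapes, scales, constriction weight
    dilate = gamma.pdf(t, a1, scale=b1)
    constrict = gamma.pdf(t, a2, scale=b2)
    return dilate - w * constrict

def predict_cbf(params, neural, t, dt):
    h = kernel(params, t)
    return np.convolve(neural, h)[: len(neural)] * dt

# Synthetic "neural" input: a 16-s boxcar of activity in a 40-s window
neural = np.zeros(400)
neural[20:180] = 1.0

# Synthetic "measured" CBF generated from known parameters plus noise
true_params = [3.0, 1.0, 6.0, 1.2, 0.4]
rng = np.random.default_rng(1)
cbf_obs = predict_cbf(true_params, neural, t, dt) + 0.01 * rng.standard_normal(400)

# Fit the kernel parameters by nonlinear least squares
res = least_squares(
    lambda p: predict_cbf(p, neural, t, dt) - cbf_obs,
    x0=[2.0, 1.5, 5.0, 1.5, 0.2],
    bounds=([0.5, 0.1, 0.5, 0.1, 0.0], [10, 5, 12, 5, 1]),
)
print("Fitted kernel parameters:", np.round(res.x, 2))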
Abstract:
Smart healthcare is a complex domain for systems integration because of the human and technical factors and the heterogeneous data sources involved. As part of the smart city, it is a complex area in which clinical functions require smart collaboration among multiple systems for effective communication between departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many challenges to integration and interoperability, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to analyse the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an ongoing project at a local hospital for our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Record (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation (including publications and internal working papers), one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists, and secretaries. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarizes the empirical findings and proposes recommendations for guiding integration in the radiology context.
Abstract:
We use combinations of geomagnetic indices, based on both variation range and hourly means, to derive the solar wind flow speed, the interplanetary magnetic field strength at 1 AU and the total open solar flux between 1895 and the present. We analyze the effects of the regression procedure and geomagnetic indices used by adopting four analysis methods. These give a mean interplanetary magnetic field strength increase of 45.1 ± 4.5% between 1903 and 1956, associated with a 14.4 ± 0.7% rise in the solar wind speed. We use averaging timescales of 1 and 2 days to allow for the difference between the magnetic fluxes threading the coronal source surface and the heliocentric sphere at 1 AU. The largest uncertainties originate from the choice of regression procedure: the average of all eight estimates of the rise in open solar flux is 73.0 ± 5.0%, but the best procedure, giving the narrowest and most symmetric distribution of fit residuals, yields 87.3 ± 3.9%.
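The abstract stresses that the choice of regression procedure dominates the uncertainty. The sketch below, using synthetic data only, contrasts ordinary least squares with an orthogonal (total least squares) fit when both variables carry noise, the kind of sensitivity the different analysis methods quantify; the slope and noise levels are invented and are not the geomagnetic series used in the study.

# Illustrative sketch: ordinary least squares versus orthogonal (total) least
# squares on synthetic data with noise in both variables, to show how the
# choice of regression procedure shifts the inferred scaling. The data are
# synthetic, not the geomagnetic indices analysed in the study.
import numpy as np

rng = np.random.default_rng(42)
true_slope, n = 2.0, 200
x_true = rng.uniform(0, 10, n)
x = x_true + rng.normal(0, 1.0, n)                   # noisy "index" 1
y = true_slope * x_true + rng.normal(0, 2.0, n)      # noisy "index" 2

# Ordinary least squares (y on x): slope attenuated by noise in x
slope_ols = np.polyfit(x, y, 1)[0]

# Orthogonal regression via the principal axis of the centred data
X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
slope_tls = vt[0, 1] / vt[0, 0]

print(f"OLS slope:        {slope_ols:.2f}")
print(f"Orthogonal slope: {slope_tls:.2f}   (true slope: {true_slope})")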
Abstract:
When considering adaptation measures and global climate mitigation goals, stakeholders need regional-scale climate projections, including the range of plausible warming rates. To assist these stakeholders, it is important to understand whether some locations may see disproportionately high or low warming from additional forcing above targets such as 2 K (ref. 1). There is a need to narrow uncertainty (ref. 2) in this nonlinear warming, which requires understanding how climate changes as forcings increase from medium to high levels. However, quantifying and understanding regional nonlinear processes is challenging. Here we show that regional-scale warming can be strongly superlinear to successive CO2 doublings, using five different climate models. Ensemble-mean warming is superlinear over most land locations. Further, the inter-model spread tends to be amplified at higher forcing levels as nonlinearities grow, especially when considering changes per kelvin of global warming. Regional nonlinearities in surface warming arise from nonlinearities in global-mean radiative balance, the Atlantic meridional overturning circulation, surface snow/ice cover and evapotranspiration. For robust adaptation and mitigation advice, therefore, potentially avoidable climate change (the difference between business-as-usual and mitigation scenarios) and unavoidable climate change (change under strong mitigation scenarios) may need different analysis methods.
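One simple way to express the superlinearity discussed above is to compare, at each grid point, the warming added by a second CO2 doubling with that from the first. The sketch below does this on synthetic arrays standing in for model output; the numbers are illustrative only.

# Illustrative arithmetic only: quantify "superlinear" regional warming as the
# excess of the second-doubling warming over the first-doubling warming at each
# grid point. The arrays are synthetic stand-ins, not results from the study.
import numpy as np

rng = np.random.default_rng(7)
shape = (36, 72)                                   # coarse lat x lon grid
warming_2x = 2.0 + 0.5 * rng.random(shape)         # K, 1xCO2 -> 2xCO2
warming_4x = 4.5 + 1.5 * rng.random(shape)         # K, 1xCO2 -> 4xCO2

second_doubling = warming_4x - warming_2x          # warming added by 2x -> 4x
nonlinearity = second_doubling - warming_2x        # > 0 means superlinear

frac_superlinear = (nonlinearity > 0).mean()
print(f"Fraction of grid points warming superlinearly: {frac_superlinear:.0%}")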
Abstract:
Empirical Mode Decomposition is presented as an alternative to traditional analysis methods for decomposing geomagnetic time series into spectral components. Important comments on the algorithm and its variations are given. Using this technique, planetary wave modes with mean periods of 5, 10, and 16 days can be extracted from magnetic field components recorded at three different stations in Germany. In a second step, the amplitude modulation functions of these wave modes are shown to contain a significant contribution from solar cycle variation, through correlation with smoothed sunspot numbers. Additionally, the data indicate connections with geomagnetic jerk occurrences, supported by a second data set providing reconstructed near-Earth magnetic field values for 150 years. Geomagnetic jerks are usually attributed to internal dynamo processes within the Earth's outer core; the question of which phenomenon is influencing which is briefly discussed.
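A minimal sketch of the workflow described above, assuming the third-party PyEMD package (pip install EMD-signal) is available: a synthetic hourly series containing an amplitude-modulated 10-day oscillation is decomposed by EMD, the intrinsic mode function with mean period closest to 10 days is selected, and its amplitude modulation is extracted with a Hilbert transform. The signal and parameters are invented, not the German observatory data.

# Sketch only, assuming the third-party PyEMD package ("pip install EMD-signal").
# A synthetic hourly series with an amplitude-modulated ~10-day wave plus noise
# is decomposed by EMD; the IMF closest to a 10-day mean period is selected and
# its Hilbert amplitude envelope is taken as the amplitude modulation function.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

dt_days = 1.0 / 24.0                          # hourly sampling
t = np.arange(0, 365, dt_days)                # one year
rng = np.random.default_rng(3)

envelope_true = 1.0 + 0.5 * np.sin(2 * np.pi * t / 180.0)   # slow modulation
signal = envelope_true * np.sin(2 * np.pi * t / 10.0)       # 10-day wave
signal += 0.3 * rng.standard_normal(t.size)                 # noise

imfs = EMD().emd(signal)                      # intrinsic mode functions

def mean_period_days(imf, dt):
    # crude mean period from zero-crossing count
    crossings = np.sum(np.diff(np.signbit(imf).astype(int)) != 0)
    return 2 * imf.size * dt / max(crossings, 1)

periods = [mean_period_days(imf, dt_days) for imf in imfs]
idx = int(np.argmin([abs(p - 10.0) for p in periods]))
envelope_est = np.abs(hilbert(imfs[idx]))     # amplitude modulation function

print(f"Selected IMF {idx} with mean period ~{periods[idx]:.1f} days")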
Abstract:
Uncertainty in ocean analysis methods and deficiencies in the observing system are major obstacles for the reliable reconstruction of the past ocean climate. The variety of existing ocean reanalyses is exploited in a multi-reanalysis ensemble to improve the ocean state estimation and to gauge uncertainty levels. The ensemble-based analysis of signal-to-noise ratio allows the identification of ocean characteristics for which the estimation is robust (such as tropical mixed-layer depth and upper ocean heat content) and of those where large uncertainty exists (the deep ocean, the Southern Ocean, sea ice thickness, salinity), providing guidance for future enhancement of the observing and data assimilation systems.
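A minimal sketch of the ensemble signal-to-noise diagnostic mentioned above: the absolute ensemble-mean anomaly divided by the spread across reanalyses, computed here on a synthetic stand-in for gridded fields from several ocean reanalysis products.

# Minimal sketch of a multi-reanalysis signal-to-noise diagnostic: ensemble
# mean of an anomaly field divided by the spread across reanalyses. High
# ratios flag robust features; low ratios flag large uncertainty. The array
# below is a synthetic stand-in for gridded ocean reanalysis fields.
import numpy as np

rng = np.random.default_rng(5)
n_reanalyses, n_lat, n_lon = 6, 18, 36

common_signal = rng.normal(0, 1, (n_lat, n_lon))       # shared "climate" signal
spread = 0.3 + 1.5 * rng.random((n_lat, n_lon))        # product-dependent noise level
fields = common_signal + spread * rng.normal(0, 1, (n_reanalyses, n_lat, n_lon))

ens_mean = fields.mean(axis=0)
ens_std = fields.std(axis=0, ddof=1)
snr = np.abs(ens_mean) / ens_std

print(f"Median signal-to-noise ratio: {np.median(snr):.2f}")
print(f"Fraction of grid points with SNR > 1: {(snr > 1).mean():.0%}")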
Abstract:
The dynamical processes that lead to open cluster disruption cause a cluster's mass to decrease. To investigate such processes from the observational point of view, it is important to identify open cluster remnants (OCRs), which are intrinsically poorly populated. Due to their nature, distinguishing them from field star fluctuations is still an unresolved issue. In this work, we developed a statistical diagnostic tool to distinguish poorly populated star concentrations from background field fluctuations. We use 2MASS photometry to explore one of the conditions required for a stellar group to be a physical group: to produce distinct sequences in a colour-magnitude diagram (CMD). We use automated tools to (i) derive the limiting radius; (ii) decontaminate the field and assign membership probabilities; (iii) fit isochrones; and (iv) compare object and field CMDs, considering the isochrone solution, in order to verify the similarity. If the object cannot be statistically considered a field fluctuation, we derive its probable age, distance modulus, reddening and uncertainties in a self-consistent way. As a test, we apply the tool to open clusters and comparison fields. Finally, we study the OCR candidates DoDz 6, NGC 272, ESO 435 SC48 and ESO 325 SC15. The tool is optimized to treat these low-statistics objects and to separate out the best OCR candidates for studies on kinematics and chemical composition. The study of the possible OCRs will certainly provide a deeper understanding of OCR properties and constraints for theoretical models, including insights into the evolution of open clusters and dissolution rates.
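As a rough illustration of a decontamination step of the kind listed as (ii) above, the sketch below compares star counts in colour-magnitude cells of an "object" region against an equal-area comparison field and assigns each cell a membership weight; the binning scheme and the synthetic photometry are hypothetical and do not reproduce the tool's actual algorithm.

# Hedged sketch of a cell-by-cell CMD decontamination: counts in colour-
# magnitude cells of the object region are compared with an equal-area offset
# field, and each cell receives the weight 1 - N_field/N_object (floored at
# zero). The binning and synthetic photometry below are hypothetical.
import numpy as np

rng = np.random.default_rng(11)

# Synthetic (colour, magnitude) photometry: object region = field contamination
# plus a sparse "cluster" sequence; comparison field = contamination only.
field_stars = np.column_stack([rng.uniform(0.0, 1.2, 300), rng.uniform(8, 16, 300)])
cluster_stars = np.column_stack([rng.normal(0.5, 0.05, 40), rng.uniform(9, 15, 40)])
object_stars = np.vstack([field_stars[:150], cluster_stars])
comparison_field = field_stars[150:]

colour_bins = np.arange(0.0, 1.3, 0.1)
mag_bins = np.arange(8, 16.5, 0.5)

n_obj, _, _ = np.histogram2d(object_stars[:, 0], object_stars[:, 1],
                             bins=[colour_bins, mag_bins])
n_fld, _, _ = np.histogram2d(comparison_field[:, 0], comparison_field[:, 1],
                             bins=[colour_bins, mag_bins])

membership = np.clip(1.0 - n_fld / np.maximum(n_obj, 1), 0.0, 1.0)
expected_members = np.sum(membership * n_obj)
print(f"Decontaminated (expected member) count: {expected_members:.0f}")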
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may influence the follow-up of the components in time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, varying exhaustively the quantities associated with the method. Our results have shown that, even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within 1 per cent. Even for a non-precessing jet, our optimization method successfully indicated the lack of precession.
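The core of the cross-entropy method for continuous optimization is short enough to sketch: sample candidate parameter vectors from a Gaussian, keep the lowest-cost (elite) fraction, refit the Gaussian to the elites, and iterate. The toy objective below recovers the amplitude, frequency, and phase of a noisy sinusoid standing in for precession-driven offsets; it is not the authors' full precession model.

# Generic sketch of the cross-entropy (CE) method for continuous optimization.
# The toy objective fits a sinusoid (amplitude, angular frequency, phase) to
# noisy synthetic "offsets"; it is not the full precession model of the paper.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observations": offsets following a sinusoid with noise
t_obs = np.linspace(0, 20, 60)
true_params = np.array([1.5, 0.4, 0.8])        # amplitude, angular freq, phase
y_obs = true_params[0] * np.sin(true_params[1] * t_obs + true_params[2])
y_obs += 0.05 * rng.standard_normal(t_obs.size)

def cost(params):
    a, w, phi = params
    model = a * np.sin(w * t_obs + phi)
    return np.sum((model - y_obs) ** 2)

n_samples, elite_frac, n_iter = 200, 0.1, 60
n_elite = int(n_samples * elite_frac)
mu = np.array([1.0, 0.5, 0.0])                 # initial mean of sampling Gaussian
sigma = np.array([1.0, 0.3, 1.5])              # initial standard deviations

for _ in range(n_iter):
    samples = mu + sigma * rng.standard_normal((n_samples, 3))
    costs = np.array([cost(s) for s in samples])
    elites = samples[np.argsort(costs)[:n_elite]]
    mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-8

# Note: a*sin(w*t + phi) has a sign/phase degeneracy, so the CE estimate may
# land on the mirrored but equivalent solution.
print("CE estimate:     ", np.round(mu, 3))
print("True parameters: ", true_params)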
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with a similar accuracy to that obtained from the very traditional Astronomical Image Processing System Package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
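A sketch of the model-image construction and performance function described above: the model is a sum of elliptical Gaussian components and the figure of merit is the sum of squared pixel differences from the observed map. The parameterization used here (peak position, peak intensity, major/minor widths, orientation) is illustrative rather than the paper's exact one, and the optimizer itself (e.g. the cross-entropy step) is omitted.

# Hedged sketch of the model image and performance function: the model is a
# sum of elliptical Gaussian components (illustrative parameterization), and
# the figure of merit is the sum of squared pixel differences against the
# observed map, which an optimizer such as cross-entropy would minimize.
import numpy as np

ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]

def elliptical_gaussian(x0, y0, peak, sig_major, sig_minor, theta):
    """One elliptical Gaussian component on the image grid."""
    ct, st = np.cos(theta), np.sin(theta)
    xr = (x - x0) * ct + (y - y0) * st           # rotate into the major-axis frame
    yr = -(x - x0) * st + (y - y0) * ct
    return peak * np.exp(-0.5 * ((xr / sig_major) ** 2 + (yr / sig_minor) ** 2))

def model_image(components):
    return sum(elliptical_gaussian(*c) for c in components)

def performance(components, observed):
    """Sum of squared differences between model and observed images."""
    return np.sum((model_image(components) - observed) ** 2)

# Synthetic "observed" jet: core plus two knots, with background noise
rng = np.random.default_rng(4)
true_components = [(64, 64, 1.0, 4, 2, 0.3), (80, 60, 0.5, 5, 3, 0.6), (96, 55, 0.3, 6, 3, 0.9)]
observed = model_image(true_components) + 0.01 * rng.standard_normal((ny, nx))

trial_components = [(63, 65, 0.9, 4, 2, 0.2), (81, 59, 0.4, 5, 3, 0.5), (95, 56, 0.3, 6, 3, 1.0)]
print(f"Performance (trial guess): {performance(trial_components, observed):.2f}")
print(f"Performance (true params): {performance(true_components, observed):.2f}")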
Abstract:
Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes in which both techniques are combined simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method of analysis of data cubes (data from single field observations, containing two spatial and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we will call tomograms. The association of the tomograms (images) with eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this information is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that it is displaced from the centre of its stellar bulge.
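A minimal sketch of the PCA tomography idea: the cube is unfolded into a (pixels x wavelengths) matrix, mean-subtracted, and decomposed, so that the eigenvectors are eigenspectra and the projections, refolded onto the spatial grid, are the tomograms. The cube below is synthetic; a real application would start from an integral field unit data cube.

# Minimal sketch of PCA tomography on a synthetic data cube with two spatial
# and one spectral dimension: unfold to (pixels x wavelengths), subtract the
# mean spectrum, decompose by SVD; rows of vt are eigenspectra and the
# refolded projections are the tomograms.
import numpy as np

rng = np.random.default_rng(6)
ny, nx, nlam = 30, 30, 200                     # spatial x spatial x spectral

# Synthetic cube: spatially varying continuum + a centrally concentrated
# emission line + noise.
wave_axis = np.arange(nlam)
continuum = rng.uniform(0.8, 1.2, (ny, nx))[:, :, None] * np.ones(nlam)
line_profile = np.exp(-0.5 * ((wave_axis - 100) / 3.0) ** 2)
yy, xx = np.mgrid[0:ny, 0:nx]
line_map = np.exp(-((yy - 15) ** 2 + (xx - 15) ** 2) / 20.0)
cube = continuum + line_map[:, :, None] * line_profile + 0.05 * rng.standard_normal((ny, nx, nlam))

# Unfold to (n_pixels, n_wavelengths) and remove the mean spectrum
data = cube.reshape(ny * nx, nlam)
data_centered = data - data.mean(axis=0)

# PCA via SVD: rows of vt are eigenspectra, ordered by decreasing variance
u, s, vt = np.linalg.svd(data_centered, full_matrices=False)
eigenspectra = vt                               # shape (n_components, nlam)
tomograms = (data_centered @ vt.T).reshape(ny, nx, nlam)   # one image per component

explained = s**2 / np.sum(s**2)
print("Variance explained by the first 3 components:", np.round(explained[:3], 3))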
Abstract:
Pluripotent human embryonic stem (hES) cells are an important experimental tool for basic and applied research, and a potential source of different tissues for transplantation. However, one important challenge for the clinical use of these cells is the issue of immunocompatibility, which may be addressed by the establishment of hES cell banks serving different populations. Here we describe the derivation and characterization of a line of hES cells from the Brazilian population, named BR-I, in commercial defined medium. In contrast to the other hES cell lines established in defined medium, BR-I maintained a stable normal karyotype, as determined by genomic array analysis, after 6 months in continuous culture (passage 29). To our knowledge, this is the first reported line of hES cells derived in South America. We have determined its genomic ancestry and compared the HLA profiles of BR-I and another 22 hES cell lines established elsewhere with those of the Brazilian population, finding that they would match only 0.011% of those individuals. Our results highlight the challenges involved in hES cell banking for populations with a high degree of ethnic admixture.
Abstract:
Introduction: The characterization of the microbial communities infecting the endodontic system in each clinical condition may help in establishing a correct prognosis and distinct treatment strategies. The purpose of this study was to determine the bacterial diversity in primary endodontic infections by 16S ribosomal RNA (rRNA) sequence analysis. Methods: Samples from root canals of untreated asymptomatic teeth (n = 12) exhibiting periapical lesions were obtained, 16S rRNA bacterial genomic libraries were constructed and sequenced, and bacterial diversity was estimated. Results: A total of 489 clones were analyzed (mean, 40.7 ± 8.0 clones per sample). Seventy phylotypes were identified, of which six were novel phylotypes belonging to the family Ruminococcaceae. The mean number of taxa per canal was 10.0, ranging from 3 to 21 per sample; 65.7% of the cloned sequences represented phylotypes for which no cultivated isolates have been reported. The most prevalent taxa were Atopobium rimae (50.0%), Dialister invisus, Prevotella oris, Pseudoramibacter alactolyticus, and Tannerella forsythia (33.3%). Conclusions: Although several key species predominate in endodontic samples of asymptomatic cases with periapical lesions, the primary endodontic infection is characterized by a wide bacterial diversity, mostly represented by members of the phylum Firmicutes belonging to the class Clostridia, followed by the phylum Bacteroidetes. (J Endod 2011;37:922-926)
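The abstract states that diversity "was estimated" without naming the estimators. The sketch below computes two standard clone-library summaries, observed phylotype richness and Good's coverage, from a hypothetical clone-to-phylotype table, purely to illustrate the kind of calculation involved.

# Hedged sketch of two standard clone-library summaries: observed phylotype
# richness and Good's coverage, C = 1 - (singletons / clones). The clone
# assignments below are hypothetical, not the study's data.
from collections import Counter

# Hypothetical clone-to-phylotype assignments for one root canal sample
clones = (["Atopobium rimae"] * 12 + ["Dialister invisus"] * 7 +
          ["Pseudoramibacter alactolyticus"] * 5 + ["Tannerella forsythia"] * 4 +
          ["novel Ruminococcaceae phylotype"] * 1 + ["Prevotella oris"] * 1)

counts = Counter(clones)
n_clones = sum(counts.values())
richness = len(counts)                          # observed phylotypes
singletons = sum(1 for c in counts.values() if c == 1)
goods_coverage = 1.0 - singletons / n_clones

print(f"Clones analysed:     {n_clones}")
print(f"Observed phylotypes: {richness}")
print(f"Good's coverage:     {goods_coverage:.2f}")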