Abstract:
Functional magnetic resonance imaging (fMRI) studies can provide insight into the neural correlates of hallucinations. Commonly, such studies require self-reports about the timing of the hallucination events. While many studies have found activity in higher-order sensory cortical areas, only a few have demonstrated activity of the primary auditory cortex during auditory verbal hallucinations. One possible reason is that using self-reports as a model of brain activity may not be sensitive enough to capture all neurophysiological signals related to hallucinations. We used spatial independent component analysis (sICA) to extract the activity patterns associated with auditory verbal hallucinations in six schizophrenia patients. sICA decomposes the functional data set into a set of spatial maps without the use of any input function. The resulting activity patterns from auditory and sensorimotor components were further analyzed in a single-subject fashion using a visualization tool that allows for easy inspection of the variability of regional brain responses. We found bilateral auditory cortex activity, including Heschl's gyrus, during hallucinations in one patient, and unilateral auditory cortex activity in two further patients. The associated time courses showed large variability in shape, amplitude, and time of onset relative to the self-reports. However, the average of the time courses during hallucinations showed a clear association with this clinical phenomenon. We suggest that detection of this activity may be facilitated by examining hallucination epochs of sufficient length, in combination with a data-driven approach.
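To make the decomposition concrete, the following is a minimal sketch of spatial ICA on an fMRI-like matrix using scikit-learn's FastICA. It illustrates the general technique only; the matrix sizes, component count, and random data are placeholders, not the authors' pipeline.

```python
# Minimal sketch of spatial ICA on an fMRI-like matrix (not the authors' pipeline).
# X has shape (n_timepoints, n_voxels); passing X.T makes voxels the "samples",
# so the recovered sources are spatial maps and the mixing matrix holds time courses.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 5000))   # placeholder data: 240 scans, 5000 voxels

ica = FastICA(n_components=20, random_state=0, max_iter=500)
maps = ica.fit_transform(X.T).T        # (20, n_voxels): spatial maps
time_courses = ica.mixing_             # (n_timepoints, 20): one course per map

# A component's time course can then be averaged over hallucination epochs
# and compared with the self-report onsets, as described in the abstract.
```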
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size, and improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM will degrade its performance dramatically; conversely, over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. Effective memory resource management therefore calls for a dynamic memory balancer, which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand, and thus achieve the best memory utilization and the optimal overall performance. To estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing, and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, applying the three optimizations lowers the mean overhead of MRC construction from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results. When there is a sufficient amount of physical memory on the host, the balancer locally adjusts the memory allocated to its VMs. Once local memory is insufficient and the memory pressure is predicted to persist for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the overloaded host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
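For context, the naive way to build an LRU-based MRC is the classic Mattson stack-distance algorithm sketched below. This O(N·M) baseline is exactly the computation whose cost the dissertation's three optimizations attack; the trace and page counts here are hypothetical.

```python
# A minimal Mattson stack-distance sketch for building an LRU miss ratio curve.
# This is the naive O(N*M) baseline that the dissertation's optimizations
# (AVL-based LRU organization, hot set sizing, intermittent tracking) speed up.
from collections import Counter

def miss_ratio_curve(trace, max_pages):
    stack, hist = [], Counter()            # LRU stack (front = most recent)
    for page in trace:
        if page in stack:
            depth = stack.index(page)      # stack distance = reuse distance
            hist[depth] += 1
            stack.pop(depth)
        else:
            hist['inf'] += 1               # cold miss
        stack.insert(0, page)
    total = len(trace)
    # misses(m) = accesses whose stack distance >= m (plus cold misses)
    curve = []
    for m in range(1, max_pages + 1):
        hits = sum(hist[d] for d in range(m))
        curve.append((m, (total - hits) / total))
    return curve                           # [(memory size in pages, miss ratio)]

print(miss_ratio_curve([1, 2, 1, 3, 2, 1, 4, 1], max_pages=4))
```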
Abstract:
Multi-input multi-output (MIMO) technology is an emerging solution for high data rate wireless communications. We develop soft-decision-based equalization techniques for frequency-selective MIMO channels in the quest for low-complexity equalizers with BER performance competitive with that of ML sequence detection. We first propose soft decision equalization (SDE), and demonstrate that decision feedback equalization (DFE) based on soft decisions, expressed via the posterior probabilities associated with feedback symbols, is able to outperform hard-decision DFE, with a low computational cost that is polynomial in the number of symbols to be recovered and linear in the signal constellation size. Building upon the probabilistic data association (PDA) multiuser detector, we present two new MIMO equalization solutions to handle the distinctive channel memory. With their low complexity, simple implementations, and impressive near-optimum performance offered by iterative soft-decision processing, the proposed SDE methods are attractive candidates for efficient reception in practical high-capacity MIMO systems. Motivated by the need for low-complexity receiver processing, we further present an alternative low-complexity soft-decision equalization approach for frequency-selective MIMO communication systems. With the help of iterative processing, two detection and estimation schemes based on second-order statistics are combined to yield a two-part receiver structure: local multiuser detection (MUD) using soft-decision probabilistic data association (PDA) detection, and dynamic noise-interference tracking using Kalman filtering. The proposed Kalman-PDA detector performs local MUD within a sub-block of the received data instead of over the entire data set, to reduce the computational load. At the same time, all the interference affecting the local sub-block, including both multiple-access and inter-symbol interference, is properly modeled as the state vector of a linear system and dynamically tracked by Kalman filtering. Two types of Kalman filters are designed, both of which are able to track a finite impulse response (FIR) MIMO channel of any memory length. The overall algorithms enjoy low complexity that is only polynomial in the number of information-bearing bits to be detected, regardless of the data block size. Furthermore, we introduce two optional performance-enhancing techniques: cross-layer automatic repeat request (ARQ) for uncoded systems and a code-aided method for coded systems. We take Kalman-PDA as an example, and show via simulations that both techniques can render error performance that is better than Kalman-PDA alone and competitive with sphere decoding. Finally, we consider the case where channel state information (CSI) is not perfectly known to the receiver, and present an iterative channel estimation algorithm. Simulations show that the performance of SDE with channel estimation approaches that of SDE with perfect CSI.
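The interference-tracking component rests on the standard linear Kalman recursion. The sketch below shows a generic predict/update step; the toy state-space matrices stand in for the paper's channel-derived model, which is not reproduced here.

```python
# Generic Kalman filter recursion of the kind used to track the interference
# state vector; the actual state-space matrices in the paper depend on the
# MIMO channel and are not reproduced here.
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One predict/update cycle: predict with F, then correct with measurement y."""
    x_pred = F @ x                         # state prediction
    P_pred = F @ P @ F.T + Q               # covariance prediction
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)  # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage with a 2-state random-walk model and scalar observations.
F, H = np.eye(2), np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
for y in [0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, np.array([y]), F, H, Q, R)
```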
Abstract:
The Michigan Basin is located in the upper Midwest region of the United States and is centered geographically over the Lower Peninsula of Michigan. It is filled primarily with Paleozoic carbonates and clastics, overlying Precambrian basement rocks and covered by Pleistocene glacial drift. In Michigan, more than 46,000 wells have been drilled in the basin, many producing significant quantities of oil and gas since the 1920s, in addition to providing a wealth of data for subsurface visualization. Well log tomography, formerly known as log-curve amplitude slicing, is a visualization method recently developed at Michigan Technological University to correlate subsurface data by utilizing the high vertical resolution of well log curves. The well log tomography method was first successfully applied to the Middle Devonian Traverse Group within the Michigan Basin using gamma ray log curves. The purpose of this study is to prepare a digital data set for the Middle Devonian Dundee and Rogers City Limestones, apply the well log tomography method to these data, and from this application interpret paleogeographic trends in the natural radioactivity. Both the Dundee and Rogers City intervals directly underlie the Traverse Group and combined are the most prolific reservoir within the Michigan Basin. Differences between this study and the Traverse Group study include increased well control and "slicing" of a more uniform lithology. Gamma ray log curves for the Dundee and Rogers City Limestones were obtained from 295 vertical wells distributed over the Lower Peninsula of Michigan, converted to Log ASCII Standard (LAS) files, and input into the well log tomography program. The "slicing" contour results indicate that during the formation of the Dundee and Rogers City intervals, carbonates and evaporites with low natural radioactive signatures on gamma ray logs were deposited. This contrasts with the higher gamma ray amplitudes from siliciclastic deltas that cyclically entered the basin during Traverse Group deposition. Additionally, a subtle north-south trend of low natural radioactivity in the center of the basin may correlate with previously published Dundee facies tracts. Prominent trends associated with the distribution of limestone and dolomite are not observed, because the regional ranges of gamma ray values for the two carbonates are equivalent in the Michigan Basin and additional log curves would be needed to separate these lithologies.
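The "slicing" step can be illustrated with a short sketch: each well's formation interval is normalized to [0, 1] and the gamma ray curve is sampled at fixed fractional positions, so that a given slice can be gridded and contoured across all wells. The lasio reader, file name, curve mnemonic ("GR"), and depths below are assumptions for illustration, not the study's actual program.

```python
# Hypothetical sketch of the proportional "slicing" idea behind well log
# tomography: normalize each well's formation interval to [0, 1] and sample
# the gamma ray curve at fixed fractional positions, so one slice can be
# mapped across all wells. File name, "GR" mnemonic, and depths are placeholders.
import numpy as np
import lasio  # common LAS-file reader; assumed available

def slice_gamma_ray(las_path, top, base, n_slices=20):
    las = lasio.read(las_path)
    depth, gr = las.index, las["GR"]           # depth index and gamma ray curve
    mask = (depth >= top) & (depth <= base)
    frac = (depth[mask] - top) / (base - top)  # 0 at top, 1 at base
    targets = np.linspace(0.0, 1.0, n_slices)
    return np.interp(targets, frac, gr[mask])  # GR value at each slice

# One row per well; each column is a slice that can then be gridded and
# contoured over the basin to reveal lateral trends in natural radioactivity.
# values = slice_gamma_ray("well_0001.las", top=1520.0, base=1585.0)
```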
DIMENSION REDUCTION FOR POWER SYSTEM MODELING USING PCA METHODS CONSIDERING INCOMPLETE DATA READINGS
Abstract:
Principal Component Analysis (PCA) is a popular method for dimension reduction that can be used in many fields, including data compression, image processing, and exploratory data analysis. However, the traditional PCA method has several drawbacks: it is not efficient for dealing with high-dimensional data, and it cannot compute sufficiently accurate principal components when a relatively large portion of the data is missing. In this report, we propose to use the EM-PCA method for dimension reduction of power system measurements with missing data, and provide a comparative study of the traditional PCA and EM-PCA methods. Our extensive experimental results show that the EM-PCA method is more effective and more accurate for dimension reduction of power system measurement data than the traditional PCA method when dealing with a large portion of missing data.
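The EM-PCA idea can be sketched compactly: alternate between filling the missing entries from the current low-rank reconstruction and refitting the components. The following is a minimal illustration of that loop on synthetic data, not the report's exact algorithm.

```python
# Minimal EM-style PCA with missing data: alternately (E) fill missing entries
# from the current rank-k reconstruction and (M) refit the components.
# A sketch of the general idea, not the report's exact algorithm.
import numpy as np

def em_pca(X, k, n_iter=50):
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])  # initial fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = mu + (U[:, :k] * s[:k]) @ Vt[:k]            # rank-k reconstruction
        X[missing] = recon[missing]                         # E-step: re-impute
    return Vt[:k], X                                        # components, completed data

rng = np.random.default_rng(1)
data = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 10))
data[rng.random(data.shape) < 0.3] = np.nan                 # 30% missing entries
components, completed = em_pca(data, k=3)
```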
Abstract:
Mainstream IDEs such as Eclipse support developers in managing software projects mainly by offering static views of the source code. Such a static perspective neglects any information about runtime behavior. However, object-oriented programs heavily rely on polymorphism and late binding, which makes them difficult to understand based on their static structure alone. Developers thus resort to debuggers or profilers to study the system's dynamics. However, the information provided by these tools is volatile and hence cannot be exploited to ease the navigation of the source space. In this paper we present an approach to augment the static source perspective with dynamic metrics such as precise runtime type information, or memory and object allocation statistics. Dynamic metrics can improve the understanding of both the behavior and the structure of a system. We rely on dynamic data gathering based on aspects to analyze running Java systems. Through concrete use cases, we illustrate how dynamic metrics directly available in the IDE are useful. We also comprehensively report on the efficiency of our approach to gathering dynamic metrics.
Abstract:
Here we present a study of the 11 yr sunspot cycle's imprint on the Northern Hemisphere atmospheric circulation, using three recently developed gridded upper-air data sets that extend back to the early twentieth century. We find a robust response of the tropospheric late-wintertime circulation to the sunspot cycle, independently of the data set. This response is particularly significant over Europe, although results show that it is not directly related to a North Atlantic Oscillation (NAO) modulation; instead, it reveals a significant connection to the more meridional Eurasian pattern (EU). The magnitude of mean seasonal temperature changes over the European land areas locally exceeds 1 K in the lower troposphere over a sunspot cycle. We also analyse surface data to address the question of whether the solar signal over Europe is temporally stable over a longer 250 yr period. The results increase our confidence in the existence of an influence of the 11 yr cycle on the European climate, but the signal is much weaker in the first half of the period compared to the second half. The last solar minimum (2005 to 2010), which was not included in our analysis, shows anomalies that are consistent with our statistical results for earlier solar minima.
Abstract:
Persons with Down syndrome (DS) uniquely have an increased frequency of leukemias but a decreased total frequency of solid tumors. The distribution and frequency of specific types of brain tumors have never been studied in DS. We evaluated the frequency of primary neural cell embryonal tumors and gliomas in a large international data set. The observed number of children with DS having a medulloblastoma, central nervous system primitive neuroectodermal tumor (CNS-PNET) or glial tumor was compared to the expected number. Data were collected from cancer registries or brain tumor registries in 13 countries of Europe, America, Asia and Oceania. The number of DS children with each category of tumor was treated as a Poisson variable with mean equal to 0.000884 times the total number of registrations in that category. Among 8,043 neural cell embryonal tumors (6,882 medulloblastomas and 1,161 CNS-PNETs), only one patient with medulloblastoma had DS, while 7.11 children in total and 6.08 with medulloblastoma were expected to have DS (p = 0.016 and p = 0.0066, respectively). Among 13,797 children with glioma, 10 had DS, whereas 12.2 were expected. Children with DS appear to be specifically protected against primary neural cell embryonal tumors of the CNS, whereas gliomas occur at the same frequency as in the general population. A similar protection against neuroblastoma, the principal extracranial neural cell embryonal tumor, has been observed in children with DS. Additional genetic material on the supernumerary chromosome 21 may protect against embryonal neural cell tumor development.
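The calculation described above can be reproduced in a few lines: the expected DS count in each category is the rate (0.000884) times the registrations, and an exact one-sided Poisson p-value is the probability of a count at least as extreme as the one observed. This is a sketch of the method only; the published p-values may have been computed with a slightly different test variant.

```python
# Sketch of the Poisson calculation described above: expected DS cases equal
# the DS rate (0.000884) times the registrations in a category, and a
# one-sided exact p-value is the probability of seeing a count as low as the
# one observed. The paper's exact test variant may differ slightly.
from scipy.stats import poisson

rate = 0.000884
for label, registrations, observed in [("all embryonal", 8043, 1),
                                       ("medulloblastoma", 6882, 1),
                                       ("glioma", 13797, 10)]:
    expected = rate * registrations
    p_low = poisson.cdf(observed, expected)   # P(X <= observed)
    print(f"{label}: expected {expected:.2f}, observed {observed}, p = {p_low:.4f}")
```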
Abstract:
Historical, i.e. pre-1957, upper-air data are a valuable source of information on the state of the atmosphere, in some parts of the world dating back to the early 20th century. However, to date, reanalyses have only partially made use of these data, and only of observations made after 1948. Even for the period between 1948 (the starting year of the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis) and the International Geophysical Year in 1957 (the starting year of the ERA-40 reanalysis), when the global upper-air coverage reached more or less its current status, many observations have not yet been digitised. The Comprehensive Historical Upper-Air Network (CHUAN) has already compiled a large collection of pre-1957 upper-air data. In the framework of the European project ERA-CLIM (European Reanalysis of Global Climate Observations), significant amounts of additional upper-air data have been catalogued (> 1.3 million station days), imaged (> 200 000 images) and digitised (> 700 000 station days) in order to prepare a new input data set for upcoming reanalyses. The records cover large parts of the globe, focussing on regions that have so far been less well covered, such as the tropics, the polar regions and the oceans, and on very early upper-air data from Europe and the US. The total number of digitised/inventoried records is 61/101 for moving upper-air data, i.e. data from ships, etc., and 735/1783 for fixed upper-air stations. Here, we give a detailed description of the resulting data set, including the metadata and the quality checking procedures applied. The data will be included in the next version of CHUAN. The data are available at doi:10.1594/PANGAEA.821222.
Abstract:
Brain tumor is one of the most aggressive types of cancer in humans, with an estimated median survival time of 12 months and only 4% of patients surviving more than 5 years after disease diagnosis. Until recently, brain tumor prognosis has been based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. In order to evaluate this hypothesis, the general goal of this research is to build models for survival prediction of glioma patients using DNA molecular profiles (U133 Affymetrix gene expression microarrays) along with clinical information. First, a predictive Random Forest model is built for binary outcomes (i.e. short- vs. long-term survival) and a small subset of genes whose expression values can be used to predict survival time is selected. Next, a new statistical methodology is developed for predicting time-to-death outcomes using Bayesian ensemble trees. Due to the large heterogeneity observed within the prognostic classes obtained by the Random Forest model, prediction can be improved by relating time-to-death directly to the gene expression profile. We propose a Bayesian ensemble model for survival prediction that is appropriate for high-dimensional data such as gene expression data. Our approach is based on the ensemble "sum-of-trees" model, which is flexible enough to incorporate additive and interaction effects between genes. We specify a fully Bayesian hierarchical approach and illustrate our methodology for the CPH, Weibull, and AFT survival models. We overcome the lack of conjugacy by using a latent variable formulation to model the covariate effects, which decreases the computation time for model fitting. In addition, our proposed models provide a model-free way to select important predictive prognostic markers based on controlling false discovery rates. We compare the performance of our methods with baseline reference survival methods and apply our methodology to an unpublished data set of brain tumor survival times and gene expression data, selecting genes potentially related to the development of the disease under study. A closing discussion compares the results obtained by the Random Forest and Bayesian ensemble methods from biological/clinical perspectives and highlights the statistical advantages and disadvantages of the new methodology in the context of DNA microarray data analysis.
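The first modeling step can be sketched with scikit-learn: a Random Forest on a binary short- vs. long-term survival label, with feature importances used to shortlist candidate genes. The data below are synthetic stand-ins, and the Bayesian "sum-of-trees" survival models developed in the dissertation are not reproduced here.

```python
# Sketch of the first modeling step: a Random Forest classifier for binary
# short- vs. long-term survival, with feature importances used to shortlist
# genes. Synthetic stand-in data, not the dissertation's code or data set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
expression = rng.standard_normal((120, 2000))       # 120 patients x 2000 probes
survival_months = 10 + 5 * expression[:, 0] + rng.normal(0, 3, 120)
label = (survival_months > np.median(survival_months)).astype(int)  # long-term = 1

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(expression, label)
top_genes = np.argsort(rf.feature_importances_)[::-1][:20]  # candidate markers
print(f"OOB accuracy: {rf.oob_score_:.2f}, top probes: {top_genes[:5]}")
```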
Abstract:
The radar reflectivity of an ice-sheet bed is a primary measurement for discriminating between thawed and frozen beds. Uncertainty in englacial radar attenuation and its spatial variation introduces corresponding uncertainty in estimates of basal reflectivity. Radar attenuation is proportional to ice conductivity, which depends on the concentrations of acid and sea-salt chloride and the temperature of the ice. We synthesize published conductivity measurements to specify an ice-conductivity model and find that some of the dielectric properties of ice at radar frequencies are not yet well constrained. Using depth profiles of ice-core chemistry and borehole temperature and an average of the experimental values for the dielectric properties, we calculate an attenuation rate profile for Siple Dome, West Antarctica. The depth-averaged modeled attenuation rate at Siple Dome (20.0 ± 5.7 dB km⁻¹) is somewhat lower than the value derived from radar profiles (25.3 ± 1.1 dB km⁻¹). Pending more experimental data on the dielectric properties of ice, we can match the modeled and radar-derived attenuation rates by an adjustment to the value for the pure ice conductivity that is within the range of reported values. Alternatively, using the pure ice dielectric properties derived from the most extensive single data set, the modeled depth-averaged attenuation rate is 24.0 ± 2.2 dB km⁻¹. This work shows how to calculate englacial radar attenuation using ice chemistry and temperature data and establishes a basis for mapping spatial variations in radar attenuation across an ice sheet.
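The structure of such a calculation can be sketched as follows: in the low-loss limit the one-way attenuation rate is proportional to conductivity (roughly 0.91 dB km⁻¹ per μS m⁻¹ for ice), and conductivity is modeled as the sum of Arrhenius-type pure-ice, acid, and chloride terms. The coefficients below are representative literature values rather than this paper's synthesis; as the abstract itself notes, some of them are not yet well constrained.

```python
# Sketch of the attenuation calculation described above. In the low-loss limit
# the one-way attenuation rate is proportional to conductivity (~0.91 dB/km per
# uS/m for ice with eps' ~ 3.2), and conductivity is the sum of Arrhenius-type
# pure-ice, acid, and sea-salt-chloride terms. Coefficients are representative
# literature values, not this paper's calibration.
import numpy as np

K_B = 8.617e-5          # Boltzmann constant, eV/K
T_REF = 251.0           # reference temperature, K

def conductivity_uS_per_m(T, acid_uM, chloride_uM,
                          sigma_pure=9.2, mu_acid=3.2, mu_cl=0.43,
                          E_pure=0.51, E_acid=0.20, E_cl=0.19):
    """Three-term Arrhenius model (molar conductivities in S/m per mol/L)."""
    arr = lambda E: np.exp((E / K_B) * (1.0 / T_REF - 1.0 / T))
    return (sigma_pure * arr(E_pure)
            + mu_acid * acid_uM * arr(E_acid)
            + mu_cl * chloride_uM * arr(E_cl))

def attenuation_dB_per_km(sigma_uS_per_m, eps_prime=3.2):
    # one-way rate; 0.912 dB/km per uS/m at eps' = 3.2, scaled by 1/sqrt(eps')
    return 0.912 * np.sqrt(3.2 / eps_prime) * sigma_uS_per_m

sigma = conductivity_uS_per_m(T=253.0, acid_uM=1.3, chloride_uM=3.0)
print(f"{attenuation_dB_per_km(sigma):.1f} dB/km")
```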
Abstract:
Identifying drivers of species diversity is a major challenge in understanding and predicting the dynamics of species-rich semi-natural grasslands. In temperate grasslands in particular, changes in land use and their consequences, i.e. increasing fragmentation, the ongoing loss of habitat and the declining importance of regional processes such as seed dispersal by livestock, are considered key drivers of the diversity loss witnessed within the last decades. It is a largely unresolved question to what degree current temperate grassland communities already reflect a decline of regional processes such as longer-distance seed dispersal. Answering this question is challenging, since it requires both a mechanistic approach to community dynamics and a sufficient empirical basis that allows identifying general patterns. Here, we present results of a local individual- and trait-based community model that was initialized with plant functional types (PFTs) derived from an extensive empirical data set of species-rich grasslands within the 'Biodiversity Exploratories' in Germany. Driving model processes included above- and belowground competition, dynamic resource allocation to shoots and roots, clonal growth, grazing, and local seed dispersal. To test for the impact of regional processes, we also simulated seed input from a regional species pool. Model output, with and without regional seed input, was compared with empirical community response patterns along a grazing gradient. Simulated response patterns of changes in PFT richness, Shannon diversity, and biomass production matched observed grazing response patterns surprisingly well if only local processes were considered. Even low levels of additional regional seed input led to stronger deviations from the empirical community patterns. While these findings cannot rule out that regional processes other than those considered in the modeling study potentially play a role in shaping the local grassland communities, our comparison indicates that European grasslands are largely isolated, i.e. local mechanisms explain observed community patterns to a large extent.
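The local-versus-regional contrast tested here can be illustrated with a deliberately reduced, lottery-type toy model, in which dead individuals are replaced either from local seed rain alone or with a small probability of immigration from a regional pool. All parameters are arbitrary, and the sketch is not the individual- and trait-based model used in the study.

```python
# A deliberately reduced, lottery-type toy of the contrast tested above: local
# recruitment only vs. additional seed input from a regional species pool.
# An illustration of the mechanism, not the study's trait-based community model.
import numpy as np

def simulate(regional_input=0.0, n_sites=500, n_local=10, n_regional=50,
             steps=200, seed=3):
    rng = np.random.default_rng(seed)
    community = rng.integers(0, n_local, n_sites)       # species id per site
    for _ in range(steps):
        dying = rng.random(n_sites) < 0.1               # 10% turnover per step
        n_dead = dying.sum()
        from_region = rng.random(n_dead) < regional_input
        recruits = rng.choice(community, n_dead)        # local seed rain
        recruits[from_region] = rng.integers(0, n_regional, from_region.sum())
        community[dying] = recruits
    return len(np.unique(community))                    # realized richness

print("local only:", simulate(0.0), "| 5% regional input:", simulate(0.05))
```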
Abstract:
Results from the Zurich study have shown lasting associations between sport practice and mental health. The effects are pronounced in people with pre-existing mental health problems. This analysis aims to replicate these results with the large Swiss Household Panel (SHP) data set and to provide more differentiated results. The analysis covered the interviews from 1999 to 2003 and included 3891 stayers, i.e., participants who were interviewed in all years. The outcome variables are depression / blues / anxiety, weakness / weariness, sleeping problems, and energy / optimism. Confounding variables include sex, age, education level, and citizenship. The analyses were carried out with mixed models (depression, optimism) and GEE models (weakness, sleep). About 60% of the SHP participants practise an individual or a team sport weekly or daily. A similar proportion engages in frequent physical activity (at least half an hour at a time) that makes them slightly breathless. There are slight age-specific differences but also noteworthy regional differences. Practice of sport is clearly interrelated with self-reported depressive symptoms, optimism and weakness. This applies even when relevant confounders – sex, educational level and citizenship – are introduced into the model. However, no relevant interaction effects with time could be shown. Moreover, direct interrelations commonly led to better fits than models with lagged variables, indicating that delayed effects of sport practice on the self-reported psychological complaints are less important. Model variants were estimated for specific subgroups, for example, participants with a high vs. low initial activity level. Lack of sport practice is an interesting marker for serious psychological symptoms and mental disorders. The background of this association may differ between subgroups, and should stimulate further investigations in this area.
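The two model families named above can be sketched with statsmodels: a linear mixed model with a random intercept per participant for a quasi-continuous outcome, and a binomial GEE with an exchangeable working correlation for a binary outcome. The panel below is synthetic and the variable names are hypothetical, not the SHP codebook's.

```python
# Sketch of the two model families named above, fitted on a synthetic panel:
# a linear mixed model (random intercept per person) for a depression score,
# and a binomial GEE with exchangeable working correlation for a binary
# sleeping-problems indicator. Variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, years = 400, 5
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), years),
    "year": np.tile(np.arange(years), n),
    "sport": rng.integers(0, 2, n * years),            # weekly sport yes/no
})
df["depression"] = 5 - 1.0 * df["sport"] + rng.normal(0, 2, len(df))
df["sleep_problem"] = (rng.random(len(df)) < 0.3 - 0.1 * df["sport"]).astype(int)

mixed = smf.mixedlm("depression ~ sport + year", df, groups=df["pid"]).fit()
gee = smf.gee("sleep_problem ~ sport + year", groups="pid", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(mixed.params["sport"], gee.params["sport"])
```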
Abstract:
We present a new thermodynamic activity-composition model for di-trioctahedral chlorite in the system FeO–MgO–Al2O3–SiO2–H2O that is based on the Holland–Powell internally consistent thermodynamic data set. The model is formulated in terms of four linearly independent end-members: amesite, clinochlore, daphnite and sudoite. These account for the most important crystal-chemical substitutions in chlorite: the Fe–Mg, Tschermak and di-trioctahedral substitutions. The ideal part of the end-member activities is modeled with a mixing-on-site formalism, and non-ideality is described by a macroscopic symmetric (regular) formalism. The symmetric interaction parameters were calibrated using a set of 271 published chlorite analyses for which robust independent temperature estimates are available. In addition, adjustment of the standard state thermodynamic properties of sudoite was required to accurately reproduce experimental brackets involving sudoite. The new model was tested by calculating representative P–T sections for metasediments at low temperatures (<400 °C), in particular sudoite- and chlorite-bearing metapelites from Crete. Comparison between the calculated mineral assemblages and field data shows that the new model is able to predict the coexistence of chlorite and sudoite at low metamorphic temperatures. The predicted lower limit of the chloritoid stability field is also in better agreement with petrological observations. For practical applications to metamorphic and hydrothermal environments, two new semi-empirical chlorite geothermometers, named Chl(1) and Chl(2), were calibrated based on the chlorite + quartz + water equilibrium (2 clinochlore + 3 sudoite = 4 amesite + 4 H2O + 7 quartz). The Chl(1) thermometer requires knowledge of the (Fe3+/ΣFe) ratio in chlorite and predicts correct temperatures for a range of redox conditions. The Chl(2) geothermometer, which assumes that all iron in chlorite is ferrous, has been applied to partially recrystallized detrital chlorite from the Zone houillère in the French Western Alps.
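The mixing-on-site part of such a model can be sketched generically: the ideal activity of an end-member is the product, over crystallographic sites, of the site fractions of its cations raised to the number of atoms the end-member places on each site. The site assignment and occupancies below are hypothetical illustrations (normalization constants are omitted), not the calibrated chlorite model.

```python
# Sketch of the ideal mixing-on-site formalism: the ideal activity of an
# end-member is the product over sites of the site fractions of its cations,
# each raised to the number of atoms on that site. The site assignment and
# occupancies are hypothetical, and normalization constants are omitted;
# this is not the paper's chlorite site-allocation scheme or calibration.

def ideal_activity(end_member, site_fractions):
    """end_member: {site: [(cation, n_atoms), ...]};
    site_fractions: {site: {cation: mole fraction on that site}}."""
    a = 1.0
    for site, occupants in end_member.items():
        for cation, n in occupants:
            a *= site_fractions[site][cation] ** n
    return a

# Hypothetical occupancies for a clinochlore-like end-member:
# Mg on five octahedral positions, Al on one, and Si3Al on the tetrahedra.
clinochlore = {"M23": [("Mg", 4)], "M1": [("Mg", 1)], "M4": [("Al", 1)],
               "T2": [("Si", 1), ("Al", 1)]}
fractions = {"M23": {"Mg": 0.7, "Fe": 0.3}, "M1": {"Mg": 0.6, "Fe": 0.4},
             "M4": {"Al": 0.9, "Mg": 0.1}, "T2": {"Si": 0.5, "Al": 0.5}}
print(f"a(clinochlore, ideal) ~ {ideal_activity(clinochlore, fractions):.3f}")
```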
Abstract:
Reconstructing past modes of ocean circulation is an essential task in paleoclimatology and paleoceanography. To this end, we combine two sedimentary proxies, Nd isotopes (εNd) and the 231Pa/230Th ratio, neither of which is directly involved in the global carbon cycle; they allow, respectively, the reconstruction of water mass provenance and of the past strength of the overturning circulation. In this study, combined 231Pa/230Th and εNd down-core profiles from six Atlantic Ocean sediment cores are presented. The data set is complemented by the two available combined data sets from the literature. From this we derive a comprehensive picture of the spatial and temporal patterns and the dynamic changes of the Atlantic Meridional Overturning Circulation (AMOC) over the past ∼25 ka. Our results provide evidence for a consistent pattern of glacial/stadial advances of Southern Sourced Water along with a northward circulation mode for all cores in the deeper (>3000 m) Atlantic. Results from shallower core sites support an active overturning cell of shoaled Northern Sourced Water during the LGM and the subsequent deglaciation. Furthermore, we report evidence for a short-lived period of intensified AMOC in the early Holocene.
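For reference, the εNd notation used above is conventionally defined relative to CHUR (the Chondritic Uniform Reservoir):

```latex
% Conventional definition of the Nd isotope notation, relative to CHUR,
% with the commonly used present-day reference ratio 0.512638.
\varepsilon_{\mathrm{Nd}} =
  \left(
    \frac{\left({}^{143}\mathrm{Nd}/{}^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}
         {\left({}^{143}\mathrm{Nd}/{}^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}}
    - 1
  \right) \times 10^{4},
\qquad
\left({}^{143}\mathrm{Nd}/{}^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}} \approx 0.512638
```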