4 results for RANDOMNESS
in CentAUR: Central Archive University of Reading - UK
Abstract:
BACKGROUND: Resting-state functional magnetic resonance imaging (fMRI) enables investigation of the intrinsic functional organization of the brain. Fractal parameters such as the Hurst exponent, H, describe the complexity of endogenous low-frequency fMRI time series on a continuum from random (H = .5) to ordered (H = 1). Shifts in fractal scaling of physiological time series have been associated with neurological and cardiac conditions. METHODS: Resting-state fMRI time series were recorded in 30 male adults with an autism spectrum condition (ASC) and 33 age- and IQ-matched male volunteers. The Hurst exponent was estimated in the wavelet domain and between-group differences were investigated at global and voxel level and in regions known to be involved in autism. RESULTS: Complex fractal scaling of fMRI time series was found in both groups but globally there was a significant shift to randomness in the ASC (mean H = .758, SD = .045) compared with neurotypical volunteers (mean H = .788, SD = .047). Between-group differences in H, which was always reduced in the ASC group, were seen in most regions previously reported to be involved in autism, including cortical midline structures, medial temporal structures, lateral temporal and parietal structures, insula, amygdala, basal ganglia, thalamus, and inferior frontal gyrus. Severity of autistic symptoms was negatively correlated with H in retrosplenial and right anterior insular cortex. CONCLUSIONS: Autism is associated with a small but significant shift to randomness of endogenous brain oscillations. Complexity measures may provide physiological indicators for autism as they have done for other medical conditions.
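The study above estimates the Hurst exponent in the wavelet domain. As a hypothetical illustration of what such an estimate measures, the sketch below uses the simpler rescaled-range (R/S) method instead of the paper's wavelet estimator, applied to synthetic white noise; all names and parameters are this example's own, not the paper's:

```python
import math
import random

def hurst_rs(series):
    """Estimate the Hurst exponent via the rescaled-range (R/S) method:
    fit the slope of log(R/S) against log(n) over dyadic window sizes n."""
    n_total = len(series)
    sizes = []
    n = 8
    while n <= n_total // 2:
        sizes.append(n)
        n *= 2
    log_n, log_rs = [], []
    for n in sizes:
        rs_vals = []
        for start in range(0, n_total - n + 1, n):  # non-overlapping windows
            window = series[start:start + n]
            mean = sum(window) / n
            # cumulative deviations from the window mean
            dev, cum = 0.0, []
            for x in window:
                dev += x - mean
                cum.append(dev)
            r = max(cum) - min(cum)                                  # range
            s = math.sqrt(sum((x - mean) ** 2 for x in window) / n)  # std dev
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(math.log(n))
        log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
    # least-squares slope of log(R/S) on log(n) is the Hurst estimate
    k = len(log_n)
    mx, my = sum(log_n) / k, sum(log_rs) / k
    return (sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs))
            / sum((x - mx) ** 2 for x in log_n))

random.seed(0)
white_noise = [random.gauss(0, 1) for _ in range(4096)]
h = hurst_rs(white_noise)
print(round(h, 2))  # near 0.5 for uncorrelated noise
```

For uncorrelated noise the fitted slope sits near H = 0.5; persistent (ordered) series push it toward 1, matching the random-to-ordered continuum described in the abstract. The R/S estimator is known to be biased upward at small window sizes, so a value slightly above 0.5 is expected here.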
Abstract:
A parallel hardware random number generator for use with a VLSI genetic algorithm processing device is proposed. The design uses a systolic array of mixed congruential random number generators. The generators are constantly reseeded with the outputs of the preceding generators to avoid significant biasing of the randomness of the array, which would result in longer times for the algorithm to converge to a solution.

1 Introduction

In recent years there has been a growing interest in developing hardware genetic algorithm devices [1, 2, 3]. A genetic algorithm (GA) is a stochastic search and optimization technique that attempts to capture the power of natural selection by evolving a population of candidate solutions through a process of selection and reproduction [4]. In keeping with the evolutionary analogy, the solutions are called chromosomes, with each chromosome containing a number of genes. Chromosomes are commonly simple binary strings, the bits being the genes.
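A mixed congruential generator is a linear congruential generator with a nonzero additive constant, x_{n+1} = (a·x_n + c) mod m. The following is only a software sketch of the reseed-from-predecessor idea; the ring topology, glibc-style constants, and cell count are assumptions of this example, not the paper's hardware design:

```python
class MixedCongruential:
    """Mixed (multiplicative + additive) LCG: x_{n+1} = (a*x_n + c) mod m.
    Constants are the familiar glibc-style values, chosen only for illustration."""

    def __init__(self, seed, a=1103515245, c=12345, m=2**31):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

def systolic_step(cells):
    """One clock tick of the array: every cell emits a number in parallel,
    then each cell is reseeded with the output of the preceding cell
    (assumed ring topology, so cell 0 takes the last cell's output)."""
    outputs = [cell.next() for cell in cells]
    for i, cell in enumerate(cells):
        cell.state = outputs[i - 1]  # reseed from predecessor
    return outputs

cells = [MixedCongruential(seed) for seed in (1, 2, 3, 4)]
stream = [out for _ in range(3) for out in systolic_step(cells)]
print(len(stream))  # 12 numbers: 4 cells x 3 clock ticks
```

Reseeding each cell from its neighbour keeps the cells' sequences from settling into correlated short cycles, which is the biasing the abstract says would slow the GA's convergence.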
Abstract:
We review and structure some of the mathematical and statistical models that have been developed over the past half century to grapple with theoretical and experimental questions about the stochastic development of aging over the life course. We suggest that the mathematical models are in large part addressing the problem of partitioning the randomness in aging: How does aging vary between individuals, and within an individual over the life course? How much of the variation is inherently related to some qualities of the individual, and how much is entirely random? How much of the randomness is cumulative, and how much is merely short-term flutter? We propose that recent lines of statistical inquiry in survival analysis could usefully grapple with these questions, all the more so if they were more explicitly linked to the relevant mathematical and biological models of aging. To this end, we describe points of contact among the various lines of mathematical and statistical research. We suggest some directions for future work, including the exploration of information-theoretic measures for evaluating components of stochastic models as the basis for analyzing experiments and anchoring theoretical discussions of aging.
Abstract:
Reliability analysis of probabilistic forecasts, in particular through the rank histogram or Talagrand diagram, is revisited. Two shortcomings are pointed out: firstly, a uniform rank histogram is but a necessary condition for reliability; secondly, if the forecast is assumed to be reliable, an indication is needed of how far a histogram is expected to deviate from uniformity merely due to randomness. Concerning the first shortcoming, it is suggested that forecasts be grouped or stratified along suitable criteria, and that reliability be analyzed individually for each forecast stratum. A reliable forecast should have uniform histograms for all individual forecast strata, not only for all forecasts as a whole. As to the second shortcoming, instead of the observed frequencies, the probability of the observed frequency is plotted, providing an indication of the likelihood of the result under the hypothesis that the forecast is reliable. Furthermore, a goodness-of-fit statistic is discussed which is essentially the reliability term of the Ignorance score. The discussed tools are applied to medium-range forecasts for 2-m temperature anomalies at several locations and lead times. The forecasts are stratified along the expected ranked probability score. Those forecasts which feature a high expected score turn out to be particularly unreliable.
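A rank histogram counts, over many forecast cases, the rank of the verifying observation among the sorted ensemble members; under a reliable forecast every rank is equally likely. A minimal synthetic sketch, where the ensemble size, case count, and Gaussian toy distributions are illustrative assumptions rather than anything from the paper:

```python
import random
from collections import Counter

def rank_of_obs(ensemble, obs):
    """Rank of the observation among the ensemble members: 0 if it falls
    below every member, len(ensemble) if it falls above all of them."""
    return sum(1 for member in ensemble if member < obs)

random.seed(1)
n_members, n_cases = 9, 5000
counts = Counter()
for _ in range(n_cases):
    # a reliable forecast: the observation is drawn from the same
    # distribution that the ensemble members are drawn from
    ensemble = [random.gauss(0, 1) for _ in range(n_members)]
    obs = random.gauss(0, 1)
    counts[rank_of_obs(ensemble, obs)] += 1

# under reliability each of the n_members + 1 = 10 ranks is equally
# likely, so every bin should hold about n_cases / 10 = 500 counts
freqs = [counts[r] for r in range(n_members + 1)]
print(freqs)
```

Even for this perfectly reliable toy forecast the bins scatter around 500 with a binomial standard deviation of roughly sqrt(n_cases · 0.1 · 0.9) ≈ 21; that sampling scatter is the deviation "merely due to randomness" that the abstract argues a reliability analysis must account for.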