948 results for Model Output Statistics
Abstract:
Topological measures of large-scale complex networks are applied to a specific artificial regulatory network model created through a whole-genome duplication and divergence mechanism. This class of networks shares topological features with natural transcriptional regulatory networks. Specifically, these networks display scale-free and small-world topology and possess subgraph distributions similar to those of natural networks. Thus, the topologies inherent in natural networks may be in part due to their method of creation rather than being exclusively shaped by subsequent evolution under selection. The evolvability of the dynamics of these networks is also examined by evolving networks in simulation to obtain three simple types of output dynamics. The networks obtained from this process show a wide variety of topologies and numbers of genes, indicating that it is relatively easy to evolve these classes of dynamics in this model. (c) 2006 Elsevier Ireland Ltd. All rights reserved.
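To make the growth mechanism concrete, below is a minimal Python sketch of a duplication-and-divergence process of the kind the abstract describes; the seed network, the retention probability p_keep and the function names are illustrative assumptions, not values from the paper.

```python
import random

def duplication_divergence(n_final, p_keep=0.4, seed=0):
    """Grow a directed regulatory network by whole-gene duplication
    followed by divergence (random loss of the duplicated links)."""
    rng = random.Random(seed)
    targets = {0: {1}, 1: set()}  # seed network: gene 0 regulates gene 1
    while len(targets) < n_final:
        parent = rng.choice(list(targets))
        child = len(targets)
        # duplicate: the child inherits each outgoing link of the parent
        # with probability p_keep (divergence prunes the rest) ...
        targets[child] = {t for t in targets[parent] if rng.random() < p_keep}
        # ... and each incoming link of the parent on the same terms
        for outs in list(targets.values()):
            if parent in outs and rng.random() < p_keep:
                outs.add(child)
    return targets

net = duplication_divergence(200)
out_degrees = sorted((len(v) for v in net.values()), reverse=True)
print("largest out-degrees:", out_degrees[:5])  # heavy tail hints at scale-free structure
```

Repeated duplication of this kind is the sort of growth process that tends to produce the heavy-tailed degree distributions and recurring subgraphs the abstract refers to.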
Abstract:
The low-energy properties of the one-dimensional anyon gas with a delta-function interaction are discussed in the context of its Bethe ansatz solution. It is found that the anyonic statistical parameter and the dynamical coupling constant induce Haldane exclusion statistics interpolating between bosons and fermions. Moreover, the anyonic parameter may trigger statistics beyond Fermi statistics for which the exclusion parameter alpha is greater than one. The Tonks-Girardeau and the weak coupling limits are discussed in detail. The results support the universal role of alpha in the dispersion relations.
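For context, Haldane exclusion statistics is commonly written in Wu's form, in which the mean occupation of a level interpolates between Bose (alpha = 0) and Fermi (alpha = 1) statistics; this is the generic textbook form, not the paper's Bethe ansatz derivation:

```latex
n(\epsilon) = \frac{1}{w(\epsilon) + \alpha}, \qquad
w(\epsilon)^{\alpha}\,\bigl[1 + w(\epsilon)\bigr]^{1-\alpha} = e^{(\epsilon - \mu)/k_B T}
```

The abstract's point is that the anyonic parameter can push alpha beyond one, i.e. past the Fermi endpoint of this interpolation.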
Abstract:
The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One feature of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response; subsequent terms are then introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
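The imputation-within-Gibbs idea can be illustrated with a much smaller model than the paper's. The sketch below uses a binary employment outcome, one binary covariate with missing entries and conjugate Beta priors; all variable names and priors are illustrative assumptions, not the survey model.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: binary outcome y (employed or not) and binary covariate x,
# with some x missing (None)
y = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
x = np.array([1, 0, 1, None, 0, None, 0, 1, None, 0], dtype=object)

p = np.array([0.5, 0.5])   # P(y = 1 | x = 0), P(y = 1 | x = 1)
q = 0.5                    # P(x = 1), with Beta(1, 1) priors throughout
missing = np.array([xi is None for xi in x])
x_cur = np.array([xi if xi is not None else rng.integers(2) for xi in x])
trace = []

for it in range(2000):
    # step 1: impute each missing covariate from its full conditional
    for i in np.where(missing)[0]:
        w = np.array([(1 - q) * p[0]**y[i] * (1 - p[0])**(1 - y[i]),
                      q * p[1]**y[i] * (1 - p[1])**(1 - y[i])])
        x_cur[i] = rng.random() < w[1] / w.sum()
    # step 2: conjugate Beta updates given the completed data
    for k in (0, 1):
        yk = y[x_cur == k]
        p[k] = rng.beta(1 + yk.sum(), 1 + len(yk) - yk.sum())
    q = rng.beta(1 + x_cur.sum(), 1 + len(x_cur) - x_cur.sum())
    trace.append(p.copy())

print("posterior mean of P(y = 1 | x):", np.mean(trace[500:], axis=0))
```

The paper's multinomial response and hierarchical wave and subject terms replace the Beta-Bernoulli pieces, but the alternation between imputation and parameter updates is the same.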
Abstract:
We present AUSLEM (AUStralian Land Erodibility Model), a land erodibility modelling system that applies a rule set of surficial and climatic thresholds through a Geographic Information System (GIS) modelling framework to predict landscape susceptibility to wind erosion. AUSLEM is distinctive in that it quantitatively assesses landscape susceptibility to wind erosion at a 5 x 5 km spatial resolution on a monthly time step across Australia. The system was implemented for representative wet (1984), dry (1994) and average rainfall (1997) years with corresponding low, high and moderate dust storm day frequencies. Results demonstrate that AUSLEM can identify landscape erodibility and provide an interpretation of the physical nature and distribution of erodible landscapes in Australia. Further, the results offer an assessment of the dynamic tendencies of erodibility in space and time in response to the El Nino Southern Oscillation (ENSO) and seasonal synoptic-scale climate variability. A comparative analysis of AUSLEM output with independent national and international wind erosion, atmospheric aerosol and dust event records indicates a high level of model competency. (c) 2006 Elsevier B.V. All rights reserved.
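A threshold rule set of this kind reduces, per grid cell and month, to a conjunction of raster masks. Here is a minimal sketch; the layer names and threshold values are illustrative assumptions, not the published AUSLEM rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy monthly grids standing in for AUSLEM-style raster inputs
shape = (100, 100)
ground_cover = rng.uniform(0, 100, shape)    # vegetation cover, percent
soil_moisture = rng.uniform(0, 0.4, shape)   # volumetric fraction
erodible_soil = rng.random(shape) < 0.3      # texture-based erodibility mask

# a cell is flagged as susceptible only if every threshold is met
susceptible = (ground_cover < 40) & (soil_moisture < 0.1) & erodible_soil
print(f"susceptible area: {susceptible.mean():.1%} of cells")
```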
Abstract:
The recurrence interval statistics for regional seismicity follow a universal distribution function, independent of the tectonic setting or average rate of activity (Corral, 2004). The universal function is a modified gamma distribution with power-law scaling for recurrence intervals shorter than the average rate of activity and exponential decay for longer intervals. We employ the method of Corral (2004) to examine the recurrence statistics of a range of cellular automaton earthquake models. The majority of models have an exponential distribution of recurrence intervals, the same as that of a Poisson process. One model, the Olami-Feder-Christensen automaton, has recurrence statistics consistent with regional seismicity for a certain range of its conservation parameter. For conservation parameters in this range, the event size statistics are also consistent with regional seismicity. Models whose dynamics are dominated by characteristic earthquakes do not appear to display universality of recurrence statistics.
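The analysis pipeline is straightforward to reproduce: rescale the recurrence intervals by their mean and fit the gamma form. The sketch below uses synthetic bursty event times; Corral's published parameter values are not asserted here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# stand-in event catalogue; replace with real earthquake origin times
times = np.cumsum(rng.weibull(0.7, size=5000))

tau = np.diff(times)
theta = tau / tau.mean()   # dimensionless recurrence intervals

# fit f(theta) ∝ theta^(g - 1) exp(-theta / a), the modified gamma form
g_hat, _, a_hat = stats.gamma.fit(theta, floc=0)
print(f"shape g ≈ {g_hat:.2f}, scale a ≈ {a_hat:.2f}")
# a Poisson process gives g ≈ 1 (pure exponential); the universal
# seismicity fit has g < 1, i.e. power-law behaviour at short intervals
```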
Abstract:
Recent years have witnessed intense research in multiple-input multiple-output (MIMO) wireless communications systems, which use multiple-element antennas (MEA) for signal transmission and reception. In this paper, we describe a novel electromagnetic model for investigating the effect of mutual coupling, inter-element spacing and array geometry on the capacity of MIMO systems. Simulation results are presented illustrating the application of the proposed model. The model concept stems from a hollow-waveguide analogue. Using this model, other aspects, such as the richness of the scattering environment (spacing and clustering), the effect of hard versus soft scatterers and the pinhole effect, can be investigated.
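A common way to quantify such effects numerically is to insert coupling matrices into the standard log-det capacity formula. The sketch below uses a generic nearest-neighbour coupling matrix as a stand-in; it is not the paper's waveguide-analogue model.

```python
import numpy as np

rng = np.random.default_rng(3)

def mimo_capacity(H, snr):
    """Capacity (bits/s/Hz) of one channel realisation with equal
    power allocation: log2 det(I + (snr / nt) H H^H)."""
    nr, nt = H.shape
    return np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real

nt = nr = 4
snr = 10.0  # linear scale, i.e. 10 dB
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

c = 0.2  # illustrative coupling strength between adjacent elements
C_mutual = np.eye(nt) + c * (np.eye(nt, k=1) + np.eye(nt, k=-1))

print(f"no coupling:   {mimo_capacity(H, snr):.2f} bits/s/Hz")
print(f"with coupling: {mimo_capacity(C_mutual @ H @ C_mutual, snr):.2f} bits/s/Hz")
```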
Abstract:
In this paper we present an algorithm that combines a low-level morphological operation with a model-based Global Circular Shortest Path scheme to segment the right ventricle. Traditional morphological operations are employed to obtain the region of interest and adjust it to generate a mask. The image cropped by the mask is then partitioned into a few overlapping regions. The Global Circular Shortest Path algorithm is then applied to extract the contour from each partition. The final step re-assembles the partitions to create the whole contour. The technique is quite reliable and robust, as illustrated by the very good agreement between the extracted contours and expert manual drawings.
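The circular constraint means the extracted contour must close on itself once the partition is unwrapped into polar coordinates. A brute-force dynamic programming version is sketched below; the paper's Global CSP algorithm computes the same optimum more efficiently, so this is a stand-in for the idea, not the published algorithm.

```python
import numpy as np

def circular_shortest_path(cost):
    """Minimum-cost closed contour through a polar-unwrapped cost image.
    cost[i, j] is the cost of placing the contour at radius column j for
    angle row i; the path moves one row per step, shifts by at most one
    column, and must end in its starting column so the contour closes."""
    n_rows, n_cols = cost.shape
    best_total, best_path = np.inf, None
    for start in range(n_cols):
        dp = np.full((n_rows, n_cols), np.inf)
        back = np.zeros((n_rows, n_cols), dtype=int)
        dp[0, start] = cost[0, start]
        for i in range(1, n_rows):
            for j in range(n_cols):
                lo, hi = max(0, j - 1), min(n_cols, j + 2)
                k = lo + int(np.argmin(dp[i - 1, lo:hi]))
                dp[i, j] = dp[i - 1, k] + cost[i, j]
                back[i, j] = k
        if dp[-1, start] < best_total:   # closed path: end column == start
            best_total = dp[-1, start]
            path = [start]
            for i in range(n_rows - 1, 0, -1):
                path.append(back[i, path[-1]])
            best_path = path[::-1]
    return best_total, best_path

rng = np.random.default_rng(4)
total, path = circular_shortest_path(rng.random((36, 20)))  # toy cost image
print("contour cost:", round(float(total), 3))
```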
Abstract:
Neural networks are usually curved statistical models. They do not have finite-dimensional sufficient statistics, so on-line learning on the model itself inevitably loses information. In this paper we propose a new scheme for training curved models, inspired by the ideas of ancillary statistics and adaptive critics. At each point estimate, an auxiliary flat model (an exponential family) is built to locally accommodate both the usual statistic (tangent to the model) and an ancillary statistic (normal to the model). The auxiliary model plays a role in determining credit assignment analogous to that played by an adaptive critic in solving temporal problems. The method is illustrated with the Cauchy model, and the algorithm is proved to be asymptotically efficient.
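For reference, the Cauchy location model used as the illustration has the density and Fisher information below; asymptotic efficiency then means the on-line estimator's variance approaches the Cramér-Rao bound 2γ²/n after n observations. This is standard background, not a result specific to the paper.

```latex
p(x \mid \theta) = \frac{1}{\pi}\,\frac{\gamma}{\gamma^2 + (x - \theta)^2},
\qquad
I(\theta) = \frac{1}{2\gamma^2}
```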
Abstract:
We propose a new mathematical model for efficiency analysis, which combines DEA methodology with an old idea: ratio analysis. Our model, called DEA-R, treats all possible "output/input" ratios as outputs within the standard DEA model. Although DEA and DEA-R generate different summary measures of efficiency, the two measures are comparable. Our mathematical and empirical comparisons establish the validity of the DEA-R model in its own right. The key advantage of DEA-R over DEA is that it allows effective integration of the model with experts' opinions via flexible restrictive conditions on individual "output/input" pairs. © 2007 Springer Science+Business Media, LLC.
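For readers unfamiliar with the machinery, the standard input-oriented CCR envelopment program that DEA-R builds on can be solved as a small linear program; per the abstract, DEA-R then feeds the "output/input" ratios in as the outputs. The data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, unit):
    """Input-oriented CCR efficiency of one unit via the envelopment LP:
    min theta  s.t.  X @ lam <= theta * X[:, unit],  Y @ lam >= Y[:, unit].
    X: inputs (m x n units), Y: outputs (s x n units)."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lam]
    A_in = np.c_[-X[:, [unit]], X]                 # X lam - theta * x0 <= 0
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]   # -Y lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, unit]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[2.0, 4.0, 8.0],    # input 1 for three units
              [3.0, 1.0, 1.0]])   # input 2
Y = np.array([[1.0, 1.0, 1.0]])   # single output
for u in range(3):
    print(f"unit {u}: efficiency = {ccr_efficiency(X, Y, u):.3f}")
```

In a DEA-R setup as the abstract describes it, Y would hold every "output/input" ratio (a common construction pairs these with a single constant input), which is what makes expert restrictions on individual ratios easy to impose.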
Abstract:
Recently, Drăgulescu and Yakovenko proposed an analytical formula for computing the probability density function of stock log returns, based on the Heston model, which they tested empirically. Their research design inadvertently biased the fit of the data in favour of the Heston model, thus overstating their empirical results. Furthermore, Drăgulescu and Yakovenko did not perform any goodness-of-fit statistical tests. This study employs a research design that facilitates statistical tests of the goodness of fit of the Heston model to empirical returns. Robustness checks are also performed. In brief, the Heston model outperformed the Gaussian model only at high frequencies, and even then it does not provide a statistically acceptable fit to the data. The Gaussian model performed (marginally) better at medium and low frequencies, where the extra parameters of the Heston model have adverse impacts on the test statistics. © 2005 Taylor & Francis Group Ltd.
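A sketch of the kind of goodness-of-fit test the study calls for is below, applied to the Gaussian benchmark; testing the Heston density works the same way but requires numerically evaluating its Fourier-integral pdf. The synthetic fat-tailed returns are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Student-t draws mimic the fat tails that break the Gaussian fit
returns = stats.t.rvs(df=3, scale=0.01, size=2000, random_state=rng)

# Kolmogorov-Smirnov test of the fitted Gaussian model
mu, sigma = returns.mean(), returns.std(ddof=1)
ks_stat, p_value = stats.kstest(returns, "norm", args=(mu, sigma))
print(f"KS statistic = {ks_stat:.4f}, p = {p_value:.2e}")
# caveat: estimating (mu, sigma) from the same sample biases this
# p-value upwards, a subtlety a careful research design must address
# (cf. the Lilliefors correction for normality tests)
```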
Abstract:
In data envelopment analysis (DEA), operating units are compared on their outputs relative to their inputs. The identification of an appropriate input-output set is of decisive significance if assessment of the relative performance of the units is not to be biased. This paper reports on a novel approach used for identifying a suitable input-output set for assessing central administrative services at universities. A computer-supported group support system was used with an advisory board to enable the analysts to extract information pertaining to the boundaries of the unit of assessment and the corresponding input-output variables. The approach provides for a more comprehensive and less inhibited discussion of input-output variables to inform the DEA model. © 2005 Operational Research Society Ltd. All rights reserved.
Abstract:
In this paper we propose a data envelopment analysis (DEA) based method for assessing the comparative efficiencies of units operating production processes where input-output levels are inter-temporally dependent. One cause of inter-temporal dependence between input and output levels is capital stock, which influences output levels over many production periods. Such units cannot be assessed by traditional or 'static' DEA, which assumes input-output correspondences are contemporaneous in the sense that the output levels observed in a time period are the product solely of the input levels observed during that same period. The method developed in the paper overcomes the problem of inter-temporal input-output dependence by using input-output 'paths' mapped out by operating units over time as the basis for assessing them. As an application, we compare the results of the dynamic and static models for a set of UK universities. The paper suggests that the dynamic model captures efficiency better than the static model. © 2003 Elsevier Inc. All rights reserved.
Abstract:
Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and to make predictions. Such simulators are often computationally demanding and are thus not amenable to direct statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulator while accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Utilising the lower-dimensional active variable set simplifies subsequent emulation analysis. For random-output, or stochastic, simulators the output dispersion, and thus the variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations, and it proposes an experimental design technique to learn the model parameters optimally. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are handled efficiently in both the design and model inference stages. Through a series of simulation experiments on both synthetic and real-world simulators, emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling replicate-free designs in terms of both model parameter uncertainty and predictive variance.
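A minimal version of such a heteroscedastic emulator can be built as a two-stage model: empirical log-variances at the replicated design points are smoothed by one Gaussian process, and its predictions supply the input-dependent noise for the mean-response Gaussian process. This two-stage sketch is a simple stand-in for the thesis's coupled representations, and all names and settings below are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

# toy stochastic simulator: mean sin(x), noise growing with x,
# run with replication as the abstract's designs advocate
x_design = np.linspace(0, 10, 12)
n_rep = 10
X = np.repeat(x_design, n_rep)[:, None]
y = np.sin(X[:, 0]) + rng.normal(0, 0.05 + 0.2 * X[:, 0] / 10, X.shape[0])

# stage 1: empirical log-variance at each replicated design point
log_var = np.log(y.reshape(len(x_design), n_rep).var(axis=1, ddof=1))

# stage 2: a GP smooths the log-variance surface
gp_var = GaussianProcessRegressor(RBF(2.0) + WhiteKernel(0.1))
gp_var.fit(x_design[:, None], log_var)

# stage 3: mean-response GP with per-point noise from the variance GP
noise = np.exp(gp_var.predict(X))
gp_mean = GaussianProcessRegressor(RBF(2.0), alpha=noise).fit(X, y)

x_test = np.linspace(0, 10, 5)[:, None]
m, s = gp_mean.predict(x_test, return_std=True)
for xt, mt, st in zip(x_test[:, 0], m, s):
    print(f"x = {xt:4.1f}: mean = {mt:+.2f}, sd = {st:.2f}")
```

The replicated design is what makes stage 1 possible at all, which is one reason replication earns its share of the simulator budget in this setting.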