10 results for Space Density
in Aston University Research Archive
Abstract:
Training Mixture Density Network (MDN) configurations within the NETLAB framework takes time because of the way the error function and its gradient are computed. By optimising the computation of these functions, so that gradient information is computed in parameter space, training time is decreased by at least a factor of sixty for the example given. The reduced training time broadens the spectrum of problems to which MDNs can be practically applied, making the MDN framework an attractive method for the applied problem solver.
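As a rough illustration of the kind of computation being optimised (a minimal sketch, not the NETLAB implementation; the function and array names below are hypothetical), the following computes an MDN negative log-likelihood and its gradient with respect to the component means for a whole batch at once, rather than looping over individual data points:

```python
import numpy as np

def mdn_nll_and_grad(pi, mu, sigma, t):
    """Negative log-likelihood of a 1-D Gaussian mixture density network
    and its gradient w.r.t. the component means, vectorised over a batch.

    pi, mu, sigma : arrays of shape (N, K) -- mixing coefficients, means and
                    standard deviations predicted for each of N inputs.
    t             : array of shape (N,)    -- observed targets.
    """
    t = t[:, None]                                        # (N, 1) for broadcasting
    # Component densities phi_k(t) for every data point and component.
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
    phi = norm * np.exp(-0.5 * ((t - mu) / sigma) ** 2)   # (N, K)
    # Mixture likelihood per data point and total negative log-likelihood.
    p = np.sum(pi * phi, axis=1, keepdims=True)           # (N, 1)
    nll = -np.sum(np.log(p))
    # Posterior "responsibilities" of each component for each data point.
    gamma = pi * phi / p                                   # (N, K)
    # Gradient of the NLL w.r.t. the component means, for the whole batch.
    grad_mu = gamma * (mu - t) / sigma ** 2                # (N, K)
    return nll, grad_mu
```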
Abstract:
We investigate the dependence of Bayesian error bars on the distribution of data in input space. For generalized linear regression models we derive an upper bound on the error bars which shows that, in the neighbourhood of the data points, the error bars are substantially reduced from their prior values. For regions of high data density we also show that the contribution to the output variance due to the uncertainty in the weights can exhibit an approximate inverse proportionality to the probability density. Empirical results support these conclusions.
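For context, the standard decomposition of the predictive variance for a generalized linear regression model with basis functions φ and a Gaussian weight posterior takes the form below (generic notation, not necessarily that of the paper); the second term is the weight-uncertainty contribution that the abstract argues falls off roughly in inverse proportion to the local data density:

```latex
% Predictive (error-bar) variance at input x:
\[
  \sigma^{2}(\mathbf{x}) = \sigma_{\nu}^{2}
    + \boldsymbol{\phi}(\mathbf{x})^{\mathsf T}\mathbf{A}^{-1}\boldsymbol{\phi}(\mathbf{x}),
  \qquad
  \mathbf{A} = \beta \sum_{n=1}^{N}
    \boldsymbol{\phi}(\mathbf{x}_n)\boldsymbol{\phi}(\mathbf{x}_n)^{\mathsf T}
    + \alpha\mathbf{I},
\]
% where \sigma_{\nu}^{2} is the intrinsic noise variance, \alpha and \beta are the
% prior and noise hyperparameters, and the second term shrinks wherever the
% basis functions are well supported by data, i.e. in regions of high density.
```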
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques based on redundancy. Low-density parity-check codes work along the same principles as the Hamming code, but the parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
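The isomorphism mentioned above can be made explicit in the usual spin convention (illustrative notation, not necessarily the paper's):

```latex
% Map a bit x \in \{0,1\} to a spin S \in \{+1,-1\} via S = (-1)^{x}.
% Addition modulo 2 then corresponds to multiplication of spins:
\[
  (-1)^{x \oplus y} = (-1)^{x}\,(-1)^{y},
\]
% so a parity check such as x_1 \oplus x_2 \oplus x_3 = 0 becomes the
% multi-spin constraint S_1 S_2 S_3 = +1, the form of coupling studied in
% statistical-physics models of error-correcting codes.
```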
Abstract:
We have proposed a novel robust inversion-based neurocontroller that searches for the optimal control law by sampling from the estimated Gaussian distribution of the inverse plant model. However, for problems involving the prediction of continuous variables, a Gaussian model approximation provides only a very limited description of the properties of the inverse model. This is usually the case for problems in which the mapping to be learned is multi-valued or involves hysteretic transfer characteristics, as often arises in the solution of inverse plant models. In order to obtain a complete description of the inverse model, more general multicomponent distributions must be modelled. In this paper we test whether our proposed sampling approach can be used with arbitrary conditional probability distributions, which are modelled by a mixture density network. Importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The effectiveness of importance sampling from an arbitrary conditional probability distribution is demonstrated using a simple single-input single-output static nonlinear system with hysteretic characteristics in the inverse plant model.
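A minimal sketch of this kind of sampling-based search (a simplified best-of-N stand-in for the importance-sampling scheme described above; the toy plant, function names and parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mdn(pi, mu, sigma, n_samples, rng):
    """Draw control candidates from a 1-D Gaussian mixture predicted by an
    MDN inverse model (mixing coefficients pi, means mu, std devs sigma)."""
    comps = rng.choice(len(pi), size=n_samples, p=pi)
    return rng.normal(mu[comps], sigma[comps])

def select_control(pi, mu, sigma, target, forward_model, n_samples=256, rng=rng):
    """Sample candidate controls from the inverse MDN, score them with the
    forward plant model, and keep the best candidate."""
    u = sample_mdn(pi, mu, sigma, n_samples, rng)
    cost = (forward_model(u) - target) ** 2      # squared tracking error
    return u[np.argmin(cost)]

# Toy usage with a hypothetical static nonlinear plant y = tanh(u):
best_u = select_control(np.array([0.5, 0.5]),
                        np.array([-1.0, 1.0]),
                        np.array([0.3, 0.3]),
                        target=0.7,
                        forward_model=np.tanh)
```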
Abstract:
The paper investigates the relationships between registrations, de-registrations and population density at county level in the UK using VAT data for the 20 years over the period 1980–1999. The rationale for this is the need to understand the extent to which, in different parts of the UK, differences in the relationship between birth rates and death rates combine to produce an interpretable pattern in net birth rates. The analysis of the net birth rate shows that a strategy aimed at raising the net birth rate might, in principle, just as well aim at reducing business failure as at raising the birth rate. Indeed this might be more efficient, since it implies that fewer start-ups are "wasted": it would avoid the necessity, if targets are to be reached, of encouraging individuals who are patently unsuited to running their own business into business ownership.
Abstract:
The generation of very short range forecasts of precipitation in the 0-6 h time window is traditionally referred to as nowcasting. Most existing nowcasting systems essentially extrapolate radar observations in some manner; however, very few systems account for the uncertainties involved. Thus deterministic forecasts are produced, which have limited use when decisions must be made, since they carry no measure of confidence or spread of the forecast. This paper develops a Bayesian state space modelling framework for quantitative precipitation nowcasting which is probabilistic from conception. The model treats the observations (radar) as noisy realisations of the underlying true precipitation process, recognising that this process can never be completely known and thus must be represented probabilistically. In the model presented here the dynamics of the precipitation are dominated by advection, so this is a probabilistic extrapolation forecast. The model is designed in such a way as to minimise the computational burden, while maintaining a full, joint representation of the probability density function of the precipitation process. The update and evolution equations avoid the need to sample, so only one model needs to be run, as opposed to the more traditional ensemble route. It is shown that the model works well on both simulated and real data, but that further work is required before the model can be used operationally. © 2004 Elsevier B.V. All rights reserved.
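As an illustration of treating the radar field as a noisy observation of an underlying true process (a deliberately simplified linear-Gaussian sketch, not the paper's nowcasting model; the grid size, noise levels and one-cell advection operator are assumptions):

```python
import numpy as np

n = 50                                   # number of grid cells (hypothetical)
F = np.roll(np.eye(n), 1, axis=1)        # advection: shift the field by one cell
H = np.eye(n)                            # radar observes every cell directly
Q = 0.05 * np.eye(n)                     # process (model) noise covariance
R = 0.20 * np.eye(n)                     # radar observation noise covariance

x = np.zeros(n)                          # prior mean of the precipitation state
P = np.eye(n)                            # prior covariance

def nowcast_step(x, P, radar_obs):
    """One evolve-then-update cycle; the joint Gaussian pdf is carried
    analytically, so no ensemble of model runs is needed."""
    # Evolution (advection) of mean and covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update against the new radar image.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(n))
    x_new = x_pred + K @ (radar_obs - H @ x_pred)
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new
```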
Abstract:
We investigate the gradual changes in the microstructure of two blends of high-density polyethylene (HDPE) and polyamide 6 (PA6) at opposite compositions filled with increasing amounts of an organomodified clay. The filler locates preferentially inside the polyamide phase, bringing about radical alterations in the micron-scale arrangement of the polymer phases. When the host polyamide represents the major constituent, a sudden reduction of the average size of the polyethylene droplets was observed upon addition of even low amounts of organoclay. A morphology refinement was also noticed at low filler contents when the particles distribute inside the minor phase. In this case, however, further increasing the organoclay content eventually results in a high degree of PA6 phase continuity. Rheological analyses reveal that the filler loading at which the polyamide assembles into a continuous network corresponds to the critical threshold for its rheological transition from liquid-like to gel-like behaviour, which is indicative of the structuring of the filler inside the host PA6. On the basis of this finding, a schematic mechanism is proposed in which the role of the filler in driving the spatial arrangement of the polymer phases is discussed. Finally, we show that the synergy between the reinforcing action of the filler and its ability to affect the blend microstructure can be exploited to enhance relevant technological properties of the materials, such as their high-temperature structural integrity.
Abstract:
The load-bearing biomechanical role of the intervertebral disc is governed by the composition and organization of its major macromolecular components, collagen and aggrecan. The major function of aggrecan is to maintain tissue hydration, and hence disc height, under the high loads imposed by muscle activity and body weight. Key to this role is the high negative fixed charge of its glycosaminoglycan side chains, which impart a high osmotic pressure to the tissue, thus regulating and maintaining tissue hydration and hence disc height under load. In degenerate discs, aggrecan degrades and is lost from the disc, particularly centrally from the nucleus pulposus. This loss of fixed charge results in reduced hydration and loss of disc height; such changes are closely associated with low back pain. The present authors developed biomimetic glycosaminoglycan analogues based on sulphonate-containing polymers. These biomimetics are deliverable via injection into the disc where they polymerize in situ, forming a non-degradable, nuclear "implant" aimed at restoring disc height to degenerate discs, thereby relieving back pain. In vitro, these glycosaminoglycan analogues possess appropriate fixed charge density, hydration and osmotic responsiveness, thereby displaying the capacity to restore disc height and function. Preliminary biomechanical tests using a degenerate explant model showed that the implant adapts to the space into which it is injected and restores stiffness. These hydrogels mimic the role taken by glycosaminoglycans in vivo and, unlike other hydrogels, provide an intrinsic swelling pressure, which can maintain disc hydration and height under the high and variable compressive loads encountered in vivo. © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
In nonlinear and stochastic control problems, learning an efficient feed-forward controller is not amenable to conventional neurocontrol methods. For these approaches, estimating and then incorporating uncertainty in the controller and feed-forward models can produce more robust control results. Here, we introduce a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems which can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty in the outputs of a neural network can be obtained from the statistical properties of the network. More generally, multicomponent distributions can be modelled by a mixture density network. Based on importance sampling from these distributions, a novel robust inverse control approach is obtained. This importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. A nonlinear multi-variable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of the control signal as well as processes with hysteresis. © 2004 Elsevier Ltd. All rights reserved.
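A small sketch of how predicted uncertainty can localise the search for a control signal (hypothetical names and a simple grid search; not the authors' algorithm):

```python
import numpy as np

def localised_candidates(u_mean, u_std, k=2.0, n_grid=21):
    """Restrict the search to the region the inverse model believes is
    plausible: a +/- k standard-deviation window around its predicted mean."""
    return np.linspace(u_mean - k * u_std, u_mean + k * u_std, n_grid)

def choose_control(u_mean, u_std, target, forward_model):
    """Evaluate the forward plant model only on the localised candidates,
    avoiding a dynamic-programming search over the full control space."""
    candidates = localised_candidates(u_mean, u_std)
    errors = (forward_model(candidates) - target) ** 2
    return candidates[np.argmin(errors)]
```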
Abstract:
A new approach is described herein, in which neutron reflectivity measurements that probe changes in the density profile of thin films as they absorb material from the gas phase are combined with a Love wave-based gravimetric assay that measures the mass of absorbed material. This combination of techniques not only determines the spatial distribution of absorbed molecules, but also reveals the amount of void space within the thin film (a quantity that can be difficult to assess using neutron reflectivity measurements alone). The uptake of organic solvent vapours into spun-cast films of polystyrene has been used as a model system, with a view to extending the method to the study of other systems. These could include, for example, humidity sensors, hydrogel swelling, biomolecule adsorption or transformations of electroactive and chemically reactive thin films. This is the first demonstration of combined neutron reflectivity and Love wave-based gravimetry, and the experimental caveats, limitations and scope of the method are explored and discussed in detail.
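One simple way the two measurements could be combined to estimate the void space (an illustrative back-of-the-envelope relation, not necessarily the analysis used in the paper):

```latex
% Neutron reflectivity gives the film thickness d, the Love-wave gravimetric
% assay gives the mass per unit area m_A, and \rho_{\mathrm{bulk}} is the bulk
% density of the film material. The void fraction then follows as
\[
  \phi_{\mathrm{void}} \;\approx\; 1 - \frac{m_A}{\rho_{\mathrm{bulk}}\, d},
\]
% i.e. the fraction of the film volume not accounted for by the measured mass.
```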