14 results for Hilbert Cube

in Aston University Research Archive


Relevance: 20.00%

Abstract:

This thesis describes a novel connectionist machine utilizing induction by a Hilbert hypercube representation. This representation offers a number of distinct advantages which are described. We construct a theoretical and practical learning machine which lies in an area of overlap between three disciplines - neural nets, machine learning and knowledge acquisition - hence it is referred to as a "coalesced" machine. To this unifying aspect are added the various advantages of its orthogonal lattice structure as against less structured nets. We discuss the case for such a fundamental and low-level empirical learning tool, and the assumptions behind the machine are clearly outlined. Our theory of an orthogonal lattice structure - the Hilbert hypercube of an n-dimensional space, using a complemented distributive lattice as a basis for supervised learning - is derived from first principles. The resulting "subhypercube theory" was implemented in a development machine which was then used to test the theoretical predictions, again under strict scientific guidelines. The scope, advantages and limitations of this machine were tested in a series of experiments. Novel and seminal properties of the machine include: the "metrical", deterministic and global nature of its search; complete convergence invariably producing minimum polynomial solutions for both disjuncts and conjuncts even with moderate levels of noise present; a learning engine which is mathematically analysable in depth based upon the "complexity range" of the function concerned; a strong bias towards the simplest possible globally (rather than locally) derived "balanced" explanation of the data; the ability to cope with variables in the network; and new ways of reducing the exponential explosion. Performance issues were addressed, and comparative studies with other learning machines indicate that our novel approach has definite value and should be further researched.
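To make the hypercube representation concrete, here is a minimal Python sketch (not the thesis's actual machine) of the underlying data structure: training examples are vertices of an n-dimensional hypercube, and the smallest subhypercube covering a set of positive examples is found componentwise, with disagreeing bit positions generalised to don't-cares. The ternary-pattern encoding and function names are illustrative assumptions.

```python
# Minimal sketch (an assumption, not the thesis's machine): Boolean examples
# live on the vertices of an n-dimensional hypercube, and the smallest
# subhypercube covering a set of positive examples is found componentwise -
# a bit that agrees across all examples stays fixed, a disagreeing bit
# becomes a "don't care" (here '*').

def covering_subhypercube(examples):
    """Return the minimal subhypercube (as a ternary pattern) containing
    every example, e.g. ['0110', '0100'] -> '01*0'."""
    pattern = list(examples[0])
    for ex in examples[1:]:
        for i, bit in enumerate(ex):
            if pattern[i] != bit:
                pattern[i] = '*'      # generalise this dimension
    return ''.join(pattern)

def matches(pattern, vertex):
    """Check whether a hypercube vertex lies inside the subhypercube."""
    return all(p in ('*', v) for p, v in zip(pattern, vertex))

print(covering_subhypercube(['0110', '0100', '0111']))  # '01**'
print(matches('01**', '0101'))                          # True
```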

Relevance: 10.00%

Abstract:

The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite-dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
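As a rough illustration of the O(n³) posterior-mean computation, the following numpy sketch performs regression under Gaussian assumptions with an RBF covariance; the kernel choice, hyperparameters and data are illustrative assumptions, not taken from the paper. The Cholesky factorisation of the n × n Gram matrix is the cubic-cost step.

```python
# Minimal sketch of the posterior-mean ("ideal") regression, assuming an
# RBF covariance. The Cholesky solve of the n x n Gram matrix is O(n^3).
import numpy as np

def rbf(X, Y, length=0.5):
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)                  # n = 50 training inputs
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

K = rbf(x, x) + 0.1**2 * np.eye(50)        # Gram matrix + noise variance
L = np.linalg.cholesky(K)                  # the O(n^3) step
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

x_test = np.linspace(0, 1, 5)
posterior_mean = rbf(x_test, x) @ alpha    # posterior mean at test points
print(posterior_mean)
```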

Relevance: 10.00%

Abstract:

A new general linear model (GLM) beamformer method is described for processing magnetoencephalography (MEG) data. A standard nonlinear beamformer is used to determine the time course of neuronal activation for each point in a predefined source space. A Hilbert transform gives the envelope of oscillatory activity at each location in any chosen frequency band (not necessary in the case of sustained (DC) fields), enabling the general linear model to be applied and a volumetric T statistic image to be determined. The new method is illustrated by a two-source simulation (sustained field and 20 Hz) and is shown to provide accurate localization. The method is also shown to accurately localize the increasing and decreasing gamma activities to the temporal and frontal lobes, respectively, in the case of a scintillating scotoma. The new method brings the advantages of the general linear model to the analysis of MEG data and should prove useful for the localization of changing patterns of activity across all frequency ranges including DC (sustained fields). © 2004 Elsevier Inc. All rights reserved.
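A hedged sketch of the core per-location step, on synthetic data: band-pass the beamformer time course, take the Hilbert envelope (scipy.signal.hilbert), fit a GLM with a boxcar regressor and form a T statistic for the contrast. The regressor, band edges and noise level are stand-ins, not the paper's settings.

```python
# Sketch of the envelope-GLM step for one source-space location; all
# signals and the design matrix here are synthetic assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
t = np.arange(0, 10, 1 / fs)
active = (t > 4) & (t < 6)
signal = np.sin(2 * np.pi * 20 * t) * (0.5 + active)   # 20 Hz source
signal += 0.3 * np.random.default_rng(1).standard_normal(t.size)

b, a = butter(4, [15 / (fs / 2), 25 / (fs / 2)], btype='band')
envelope = np.abs(hilbert(filtfilt(b, a, signal)))     # oscillatory envelope

X = np.column_stack([active.astype(float), np.ones(t.size)])  # GLM design
beta, res, *_ = np.linalg.lstsq(X, envelope, rcond=None)
sigma2 = res[0] / (t.size - X.shape[1])                # residual variance
c = np.array([1.0, 0.0])                               # contrast: active > rest
t_stat = c @ beta / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
print(f"T = {t_stat:.1f}")
```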

Relevance: 10.00%

Abstract:

The roots of the concept of cortical columns stretch far back into the history of neuroscience. The impulse to compartmentalise the cortex into functional units can be seen at work in the phrenology of the beginning of the nineteenth century. At the beginning of the next century Korbinian Brodmann and several others published treatises on cortical architectonics. Later, in the middle of that century, Lorente de No wrote of chains of ‘reverberatory’ neurons orthogonal to the pial surface of the cortex and called them ‘elementary units of cortical activity’. This is the first hint that a columnar organisation might exist. With the advent of microelectrode recording, first Vernon Mountcastle (1957) and then David Hubel and Torsten Wiesel provided evidence consistent with the idea that columns might constitute units of physiological activity. This idea was backed up in the 1970s by clever histochemical techniques and culminated in Hubel and Wiesel’s well-known ‘ice-cube’ model of the cortex and Szentágothai’s brilliant iconography. The cortical column can thus be seen as the terminus ad quem of several great lines of neuroscientific research: currents originating in phrenology and passing through cytoarchitectonics; currents originating in neurocytology and passing through Lorente de No. Famously, Huxley noted the tragedy of a beautiful hypothesis destroyed by an ugly fact. Famously, too, human visual perception is orientated toward seeing edges and demarcations when, perhaps, they are not there. Recently the concept of cortical columns has come in for the same radical criticism that undermined the architectonics of the early part of the twentieth century. Does history repeat itself? This paper reviews this history and asks the question.

Relevance: 10.00%

Abstract:

In the present work the neutron emission spectra from a graphite cube, and from natural uranium, lithium fluoride, graphite, lead and steel slabs bombarded with 14.1 MeV neutrons, were measured to test nuclear data and calculational methods for D-T fusion reactor neutronics. The neutron spectra were measured with an organic scintillator, using a pulse shape discrimination technique based on a charge comparison method to reject the gamma-ray counts. A computer programme was used to analyse the experimental data by the differentiation unfolding method. The 14.1 MeV neutron source was obtained from the T(d,n)⁴He reaction by the bombardment of a T-Ti target with a deuteron beam of energy 130 keV. The total neutron yield was monitored by the associated particle method using a silicon surface barrier detector. The numerical calculations were performed using the one-dimensional discrete-ordinate neutron transport code ANISN with the ZZ-FEWG 1/31-1F cross section library. A computer programme based on a Gaussian smoothing function was used to smooth the calculated data and to match the experimental data. There was general agreement between measured and calculated spectra for the range of materials studied. The ANISN calculations, carried out with a P3-S8 approximation together with representation of the slab assemblies by a hollow sphere with no reflection at the internal boundary, were adequate to model the experimental data; hence it appears that the cross section set is satisfactory and, for the materials tested, needs no modification in the range 14.1 MeV to 2 MeV. It would also be possible to carry out a study on fusion reactor blankets, using cylindrical geometry and including a series of concentric cylindrical shells to represent the torus wall, possible neutron converter and breeder regions, and reflector and shielding regions.
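The Gaussian smoothing step can be illustrated with a short numpy/scipy sketch: the calculated spectrum is convolved with a Gaussian whose width mimics detector resolution before comparison with measurement. The mock spectrum and resolution width below are assumptions for illustration, not the thesis's actual data or smoothing function.

```python
# Sketch of folding detector resolution into a calculated spectrum via
# Gaussian smoothing; the spectrum shape and width are assumed values.
import numpy as np
from scipy.ndimage import gaussian_filter1d

E = np.linspace(2.0, 14.1, 500)            # energy grid, MeV
calc = np.exp(-(E - 14.1)**2 / 0.05) + 0.2 * np.exp(-E / 4)  # mock spectrum

dE = E[1] - E[0]
sigma_mev = 0.4 * np.sqrt(E.mean())        # assumed (constant) resolution width
smoothed = gaussian_filter1d(calc, sigma=sigma_mev / dE)
print(smoothed[:5])
```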

Relevance: 10.00%

Abstract:

Three types of crushed rock aggregate were appraised, these being Carboniferous Sandstone, Magnesian Limestone and Jurassic Limestone. A comprehensive aggregate testing programme assessed the properties of these materials. Two series of specimen slabs were cast and power finished using recognised site procedures to assess firstly the influence of these aggregates as the coarse fraction, and secondly as the fine fraction. Each specimen slab was tested at 28 days under three regimes to simulate 2-body abrasion, 3-body abrasion and the effect of water on the abrasion of concrete. The abrasion resistance was measured using a recognised accelerated abrasion testing apparatus employing rotating steel wheels. Relationships between the aggregate and concrete properties and the abrasion resistance have been developed with the following properties being particularly important - Los Angeles Abrasion and grading of the coarse aggregate, hardness of the fine aggregate and water-cement ratio of the concrete. The sole use of cube strength as a measure of abrasion resistance has been shown to be unreliable by this work. A graphical method for predicting the potential abrasion resistance of concrete using various aggregate and concrete properties has been proposed. The effect of varying the proportion of low-grade aggregate in the mix has also been investigated. Possible mechanisms involved during abrasion have been discussed, including localised crushing and failure of the aggregate/paste bond. Aggregates from each of the groups were found to satisfy current specifications for direct finished concrete floors. This work strengthens the case for the increased use of low-grade aggregates in the future.

Relevance: 10.00%

Abstract:

This thesis considers the computer simulation of moist agglomerate collisions using the discrete element method (DEM). The study is confined to pendular-state moist agglomerates, in which liquid is present as either adsorbed immobile films or pendular liquid bridges, and the interparticle force is modelled as the adhesive contact force plus the interstitial liquid bridge force. Algorithms used to model the contact force due to surface adhesion, tangential friction and particle deformation have been derived by other researchers and are briefly described in the thesis. A theoretical study of the pendular liquid bridge force between spherical particles has been made, and algorithms for modelling the pendular liquid bridge force between spherical particles have been developed and incorporated into the Aston version of the DEM program TRUBAL. It has been found that, for static liquid bridges, the more explicit criterion for specifying the stable solution and critical separation is provided by the total free energy. The critical separation is given by the cube root of the liquid bridge volume to a good approximation, and the 'gorge method' of evaluation based on the toroidal approximation leads to errors in the calculated force of less than 10%. Three-dimensional computer simulations of an agglomerate impacting orthogonally with a wall are reported. The results demonstrate the effectiveness of adding viscous binder to prevent attrition, a common practice in process engineering. Results of simulated agglomerate-agglomerate collisions show that, for collinear agglomerate impacts, there is an optimum velocity which results in a near-spherical shape of the coalesced agglomerate and, hence, minimises attrition due to subsequent collisions. The relationship between the optimum impact velocity and the liquid viscosity and surface tension is illustrated. The effect of varying the angle of impact on the coalescence/attrition behaviour is also reported. (DX 187, 340).
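The cube-root rupture criterion quoted above can be written down directly; the optional contact-angle correction of the form (1 + θ/2)·V^(1/3) is often quoted in the pendular liquid bridge literature and is an addition here, not necessarily the thesis's expression.

```python
# Sketch of the rupture criterion: critical separation of a pendular
# bridge is approximately the cube root of the bridge volume. The
# contact-angle correction is a commonly quoted refinement (theta = 0
# recovers the plain cube root stated in the abstract).
def critical_separation(volume, contact_angle_rad=0.0):
    """Approximate rupture distance of a pendular liquid bridge (SI units)."""
    return (1.0 + 0.5 * contact_angle_rad) * volume ** (1.0 / 3.0)

print(critical_separation(1e-12))   # ~1e-4 m for a 1 mm^3/1000 bridge
```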

Relevance: 10.00%

Abstract:

Deformation microstructures in two batches of commercially pure copper (A and B) of almost similar composition have been studied after rolling reductions from 5% to 95%. X-ray diffraction, optical metallography, scanning electron microscopy in the back-scattered mode, and transmission and scanning electron microscopy have been used to examine the deformation microstructure. At low strains (~10%) the deformation is accommodated by uniform octahedral slip. Microbands, which occur as sheet-like features usually on the {111} slip planes, are formed after 10% reduction. The misorientations between microbands and the matrix are usually small (1-2°) and the dislocations within the bands suggest that a single slip system has been operative. The number of microbands increases with strain; they start to cluster and rotate after 60% reduction and, after 90%, they become almost perfectly aligned with the rolling direction. There were no detectable differences in deformation microstructure between the two materials up to a deformation level of 60%, but subsequently copper B started to develop shear bands which became very profuse by 90% reduction. By contrast, copper A at this stage of deformation developed a smooth laminated structure. This difference in the deformation microstructures has been attributed to traces of unknown impurity in B which inhibit recovery of work hardening. The preferred orientations of both were typical of deformed copper, although the presence of shear bands was associated with a slightly weaker texture. The effects of rolling temperature and grain size on deformation microstructure were also investigated. It was concluded that lowering the rolling temperature or increasing the initial grain size encourages the material to develop shear bands after heavy deformation. Recovery and recrystallization have been studied in both materials during annealing. During recrystallization the growth of new grains showed quite different characteristics in the two cases. Where shear bands were present these acted as nucleation sites and produced a wide spread of recrystallized grain orientations. The resulting annealing textures were very weak. In the absence of shear bands, nucleation occurs by a remarkably long-range bulging process which creates the cube orientation and an intensely sharp annealing texture. Cube-oriented regions occur in long bands of highly elongated and well-recovered cells which contain long-range cumulative misorientations. They are transition bands with structural characteristics ideally suited for nucleation of recrystallization. Shear banding inhibits the cube texture both by creating alternative nuclei and by destroying the microstructural features necessary for cube nucleation.

Relevance: 10.00%

Abstract:

Plantain (Banana-Musa AAB) is a widely grown but commercially underexploited tropical fruit. This study demonstrates the processing of plantain to flour and extends its use and convenience as a constituent of bread, cake and biscuit. Plantain was peeled, dried and milled to produce flour. Proximate analysis was carried out on the flour to determine the food composition. Drying at temperatures below 70°C produced light-coloured plantain flour. Experiments were carried out to determine the mechanism of drying, the heat and mass transfer coefficients, and the effect of air velocity, temperature and cube size on the rate of drying of plantain cubes. The drying was diffusion controlled. Pilot-scale drying of plantain cubes in a cabinet dryer showed no significant increase of drying rate above 70°C. In the temperature range found most suitable for plantain drying (i.e. 60 to 70°C) the total drying time was adequately predicted using a modified equation based on Fick's law, provided the cube temperature was taken to be about 5°C below the actual drying air temperature. Studies of the baking properties of plantain flour revealed that plantain flour can be substituted for strong wheat flour up to 15% for bread making and up to 50% for madeira cake. A shortcake biscuit was produced using 100% plantain flour and test-marketed. Detailed economic studies showed that the production of plantain fruit and its processing into flour would be economically viable in Nigeria when the flour is sold at the wholesale price of N0.65 per kilogram, provided a minimum sale of 25% plantain suckers. There is a need for government subsidy if plantain flour is to compete with imported wheat flour. The broader economic benefits accruing from the processing of plantain fruit into flour and its use in bakery products include employment opportunity, savings in foreign exchange and a stimulus to home agriculture.
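For illustration, a standard Fickian series solution for diffusion-controlled drying of a cube (the slab solution cubed, for the three orthogonal directions) gives the kind of drying-time prediction described. This is the textbook form, not the thesis's modified equation, and the diffusivity and cube size below are assumed values.

```python
# Hedged sketch of a Fickian drying model for a cube; D and the cube
# size are illustrative assumptions, not the thesis's fitted values.
import numpy as np

def moisture_ratio(t, D, half_side, n_terms=50):
    """MR(t) for a cube of side 2*half_side with diffusion coefficient D."""
    n = np.arange(n_terms)
    slab = np.sum(8 / np.pi**2 / (2*n + 1)**2
                  * np.exp(-(2*n + 1)**2 * np.pi**2 * D * t
                           / (4 * half_side**2)))
    return slab ** 3          # cube = product of three slab solutions

for hours in (1, 2, 4, 8):    # drying curve for a 10 mm cube
    print(hours, moisture_ratio(hours * 3600, D=5e-10, half_side=0.005))
```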

Relevance: 10.00%

Abstract:

To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz transformed signals and one original signal from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz transformed signals. We further show that the expected responses of even and odd symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation using both symmetric and asymmetric filters to account for some perceptual phase distortions observed in image signals - notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by virtue of representing phase, orientation and energy as orthogonal signal quantities.
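A minimal numpy sketch of the Riesz triple described above: the two Riesz-filtered signals are computed in the Fourier domain with the standard monogenic-signal transfer functions, and local energy, orientation and phase are then read off the 3-D vector (f, r1, r2). The test image and grid size are illustrative.

```python
# Riesz transform of an image via the standard frequency-domain filters
# -i*u/|u| and -i*v/|u| (the monogenic-signal construction).
import numpy as np

def riesz(image):
    u = np.fft.fftfreq(image.shape[0])[:, None]
    v = np.fft.fftfreq(image.shape[1])[None, :]
    mag = np.sqrt(u**2 + v**2)
    mag[0, 0] = 1.0                         # avoid division by zero at DC
    F = np.fft.fft2(image)
    r1 = np.real(np.fft.ifft2(-1j * u / mag * F))
    r2 = np.real(np.fft.ifft2(-1j * v / mag * F))
    return r1, r2

x = np.linspace(0, 4 * np.pi, 64)
image = np.sin(x)[None, :] * np.ones((64, 1))   # vertical grating
r1, r2 = riesz(image)

energy = np.sqrt(image**2 + r1**2 + r2**2)      # local energy (scalar)
orientation = np.arctan2(r2, r1)                # local orientation
phase = np.arctan2(np.hypot(r1, r2), image)     # local phase
print(energy[32, 32], orientation[32, 32], phase[32, 32])
```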

Relevance: 10.00%

Abstract:

We propose and analyze a flat-top pulse generator based on a fiber Bragg grating (FBG) in transmission. As is shown in the examples, a properly designed uniform-period FBG can exhibit a spectral response in transmission close to a sinc function (in amplitude and phase) over a certain bandwidth, because of the logarithmic Hilbert transform relations, and this response can be used to reshape a Gaussian-like input pulse into a flat-top pulse.
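The reshaping idea can be checked numerically: a transmission response approximating a sinc over a limited band, applied to a Gaussian pulse spectrum, yields a flat-top output, since the inverse Fourier transform of a sinc is a rectangle. The sketch below is a toy frequency-domain model, not a grating simulation, and all widths are arbitrary illustrative numbers.

```python
# Toy model: Gaussian pulse spectrum times a sinc "transmission" gives a
# flat-top pulse (convolution of the Gaussian with a rectangle in time).
import numpy as np

t = np.linspace(-50e-12, 50e-12, 4096)              # time grid, s
pulse = np.exp(-t**2 / (2 * (5e-12)**2))            # Gaussian input
f = np.fft.fftfreq(t.size, t[1] - t[0])             # frequency grid, Hz

T_width = 20e-12                                    # target flat-top width
H = np.sinc(f * T_width)                            # sinc-like transmission
out = np.fft.ifft(np.fft.fft(pulse) * H)
print(np.abs(out).max())                            # flat-top envelope level
```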

Relevance: 10.00%

Abstract:

Optical differentiators constitute a basic device for analog all-optical signal processing [1]. Fiber grating approaches, both fiber Bragg grating (FBG) and long period grating (LPG), constitute an attractive solution because of their low cost, low insertion losses, and full compatibility with fiber optic systems. A first-order differentiator LPG approach was proposed and demonstrated in [2], but FBGs may be preferred in applications with a bandwidth of up to a few nm because of the extreme sensitivity of LPGs to environmental fluctuations [3]. Several FBG approaches have been proposed in [3-6], requiring one or more additional optical elements to create a first-order differentiator. A very simple, single-optical-element FBG approach was proposed in [7] for first-order differentiation, applying the well-known logarithmic Hilbert transform relation between the amplitude and phase of an FBG in transmission [8]. Using this relationship in the design process, it was theoretically and numerically demonstrated that a single FBG in transmission can be designed to simultaneously approach the amplitude and phase of a first-order differentiator spectral response, without the need for any additional elements. © 2013 IEEE.
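For reference, the target spectral response of an ideal first-order differentiator is H(ω) ∝ i(ω − ω₀). The short sketch below applies it to a Gaussian envelope in the Fourier domain and checks the output against the analytic derivative; it is a numerical illustration of the ideal response, not of an FBG design, and all values are arbitrary.

```python
# Ideal first-order differentiator: multiply the field spectrum by i*w
# (baseband detuning) and compare with the analytic envelope derivative.
import numpy as np

t = np.linspace(-20e-12, 20e-12, 4096)              # time grid, s
sigma = 2e-12
env = np.exp(-t**2 / (2 * sigma**2))                # input envelope
w = 2 * np.pi * np.fft.fftfreq(t.size, t[1] - t[0]) # angular detuning

out = np.fft.ifft(1j * w * np.fft.fft(env))         # differentiator output
analytic = -t / sigma**2 * env                      # d/dt of the Gaussian
print(np.max(np.abs(out.real - analytic)))          # small numerical error
```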

Relevance: 10.00%

Abstract:

Nanoindentation has become a common technique for measuring the hardness and elastic-plastic properties of materials, including coatings and thin films. In recent years, different nanoindenter instruments have been commercialised and used for this purpose. Each instrument is equipped with its own analysis software for the derivation of the hardness and reduced Young's modulus from the raw data. These data are mostly analysed through the Oliver and Pharr method. In all cases, the calibration of compliance and area function is mandatory. The present work illustrates and describes a calibration procedure and an approach to raw data analysis carried out for six different nanoindentation instruments through several round-robin experiments. Three different indenters were used (Berkovich, cube corner, spherical), and three standardised reference samples were chosen (hard fused quartz, soft polycarbonate, and sapphire). It was clearly shown that the use of these common procedures consistently limited the spread of the hardness and reduced Young's modulus data compared to the same measurements performed using instrument-specific procedures. The following recommendations for nanoindentation calibration must be followed: (a) use only sharp indenters; (b) set an upper cut-off value for the penetration depth below which measurements must be considered unreliable; (c) perform nanoindentation measurements with limited thermal drift; (d) ensure that the load-displacement curves are as smooth as possible; (e) perform stiffness measurements specific to each instrument/indenter couple; (f) use fused quartz (Fq) and sapphire (Sa) as calibration reference samples for stiffness and area function determination; (g) use a function, rather than a single value, for the stiffness; and (h) adopt a unique protocol and software for raw data analysis in order to limit the data spread related to the instruments (i.e. the level of drift or noise, defects of a given probe) and to make the H and Er data intercomparable. © 2011 Elsevier Ltd.
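A minimal sketch of the Oliver & Pharr extraction referred to above: hardness H = P_max/A(h_c) and reduced modulus E_r = √π·S/(2β√A), from peak load, unloading stiffness and contact depth h_c = h_max − ε·P_max/S. The ideal Berkovich area function used here (A = 24.5·h_c²) omits exactly the tip-shape terms that the round-robin calibration is meant to determine; all numbers are illustrative.

```python
# Oliver & Pharr hardness and reduced modulus from an unloading curve,
# assuming an ideal (uncalibrated) Berkovich area function.
import numpy as np

def oliver_pharr(P_max, S, h_max, beta=1.05, eps=0.75):
    h_c = h_max - eps * P_max / S            # contact depth
    A = 24.5 * h_c**2                        # ideal Berkovich area function
    H = P_max / A                            # hardness
    E_r = np.sqrt(np.pi) * S / (2 * beta * np.sqrt(A))
    return H, E_r

# e.g. 10 mN peak load, 0.12 mN/nm unloading stiffness, 300 nm max depth
H, E_r = oliver_pharr(P_max=10.0, S=0.12, h_max=300.0)
print(f"H = {H * 1e6:.1f} GPa, E_r = {E_r * 1e6:.1f} GPa")  # mN/nm^2 -> GPa
```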

Relevance: 10.00%

Abstract:

The paper presents a 3-dimensional simulation of the effect of particle shape on char entrainment in a bubbling fluidised bed reactor. Three char particles of 350 μm side length but of different shapes (cube, sphere, and tetrahedron) are injected into the fluidised bed, and the momentum transport from the fluidising gas and fluidised sand is modelled. Depending on the fluidising conditions, reactor design and particle shape, the char particles will either be entrained from the reactor or remain inside the bubbling bed. The sphericity of the particles is the factor that differentiates the particle motion inside the reactor and their efficient entrainment out of it. The simulation has been performed with a completely revised momentum transport model for bubble three-phase flow, taking into account the sphericity factors, and has been applied as an extension to the commercial finite volume code FLUENT 6.3. © 2010 Elsevier B.V. All rights reserved.
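The sphericity factor driving the differences above is ψ = π^(1/3)·(6V)^(2/3)/A, the surface area of the volume-equivalent sphere divided by the particle's actual surface area. The short sketch below computes it for the three simulated shapes; taking the tetrahedron as regular with edge 350 μm and the sphere with diameter 350 μm are my geometric assumptions.

```python
# Sphericity psi = pi^(1/3) * (6V)^(2/3) / A for the three shapes.
import math

def sphericity(volume, area):
    return math.pi ** (1/3) * (6 * volume) ** (2/3) / area

s = 350e-6                                   # 350 um characteristic length
shapes = {
    "cube":        (s**3,                      6 * s**2),
    "sphere":      (math.pi * s**3 / 6,        math.pi * s**2),   # diameter s
    "tetrahedron": (s**3 / (6 * math.sqrt(2)), math.sqrt(3) * s**2),
}
for name, (V, A) in shapes.items():
    print(f"{name}: {sphericity(V, A):.3f}")  # sphere 1.000, cube ~0.806
```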