11 results for Hamming Cube

in Aston University Research Archive


Relevance: 10.00%

Abstract:

A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
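For concreteness, the macroscopics named above can be computed directly from a population. The following is a minimal sketch, not taken from the thesis; the one-max (additive) fitness, population size and string length are illustrative assumptions:

    # Macroscopics of a toy GA population: cumulants of an additive fitness
    # and the mean pairwise Hamming distance (correlation).
    import numpy as np
    from scipy.stats import kstat  # unbiased estimators of the cumulants

    rng = np.random.default_rng(0)
    pop = rng.integers(0, 2, size=(50, 100))  # 50 binary genotypes of length 100

    fitness = pop.sum(axis=1)                 # additive (one-max) phenotype
    cumulants = [kstat(fitness, n) for n in (1, 2, 3, 4)]

    # Mean Hamming distance over all distinct pairs in the population.
    pair_dist = (pop[:, None, :] != pop[None, :, :]).sum(axis=2)
    n = pop.shape[0]
    mean_hamming = pair_dist.sum() / (n * (n - 1))

Tracking how these quantities evolve under selection, mutation and crossover is what the formalism predicts analytically.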

Relevance: 10.00%

Abstract:

Modern digital communication systems achieve reliable transmission by employing error-correction techniques that add redundancy. Low-density parity-check (LDPC) codes work along the same principles as the Hamming code, but their parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error-correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
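The isomorphism in question is the map x → (−1)^x, which turns addition modulo 2 into multiplication of ±1 "spins". A minimal sketch (the vectors and the parity-check row are illustrative):

    # XOR on {0,1} corresponds to multiplication on {+1,-1} under x -> (-1)**x.
    import numpy as np

    x, y = np.array([0, 1, 1, 0]), np.array([1, 1, 0, 0])
    sx, sy = (-1) ** x, (-1) ** y                     # spin representation
    assert np.array_equal((-1) ** (x ^ y), sx * sy)   # addition -> multiplication

    # A row h of a sparse parity-check matrix H is satisfied when
    # h . x = 0 (mod 2), i.e. when the product of the selected spins is +1.
    h = np.array([1, 0, 1, 1])
    assert np.prod(sx[h == 1]) == (-1) ** ((h @ x) % 2)

This is what lets a parity-check code be rewritten as a spin system whose ground states are the codewords.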

Relevance: 10.00%

Abstract:

The roots of the concept of cortical columns stretch far back into the history of neuroscience. The impulse to compartmentalise the cortex into functional units can be seen at work in the phrenology of the beginning of the nineteenth century. At the beginning of the next century Korbinian Brodmann and several others published treatises on cortical architectonics. Later, in the middle of that century, Lorente de No wrote of chains of ‘reverberatory’ neurons orthogonal to the pial surface of the cortex and called them ‘elementary units of cortical activity’. This is the first hint that a columnar organisation might exist. With the advent of microelectrode recording, first Vernon Mountcastle (1957) and then David Hubel and Torsten Wiesel provided evidence consistent with the idea that columns might constitute units of physiological activity. This idea was backed up in the 1970s by clever histochemical techniques and culminated in Hubel and Wiesel’s well-known ‘ice-cube’ model of the cortex and Szentágothai’s brilliant iconography. The cortical column can thus be seen as the terminus ad quem of several great lines of neuroscientific research: currents originating in phrenology and passing through cytoarchitectonics; currents originating in neurocytology and passing through Lorente de No. Famously, Huxley noted the tragedy of a beautiful hypothesis destroyed by an ugly fact. Famously, too, human visual perception is orientated toward seeing edges and demarcations when, perhaps, they are not there. Recently the concept of cortical columns has come in for the same radical criticism that undermined the architectonics of the early part of the twentieth century. Does history repeat itself? This paper reviews this history and asks the question.

Relevance: 10.00%

Abstract:

In the present work the neutron emission spectra from a graphite cube, and from natural uranium, lithium fluoride, graphite, lead and steel slabs bombarded with 14.1 MeV neutrons, were measured to test nuclear data and calculational methods for D-T fusion reactor neutronics. The neutron spectra were measured with an organic scintillator, using a pulse-shape discrimination technique based on a charge-comparison method to reject gamma-ray counts. A computer programme was used to analyse the experimental data by the differentiation unfolding method. The 14.1 MeV neutron source was obtained from the T(d,n)4He reaction by the bombardment of a T-Ti target with a deuteron beam of energy 130 keV. The total neutron yield was monitored by the associated particle method using a silicon surface barrier detector. The numerical calculations were performed using the one-dimensional discrete-ordinate neutron transport code ANISN with the ZZ-FEWG 1/31-1F cross-section library. A computer programme based on a Gaussian smoothing function was used to smooth the calculated data to match the experimental data. There was general agreement between measured and calculated spectra for the range of materials studied. The ANISN calculations, carried out in a P3-S8 approximation with the slab assemblies represented by a hollow sphere with no reflection at the internal boundary, were adequate to model the experimental data; hence it appears that the cross-section set is satisfactory for the materials tested and needs no modification in the range 14.1 MeV to 2 MeV. It would also be possible to carry out a study on fusion reactor blankets, using cylindrical geometry and including a series of concentric cylindrical shells to represent the torus wall, possible neutron converter and breeder regions, and reflector and shielding regions.
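The smoothing step can be pictured as convolving the calculated multigroup spectrum with a Gaussian detector response. A minimal sketch, not the thesis code; the 5% relative resolution and the toy spectrum are illustrative assumptions:

    # Smooth a calculated spectrum with an energy-dependent Gaussian so it can
    # be compared with a measurement of finite resolution.
    import numpy as np

    def gaussian_smooth(energies, flux, rel_fwhm=0.05):
        """Convolve flux(E) with a Gaussian of FWHM = rel_fwhm * E."""
        smoothed = np.zeros_like(flux)
        for i, e0 in enumerate(energies):
            sigma = rel_fwhm * e0 / 2.355        # FWHM -> standard deviation
            w = np.exp(-0.5 * ((energies - e0) / sigma) ** 2)
            smoothed[i] = np.sum(w * flux) / np.sum(w)
        return smoothed

    E = np.linspace(2.0, 14.1, 200)                  # MeV, the range studied
    calc = np.exp(-((E - 14.1) ** 2) / 0.1) + 0.05   # toy calculated spectrum
    broadened = gaussian_smooth(E, calc)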

Relevance: 10.00%

Abstract:

Three types of crushed rock aggregate were appraised, these being Carboniferous Sandstone, Magnesian Limestone and Jurassic Limestone. A comprehensive aggregate testing programme assessed the properties of these materials. Two series of specimen slabs were cast and power-finished using recognised site procedures to assess the influence of these aggregates first as the coarse fraction and then as the fine fraction. Each specimen slab was tested at 28 days under three regimes to simulate 2-body abrasion, 3-body abrasion and the effect of water on the abrasion of concrete. The abrasion resistance was measured using a recognised accelerated abrasion testing apparatus employing rotating steel wheels. Relationships between the aggregate and concrete properties and the abrasion resistance have been developed, with the following properties being particularly important: Los Angeles abrasion value and grading of the coarse aggregate, hardness of the fine aggregate, and water-cement ratio of the concrete. This work has shown the sole use of cube strength as a measure of abrasion resistance to be unreliable. A graphical method for predicting the potential abrasion resistance of concrete using various aggregate and concrete properties has been proposed. The effect of varying the proportion of low-grade aggregate in the mix has also been investigated. Possible mechanisms involved during abrasion have been discussed, including localised crushing and failure of the aggregate/paste bond. Aggregates from each of the groups were found to satisfy current specifications for direct finished concrete floors. This work strengthens the case for the increased use of low-grade aggregates in the future.

Relevance: 10.00%

Abstract:

This thesis considers the computer simulation of moist agglomerate collisions using the discrete element method (DEM). The study is confined to pendular-state moist agglomerates, in which liquid is present as either adsorbed immobile films or pendular liquid bridges, and the interparticle force is modelled as the adhesive contact force plus the interstitial liquid bridge force. Algorithms used to model the contact force due to surface adhesion, tangential friction and particle deformation have been derived by other researchers and are briefly described in the thesis. A theoretical study of the pendular liquid bridge force between spherical particles has been made, and algorithms for modelling this force have been developed and incorporated into the Aston version of the DEM program TRUBAL. It has been found that, for static liquid bridges, the more explicit criterion for specifying the stable solution and critical separation is provided by the total free energy. The critical separation is given by the cube root of the liquid bridge volume to a good approximation, and the 'gorge method' of evaluation based on the toroidal approximation leads to errors in the calculated force of less than 10%. Three-dimensional computer simulations of an agglomerate impacting orthogonally with a wall are reported. The results demonstrate the effectiveness of adding viscous binder to prevent attrition, a common practice in process engineering. Results of simulated agglomerate-agglomerate collisions show that, for colinear agglomerate impacts, there is an optimum velocity which results in a near-spherical shape of the coalesced agglomerate and, hence, minimises attrition due to subsequent collisions. The relationship between the optimum impact velocity and the liquid viscosity and surface tension is illustrated. The effect of varying the angle of impact on the coalescence/attrition behaviour is also reported. (DX 187, 340).
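The rupture criterion quoted above is simple enough to state as code. A minimal sketch, with illustrative units and values (the thesis gives the cube-root rule only as an approximation):

    # A pendular liquid bridge ruptures when the surface-to-surface separation
    # exceeds roughly the cube root of the bridge volume.
    def critical_separation(bridge_volume):
        """Approximate rupture distance of a static pendular liquid bridge."""
        return bridge_volume ** (1.0 / 3.0)

    V = 1.0e-12                          # bridge volume, m^3 (illustrative)
    s_crit = critical_separation(V)      # 1.0e-4 m
    separation = 8.0e-5                  # current separation, m (illustrative)
    bridge_intact = separation < s_crit  # True: the bridge force still acts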

Relevance: 10.00%

Abstract:

Deformation microstructures in two batches of commercially pure copper (A and B) of almost identical composition have been studied after rolling reductions from 5% to 95%. X-ray diffraction, optical metallography, scanning electron microscopy in the back-scattered mode, and transmission and scanning electron microscopy have been used to examine the deformation microstructure. At low strains (~10%) the deformation is accommodated by uniform octahedral slip. Microbands, which occur as sheet-like features usually on the {111} slip planes, are formed after 10% reduction. The misorientations between microbands and the matrix are usually small (1-2°) and the dislocations within the bands suggest that a single slip system has been operative. The number of microbands increases with strain; they start to cluster and rotate after 60% reduction and, after 90%, they become almost perfectly aligned with the rolling direction. There were no detectable differences in deformation microstructure between the two materials up to a deformation level of 60%, but subsequently copper B started to develop shear bands, which became very profuse by 90% reduction. By contrast, copper A at this stage of deformation developed a smooth laminated structure. This difference in the deformation microstructures has been attributed to traces of unknown impurity in B which inhibit recovery of work hardening. The preferred orientations of both were typical of deformed copper, although the presence of shear bands was associated with a slightly weaker texture. The effects of rolling temperature and grain size on deformation microstructure were also investigated. It was concluded that lowering the rolling temperature or increasing the initial grain size encourages the material to develop shear bands after heavy deformation. Recovery and recrystallization have been studied in both materials during annealing. During recrystallization the growth of new grains showed quite different characteristics in the two cases. Where shear bands were present these acted as nucleation sites and produced a wide spread of recrystallized grain orientations. The resulting annealing textures were very weak. In the absence of shear bands, nucleation occurs by a remarkably long-range bulging process which creates the cube orientation and an intensely sharp annealing texture. Cube-oriented regions occur in long bands of highly elongated and well-recovered cells which contain long-range cumulative misorientations. They are transition bands with structural characteristics ideally suited for nucleation of recrystallization. Shear banding inhibits the cube texture both by creating alternative nuclei and by destroying the microstructural features necessary for cube nucleation.

Relevance: 10.00%

Abstract:

Plantain (banana, Musa AAB) is a widely grown but commercially underexploited tropical fruit. This study demonstrates the processing of plantain to flour and extends its use and convenience as a constituent of bread, cake and biscuit. Plantain was peeled, dried and milled to produce flour. Proximate analysis was carried out on the flour to determine the food composition. Drying at temperatures below 70ºC produced light-coloured plantain flour. Experiments were carried out to determine the mechanism of drying, the heat and mass transfer coefficients, and the effect of air velocity, temperature and cube size on the rate of drying of plantain cubes. The drying was diffusion-controlled. Pilot-scale drying of plantain cubes in a cabinet dryer showed no significant increase of drying rate above 70ºC. In the temperature range found most suitable for plantain drying (i.e. 60 to 70ºC), the total drying time was adequately predicted using a modified equation based on Fick's law, provided the cube temperature was taken to be about 5ºC below the actual drying air temperature. Studies of the baking properties of plantain flour revealed that it can be substituted for strong wheat flour up to 15% for bread making and up to 50% for madeira cake. A shortcake biscuit was produced using 100% plantain flour and test-marketed. Detailed economic studies showed that the production of plantain fruit and its processing into flour would be economically viable in Nigeria when the flour is sold at the wholesale price of N0.65 per kilogram, provided a minimum sale of 25% plantain suckers. There is a need for government subsidy if plantain flour is to compete with imported wheat flour. The broader economic benefits accruing from the processing of plantain fruit into flour and its use in bakery products include employment opportunity, savings in foreign exchange and stimulus to home agriculture.
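A diffusion-controlled drying time can be estimated from the classical Fick's-law series solution; for a cube the solution is the product of three slab solutions. The sketch below keeps only the first term of the series, and the diffusivity and target moisture ratio are illustrative assumptions, not values from the thesis:

    # First-term Fick's-law estimate of the time for a cube, dried from all
    # faces, to reach moisture ratio MR = (8/pi^2)^3 * exp(-3 pi^2 D t / L^2).
    import math

    def drying_time(L, D, target_mr):
        """L: cube side (m); D: moisture diffusivity (m^2/s)."""
        A = (8.0 / math.pi ** 2) ** 3
        return -(L ** 2) / (3.0 * math.pi ** 2 * D) * math.log(target_mr / A)

    t = drying_time(L=0.01, D=5.0e-10, target_mr=0.05)  # ~16,000 s (~4.4 h)

In line with the abstract, D would be evaluated at a cube temperature about 5ºC below the drying air temperature.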

Relevance: 10.00%

Abstract:

If, as is widely believed, schizophrenia is characterized by abnormalities of brain functional connectivity, then it seems reasonable to expect that different subtypes of schizophrenia could be discriminated in the same way. However, evidence for differences in functional connectivity between the subtypes of schizophrenia is largely lacking and, where it exists, it could be accounted for by clinical differences between the patients (e.g. medication) or by the limitations of the measures used. In this study, we measured EEG functional connectivity in unmedicated male patients diagnosed with either positive or negative syndrome schizophrenia and compared them with age- and sex-matched healthy controls. Using new methodology (Medkour et al., 2009) based on partial coherence, brain connectivity plots were constructed for positive- and negative-syndrome patients and controls. Reliable differences in the pattern of functional connectivity were found, with both syndromes showing not only an absence of some of the connections that were seen in controls but also the presence of connections that the controls did not show. Comparing connectivity graphs using the Hamming distance, the negative-syndrome patients were found to be more distant from the controls than were the positive-syndrome patients. Bootstrap distributions of these distances were created, which showed a significant difference in the mean distances that was consistent with the observation that negative-syndrome diagnosis is associated with a more severe form of schizophrenia. We conclude that schizophrenia is characterized by widespread changes in functional connectivity, with negative-syndrome patients showing a more extreme pattern of abnormality than positive-syndrome patients.
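The graph comparison itself is straightforward: with connectivity expressed as a binary adjacency matrix, the Hamming distance is the number of edges present in one graph but not the other, and a bootstrap over subjects gives its sampling distribution. A minimal sketch; the group sizes, channel count and random "data" are illustrative assumptions:

    # Hamming distance between binary connectivity graphs, bootstrapped.
    import numpy as np

    def hamming(g1, g2):
        """Edges present in one undirected graph but not the other."""
        iu = np.triu_indices_from(g1, k=1)
        return int(np.sum(g1[iu] != g2[iu]))

    def group_graph(graphs):
        """Consensus graph: keep an edge if most subjects show it."""
        return (graphs.mean(axis=0) > 0.5).astype(int)

    rng = np.random.default_rng(1)
    controls = rng.integers(0, 2, (20, 8, 8))   # 20 subjects, 8 channels
    patients = rng.integers(0, 2, (20, 8, 8))

    dists = [hamming(group_graph(controls[rng.integers(0, 20, 20)]),
                     group_graph(patients[rng.integers(0, 20, 20)]))
             for _ in range(1000)]              # bootstrap distribution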

Relevance: 10.00%

Abstract:

Nanoindentation has become a common technique for measuring the hardness and elastic-plastic properties of materials, including coatings and thin films. In recent years, different nanoindenter instruments have been commercialised and used for this purpose. Each instrument is equipped with its own analysis software for the derivation of the hardness and reduced Young's modulus from the raw data. These data are mostly analysed through the Oliver and Pharr method. In all cases, the calibration of compliance and area function is mandatory. The present work illustrates and describes a calibration procedure and an approach to raw data analysis carried out for six different nanoindentation instruments through several round-robin experiments. Three different indenters were used (Berkovich, cube corner and spherical), and three standardised reference samples were chosen (hard fused quartz, soft polycarbonate, and sapphire). It was clearly shown that the use of these common procedures consistently limited the spread in the hardness and reduced Young's modulus data compared to the same measurements performed using instrument-specific procedures. The following recommendations for nanoindentation calibration must be followed: (a) use only sharp indenters, (b) set an upper cut-off value for the penetration depth below which measurements must be considered unreliable, (c) perform nanoindentation measurements with limited thermal drift, (d) ensure that the load-displacement curves are as smooth as possible, (e) perform stiffness measurements specific to each instrument/indenter couple, (f) use fused quartz (Fq) and sapphire (Sa) as calibration reference samples for stiffness and area function determination, (g) use a function, rather than a single value, for the stiffness and (h) adopt a unique protocol and software for raw data analysis in order to limit the data spread related to the instruments (i.e. the level of drift or noise, defects of a given probe) and to make the H and Er data intercomparable. © 2011 Elsevier Ltd.
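The Oliver and Pharr reduction mentioned above takes the peak load, peak depth and unloading stiffness to a hardness and reduced modulus. A minimal sketch, using the ideal Berkovich area function rather than a calibrated one; the input values are illustrative, chosen to resemble fused quartz:

    # Oliver-Pharr analysis: H = P_max / A(h_c), E_r = sqrt(pi) S / (2 sqrt(A)).
    import math

    def oliver_pharr(p_max, h_max, stiffness, eps=0.75):
        """p_max in mN, h_max in nm, unloading stiffness in mN/nm."""
        h_c = h_max - eps * p_max / stiffness   # contact depth, nm
        area = 24.5 * h_c ** 2                  # ideal Berkovich area, nm^2
        hardness = p_max / area * 1.0e6         # GPa
        e_r = math.sqrt(math.pi) / 2.0 * stiffness / math.sqrt(area) * 1.0e6
        return hardness, e_r                    # GPa, GPa

    H, Er = oliver_pharr(10.0, 304.0, 0.0827)   # ~9 GPa, ~70 GPa

In practice, recommendation (f) above means replacing the ideal 24.5 h_c^2 with an area function fitted on the reference samples.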

Relevance: 10.00%

Abstract:

The paper presents a 3-dimensional simulation of the effect of particle shape on char entrainment in a bubbling fluidised bed reactor. Three char particles of 350 μm size but of different shapes (cube, sphere, and tetrahedron) are injected into the fluidised bed, and the momentum transport from the fluidising gas and fluidised sand is modelled. Depending on the fluidising conditions, reactor design and particle shape, the char particles will either be entrained from the reactor or remain inside the bubbling bed. The sphericity of the particles is the factor that differentiates their motion inside the reactor and their entrainment out of it. The simulation has been performed with a completely revised momentum transport model for bubble three-phase flow, taking into account the sphericity factors, and has been applied as an extension to the commercial finite volume code FLUENT 6.3. © 2010 Elsevier B.V. All rights reserved.
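Sphericity, the quantity that drives the drag difference between the shapes, is the ratio of the surface area of the volume-equivalent sphere to the particle's actual surface area. A short sketch for the three shapes studied; reading the 350 μm figure as edge length for the cube and tetrahedron and as diameter for the sphere is an illustrative assumption:

    # Sphericity psi = pi^(1/3) * (6V)^(2/3) / A for the three char shapes.
    import math

    def sphericity(volume, area):
        return math.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / area

    a = 350e-6                                            # metres
    cube = sphericity(a ** 3, 6.0 * a ** 2)               # ~0.806
    tetra = sphericity(a ** 3 / (6.0 * math.sqrt(2.0)),
                       math.sqrt(3.0) * a ** 2)           # ~0.671
    sphere = sphericity(math.pi * a ** 3 / 6.0,
                        math.pi * a ** 2)                 # 1.0

The lower the sphericity, the larger the deviation from spherical drag, which is why the tetrahedron, cube and sphere behave differently in the bed.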