268 results for Hamming Cube
Abstract:
Full-field Fourier-domain optical coherence tomography (3F-OCT) is a full-field version of spectral domain/swept source optical coherence tomography. A set of two-dimensional Fourier holograms is recorded at discrete wavenumbers spanning the swept source tuning range. The resultant three-dimensional data cube contains comprehensive information on the three-dimensional spatial properties of the sample, including its morphological layout and optical scatter. The morphological layout can be reconstructed in software via three-dimensional discrete Fourier transformation. The spatial resolution of the 3F-OCT reconstructed image, however, is degraded due to the presence of a phase cross-term, whose origin and effects are addressed in this paper. We present a theoretical and experimental study of the imaging performance of 3F-OCT, with particular emphasis on elimination of the deleterious effects of the phase cross-term.
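As a rough illustration of the reconstruction step described above, the sketch below applies a three-dimensional discrete Fourier transform to a stack of Fourier holograms. The array name, shape and the use of NumPy are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

# Hypothetical stack of 2-D Fourier holograms, one per wavenumber sample:
# axes 0, 1 -> transverse hologram coordinates, axis 2 -> wavenumber k.
holograms = np.random.randn(128, 128, 64) + 1j * np.random.randn(128, 128, 64)

# Reconstruct the 3-D morphological layout by a 3-D discrete Fourier transform
# of the spectral data cube (the step the abstract performs "in software").
reconstruction = np.fft.fftshift(np.fft.ifftn(holograms))

# The scattering structure appears in the magnitude of the transformed cube.
image = np.abs(reconstruction)
```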
Abstract:
We report a new approach in optical coherence tomography (OCT) called full-field Fourier-domain OCT (3F-OCT). A three-dimensional image of a sample is obtained by digital reconstruction of a three-dimensional data cube, acquired with a Fourier holography recording system illuminated with a swept source. We present a theoretical and experimental study of the signal-to-noise ratio of the 3F-OCT approach versus the serial image acquisition ("flying-spot" OCT) approach. (c) 2005 Optical Society of America.
Abstract:
New copper(II) complexes of general empirical formula Cu(mpsme)X·xCH3COCH3 (mpsme = anionic form of the 6-methyl-2-formylpyridine Schiff base of S-methyldithiocarbazate; X = Cl, N3, NCS, NO3; x = 0, 0.5) have been synthesized and characterized by IR, electronic, EPR and susceptibility measurements. Room-temperature μeff values for the complexes are in the range 1.75-2.1 μB, typical of uncoupled or weakly coupled Cu(II) centres. The EPR spectra of the [Cu(mpsme)X] (X = Cl, N3, NO3, NCS) complexes reveal a tetragonally distorted coordination sphere around the mononuclear Cu(II) centre. We have exploited second-derivative EPR spectra in conjunction with Fourier filtering (sine-bell and Hamming functions) to extract all of the nitrogen hyperfine coupling matrices. While X-ray crystallography of [Cu(mpsme)NCS] reveals a linear polymer in which the thiocyanate anion bridges two copper(II) ions, the EPR spectra in solution are typical of magnetically isolated monomeric Cu(II) centres, indicating dissociation of the polymeric chain in solution. The structures of the free ligand, Hmpsme, and of the {[Cu(mpsme)NO3]·0.5CH3COCH3}2 and [Cu(mpsme)NCS]n complexes have been determined by X-ray diffraction. The {[Cu(mpsme)NO3]·0.5CH3COCH3}2 complex is a centrosymmetric dimer in which each copper atom adopts a five-coordinate distorted square-pyramidal geometry with an N2OS2 coordination environment: the Schiff base coordinates as a uninegatively charged tridentate ligand, chelating through the pyridine and azomethine nitrogen atoms and the thiolate, with an oxygen atom of a unidentate nitrato ligand and a bridging sulfur atom from the second ligand completing the coordination sphere. The [Cu(mpsme)(NCS)]n complex has a novel staircase-like one-dimensional polymeric structure in which the NCS- ligands bridge two adjacent copper(II) ions asymmetrically in an end-to-end fashion, providing the nitrogen atom to one copper and the sulfur atom to the other. (c) 2005 Elsevier B.V. All rights reserved.
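A minimal sketch of the kind of Fourier filtering mentioned above: the spectrum is transformed, apodized with a Hamming window, and back-transformed. The function name, the synthetic trace and the parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fourier_filter_hamming(spectrum):
    """Smooth a (second-derivative) spectrum by Hamming apodization in the Fourier domain."""
    transformed = np.fft.rfft(spectrum)
    # Tapering the high-frequency end with the falling half of a Hamming window
    # suppresses noise while retaining the slower hyperfine structure.
    window = np.hamming(2 * transformed.size)[transformed.size:]
    return np.fft.irfft(transformed * window, n=spectrum.size)

# Example: filter a noisy synthetic second-derivative trace.
x = np.linspace(-1.0, 1.0, 1024)
noisy = np.gradient(np.gradient(np.exp(-x**2 / 0.01))) + 1e-3 * np.random.randn(x.size)
smoothed = fourier_filter_hamming(noisy)
```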
Abstract:
Necessary conditions for the complete graph on n vertices to have a decomposition into 5-cubes are that 5 divides n - 1 and 80 divides n(n - 1)/2. These are known to be sufficient when n is odd. We prove them also sufficient for n even, thus completing the spectrum problem for the 5-cube and lending further weight to a long-standing conjecture of Kotzig. (c) 2005 Wiley Periodicals, Inc.
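The two divisibility conditions follow from counting degrees and edges; a brief sketch of the standard argument (not taken from the paper itself):

```latex
% Q_5 has 2^5 = 32 vertices, is 5-regular, and hence has 5 \cdot 32 / 2 = 80 edges.
% If K_n decomposes into copies of Q_5, each vertex of K_n (degree n-1) lies in some
% number of copies, each contributing degree 5, so 5 \mid n-1; and the n(n-1)/2 edges
% of K_n split into blocks of 80 edges each, so 80 \mid n(n-1)/2.
\[
  5 \mid (n-1), \qquad 80 \;\Big|\; \frac{n(n-1)}{2}.
\]
```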
Abstract:
We are developing a telemedicine application which offers automated diagnosis of facial (Bell's) palsy through a Web service. We used a test data set of 43 images of facial palsy patients and 44 normal people to develop the automatic recognition algorithm. Three different image pre-processing methods were used. Machine learning techniques (support vector machine, SVM) were used to examine the difference between the two halves of the face. If there was a sufficient difference, then the SVM recognized facial palsy. Otherwise, if the halves were roughly symmetrical, the SVM classified the image as normal. It was found that the facial palsy images had a greater Hamming Distance than the normal images, indicating greater asymmetry. The median distance in the normal group was 331 (interquartile range 277-435) and the median distance in the facial palsy group was 509 (interquartile range 334-703). This difference was significant (P
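A minimal sketch of the asymmetry measure described above, assuming the pre-processed face has already been binarized and is split down the midline; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def half_face_hamming_distance(binary_face):
    """Hamming distance between the left half of a binarized face image and the
    mirrored right half; larger values indicate greater facial asymmetry."""
    height, width = binary_face.shape
    mid = width // 2
    left = binary_face[:, :mid]
    right_mirrored = np.fliplr(binary_face[:, width - mid:])
    return int(np.count_nonzero(left != right_mirrored))

# Example with a synthetic 64x64 binary image.
face = np.random.rand(64, 64) > 0.5
print(half_face_hamming_distance(face))
```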
Abstract:
This paper describes the real time global vision system for the robot soccer team the RoboRoos. It has a highly optimised pipeline that includes thresholding, segmenting, colour normalising, object recognition and perspective and lens correction. It has a fast ‘paint’ colour calibration system that can calibrate in any face of the YUV or HSI cube. It also autonomously selects both an appropriate camera gain and colour gains from robot regions across the field to achieve colour uniformity. Camera geometry calibration is performed automatically from selection of keypoints on the field. The system achieves a position accuracy of better than 15mm over a 4m × 5.5m field, and orientation accuracy to within 1°. It processes 614 × 480 pixels at 60Hz on a 2.0GHz Pentium 4 microprocessor.
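A minimal sketch of threshold-based colour classification against calibrated bounds in the YUV cube, in the spirit of the pipeline described above; the class names and numeric bounds are illustrative assumptions, not the RoboRoos calibration.

```python
import numpy as np

# Illustrative calibrated colour classes as axis-aligned boxes in the YUV cube.
COLOUR_CLASSES = {
    "ball_orange": ((60, 200), (0, 110), (160, 255)),   # (Y, U, V) bounds
    "field_green": ((40, 160), (0, 120), (0, 110)),
}

def classify_pixel(y, u, v):
    """Return the first colour class whose YUV box contains the pixel, else None."""
    for name, ((y0, y1), (u0, u1), (v0, v1)) in COLOUR_CLASSES.items():
        if y0 <= y <= y1 and u0 <= u <= u1 and v0 <= v <= v1:
            return name
    return None

print(classify_pixel(120, 80, 200))   # -> "ball_orange"
```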
Abstract:
Full-field Fourier-domain optical coherence tomography (3F-OCT) is a full-field version of spectral-domain/swept-source optical coherence tomography. A set of two-dimensional Fourier holograms is recorded at discrete wavenumbers spanning the swept-source tuning range. The resultant three-dimensional data cube contains comprehensive information on the three-dimensional morphological layout of the sample, which can be reconstructed in software via a three-dimensional discrete Fourier transform. This method of recording the OCT signal confers a signal-to-noise ratio improvement in comparison with "flying-spot" time-domain OCT. The spatial resolution of the 3F-OCT reconstructed image, however, is degraded due to the presence of a phase cross-term, whose origin and effects are addressed in this paper. We present a theoretical and experimental study of the imaging performance of 3F-OCT, with particular emphasis on elimination of the deleterious effects of the phase cross-term.
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
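A minimal sketch of the correlation macroscopic referred to above, i.e. the mean pairwise Hamming distance within a population of binary genotypes; this is purely illustrative and not code from the study.

```python
import numpy as np

def mean_pairwise_hamming(population):
    """Mean Hamming distance over all pairs of binary genotypes (rows)."""
    pop = np.asarray(population, dtype=int)
    n = pop.shape[0]
    # Column-wise counts of ones give the pairwise distance sum in closed form:
    # a site with k ones among n genotypes contributes k * (n - k) mismatching pairs.
    ones = pop.sum(axis=0)
    total_mismatches = np.sum(ones * (n - ones))
    return total_mismatches / (n * (n - 1) / 2)

population = np.random.randint(0, 2, size=(50, 100))   # 50 genotypes, 100 bits
print(mean_pairwise_hamming(population))
```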
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques that add redundancy. Low-density parity-check codes work along the principles of the Hamming code, but the parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
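A minimal sketch of the parity-check idea mentioned above, using the classic Hamming(7,4) code and syndrome decoding of a single bit error; this is a textbook construction, not the LDPC decoder discussed in the text.

```python
import numpy as np

# Parity-check matrix H of the Hamming(7,4) code; column j is the binary
# representation of j (j = 1..7), so the syndrome directly indexes the flipped bit.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def correct_single_error(received):
    """Return the received word with any single bit error corrected."""
    syndrome = (H @ received) % 2
    position = int(syndrome[0] * 4 + syndrome[1] * 2 + syndrome[2])  # 0 means no error
    corrected = received.copy()
    if position:
        corrected[position - 1] ^= 1
    return corrected

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid Hamming(7,4) codeword
received = codeword.copy()
received[2] ^= 1                              # flip bit 3
assert np.array_equal(correct_single_error(received), codeword)
```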
Abstract:
The roots of the concept of cortical columns stretch far back into the history of neuroscience. The impulse to compartmentalise the cortex into functional units can be seen at work in the phrenology of the beginning of the nineteenth century. At the beginning of the next century Korbinian Brodmann and several others published treatises on cortical architectonics. Later, in the middle of that century, Lorente de No wrote of chains of ‘reverberatory’ neurons orthogonal to the pial surface of the cortex and called them ‘elementary units of cortical activity’. This was the first hint that a columnar organisation might exist. With the advent of microelectrode recording, first Vernon Mountcastle (1957) and then David Hubel and Torsten Wiesel provided evidence consistent with the idea that columns might constitute units of physiological activity. This idea was backed up in the 1970s by clever histochemical techniques and culminated in Hubel and Wiesel’s well-known ‘ice-cube’ model of the cortex and Szentágothai’s brilliant iconography. The cortical column can thus be seen as the terminus ad quem of several great lines of neuroscientific research: currents originating in phrenology and passing through cytoarchitectonics; currents originating in neurocytology and passing through Lorente de No. Famously, Huxley noted the tragedy of a beautiful hypothesis destroyed by an ugly fact. Famously, too, human visual perception is orientated toward seeing edges and demarcations when, perhaps, they are not there. Recently the concept of cortical columns has come in for the same radical criticism that undermined the architectonics of the early part of the twentieth century. Does history repeat itself? This paper reviews this history and asks the question.
Abstract:
In the present work the neutron emission spectra from a graphite cube, and from natural uranium, lithium fluoride, graphite, lead and steel slabs bombarded with 14.1 MeV neutrons, were measured to test nuclear data and calculational methods for D-T fusion reactor neutronics. The neutron spectra were measured with an organic scintillator using a pulse-shape discrimination technique based on a charge comparison method to reject gamma-ray counts. A computer programme was used to analyse the experimental data by the differentiation unfolding method. The 14.1 MeV neutron source was obtained from the T(d,n)4He reaction by bombardment of a T-Ti target with a deuteron beam of energy 130 keV. The total neutron yield was monitored by the associated particle method using a silicon surface barrier detector. The numerical calculations were performed using the one-dimensional discrete-ordinate neutron transport code ANISN with the ZZ-FEWG 1/31-1F cross-section library. A computer programme based on a Gaussian smoothing function was used to smooth the calculated data and to match the experimental data. There was general agreement between measured and calculated spectra for the range of materials studied. The ANISN calculations, carried out with a P3-S8 approximation together with representation of the slab assemblies by a hollow sphere with no reflection at the internal boundary, were adequate to model the experimental data; hence it appears that the cross-section set is satisfactory and, for the materials tested, needs no modification in the range 14.1 MeV to 2 MeV. It would also be possible to carry out a study on fusion reactor blankets, using cylindrical geometry and including a series of concentric cylindrical shells to represent the torus wall, possible neutron converter and breeder regions, and reflector and shielding regions.
Abstract:
Three types of crushed rock aggregate were appraised, these being Carboniferous Sandstone, Magnesian Limestone and Jurassic Limestone. A comprehensive aggregate testing programme assessed the properties of these materials. Two series of specimen slabs were cast and power finished using recognised site procedures to assess firstly the influence of these aggregates as the coarse fraction, and secondly as the fine fraction. Each specimen slab was tested at 28 days under three regimes to simulate 2-body abrasion, 3-body abrasion and the effect of water on the abrasion of concrete. The abrasion resistance was measured using a recognised accelerated abrasion testing apparatus employing rotating steel wheels. Relationships between the aggregate and concrete properties and the abrasion resistance have been developed with the following properties being particularly important - Los Angeles Abrasion and grading of the coarse aggregate, hardness of the fine aggregate and water-cement ratio of the concrete. The sole use of cube strength as a measure of abrasion resistance has been shown to be unreliable by this work. A graphical method for predicting the potential abrasion resistance of concrete using various aggregate and concrete properties has been proposed. The effect of varying the proportion of low-grade aggregate in the mix has also been investigated. Possible mechanisms involved during abrasion have been discussed, including localised crushing and failure of the aggregate/paste bond. Aggregates from each of the groups were found to satisfy current specifications for direct finished concrete floors. This work strengthens the case for the increased use of low-grade aggregates in the future.
Abstract:
This thesis considers the computer simulation of moist agglomerate collisions using the discrete element method (DEM). The study is confined to pendular-state moist agglomerates, in which liquid is present as either adsorbed immobile films or pendular liquid bridges, and the interparticle force is modelled as the adhesive contact force plus the interstitial liquid bridge force. Algorithms used to model the contact force due to surface adhesion, tangential friction and particle deformation have been derived by other researchers and are briefly described in the thesis. A theoretical study of the pendular liquid bridge force between spherical particles has been made, and algorithms for modelling the pendular liquid bridge force between spherical particles have been developed and incorporated into the Aston version of the DEM program TRUBAL. It has been found that, for static liquid bridges, the more explicit criterion for specifying the stable solution and critical separation is provided by the total free energy. The critical separation is given by the cube root of the liquid bridge volume to a good approximation, and the 'gorge method' of evaluation based on the toroidal approximation leads to errors in the calculated force of less than 10%. Three-dimensional computer simulations of an agglomerate impacting orthogonally with a wall are reported. The results demonstrate the effectiveness of adding viscous binder to prevent attrition, a common practice in process engineering. Results of simulated agglomerate-agglomerate collisions show that, for collinear agglomerate impacts, there is an optimum velocity which results in a near-spherical shape of the coalesced agglomerate and, hence, minimises attrition due to subsequent collisions. The relationship between the optimum impact velocity and the liquid viscosity and surface tension is illustrated. The effect of varying the angle of impact on the coalescence/attrition behaviour is also reported. (DX 187, 340).
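The rupture criterion quoted above, written out explicitly with V the liquid bridge volume and s_c the critical separation; the contact-angle-corrected form noted in the comment is the version commonly cited in the pendular-bridge literature, not a result quoted in this abstract.

```latex
% Critical rupture separation of a pendular liquid bridge of volume V:
\[
  s_c \;\approx\; V^{1/3}
\]
% (the commonly cited form carries a contact-angle correction,
% s_c \approx (1 + \tfrac{\theta}{2})\,V^{1/3}, which reduces to the
% expression above for a perfectly wetting liquid, \theta = 0).
```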
Abstract:
Deformation microstructures in two batches of commercially pure copper (A and B) of very similar composition have been studied after rolling reductions from 5% to 95%. X-ray diffraction, optical metallography, scanning electron microscopy in the back-scattered mode, and transmission and scanning electron microscopy have been used to examine the deformation microstructure. At low strains (~10%) the deformation is accommodated by uniform octahedral slip. Microbands, which occur as sheet-like features usually on the {111} slip planes, are formed after 10% reduction. The misorientations between microbands and the matrix are usually small (1-2°) and the dislocations within the bands suggest that a single slip system has been operative. The number of microbands increases with strain; they start to cluster and rotate after 60% reduction and, after 90%, they become almost perfectly aligned with the rolling direction. There were no detectable differences in deformation microstructure between the two materials up to a deformation level of 60%, but subsequently copper B started to develop shear bands which became very profuse by 90% reduction. By contrast, copper A at this stage of deformation developed a smooth laminated structure. This difference in the deformation microstructures has been attributed to traces of an unknown impurity in B which inhibit recovery of work hardening. The preferred orientations of both were typical of deformed copper, although the presence of shear bands was associated with a slightly weaker texture. The effects of rolling temperature and grain size on deformation microstructure were also investigated. It was concluded that lowering the rolling temperature or increasing the initial grain size encourages the material to develop shear bands after heavy deformation. Recovery and recrystallization have been studied in both materials during annealing. During recrystallization the growth of new grains showed quite different characteristics in the two cases. Where shear bands were present, these acted as nucleation sites and produced a wide spread of recrystallized grain orientations; the resulting annealing textures were very weak. In the absence of shear bands, nucleation occurs by a remarkably long-range bulging process which creates the cube orientation and an intensely sharp annealing texture. Cube-oriented regions occur in long bands of highly elongated and well-recovered cells which contain long-range cumulative misorientations. They are transition bands with structural characteristics ideally suited for nucleation of recrystallization. Shear banding inhibits the cube texture both by creating alternative nuclei and by destroying the microstructural features necessary for cube nucleation.
Abstract:
Plantain (Banana, Musa AAB) is a widely grown but commercially underexploited tropical fruit. This study demonstrates the processing of plantain to flour and extends its use and convenience as a constituent of bread, cake and biscuit. Plantain was peeled, dried and milled to produce flour. Proximate analysis was carried out on the flour to determine the food composition. Drying at temperatures below 70°C produced light-coloured plantain flour. Experiments were carried out to determine the mechanism of drying, the heat and mass transfer coefficients, and the effect of air velocity, temperature and cube size on the rate of drying of plantain cubes. The drying was diffusion controlled. Pilot-scale drying of plantain cubes in a cabinet dryer showed no significant increase of drying rate above 70°C. In the temperature range found most suitable for plantain drying (i.e. 60 to 70°C) the total drying time was adequately predicted using a modified equation based on Fick's Law, provided the cube temperature was taken to be about 5°C below the actual drying air temperature. Studies of the baking properties of plantain flour revealed that plantain flour can be substituted for strong wheat flour up to 15% for bread making and up to 50% for madeira cake. A shortcake biscuit was produced using 100% plantain flour and test-marketed. Detailed economic studies showed that the production of plantain fruit and its processing into flour would be economically viable in Nigeria when the flour is sold at the wholesale price of N0.65 per kilogram, provided a minimum sale of 25% plantain suckers. There is a need for government subsidy if plantain flour is to compete with imported wheat flour. The broader economic benefits accruing from the processing of plantain fruit into flour and its use in bakery products include employment opportunity, savings in foreign exchange and stimulus to home agriculture.
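A minimal sketch of the diffusion-controlled drying model referred to above: the standard Crank series solution for the moisture ratio of a cube drying from all faces, from which a drying time to a target moisture ratio can be read off. The diffusivity, cube size and number of series terms are illustrative values, not the thesis data.

```python
import numpy as np

def moisture_ratio_cube(D, L, t, terms=50):
    """Crank series solution for diffusion out of a cube of side L (m), with
    constant diffusivity D (m^2/s), drying from all faces and negligible
    surface resistance: MR_cube = MR_slab**3, where MR_slab is the 1-D series."""
    n = np.arange(terms)
    slab = (8.0 / np.pi**2) * np.sum(
        np.exp(-((2 * n + 1) ** 2) * np.pi**2 * D * t / L**2) / (2 * n + 1) ** 2
    )
    return slab**3

# Illustrative values: 1 cm cube, D = 5e-10 m^2/s, moisture ratio after 4 hours.
print(moisture_ratio_cube(D=5e-10, L=0.01, t=4 * 3600))
```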