14 results for 2-D Motion Analysis
in Aston University Research Archive
Abstract:
A preliminary study by Freeman et al (1996b) suggested that when complex patterns of motion elicit impressions of 2-dimensionality, odd-item-out detection improves when targets can be differentiated on the basis of surface properties. Their results can be accounted for if it is supposed that observers have efficient access to 3-D surface descriptions while access to 2-D motion descriptions is restricted. To test this hypothesis, a standard search technique was employed in which targets could be discriminated on the basis of slant sign. In one experiment, slant impressions were induced through the summing of deformation and translation components; in a second, through the summing of shear and translation components. Neither showed any evidence of efficient access. A third experiment explored the possibility that access to these representations had been hindered by a lack of grouping between the stimuli. Attempts to improve grouping failed to produce convincing evidence in support of the hypothesis. An alternative explanation is that complex patterns of motion are simply not processed simultaneously. Psychophysical and physiological studies have, however, suggested that multiple mechanisms selective for complex motion do exist. Using a subthreshold summation technique I found evidence supporting the notion that complex motions are processed in parallel. Furthermore, in a spatial summation experiment, coherence thresholds were measured for displays containing different numbers of complex motion patches. Consistent with the idea that complex motion processing proceeds in parallel, increases in the number of motion patches were seen to decrease thresholds, both for expansion and rotation. Moreover, the rates of decrease were higher than those typically expected from probability summation, implying that mechanisms are available which can pool signals from spatially distinct complex motion flows.
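The probability-summation benchmark invoked above can be sketched numerically. A minimal illustration (the Weibull psychometric function, its parameters, and the high-threshold independence assumption are mine, not the thesis's actual model): combining n independently detected patches predicts thresholds that fall as n raised to -1/beta, so steeper declines are the signature of genuine pooling.

```python
import numpy as np

# Probability summation under a high-threshold model with a Weibull
# psychometric function: p_n(c) = 1 - (1 - p_1(c))**n.
# alpha (single-patch threshold) and beta (slope) are illustrative.

def weibull(c, alpha=1.0, beta=3.0):
    # probability of detecting one patch at coherence c
    return 1.0 - np.exp(-(c / alpha) ** beta)

def threshold(n, alpha=1.0, beta=3.0, crit=0.5):
    # coherence at which n independent patches are jointly detected
    # with probability `crit`, found on a fine coherence grid
    cs = np.linspace(0.01, 2.0, 20000)
    p = 1.0 - (1.0 - weibull(cs, alpha, beta)) ** n
    return cs[np.searchsorted(p, crit)]
```

Under these assumptions threshold(n)/threshold(1) tracks n**(-1/beta) (about 0.79 for n = 2, beta = 3); spatial-summation slopes reliably steeper than this are what the abstract takes as evidence for mechanisms pooling across patches.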
Abstract:
Distributed representations (DR) of cortical channels are pervasive in models of spatio-temporal vision. A central idea that underpins current innovations of DR stems from the extension of 1-D phase into 2-D images. Neurophysiological evidence, however, provides tenuous support for a quadrature representation in the visual cortex, since even-phase visual units are associated with broader orientation tuning than odd-phase visual units (J. Neurophys., 88, 455–463, 2002). We demonstrate that applying the steering theorems to a 2-D definition of phase afforded by the Riesz Transform (IEEE Trans. Sig. Proc., 49, 3136–3144), extended to include a Scale Transform, allows one to interpolate smoothly across 2-D phase and pass from circularly symmetric to orientation-tuned visual units, and from more narrowly tuned odd-symmetric units to even ones. Steering across 2-D phase and scale can be orthogonalized via a linearizing transformation. Using the tilt after-effect as an example, we argue that the effects of visual adaptation can be better explained via an orthogonal rather than a channel-specific representation of visual units, because the orthogonal representation explicitly accounts for isotropic and cross-orientation adaptation effects, from which both direct and indirect tilt after-effects can be explained.
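The Riesz-transform definition of 2-D phase referred to here has a compact frequency-domain construction (the monogenic signal). This sketch is my own minimal illustration, not the paper's steering implementation:

```python
import numpy as np

def riesz(img):
    # Riesz transform via its frequency response (-i*u/|k|, -i*v/|k|);
    # together with the original (even) image the two odd components
    # form the monogenic signal, giving a 2-D local amplitude and phase.
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    mag = np.hypot(u, v)
    mag[0, 0] = 1.0                      # avoid division by zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(-1j * u / mag * F))
    r2 = np.real(np.fft.ifft2(-1j * v / mag * F))
    amplitude = np.sqrt(img**2 + r1**2 + r2**2)
    phase = np.arctan2(np.hypot(r1, r2), img)   # local 2-D phase
    return r1, r2, amplitude, phase
```

For a pure horizontal sinusoid, r1 is the quadrature (cosine) partner of the input and the local amplitude is flat; this even/odd pairing is what the steering argument generalises across 2-D phase and scale.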
Abstract:
Motion discontinuities can signal object boundaries where few or no other cues, such as luminance, colour, or texture, are available. Hence, motion-defined contours are an ecologically important counterpart to luminance contours. We developed a novel motion-defined Gabor stimulus to investigate the nature of neural operators analysing visual motion fields, in order to draw parallels with known luminance operators. Luminance-defined Gabors have been successfully used to discern the spatial extent and spatial-frequency specificity of possible visual contour detectors. We now extend these studies into the motion domain. We define a stimulus using limited-lifetime moving dots whose velocity is described over 2-D space by a Gabor pattern, surrounded by randomly moving dots. Participants were asked to determine whether the orientation of the Gabor pattern (and hence of the motion contours) was vertical or horizontal in a 2AFC task, and the proportion of correct responses was recorded. We found that with practice participants became highly proficient at this task, able in certain cases to reach 90% accuracy with only 12 limited-lifetime dots. However, for both practised and novice participants we found that the ability to detect a single boundary saturates with the size of the Gaussian envelope of the Gabor at approximately 5 deg full-width at half-height. At this optimal size we then varied spatial frequency and found that performance was best at the lowest measured spatial frequency (0.1 cycle deg-1) and steadily decreased at higher spatial frequencies, suggesting that motion contour detectors may be specifically tuned to a single, isolated edge.
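The motion-defined Gabor can be written down directly. A sketch of the velocity field (my reading of the stimulus description, using the reported optima, roughly a 0.1 cycle/deg carrier and a 5 deg FWHM envelope, i.e. sigma of about 2.1 deg, as defaults; not the authors' code):

```python
import numpy as np

def gabor(x, y, sf=0.1, sigma=2.1, ori=0.0):
    # signed speed assigned to a limited-lifetime dot at (x, y) in deg:
    # a sinusoidal carrier (sf in cycles/deg, orientation `ori` in rad)
    # under an isotropic Gaussian envelope (sigma in deg)
    xr = x * np.cos(ori) + y * np.sin(ori)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.sin(2 * np.pi * sf * xr)

# sample the velocity field on a 10 x 10 deg patch
xs, ys = np.meshgrid(np.linspace(-5, 5, 65), np.linspace(-5, 5, 65))
speed = gabor(xs, ys)
```

Dots inside the envelope drift at `speed` while surround dots move randomly; rotating `ori` by 90 deg gives the other alternative of the 2AFC orientation judgement.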
Abstract:
Preliminary work is reported on 2-D and 3-D microstructures written directly with a Yb:YAG 1026 nm femtosecond (fs) laser on bulk chemical vapour deposition (CVD) single-crystalline diamond. Smooth graphitic lines and other structures were written on the surface of a CVD diamond sample with a thickness of 0.7 mm under low laser fluences. This capability opens up the opportunity for making electronic devices and micro-electromechanical structures on diamond substrates. The fabrication process was optimised by testing a range of laser energies at a 100 kHz repetition rate with sub-500 fs pulses. These graphitic lines and structures have been characterised using optical microscopy, Raman spectroscopy, X-ray photoelectron spectroscopy and atomic force microscopy. Using these analysis techniques, the formation of sp2 and sp3 bonds is explored and the ratio between sp2 and sp3 bonds after fs laser patterning is quantified. We present the early findings from this study and characterise the relationship between graphitic line formation and the different fs laser exposure conditions. © 2012 Taylor & Francis.
Abstract:
A series of N1-benzylidene pyridine-2-carboxamidrazone anti-tuberculosis compounds has been evaluated for cytotoxicity using human mononuclear leucocytes (MNL) as target cells. All eight compounds were significantly more toxic than the dimethyl sulphoxide control and isoniazid (INH), with the exception of a 4-methoxy-3-(2-phenylethyloxy) derivative, which was not significantly different in toxicity from INH. The most toxic agent was an ethoxy derivative, followed by the 3-nitro, 4-methoxy, dimethylpropyl, 4-methylbenzyloxy, 3-methoxy-4-(2-phenylethyloxy) and 4-benzyloxy derivatives in rank order. In comparison with the effect of selected carboxamidrazone agents on cells alone, the presence of either N-acetyl cysteine (NAC) or glutathione (GSH) caused a significant reduction in the toxicity of INH, as well as of the 4-benzyloxy derivative, although both increased the toxicity of a 4-N,N-dimethylamino-1-naphthylidene and a 2-t-butylthio derivative. The derivatives from this and three previous studies were subjected to computational analysis in order to derive equations designed to establish quantitative structure-activity relationships for these agents. Twenty-five compounds were thus resolved into two groups (1 and 2), which on analysis yielded equations with r2 values in the range 0.65-0.92. Group 1 shares a common mode of toxicity related to hydrophobicity, where cytotoxicity peaked at a logP of 3.2, while Group 2 toxicity was strongly related to ionisation potential. The presence of thiols such as NAC and GSH both promoted and attenuated toxicity in selected compounds from Group 1, suggesting that secondary mechanisms of toxicity were operating. These studies will facilitate the design of future low-toxicity, high-activity anti-tubercular carboxamidrazone agents. © 2003 Elsevier Science B.V. All rights reserved.
Abstract:
The aim of the study is to characterize local muscle motion in individuals undergoing whole-body mechanical stimulation. We also aim to evaluate how subject positioning modifies vibration damping, altering the local mechanical stimulus. Vibrations were delivered to subjects by means of a vibrating platform, while stimulation frequency was increased linearly from 15 to 60 Hz. Two different subject postures were analysed. Platform and muscle motion were monitored using tiny MEMS accelerometers; a contralateral analysis was also presented. Muscle motion analysis revealed typical displacement trajectories: motion components were found to be neither purely sinusoidal nor in phase with each other. Results also revealed a mechanical resonant-like behaviour at some muscles, similar to a second-order system response. Resonance frequencies and damping factors depended on the subject and their positioning. Proper mechanical stimulation can maximize muscle spindle solicitation, which may produce a more effective muscle activation. © 2010 M. Cesarelli et al.
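The "second-order system response" mentioned here has a standard closed form. A hedged illustration (the natural frequency and damping ratio are placeholder values, not measurements from the study):

```python
import numpy as np

def transmissibility(f, fn=20.0, zeta=0.2):
    # magnitude response of a base-excited mass-spring-damper:
    # |H| peaks near the natural frequency fn and rolls off above it
    r = f / fn
    return np.sqrt((1 + (2 * zeta * r) ** 2) /
                   ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

freqs = np.arange(15, 61)          # the 15-60 Hz platform sweep
gain = transmissibility(freqs)
```

Fitting fn and zeta per muscle and posture to curves like this is one way to summarise the resonance frequencies and damping factors the abstract reports as subject- and posture-dependent.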
Abstract:
In stereo vision, regions with ambiguous or unspecified disparity can acquire perceived depth from unambiguous regions. This has been called stereo capture, depth interpolation or surface completion. We studied some striking induced depth effects suggesting that depth interpolation and surface completion are distinct stages of visual processing. An inducing texture (2-D Gaussian noise) had sinusoidal modulation of disparity, creating a smooth horizontal corrugation. The central region of this surface was replaced by various test patterns whose perceived corrugation was measured. When the test image was horizontal 1-D noise, shown to one eye or to both eyes without disparity, it appeared corrugated in much the same way as the disparity-modulated (DM) flanking regions. But when the test image was 2-D noise, or vertical 1-D noise, little or no depth was induced. This suggests that horizontal orientation was a key factor. For a horizontal sine-wave luminance grating, strong depth was induced, but for a square-wave grating, depth was induced only when its edges were aligned with the peaks and troughs of the DM flanking surface. These and related results suggest that disparity (or local depth) propagates along horizontal 1-D features, and then a 3-D surface is constructed from the depth samples acquired. The shape of the constructed surface can be different from the inducer, and so surface construction appears to operate on the results of a more local depth propagation process.
Abstract:
The effectiveness of rapid and controlled heating of intact tissue to inactivate native enzymatic activity and prevent proteome degradation has been evaluated. Mouse brains were bisected immediately following excision, with one hemisphere being heat treated followed by snap freezing in liquid nitrogen while the other hemisphere was snap frozen immediately. Sections were cut by cryostatic microtome and analyzed by MALDI-MS imaging and minimal label 2-D DIGE, to monitor time-dependent relative changes in intensities of protein and peptide signals. Analysis by MALDI-MS imaging demonstrated that the relative intensities of markers varied across a time course (0-5 min) when the tissues were not stabilized by heat treatment. However, the same markers were seen to be stabilized when the tissues were heat treated before snap freezing. Intensity profiles for proteins indicative of both degradation and stabilization were generated when samples of treated and nontreated tissues were analyzed by 2-D DIGE, with protein extracted before and after a 10-min warming of samples. Thus, heat treatment of tissues at the time of excision is shown to prevent subsequent uncontrolled degradation of tissues at the proteomic level before any quantitative analysis, and to be compatible with downstream proteomic analysis.
Abstract:
The fluid–particle interaction inside a 150 g/h fluidised bed reactor is modelled. The biomass particle is injected into the fluidised bed and the momentum transport from the fluidising gas and fluidised sand is modelled. The Eulerian approach is used to model the bubbling behaviour of the sand, which is treated as a continuum. The particle motion inside the reactor is computed using drag laws dependent on the local volume fraction of each phase, according to the literature. FLUENT 6.2 has been used as the modelling framework for the simulations, with a completely revised drag model in the form of a user-defined function (UDF) to calculate the forces exerted on the particle as well as its velocity components. 2-D and 3-D simulations are tested and compared. The study is the first part of a complete pyrolysis model for fluidised bed reactors.
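The volume-fraction-dependent drag laws referred to can be illustrated with a common literature form (a Wen-Yu-style voidage correction applied to Schiller-Naumann single-sphere drag). This is a sketch of the general idea only, not the revised UDF of this study:

```python
import math

def drag_coefficient(re):
    # Schiller-Naumann single-sphere drag coefficient
    if re < 1000.0:
        return 24.0 / re * (1.0 + 0.15 * re ** 0.687)
    return 0.44

def drag_force(rho_g, mu_g, eps_g, d_p, u_rel):
    # drag on one particle of diameter d_p in gas of density rho_g and
    # viscosity mu_g, with local gas volume fraction eps_g and
    # gas-particle slip velocity u_rel; the eps_g**-2.65 factor is the
    # Wen-Yu correction for the presence of neighbouring particles
    re = eps_g * rho_g * d_p * abs(u_rel) / mu_g
    cd = drag_coefficient(re)
    area = math.pi * d_p ** 2 / 4.0
    return 0.5 * cd * rho_g * area * u_rel * abs(u_rel) * eps_g ** -2.65
```

In a coupled simulation this force, evaluated from the local Eulerian gas and sand fields at the particle position, updates the particle velocity components each time step.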
Abstract:
This thesis describes the development of a complete data visualisation system for large tabular databases, such as those commonly found in a business environment. A state-of-the-art 'cyberspace cell' data visualisation technique was investigated and a powerful visualisation system using it was implemented. Although allowing databases to be explored and conclusions drawn, it had several drawbacks, the majority of which were due to the three-dimensional nature of the visualisation. A novel two-dimensional generic visualisation system, known as MADEN, was then developed and implemented, based upon a 2-D matrix of 'density plots'. MADEN allows an entire high-dimensional database to be visualised in one window, while permitting close analysis in 'enlargement' windows. Selections of records can be made and examined, and dependencies between fields can be investigated in detail. MADEN was used as a tool for investigating and assessing many data processing algorithms, firstly data-reducing (clustering) methods, then dimensionality-reducing techniques. These included a new 'directed' form of principal components analysis, several novel applications of artificial neural networks, and discriminant analysis techniques which illustrated how groups within a database can be separated. To illustrate the power of the system, MADEN was used to explore customer databases from two financial institutions, resulting in a number of discoveries which would be of interest to a marketing manager. Finally, the database of results from the 1992 UK Research Assessment Exercise was analysed. Using MADEN allowed both universities and disciplines to be graphically compared, and supplied some startling revelations, including empirical evidence of the 'Oxbridge factor'.
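The "matrix of density plots" idea can be sketched in a few lines (my illustration of the general technique, not the MADEN implementation): one 2-D histogram per pair of fields, tiled into a grid that shows the whole table in one window.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 4))     # hypothetical 4-field database

bins = 16
n_fields = data.shape[1]
density = np.empty((n_fields, n_fields, bins, bins))
for i in range(n_fields):
    for j in range(n_fields):
        # each cell of the grid is a 2-D density plot of fields i vs j
        density[i, j], _, _ = np.histogram2d(data[:, i], data[:, j],
                                             bins=bins)
```

Selections of records then amount to re-binning a row subset, and an "enlargement" window is the same histogram computed at a finer bin resolution over a restricted range.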
Abstract:
This study is concerned with labour productivity in traditional house building in Scotland. Productivity is a measure of the effective use of resources and provides vital benefits that can be combined in a number of ways. The introduction gives the background to two Scottish house building sites (Blantyre and Greenfield) that were surveyed by the Building Research Establishment (BRE) activity sampling method to provide the data for the study. The study had two main objectives: (1) summary data analysis of average manhours per house across all the houses on the site, and (2) detailed data analysis of average manhours for each house block on the site. The introduction also provides a literature review related to the objectives. The method is outlined in Chapter 2, the sites are discussed in Chapter 3, and Chapter 4 covers the application of the method on each site and a method development made in the study. The summary data analysis (Chapter 5) compares Blantyre and Greenfield with two previous BRE surveys in England. The main detailed data analysis consisted of three forms (Chapters 6, 7 and 8), each applied to a set of operations. The three forms of analysis compared variations in average manhours per house for each house block on the site with: (1) block construction order, (2) the average number of separate visits per house made by operatives to each block to complete an operation, and (3) the average number of different operatives per house employed on an operation in each block. Three miscellaneous items of detailed data analysis are discussed in Chapter 9. The conclusions to the whole study state that considerable variations in manhours for repeated operations were discovered, that the numbers of visits by operatives to complete operations were large, and that the number of different operatives employed in some operations was a factor related to productivity.
A critique of the activity sampling method suggests that the data produced is reliable in summary form and can give a good context for more detailed data collection. Future work could take the form of selected operations, within the context of an activity sampling survey, that would be intensively surveyed by other methods.
Abstract:
The aim of this work is to contribute to the analysis and characterization of training with whole body vibration (WBV) and the resultant neuromuscular response. WBV aims to mechanically activate muscle by eliciting stretch reflexes. Generally, surface electromyography (EMG) is utilized to assess the muscular response elicited by vibrations. However, EMG analysis could lead to erroneous conclusions if not accurately filtered. Tiny and lightweight MEMS accelerometers were found helpful in monitoring muscle motion. Displacements were estimated by integrating the acceleration data twice, after removal of the contributions of gravity and small postural subject adjustments. Results showed the relevant presence of motion artifacts in EMG recordings, a high correlation between muscle motion and EMG activity, and that resonance frequencies and damping factors depended on the subject and their positioning on the vibrating platform. Stimulation at the resonant frequency maximizes muscle lengthening and, in turn, muscle spindle solicitation, which may produce more muscle activation. Local mechanical stimulus characterization (i.e., muscle motion analysis) could be meaningful in discovering proper muscle stimulation and may contribute to suggesting appropriate and effective WBV exercise protocols. ©2009 IEEE.
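The displacement-from-acceleration step described here can be sketched on synthetic data (the signal and sampling rate are illustrative, not the study's recordings): subtract the gravity/slow-postural component, then integrate twice, de-trending between integrations to limit drift.

```python
import numpy as np

fs = 1000.0                               # sampling rate, Hz (assumed)
t = np.arange(0, 2, 1 / fs)
# synthetic accelerometer trace: gravity + a 30 Hz vibration component
acc = 9.81 + 0.5 * np.sin(2 * np.pi * 30 * t)

acc_ac = acc - acc.mean()                 # remove gravity / slow offset
vel = np.cumsum(acc_ac) / fs              # first integration -> velocity
vel -= vel.mean()                         # de-trend integration drift
disp = np.cumsum(vel) / fs                # second integration -> displacement
```

For a sinusoid of amplitude a at angular frequency w, the recovered displacement amplitude is a/w**2 (about 14 micrometres here), which indicates the scale of muscle motion such MEMS recordings resolve.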
Abstract:
Motivation: In any macromolecular polyprotic system - for example protein, DNA or RNA - the isoelectric point - commonly referred to as the pI - can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge - and thus the electrophoretic mobility - of the ampholyte sums to zero. Many modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample-preparation procedures prior to LC-MS/MS analysis. Accurate theoretical prediction of pI would therefore expedite such analysis. While pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset and their resulting performance will strongly depend on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction.
Contact: yperez@ebi.ac.uk Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
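The iterative baseline that the Results contrast with learning-based predictors can be sketched as follows: sum Henderson-Hasselbalch charge terms and bisect for the pH where the net charge crosses zero. The pKa values below are one illustrative basis set (roughly EMBOSS-like), not those of any specific benchmarked tool; sensitivity to this choice is exactly what the benchmark measures.

```python
# illustrative pKa basis set (approximate values)
PKA_POS = {'Nterm': 8.6, 'K': 10.8, 'R': 12.5, 'H': 6.5}
PKA_NEG = {'Cterm': 3.6, 'D': 3.9, 'E': 4.1, 'C': 8.5, 'Y': 10.1}

def net_charge(seq, ph):
    # Henderson-Hasselbalch fractional charge of each ionisable group
    pos = 1.0 / (1.0 + 10 ** (ph - PKA_POS['Nterm']))
    for aa, pka in PKA_POS.items():
        if aa != 'Nterm':
            pos += seq.count(aa) / (1.0 + 10 ** (ph - pka))
    neg = 1.0 / (1.0 + 10 ** (PKA_NEG['Cterm'] - ph))
    for aa, pka in PKA_NEG.items():
        if aa != 'Cterm':
            neg += seq.count(aa) / (1.0 + 10 ** (pka - ph))
    return pos - neg

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    # net charge decreases monotonically with pH, so bisect for zero
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Swapping the two pKa dictionaries for another published basis set changes the predicted pI, which is the sensitivity the benchmark quantifies.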
Abstract:
The electromagnetic design of a 1.12-MW, 18 000-r/min high-speed permanent-magnet motor (HSPMM) is carried out based on the analysis of pole number, stator slot number, rotor outer diameter, air-gap length, permanent-magnet material, thickness, and pole arc. The no-load and full-load performance of the HSPMM is investigated in this paper using the 2-D finite element method (FEM). In addition, the power losses in the HSPMM, including core loss, winding loss, rotor eddy-current loss, and air friction loss, are predicted. Based on the analysis, a prototype motor is manufactured and experimentally tested to verify the machine design.