10 results for colours
in CentAUR: Central Archive University of Reading - UK
Abstract:
A rapid capillary electrophoresis method was developed to determine simultaneously the artificial sweeteners, preservatives and colours used as additives in carbonated soft drinks. Resolution between all additives occurring together in soft drinks was achieved within a 15-min run time by employing the micellar electrokinetic chromatography mode with a 20 mM carbonate buffer at pH 9.5 as the aqueous phase and 62 mM sodium dodecyl sulfate as the micellar phase. By using a diode-array detector to monitor the UV-visible range (190-600 nm), the identity of sample components, suggested by migration time, could be confirmed by spectral matching against standards.
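The spectral-matching step described above — confirming a peak's identity by comparing its full UV-visible spectrum against spectra of standards — can be sketched as a simple correlation search. The substance names, Gaussian band shapes and wavelengths below are illustrative stand-ins, not data from the study:

```python
import numpy as np

# Wavelength grid covering the monitored UV-visible range (190-600 nm)
wavelengths = np.arange(190, 601, 5)

def fake_spectrum(peak_nm, width):
    """Gaussian band as a stand-in for a real UV-visible spectrum."""
    return np.exp(-((wavelengths - peak_nm) ** 2) / (2 * width ** 2))

# Hypothetical reference library of standards (names are illustrative)
library = {
    "aspartame": fake_spectrum(210, 15),
    "sodium benzoate": fake_spectrum(225, 12),
    "sunset yellow": fake_spectrum(480, 40),
}

def match_spectrum(measured, library):
    """Return the library entry with the highest Pearson correlation."""
    scores = {name: np.corrcoef(measured, ref)[0, 1]
              for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# A slightly shifted, noisy 'measured' peak resembling the colour standard
measured = fake_spectrum(482, 38) + \
    np.random.default_rng(0).normal(0, 0.01, wavelengths.size)
name, r = match_spectrum(measured, library)
print(name, round(r, 3))  # best match should be 'sunset yellow'
```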
Abstract:
The Neolithic chambered tombs of Bohuslan on the west coast of Sweden were built out of locally occurring raw materials. These exhibit a wide variety of colours, textures and mineral inclusions, and all were used to contrive a series of striking visual effects. Certain of these would have been apparent to the casual observer but others would only have been apparent to someone inside the passage or the burial chamber. There is no evidence that the materials were organized according to a single scheme. Rather, they permitted a series of improvisations, so that no two monuments were exactly alike. The effects that they created are compared with those found in megalithic art where the design elements were painted or carved, but in Bohuslan all the designs were created using the natural properties of the rock.
Abstract:
Three ochre samples (A (orange-red in colour), B (red) and C (purple)) from Clearwell Caves (Gloucestershire, UK) have been examined using an integrated analytical methodology based on the techniques of IR and diffuse reflectance UV-visible-NIR spectroscopy, X-ray diffraction, elemental analysis by ICP-AES and particle size analysis. It is shown that the chromophore in each case is haematite. The differences in colour may be accounted for by (i) different mineralogical and chemical composition in the case of the orange ochre, where higher levels of dolomite and copper are seen and (ii) an unusual particle size distribution in the case of the purple ochre. When the purple ochre was ground to give the same particle size distribution as the red ochre then the colours of the two samples became indistinguishable. An analysis has now been completed of a range of ochre samples with colours from yellow to purple from the important site of Clearwell Caves. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Reports a study of how contrast can be established when using colours frequently found in everyday environments, and of how different adjacent colours must be in terms of chromaticity, saturation and/or hue for fully sighted people and most visually impaired people to discern a difference between them. Considers where within a building contrast would have the greatest benefit, and relates the philosophy behind the design procedures and decisions to these objectives.
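One widely used quantitative measure of luminance contrast between adjacent colours — not necessarily the measure used in this study — is the WCAG contrast ratio, computed from the relative luminance of the two sRGB colours:

```python
def srgb_to_linear(c):
    """Linearize an 8-bit sRGB channel (0-255) per the sRGB standard."""
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (R, G, B) colour, 0.0 (black) to 1.0 (white)."""
    r, g, b = (srgb_to_linear(ch) for ch in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(rgb1),
                     relative_luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0 (maximum)
print(round(contrast_ratio((255, 255, 0), (255, 255, 255)), 1))  # 1.1 (poor)
```

The two example pairs illustrate why hue difference alone is not enough for visually impaired observers: yellow against white differs strongly in hue but barely in luminance.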
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project. 1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. 
The basic proof of concept can be achieved in principle using a single camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required. 2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages as well as in different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed to identify weed species by Murray State and Reading Universities with advice from The Arable Group. A wide range of pattern recognition and in particular Bayesian Networks will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms. Using such a list of features observable using our machine vision system, we will determine those that can be used to distinguish weed species of interest. 3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. 
Natural infestations will be mapped in the fields but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced. If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security.
The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines. Objectives: i. To develop a prototype machine vision system for automated image capture during agricultural field operations; ii. To prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii. To generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
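As a minimal illustration of the kind of colour-based plant/background separation such a machine vision system starts from, the excess-green index is a common first step for segmenting green vegetation from soil in per-pixel RGB measurements. This is a generic technique sketched under our own assumptions, not the project's actual weed identification algorithm:

```python
import numpy as np

def excess_green(rgb_image):
    """Excess-green index ExG = 2g - r - b on chromatic coordinates,
    a common first step for separating green plants from soil."""
    img = rgb_image.astype(float)
    total = img.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = (img[..., i] / total for i in range(3))
    return 2 * g - r - b

def plant_mask(rgb_image, threshold=0.1):
    """Boolean mask of likely vegetation pixels (threshold is illustrative)."""
    return excess_green(rgb_image) > threshold

# Toy 2x2 image: one green 'plant' pixel, three brown 'soil' pixels
img = np.array([[[30, 180, 40], [120, 90, 60]],
                [[110, 85, 55], [115, 88, 58]]], dtype=np.uint8)
mask = plant_mask(img)
print(mask)  # only the top-left pixel should be flagged as vegetation
```

A real system would follow segmentation with the shape, vein-structure and texture features mentioned above to distinguish weed species from crop.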
Abstract:
We present results from 30 nights of observations of the open cluster NGC 7789 with the Wide Field Camera on the Isaac Newton Telescope, La Palma. From ~900 epochs, we obtained light curves and Sloan r'-i' colours for ~33,000 stars, with ~2400 stars having better than 1 per cent precision. We expected to detect ~2 transiting hot Jupiter planets if 1 per cent of stars host such a companion and a typical hot Jupiter radius is ~1.2 R_J. We find 24 transit candidates, 14 of which we can assign a period. We rule out the transiting planet model for 21 of these candidates using various robust arguments. For two candidates, we are unable to decide on their nature, although it seems most likely that they are eclipsing binaries as well. We have one candidate exhibiting a single eclipse, for which we derive a radius of 1.81 (+0.09, -0.00) R_J. Three candidates remain that require follow-up observations in order to determine their nature.
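Candidate radii like the one quoted above follow from the standard transit-depth relation, depth ≈ (Rp/R*)², which neglects limb darkening. A minimal sketch, with an illustrative stellar radius and depth rather than values from the survey:

```python
import math

R_SUN_IN_RJUP = 9.731  # solar radius expressed in Jupiter radii (approx.)

def planet_radius_rj(depth, stellar_radius_rsun):
    """Planet radius in Jupiter radii from a fractional transit depth,
    using depth ~ (Rp / R*)^2 with limb darkening ignored."""
    return stellar_radius_rsun * R_SUN_IN_RJUP * math.sqrt(depth)

# Illustrative numbers only: a 1 per cent deep transit of a Sun-like star
print(round(planet_radius_rj(0.01, 1.0), 2))  # -> 0.97 (in R_J)
```

This is why ~1 per cent photometric precision is the survey's stated benchmark: a hot Jupiter of ~1.2 R_J around a Sun-like star produces a dip of only about 1.5 per cent.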
Abstract:
Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results.
The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
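The information transfer rate used in such comparisons is conventionally computed with the Wolpaw formula from the number of selectable targets, the classification accuracy, and the time per selection; whether this exact variant was used here is an assumption. A sketch:

```python
import math

def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate in bits per minute.

    Assumes n_classes equiprobable targets and that errors are
    distributed uniformly over the remaining n_classes - 1 targets.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:           # at or below chance: no information conveyed
        return 0.0
    bits = math.log2(n)
    if p < 1.0:                # the p == 1 terms vanish (0 * log2 0 -> 0)
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

# Illustrative numbers: a 36-target speller, 90% accuracy, 10 s per selection
print(round(wolpaw_itr(36, 0.9, 10.0), 2))  # -> 25.13
```

Note that ITR rewards speed as well as accuracy, which is why a less fatiguing stimulus pattern only needs accuracy "as good as" the face pattern to be the better design overall.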
Abstract:
Colour relationalism holds that the colours are constituted by relations to subjects. Anti-relationalists have claimed that this view stands in stark contrast to our phenomenally-informed, pre-theoretic intuitions. Is this claim right? Cohen and Nichols’ recent empirical study suggests not, as about half of their participants seemed to be relationalists about colour. Despite Cohen and Nichols’ study, we think that the anti-relationalist’s claim is correct. We explain why there are good reasons to suspect that Cohen and Nichols’ experimental design skewed their results in favour of relationalism. We then run an improved study and find that most of our participants seem to be anti-relationalists. We find some other interesting things too. Our results suggest that the majority of ordinary people find it no less intuitive that colours are objective than that shapes are objective. We also find some evidence that when those with little philosophical training are asked about the colours of objects, their intuitions about colour and shape cases are similar, but when asked about people’s colour ascriptions, their intuitions about colour and shape cases differ.