792 results for science experiments


Relevance:

20.00%

Publisher:

Abstract:

This paper describes the development and preliminary experimental evaluation of a vision-based docking system that allows an Autonomous Underwater Vehicle (AUV) to identify and attach itself to a set of uniquely identifiable targets. These targets, docking poles, are detected using Haar rectangular features and rotation of integral images. A non-holonomic controller allows the Starbug AUV to orient itself with respect to the target whilst maintaining visual contact during the manoeuvre. Experimental results show the proposed vision system is capable of robustly identifying a pair of docking poles simultaneously in a variety of orientations and lighting conditions. Experiments in an outdoor pool show that this vision system enables the AUV to dock autonomously from a distance of up to 4 m in relatively low visibility.
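The abstract names its detection primitives without implementation detail; as a rough illustration, the sketch below computes an integral image and evaluates a simple two-rectangle Haar-like feature with it. The function names and the feature layout are illustrative, and the rotation-of-integral-images step used for the docking poles is not reproduced.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border, so ii[y, x] is the sum of
    img[:y, :x] and any rectangle sum needs only four lookups."""
    ii = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] in O(1)."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def two_rect_feature(ii, top, left, h, w):
    """A vertical two-rectangle Haar-like feature: left half minus
    right half (illustrative layout only)."""
    half = w // 2
    return (rect_sum(ii, top, left, h, half)
            - rect_sum(ii, top, left + half, h, half))

# Example: the feature responds strongly to a vertical edge.
img = np.zeros((8, 8)); img[:, 4:] = 255
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 8, 8))  # -8160: dark left, bright right
```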

Relevance:

20.00%

Publisher:

Abstract:

We report on an inter-comparison of six different hygroscopicity tandem differential mobility analysers (HTDMAs). These HTDMAs are used worldwide, in laboratories and in field campaigns, to measure the water uptake of aerosol particles, yet they had never been intercompared. After an examination of the instruments' different designs, with their advantages and drawbacks, the methods for calibration, validation and analysis are presented. Measurements of nebulised ammonium sulphate, as well as of secondary organic aerosol generated in a smog chamber, were performed. Agreement and discrepancies among the instruments and with theory are discussed, and final recommendations for a standard instrument are given as a benchmark for laboratory or field experiments, to ensure a high quality of HTDMA data.
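The core observable an HTDMA reports is the hygroscopic growth factor, the ratio of a particle's humidified mobility diameter to its dry-selected diameter. The sketch below illustrates the kind of quantity being intercompared; the instrument names and diameter readings are hypothetical, and the ammonium sulphate reference value of roughly 1.7 at 90% RH is a commonly cited nominal figure, not a number from the paper.

```python
def growth_factor(d_wet_nm: float, d_dry_nm: float) -> float:
    """Hygroscopic growth factor GF(RH) = D_wet(RH) / D_dry."""
    return d_wet_nm / d_dry_nm

# Hypothetical intercomparison check: each instrument measures the
# humidified diameter of dry-selected 100 nm ammonium sulphate at 90% RH.
REFERENCE_GF = 1.7  # assumed nominal literature value, not from the paper
readings_nm = {"HTDMA-A": 168.0, "HTDMA-B": 172.5, "HTDMA-C": 165.9}
for name, d_wet in readings_nm.items():
    gf = growth_factor(d_wet, 100.0)
    print(f"{name}: GF = {gf:.2f} ({100 * (gf / REFERENCE_GF - 1):+.1f}% vs reference)")
```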

Relevance:

20.00%

Publisher:

Abstract:

The aim of this work was to investigate ultrafine particles (< 0.1 μm) in primary school classrooms, in relation to classroom activities. The investigations were conducted in three classrooms during two measuring campaigns, which together encompassed a period of 60 days. Initial investigations showed that under the normal operating conditions of the school there were many occasions in all three classrooms when indoor particle concentrations increased significantly compared to outdoor levels. By far the highest increases in the classroom resulted from art activities (painting, gluing and drawing), at times reaching over 1.4 × 10^5 particles cm^-3. The indoor particle concentrations exceeded outdoor concentrations by approximately one order of magnitude, with a count median diameter ranging from 20-50 nm. Significant increases also occurred during cleaning activities, when detergents were used. GC-MS analysis conducted on 4 samples randomly selected from about 30 different paints and glues, as well as on the detergent used in the school, showed that d-limonene was one of the main organic compounds of the detergent; however, it was not detected in the samples of the paints and glues. Controlled experiments showed that this monoterpene, emitted from the detergent, reacted with O3 (at outdoor ambient concentrations ranging from 0.06-0.08 ppm) and formed secondary organic aerosols. Further investigations to identify other liquids that may be potential sources of the precursors of secondary organic aerosols were outside the scope of this project; however, the problem identified by this study is expected to be more widespread, since most primary schools use liquid materials for art classes, and all schools use detergents for cleaning. Further studies are therefore recommended to better understand this phenomenon and to minimize school children's exposure to ultrafine particles from these indoor sources.

Relevance:

20.00%

Publisher:

Abstract:

Clearly identifying the boundary between positive and negative document streams is a significant challenge. Several attempts have used negative feedback to address it; however, two issues arise when using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern mining based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and substantial experiments show that the proposed approach achieves encouraging performance.
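The three-way term classification lends itself to a simple sketch. Below, terms appearing only in positive documents are treated as positive specific, terms appearing only in the selected offenders as negative specific, and shared terms as general; the per-group weight revisions (boost, damp, drop) are illustrative assumptions, not the paper's actual revising strategies.

```python
from collections import Counter

def triage_terms(positive_docs, offender_docs):
    """Split terms into positive-specific, general, and negative-specific
    by where they occur (a stand-in for the paper's classification)."""
    pos_terms = Counter(t for doc in positive_docs for t in doc)
    neg_terms = Counter(t for doc in offender_docs for t in doc)
    positive_specific = set(pos_terms) - set(neg_terms)
    negative_specific = set(neg_terms) - set(pos_terms)
    general = set(pos_terms) & set(neg_terms)
    return positive_specific, general, negative_specific

def revise_weights(weights, groups, boost=1.2, damp=0.8):
    """One revising strategy per group: promote positive-specific terms,
    damp general ones, drop negative-specific ones (assumed rules)."""
    pos_spec, general, neg_spec = groups
    for t in list(weights):
        if t in pos_spec:
            weights[t] *= boost
        elif t in general:
            weights[t] *= damp
        elif t in neg_spec:
            del weights[t]
    return weights

docs_pos = [["auv", "docking", "vision"], ["vision", "control"]]
docs_neg = [["vision", "market", "stocks"]]
groups = triage_terms(docs_pos, docs_neg)
print(revise_weights({"auv": 1.0, "vision": 1.0, "stocks": 0.5}, groups))
# {'auv': 1.2, 'vision': 0.8}  -- 'stocks' is dropped as negative specific
```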

Relevance:

20.00%

Publisher:

Abstract:

Over the years, people have often held the hypothesis that negative feedback should be very useful for largely improving the performance of information filtering systems; however, no very effective models have been obtained to support this hypothesis. This paper proposes an effective model that uses negative relevance feedback, based on a pattern mining approach, to improve extracted features. This study focuses on two main issues in using negative relevance feedback: the selection of constructive negative examples to reduce the space of negative examples, and the revision of existing features based on the selected negative examples. The former selects some offender documents, where offender documents are the negative documents most likely to be classified into the positive group. The latter groups the extracted features into three categories, positive specific, general and negative specific, so that their weights can be updated separately. An iterative algorithm is also proposed to implement this approach on the RCV1 data collection, and substantial experiments show that the proposed approach achieves encouraging performance.
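Offender selection can be illustrated by scoring negative examples with the current profile and keeping the top-ranked ones, i.e. the negatives the filter would most plausibly mistake for positives. A minimal sketch under that assumption (the additive scoring function and the cutoff k are illustrative):

```python
def score(doc_terms, profile):
    """Relevance score of a document under the current term-weight profile."""
    return sum(profile.get(t, 0.0) for t in doc_terms)

def select_offenders(negative_docs, profile, k=3):
    """Keep the k negative documents the profile ranks highest: these
    'offenders' sit closest to the positive side of the boundary."""
    ranked = sorted(negative_docs, key=lambda d: score(d, profile), reverse=True)
    return ranked[:k]

profile = {"filtering": 0.9, "pattern": 0.7, "mining": 0.6}
negatives = [
    ["pattern", "mining", "geology"],    # scores 1.3: likely offender
    ["football", "league", "results"],   # scores 0.0: safely negative
    ["filtering", "water", "treatment"], # scores 0.9: likely offender
]
print(select_offenders(negatives, profile, k=2))
```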

Relevance:

20.00%

Publisher:

Abstract:

Brief overview of topics/issues of interest at the end of 2009, including: Spatial Science students undertake a variety of research projects; labs and offices on the move again; congratulations to a Surveying student project (QSEA awards).

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system. The robot navigates using solely information from the bearing sensor space.

Most existing robot navigation systems require a ground frame (a 2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. The commonly used sensors, such as laser range scanners, sonar, infrared, and vision, do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process.

Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home, but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering.

This thesis introduces a navigation method based on reflex actions in sensor space. The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterized pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles.

The trajectories of a robot using reflex navigation, like other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimized, not the distance moved on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot, and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and with experiments on a real robot.
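The reflex action described above can be sketched as gradient descent on the squared bearing-space distance to the target sensor vector. In the illustrative sketch below, landmark bearings are simulated from known landmark positions (only to stand in for the sensor), angle differences are wrapped, and the gradient is estimated numerically; the thesis's actual controller is non-holonomic and may differ in detail.

```python
import math

LANDMARKS = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]  # three landmarks, no obstacles

def bearings(pos):
    """Sensor vector: bearing of each landmark from the robot position
    (simulated here; on the robot this comes from the bearing sensor)."""
    return [math.atan2(ly - pos[1], lx - pos[0]) for lx, ly in LANDMARKS]

def wrap(a):
    """Wrap an angle difference into [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def sensor_error(pos, target_b):
    """Squared distance in bearing sensor space to the target bearings."""
    return sum(wrap(b - t) ** 2 for b, t in zip(bearings(pos), target_b))

def reflex_step(pos, target_b, step=0.2, eps=1e-4):
    """One reflex action: numerical gradient descent on the sensor error."""
    gx = (sensor_error((pos[0] + eps, pos[1]), target_b)
          - sensor_error((pos[0] - eps, pos[1]), target_b)) / (2 * eps)
    gy = (sensor_error((pos[0], pos[1] + eps), target_b)
          - sensor_error((pos[0], pos[1] - eps), target_b)) / (2 * eps)
    return (pos[0] - step * gx, pos[1] - step * gy)

target = bearings((7.0, 3.0))  # bearings observed at the goal point B
pos = (2.0, 5.0)               # start point A
for _ in range(200):
    pos = reflex_step(pos, target)
print(pos)  # should settle near (7, 3) for this landmark layout
```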

Relevance:

20.00%

Publisher:

Abstract:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined on the basis of the users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches have to deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering: they can lead to a mismatch problem.

This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering, to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of the threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy; the most likely relevant documents are assigned higher scores by the ranking function. Because a relatively small number of documents is left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly.

The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and the state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
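The two-stage architecture can be pictured as a short pipeline: a cheap profile-threshold filter discards the bulk of likely-irrelevant documents, and a pattern-based ranker scores only the survivors. In this minimal sketch both the threshold rule and the pattern scoring are simple stand-ins for the thesis's rough-set decision theory and pattern taxonomy machinery.

```python
def topic_filter(docs, profile, threshold):
    """Stage 1: drop documents whose profile score falls below the
    threshold (a stand-in for the rough-set decision rule)."""
    scored = ((sum(profile.get(t, 0.0) for t in d), d) for d in docs)
    return [d for s, d in scored if s >= threshold]

def pattern_rank(docs, patterns):
    """Stage 2: precision-oriented ranking; score each survivor by the
    weighted patterns (term sets) it contains."""
    def score(d):
        return sum(w for pat, w in patterns.items() if set(pat) <= set(d))
    return sorted(docs, key=score, reverse=True)

profile = {"aerosol": 0.8, "particle": 0.6, "school": 0.3}
patterns = {("aerosol", "particle"): 1.5, ("school", "particle"): 1.0}
stream = [
    ["aerosol", "particle", "school"],
    ["football", "league"],
    ["aerosol", "weather"],
]
survivors = topic_filter(stream, profile, threshold=0.7)
print(pattern_rank(survivors, patterns))
```

Because stage 1 already removes most of the stream, the comparatively expensive pattern matching in stage 2 runs over few documents, which is the source of the cost reduction the abstract describes.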

Relevance:

20.00%

Publisher:

Abstract:

Science has been under attack in the last thirty years, and recently a number of prominent scientists have been busy fighting back. Here, an argument is presented that the 'science wars' stem from an unreasonably strict adherence to the reductive method on the part of science, but that weakening this stance need not imply a lapse into subjectivity. One possible method for formalising the description of non-separable, contextually dependent complex systems is presented, based upon a quantum-like approach.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose an unsupervised segmentation approach, named "n-gram mutual information" (NGMI), which is used to segment Chinese documents into n-character words or phrases, using language statistics drawn from the Chinese Wikipedia corpus. The approach alleviates the tremendous effort required in preparing and maintaining manually segmented Chinese text for training purposes, and in manually maintaining ever-expanding lexicons. Previously, mutual information was used to achieve automated segmentation into 2-character words; the NGMI approach extends this to handle longer n-character words. Experiments with heterogeneous documents from the Chinese Wikipedia collection show good results.
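The statistic underlying this family of methods is the mutual information between adjacent characters: a high value indicates that the pair co-occurs far more often than chance and likely belongs to one word. The sketch below shows the classical 2-character mutual-information segmentation that the abstract says NGMI extends; the greedy threshold rule is an assumption, and NGMI's n-gram generalisation is not reproduced.

```python
import math
from collections import Counter

def build_counts(corpus_texts):
    """Character unigram and bigram counts from an unsegmented corpus."""
    uni, bi = Counter(), Counter()
    for text in corpus_texts:
        uni.update(text)
        bi.update(text[i:i + 2] for i in range(len(text) - 1))
    return uni, bi

def pmi(a, b, uni, bi, n):
    """Pointwise mutual information of adjacent characters a, b:
    log p(ab) / (p(a) p(b)), with n total characters seen."""
    p_ab = bi[a + b] / max(n - 1, 1)
    p_a, p_b = uni[a] / n, uni[b] / n
    return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

def segment(text, uni, bi, n, threshold=1.0):
    """Greedy segmentation: keep adjacent characters together when their
    PMI clears the threshold, otherwise cut (an assumed decision rule)."""
    words, word = [], text[0]
    for i in range(1, len(text)):
        if pmi(text[i - 1], text[i], uni, bi, n) >= threshold:
            word += text[i]
        else:
            words.append(word)
            word = text[i]
    words.append(word)
    return words

corpus = ["abcab", "abcac"]  # toy 'characters'; real input is Chinese text
uni, bi = build_counts(corpus)
n = sum(uni.values())
print(segment("abcab", uni, bi, n))  # ['ab', 'c', 'ab']
```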

Relevance:

20.00%

Publisher:

Abstract:

The Node-based Local Mesh Generation (NLMG) algorithm, which is free of mesh inconsistency, is one of the core algorithms in the Node-based Local Finite Element Method (NLFEM); it achieves a seamless link between mesh generation and stiffness matrix calculation, and this seamless link helps to improve the parallel efficiency of FEM. Furthermore, the key to ensuring the efficiency and reliability of NLMG is to determine the candidate satellite-node set of a central node quickly and accurately. This paper develops a Fast Local Search Method based on Uniform Bucket (FLSMUB) and a Fast Local Search Method based on Multilayer Bucket (FLSMMB), and applies them successfully to the decisive problem, i.e., determining the candidate satellite-node set of any central node in the NLMG algorithm. Using FLSMUB or FLSMMB, the NLMG algorithm becomes a practical tool for reducing the parallel computation cost of FEM. Parallel numerical experiments validate that FLSMUB and FLSMMB are each fast, reliable and efficient for the problems that suit them, and that they are especially effective for large-scale parallel computations.
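A uniform-bucket local search hashes nodes into a regular grid of cells sized to the search radius, so the candidate neighbours of a central node are found by scanning only its own cell and the adjacent cells. A minimal 2D sketch of that idea follows; the bucket sizing and the candidate test are assumptions, and the paper's FLSMUB/FLSMMB details (including the multilayer variant) are not reproduced.

```python
from collections import defaultdict

def build_buckets(points, cell):
    """Hash each node index into the uniform grid cell containing it."""
    buckets = defaultdict(list)
    for i, (x, y) in enumerate(points):
        buckets[(int(x // cell), int(y // cell))].append(i)
    return buckets

def candidate_satellites(points, buckets, cell, center, radius):
    """Candidate satellite nodes of a central node: scan the 3x3 block of
    cells around it (valid when cell >= radius) and keep nodes in range."""
    cx, cy = points[center]
    bx, by = int(cx // cell), int(cy // cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for i in buckets.get((bx + dx, by + dy), ()):
                if i != center:
                    x, y = points[i]
                    if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                        out.append(i)
    return out

pts = [(0.1, 0.1), (0.4, 0.2), (2.5, 2.5), (0.3, 0.35)]
cell = 0.5  # cell size chosen >= search radius
buckets = build_buckets(pts, cell)
print(candidate_satellites(pts, buckets, cell, center=0, radius=0.5))  # [1, 3]
```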

Relevance:

20.00%

Publisher:

Abstract:

We extended an earlier study (Vision Research, 45, 1967–1974, 2005) in which we investigated the limits at which induced blur of letter targets becomes noticeable, troublesome and objectionable. Here we used a deformable adaptive optics mirror to vary spherical defocus under three conditions: a white background with correction of astigmatism; a white background with reduction of all aberrations other than defocus; and a monochromatic background with reduction of all aberrations other than defocus. We used seven cyclopleged subjects, lines of three high-contrast letters as targets, 3–6 mm artificial pupils, and 0.1–0.6 logMAR letter sizes. Subjects used a method of adjustment to control the defocus component of the mirror to set the 'just noticeable', 'just troublesome' and 'just objectionable' defocus levels. For the white, no-adaptive-optics condition combined with the 0.1 logMAR letter size, mean 'noticeable' blur limits were ±0.30, ±0.24 and ±0.23 D at 3, 4 and 6 mm pupils, respectively. The white-adaptive-optics and monochromatic-adaptive-optics conditions reduced blur limits by 8% and 20%, respectively. Increasing pupil size from 3 to 6 mm decreased blur limits by 29%, and increasing letter size increased blur limits by 79%. The ratios of troublesome to noticeable, and of objectionable to noticeable, blur limits were 1.9 and 2.7, respectively. The study shows that the deformable mirror can be used to vary defocus in vision experiments. Overall, the results for noticeable, troublesome and objectionable blur agreed well with those of the previous study. Attempting to reduce higher-order or chromatic aberrations reduced blur limits only to a small extent.

Relevance:

20.00%

Publisher:

Abstract:

We determined the foveal Stiles-Crawford effect (SCE) as a function of accommodation stimulus up to 8 D in six young emmetropes and six young myopes, using a psychophysical two-channel Maxwellian system in which the threshold luminance increment of a 1 mm spot entering through variable positions in the pupil was determined against a background formed by a 4 mm spot entering the pupil centrally. The SCE became steeper in both groups with increasing accommodation stimulus, but with no systematic shift of the peak. Combining the data of both groups gave significant increases in directionality of 15-20% in the horizontal and vertical pupil meridians with 6 D of accommodation. However, additional experiments indicated that much of this was an artefact of higher-order aberrations and accommodative lag. Thus, there appears to be little change in the orientation or directionality of the SCE with accommodation stimulus levels up to 6 D, although it is possible that changes may occur at very high accommodation levels.
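For context, the 'directionality' reported here is conventionally the rho parameter of the standard Gaussian (parabolic in log units) model of the SCE, in which relative luminous efficiency eta falls off with pupil-entry distance r from the peak position. This is the customary parameterisation in the SCE literature, not a formula quoted from the abstract:

```latex
% Standard Stiles-Crawford parameterisation:
% relative luminous efficiency at pupil-entry position r
\eta(r) = \eta_{\max} \, 10^{-\rho\,(r - r_{\max})^{2}}
```

A steeper SCE corresponds to a larger rho, so the reported 15-20% increases in directionality mean rho growing by that fraction.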

Relevance:

20.00%

Publisher:

Abstract:

This paper, first presented at a symposium on the 'past, present and future of cultural studies', traces disciplinary changes in the study of culture from the perspective of 'cultural science', a term that was used by some of the earliest practitioners of cultural studies, including Raymond Williams. The paper goes on to describe some work of the present moment, including that on the creative industries, to show that a new version of cultural science is needed, based on evolutionary principles and in dialogue with the evolutionary approach in economics that was called for a century ago by Thorstein Veblen. This evolutionary turn, or 'cultural science 2.0', it is argued, offers a radical and challenging future for cultural studies.

Relevance:

20.00%

Publisher:

Abstract:

Intuitively, any 'bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate), the default query model was replaced by the stable distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than those of more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
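The core move, replacing a query or document's empirical term distribution with the stationary distribution of its co-occurrence Markov chain, can be sketched as follows. Assumptions: the transition matrix is built by row-normalizing co-occurrence counts with light smoothing to keep the chain ergodic, and the stationary vector is found by power iteration; the paper's exact chain construction may differ.

```python
import numpy as np

def transition_matrix(cooc, eps=1e-3):
    """Row-normalized co-occurrence counts, smoothed so every state can
    reach every other (keeps the chain ergodic, hence a unique
    stationary distribution)."""
    m = cooc + eps
    return m / m.sum(axis=1, keepdims=True)

def stationary(P, iters=1000, tol=1e-12):
    """Power iteration for the distribution pi with pi P = pi."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

# Toy 3-term vocabulary; cooc[i, j] = co-occurrence count of terms i and j.
cooc = np.array([[4.0, 2.0, 0.0],
                 [2.0, 6.0, 1.0],
                 [0.0, 1.0, 3.0]])
pi = stationary(transition_matrix(cooc))
print(pi)  # use pi, not raw term frequencies, as the query/document model
```

Ranking then proceeds with the usual language-modeling machinery, with pi standing in for the maximum-likelihood term distribution.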