432 results for Automatic term extraction
Abstract:
Object segmentation is one of the fundamental steps for a number of robotic applications such as manipulation, object detection, and obstacle avoidance. This paper proposes a visual method for incorporating colour and depth information from sequential multi-view stereo images to segment objects of interest from complex and cluttered environments. Rather than segmenting objects using information from a single frame in the sequence, we incorporate information from neighbouring views to increase the reliability of the information and improve the overall segmentation result. Specifically, dense depth information of a scene is computed using multi-view stereo. Depths from neighbouring views are reprojected into the reference frame to be segmented, compensating for imperfect depth computations in individual frames. The multiple depth layers are then combined with colour information from the reference frame to create a Markov random field that models the segmentation problem. Finally, graph-cut optimisation is employed to infer the pixels belonging to the object to be segmented. The segmentation accuracy is evaluated on images from an outdoor video sequence, demonstrating the viability of automatic object segmentation for mobile robots using monocular cameras as the primary sensor.
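The fusion step described above can be sketched minimally. This is an illustrative toy (invented 3x3 colour/depth distances and weights, not the paper's data), and a simple iterated-conditional-modes (ICM) pass stands in for the graph-cut solver the abstract names:

```python
import numpy as np

def unary_costs(colour_dist, depth_dist, w_colour=0.5, w_depth=0.5):
    """Cost of labelling each pixel 'object'; low cost = object-like."""
    return w_colour * colour_dist + w_depth * depth_dist

def icm_segment(costs, smoothness=0.3, iters=5):
    """Greedy MRF inference with a Potts pairwise term over 4-neighbours
    (a lightweight stand-in for graph-cut optimisation)."""
    h, w = costs.shape
    labels = (costs < 0.5).astype(int)  # initial labels from unaries alone
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best_lab, best_e = labels[y, x], float("inf")
                for lab in (0, 1):
                    u = costs[y, x] if lab == 1 else 1.0 - costs[y, x]
                    p = sum(smoothness
                            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= y + dy < h and 0 <= x + dx < w
                            and labels[y + dy, x + dx] != lab)
                    if u + p < best_e:
                        best_e, best_lab = u + p, lab
                labels[y, x] = best_lab
    return labels

# Toy 3x3 scene: the top-left block is close and colour-consistent (object).
colour = np.array([[0.1, 0.1, 0.9], [0.1, 0.1, 0.9], [0.9, 0.9, 0.9]])
depth = np.array([[0.2, 0.2, 0.8], [0.2, 0.2, 0.8], [0.8, 0.8, 0.8]])
seg = icm_segment(unary_costs(colour, depth))
```

The pairwise Potts term encourages neighbouring pixels to share a label, which is what lets the reprojected depth layers smooth over per-frame depth errors.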
Abstract:
Our research explores the design of networked technologies to facilitate local suburban communication and to encourage people to engage with their local community. While there are many investigations of interaction designs for networked technologies, most research utilises small exercises, workshops or other short-term studies to investigate interaction designs. However, we have found these short-term methods to be ineffective for understanding local community interaction. Moreover, we find that people are resistant to putting their time into workshops and exercises, understandably so because these are academic practices, not local community practices. Our contribution is to detail a long-term embedded design approach in which we interact with the community over the long term, in the course of normal community goings-on, with an evolving exploratory prototype. This paper discusses this embedded approach to working in the wild for extended field research.
Abstract:
In this study we set out to dissociate the developmental time course of automatic symbolic number processing and cognitive control functions in grade 1-3 British primary school children. Event-related potential (ERP) and behavioral data were collected in a physical size discrimination numerical Stroop task. Task-irrelevant numerical information was already processed automatically in grade 1. Weakening interference and strengthening facilitation indicated the parallel development of general cognitive control and automatic number processing. Relationships among ERP and behavioral effects suggest that control functions play a larger role in younger children and that the automaticity of number processing increases from grade 1 to 3.
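The interference and facilitation effects mentioned here are standard Stroop decompositions. The sketch below uses the textbook definitions with invented reaction-time means (not figures from this study) to show the developmental pattern the abstract describes:

```python
def stroop_effects(rt_congruent, rt_neutral, rt_incongruent):
    """Standard Stroop decomposition (ms): interference is the cost of an
    incongruent distractor relative to neutral; facilitation is the benefit
    of a congruent one."""
    return {"interference": rt_incongruent - rt_neutral,
            "facilitation": rt_neutral - rt_congruent}

# Hypothetical grade-1 vs grade-3 mean RTs: interference weakens with age
# while facilitation strengthens, as the abstract reports.
grade1 = stroop_effects(820, 850, 940)
grade3 = stroop_effects(600, 650, 690)
```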
Abstract:
Rule extraction from neural networks has been investigated for two decades, and there have been significant applications. Despite this level of success, rule extraction methods are generally not part of data mining tools, and a significant commercial breakthrough may still be some time away. This paper briefly reviews the state of the art and points to some of the obstacles, namely a lack of evaluation techniques in experiments and of larger benchmark data sets. A significant new development is the view that rule extraction from neural networks is an interactive process which actively involves the user. This leads to the application of assessment and evaluation techniques from information retrieval, which may in turn lead to a range of new methods.
Abstract:
Contamination of packaged foods by micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money through product recalls, compensation claims, consumer impact and subsequent loss of market share. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package; the cost of leaky packages to Australian food industries is estimated at close to AUD $35 million per year. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies that are capable of reliably detecting leaky seals and delivering products at six-sigma. This project develops non-destructive testing technology using digital imaging and sensing, combined with a differential vacuum technique, to assess the seal integrity of food packages on a high-speed production line. Flexible plastic packages are widely used and are the least expensive form of retaining the quality of the product. These packets can be used to seal, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the food content is not contaminated through contact with micro-organisms that enter as a result of air leakage. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals being sold to consumers. There are many current NDT (non-destructive testing) methods of testing the seals of flexible packages, but they are best suited to random sampling and laboratory purposes.
The three most commonly used methods are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, they are limited by their high processing time and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review. The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed to arrive at a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with good, consistent results. Electrically, the testing also provided solid results, enabling the researcher to move the project forward with a degree of confidence. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of alternative concepts in both the mechanical and electrical disciplines enables the researcher to make an informed decision. Each major mechanical and electrical component is detailed through the research and design process. The design procedure methodically works through the various major functions from both a mechanical and an electrical perspective. It opens up alternative ideas for the major components that, although sometimes not practical in this application, show that the researcher has exhausted all engineering and functionality options. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard, locally manufactured and distributed industry components.
Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines, and other areas in the non-food processing industry.
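The vacuum-decay principle underlying such testers reduces to a simple calculation. This sketch is illustrative arithmetic only (invented chamber volume and pass/fail limit, not the HSDS design): a pressure rise in a test chamber of known volume gives a leak rate Q = V · ΔP/Δt, which an in-line tester thresholds at speed:

```python
def leak_rate(delta_p_pa, volume_l, time_s):
    """Leak rate in Pa*L/s inferred from a pressure rise over a test interval."""
    return volume_l * delta_p_pa / time_s

def seal_ok(delta_p_pa, volume_l, time_s, limit_pa_l_s=1.0):
    """Pass/fail decision against an assumed leak-rate limit."""
    return leak_rate(delta_p_pa, volume_l, time_s) <= limit_pa_l_s

# A 2 L chamber rising 50 Pa in 10 s leaks at 10 Pa*L/s: a clear reject.
```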
Abstract:
This is a methodologically exemplary trial of a population-based (universal) approach to preventing depression in young people. The programme used teachers in a classroom setting to deliver cognitive behavioural problem-solving skills to a cohort of students. We have little knowledge about “best practice” for preventing depression in adolescence. Classroom-based universal approaches appear to offer advantages in recruitment rates and lack of stigmatisation over approaches that target specific groups of at-risk students. Earlier research on a universal school-based approach to preventing depression in adolescents showed promise, but employed mental health professionals to teach cognitive behavioural coping skills in small groups.1 Using such an approach routinely would be economically unsustainable. Spence’s trial, with teachers as facilitators, therefore represents a “real world” intervention that could be routinely disseminated.
Abstract:
Feature extraction and selection are critical processes in developing facial expression recognition (FER) systems. While many algorithms have been proposed for these processes, no direct comparison between texture, geometry and their fusion, or among multiple selection algorithms, has been reported for spontaneous FER. This paper addresses this issue by proposing a unified framework for a comparative study of the widely used texture (LBP, Gabor and SIFT) and geometric (FAP) features, using the Adaboost, mRMR and SVM feature selection algorithms. Our experiments on the Feedtum and NVIE databases demonstrate the benefits of fusing geometric and texture features, where SIFT+FAP shows the best performance, while mRMR outperforms Adaboost and SVM. In terms of computational time, LBP and Gabor perform better than SIFT. The optimal combination of SIFT+FAP+mRMR also exhibits state-of-the-art performance.
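The mRMR criterion that wins in this comparison can be sketched in a few lines. This is a minimal greedy version on toy discrete features (invented data, not the paper's LBP/Gabor/SIFT descriptors): relevance is mutual information with the class label, penalised by mean mutual information with already-selected features:

```python
from math import log

def mutual_info(x, y):
    """MI (nats) between two discrete sequences of equal length."""
    n = len(x)
    mi = 0.0
    for xv in set(x):
        for yv in set(y):
            pxy = sum(1 for a, b in zip(x, y) if a == xv and b == yv) / n
            if pxy > 0:
                px = x.count(xv) / n
                py = y.count(yv) / n
                mi += pxy * log(pxy / (px * py))
    return mi

def mrmr(features, labels, k):
    """Greedily select k feature indices maximising relevance - redundancy."""
    selected, remaining = [], list(range(len(features)))
    for _ in range(k):
        def score(i):
            rel = mutual_info(features[i], labels)
            red = (sum(mutual_info(features[i], features[j]) for j in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# f1 duplicates f0, so mRMR should skip it for the weaker but complementary f2.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
feats = [[0, 0, 0, 1, 1, 1, 1, 1],   # strongly tracks the label
         [0, 0, 0, 1, 1, 1, 1, 1],   # exact copy: redundant
         [0, 1, 0, 1, 0, 1, 1, 1]]   # weakly relevant, less redundant
```

The redundancy penalty is what distinguishes mRMR from a pure relevance ranking, which would happily pick the duplicate feature second.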
Abstract:
In information retrieval (IR) research, increasing focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach combining the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimate of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
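The first stages of this pipeline (query decomposition, sliding-window chunking, AR-style support counting) can be sketched on invented toy text; the window size, step and document below are assumptions, not the paper's settings:

```python
from itertools import combinations

def query_subsets(query_terms, max_size=2):
    """All non-empty subsets of the query up to max_size terms."""
    return [frozenset(c) for r in range(1, max_size + 1)
            for c in combinations(query_terms, r)]

def chunk_windows(tokens, window=3, step=1):
    """Segment a token list into overlapping sliding-window chunks."""
    return [tokens[i:i + window]
            for i in range(0, max(len(tokens) - window, 0) + 1, step)]

def association_support(subsets, chunks):
    """Count chunks where a query subset co-occurs with each other term,
    i.e. the support of (subset -> term) association rules."""
    support = {}
    for chunk in chunks:
        terms = set(chunk)
        for s in subsets:
            if s <= terms:
                for t in terms - s:
                    support[(s, t)] = support.get((s, t), 0) + 1
    return support

doc = ["machine", "learning", "ranking", "deep", "learning", "model"]
subs = query_subsets(["machine", "learning"])
supp = association_support(subs, chunk_windows(doc))
```

These raw supports would then serve as the initial estimates that the Aspect Model refines.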
Abstract:
Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework for road information modeling is proposed, for rural and urban scenarios respectively, and an integrated system has been developed for road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, both of which can be further employed to facilitate road information generation in high resolution images. Histogram thresholding is then chosen to classify road details in high resolution images, with color space transformation used for data preparation. After road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Pavement markings are then obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, and the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain a 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR.
Object-oriented image analysis methods are employed to perform feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. The support vector machine (SVM) algorithm is then applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is used as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is evaluated using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using the Bundaberg datasets, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information for both datasets has been carried out. The experiments and evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, and the false alarm rates for road surfaces and lane markings are below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
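The Otsu step used for pavement-marking binarisation is easy to sketch. The patch below is a synthetic stand-in for a filtered aerial image, not thesis data: Otsu's method picks the grey-level threshold that maximises between-class variance:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t maximising w0*w1*(mu0-mu1)^2 over levels 0..255."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                 # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Dark road surface (~50) with bright lane markings (~200).
patch = np.array([[50, 52, 200, 201], [49, 51, 199, 200]], dtype=np.uint8)
t = otsu_threshold(patch)
markings = patch >= t
```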
Abstract:
Contact lenses are a common method for the correction of refractive errors of the eye. While there have been significant advancements in contact lens designs and materials over the past few decades, the lenses still represent a foreign object in the ocular environment and may lead to physiological as well as mechanical effects on the eye. When contact lenses are placed in the eye, the ocular anatomical structures behind and in front of the lenses are directly affected. This thesis presents a series of experiments that investigate the mechanical and physiological effects of the short-term use of contact lenses on anterior and posterior corneal topography, corneal thickness, the eyelids, the tarsal conjunctiva and tear film surface quality. The experimental paradigm used in these studies was a repeated measures, cross-over design in which subjects wore various types of contact lenses on different days, and the lenses were varied in one or more key parameters (e.g. material or design). Both old and newer lens materials were investigated: soft and rigid lenses, high and low oxygen permeability materials, toric and spherical lens designs, high and low powers, and small and large diameter lenses were all used in the studies. To establish the natural variability in the ocular measurements used in the studies, each experiment also contained at least one “baseline” day on which an identical measurement protocol was followed with no contact lenses worn. In this way, changes associated with contact lens wear were considered in relation to those that occurred naturally during the 8-hour period of the experiment. In the first study, the regional distribution and magnitude of change in corneal thickness and topography were investigated in the anterior and posterior cornea after short-term use of soft contact lenses in 12 young adults using the Pentacam.
Four different types of contact lenses (Silicone Hydrogel/Spherical/–3D, Silicone Hydrogel/Spherical/–7D, Silicone Hydrogel/Toric/–3D and HEMA/Toric/–3D) of different materials, designs and powers were worn for 8 hours each, on 4 different days. The natural diurnal changes in corneal thickness and curvature were measured on two separate days before any contact lens wear. Significant diurnal changes in corneal thickness and curvature within the duration of the study were observed, and these were taken into consideration when calculating the contact lens induced corneal changes. Corneal thickness changed significantly with lens wear, and the greatest corneal swelling was seen with the hydrogel (HEMA) toric lens, with noticeable regional swelling of the cornea beneath the stabilization zones, the thickest regions of the lens. The anterior corneal surface generally showed a slight flattening with lens wear. All contact lenses resulted in central posterior corneal steepening, which correlated with the relative degree of corneal swelling. The corneal swelling induced by the silicone hydrogel contact lenses was typically less than the natural diurnal thinning of the cornea over the same period (i.e. net thinning). This highlights why it is important to consider the natural diurnal variation in corneal thickness from morning to afternoon in order to accurately interpret contact lens induced corneal swelling. In the second experiment, the relative influence of lenses of different rigidity (polymethyl methacrylate – PMMA, rigid gas permeable – RGP and silicone hydrogel – SiHy) and diameter (9.5, 10.5 and 14.0 mm) on corneal thickness, topography, refractive power and wavefront error was investigated. Four different types of contact lenses (PMMA/9.5, RGP/9.5, RGP/10.5, SiHy/14.0) were worn by 14 young healthy adults for a period of 8 hours on 4 different days. There was a clear association between fluorescein fitting pattern characteristics (i.e.
regions of minimum clearance in the fluorescein pattern) and the resulting corneal shape changes. PMMA lenses resulted in significant corneal swelling (more in the centre than the periphery) along with anterior corneal steepening and posterior flattening. RGP lenses, on the other hand, caused less corneal swelling (more in the periphery than the centre) along with the opposite effects on corneal curvature: anterior corneal flattening and posterior steepening. RGP lenses also resulted in a clinically and statistically significant decrease in corneal refractive power (ranging from 0.99 to 0.01 D), large enough to affect vision and require adjustment of the lens power. Wavefront analysis also showed a significant increase in higher order aberrations after PMMA lens wear, which may partly explain previous reports of "spectacle blur" following PMMA lens wear. We further explored corneal curvature, thickness and refractive changes with back surface toric and spherical RGP lenses in a group of 6 subjects with toric corneas. The lenses were worn for 8 hours, and measurements were taken before and after lens wear, as in the previous experiments. Both lens types caused anterior corneal flattening and a decrease in corneal refractive power, but the changes were greater with the spherical lens. The spherical lens also caused a significant decrease in WTR astigmatism (with-the-rule astigmatism, defined as a major axis within 30 degrees of horizontal). Both lenses caused slight posterior corneal steepening and corneal swelling, with a greater effect in the periphery compared to the central cornea. Eyelid position, lid-wiper and tarsal conjunctival staining were also measured in Experiment 2 after short-term use of the rigid and SiHy contact lenses. Digital photos of the external eyes were captured for lid position analysis. The lid-wiper region of the marginal conjunctiva was stained using fluorescein and lissamine green dyes, and the digital photos were graded by an independent masked observer.
A grading scale was developed to describe the tarsal conjunctival staining. A significant decrease in palpebral aperture height (blepharoptosis) was found after wearing the PMMA/9.5 and RGP/10.5 lenses. All three rigid contact lenses caused a significant increase in lid-wiper and tarsal staining after 8 hours of lens wear. There was also a significant diurnal increase in tarsal staining, even without contact lens wear. These findings highlight the need for better contact lens edge design, to minimise the interactions between the lid and the contact lens edge during blinking, and for more lubricious contact lens surfaces, to reduce ocular surface micro-trauma due to friction. Tear film surface quality (TFSQ) was measured using a high-speed videokeratoscopy technique in Experiment 2. TFSQ was worse with all the lenses (PMMA/9.5, RGP/9.5, RGP/10.5 and SiHy/14) compared to baseline in the afternoon (after 8 hours) during both normal and suppressed blinking conditions. The reduction in TFSQ was similar for all the contact lenses used, irrespective of their material and diameter. An unusual pattern of change in TFSQ under suppressed blinking conditions was also found: TFSQ with a contact lens decreased until a certain time, after which it improved to a value even better than that of the bare eye. This is likely to be due to the tear film drying completely over the surface of the contact lens. The findings of this study also show that there is still scope for improvement in contact lens materials, in terms of better wettability and hydrophilicity, in order to improve TFSQ and patient comfort. These experiments showed that a variety of changes can occur in the anterior eye as a result of the short-term use of a range of commonly used contact lens types.
The greatest corneal changes occurred with lenses manufactured from older HEMA and PMMA lens materials, whereas modern SiHy and rigid gas permeable materials caused more subtle changes in corneal shape and thickness. All lenses caused signs of micro-trauma to the eyelid wiper and palpebral conjunctiva, although rigid lenses appeared to cause more significant changes. Tear film surface quality was also significantly reduced with all types of contact lenses. These short-term changes in the anterior eye are potential markers for further long-term changes, and the relative differences between lens types that we have identified provide an indication of areas of contact lens design and manufacture that warrant further development.
Abstract:
The chapter reflects on the first two years of the Restart Scheme, introduced by the Manpower Services Commission for long-term unemployed people in the UK, from a facilitator's perspective ten years later. It examines the actual weekly program for participants, with case examples from one of the pilot centres, Crawley College, West Sussex, an area of low unemployment. The observations suggest that even in a place where there are many job vacancies, there will be 3-4% of the population who are unable to compete for jobs and participate in the workforce unless sheltered workshops and specialized training initiatives are established.
Abstract:
Background In Booth v Amaca Pty Ltd and Amaba Pty Ltd,1 the New South Wales Dust Diseases Tribunal awarded a retired motor mechanic $326 640 in damages for his malignant pleural mesothelioma, allegedly caused by exposure to asbestos through working with brake linings manufactured by the defendants. The evidence before the Tribunal was that, prior to working as a mechanic, the plaintiff had been exposed to asbestos from home renovations as a child and from loading a truck as a youth. However, as a mechanic he had been exposed to asbestos in the brake linings on which he worked from 1953 to 1983. Curtis DCJ held at [172] that the asbestos from the brake linings ‘materially contributed to [the plaintiff’s] contraction of mesothelioma’. This decision was based upon acceptance that the effect of exposure to asbestos on the development of mesothelioma is cumulative, and rejection of the theory that a single fibre of asbestos can cause the disease...
Abstract:
A highly sensitive fiber Bragg grating (FBG) strain sensor with automatic temperature compensation is demonstrated. The FBG is axially linked with a stick, and their free ends are fixed to the measured object. When the measured strain changes, the stick does not change in length, but the FBG does. When the temperature changes, the stick changes in length to pull the FBG and thereby realize temperature compensation. In experiments, a strain sensitivity 1.45 times that of a bare FBG is achieved, with a temperature-compensated Bragg wavelength drift of less than 0.1 nm over a 100 °C shift.
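The compensation principle reduces to simple arithmetic. The coefficients below are assumed, typical 1550 nm FBG values (roughly 1.2 pm per microstrain and 10 pm/°C), not the paper's calibration; the point is that a stick whose thermal expansion is matched to the FBG cancels the thermal term while its geometry amplifies the strain response:

```python
K_STRAIN_PM = 1.2   # bare-FBG strain sensitivity, pm per microstrain (assumed)
K_TEMP_PM = 10.0    # bare-FBG temperature sensitivity, pm per deg C (assumed)

def bare_fbg_shift(strain_ue, delta_t_c):
    """Uncompensated Bragg shift in pm: strain and temperature are mixed."""
    return K_STRAIN_PM * strain_ue + K_TEMP_PM * delta_t_c

def compensated_shift(strain_ue, delta_t_c, lever_gain=1.45, comp_pm=10.0):
    """Stick geometry amplifies strain 1.45x; with the stick's expansion
    matched to the FBG (comp_pm ~= K_TEMP_PM) the thermal term cancels."""
    return (lever_gain * K_STRAIN_PM * strain_ue
            + (K_TEMP_PM - comp_pm) * delta_t_c)
```

For 100 microstrain with a 50 °C swing, the bare grating's shift is dominated by temperature, while the compensated sensor reads only the amplified strain term.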
Abstract:
An enhanced mill extraction model has been developed to calculate mill performance parameters and to predict the extraction performance of a milling unit. The model takes into account the fibre suspended in juice streams and calculates filling ratio, reabsorption factor, imbibition coefficient, and separation efficiency using more complete definitions than those used in previous extraction models. A mass balance model is used to determine the fibre, brix and moisture mass flows between milling units so that a complete milling train, including the return stream from the juice screen, is modelled. Model solutions are presented to determine the effect of different levels of fibre in juice and efficiency of fibre separation in the juice screen on brix extraction. The model provides more accurate results than earlier models leading to better understanding and improvement of the milling process.
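The mass-balance idea can be sketched with a toy milling train. This is illustrative only, with invented per-unit efficiencies, and it omits imbibition and the juice-screen return stream the full model handles:

```python
def mill_unit(feed, brix_eff=0.6, water_eff=0.55):
    """Split a feed dict of t/h flows into juice and bagasse streams;
    in this simplification all fibre stays in the bagasse."""
    juice = {"fibre": 0.0,
             "brix": feed["brix"] * brix_eff,
             "water": feed["water"] * water_eff}
    bagasse = {k: feed[k] - juice[k] for k in feed}
    return juice, bagasse

def milling_train(feed, units=4, **eff):
    """Chain units so each unit's bagasse feeds the next; accumulate juice."""
    total = {"fibre": 0.0, "brix": 0.0, "water": 0.0}
    for _ in range(units):
        juice, feed = mill_unit(feed, **eff)
        for k in total:
            total[k] += juice[k]
    return total, feed

cane = {"fibre": 15.0, "brix": 13.0, "water": 72.0}
juice, bagasse = milling_train(dict(cane))
```

With a constant 60% per-unit brix efficiency, overall extraction after four units is 1 - 0.4^4, about 97.4%; the enhanced model replaces these fixed fractions with filling ratio, reabsorption factor, imbibition coefficient and separation efficiency.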
Abstract:
The quality of the features discovered in relevance feedback (RF) is the key issue for effective search queries. Most existing feedback methods do not carefully address the issue of selecting features for noise reduction; as a result, extracted noisy features can easily degrade effectiveness. In this paper, we propose a novel feature extraction method for query formulation. This method first extracts term association patterns in RF as knowledge for feature extraction. Negative RF is then used to improve the quality of the discovered knowledge. A novel information filtering (IF) model is developed to evaluate the proposed method. The experimental results, conducted on Reuters Corpus Volume 1 and TREC topics, confirm that the proposed model achieves encouraging performance compared to state-of-the-art IF models.
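The role of negative feedback can be sketched as a pruning step. This toy (invented documents and support threshold, far simpler than the pattern mining the paper describes) keeps terms frequent in positive feedback and drops those that are also frequent in negative feedback:

```python
from collections import Counter

def extract_features(pos_docs, neg_docs, min_support=2):
    """Return terms supported by positive feedback but not by negative
    feedback; document frequency stands in for pattern support."""
    pos = Counter(t for d in pos_docs for t in set(d))
    neg = Counter(t for d in neg_docs for t in set(d))
    candidates = {t for t, c in pos.items() if c >= min_support}
    return {t for t in candidates if neg[t] < min_support}

# "rugby" co-occurs with the relevant docs but also dominates the
# irrelevant ones, so negative RF filters it out as noise.
pos_docs = [["data", "mining", "rugby"], ["data", "mining", "rugby"]]
neg_docs = [["rugby", "score"], ["rugby", "league"]]
features = extract_features(pos_docs, neg_docs)
```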