71 results for Observational techniques and algorithms
in CentAUR: Central Archive University of Reading - UK
Abstract:
Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method's authors. This not only makes it difficult to incorporate the useful elements of algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, with the Synonym approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters' News Corpus. The findings show that authors of Reuters' news articles provide good keyphrases but that more often than not they do not provide any keyphrases.
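The two best-performing rankers named in the abstract, Term Frequency and Inverse Document Frequency, can be sketched in a few lines. The toy corpus and the single-token candidates below are illustrative only; the paper's own corpora and phrase extraction are not reproduced here.

```python
import math
from collections import Counter

# Toy corpus standing in for the evaluation corpora compared in the paper.
docs = [
    "data assimilation combines model predictions with observations",
    "keyphrase extraction identifies the areas of interest in documents",
    "term frequency counts how often a candidate phrase occurs in a document",
]

def term_frequency(doc: str) -> Counter:
    """Raw count of each token in a single document."""
    return Counter(doc.lower().split())

def inverse_document_frequency(corpus: list[str]) -> dict[str, float]:
    """IDF of each token: log(N / number of documents containing the token)."""
    n = len(corpus)
    doc_sets = [set(d.lower().split()) for d in corpus]
    vocab = set().union(*doc_sets)
    return {t: math.log(n / sum(t in s for s in doc_sets)) for t in vocab}

tf = term_frequency(docs[2])
idf = inverse_document_frequency(docs)
# Rank candidate keyphrases (here single tokens) for the third document.
ranked = sorted(tf, key=lambda t: tf[t] * idf[t], reverse=True)
print(ranked[:5])
```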
Abstract:
Samples of whole crop wheat (WCW, n = 134) and whole crop barley (WCB, n = 16) were collected from commercial farms in the UK over a 2-year period (2003/2004 and 2004/2005). Near infrared reflectance spectroscopy (NIRS) was compared with laboratory and in vitro digestibility measures to predict digestible organic matter in the dry matter (DOMD) and metabolisable energy (ME) contents measured in vivo using sheep. Spectral models using the mean spectra of two scans were compared with those using individual spectra (duplicate spectra). Overall, NIRS accurately predicted the concentration of chemical components in whole crop cereals apart from crude protein, ammonia-nitrogen, water-soluble carbohydrates, fermentation acids and solubility values. In addition, the spectral models had higher prediction power for in vivo DOMD and ME than chemical components or in vitro digestion methods. Overall there was a benefit from the use of duplicate spectra rather than mean spectra, and this was especially so for predicting in vivo DOMD and ME where the sample population size was smaller. The spectral models derived deal equally well with WCW and WCB and would be of considerable practical value, allowing rapid determination of the nutritive value of these forages before their use in diets of productive animals. (C) 2008 Elsevier B.V. All rights reserved.
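The abstract does not state which regression technique underlies the spectral models, but a minimal sketch of a typical NIRS calibration, here assuming a partial least squares regression of synthetic spectra against a DOMD-like reference value, might look like the following (all data, sizes and component counts are placeholders):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 134, 700            # placeholder dimensions
spectra = rng.normal(size=(n_samples, n_wavelengths))
# Synthetic reference values standing in for in vivo DOMD measurements.
domd = spectra[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)

model = PLSRegression(n_components=10)
scores = cross_val_score(model, spectra, domd, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```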
Abstract:
Distributed computing paradigms for sharing resources such as Clouds, Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. While there are some success stories such as PlanetLab, OneLab, BOINC, BitTorrent, and SETI@home, widespread use of these technologies for business applications has not yet been achieved. In a business environment, mechanisms are needed to provide incentives to potential users for participating in such networks. These mechanisms may range from simple non-monetary access rights and monetary payments to specific policies for sharing. Although a few models for a framework have been discussed (in the general area of a "Grid Economy"), none of these models has yet been realised in practice. This book attempts to fill this gap by discussing the reasons for such limited take-up and exploring incentive mechanisms for resource sharing in distributed systems. The purpose of this book is to identify research challenges in successfully using and deploying resource sharing strategies in open-source and commercial distributed systems.
Abstract:
Magnetic clouds (MCs) are a subset of interplanetary coronal mass ejections (ICMEs) which exhibit signatures consistent with a magnetic flux rope structure. Techniques for reconstructing flux rope orientation from single-point in situ observations typically assume the flux rope is locally cylindrical, e.g., minimum variance analysis (MVA) and force-free flux rope (FFFR) fitting. In this study, we outline a non-cylindrical magnetic flux rope model, in which the flux rope radius and axial curvature can both vary along the length of the axis. This model is not necessarily intended to represent the global structure of MCs, but it can be used to quantify the error in MC reconstruction resulting from the cylindrical approximation. When the local flux rope axis is approximately perpendicular to the heliocentric radial direction, which is also the effective spacecraft trajectory through a magnetic cloud, the error in using cylindrical reconstruction methods is relatively small (≈10°). However, as the local axis orientation becomes increasingly aligned with the radial direction, the spacecraft trajectory may pass close to the axis at two separate locations. This results in a magnetic field time series which deviates significantly from encounters with a force-free flux rope, and consequently the error in the axis orientation derived from cylindrical reconstructions can be as much as 90°. Such two-axis encounters can result in an apparent 'double flux rope' signature in the magnetic field time series, sometimes observed in spacecraft data. Analysing each axis encounter independently produces reasonably accurate axis orientations with MVA, but larger errors with FFFR fitting.
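As a point of reference for the cylindrical reconstruction methods mentioned above, minimum variance analysis reduces to an eigendecomposition of the magnetic variance matrix. A minimal sketch on a synthetic field time series is given below; the data and the choice of the intermediate-variance direction as the axis estimate follow standard MVA practice, not this paper's non-cylindrical model.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(500, 3))                  # placeholder magnetic field data (nT)

def mva(B: np.ndarray):
    """Eigenvalues and eigenvectors of the magnetic variance (covariance) matrix."""
    M = np.cov(B, rowvar=False)                # 3x3 variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    return eigvals, eigvecs

eigvals, eigvecs = mva(B)
axis_estimate = eigvecs[:, 1]                  # intermediate-variance direction
print("estimated flux rope axis:", axis_estimate)
```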
Abstract:
Although modern control techniques such as eigenstructure assignment have been given extensive coverage in the control literature, there is a reluctance to use them in practice, as they are often not believed to be as 'visible' or as simple as classical methods. A simple aircraft example is used, and it is shown that eigenstructure assignment can be used easily to produce a more viable controller than with simple classical techniques.
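A hedged illustration of the kind of computation involved: pole/eigenstructure assignment for a small two-input state-space model using SciPy's place_poles, which exploits the eigenvector freedom available in multi-input systems. The model below is a made-up two-state example, not the aircraft model from the paper.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # placeholder open-loop dynamics
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # two control inputs
desired_poles = np.array([-4.0, -5.0])

result = place_poles(A, B, desired_poles)
K = result.gain_matrix                # state feedback u = -K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```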
Abstract:
For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory. Copyright © 2005 John Wiley & Sons, Ltd.
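A schematic of the analysis described above, assuming a toy two-state linear model: the observability matrix is assembled, its singular values inspected, and a Tikhonov-regularized inverse used to reconstruct the initial state from noisy observations. The model, noise level and regularization parameter are all illustrative.

```python
import numpy as np

M = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # placeholder model propagator
H = np.array([[1.0, 0.0]])            # observe only the first state component
n_times = 6

# Observability matrix: stacks H M^k for k = 0..n_times-1
G = np.vstack([H @ np.linalg.matrix_power(M, k) for k in range(n_times)])
U, s, Vt = np.linalg.svd(G, full_matrices=False)
print("singular values:", s)          # small values mark poorly observed directions

x_true = np.array([1.0, -0.5])
y = G @ x_true + np.random.default_rng(2).normal(scale=0.05, size=n_times)

alpha = 0.05                          # regularization parameter, tuned to the noise
x_est = Vt.T @ (s / (s**2 + alpha**2) * (U.T @ y))
print("Tikhonov estimate of the initial state:", x_est)
```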
Progress on “Changing coastlines: data assimilation for morphodynamic prediction and predictability”
Abstract:
The task of assessing the likelihood and extent of coastal flooding is hampered by the lack of detailed information on near-shore bathymetry. This is required as an input for coastal inundation models, and in some cases the variability in the bathymetry can impact the prediction of those areas likely to be affected by flooding in a storm. The constant monitoring and data collection that would be required to characterise the near-shore bathymetry over large coastal areas is impractical, leaving the option of running morphodynamic models to predict the likely bathymetry at any given time. However, if the models are inaccurate the errors may be significant if incorrect bathymetry is used to predict possible flood risks. This project is assessing the use of data assimilation techniques to improve the predictions from a simple model, by rigorously incorporating observations of the bathymetry into the model, to bring the model closer to the actual situation. Currently we are concentrating on Morecambe Bay as a primary study site, as it has a highly dynamic inter-tidal zone, with changes in the course of channels in this zone impacting the likely locations of flooding from storms. We are working with SAR images, LiDAR, and swath bathymetry to give us the observations over a 2.5-year period running from May 2003 to November 2005. We have a LiDAR image of the entire inter-tidal zone for November 2005 to use as validation data. We have implemented a 3D-Var data assimilation scheme to investigate the improvements in performance of the data assimilation compared to the previous scheme, which was based on the optimal interpolation method. We are currently evaluating these different data assimilation techniques using 22 SAR data observations. We will also include the LiDAR data and swath bathymetry to improve the observational coverage, and investigate the impact of different types of observation on the predictive ability of the model. We are also assessing the ability of the data assimilation scheme to recover the correct bathymetry after storm events, which can dramatically change the bathymetry in a short period of time.
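For context, a bare-bones 3D-Var step of the kind referred to above can be written as the minimization of the usual background-plus-observation cost function. The tiny one-dimensional "bathymetry" grid, error covariances and observation values below are invented for illustration and do not come from the Morecambe Bay system.

```python
import numpy as np
from scipy.optimize import minimize

n = 10                                     # tiny 1-D "bathymetry" grid
x_b = np.zeros(n)                          # background (prior) depths
obs_idx = np.array([2, 5, 8])              # observed grid points (e.g. from SAR)
y = np.array([0.4, -0.2, 0.7])             # observed depth anomalies

B_inv = np.eye(n) / 0.5**2                 # inverse background error covariance
R_inv = np.eye(len(y)) / 0.1**2            # inverse observation error covariance
H = np.zeros((len(y), n))                  # observation operator (point sampling)
H[np.arange(len(y)), obs_idx] = 1.0

def cost(x):
    """Standard 3D-Var cost: background misfit plus observation misfit."""
    db = x - x_b
    do = y - H @ x
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

analysis = minimize(cost, x_b).x
print("analysis increment:", analysis - x_b)
```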
Abstract:
Tracer gas techniques have been the most appropriate experimental method of determining airflows and ventilation rates in houses. However, current trends to reduce greenhouse gas effects have prompted the need for alternative techniques, such as passive sampling. In this research, passive sampling techniques have been used to demonstrate the potential to fulfil these requirements by using solutions of volatile organic compounds (VOCs) and solid phase microextraction (SPME) fibres. These passive sampling techniques have been calibrated against tracer gas decay techniques and measurements from a standard orifice plate. Two constant sources of volatile organic compounds were diffused into two sections of a humidity chamber and sampled using SPME fibres. From a total of four SPME fibres (two in each section), reproducible results were obtained. Emission rates and air movement from one section to the other were predicted using the algorithms developed. Comparison of the SPME fibre technique with the tracer gas technique and measurements from an orifice plate showed similar results with good precision and accuracy. With these fibres, infiltration rates can be measured not only as grab samples but also over a time-weighted averaging period lasting from 10 minutes up to several days. Key words: passive samplers, solid phase microextraction fibre, tracer gas techniques, airflow, air infiltration, houses.
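The tracer gas decay method used here as the calibration reference reduces to fitting the slope of log concentration against time; a short sketch with a synthetic concentration series (the decay rate and noise level are assumed) is:

```python
import numpy as np

t = np.arange(0, 61, 5) / 60.0                  # sampling times in hours
true_ach = 0.8                                  # assumed air changes per hour
c = 100.0 * np.exp(-true_ach * t)               # ideal tracer concentration (ppm)
c_noisy = c * (1 + np.random.default_rng(3).normal(scale=0.02, size=t.size))

# The air change rate is the negative slope of log concentration vs time.
slope, _ = np.polyfit(t, np.log(c_noisy), 1)
print("estimated air change rate (1/h):", -slope)
```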
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that by iterating on appropriately modified model-based problems the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding horizon optimal control computation in nonlinear model predictive control.
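The following is a bare-bones receding-horizon loop, not DISOPE itself: at each step a finite-horizon problem is solved with a deliberately simplified model, only the first control is applied to a "real" plant with different parameters, and the horizon rolls forward. All dynamics, weights and horizon lengths are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

a_model, a_real = 0.9, 0.8            # deliberate model-reality difference
b = 0.5
horizon, n_steps = 10, 20

def horizon_cost(u, x0):
    """Quadratic cost of a control sequence under the simplified model."""
    x, cost = x0, 0.0
    for uk in u:
        cost += x**2 + 0.1 * uk**2
        x = a_model * x + b * uk
    return cost + x**2                # terminal weighting

x = 5.0                               # real plant state
for _ in range(n_steps):
    u_opt = minimize(horizon_cost, np.zeros(horizon), args=(x,)).x
    x = a_real * x + b * u_opt[0]     # apply only the first control to the plant
print("final plant state:", x)
```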
Abstract:
A new database of weather and circulation type catalogs is presented comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods using a new conceptual categorization for classification methods reflecting the strategy for the definition of types. Methods using predefined types include manual and threshold-based classifications, while methods producing types derived from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. In order to allow direct comparisons between the methods, the circulation input data and the methods' configuration were harmonized for producing a subset of standard catalogs of the automated methods. The harmonization includes the data source, the climatic parameters used, the classification period as well as the spatial domain and the number of types. Frequency-based characteristics of the resulting catalogs are presented, including variation of class sizes, persistence, seasonal and inter-annual variability as well as trends of the annual frequency time series. The methodological concept of the classifications is partly reflected by these properties of the resulting catalogs. It is shown that, compared to automated methods, the types of subjective classifications show higher persistence, inter-annual variation and long-term trends. Among the automated classifications, optimization methods show a tendency for longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role in the properties of the resulting classification as the algorithm used for type definition and assignment.
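One concrete instance of the "optimization algorithm" category of classification methods is k-means clustering of daily circulation fields into a fixed number of types. The minimal sketch below uses random placeholder data; the grid size, record length and number of types are assumptions, not COST 733 settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
days, grid_points = 365, 15 * 20            # one year over a small lat-lon grid
mslp_anomalies = rng.normal(size=(days, grid_points))

n_types = 9                                 # chosen number of circulation types
catalog = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(mslp_anomalies)
print("type frequencies:", np.bincount(catalog))
```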
Abstract:
This paper presents findings of our study on peer-reviewed papers published in the International Conference on Persuasive Technology from 2006 to 2010. The study indicated that, of the 44 systems reviewed, 23 were reported to be successful, 2 to be unsuccessful, and 19 did not specify whether or not they were successful. A total of 56 different techniques were mentioned, and it was observed that most designers use ad hoc definitions for the techniques or methods used in design. Hence we propose the need for research to establish unambiguous definitions of techniques and methods in the field.
Abstract:
Khartoum, like many cities in the least developed countries (LDCs), still witnesses a huge influx of people. Accommodating the newcomers leads to encroachment on cultivated land and hence to the sprawling expansion of Greater Khartoum. The city expanded in diameter from 16.8 km in 1955 to 802.5 km in 1998, and most of this horizontal expansion was residential. In 2008 Khartoum accommodated 29% of the urban population of Sudan, and today it is considered one of the 43 major cities in Africa with more than 1 million inhabitants. Most of the newcomers live on the outskirts of the city, e.g. in the Dar El-Salam and Mayo neighbourhoods, and the majority build their houses, especially the walls, from mud, wood, straw and sacks. The selection of building materials usually depends on price, regardless of the environmental impact, quality, thermal performance and life of the material. Most of the time this increases costs and produces a range of environmental impacts over the life of the building. Therefore, consideration of environmental, social and economic impacts is crucial in the selection of any building material, and decreasing such impacts could lead to more sustainable housing. The sustainability of the available wall building materials for low-cost housing in Khartoum is compared using the life cycle assessment (LCA) technique. The purpose of this paper is to compare, from a sustainability point of view, the most widely available local wall building materials for the urban poor of Khartoum, covering the manufacture of the materials, their use, and their disposal at the end of their life. Findings reveal that traditional red bricks cannot be considered a sustainable wall building material for the future of low-cost housing in Greater Khartoum. On the other hand, the results of the comparison draw attention to the wide range of soil-based techniques and to their potential as a promising sustainable wall material for urban low-cost housing in Khartoum.
Abstract:
In this article, we review the state-of-the-art techniques in mining data streams for mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams are required to be processed on small ubiquitous devices like smartphones and sensor devices. Mobile and ubiquitous data mining target these applications with tailored techniques and approaches addressing scarcity of resources and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey covers both categories. Mining mobile and ubiquitous data requires algorithms with the ability to monitor and adapt the working conditions to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. Mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in Collaborative Data Stream Mining, where agents share knowledge to learn adaptive, accurate models.
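As a toy illustration of the resource-aware adaptation described above (and not of the Pocket Data Mining framework itself), a sliding-window statistic over a stream can shrink its window when a simulated memory budget tightens:

```python
from collections import deque
import random

window = deque(maxlen=256)

def available_memory_fraction() -> float:
    """Stand-in for querying the device's free memory."""
    return random.uniform(0.1, 1.0)

for i in range(10_000):
    value = random.gauss(0.0, 1.0)            # next stream element
    if available_memory_fraction() < 0.2 and window.maxlen > 32:
        # Adapt: halve the window under memory pressure, keeping recent items.
        window = deque(window, maxlen=window.maxlen // 2)
    window.append(value)
    if i % 2000 == 0:
        print(i, sum(window) / len(window))   # running windowed mean
```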
Abstract:
A combination of in situ and ex situ X-ray scattering techniques and transmission electron microscopy has been used to study the crystallization behaviour of polyethylene, following the imposition of melt shear. In the case of a branched material, the imposition of shear flow up to a rate of 30 s⁻¹ was found to induce no anisotropy. Although shearing the linear material only ever induced a very small degree of anisotropy in the melt, for shear rates > 0.15 s⁻¹, subsequent crystallization resulted in increasing anisotropy. Blends of the above two polyethylenes were produced, in which the linear material constituted the minority fraction (~10%). Isothermal crystallization at temperatures where extensive crystallization of the branched material does not occur demonstrated that the behaviour of the linear component of the sheared blend mirrored that of the linear polyethylene alone. However, in addition, it was found that when crystallized in the presence of an oriented morphology, the branched polymer also formed anisotropic structures. We have termed the process templating, in which the crystallization behaviour of the bulk of the system (~90% branched material) is completely altered (spherulitic to oriented lamellar) by mapping it onto a pre-existing minority structure (~10% linear polymer). (c) 2006 Elsevier Ltd. All rights reserved.