14 results for diagnostic techniques and procedure
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Samples of whole crop wheat (WCW, n = 134) and whole crop barley (WCB, n = 16) were collected from commercial farms in the UK over a 2-year period (2003/2004 and 2004/2005). Near infrared reflectance spectroscopy (NIRS) was compared with laboratory and in vitro digestibility measures to predict digestible organic matter in the dry matter (DOMD) and metabolisable energy (ME) contents measured in vivo using sheep. Spectral models using the mean spectra of two scans were compared with those using individual spectra (duplicate spectra). Overall, NIRS accurately predicted the concentration of chemical components in whole crop cereals apart from crude protein, ammonia-nitrogen, water-soluble carbohydrates, fermentation acids and solubility values. In addition, the spectral models had higher prediction power for in vivo DOMD and ME than chemical components or in vitro digestion methods. Overall there was a benefit from the use of duplicate spectra rather than mean spectra, especially for predicting in vivo DOMD and ME where the sample population size was smaller. The spectral models derived dealt equally well with WCW and WCB and would be of considerable practical value, allowing rapid determination of the nutritive value of these forages before their use in the diets of productive animals. (C) 2008 Elsevier B.V. All rights reserved.
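As an illustration of the modelling step described above, a minimal sketch follows of how such a NIRS calibration is commonly built with partial least squares (PLS) regression, contrasting mean against duplicate spectra. The abstract does not name the regression method; the scikit-learn calls and the synthetic data are illustrative assumptions only.

```python
# Minimal sketch: calibrating NIRS spectra against in vivo DOMD with PLS
# regression. The regression method, scikit-learn calls and synthetic data
# are illustrative assumptions; the paper's chemometric procedure is not
# specified in the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in: 134 samples x 2 scans x 700 wavelengths
duplicate_spectra = rng.normal(size=(134, 2, 700))
domd = rng.normal(loc=650.0, scale=40.0, size=134)  # g/kg DM, synthetic

# Option A: average the two scans per sample ("mean spectra")
mean_spectra = duplicate_spectra.mean(axis=1)

# Option B: keep each scan as its own calibration row ("duplicate spectra"),
# repeating the reference value, which doubles the calibration set size
dup_rows = duplicate_spectra.reshape(-1, 700)
dup_y = np.repeat(domd, 2)

for name, X, y in [("mean", mean_spectra, domd), ("duplicate", dup_rows, dup_y)]:
    pred = cross_val_predict(PLSRegression(n_components=10), X, y, cv=5).ravel()
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    print(f"{name} spectra: cross-validated RMSE = {rmse:.1f} g/kg DM")
```

A careful validation of the duplicate-spectra option would also keep both scans of a sample in the same fold (e.g. with GroupKFold), a detail omitted here for brevity.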
Abstract:
Magnetic clouds (MCs) are a subset of interplanetary coronal mass ejections (ICMEs) which exhibit signatures consistent with a magnetic flux rope structure. Techniques for reconstructing flux rope orientation from single-point in situ observations typically assume the flux rope is locally cylindrical, e.g., minimum variance analysis (MVA) and force-free flux rope (FFFR) fitting. In this study, we outline a non-cylindrical magnetic flux rope model, in which the flux rope radius and axial curvature can both vary along the length of the axis. This model is not necessarily intended to represent the global structure of MCs, but it can be used to quantify the error in MC reconstruction resulting from the cylindrical approximation. When the local flux rope axis is approximately perpendicular to the heliocentric radial direction, which is also the effective spacecraft trajectory through a magnetic cloud, the error in using cylindrical reconstruction methods is relatively small (≈10°). However, as the local axis orientation becomes increasingly aligned with the radial direction, the spacecraft trajectory may pass close to the axis at two separate locations. This results in a magnetic field time series which deviates significantly from encounters with a force-free flux rope, and consequently the error in the axis orientation derived from cylindrical reconstructions can be as much as 90°. Such two-axis encounters can result in an apparent ‘double flux rope’ signature in the magnetic field time series, sometimes observed in spacecraft data. Analysing each axis encounter independently produces reasonably accurate axis orientations with MVA, but larger errors with FFFR fitting.
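Minimum variance analysis, one of the cylindrical reconstruction methods named above, reduces to an eigen-decomposition of the magnetic variance matrix, with the intermediate-variance eigenvector taken as the proxy for the local flux rope axis. A minimal sketch on synthetic data:

```python
# Minimal sketch of minimum variance analysis (MVA) on an in situ magnetic
# field time series; the synthetic field is illustrative only.
import numpy as np

def mva_axis(B):
    """B: (N, 3) array of field vectors. Returns the intermediate-variance
    eigenvector, the usual proxy for the local flux rope axis."""
    M = np.cov(B, rowvar=False)           # 3 x 3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return eigvecs[:, 1]                  # intermediate-variance direction

# Synthetic flux-rope-like rotation: the axial field along x peaks at closest
# approach while the azimuthal field along y reverses sign, so MVA should
# recover an axis close to the x direction.
t = np.linspace(-1.0, 1.0, 200)
B = np.column_stack([np.cos(np.pi * t / 2),
                     np.sin(np.pi * t / 2),
                     0.1 * np.ones_like(t)])
print("estimated axis:", np.round(mva_axis(B), 3))
```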
Abstract:
Although modern control techniques such as eigenstructure assignment have been given extensive coverage in the control literature, there is a reluctance to use them in practice as they are often not believed to be as 'visible' or as simple as classical methods. A simple aircraft example is used, and it is shown that eigenstructure assignment can easily be used to produce a more viable controller than simple classical techniques can.
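The computational core of such a design is a state-feedback gain calculation. The sketch below shows plain pole placement with SciPy's place_poles; full eigenstructure assignment additionally shapes the closed-loop eigenvectors using the residual design freedom of a multi-input system. The 3-state, 2-input model is synthetic, not the aircraft example from the paper.

```python
# Sketch of the state-feedback computation underlying eigenstructure
# assignment, using plain pole placement from SciPy. The 3-state, 2-input
# system is synthetic, not the aircraft model from the paper.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -1.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

desired = np.array([-1.0, -2.0, -4.0])  # chosen closed-loop eigenvalues
fsf = place_poles(A, B, desired)        # with two inputs there is freedom
K = fsf.gain_matrix                     # left over to shape eigenvectors

closed_loop = A - B @ K                 # closed loop under u = -K x
print("achieved eigenvalues:", np.sort(np.linalg.eigvals(closed_loop).real))
```

Eigenstructure assignment goes one step further and uses the remaining multi-input freedom to place elements of the closed-loop eigenvectors, which is what gives the method its decoupling power in aircraft control.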
Abstract:
Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers the author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method's authors. This not only makes it difficult to incorporate the useful elements of algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, with the Synonym approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters' News Corpus. The findings show that authors of Reuters' news articles provide good keyphrases, but that more often than not they do not provide any keyphrases at all.
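For concreteness, a minimal sketch of the two best-performing baselines, Term Frequency and Inverse Document Frequency, follows; the tokenisation and the toy corpus are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of the Term Frequency and Inverse Document Frequency
# baselines; the tokenisation and toy corpus are illustrative, not the
# paper's experimental setup.
import math
from collections import Counter

corpus = [
    "keyphrase extraction helps identify areas of interest in documents",
    "term frequency is a simple baseline for keyphrase extraction",
    "inverse document frequency downweights terms common to all documents",
]

def term_frequency(doc):
    tokens = doc.lower().split()
    counts = Counter(tokens)
    return {t: c / len(tokens) for t, c in counts.items()}

def inverse_document_frequency(term, docs):
    df = sum(1 for d in docs if term in d.lower().split())
    return math.log(len(docs) / df) if df else 0.0

doc = corpus[0]
scores = {t: tf * inverse_document_frequency(t, corpus)
          for t, tf in term_frequency(doc).items()}
for term, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{term}: {score:.3f}")
```

Note that the paper evaluates Term Frequency and Inverse Document Frequency as separate methods; the product computed above is the classical TF-IDF combination of the two scores.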
Abstract:
As a part of the Atmospheric Model Intercomparison Project (AMIP), the behaviour of 15 general circulation models has been analysed in order to diagnose and compare the ability of the different models in simulating Northern Hemisphere midlatitude atmospheric blocking. In accordance with the established AMIP procedure, the 10-year model integrations were performed using prescribed, time-evolving monthly mean observed SSTs spanning the period January 1979–December 1988. Atmospheric observational data (ECMWF analyses) over the same period have also been used to verify the models' results. The models involved in this comparison represent a wide spectrum of model complexity, with different horizontal and vertical resolutions, numerical techniques and physical parametrizations, and exhibit large differences in blocking behaviour. Nevertheless, a few common features can be found, such as the general tendency to underestimate both blocking frequency and the average duration of blocks. The problem of a possible relationship between model blocking and model systematic errors has also been assessed, although without resorting to ad hoc numerical experimentation it is impossible to relate with certainty particular model deficiencies in representing blocking to precise parts of the model formulation.
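The abstract does not define the blocking diagnostic used; a widely used choice in such intercomparisons is a Tibaldi-Molteni-style geopotential-height gradient index, sketched here on a synthetic 500 hPa height field as an illustrative assumption.

```python
# Sketch of a Tibaldi-Molteni-style blocking index on 500 hPa geopotential
# height. The abstract does not specify the diagnostic used; this common
# index, its thresholds and the synthetic field are illustrative assumptions.
import numpy as np

def is_blocked(z500, lats, lon_idx, deltas=(-4.0, 0.0, 4.0)):
    """z500: (nlat, nlon) geopotential height in m; lats in degrees north.
    A longitude is blocked if, for some latitude shift, the meridional
    height gradient reverses to the south and is strong to the north."""
    for d in deltas:
        phi_n, phi_0, phi_s = 80.0 + d, 60.0 + d, 40.0 + d
        zn = z500[np.argmin(np.abs(lats - phi_n)), lon_idx]
        z0 = z500[np.argmin(np.abs(lats - phi_0)), lon_idx]
        zs = z500[np.argmin(np.abs(lats - phi_s)), lon_idx]
        ghgs = (z0 - zs) / (phi_0 - phi_s)  # southern gradient (m per deg lat)
        ghgn = (zn - z0) / (phi_n - phi_0)  # northern gradient (m per deg lat)
        if ghgs > 0.0 and ghgn < -10.0:
            return True
    return False

# Synthetic zonally symmetric height field: no blocking expected
lats = np.arange(20.0, 90.1, 2.5)
lons = np.arange(0.0, 360.0, 2.5)
z500 = 5500.0 - 10.0 * (lats[:, None] - 55.0) + np.zeros((1, lons.size))
frac = np.mean([is_blocked(z500, lats, i) for i in range(lons.size)])
print(f"fraction of blocked longitudes: {frac:.2f}")
```

A blocking frequency climatology of the kind compared across the 15 models would apply this test at every longitude and time step and average the results.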
Abstract:
The article considers screening human populations with two screening tests. If either of the two tests is positive, then full evaluation of the disease status is undertaken; however, if both diagnostic tests are negative, then the disease status remains unknown. This procedure leads to a data constellation in which, for each disease status, the 2 × 2 table associated with the two diagnostic tests used in screening has exactly one empty, unknown cell. To estimate the unobserved cell counts, previous approaches assume independence of the two diagnostic tests and use specific models, including the special mixture model of Walter or unconstrained capture–recapture estimates. Often, as is also demonstrated in this article by means of a simple test, the independence of the two screening tests is not supported by the data. Two new estimators are suggested that allow associations of the screening tests, although the form of association must be assumed to be homogeneous over disease status. These estimators are modifications of the simple capture–recapture estimator and are easy to construct. The estimators are investigated for several screening studies with fully evaluated disease status, in which the superior behavior of the new estimators compared to the previous conventional ones can be shown. Finally, the performance of the new estimators is compared with maximum likelihood estimators, which are more difficult to obtain in these models. The results indicate that the loss of efficiency is minor.
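For reference, the sketch below shows the simple capture–recapture estimate that the article's new estimators modify: under independence of the two tests, the unobserved both-negative cell follows from the three observed cells. The new estimators replace the independence assumption with an association term homogeneous over disease status; their exact form is given in the paper, and the counts here are synthetic.

```python
# Minimal sketch of the simple capture-recapture estimate that the article's
# new estimators modify: under independence of the two screening tests, the
# unobserved both-negative cell of the 2 x 2 table is n10 * n01 / n11.
# The counts below are synthetic.

def capture_recapture_missing_cell(n11, n10, n01):
    """n11: positive on both tests; n10, n01: positive on exactly one.
    Returns the estimated both-negative count and the estimated total,
    assuming the two screening tests are independent."""
    n00_hat = n10 * n01 / n11
    return n00_hat, n11 + n10 + n01 + n00_hat

n00_hat, total_hat = capture_recapture_missing_cell(n11=40, n10=15, n01=10)
print(f"estimated missing cell: {n00_hat:.2f}, estimated total: {total_hat:.2f}")
```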
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding horizon optimal control computation in nonlinear model predictive control.
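A toy, runnable illustration of the principle follows: iterating on a modifier-corrected model problem reaches the optimum of "reality" despite a model-reality mismatch. The static quadratic costs and the first-order modifier are illustrative stand-ins for the full dynamic algorithm developed in the cited work, not the paper's formulation.

```python
# Toy, runnable illustration of the DISOPE principle on a static problem:
# iterating on a modifier-corrected model problem reaches the optimum of
# "reality" despite a model-reality mismatch. The quadratic costs and the
# first-order modifier are illustrative stand-ins for the full dynamic
# algorithm developed in the cited work.

def reality_grad(u):   # reality: J_r(u) = (u - 3)^2, unknown to the model
    return 2.0 * (u - 3.0)

def model_grad(u):     # model:   J_m(u) = (u - 1)^2
    return 2.0 * (u - 1.0)

u = 0.0
for _ in range(20):
    lam = reality_grad(u) - model_grad(u)  # modifier from the mismatch
    # Modified model-based problem: min_u J_m(u) + lam * u,
    # solved in closed form from 2(u - 1) + lam = 0
    u_new = 1.0 - lam / 2.0
    if abs(u_new - u) < 1e-9:
        break
    u = u_new

print(f"converged control u = {u:.3f} (the reality optimum is u = 3)")
```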
Abstract:
This paper presents the findings of our study of peer-reviewed papers published in the International Conference on Persuasive Technology from 2006 to 2010. The study indicated that, of the 44 systems reviewed, 23 were reported to be successful, 2 to be unsuccessful, and 19 did not specify whether or not they were successful. 56 different techniques were mentioned, and it was observed that most designers use ad hoc definitions for the techniques or methods used in design. Hence we propose the need for research to establish unambiguous definitions of techniques and methods in the field.
Abstract:
Khartoum, like many cities in least developed countries (LDCs), still witnesses a huge influx of people. Accommodating the newcomers leads to encroachment on cultivated land and to the sprawling expansion of Greater Khartoum. The city expanded in diameter from 16.8 km in 1955 to 802.5 km in 1998, and most of this horizontal expansion was residential. In 2008 Khartoum accommodated 29% of the urban population of Sudan, and today it is considered one of 43 major cities in Africa that accommodate more than 1 million inhabitants. Most newcomers live on the outskirts of the city, e.g. in the Dar El-Salam and Mayo neighbourhoods, and the majority build their houses, especially the walls, from mud, wood, straw and sacks. The selection of building materials usually depends on price, regardless of environmental impact, quality, thermal performance and the life of the material. Most of the time this results in increased cost and varied environmental impacts over the life of the building. Therefore, consideration of environmental, social and economic impacts is crucial in the selection of any building material, and decreasing such impacts could lead to more sustainable housing. The sustainability of the available wall building materials for low cost housing in Khartoum is compared through the life cycle assessment (LCA) technique. The purpose of this paper is to compare the most available local wall building materials for the urban poor of Khartoum from a sustainability point of view, following the materials through their manufacturing, their use, and their disposal when their life comes to an end. Findings reveal that traditional red bricks cannot be considered a sustainable wall building material that will shape the future of low cost housing in Greater Khartoum. On the other hand, the results of the comparison draw attention to the wide range of soil techniques and to their potential as a promising sustainable wall material for urban low cost housing in Khartoum.
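A sketch of the comparison logic follows, aggregating an impact score per material over the manufacturing, use and disposal stages of the life cycle; every number is an invented placeholder, not the paper's LCA data.

```python
# Illustrative sketch of the life cycle comparison logic: summing an impact
# score for each wall material over the manufacturing, use and disposal
# stages. All scores are invented placeholders, not the paper's LCA data.
stage_impacts = {
    "traditional red brick": {"manufacturing": 8.0, "use": 3.0, "disposal": 2.0},
    "stabilised soil block": {"manufacturing": 3.0, "use": 3.5, "disposal": 1.0},
    "mud, wood and straw":   {"manufacturing": 1.5, "use": 4.5, "disposal": 0.5},
}

for material, stages in sorted(stage_impacts.items(),
                               key=lambda kv: sum(kv[1].values())):
    total = sum(stages.values())
    print(f"{material}: total life cycle impact = {total:.1f}")
```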
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools, and their adoption is growing. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires the adoption of pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a major problem for the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which makes the process error-prone and inefficient. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
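To give a flavour of the import step K-Surfer automates, the sketch below reads a FreeSurfer aseg.stats table into a pandas DataFrame. The column layout follows common aseg.stats output but should be checked against your FreeSurfer version, and the path in the usage note is hypothetical.

```python
# Minimal sketch of the kind of import step K-Surfer automates: reading a
# FreeSurfer aseg.stats table into a pandas DataFrame. The column layout
# follows common aseg.stats output but should be checked against your
# FreeSurfer version; the path in the usage note is hypothetical.
import pandas as pd

ASEG_COLUMNS = ["Index", "SegId", "NVoxels", "Volume_mm3", "StructName",
                "normMean", "normStdDev", "normMin", "normMax", "normRange"]

def read_aseg_stats(path):
    """Parse a FreeSurfer aseg.stats file, skipping '#' header lines."""
    return pd.read_csv(path, comment="#", sep=r"\s+", names=ASEG_COLUMNS)

# Usage (hypothetical path):
#   stats = read_aseg_stats("subjects/sub-01/stats/aseg.stats")
#   print(stats[["StructName", "Volume_mm3"]].head())
```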