52 results for post-processing method

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

90.00%

Publisher:

Abstract:

Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially-distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help to generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
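The friction-estimation step lends itself to a short illustration. The following is a minimal sketch, assuming hypothetical building and road masks derived from digital map data and placeholder Manning's n values rather than the calibrated coefficients of the actual post-processor, of how vegetation height (DSM minus DTM) can be turned into a spatially distributed friction map.

```python
import numpy as np

def friction_map_sketch(dsm, dtm, building_mask, road_mask):
    """Illustrative friction-parameter map from LiDAR-derived object heights.

    dsm, dtm      : 2-D arrays of surface and bare-earth elevations (m)
    building_mask : boolean array, True where map data marks a building
    road_mask     : boolean array, True where map data marks a road
    Returns an array of Manning's n values (placeholders, not the
    calibrated coefficients of the published post-processor).
    """
    veg_height = np.clip(dsm - dtm, 0.0, None)   # object/vegetation height above ground
    n = np.full(dsm.shape, 0.035)                # default floodplain roughness

    # Hypothetical monotonic mapping from vegetation height to roughness,
    # deliberately retaining short (<1 m) vegetation that generic filters discard.
    n = np.where(veg_height > 0.05,
                 0.03 + 0.02 * np.minimum(veg_height, 2.0), n)

    n[road_mask] = 0.02          # smoother paved surfaces
    n[building_mask] = np.nan    # buildings treated as separate mesh features
    return n
```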

Relevance:

90.00%

Publisher:

Abstract:

If soy isoflavones are to be effective in preventing or treating a range of diseases, they must be bioavailable, and thus the factors which may alter their bioavailability need to be elucidated. However, to date there is little information on whether the pharmacokinetic profile following ingestion of a defined dose is influenced by the food matrix in which the isoflavone is given or by the processing method used. Three different foods (cookies, chocolate bars and juice) were prepared, and their isoflavone contents were determined. We compared the urinary and serum concentrations of daidzein, genistein and equol following the consumption of three different foods, each of which contained 50 mg of isoflavones. After the technological processing of the different test foods, differences in aglycone levels were observed. The plasma levels of the isoflavone precursor daidzein were not altered by food matrix. Urinary daidzein recovery was similar for all three foods ingested, with a total urinary output of 33-34% of the ingested dose. Peak genistein concentrations were attained in serum earlier following consumption of a liquid matrix rather than a solid matrix, although there was a lower total urinary recovery of genistein following ingestion of juice than of the two other foods. (c) 2006 Elsevier Inc. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

In this paper sequential importance sampling is used to assess the impact of observations on an ensemble prediction for the decadal path transitions of the Kuroshio Extension (KE). This particle filtering approach gives access to the probability density of the state vector, which allows us to determine the predictive power — an entropy-based measure — of the ensemble prediction. The proposed set-up makes use of an ensemble that, at each time, samples the climatological probability distribution. Then, in a post-processing step, the impact of different sets of observations is measured by the increase in predictive power of the ensemble over the climatological signal during one year. The method is applied in an identical-twin experiment for the Kuroshio Extension using a reduced-gravity shallow water model. We investigate the impact of assimilating velocity observations from different locations during the elongated and the contracted meandering state of the KE. Optimal observation locations correspond to regions with strong potential vorticity gradients. For the elongated state the optimal location is in the first meander of the KE. During the contracted state of the KE it is located south of Japan, where the Kuroshio separates from the coast.
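As a rough illustration of the post-processing step, the sketch below computes importance weights for a single observation under an assumed Gaussian likelihood, together with an entropy-based predictive power of the form 1 - S/log(N); both the likelihood and the exact definition are assumptions made for illustration, not necessarily those used in the paper.

```python
import numpy as np

def importance_weights(ensemble_obs, y_obs, obs_var):
    """Sequential-importance-sampling weights for a single observation,
    assuming Gaussian observation error with variance obs_var.
    ensemble_obs : (N,) model equivalents of the observation, one per member."""
    log_w = -0.5 * (ensemble_obs - y_obs) ** 2 / obs_var
    w = np.exp(log_w - log_w.max())       # stabilise before normalising
    return w / w.sum()

def predictive_power(weights):
    """Entropy-based predictive power: 1 - S(w)/log(N), so that the
    climatological (equal-weight) ensemble scores 0 and a fully
    constrained ensemble approaches 1."""
    w = weights[weights > 0]
    entropy = -np.sum(w * np.log(w))
    return 1.0 - entropy / np.log(len(weights))
```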

Relevance:

90.00%

Publisher:

Abstract:

Milk is the largest source of iodine in UK diets, and an earlier study showed that organic summer milk had a significantly lower iodine concentration than conventional milk. There are no comparable studies with winter milk or of the effect of milk fat class or heat processing method. Two retail studies with winter milk are reported. Study 1 showed no effect of fat class, but organic milk was 32.2% lower in iodine than conventional milk (404 vs. 595 μg/L; P < 0.001). Study 2 found no difference between conventional and Channel Island milk, but organic milk contained 35.5% less iodine than conventional milk (474 vs. 306 μg/L; P < 0.001). UHT and branded organic milk also had lower iodine concentrations than conventional milk (331 μg/L; P < 0.001 and 268 μg/L; P < 0.0001 respectively). The results indicate that replacement of conventional milk by organic or UHT milk will increase the risk of sub-optimal iodine status, especially for pregnant/lactating women.

Relevance:

80.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is used currently for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a particular one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al. 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al. 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine. This is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run. (5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
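The change described in step (3) is small enough to show in a toy launcher. The sketch below is purely illustrative: the real NEMO workflow is a set of shell scripts, and the argument form of "GRexRun" shown here is an assumption, not the documented G-Rex client syntax.

```python
import subprocess

USE_GREX = True  # switch between local MPI execution and remote execution via G-Rex

def run_model_year(year, nprocs=40):
    """Launch one model year. Moving the workflow onto the Grid only swaps
    the launcher: the "mpirun" call is replaced by a "GRexRun" call.
    The executable name and arguments are placeholders."""
    if USE_GREX:
        cmd = ["GRexRun", "./nemo.exe", f"--year={year}"]
    else:
        cmd = ["mpirun", "-np", str(nprocs), "./nemo.exe", f"--year={year}"]
    subprocess.run(cmd, check=True)

# 50-year simulation resubmitted year by year, as in the NEMO use case.
for year in range(1, 51):
    run_model_year(year)
```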

Relevance:

80.00%

Publisher:

Abstract:

Baby leaf salads are gaining in popularity over traditional whole head lettuce salads in response to consumer demand for greater variety and convenience in their diet. Baby lettuce leaves are mixed, washed and packaged as whole leaves, with a shelf-life of approximately 10 days post-processing. End of shelf-life, as determined by the consumer, is typified by bruising, water-logging and blackening of the leaves, but the biological events causing this phenotype have not been studied to date. We investigated the physiological and ultrastructural characteristics during postharvest shelf-life of two lettuce varieties with very different leaf morphologies. Membrane disruption was an important determinant of cell death in both varieties, although the timing and characteristics of breakdown were different in each, with Lollo rossa showing signs of aging, such as thylakoid disruption and plastoglobuli accumulation, earlier than Cos. Membranes in Lollo rossa showed a later, but more distinct, increase in permeability than in Cos, as indicated by electrolyte leakage and the presence of cytoplasmic fragments in the vacuole, whereas Cos membranes showed distinct fractures towards the end of shelf-life. The tissue lost less than 25% fresh weight during shelf-life and there was little protein loss compared to developmentally aging leaves in an ambient environment. Biophysical measurements showed that breakstrength was significantly reduced in Lollo rossa, whereas irreversible leaf plasticity was significantly reduced in Cos leaves. The reversible elastic properties of both varieties changed throughout shelf-life. We compared the characteristics of shelf-life in bagged leaves of both lettuce varieties with those of other leafy salad crops and discuss potential targets for future work to improve the postharvest quality of baby leaf lettuce. (C) 2007 Elsevier B.V. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

A recent report in Consciousness and Cognition provided evidence from a study of the rubber hand illusion (RHI) that supports the multisensory principle of inverse effectiveness (PoIE). I describe two methods of assessing the principle of inverse effectiveness ('a priori' and 'post-hoc'), and discuss how the post-hoc method is affected by the statistical artefact of 'regression towards the mean'. I identify several cases where this artefact may have affected particular conclusions about the PoIE, and relate these to the historical origins of 'regression towards the mean'. Although the conclusions of the recent report may not have been grossly affected, some of the inferential statistics were almost certainly biased by the methods used. I conclude that, unless such artefacts are fully dealt with in the future, and unless the statistical methods for assessing the PoIE evolve, strong evidence in support of the PoIE will remain lacking. (C) 2009 Elsevier Inc. All rights reserved.
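A minimal simulation makes the artefact concrete. Assuming only that the post-hoc method splits trials on a noisy baseline measure (the numbers below are invented, not data from the RHI study), trials selected for a weak initial response will on average show a spurious 'improvement' at re-measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

true_effect = rng.normal(1.0, 0.5, n)              # latent per-trial effect, unchanged between measures
baseline = true_effect + rng.normal(0, 1.0, n)     # noisy measure used for the post-hoc split
followup = true_effect + rng.normal(0, 1.0, n)     # second noisy measure of the same effect

weak = baseline < np.percentile(baseline, 25)      # post-hoc selection of "weak" trials

# With no genuine inverse effectiveness, the weak group still appears to gain:
# its follow-up scores regress towards the grand mean.
print("weak trials:  ", followup[weak].mean() - baseline[weak].mean())    # clearly positive
print("other trials: ", followup[~weak].mean() - baseline[~weak].mean())  # near zero or negative
```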

Relevance:

80.00%

Publisher:

Abstract:

A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors when choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 117-147, 1974). Based upon the minimization of LOO criteria, either the mean square of the LOO errors or the LOO misclassification rate respectively, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to maintain orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via an analytic recursive formula, as derived in this contribution, without actually splitting the estimation data set, thereby reducing computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification are used to demonstrate that the proposed algorithms are viable post-processing methods for pruning a model to gain extra sparsity and improved generalization.
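For the regression case, the key computational point, that LOO errors of a linear-in-the-parameters model can be obtained without refitting, can be sketched with the standard hat-matrix identity. The sketch below is a generic greedy backward-elimination loop built on that identity; it is not the orthogonalised recursive formulation derived in the paper.

```python
import numpy as np

def loo_mse(X, y):
    """Leave-one-out mean squared error of a least-squares model, computed
    analytically from the hat matrix H = X (X'X)^-1 X' via
    e_loo_i = e_i / (1 - h_ii), with no explicit data splitting."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

def backward_eliminate(X, y):
    """Greedy backward elimination: repeatedly delete the regressor whose
    removal yields the lowest LOO MSE, stopping when no deletion helps."""
    cols = list(range(X.shape[1]))
    best = loo_mse(X, y)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        scores = [(loo_mse(X[:, [c for c in cols if c != j]], y), j) for j in cols]
        score, j = min(scores)
        if score <= best:
            best, cols, improved = score, [c for c in cols if c != j], True
    return cols, best
```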

Relevance:

80.00%

Publisher:

Abstract:

Eddy-covariance measurements of carbon dioxide fluxes were taken semi-continuously between October 2006 and May 2008 at 190 m height in central London (UK) to quantify emissions and study their controls. Inner London, with a population of 8.2 million (~5000 inhabitants per km²), is heavily built up, with 8% vegetation cover within the central boroughs. CO2 emissions were found to be mainly controlled by fossil fuel combustion (e.g. traffic, commercial and domestic heating). The measurement period allowed investigation of both diurnal patterns and seasonal trends. Diurnal averages of CO2 fluxes were found to be highly correlated to traffic. However, it was changes in heating-related natural gas consumption and, to a lesser extent, photosynthetic activity that controlled the seasonal variability. Despite measurements being taken at ca. 22 times the mean building height, coupling with street level was adequate, especially during daytime. Night-time saw a higher occurrence of stable or neutral stratification, especially in autumn and winter, which resulted in data loss in post-processing. No significant difference was found between the annual estimate of net exchange of CO2 for the expected measurement footprint and the values derived from the National Atmospheric Emissions Inventory (NAEI), with daytime fluxes differing by only 3%. This agreement with NAEI data also supported the use of the simple flux footprint model applied to the London site; it also suggests that individual roughness elements did not significantly affect the measurements, owing to the large ratio of measurement height to mean building height.
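The core of the flux calculation can be sketched in a few lines: the CO2 flux over an averaging block is the covariance of the fluctuations of vertical wind speed and CO2 density. This is a bare-bones illustration; the actual post-processing also involves despiking, coordinate rotation, stationarity and stratification screening (the source of the night-time data loss mentioned above) and density corrections.

```python
import numpy as np

def eddy_covariance_flux(w, c, sampling_hz=20, block_minutes=30):
    """CO2 flux F_c = <w'c'> per averaging block.

    w : vertical wind speed samples (m s-1)
    c : CO2 density samples (e.g. mmol m-3), same length as w
    The 20 Hz rate and 30 min blocks are typical choices, not necessarily
    those of the London measurements."""
    n_block = sampling_hz * 60 * block_minutes
    fluxes = []
    for start in range(0, len(w) - n_block + 1, n_block):
        wb = np.asarray(w[start:start + n_block])
        cb = np.asarray(c[start:start + n_block])
        w_prime = wb - wb.mean()          # fluctuations about the block mean
        c_prime = cb - cb.mean()
        fluxes.append(np.mean(w_prime * c_prime))
    return np.array(fluxes)
```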

Relevance:

80.00%

Publisher:

Abstract:

Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large-scale flood preparedness. Such global hazard maps can be generated using large-scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and the resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maximum flows to derive a number of flood return periods. The return periods are calculated initially for a 25×25 km grid, which is then reprojected onto a 1×1 km grid to derive maps of higher resolution and estimate the flooded fractional area for the individual 25×25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably well with a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
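The return-period step is standard extreme-value post-processing and can be sketched directly, assuming annual maximum flows are available per grid cell; scipy's gumbel_r stands in for whatever fitting routine the study actually used.

```python
import numpy as np
from scipy.stats import gumbel_r

def return_period_flows(annual_maxima, return_periods=(2, 5, 10, 50, 100, 500)):
    """Fit a Gumbel (EV1) distribution to a cell's annual maximum flows and
    return the discharge whose non-exceedance probability is 1 - 1/T for
    each return period T."""
    loc, scale = gumbel_r.fit(annual_maxima)
    probs = 1.0 - 1.0 / np.asarray(return_periods, dtype=float)
    return dict(zip(return_periods, gumbel_r.ppf(probs, loc=loc, scale=scale)))

# Synthetic example for one grid cell (discharges in m3 s-1):
q_max = np.random.default_rng(1).gumbel(loc=1500, scale=400, size=32)
print(return_period_flows(q_max))
```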

Relevance:

80.00%

Publisher:

Abstract:

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools, and their adoption is growing. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated preprocessing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of preprocessing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
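The kind of import-and-merge step that K-Surfer automates inside KNIME can be imitated by hand, which also illustrates why the manual route is error-prone. The sketch below uses entirely hypothetical file and column names, not the actual FreeSurfer output format that K-Surfer parses.

```python
import pandas as pd

# Hypothetical per-subject feature tables exported from a FreeSurfer-style pipeline.
thickness = pd.read_csv("cortical_thickness.csv")   # columns: subject_id, region, thickness
volumes = pd.read_csv("subcortical_volumes.csv")    # columns: subject_id, structure, volume

# Pivot each table to one row per subject, then merge into a single feature matrix
# ready for the mining nodes of an analytics workflow.
wide_thickness = thickness.pivot(index="subject_id", columns="region", values="thickness")
wide_volumes = volumes.pivot(index="subject_id", columns="structure", values="volume")
features = wide_thickness.join(wide_volumes, how="inner")

print(features.head())
```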

Relevance:

80.00%

Publisher:

Abstract:

Objective. Functional near-infrared spectroscopy (fNIRS) is an emerging technique for the in vivo assessment of functional activity of the cerebral cortex as well as in the field of brain–computer interface (BCI) research. A common challenge for the utilization of fNIRS in these areas is a stable and reliable investigation of the spatio-temporal hemodynamic patterns. However, the recorded patterns may be influenced and superimposed by signals generated from physiological processes, resulting in an inaccurate estimation of the cortical activity. Up to now only a few studies have investigated these influences, and still fewer have attempted to remove or reduce them. The present study aims to gain insights into the reduction of physiological rhythms in hemodynamic signals (oxygenated hemoglobin (oxy-Hb) and deoxygenated hemoglobin (deoxy-Hb)). Approach. We introduce the use of three different signal processing approaches (spatial filtering with a common average reference (CAR) method; independent component analysis (ICA); and transfer function (TF) models) to reduce the influence of respiratory and blood pressure (BP) rhythms on the hemodynamic responses. Main results. All approaches produce large reductions in BP and respiration influences on the oxy-Hb signals and, therefore, improve the contrast-to-noise ratio (CNR). In contrast, for deoxy-Hb signals CAR and ICA did not improve the CNR. However, for the TF approach, a CNR improvement in deoxy-Hb can also be found. Significance. The present study investigates the application of different signal processing approaches to reduce the influences of physiological rhythms on the hemodynamic responses. In addition to identifying the best signal processing method, we also show the importance of noise reduction in fNIRS data.
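Of the three approaches, the CAR spatial filter is simple enough to state in full: each channel has the instantaneous mean of all channels subtracted, which suppresses components (such as BP and respiration rhythms) that are shared across the whole probe. The sketch below assumes a channels-by-samples array; ICA and TF modelling need considerably more machinery.

```python
import numpy as np

def common_average_reference(signals):
    """Spatial filtering by common average referencing (CAR).

    signals : array of shape (n_channels, n_samples), e.g. oxy-Hb traces.
    Returns the signals with the cross-channel mean removed at every sample,
    suppressing globally shared physiological components."""
    return signals - signals.mean(axis=0, keepdims=True)
```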

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a quantitative evaluation of a tracking system on the PETS 2015 Challenge datasets using well-established performance measures. Using existing tools, the tracking system implements an end-to-end pipeline that includes object detection, tracking and post-processing stages. The evaluation results are presented for the provided sequences of both the ARENA and P5 datasets of the PETS 2015 Challenge. The results show an encouraging performance of the tracker in terms of accuracy, but also a tendency to cardinality errors and ID changes on both datasets. Moreover, the analysis shows a better performance of the tracker on visible imagery than on thermal imagery.
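The abstract does not name the individual measures, but the trade-off it describes (good accuracy undermined by cardinality errors and ID changes) is exactly what CLEAR-MOT-style accuracy captures, so a representative formula is sketched below as an assumption rather than the paper's actual metric.

```python
def mota(false_negatives, false_positives, id_switches, total_gt_objects):
    """Multiple Object Tracking Accuracy in the CLEAR-MOT sense:
    misses and false positives (cardinality errors) and identity switches
    all subtract from a perfect score of 1."""
    return 1.0 - (false_negatives + false_positives + id_switches) / total_gt_objects

# e.g. 120 misses, 80 false positives and 15 ID switches over 2000 ground-truth boxes:
print(mota(120, 80, 15, 2000))   # 0.8925
```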

Relevance:

30.00%

Publisher:

Abstract:

Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform an objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained by applying this method to data from a run of the Universities Global Atmospheric Modelling Project GCM.
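The linking stage can be illustrated with a deliberately simple stand-in: greedy nearest-neighbour association of feature points between consecutive time levels. This is only a sketch of the idea, not the dynamic-scene-analysis technique actually used.

```python
import numpy as np

def link_feature_points(frames, max_distance):
    """Greedy nearest-neighbour linking of feature points into trajectories.

    frames : list of (N_t, 2) arrays of feature-point positions, one per time level.
    Each existing trajectory is extended by its closest unused point in the
    next frame, provided it lies within max_distance."""
    tracks = [[tuple(p)] for p in frames[0]]
    for points in frames[1:]:
        unused = list(range(len(points)))
        for track in tracks:
            if not unused:
                break
            last = np.asarray(track[-1])
            dists = [np.linalg.norm(points[i] - last) for i in unused]
            j = int(np.argmin(dists))
            if dists[j] <= max_distance:
                track.append(tuple(points[unused[j]]))
                unused.pop(j)
    return tracks
```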