17 results for post-processing

in CentAUR: Central Archive University of Reading - UK


Relevance:

60.00%

Publisher:

Abstract:

Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially-distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help to generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
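As a rough illustration of the post-processing step that turns LiDAR heights and map-derived object classes into a friction parameter map, the Python sketch below assigns a Manning-style coefficient per grid cell. The vegetation-height-to-friction rule (n = a + b*h), the class codes and the building/road values are placeholders, not the relationships used in the paper.

```python
import numpy as np

def friction_map(dsm, dtm, object_mask, n_building=0.30, n_road=0.02,
                 a=0.05, b=0.1):
    """Illustrative friction-parameter map from LiDAR surface/terrain models.

    dsm, dtm    : 2-D arrays of surface and bare-earth elevations (m)
    object_mask : integer array, 0 = vegetation, 1 = building, 2 = road
                  (as might be derived from digital map data)
    The mapping from vegetation height to Manning's n (n = a + b * h) is a
    placeholder, not the relationship used in the paper.
    """
    veg_height = np.clip(dsm - dtm, 0.0, None)     # LiDAR-derived vegetation height
    n = a + b * veg_height                         # taller vegetation -> more friction
    n = np.where(object_mask == 1, n_building, n)  # buildings identified from map data
    n = np.where(object_mask == 2, n_road, n)      # roads identified from map data
    return n
```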

Relevance:

60.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine; this is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run. (5) The scientist monitors the output files using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
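The substitution of local launch commands by the G-Rex client, described in step (3) above, can be pictured with the minimal Python sketch below. The exact GRexRun command-line syntax is not given in the abstract, so the invocation shown (and the executable name) is a placeholder for the general pattern of swapping mpirun for the remote-execution client.

```python
import subprocess

# Illustrative only: the real GRexRun command-line syntax is not given in the
# abstract, so the invocation below is a placeholder for the general pattern.
USE_GREX = True  # switch between local mpirun and remote execution via G-Rex

def run_model_year(model_cmd):
    """Run one model year either locally or through the G-Rex client."""
    if USE_GREX:
        # In the workflow script the local launcher is simply replaced by the
        # G-Rex command-line client; input upload and incremental download of
        # output are handled by the middleware while the run is in progress.
        cmd = ["GRexRun"] + model_cmd                 # placeholder invocation
    else:
        cmd = ["mpirun", "-np", "40"] + model_cmd
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for year in range(50):                            # 50 one-year jobs, resubmitted yearly
        run_model_year(["./nemo.exe"])                # hypothetical executable name
```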

Relevance:

60.00%

Publisher:

Abstract:

Baby leaf salads are gaining in popularity over traditional whole-head lettuce salads in response to consumer demand for greater variety and convenience in their diet. Baby lettuce leaves are mixed, washed and packaged as whole leaves, with a shelf-life of approximately 10 days post-processing. End of shelf-life, as determined by the consumer, is typified by bruising, water-logging and blackening of the leaves, but the biological events causing this phenotype have not been studied to date. We investigated the physiological and ultrastructural characteristics during postharvest shelf-life of two lettuce varieties with very different leaf morphologies. Membrane disruption was an important determinant of cell death in both varieties, although the timing and characteristics of breakdown differed in each, with Lollo rossa showing signs of aging, such as thylakoid disruption and plastoglobuli accumulation, earlier than Cos. Membranes in Lollo rossa showed a later but more distinct increase in permeability than in Cos, as indicated by electrolyte leakage and the presence of cytoplasmic fragments in the vacuole, whereas Cos membranes showed distinct fractures towards the end of shelf-life. The tissue lost less than 25% fresh weight during shelf-life and there was little protein loss compared to developmentally aging leaves in an ambient environment. Biophysical measurements showed that breakstrength was significantly reduced in Lollo rossa, whereas irreversible leaf plasticity was significantly reduced in Cos leaves. The reversible elastic properties of both varieties changed throughout shelf-life. We compared the characteristics of shelf-life in both varieties of bagged lettuce leaves with other leafy salad crops and discuss the potential targets for future work to improve the postharvest quality of baby leaf lettuce. (C) 2007 Elsevier B.V. All rights reserved.

Relevance:

60.00%

Publisher:

Abstract:

A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross-validation is often used to estimate generalization errors when choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 117-147, 1974). Based upon the minimization of LOO criteria, either the mean square of the LOO errors or the LOO misclassification rate respectively, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to ensure orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, as derived in this contribution, without actually splitting the estimation data set, thereby reducing computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods for pruning a model to gain extra sparsity and improved generalization.
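A minimal sketch of the backward-elimination idea is given below for the regression case. It computes the LOO mean-squared error analytically via the standard hat-matrix identity e_i/(1 - h_ii) rather than the orthogonal-decomposition recursion derived in the paper, and greedily deletes the regressor whose removal most reduces that criterion.

```python
import numpy as np

def loo_mse(X, y):
    """Mean-squared leave-one-out error of a linear least-squares model,
    computed analytically via the hat-matrix identity e_i / (1 - h_ii)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    h = np.clip(np.diag(H), 0.0, 1.0 - 1e-12)
    return np.mean((resid / (1.0 - h)) ** 2)

def backward_eliminate(X, y):
    """Greedily delete the regressor whose removal most reduces the LOO MSE;
    stop when no deletion improves it (fully automatic, no validation set)."""
    cols = list(range(X.shape[1]))
    best = loo_mse(X[:, cols], y)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        scores = [(loo_mse(X[:, [c for c in cols if c != j]], y), j) for j in cols]
        score, j = min(scores)
        if score <= best:
            best, cols = score, [c for c in cols if c != j]
            improved = True
    return cols, best
```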

Relevance:

60.00%

Publisher:

Abstract:

Eddy-covariance measurements of carbon dioxide fluxes were taken semi-continuously between October 2006 and May 2008 at 190 m height in central London (UK) to quantify emissions and study their controls. Inner London, with a population of 8.2 million (~5000 inhabitants per km2), is heavily built up, with 8% vegetation cover within the central boroughs. CO2 emissions were found to be mainly controlled by fossil fuel combustion (e.g. traffic, commercial and domestic heating). The measurement period allowed investigation of both diurnal patterns and seasonal trends. Diurnal averages of CO2 fluxes were found to be highly correlated with traffic, but it was changes in heating-related natural gas consumption and, to a lesser extent, photosynthetic activity that controlled the seasonal variability. Despite the measurements being taken at ca. 22 times the mean building height, coupling with street level was adequate, especially during daytime. Night-time saw a higher occurrence of stable or neutral stratification, especially in autumn and winter, which resulted in data loss in post-processing. No significant difference was found between the annual estimate of net exchange of CO2 for the expected measurement footprint and the values derived from the National Atmospheric Emissions Inventory (NAEI), with daytime fluxes differing by only 3%. This agreement with the NAEI data also supported the use of the simple flux footprint model applied to the London site, and suggests that individual roughness elements did not significantly affect the measurements, owing to the large ratio of measurement height to mean building height.
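The core eddy-covariance calculation behind these flux estimates is the covariance of vertical wind speed and CO2 density over an averaging period. The sketch below shows only that step; the full post-processing chain (coordinate rotation, despiking, density corrections, stability screening) is where, for example, stably stratified night-time periods are rejected.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic CO2 flux as the covariance of vertical wind speed w (m s-1)
    and CO2 density c (e.g. mg m-3) over one averaging period (e.g. 30 min).

    Real post-processing adds coordinate rotation, despiking, density (WPL)
    corrections and stability screening, which is the stage at which stably
    stratified night-time periods are typically rejected.
    """
    w_prime = w - np.mean(w)   # Reynolds decomposition: fluctuations about the mean
    c_prime = c - np.mean(c)
    return np.mean(w_prime * c_prime)
```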

Relevance:

60.00%

Publisher:

Abstract:

In this paper sequential importance sampling is used to assess the impact of observations on an ensemble prediction of the decadal path transitions of the Kuroshio Extension (KE). This particle filtering approach gives access to the probability density of the state vector, which allows us to determine the predictive power, an entropy-based measure, of the ensemble prediction. The proposed set-up makes use of an ensemble that, at each time, samples the climatological probability distribution. Then, in a post-processing step, the impact of different sets of observations is measured by the increase in predictive power of the ensemble over the climatological signal during one year. The method is applied in an identical-twin experiment for the Kuroshio Extension using a reduced-gravity shallow-water model. We investigate the impact of assimilating velocity observations from different locations during the elongated and the contracted meandering states of the KE. Optimal observation locations correspond to regions with strong potential vorticity gradients. For the elongated state the optimal location is in the first meander of the KE; during the contracted state of the KE it is located south of Japan, where the Kuroshio separates from the coast.
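The post-processing step can be illustrated with the short sketch below: importance weights are computed for the climatological ensemble from a set of velocity observations (assuming Gaussian observation errors), and the impact of the observations is summarized by an entropy-based measure, here the relative entropy of the weighted ensemble with respect to the equal-weight climatological ensemble. The paper's precise definition of predictive power may differ; this is only a stand-in.

```python
import numpy as np

def importance_weights(ensemble_obs, y, obs_std):
    """Sequential-importance-sampling weights for a climatological ensemble,
    assuming independent Gaussian observation errors (an illustrative choice).

    ensemble_obs : (N, m) observation-equivalents of the N particles
    y            : (m,) observed velocities
    obs_std      : observation error standard deviation
    """
    log_w = -0.5 * np.sum(((ensemble_obs - y) / obs_std) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())      # subtract max for numerical stability
    return w / w.sum()

def information_gain(w):
    """Entropy-based measure of the impact of the observations: relative
    entropy of the weighted ensemble with respect to the equal-weight
    (climatological) ensemble. The paper's exact 'predictive power'
    definition may differ; this is a stand-in."""
    n = len(w)
    w = np.clip(w, 1e-300, None)
    return float(np.sum(w * np.log(n * w)))
```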

Relevance:

60.00%

Publisher:

Abstract:

Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large-scale flood preparedness. Such global hazard maps can be generated using large-scale, physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and the resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maximum flows to derive a number of flood return periods. The return periods are calculated initially for a 25×25 km grid, which is then reprojected onto a 1×1 km grid to derive maps of higher resolution and to estimate the flooded fractional area for the individual 25×25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably well with a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
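The return-period calculation for a single grid cell can be sketched as below: a Gumbel distribution is fitted to the cell's 30 annual maxima and inverted at exceedance probability 1/T. The scipy-based example uses synthetic data purely for illustration.

```python
import numpy as np
from scipy.stats import gumbel_r

def return_levels(annual_maxima, return_periods=(2, 5, 10, 50, 100, 500)):
    """Fit a Gumbel (EV1) distribution to a grid cell's annual maximum flows
    and return the flow magnitude for each return period T, i.e. the level
    exceeded with probability 1/T in any given year."""
    loc, scale = gumbel_r.fit(np.asarray(annual_maxima, dtype=float))
    return {T: gumbel_r.isf(1.0 / T, loc=loc, scale=scale) for T in return_periods}

# Example with a synthetic 30-year record (the study uses 1979-2010):
levels = return_levels(np.random.gumbel(loc=1000.0, scale=250.0, size=30))
```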

Relevance:

60.00%

Publisher:

Abstract:

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools, and their adoption is growing. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated preprocessing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of preprocessing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
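As an illustration of the kind of FreeSurfer import that K-Surfer automates, the sketch below reads a FreeSurfer .stats table into a pandas DataFrame, assuming the usual layout ('#' comment header, a '# ColHeaders' line naming the columns, whitespace-delimited rows). It is not K-Surfer code, just a hand-rolled equivalent of one step.

```python
import pandas as pd

def read_freesurfer_stats(path):
    """Load a FreeSurfer .stats table (e.g. aseg.stats) into a DataFrame,
    assuming the usual layout: '#' comment header, a '# ColHeaders' line
    naming the columns, then whitespace-delimited data rows."""
    with open(path) as fh:
        lines = fh.readlines()
    columns = None
    for line in lines:
        if line.startswith("# ColHeaders"):
            columns = line.split()[2:]          # column names after '# ColHeaders'
    rows = [line.split() for line in lines
            if not line.startswith("#") and line.strip()]
    df = pd.DataFrame(rows, columns=columns)
    for col in df.columns:                      # convert numeric columns where possible
        try:
            df[col] = pd.to_numeric(df[col])
        except (ValueError, TypeError):
            pass
    return df
```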

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a quantitative evaluation of a tracking system on the PETS 2015 Challenge datasets using well-established performance measures. Using existing tools, the tracking system implements an end-to-end pipeline that includes object detection, tracking and post-processing stages. The evaluation results are presented for the provided sequences of both the ARENA and P5 datasets of the PETS 2015 Challenge. The results show an encouraging performance of the tracker in terms of accuracy, but a greater tendency to be prone to cardinality error and ID changes on both datasets. Moreover, the analysis shows a better performance of the tracker on visible imagery than on thermal imagery.
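One of the well-established measures alluded to above is the CLEAR MOT accuracy score (MOTA), which folds misses, false positives (cardinality errors) and identity switches into a single number. The sketch below shows that formula; the PETS 2015 evaluation protocol defines its own specific set of measures, so this is illustrative only.

```python
def mota(false_negatives, false_positives, id_switches, num_gt_objects):
    """Multiple Object Tracking Accuracy (one of the CLEAR MOT measures):
    penalises missed targets, false alarms (cardinality errors) and identity
    switches relative to the total number of ground-truth objects."""
    errors = false_negatives + false_positives + id_switches
    return 1.0 - errors / float(num_gt_objects)

# Example with made-up counts:
# mota(false_negatives=120, false_positives=45, id_switches=8, num_gt_objects=1500)
```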

Relevance:

30.00%

Publisher:

Abstract:

It is just over 30 years since the definitive identification of the adrenocorticotrophin (ACTH) precursor, pro-opiomelanocortin (POMC). Although first characterised in the anterior and intermediate lobes of the pituitary, POMC is also expressed in a number of both central and peripheral tissues, including the skin, central nervous tissue and placenta. Following synthesis, POMC undergoes extensive post-translational processing, producing not only ACTH but also a number of other biologically active peptides. The extent and pattern of this processing is tissue-specific, the end result being the tissue-dependent production of different combinations of peptides from the same precursor. These peptides have a diverse range of biological roles, from pigmentation to adrenal function to the regulation of feeding. This level of complexity has resulted in POMC becoming the archetypal model for prohormone processing, illustrating how a single protein combined with post-translational modification can serve such a diversity of roles.

Relevance:

30.00%

Publisher:

Abstract:

Several studies have highlighted the importance of the cooling period in oil absorption in deep-fat fried products. Specifically, it has been established that the largest proportion of the oil that ends up in the food is sucked into the porous crust region after the fried product is removed from the oil bath, stressing the importance of this time interval. The main objective of this paper was to develop a predictive mechanistic model that can be used to understand the principles behind post-frying cooling oil absorption kinetics, and that can also help identify the key parameters affecting the final oil intake of the fried product. The model was developed for two different geometries, an infinite slab and an infinite cylinder, and was divided into two main sub-models, one describing the immersion frying period itself and the other describing the post-frying cooling period. The immersion frying period was described by a transient moving-front model that considered the movement of the crust/core interface, whereas post-frying cooling oil absorption was considered to be a pressure-driven flow mediated by capillary forces. A key element in the model was the hypothesis that oil suction would only begin once a positive pressure driving force had developed. The mechanistic model was based on measurable physical and thermal properties and process parameters, with no need for empirical data fitting, and can be used to study oil absorption in any deep-fat fried product that satisfies the assumptions made.
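The 'positive pressure driving force' hypothesis can be pictured with the sketch below: the capillary (Young-Laplace) pressure at the pore mouth is offset against the gas pressure inside the pore, which falls below atmospheric as the crust cools, and oil suction is taken to start once the net driving pressure becomes positive. This is a simplified stand-in for the paper's sub-model, with assumed property values.

```python
import numpy as np

def capillary_pressure(surface_tension, contact_angle_rad, pore_radius):
    """Young-Laplace capillary pressure for a cylindrical pore (Pa)."""
    return 2.0 * surface_tension * np.cos(contact_angle_rad) / pore_radius

def net_driving_pressure(p_atm, p_gas_in_pore, surface_tension,
                         contact_angle_rad, pore_radius):
    """Illustrative pressure driving force for oil suction during cooling:
    capillary pressure plus the deficit of the pore gas pressure (which falls
    as the crust cools) below atmospheric. In this reading of the model, oil
    absorption starts once this quantity becomes positive. This is a
    simplified stand-in, not the paper's full formulation."""
    return (capillary_pressure(surface_tension, contact_angle_rad, pore_radius)
            + (p_atm - p_gas_in_pore))

# Example with assumed values: oil surface tension ~0.03 N/m, contact angle ~0,
# pore radius 5 micrometres, pore gas slightly below atmospheric pressure.
dp = net_driving_pressure(101325.0, 101000.0, 0.03, 0.0, 5e-6)
oil_suction_has_started = dp > 0.0
```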

Relevance:

30.00%

Publisher:

Abstract:

The mathematical models that describe the immersion-frying period and the post-frying cooling period of an infinite slab or an infinite cylinder were solved and tested. Results were successfully compared with those found in the literature or obtained experimentally, and were discussed in terms of the hypotheses and simplifications made. The models were used as the basis of a sensitivity analysis. Simulations showed that a decrease in slab thickness and core heat capacity resulted in faster crust development. On the other hand, an increase in oil temperature and in the boiling heat transfer coefficient between the oil and the surface of the food accelerated crust formation. The model for oil absorption during cooling was analysed using the tested post-frying cooling equation to determine the moment at which a positive pressure driving force, allowing oil suction within the pore, originated. It was found that as crust layer thickness, pore radius and ambient temperature decreased, so did the time needed to start the absorption. On the other hand, as the effective convective heat transfer coefficient between the air and the surface of the slab increased, the required cooling time decreased. In addition, the time needed to allow oil absorption during cooling was found to be extremely sensitive to pore radius, indicating the importance of accurate pore size determination in future studies.

Relevance:

30.00%

Publisher:

Abstract:

Objective. This study investigated whether trait positive schizotypy or trait dissociation was associated with increased levels of data-driven processing and symptoms of post-traumatic distress following a road traffic accident. Methods. Forty-five survivors of road traffic accidents were recruited from a London Accident and Emergency service. Each completed measures of trait positive schizotypy, trait dissociation, data-driven processing, and post-traumatic stress. Results. Trait positive schizotypy was associated with increased levels of data-driven processing and post-traumatic symptoms during a road traffic accident, whereas trait dissociation was not. Conclusions. Previous results which report a significant relationship between trait dissociation and post-traumatic symptoms may be an artefact of the relationship between trait positive schizotypy and trait dissociation.

Relevance:

30.00%

Publisher:

Abstract:

Background: Intrusions are common symptoms of both post-traumatic stress disorder (PTSD) and schizophrenia. Steel et al. (2005) suggest that an information processing style characterized by weak trait contextual integration renders psychotic individuals vulnerable to intrusive experiences. This 'contextual integration hypothesis' was tested in individuals reporting anomalous experiences in the absence of a need for care. Methods: Twenty-six low schizotypes and twenty-three individuals reporting anomalous experiences were shown a traumatic film with and without a concurrent visuo-spatial task. Participants rated post-traumatic intrusions for frequency and form, and completed self-report measures of information processing style. It was predicted that, due to their weaker trait contextual integration, the anomalous experiences group would (1) exhibit more intrusions following exposure to the trauma film; (2) display intrusions characterised by more PTSD qualities; and (3) show a greater reduction of intrusions with the concurrent visuo-spatial task. Results: As predicted, the anomalous experiences group reported a lower level of trait contextual integration and more intrusions than the low schizotypes, both immediately after watching the film and during the following seven days. Their post-traumatic intrusive memories were more PTSD-like (more intrusive, vivid and associated with emotion). The visuo-spatial task had no effect on the number of intrusions in either group. Conclusions: These findings provide some support for the proposal that weak trait contextual integration underlies the development of intrusions within both PTSD and psychosis.

Relevance:

30.00%

Publisher:

Abstract:

The fast increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.