884 results for test data generation
Abstract:
Numerical predictions produced by the SMARTFIRE fire field model are compared with experimental data. The predictions consist of gas temperatures at several locations within the compartment over a 60 min period. The test fire, produced by a burning wood crib, attained a maximum heat release rate of approximately 11 MW. The fire is intended to represent a non-spreading fire (i.e. a single fuel source) in a moderately sized ventilated room. The experimental data formed part of the CIB Round Robin test series. Two simulations are produced, one with a relatively coarse mesh and the other with a finer mesh. While the SMARTFIRE simulations made use of a simple volumetric heat release rate model, both simulations were capable of reproducing the overall qualitative results, though both tended to overpredict the measured temperatures. The finer mesh simulation was better able to reproduce the qualitative features of the experimental data; the maximum recorded experimental temperature (1214 °C after 39 min) was over-predicted in the fine mesh simulation by 12%.
Abstract:
This paper presents a generic framework that can be used to describe study plans using meta-data. The context of this research and the associated technologies and standards are presented. The approach adopted here was developed within the mENU project, which aims to provide a model for a European Networked University. The methodology for the design of the generic framework is discussed and the main design requirements are presented. The approach is based on a set of templates containing the meta-data required to describe programs of study, consisting of generic building elements annotated appropriately. The process followed to develop the templates is presented, together with a set of evaluation criteria to test the suitability of the approach. The templates' structure is presented and example templates are shown. A first evaluation has shown that the proposed framework can provide a flexible and competent means for the generic description of study plans for the purposes of a networked university.
Abstract:
This paper investigates an isothermal fatigue test for solder joints developed at the NPL. The test specimen is a lap joint between two copper arms. During the test, the displacement at the ends of the copper arms is controlled and the force is measured. The modelling results in the paper show that the displacement across the solder joint is not equal to the displacement applied at the end of the specimen, owing to deformation within the copper arms; a method is described to compensate for this difference. The strain distribution in the solder was determined by finite element analysis and compared with the distribution generated by a theoretical 'ideal' test, which generates an almost pure shear mode in the solder. Using a damage-based constitutive law, the shape of the crack generated in the specimen has been predicted for both the actual test and the ideal pure shear test. Results from the simulations are also compared with experimental data using SnAgCu solder.
Abstract:
The Guardian newspaper (21st October 2005) informed its readers that: "Stanford University in California is to make its course content available on iTunes...The service, Stanford on iTunes, will provide…downloads of faculty lectures, campus events, performances, book readings, music recorded by Stanford students and even podcasts of Stanford football games". The emergence of podcasting as a means of delivering audio to users has clearly excited educational technologists around the world. This paper explores the technologies behind podcasting and how it could be used to develop and deliver new e-learning material, referring to the work done to create podcasts of lectures for University of Greenwich students.
Abstract:
An analysis of generic attacks on, and countermeasures for, block cipher based message authentication code (MAC) algorithms in sensor applications is undertaken; the conclusions are used in the design of two new MAC constructs, Quicker Block Chaining MAC1 (QBC-MAC1) and Quicker Block Chaining MAC2 (QBC-MAC2). Using software simulation, we show that the new constructs improve CPU instruction clock-cycle usage and energy requirements when benchmarked against the de facto Cipher Block Chaining MAC (CBC-MAC) construct used in the TinySec security protocol for wireless sensor networks.
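The abstract benchmarks the new constructs against the classic CBC-MAC. Since the QBC-MAC designs are not detailed here, the following is a minimal sketch of the standard CBC-MAC chaining structure only, using a hypothetical toy block cipher (deliberately insecure, for illustration; a real deployment such as TinySec would use a genuine block cipher like Skipjack):

```python
BLOCK = 8  # toy 8-byte block size (assumption for illustration)

def toy_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in block cipher: NOT secure, just a keyed byte-wise mix."""
    return bytes(((b + key[i % len(key)]) * 7 + 1) % 256 for i, b in enumerate(block))

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    """Classic CBC-MAC: XOR each message block into a chaining state, encrypt,
    and release the final cipher block as the authentication tag."""
    if len(msg) % BLOCK:
        msg += b"\x00" * (BLOCK - len(msg) % BLOCK)  # zero-pad to a whole block
    state = bytes(BLOCK)  # zero IV, as in standard CBC-MAC
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        state = toy_cipher(key, bytes(a ^ b for a, b in zip(state, block)))
    return state  # final block is the tag

tag = cbc_mac(b"secret key", b"sensor reading: 21.5C")
```

Each message block must pass through the cipher in sequence, which is the per-block cost the QBC-MAC constructs aim to reduce.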
Abstract:
Görzig, H., Engel, F., Brocks, H., Vogel, T. & Hemmje, M. (2015, August). Towards Data Management Planning Support for Research Data. Paper presented at the ASE International Conference on Data Science, Stanford, United States of America.
Abstract:
Seasonal changes in altimeter data are derived for the North Atlantic Ocean. Altimeter data are then used to examine annually propagating structure along 26°N. By averaging the altimeter data into monthly values or by Fourier analysis, a positive anomaly can be followed from 17°W to ~50°W along ~26°N. The methods give a westward travel speed of 1° of longitude per month and a half-life of one year for the average decaying structure. At ~50°W, 26°N, the average structure is about 2.8 years old with an elevation signal of ~1 cm, having travelled ~3300 km westward. The mean positive anomaly results from the formation of anticyclonic eddies, which are generally formed annually south of the Canary Islands by late summer and then travel westward near 26°N. Individual eddy structure along 26°N is examined and related to in situ measurements and to anomalies in the annual seasonal concentration cycle of SeaWiFS chlorophyll-a.
Abstract:
Noise is one of the main factors degrading the quality of original multichannel remote sensing data, and its presence influences classification efficiency, object detection, etc. Thus, pre-filtering is often used to remove noise and improve the solving of the final tasks of multichannel remote sensing. Recent studies indicate that the classical additive noise model is not adequate for images formed by modern multichannel sensors operating in the visible and infrared bands. However, this fact is often ignored by researchers designing noise removal methods and algorithms. Because of this, we focus on the classification of multichannel remote sensing images in the case of signal-dependent noise present in the component images. Three approaches to the filtering of multichannel images under the considered noise model are analysed, all based on the discrete cosine transform (DCT) in blocks. The study is carried out not only in terms of conventional filtering efficiency metrics (MSE) but also in terms of multichannel data classification accuracy (probability of correct classification, confusion matrix). The proposed classification system combines a pre-processing stage, in which a DCT-based filter processes the blocks of the multichannel remote sensing image, with the classification stage. Two modern classifiers are employed: a radial basis function neural network and support vector machines. Simulations are carried out for a three-channel image from the Landsat TM sensor. Different cases of learning are considered: using noise-free samples of the test multichannel image, the noisy multichannel image, or the pre-filtered one. It is shown that using the pre-filtered image for training produces better classification than learning on the noisy image. The best results for both groups of quantitative criteria are obtained when the proposed 3D discrete cosine transform filter equipped with a variance-stabilizing transform is applied. The classification results obtained for data pre-filtered in different ways are in agreement for both considered classifiers. A comparison of classifier performance is also carried out: the radial basis function neural network classifier is less sensitive to noise in the original images, but after pre-filtering the performance of both classifiers is approximately the same.
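As a rough illustration of the kind of processing described (hypothetical, not the authors' 3D filter), the sketch below combines block-wise DCT hard-threshold denoising with the Anscombe variance-stabilizing transform, which converts Poisson-like signal-dependent noise into approximately additive noise of constant variance before filtering:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

def denoise_block(block: np.ndarray, threshold: float) -> np.ndarray:
    """Hard-threshold small 2-D DCT coefficients of a square image block."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T                     # forward 2-D DCT
    coeffs[np.abs(coeffs) < threshold] = 0.0     # suppress noise-level coefficients
    return D.T @ coeffs @ D                      # inverse 2-D DCT

def anscombe(x: np.ndarray) -> np.ndarray:
    """Variance-stabilizing transform for Poisson-like signal-dependent noise."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y: np.ndarray) -> np.ndarray:
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

In a pipeline like the one described, each image channel would be stabilized with `anscombe`, filtered block by block with `denoise_block`, and mapped back with `inverse_anscombe` before classification.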
Abstract:
The phytoplankton colour index (PCI) of the Continuous Plankton Recorder (CPR) survey is an in situ measure of ocean colour, considered a proxy for phytoplankton biomass. The PCI has been used extensively to describe the major spatiotemporal patterns of phytoplankton in the North Atlantic Ocean and North Sea since 1931. Despite its wide application, the lack of an adequate evaluation of the PCI's quantitative nature has been an important limitation. To address this concern, a field trial over the main production season was undertaken to assess the numerical values assigned by previous investigations to each category of greenness of the PCI. CPRs were towed across the English Channel from Roscoff to Plymouth in each of 8 consecutive months, producing 76 standard CPR samples, each representing 10 nautical miles of tow. The results of this experiment test and update the PCI methodology, and confirm the validity of this long-term in situ ocean colour data set. In addition, using a 60-year time series of the PCI of the western English Channel, a comparison is made between the previous and the currently revised experimental calculations of the PCI.
Abstract:
1. A first step in the analysis of complex movement data often involves discretisation of the path into a series of step-lengths and turns, for example in the analysis of specialised random walks such as Lévy flights. However, the identification of turning points, and therefore step-lengths, in a tortuous path depends on ad hoc parameter choices, and studies testing for movement patterns such as Lévy flights in these data have consequently generated debate. In contrast, studies focusing on one-dimensional (1D) data, as in the vertical displacements of marine pelagic predators, where turning points can be identified unambiguously, have provided strong support for Lévy flight movement patterns. 2. Here, we investigate how step-length distributions in 3D movement patterns would be interpreted by tags recording in 1D (i.e. depth) and demonstrate the dimensional symmetry previously shown mathematically for Lévy-flight movements. We test the veracity of this symmetry by simulating several measurement errors common in empirical datasets and find Lévy patterns and exponents to be robust to low-quality movement data. 3. We then consider exponential and composite Brownian random walks and show that these also project into 1D with sufficient symmetry to be clearly identifiable as such. 4. By extending the symmetry paradigm, we propose a new methodology for step-length identification in 2D or 3D movement data. The methodology is successfully demonstrated in a re-analysis of wandering albatross Global Positioning System (GPS) location data previously analysed using a complex methodology to determine bird-landing locations as turning points in a Lévy walk. For these high-resolution GPS data, we show strong evidence for albatross foraging patterns approximated by truncated Lévy flights spanning over 3.5 orders of magnitude. 5. Our simple methodology and freely available software can be used with any 2D or 3D movement data at any scale or resolution and are robust to common empirical measurement errors. The method should find wide applicability in the field of movement ecology, spanning the study of motile cells to humans.
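The 3D-to-1D symmetry of Lévy step-length exponents described in point 2 can be illustrated numerically. This toy sketch (an assumption-laden illustration, not the authors' software) draws power-law step lengths with exponent mu = 2, projects each step onto one axis with a uniformly random direction, and recovers essentially the same exponent from the projected data by maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(42)

def levy_steps(n: int, mu: float, xmin: float = 1.0) -> np.ndarray:
    """Draw step lengths from a power law P(l) ~ l^-mu via inverse transform."""
    u = rng.random(n)
    return xmin * u ** (-1.0 / (mu - 1.0))

def mle_exponent(steps: np.ndarray, xmin: float = 1.0) -> float:
    """Maximum-likelihood estimate of the power-law exponent mu
    (the standard Hill/Clauset-style estimator on steps >= xmin)."""
    s = steps[steps >= xmin]
    return 1.0 + len(s) / np.sum(np.log(s / xmin))

steps3d = levy_steps(50_000, mu=2.0)

# Project each 3-D step onto one axis (e.g. depth) with a random direction;
# the projected length is the step length times |cos(theta)|.
cos_theta = rng.uniform(-1.0, 1.0, size=steps3d.size)
steps1d = np.abs(steps3d * cos_theta)

# Both the 3-D and the projected 1-D data recover mu close to the
# generating value of 2.0, illustrating the dimensional symmetry.
mu_3d = mle_exponent(steps3d)
mu_1d = mle_exponent(steps1d)
```

The projection halves the usable sample (steps falling below `xmin` are discarded) but leaves the tail exponent intact, which is the symmetry the paper exploits for tags that record depth only.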
Abstract:
Shade plots, simple visual representations of abundance matrices from multivariate species assemblage studies, are shown to be an effective aid in choosing an overall transformation (or other pre-treatment) of quantitative data for long-term use, striking an appropriate balance between dominant and less abundant taxa in ensuing resemblance-based multivariate analyses. Though the exposition is entirely general and applicable to all community studies, detailed illustrations of the comparative power and interpretative possibilities of shade plots are given for two estuarine assemblage studies in south-western Australia: (a) macrobenthos in the upper Swan Estuary over a two-year period covering a highly significant precipitation event for the Perth area; and (b) a wide-scale spatial study of the nearshore fish fauna of five divergent estuaries. The utility of transformations of intermediate severity is again demonstrated and, with greater novelty, the potential importance of further mild transformation of all data after differential down-weighting (dispersion weighting) of spatially 'clumped' or 'schooled' species is shown. Among the new techniques utilised is a two-way form of the RELATE test, which demonstrates the linking of assemblage structure (fish) to continuous environmental variables (water quality), having removed a categorical factor (estuary differences). Re-orderings of the sample and species axes in the associated shade plots are seen to provide transparent explanations at the species level for such continuous multivariate patterns.
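The pre-treatments discussed (overall transformations of increasing severity, followed by dispersion weighting of clumped or schooled species) can be sketched as below. The abundance matrix and group labels are hypothetical, and this is a generic illustration rather than the PRIMER implementation:

```python
import numpy as np

# Hypothetical abundance matrix: rows = samples, columns = species/taxa.
# Species 0 is heavily clumped (schooling) in the first group of samples.
abund = np.array([
    [900.0, 4.0, 0.0],
    [850.0, 6.0, 1.0],
    [  0.0, 5.0, 3.0],
    [  0.0, 7.0, 2.0],
])

def transform(x: np.ndarray, severity: str) -> np.ndarray:
    """Overall transformations of increasing severity, of the kind
    compared via shade plots: none -> sqrt -> fourth root -> log -> P/A."""
    return {
        "none":        x,
        "sqrt":        np.sqrt(x),
        "fourth_root": x ** 0.25,
        "log":         np.log1p(x),
        "pres_abs":    (x > 0).astype(float),
    }[severity]

def dispersion_weight(x: np.ndarray, groups) -> np.ndarray:
    """Down-weight clumped/schooled species by their average within-group
    index of dispersion (variance-to-mean ratio); never up-weight."""
    groups = np.asarray(groups)
    d = np.ones(x.shape[1])
    for j in range(x.shape[1]):
        ratios = []
        for g in np.unique(groups):
            col = x[groups == g, j]
            if col.mean() > 0:
                ratios.append(col.var(ddof=1) / col.mean())
        if ratios:
            d[j] = max(float(np.mean(ratios)), 1.0)
    return x / d
```

Applying `dispersion_weight` first and then a mild `transform` mirrors the two-stage pre-treatment whose value the study demonstrates.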
Abstract:
Zooplankton play an important role in our oceans, in biogeochemical cycling and in providing a food source for commercially important fish larvae. However, difficulties in correctly identifying zooplankton hinder our understanding of their roles in marine ecosystem functioning and can prevent detection of long-term changes in their community structure. The advent of massively parallel next-generation sequencing technology allows DNA sequence data to be recovered directly from whole community samples. Here we assess the ability of such sequencing to quantify the richness and diversity of a mixed zooplankton assemblage from a productive time series site in the Western English Channel. Methodology/Principal Findings: Plankton net hauls (200 µm) were taken at the Western Channel Observatory station L4 in September 2010 and January 2011. These samples were analysed by microscopy and by metagenetic analysis of the 18S nuclear small subunit ribosomal RNA gene using the 454 pyrosequencing platform. Following quality control, a total of 419,041 sequences were obtained for all samples. The sequences clustered into 205 operational taxonomic units (OTUs) using a 97% similarity cut-off. Allocation of taxonomy by comparison with the National Centre for Biotechnology Information database identified 135 OTUs to species level, 11 to genus level and 1 to order; <2.5% of sequences were classified as unknowns. By comparison, a skilled microscopic analyst was able to routinely enumerate only 58 taxonomic groups. Conclusions: Metagenetics reveals a previously hidden taxonomic richness, especially for Copepoda and hard-to-identify meroplankton such as Bivalvia, Gastropoda and Polychaeta. It also reveals rare species and parasites. We conclude that next-generation sequencing of 18S amplicons is a powerful tool for elucidating the true diversity and species richness of zooplankton communities. While this approach already allows broad diversity assessments of plankton, it will become increasingly attractive as sequence reference libraries of accurately identified individuals become better populated.
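The richness and diversity assessments described can be sketched with standard community summary statistics; the OTU read counts below are hypothetical placeholders, not the study's data:

```python
import math

def richness(counts) -> int:
    """Number of taxa/OTUs with at least one read."""
    return sum(1 for c in counts if c > 0)

def shannon(counts) -> float:
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical OTU read counts after 97% clustering (illustrative only).
otu_reads = [12000, 8000, 5000, 300, 40, 7, 3, 1, 1]

otu_richness = richness(otu_reads)   # 9 OTUs observed
otu_diversity = shannon(otu_reads)   # low relative to richness: a few taxa dominate
```

The contrast the abstract draws (205 OTUs versus 58 microscopically enumerated groups) is exactly a richness comparison of this kind, computed on real amplicon counts.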
Abstract:
In this paper, NOx emissions modelling for real-time operation and control of a 200 MWe coal-fired power generation plant is studied. Three model types are compared. For the first model, the fundamentals governing the NOx formation mechanisms and a system identification technique are used to develop a grey-box model. Then a linear AutoRegressive model with eXogenous inputs (ARX) and a non-linear ARX model (NARX) are built. Operational plant data are used for modelling and validation. Model cross-validation tests show that the developed grey-box model consistently produces better overall long-term prediction performance than the other two models.
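A linear ARX model of the kind compared here can be fitted by ordinary least squares over lagged outputs and inputs. The sketch below uses simulated input-output data (an assumption for illustration, not the plant model itself):

```python
import numpy as np

def fit_arx(y: np.ndarray, u: np.ndarray, na: int, nb: int):
    """Least-squares fit of a linear ARX model
    y[k] = sum_i a_i * y[k-i] + sum_j b_j * u[k-j] + e[k]."""
    n0 = max(na, nb)
    rows = []
    for k in range(n0, len(y)):
        # Regressor: most recent na outputs and nb inputs, newest first.
        rows.append(np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]])
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]

# Hypothetical "plant": first-order dynamics driven by an input signal,
# standing in for NOx output driven by combustion conditions.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

a, b = fit_arx(y, u, na=1, nb=1)  # recovers a[0] ~ 0.8, b[0] ~ 0.5
```

The NARX variant replaces the linear regressor with a non-linear function of the same lagged terms, and the grey-box model constrains the structure with the physical NOx formation mechanisms.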
Abstract:
AIMS/HYPOTHESIS: This study examined the biological effects of the GIP receptor antagonist (Pro3)GIP and the GLP-1 receptor antagonist exendin(9-39)amide.
METHODS: Cyclic AMP production was assessed in Chinese hamster lung fibroblasts transfected with human GIP or GLP-1 receptors, respectively. In vitro insulin release studies were assessed in BRIN-BD11 cells, while in vivo insulinotropic and glycaemic responses were measured in obese diabetic (ob/ob) mice.
RESULTS: In GIP receptor-transfected fibroblasts, (Pro3)GIP or exendin(9-39)amide inhibited GIP-stimulated cyclic AMP production with maximal inhibition of 70.0 ± 3.5% and 73.5 ± 3.2% at 10⁻⁶ mol/l, respectively. In GLP-1 receptor-transfected fibroblasts, exendin(9-39)amide inhibited GLP-1-stimulated cyclic AMP production with maximal inhibition of 60 ± 0.7% at 10⁻⁶ mol/l, whereas (Pro3)GIP had no effect. (Pro3)GIP specifically inhibited GIP-stimulated insulin release (86%; p<0.001) from clonal BRIN-BD11 cells, but had no effect on GLP-1-stimulated insulin release. In contrast, exendin(9-39)amide inhibited both GIP- and GLP-1-stimulated insulin release (57% and 44%, respectively; p<0.001). Administration of (Pro3)GIP, exendin(9-39)amide or a combination of both peptides (25 nmol/kg body weight, i.p.) to fasted (ob/ob) mice decreased the plasma insulin responses by 42%, 54% and 49%, respectively (p<0.01 to p<0.001). The hyperinsulinaemia of non-fasted (ob/ob) mice was decreased by 19%, 27% and 18% (p<0.05 to p<0.01) by injection of (Pro3)GIP, exendin(9-39)amide or the combined peptides, but accompanying changes in plasma glucose were small.
CONCLUSIONS/INTERPRETATION: These data show that (Pro3)GIP is a specific GIP receptor antagonist. Furthermore, feeding studies in one commonly used animal model of obesity and diabetes, the (ob/ob) mouse, suggest that GIP is the major physiological component of the enteroinsular axis, contributing approximately 80% of incretin-induced insulin release.
Abstract:
The ingress of chlorides into concrete occurs predominantly by diffusion, and the resistance of concrete to chloride transport is generally represented by its coefficient of diffusion. Determining this coefficient normally requires a long test duration (many months), so rapid test methods based on the electrical migration of ions have been widely used. The current procedure for chloride ion migration tests involves placing a concrete disc between an ion source solution and a neutral solution and accelerating the transport of ions from the source solution to the neutral solution by applying a potential difference across the disc. This means that, in order to determine the chloride transport resistance of concrete cover, cores must be extracted from the structure and tested in a laboratory. To facilitate testing of the concrete cover on site, an in situ ion migration test (hereafter referred to as the PERMIT ion migration test) was developed. The PERMIT ion migration test was validated in the laboratory through a comparative investigation, correlating its results with the migration coefficient from the one-dimensional chloride migration test, the effective diffusion coefficient from the normal diffusion test and the apparent diffusion coefficient determined from chloride profiles. A range of concrete mixes made with ordinary Portland cement was used for this purpose. In addition, the effects on the in situ migration coefficients of preferential flow of ions close to the concrete surface and of the proximity of reinforcement within the test area were investigated. It was observed that the in situ migration index, obtained within one working day, correlated well with the chloride diffusion coefficients from the other tests. The quality of the surface layer of the cover concrete and the location of the reinforcement within the test area were found to affect the flow of ions through the concrete during the test. Based on the data, a procedure for carrying out the PERMIT ion migration test was standardised.
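The diffusion coefficient discussed characterises chloride ingress through the standard error-function solution of Fick's second law, C(x, t) = Cs · erfc(x / (2√(Dt))). A small sketch with hypothetical parameter values (the surface chloride content and apparent diffusion coefficient below are illustrative assumptions, not values from the study):

```python
import math

def chloride_profile(x_mm: float, t_years: float,
                     D_m2_per_s: float, Cs: float) -> float:
    """Chloride concentration at depth x after time t, from the
    error-function solution of Fick's second law:
    C(x, t) = Cs * erfc(x / (2 * sqrt(D * t)))."""
    x = x_mm / 1000.0                     # mm -> m
    t = t_years * 365.25 * 24 * 3600.0    # years -> s
    return Cs * math.erfc(x / (2.0 * math.sqrt(D_m2_per_s * t)))

# Hypothetical values: surface chloride 0.4% by mass of concrete,
# apparent diffusion coefficient 1e-12 m^2/s, 10 years of exposure.
c_surface = chloride_profile(0.0, 10.0, 1e-12, 0.4)   # equals Cs at the surface
c_25mm = chloride_profile(25.0, 10.0, 1e-12, 0.4)     # lower at cover depth
```

Fitting this expression to a measured chloride profile is how the apparent diffusion coefficient mentioned in the abstract is determined, which is the quantity the rapid migration indices are correlated against.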