67 results for underwater data muling
Abstract:
Nitrogen dioxide is a primary pollutant used in the estimation of the air quality index, and its excessive presence may cause significant environmental and health problems. In the current work, we propose characterizing the evolution of NO2 levels using geostatistical approaches that handle both the space and time coordinates. To develop our proposal, a first exploratory analysis was carried out on daily values of the target variable, measured in Portugal from 2004 to 2012, which identified three influential covariates (type of site, environment, and month of measurement). In a second step, appropriate geostatistical tools were applied to model the trend and the space-time variability, enabling the use of kriging techniques for prediction without requiring data from a dense monitoring network. This methodology has valuable applications, as it can provide an accurate assessment of nitrogen dioxide concentrations at sites where data have been lost or where there is no monitoring station nearby.
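The space-time modelling summarized above starts from an empirical semivariogram, the basic geostatistical tool for quantifying how dissimilarity between measurements grows with separation. A minimal, purely spatial sketch (not the authors' code; the station coordinates and NO2 readings below are invented for illustration):

```python
import math
from itertools import combinations

def empirical_semivariogram(points, values, bin_width, n_bins):
    """Empirical semivariogram: for each distance bin, average
    0.5 * (z_i - z_j)^2 over all station pairs falling in that bin."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for i, j in combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        h = math.hypot(x1 - x2, y1 - y2)   # pair separation distance
        b = int(h // bin_width)            # which distance bin it falls in
        if b < n_bins:
            sums[b] += 0.5 * (values[i] - values[j]) ** 2
            counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Hypothetical NO2 readings (ug/m3) at four monitoring stations.
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 3.0)]
no2 = [22.0, 24.0, 30.0, 41.0]
gamma = empirical_semivariogram(stations, no2, bin_width=2.0, n_bins=3)
print(gamma)  # semivariance per distance bin
```

In practice a theoretical variogram model is fitted to these empirical values and then used to build the kriging weights; the paper's space-time setting adds a temporal coordinate to the distance computation.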
Abstract:
Biofilm research is growing more diverse and dependent on high-throughput technologies, and the large-scale production of results complicates data substantiation. In particular, experimental protocols are often adapted to meet the needs of a particular laboratory, and no statistical validation of the modified method is provided. This paper discusses the impact of intra-laboratory adaptation and non-rigorous documentation of experimental protocols on biofilm data interchange and validation. The case study is a non-standard, but widely used, workflow for Pseudomonas aeruginosa biofilm development, considering three analysis assays: the crystal violet (CV) assay for biomass quantification, the XTT assay for respiratory activity assessment, and the colony forming units (CFU) assay for determination of cell viability. The ruggedness of the protocol was assessed by introducing small changes in the biofilm growth conditions, which simulate minor protocol adaptations and non-rigorous protocol documentation. Results show that even minor variations in the biofilm growth conditions may affect the results considerably, and that the biofilm analysis assays lack repeatability. Intra-laboratory validation of non-standard protocols is found to be critical to ensure data quality and to enable the comparison of results within and among laboratories.
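The repeatability assessment described above can be illustrated with the coefficient of variation, a standard replicate-dispersion measure (this is a generic sketch, not the paper's statistical protocol; the OD readings are hypothetical):

```python
import statistics

def coefficient_of_variation(replicates):
    """CV (%) = 100 * sample stdev / mean; a common repeatability measure."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical crystal-violet OD570 readings: baseline protocol vs a
# minor adaptation of the growth conditions (invented numbers).
baseline = [1.10, 1.05, 1.15, 1.08]
adapted = [0.90, 1.30, 0.70, 1.20]

cv_base = coefficient_of_variation(baseline)
cv_adapt = coefficient_of_variation(adapted)
print(round(cv_base, 1), round(cv_adapt, 1))
```

A markedly larger CV under the adapted conditions would flag exactly the kind of ruggedness problem the paper reports.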
Abstract:
For any vacuum initial data set, we define a local, non-negative scalar quantity which vanishes at every point of the data hypersurface if and only if the data are Kerr initial data. Our scalar quantity depends only on the quantities used to construct the vacuum initial data set, namely the Riemannian metric defined on the initial data hypersurface and a symmetric tensor which plays the role of the second fundamental form of the embedded initial data hypersurface. The dependency is algorithmic in the sense that, given the initial data, one can compute the scalar quantity by algebraic and differential manipulations, making it suitable for implementation in a numerical code. The scalar could also be useful in studies of the non-linear stability of the Kerr solution, because it serves to measure the deviation of a vacuum initial data set from Kerr initial data in a local and algorithmic way.
Discussing Chevalier's data on the efficiency of tariffs for American and French canals in the 1830s
Abstract:
This article revisits Michel Chevalier's work and his discussions of tariffs. Chevalier shifted from Saint-Simonism to economic liberalism over the course of the 19th century. His influence was soon felt in the political world and in economic debates, mainly because of his discussion of tariffs as instruments of efficient transport policies. This work examines Chevalier's thoughts on tariffs by revisiting his masterpiece, Le Cours d'Économie Politique. Data Envelopment Analysis (DEA) was conducted to test Chevalier's hypothesis on the inefficiency of French tariffs. The analysis shows that Chevalier's claims about French tariffs are not validated by DEA.
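DEA in general solves one linear program per decision-making unit, but in the special one-input/one-output case the CCR efficiency score reduces to each unit's output/input ratio scaled by the best ratio. A minimal sketch under that simplification, with invented canal figures (not Chevalier's data):

```python
def dea_single_ratio(inputs, outputs):
    """CCR efficiency for the one-input/one-output case: each DMU's
    output/input ratio divided by the best ratio, so frontier units
    score exactly 1.0 and all others score in (0, 1)."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical canals: input = annual cost, output = freight tonnage.
cost = [100.0, 80.0, 120.0]
tonnage = [500.0, 480.0, 360.0]
scores = dea_single_ratio(cost, tonnage)
print([round(s, 2) for s in scores])
```

With several inputs and outputs (the realistic setting for canal tariffs) the ratio shortcut no longer applies and a linear-programming solver is needed.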
Abstract:
Master's dissertation in Systems Engineering
Abstract:
Integrated master's dissertation in Biomedical Engineering
Abstract:
OpenAIRE supports the European Commission Open Access policy by providing an infrastructure for researchers to comply with the European Union Open Access mandate. The current OpenAIRE infrastructure and services, resulting from the OpenAIRE and OpenAIREplus FP7 projects, build on Open Access research results from a wide range of repositories and other data sources: institutional or thematic publication repositories, Open Access journals, data repositories, Current Research Information Systems and aggregators. (...)
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
This data article relates to the research article entitled "The role of ascorbate peroxidase, guaiacol peroxidase, and polysaccharides in cassava (Manihot esculenta Crantz) roots under postharvest physiological deterioration" by Uarrota et al. (2015), Food Chemistry 197, Part A, 737–746. The stress due to PPD of cassava roots leads to the formation of ROS, which are extremely harmful and accelerate cassava spoiling. To prevent or alleviate injuries from ROS, plants have evolved antioxidant systems that include non-enzymatic and enzymatic defence systems such as ascorbate peroxidase, guaiacol peroxidase and polysaccharides. This data article provides a dataset called newdata, in RData format, with 60 observations and six variables. The first two variables are Samples and Cultivars; the remaining four comprise spectrophotometric data of ascorbate peroxidase, guaiacol peroxidase, tocopherol and total proteins, together with arcsine-transformed data of cassava PPD scoring. For further interpretation and analysis in R software, a report is also provided. Means and standard deviations of all variables are provided in the Supplementary tables (data.long3.RData, data.long4.RData and meansEnzymes.RData); raw PPD scoring data without transformation (PPDmeans.RData) and days of storage (days.RData) are also provided for data analysis reproducibility in R software.
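The arcsine transform applied to the PPD scoring is the standard variance-stabilising transform asin(sqrt(p)) for proportion data. Although the article's analysis is in R, a minimal Python sketch (with hypothetical PPD proportions) shows the computation:

```python
import math

def arcsine_sqrt(proportions):
    """Variance-stabilising arcsine transform, asin(sqrt(p)), commonly
    applied to proportion/score data before ANOVA-style analysis."""
    return [math.asin(math.sqrt(p)) for p in proportions]

# Hypothetical PPD scores expressed as proportions in [0, 1].
ppd = [0.0, 0.25, 0.5, 1.0]
transformed = arcsine_sqrt(ppd)
print([round(t, 3) for t in transformed])
```

The transform maps [0, 1] onto [0, pi/2] and spreads out values near the endpoints, where raw proportions have compressed variance.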
Abstract:
The research aimed to establish tyre-road noise models using a Data Mining approach, which made it possible to build a predictive model and to assess the importance of the tested input variables. The data modelling considered three learning algorithms and three metrics to define the best predictive model. The variables tested included basic properties of pavement surfaces, macrotexture, megatexture, and unevenness and, for the first time, damping. The importance of those variables was measured using a sensitivity analysis procedure. Two types of models were set up: one with basic variables and another with complex variables, such as megatexture and damping, all as a function of vehicle speed. More detailed models were additionally set up by speed level. As a result, several models with very good tyre-road noise predictive capacity were achieved. The most relevant variables were Speed, Temperature, Aggregate size, Mean Profile Depth, and Damping, which had the highest importance, even though influenced by speed. Megatexture and IRI had the lowest importance. The models developed in this work are applicable to truck tyre-noise prediction, represented by the AVON V4 test tyre, at the early stage of road pavement use. The obtained models are therefore highly useful for pavement design and for noise prediction by road authorities and contractors.
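The variable-importance ranking above comes from a sensitivity analysis. A common scheme is one-at-a-time perturbation: vary each input around a baseline and score it by the resulting output swing. A minimal sketch (the toy noise model below is illustrative only, not the paper's fitted model):

```python
import math

def one_at_a_time_sensitivity(model, baseline, delta=0.10):
    """Perturb each input by +/-delta (relative) around the baseline
    and report the resulting output range as an importance score."""
    scores = {}
    for name, value in baseline.items():
        lo = dict(baseline, **{name: value * (1 - delta)})
        hi = dict(baseline, **{name: value * (1 + delta)})
        scores[name] = abs(model(hi) - model(lo))
    return scores

def toy_noise(x):
    """Toy tyre-road noise model (dB); the log-speed term mimics the
    usual speed dependence of rolling noise. Invented coefficients."""
    return 30.0 * math.log10(x["speed"]) + 2.0 * x["mpd"] + 0.1 * x["temp"]

base = {"speed": 80.0, "mpd": 1.0, "temp": 20.0}
importance = one_at_a_time_sensitivity(toy_noise, base)
print(max(importance, key=importance.get))
```

Even in this toy setting, speed dominates the importance ranking, consistent with the abstract's finding that the other variables' importance is itself influenced by speed.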
Abstract:
Currently, the quality of the Indonesian national road network is inadequate due to several constraints, including overcapacity and overloaded trucks. The high deterioration rate of road infrastructure in developing countries, together with major budgetary restrictions and high traffic growth, has led to an emerging need to improve the performance of the highway maintenance system. However, the high number of intervening factors and their complex effects require advanced tools to solve this problem successfully. The high learning capability of Data Mining (DM) offers a powerful solution. In the past, these tools have been successfully applied to complex and multi-dimensional problems in various scientific fields. It is therefore expected that DM can be used to analyze the large amount of pavement and traffic data, identify the relationships between variables, and provide predictions. In this paper, we present a new approach to predicting the International Roughness Index (IRI) of pavement based on DM techniques. DM was used to analyze the initial IRI data, including age, Equivalent Single Axle Load (ESAL), cracks, potholes, rutting, and long cracks. The model was developed and verified using data from the Integrated Indonesia Road Management System (IIRMS), measured with the National Association of Australian State Road Authorities (NAASRA) roughness meter. The results of the proposed approach are compared with the IIRMS analytical model adapted to the IRI, and the advantages of the new approach are highlighted. We show that the novel data-driven model is able to learn, with high accuracy, the complex relationships between the IRI and the contributing factors of overloaded trucks.
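The idea of a data-driven IRI predictor can be shown in miniature with ordinary least squares on a single predictor, pavement age (the paper's DM models use many more factors; the age/IRI pairs below are invented):

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical observations: pavement age (years) vs IRI (m/km).
age = [1.0, 2.0, 3.0, 4.0, 5.0]
iri = [2.0, 2.4, 2.9, 3.5, 4.1]
a, b = fit_line(age, iri)
predicted_iri_y6 = a + b * 6.0  # extrapolated roughness at year 6
print(round(b, 2), round(predicted_iri_y6, 2))
```

A real DM model would replace this single-variable line with a learner trained on age, ESAL, cracking, potholes and rutting jointly, but the fit/predict workflow is the same.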
Abstract:
It was found that the non-perturbative corrections calculated using Pythia with the Perugia 2011 tune did not include the effect of the underlying event. The affected correction factors were recomputed using the Pythia 6.427 generator. These corrections are applied as baseline to the NLO pQCD calculations and thus the central values of the theoretical predictions have changed by a few percent with the new corrections. This has a minor impact on the agreement between the data and the theoretical predictions. Figures 2 and 6 to 13, and all the tables have been updated with the new values. A few sentences in the discussion in sections 5.2 and 9 were altered or removed.
Abstract:
This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z→ττ decays. In Z→μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by τ leptons from simulated Z→ττ decays at the level of reconstructed tracks and calorimeter cells. The τ lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and τ leptons, as well as the detector response to the τ decay products, are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called τ-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z→ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples.
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management