15 results for Test data generation

at Universidade do Minho


Relevance:

80.00%

Abstract:

Doctoral thesis in Civil Engineering

Relevance:

80.00%

Abstract:

A newly developed strain-rate-dependent anisotropic continuum model is proposed for impact and blast applications in masonry. The model adopts the usual approach of considering different yield criteria in tension and compression. The analysis of unreinforced blockwork masonry walls subjected to impact is carried out to validate the capability of the model. Comparison of the numerical predictions with test data revealed good agreement. Next, a parametric study is conducted to evaluate the influence of the tensile strengths along the three orthogonal directions, and of the wall thickness, on the global behavior of masonry walls.
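
The abstract does not give the rate-dependence law itself; as a rough illustration of what strain-rate-dependent strength means, here is a minimal Python sketch assuming a power-law dynamic increase factor (DIF) applied to a static tensile strength. The reference rate and exponent are hypothetical, not the paper's calibration.

    # Minimal sketch of strain-rate-dependent strength scaling, assuming a
    # power-law dynamic increase factor (DIF); the exponent and reference
    # rate below are illustrative, not taken from the paper.

    def dynamic_tensile_strength(f_t_static, strain_rate,
                                 ref_rate=1e-6, alpha=0.02):
        """Scale a static tensile strength [Pa] by a power-law DIF."""
        dif = max(1.0, (strain_rate / ref_rate) ** alpha)
        return f_t_static * dif

    # Example: 0.3 MPa static strength at an impact-level strain rate.
    print(dynamic_tensile_strength(0.3e6, strain_rate=10.0))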

Relevance:

80.00%

Abstract:

The present study proposes a dynamic constitutive material interface model that includes a non-associated flow rule and high-strain-rate effects, implemented in the finite element code ABAQUS as a user subroutine. First, the model's capability is validated with numerical simulations of unreinforced blockwork masonry walls subjected to low-velocity impact. The results obtained are compared with field test data, and good agreement is found. Subsequently, a comprehensive parametric analysis is carried out with different joint tensile strengths, cohesion values, and wall thicknesses to evaluate the effect of these parameter variations on the impact response of masonry walls.
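
A minimal sketch of what a non-associated flow rule means for a Mohr-Coulomb-type interface: the yield check uses the friction angle, while the plastic flow direction comes from a separate potential with a smaller dilatancy angle. Parameter values and function names are illustrative, not taken from the paper's ABAQUS subroutine.

    import numpy as np

    # Non-associated Mohr-Coulomb interface sketch (tension positive):
    # yield from the friction angle phi, flow direction from a potential
    # with dilatancy angle psi < phi. Parameters are illustrative.

    def interface_state(sigma_n, tau, c=0.35e6, phi=np.radians(35),
                        psi=np.radians(5)):
        f = abs(tau) - c + sigma_n * np.tan(phi)       # yield function
        # Flow direction from g = |tau| + sigma_n * tan(psi), not from f.
        d_g = np.array([np.tan(psi), np.sign(tau)])    # [dg/dsigma_n, dg/dtau]
        return f, d_g

    f, flow = interface_state(sigma_n=-0.2e6, tau=0.4e6)
    print("yielding" if f > 0 else "elastic", flow)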

Relevance:

30.00%

Abstract:

This paper presents a methodology based on Bayesian data fusion techniques applied to non-destructive and destructive tests for the structural assessment of historical constructions. The aim of the methodology is to reduce the uncertainty of the parameter estimation. The Young's modulus of granite stones was chosen as the example for the present paper. The methodology considers several levels of uncertainty, since the parameters of interest are treated as random variables with random moments. A new concept, the Trust Factor, was introduced to weight the uncertainty associated with each test result (expressed by its standard deviation) according to the higher or lower reliability of each test in predicting a given parameter.
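
A minimal sketch of the fusion idea: precision-weighted combination of per-test Gaussian estimates, with a hypothetical trust factor t in (0, 1] that inflates the standard deviation of less reliable tests. The paper's exact Trust Factor formulation may differ.

    import numpy as np

    # Precision-weighted fusion of per-test estimates of Young's modulus.
    # A hypothetical trust factor t scales each test's uncertainty as
    # sigma_eff = sigma / t, so low-trust tests weigh less in the fusion.

    def fuse(means, sigmas, trust):
        sig_eff = np.asarray(sigmas) / np.asarray(trust)
        w = 1.0 / sig_eff**2                      # precisions as weights
        mu = np.sum(w * means) / np.sum(w)        # fused mean
        sigma = np.sqrt(1.0 / np.sum(w))          # fused standard deviation
        return mu, sigma

    # Example: sonic (non-destructive) vs. compression (destructive), GPa.
    mu, sigma = fuse(means=[28.0, 22.0], sigmas=[6.0, 2.0], trust=[0.6, 1.0])
    print(f"E = {mu:.1f} +/- {sigma:.1f} GPa")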

Relevance:

30.00%

Abstract:

During the last few years, many research efforts have been made to improve the design of ETL (Extract-Transform-Load) systems. ETL systems are considered very time-consuming, error-prone, and complex, involving several participants from different knowledge domains. ETL processes are one of the most important components of a data warehousing system and are strongly influenced by the complexity of business requirements and by their change and evolution. These aspects influence not only the structure of a data warehouse but also the structures of the data sources involved. To minimize the negative impact of such variables, we propose the use of ETL patterns to build specific ETL packages. In this paper, we formalize this approach using BPMN (Business Process Model and Notation) for modelling ETL workflows at the conceptual level, mapping them to real execution primitives through the use of a domain-specific language that allows for the generation of specific instances that can be executed in a commercial ETL tool.
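
A minimal sketch of the pattern idea, assuming a simple registry that maps conceptual pattern names (as they might appear in a BPMN model) to parameterized execution primitives. The pattern and primitive names are hypothetical, not the paper's DSL.

    # Hypothetical registry mapping conceptual ETL pattern names to
    # parameterized execution primitives; names are illustrative only.

    PATTERNS = {
        "SurrogateKeyPipelining": lambda p: f"GENERATE_KEY(table={p['table']})",
        "SlowlyChangingDimension": lambda p: f"SCD(type={p['type']}, dim={p['dim']})",
        "DataQualityCheck": lambda p: f"VALIDATE(rule={p['rule']})",
    }

    def instantiate(workflow):
        """Expand a list of (pattern, params) pairs into primitives."""
        return [PATTERNS[name](params) for name, params in workflow]

    package = instantiate([
        ("SurrogateKeyPipelining", {"table": "dim_customer"}),
        ("SlowlyChangingDimension", {"type": 2, "dim": "dim_customer"}),
    ])
    print("\n".join(package))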

Relevance:

30.00%

Abstract:

ETL conceptual modeling is a very important activity in any data warehousing system project. Having a high-level system representation that allows for a clear identification of the main parts of a data warehousing system is clearly a great advantage, especially in the early stages of design and development. However, the effort to model an ETL system conceptually is rarely properly rewarded. Translating ETL conceptual models directly into something that saves work and time in the concrete implementation of the system would, in fact, be a great help. In this paper we present and discuss a hybrid approach to this problem, combining the simplicity of interpretation and the expressive power of BPMN for ETL system conceptualization with the use of ETL patterns to automatically produce an ETL skeleton, a first prototype of the system, which can be executed in a commercial ETL tool such as Kettle.
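
A minimal sketch of the skeleton-generation step, emitting a simplified XML outline in the spirit of a Kettle transformation. Real .ktr files carry many more required elements, so this gestures at the "first prototype" idea rather than producing a loadable file.

    import xml.etree.ElementTree as ET

    # Turn a conceptual step list into a simplified ETL skeleton file.
    # The XML only gestures at Kettle's .ktr format; element names and
    # step types are illustrative.

    def skeleton(steps):
        root = ET.Element("transformation")
        for name, step_type in steps:
            s = ET.SubElement(root, "step")
            ET.SubElement(s, "name").text = name
            ET.SubElement(s, "type").text = step_type
        return ET.tostring(root, encoding="unicode")

    print(skeleton([("read source", "TableInput"),
                    ("load dimension", "TableOutput")]))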

Relevance:

30.00%

Abstract:

This article revisits Michel Chevalier's work and his discussions of tariffs. Chevalier shifted from Saint-Simonism to economic liberalism over the course of his 19th-century career. His influence was soon felt in the political world and in economic debates, mainly because of his treatment of tariffs as instruments of efficient transport policies. This work discusses Chevalier's thoughts on tariffs by revisiting his masterpiece, Le Cours d'Économie Politique. Data Envelopment Analysis (DEA) was conducted to test Chevalier's hypothesis on the inefficiency of French tariffs. The analysis showed that Chevalier's claims about French tariffs are not validated by DEA.
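
For readers unfamiliar with DEA, an input-oriented CCR efficiency score can be computed as a linear program (the envelopment form). The sketch below uses made-up data, not the paper's tariff dataset.

    import numpy as np
    from scipy.optimize import linprog

    # Input-oriented CCR DEA, envelopment form: min theta such that a
    # convex cone of peers uses at most theta * inputs of DMU o while
    # producing at least its outputs. Data below is illustrative.

    def dea_efficiency(X, Y, o):
        """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); theta of DMU o."""
        n = X.shape[0]
        c = np.r_[1.0, np.zeros(n)]                      # minimize theta
        A_in = np.c_[-X[o][:, None], X.T]                # sum(l*x) <= theta*x_o
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]   # sum(l*y) >= y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        return res.fun

    # Toy example: input = tariff cost, output = traffic carried.
    X = np.array([[2.0], [3.0], [4.0]])
    Y = np.array([[10.0], [12.0], [11.0]])
    print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])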

Relevance:

30.00%

Abstract:

The research aimed to establish tyre-road noise models by using a Data Mining approach that made it possible to build a predictive model and assess the importance of the tested input variables. The data modelling took into account three learning algorithms and three metrics to define the best predictive model. The variables tested included basic properties of pavement surfaces, macrotexture, megatexture, and unevenness and, for the first time, damping. The importance of those variables was also measured by using a sensitivity analysis procedure. Two types of models were built: one with basic variables and another with complex variables, such as megatexture and damping, all as a function of vehicle speed. More detailed models were additionally built for each speed level. As a result, several models with very good tyre-road noise predictive capacity were achieved. The most relevant variables were Speed, Temperature, Aggregate size, Mean Profile Depth, and Damping, which had the highest importance, even though influenced by speed. Megatexture and IRI had the lowest importance. The models developed in this work are applicable to truck tyre-noise prediction, represented by the AVON V4 test tyre, at the early stage of road pavement use. Therefore, the obtained models are highly useful for the design of pavements and for noise prediction by road authorities and contractors.
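
A minimal sketch of this kind of workflow, assuming scikit-learn: fit a regressor for the noise level and rank variables with permutation importance (which may differ from the paper's sensitivity procedure). The data is synthetic; only the variable names echo the abstract.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    # Fit a noise-level regressor on synthetic data and rank the inputs
    # with permutation importance as a stand-in sensitivity analysis.

    rng = np.random.default_rng(0)
    n = 500
    X = np.c_[rng.uniform(40, 90, n),     # Speed (km/h)
              rng.uniform(5, 35, n),      # Temperature (deg C)
              rng.uniform(0.4, 1.6, n),   # Mean Profile Depth (mm)
              rng.uniform(0.0, 1.0, n)]   # Damping (normalized)
    y = 60 + 25*np.log10(X[:, 0]) - 0.05*X[:, 1] + 2*X[:, 2] - 3*X[:, 3] \
        + rng.normal(0, 0.5, n)           # synthetic noise level, dB(A)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in zip(["Speed", "Temperature", "MPD", "Damping"],
                           imp.importances_mean):
        print(f"{name}: {score:.3f}")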

Relevance:

30.00%

Abstract:

This paper reviews and extends searches for the direct pair production of the scalar supersymmetric partners of the top and bottom quarks in proton-proton collision data collected by the ATLAS collaboration during LHC Run 1. Most of the analyses use 20 fb⁻¹ of collisions at a centre-of-mass energy of √s = 8 TeV, although in some cases an additional 4.7 fb⁻¹ of collision data at √s = 7 TeV are used. New analyses are introduced to improve the sensitivity to specific regions of the model parameter space. Since no evidence of third-generation squarks is found, exclusion limits are derived by combining several analyses and are presented both in a simplified model framework, assuming simple decay chains, and within the context of more elaborate phenomenological supersymmetric models.

Relevance:

30.00%

Abstract:

Body and brain undergo several changes with aging. One of the domains in which these changes are most remarkable is cognitive performance. In the present work, electroencephalogram (EEG) markers (power spectral density and spectral coherence) of age-related cognitive decline were sought while the subjects performed the Wisconsin Card Sorting Test (WCST). Considering the expected age-related cognitive deficits, the WCST was applied to young, mid-age, and elderly participants, and the theta and alpha frequency bands were analyzed. From the results presented herein, higher theta and alpha power were found to be associated with good performance in the WCST by younger subjects. Higher theta and alpha coherence were also associated with good performance and were shown to decline with age; in addition, a decrease in alpha peak frequency appears to be associated with aging. Furthermore, inter-hemispheric long-range coherences and parietal theta power were identified as age-independent EEG correlates of cognitive performance. In summary, these data reveal age-dependent as well as age-independent EEG correlates of cognitive performance that contribute to the understanding of brain aging and related cognitive deficits.
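
A minimal sketch of the two EEG markers named in the abstract, assuming scipy: band power from a Welch power spectral density, and magnitude-squared coherence between two channels. Signals, channel names, and band edges are illustrative.

    import numpy as np
    from scipy.signal import welch, coherence

    # Synthetic two-channel EEG with a shared 10 Hz (alpha) rhythm.
    fs = 256
    t = np.arange(0, 30, 1 / fs)
    rng = np.random.default_rng(1)
    alpha = np.sin(2 * np.pi * 10 * t)
    ch1 = alpha + rng.normal(0, 1, t.size)              # e.g. "P3"
    ch2 = 0.8 * alpha + rng.normal(0, 1, t.size)        # e.g. "P4"

    def band_power(x, lo, hi):
        """Integrate the Welch PSD over a frequency band [lo, hi) Hz."""
        f, pxx = welch(x, fs=fs, nperseg=2 * fs)
        band = (f >= lo) & (f < hi)
        return np.trapz(pxx[band], f[band])

    f, cxy = coherence(ch1, ch2, fs=fs, nperseg=2 * fs)
    print("theta power:", band_power(ch1, 4, 8))
    print("alpha power:", band_power(ch1, 8, 13))
    print("alpha coherence:", cxy[(f >= 8) & (f < 13)].mean())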

Relevance:

30.00%

Abstract:

Genome-scale metabolic models are valuable tools in the metabolic engineering process, based on the ability of these models to integrate diverse sources of data to produce global predictions of organism behavior. At the most basic level, these models require only a genome sequence to construct, and once built, they may be used to predict essential genes, culture conditions, pathway utilization, and the modifications required to enhance a desired organism behavior. In this chapter, we address two key challenges associated with the reconstruction of metabolic models: (a) leveraging existing knowledge of microbiology, biochemistry, and available omics data to produce the best possible model; and (b) applying available tools and data to automate the reconstruction process. We consider these challenges as we progress through the model reconstruction process, beginning with genome assembly, and culminating in the integration of constraints to capture the impact of transcriptional regulation. We divide the reconstruction process into ten distinct steps: (1) genome assembly from sequenced reads; (2) automated structural and functional annotation; (3) phylogenetic tree-based curation of genome annotations; (4) assembly and standardization of biochemistry database; (5) genome-scale metabolic reconstruction; (6) generation of core metabolic model; (7) generation of biomass composition reaction; (8) completion of draft metabolic model; (9) curation of metabolic model; and (10) integration of regulatory constraints. Each of these ten steps is documented in detail.
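
Once reconstructed, such models are typically interrogated with flux balance analysis, which reduces to a linear program: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A minimal sketch on a toy three-reaction network (not a real reconstruction) follows.

    import numpy as np
    from scipy.optimize import linprog

    # Toy flux balance analysis. Metabolite A: uptake (v0) -> A;
    # A -> biomass (v1); A -> byproduct (v2). Maximize biomass flux v1
    # subject to S @ v = 0 and flux bounds.
    S = np.array([[1.0, -1.0, -1.0]])          # rows: metabolites, cols: fluxes
    bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10
    c = np.array([0.0, -1.0, 0.0])             # linprog minimizes, so -v1

    res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
    print("biomass flux:", res.x[1], "flux vector:", res.x)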

Relevance:

30.00%

Abstract:

"Series Title: IFIP - The International Federation for Information Processing, ISSN 1868-4238"

Relevance:

30.00%

Abstract:

Published in "Information control in manufacturing 1998 : (INCOM'98) : advances in industrial engineering : a proceedings volume from the 9th IFAC Symposium, Nancy-Metz, France, 24-26 June 1998. Vol. 2"

Relevance:

30.00%

Abstract:

Article first published online: 13 NOV 2013

Relevance:

30.00%

Abstract:

Current data mining engines are difficult to use, requiring optimizations by data mining experts in order to provide optimal results. To solve this problem, a new concept was devised: maintaining the functionality of current data mining tools while adding pervasive characteristics, such as invisibility and ubiquity, that focus on the user, improving ease of use and usefulness through autonomous and intelligent data mining processes. This article introduces an architecture to implement such a data mining engine, composed of four major components: database, control middleware, processing middleware, and interface. These components are interlinked but scale independently, allowing for a system that adapts to the user's needs. A prototype has been developed in order to test the architecture. The results are very promising, demonstrating the architecture's functionality as well as the need for further improvements.
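
A minimal sketch of the four-component split named in the article (database, control middleware, processing middleware, interface); class and method names are hypothetical, as the article does not give this API.

    # Illustrative skeleton of the four-component architecture.

    class Database:
        def __init__(self):
            self.runs = []
        def store(self, result):
            self.runs.append(result)

    class ProcessingMiddleware:
        def mine(self, data, task):
            # Stand-in for an autonomous mining run; model and parameter
            # choices would happen here, hidden from the user.
            return {"task": task, "n_rows": len(data), "model": "auto"}

    class ControlMiddleware:
        def __init__(self, db, proc):
            self.db, self.proc = db, proc
        def submit(self, data, task):
            result = self.proc.mine(data, task)
            self.db.store(result)
            return result

    class Interface:
        """User-facing layer: hides engine choices (the 'invisibility' idea)."""
        def __init__(self, control):
            self.control = control
        def ask(self, data, task):
            return self.control.submit(data, task)

    ui = Interface(ControlMiddleware(Database(), ProcessingMiddleware()))
    print(ui.ask(data=[1, 2, 3], task="classification"))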