41 results for Cross Validation
at Universidade do Minho
Abstract:
This paper aims at developing a collision prediction model for three-leg junctions located on national roads (NR) in Northern Portugal. The focus is to identify factors that contribute to collision-type crashes at those locations, mainly factors related to road geometric consistency, since the literature on these is scarce, and to investigate the impact of three modeling methods (generalized estimating equations, random-effects negative binomial models and random-parameters negative binomial models) on the factors of those models. The database included data, published between 2008 and 2010, for 177 three-leg junctions. It was split into three groups of contributing factors, which were tested sequentially for each of the adopted models: first, traffic only; then, traffic and the geometric characteristics of the junctions within their area of influence; and, lastly, factors capturing the difference between the geometric characteristics of the segments bordering the junctions' area of influence and the segment included in that area. The choice of the best modeling technique was supported by the results of a cross-validation performed to ascertain the best model for the three sets of researched contributing factors. The models fitted with random-parameters negative binomial models performed best in this process. In the best models obtained for every modeling technique, the characteristics of the road environment, including proxy measures for geometric consistency, along with traffic volume, contribute significantly to the number of collisions. Both the variables concerning the junctions and the national road segments within their area of influence, as well as the variations of those characteristics relative to the roadway segments bordering that area, proved relevant; there is therefore a genuine need to incorporate the effect of geometric consistency in safety studies of three-leg junctions.
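The model-comparison step can be illustrated with a minimal sketch: fit a negative binomial collision-frequency model and score each candidate predictor set by cross-validation. Column names (`collisions`, `aadt`, `lane_width`) are hypothetical, and the simple GLM below stands in for the paper's random-effects and random-parameters specifications, which are not reproduced here.

```python
# Sketch: score a negative binomial collision model by K-fold cross-validation.
import numpy as np
import statsmodels.api as sm
from sklearn.model_selection import KFold

def cv_score(df, predictors, response="collisions", k=5):
    """Mean absolute error of a NB GLM over K held-out folds."""
    errors = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(df):
        train, test = df.iloc[train_idx], df.iloc[test_idx]
        X_tr = sm.add_constant(train[predictors])
        X_te = sm.add_constant(test[predictors], has_constant="add")
        model = sm.GLM(train[response], X_tr,
                       family=sm.families.NegativeBinomial()).fit()
        errors.append(np.mean(np.abs(test[response] - model.predict(X_te))))
    return np.mean(errors)

# e.g. compare the three nested predictor sets tested sequentially:
# cv_score(df, ["aadt"]); cv_score(df, ["aadt", "lane_width"]); ...
```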
Abstract:
Nowadays the main honey-producing countries require accurate labeling of honey before commercialization, including floral classification. Traditionally, this classification is made by melissopalynology analysis, an accurate but time-consuming task requiring laborious sample pre-treatment and highly skilled technicians. In this work, the potential use of a potentiometric electronic tongue for pollinic assessment is evaluated, using monofloral and polyfloral honeys. The results showed that, after splitting honeys according to color (white, amber and dark), the novel methodology enabled quantifying the relative percentage of the main pollens (Castanea sp., Echium sp., Erica sp., Eucalyptus sp., Lavandula sp., Prunus sp., Rubus sp. and Trifolium sp.). Multiple linear regression models were established for each type of pollen, based on the best sensor sub-sets selected using the simulated annealing algorithm. To minimize the overfitting risk, a repeated K-fold cross-validation procedure was implemented, ensuring that at least 10-20% of the honeys were used for internal validation. With this approach, a minimum average determination coefficient of 0.91 ± 0.15 was obtained. Also, the proposed technique enabled the correct classification of 92% and 100% of monofloral and polyfloral honeys, respectively. The satisfactory performance of the novel procedure for quantifying the relative pollen frequency suggests its applicability for honey labeling and geographical origin identification. Nevertheless, this approach is not a full alternative to traditional melissopalynologic analysis; rather, it may be seen as a practical complementary tool for preliminary honey floral classification, leaving only problematic cases for pollinic evaluation.
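As a rough illustration of the validation scheme described above, the sketch below runs a repeated K-fold cross-validation of a multiple linear regression predicting the relative percentage of one pollen type. The sensor sub-set is assumed to have been selected beforehand (e.g., by simulated annealing), and all array names are hypothetical.

```python
# Sketch: repeated K-fold validation of a pollen-percentage regression.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

def validate_pollen_model(X, y, n_splits=5, n_repeats=10):
    """Average R^2 over repeated K folds; each fold holds out ~1/n_splits
    of the honeys (>= the 10-20% internal-validation share quoted above)."""
    cv = RepeatedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=0)
    scores = cross_val_score(LinearRegression(), X, y, scoring="r2", cv=cv)
    return scores.mean(), scores.std()
```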
Abstract:
Olive oils may be commercialized as intense, medium or light, according to the intensity of perception of the fruitiness, bitterness and pungency attributes, as assessed by a sensory panel. In this work, the capability of an electronic tongue to correctly classify olive oils according to the sensory intensity perception levels was evaluated. Cross-sensitive, non-specific lipid polymeric membranes were used as sensors. The sensor device was first tested using quinine monohydrochloride standard solutions. Mean sensitivities of 14±2 to 25±6 mV/decade, depending on the type of plasticizer used in the lipid membranes, were obtained, showing the device's capability for evaluating bitterness. Then, linear discriminant models based on sub-sets of sensors, selected by a meta-heuristic simulated annealing algorithm, were established, enabling the correct classification of 91% of the olive oils according to their intensity sensory grade (leave-one-out cross-validation procedure). This capability was further evaluated using a repeated K-fold cross-validation procedure, showing that the electronic tongue allowed an average correct classification of 80% of the olive oils used for internal validation. The electronic tongue can thus be seen as a taste sensor, allowing differentiation of olive oils with different sensory intensities, and could be used as a preliminary, complementary and practical tool for panelists during olive oil sensory analysis.
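The sensor sub-set selection can be sketched as a generic simulated annealing loop that scores each candidate sub-set by the leave-one-out accuracy of a linear discriminant classifier. This is an illustrative implementation under assumed settings (cooling schedule, iteration count), not the paper's exact algorithm.

```python
# Sketch: simulated annealing over sensor sub-sets, scored by LOO accuracy of LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loo_accuracy(X, y, mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(LinearDiscriminantAnalysis(), X[:, mask], y,
                           cv=LeaveOneOut()).mean()

def anneal_sensor_subset(X, y, n_iter=500, t0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape[1]) < 0.5          # random initial sub-set
    score = loo_accuracy(X, y, mask)
    best_mask, best = mask.copy(), score
    for i in range(n_iter):
        t = t0 * (1 - i / n_iter)                # linear cooling schedule
        cand = mask.copy()
        cand[rng.integers(X.shape[1])] ^= True   # flip one sensor in/out
        s = loo_accuracy(X, y, cand)
        # accept improvements, and worse moves with Boltzmann probability
        if s > score or rng.random() < np.exp((s - score) / max(t, 1e-9)):
            mask, score = cand, s
            if s > best:
                best_mask, best = cand.copy(), s
    return best_mask, best
```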
Abstract:
Olive oil quality grading is traditionally assessed by human sensory evaluation of positive and negative attributes (olfactory, gustatory, and final olfactory-gustatory sensations). However, it is not guaranteed that trained panelists can correctly classify monovarietal extra-virgin olive oils according to olive cultivar. In this work, the potential of human (sensory panelists) and artificial (electronic tongue) sensory evaluation of olive oils was studied with the aim of discriminating eight single-cultivar extra-virgin olive oils. Linear discriminant, partial least squares discriminant, and sparse partial least squares discriminant analyses were evaluated. The best predictive classification was obtained using linear discriminant analysis with the simulated annealing selection algorithm. A low-level data fusion approach (18 electronic tongue signals and nine sensory attributes) enabled 100% leave-one-out cross-validation correct classification, improving on the discrimination capability of the individual use of sensor profiles or sensory attributes (70% and 57% leave-one-out correct classifications, respectively). Human sensory evaluation and electronic tongue analysis may thus be used as complementary tools, allowing successful monovarietal olive oil discrimination.
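Low-level data fusion, as used above, simply concatenates the two feature blocks into one wide matrix before fitting the discriminant model. The sketch below assumes hypothetical arrays `X_tongue` (18 signals) and `X_sensory` (9 attributes); the scaling step is an added assumption to put the two blocks on comparable scales.

```python
# Sketch: low-level fusion of e-tongue signals and sensory attributes for LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fused_loo_accuracy(X_tongue, X_sensory, y):
    X = np.hstack([X_tongue, X_sensory])   # low-level fusion: one wide matrix
    clf = make_pipeline(StandardScaler(),  # the two blocks live on different scales
                        LinearDiscriminantAnalysis())
    return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
```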
Abstract:
PhD thesis in Civil Engineering.
Abstract:
Natural mineral waters (still), effervescent natural mineral waters (sparkling) and fruit-flavored aromatized waters (still or sparkling) are an emerging market. In this work, the capability of a potentiometric electronic tongue, comprising lipid polymeric membranes, to quantitatively estimate routine physicochemical quality parameters (pH and conductivity) as well as to qualitatively classify water samples according to the type of water was evaluated. The study showed that a linear discriminant model, based on 21 sensors selected by the simulated annealing algorithm, could correctly classify 100% of the water samples (leave-one-out cross-validation). This potential was further demonstrated by applying a repeated K-fold cross-validation (guaranteeing that at least 15% of independent samples were used only for internal validation), for which 96% of correct classifications were attained. The satisfactory recognition performance of the E-tongue could be attributed to the pH, conductivity, sugar and organic acid contents of the studied waters, which resulted in significant differences in sweetness perception indexes and total acid flavor. Moreover, the E-tongue combined with multivariate linear regression models, based on sub-sets of sensors selected by the simulated annealing algorithm, could accurately estimate the waters' pH (25 sensors: R² of 0.99 and 0.97 for leave-one-out and repeated K-fold cross-validation, respectively) and conductivity (23 sensors: R² of 0.997 and 0.99 for leave-one-out and repeated K-fold cross-validation, respectively). The overall satisfactory results achieved allow envisaging a potential future application of electronic tongue devices for bottled water analysis and classification.
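One nuance of the two validation schemes quoted above is worth making concrete: a leave-one-out R² must be computed from the pooled held-out predictions (a single held-out sample has no variance), whereas the repeated K-fold R² can be averaged per fold. A minimal sketch, with hypothetical sensor matrix `X` and target `y` (pH or conductivity):

```python
# Sketch: leave-one-out vs repeated K-fold R^2 for a sensor-based regression.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import (LeaveOneOut, RepeatedKFold,
                                     cross_val_predict, cross_val_score)

def loo_r2(X, y):
    # pool the one-sample predictions, then score once
    y_hat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    return r2_score(y, y_hat)

def repeated_kfold_r2(X, y, n_splits=7, n_repeats=10):
    # n_splits=7 keeps roughly 15% of samples out per fold, as in the paper
    cv = RepeatedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=0)
    return cross_val_score(LinearRegression(), X, y, scoring="r2", cv=cv).mean()
```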
Abstract:
OBJECTIVES: To describe the process of translation and linguistic and cultural validation of the Evidence Based Practice Questionnaire for the Portuguese context: Questionário de Eficácia Clínica e Prática Baseada em Evidências (QECPBE). METHOD: A methodological and cross-sectional study was developed. The translation and back-translation were performed according to traditional standards. Principal component analysis with orthogonal rotation according to the Varimax method was used to verify the QECPBE's psychometric characteristics, followed by confirmatory factor analysis. Internal consistency was determined by Cronbach's alpha. Data were collected between December 2013 and February 2014. RESULTS: 358 nurses providing care in a hospital in northern Portugal participated in the study. The QECPBE contains 20 items and three subscales: Practice (α=0.74); Attitudes (α=0.75); and Knowledge/Skills and Competencies (α=0.95), with an overall internal consistency of α=0.74. The tested model explained 55.86% of the variance and presented good fit: χ²(167)=520.009; p=0.0001; χ²/df=3.114; CFI=0.908; GFI=0.865; PCFI=0.798; PGFI=0.678; RMSEA=0.077 (90% CI=0.07-0.08). CONCLUSION: Confirmatory factor analysis revealed that the questionnaire is valid and appropriate for use in the studied context.
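For reference, the internal-consistency statistic reported above (Cronbach's alpha) follows directly from the item and total-score variances; a small sketch, with `items` a hypothetical respondents-by-items array of Likert scores:

```python
# Sketch: Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```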
Abstract:
The moisture content of concrete structures has an important influence on their behavior and performance. Several validated numerical approaches adopt the governing equation for relative humidity fields proposed in Model Code 1990/2010. Nevertheless, there is no integrative study addressing the choice of parameters for the simulation of the humidity diffusion phenomenon, particularly with regard to the parameter ranges put forward by Model Code 1990/2010. Software based on a finite difference method algorithm (1D and axisymmetric cases) is used to perform sensitivity analyses on the main parameters in a normal-strength concrete. Then, based on the conclusions of the sensitivity analyses, experimental results from nine different concrete compositions are analyzed. The software is used to identify the main material parameters that best fit the experimental data. In general, the model was able to satisfactorily fit the experimental results, and new correlations were proposed, particularly focusing on the boundary transfer coefficient.
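A minimal 1D explicit finite-difference sketch of this humidity diffusion problem is given below, using a Model Code 1990-type diffusivity law D(h) = D1*(alpha + (1-alpha)/(1 + ((1-h)/(1-hc))^n)). The parameter values and the simple fixed-humidity surface boundary are illustrative assumptions; the software discussed above also covers the axisymmetric case and a boundary transfer coefficient, which are not shown here.

```python
# Sketch: explicit 1D finite-difference step for nonlinear humidity diffusion.
import numpy as np

def diffusivity(h, D1=1e-10, alpha=0.05, hc=0.80, n=15.0):
    # MC90-type humidity-dependent diffusivity (parameter values illustrative)
    return D1 * (alpha + (1 - alpha) / (1 + ((1 - h) / (1 - hc)) ** n))

def step(h, dx, dt, h_env=0.60):
    D = diffusivity(h)
    D_face = 0.5 * (D[1:] + D[:-1])         # diffusivity at cell faces
    flux = D_face * np.diff(h) / dx         # Fickian flux between cells
    h_new = h.copy()
    h_new[1:-1] += dt / dx * np.diff(flux)  # interior update
    h_new[0] = h_env                        # drying surface (Dirichlet, simplified)
    h_new[-1] = h_new[-2]                   # sealed face (zero flux)
    return h_new

# h = np.full(101, 0.95)  # initially saturated 0.1 m slab, dx = 1e-3 m
# explicit stability requires dt <= dx**2 / (2 * diffusivity(h).max())
```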
Abstract:
We investigate the impact of cross-delisting on firms' financial constraints and investment sensitivities. We find that firms that cross-delisted from a U.S. stock exchange face stronger post-delisting financial constraints than their cross-listed counterparts, as measured by investment-to-cash-flow sensitivity. Following a delisting, the sensitivity of investment to cash flow increases significantly, and firms also tend to save more cash out of cash flows. Moreover, this increase appears to be primarily driven by informational frictions that constrain access to external financing. We document that information asymmetry problems are stronger for firms from countries with weaker shareholder protection and for firms from less developed capital markets.
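The investment-to-cash-flow sensitivity test implied above can be sketched as a panel regression in which cash flow is interacted with a post-delisting indicator; the coefficient on the interaction measures the change in sensitivity. Variable names (`investment`, `cash_flow`, `tobin_q`, `delisted`) are hypothetical, and the paper's exact specification and controls are not reproduced.

```python
# Sketch: investment-cash flow sensitivity with a post-delisting interaction.
import statsmodels.formula.api as smf

def icfs_regression(panel):
    """panel: DataFrame with investment, cash_flow, tobin_q, delisted,
    firm and year columns; the coefficient on cash_flow:delisted measures
    the post-delisting change in investment-to-cash-flow sensitivity."""
    return smf.ols(
        "investment ~ cash_flow * delisted + tobin_q + C(firm) + C(year)",
        data=panel,
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["firm"]})

# e.g. print(icfs_regression(df).params["cash_flow:delisted"])
```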
Abstract:
We investigate the long-term performance of firms cross-delisted from U.S. stock markets. Using a sample of foreign firms listed on and delisted from U.S. stock exchanges over 2000-2012, we examine the operating performance and the long-run stock return performance of firms after cross-delisting. Our results suggest that cross-delisted firms have fewer growth opportunities than matched cross-listed firms in the long run. Moreover, firms that cross-delist after the passage of Rule 12h-6 in 2007 exhibit a significant decline in operating performance. In contrast, before the adoption of Rule 12h-6, cross-delisted firms seem to be affected by the cost of a U.S. listing in the pre-cross-delisting period. In addition, we provide evidence that cross-delisted firms underperform their cross-listed peers; cross-delisted firms experience negative average abnormal returns, especially in the post-delisting period.
Abstract:
IP networks are currently the major communication infrastructure used by an increasing number of applications and heterogeneous services, including voice services. In this context, the Session Initiation Protocol (SIP) is a signaling protocol widely used for controlling multimedia communication sessions, such as voice or video calls over IP networks, thus performing vital functions in an extensive set of public and enterprise solutions. However, the dissemination of the SIP protocol also entails some challenges, such as the complexity associated with the testing/validation processes of IMS/SIP networks. As a consequence, manual IMS/SIP testing solutions are inherently costly and time-consuming, making it crucial to develop automated approaches in this specific area. In this perspective, this article presents an experimental approach for automated testing/validation of SIP scenarios in IMS networks. For that purpose, an automation framework is proposed that replicates the configuration of SIP equipment from the production network and submits such equipment to a battery of tests in the testing network. The proposed solution drastically reduces test and validation times when compared with traditional manual approaches, while also enhancing testing reliability and coverage. The automation framework comprises freely available tools, conveniently integrated with other specific modules implemented within the context of this work. In order to illustrate the advantages of the proposed automated framework, a real case study taken from a PT Inovação customer is presented, comparing the time required by a manual SIP testing approach with the time required when using the proposed automated framework. The presented results clearly corroborate the advantages of using the presented framework.
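The test-battery idea can be sketched by driving a freely available SIP traffic tool such as SIPp from a small Python harness. The scenario directory, target address and pass criterion (the tool's exit status) are illustrative assumptions, and the configuration-replication side of the framework described above is not shown.

```python
# Sketch: run a battery of SIPp XML scenarios against a test-network target.
import subprocess
from pathlib import Path

def run_sip_battery(scenario_dir: str, target: str, calls: int = 10) -> dict:
    """Run every SIPp scenario file in a directory and record pass/fail."""
    results = {}
    for scenario in sorted(Path(scenario_dir).glob("*.xml")):
        proc = subprocess.run(
            ["sipp", "-sf", str(scenario), "-m", str(calls), target],
            capture_output=True, text=True, timeout=300,
        )
        results[scenario.name] = (proc.returncode == 0)  # 0 = all calls passed
    return results

# e.g. run_sip_battery("scenarios/", "sip-test-core.example.net:5060")
```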
Abstract:
The present paper focuses on a damage identification method based on the use of the second-order spectral properties of the nodal response processes. The explicit dependence on the frequency content of the outputs' power spectral densities makes them suitable for damage detection and localization. The well-known case study of the Z24 Bridge in Switzerland is chosen to apply and further investigate this technique, with the aim of validating its reliability. Numerical simulations of the dynamic response of the structure subjected to different types of excitation are carried out to assess the variability of the spectrum-driven method with respect to both the type and the position of the excitation sources. The simulated data, obtained from random vibration, impulse, ramp and shaking forces, were used to build the power spectrum matrix from which the main eigenparameters of the reference and damage scenarios are extracted. Afterwards, complex eigenvectors and real eigenvalues are properly weighted and combined, and a damage index based on the difference between spectral modes is computed to pinpoint the damage. Finally, a group of vibration-based damage identification methods is selected from the literature to compare the results obtained and to evaluate the performance of the spectral index.
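A minimal sketch of the spectrum-driven approach: build the output power spectral density matrix from the nodal responses, take its dominant eigenvector ("spectral mode") at each frequency line, and form a damage index from the difference between reference and damaged modes. The weighting and combination scheme of the paper is simplified here to a plain norm, and all array names are hypothetical.

```python
# Sketch: PSD-matrix spectral modes and a simple difference-based damage index.
import numpy as np
from scipy.signal import csd

def psd_matrix(Y, fs, nperseg=1024):
    """Y: (n_channels, n_samples) responses -> (n_freqs, n_ch, n_ch) matrix."""
    n = Y.shape[0]
    f, _ = csd(Y[0], Y[0], fs=fs, nperseg=nperseg)
    S = np.empty((len(f), n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, S[:, i, j] = csd(Y[i], Y[j], fs=fs, nperseg=nperseg)
    return f, S

def damage_index(Y_ref, Y_dam, fs):
    _, S_ref = psd_matrix(Y_ref, fs)
    _, S_dam = psd_matrix(Y_dam, fs)
    # dominant eigenvector (first spectral mode) at each frequency line
    modes_ref = np.linalg.eigh(S_ref)[1][:, :, -1]
    modes_dam = np.linalg.eigh(S_dam)[1][:, :, -1]
    # per-channel accumulated modal difference (paper's weighting simplified)
    return np.abs(np.abs(modes_dam) - np.abs(modes_ref)).sum(axis=0)
```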
Abstract:
A measurement is presented of the tt̄ inclusive production cross section in pp collisions at a center-of-mass energy of √s = 8 TeV using data collected by the ATLAS detector at the CERN Large Hadron Collider. The measurement was performed in the lepton+jets final state using a data set corresponding to an integrated luminosity of 20.3 fb⁻¹. The cross section was obtained using a likelihood discriminant fit, and b-jet identification was used to improve the signal-to-background ratio. The inclusive tt̄ production cross section was measured to be 260 ± 1 (stat) +22/−23 (syst) ± 8 (lumi) ± 4 (beam) pb assuming a top-quark mass of 172.5 GeV, in good agreement with the theoretical prediction of 253 +13/−15 pb. The tt̄ → (e,μ)+jets production cross section in the fiducial region determined by the detector acceptance is also reported.
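For orientation, any cross-section measurement ultimately rests on the counting relation σ = (N_obs − N_bkg)/(ε·L); the numbers in the sketch below are invented for illustration, since the measurement above is actually extracted with a likelihood discriminant fit rather than simple counting.

```python
# Sketch: back-of-the-envelope counting-experiment cross section.
def cross_section_pb(n_obs, n_bkg, efficiency, lumi_fb):
    """sigma = (N_obs - N_bkg) / (efficiency * L), returned in picobarns."""
    lumi_pb = lumi_fb * 1000.0  # 1 fb^-1 = 1000 pb^-1
    return (n_obs - n_bkg) / (efficiency * lumi_pb)

# invented illustration: cross_section_pb(300_000, 90_000, 0.04, 20.3) ~ 259 pb
```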
Abstract:
Various differential cross-sections are measured in top-quark pair (tt̄) events produced in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV at the LHC with the ATLAS detector. These differential cross-sections are presented in a data set corresponding to an integrated luminosity of 4.6 fb⁻¹. The differential cross-sections are presented in terms of kinematic variables of a top-quark proxy referred to as the pseudo-top-quark, whose dependence on theoretical models is minimal. The pseudo-top-quark can be defined in terms of either reconstructed detector objects or stable particles in an analogous way. The measurements are performed on tt̄ events in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of them tagged as originating from a b-quark. The hadronic and leptonic pseudo-top-quarks are defined via the leptonic or hadronic decay mode of the W boson produced by the top-quark decay in events with a single charged lepton. The cross-section is measured as a function of the transverse momentum and rapidity of both the hadronic and leptonic pseudo-top-quark, as well as the transverse momentum, rapidity and invariant mass of the pseudo-top-quark pair system. The measurements are corrected for detector effects and are presented within a kinematic range that closely matches the detector acceptance. Differential cross-section measurements of the pseudo-top-quark variables are compared with several Monte Carlo models that implement next-to-leading-order or leading-order multi-leg matrix-element calculations.
Abstract:
Integrated master's dissertation in Psychology.