21 results for procurement measuring
at Universidad Politécnica de Madrid
Abstract:
We present an evaluation of a spoken language dialogue system with a module for the management of user-related information, stored as user preferences and privileges. The flexibility of our dialogue management approach, based on Bayesian Networks (BN), together with a contextual information module that applies different strategies for handling such information, allows us to include user information as a new level in the Context Manager hierarchy. We propose a set of objective and subjective metrics to measure the relevance of the different contextual information sources. The analysis of our evaluation scenarios shows that the relevance of the short-term information (i.e. the system status) remains fairly stable throughout the dialogue, whereas the dialogue history and the user profile (i.e. the middle-term and the long-term information, respectively) play a complementary role, with their usefulness evolving as the dialogue progresses.
Abstract:
In multi-attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of such methods and adapt them to deal with weight intervals, weights fitting independent normal probability distributions, or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology for dealing with incomplete information on weights, stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one.
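As a rough illustration of the Monte Carlo exploration of the weight space mentioned above, here is a simplified SMAA-style rank-acceptability sketch in Python; it is not the authors' dominance measuring methods, and the value matrix and weight intervals are hypothetical:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized values of 4 alternatives on 3 criteria (rows: alternatives)
values = np.array([
    [0.9, 0.4, 0.6],
    [0.5, 0.8, 0.7],
    [0.7, 0.6, 0.5],
    [0.3, 0.9, 0.8],
])

# Incomplete information on weights given as an interval per criterion
w_low = np.array([0.2, 0.1, 0.2])
w_high = np.array([0.6, 0.5, 0.5])

first_rank_hits = np.zeros(values.shape[0])
n_samples = 10_000
for _ in range(n_samples):
    w = rng.uniform(w_low, w_high)
    w /= w.sum()                        # renormalize so the weights sum to one
    utilities = values @ w              # additive multi-attribute utility
    first_rank_hits[np.argmax(utilities)] += 1

# Rank-1 acceptability index: share of sampled weights making each alternative best
print(first_rank_hits / n_samples)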
Abstract:
We demonstrate a simple self-referenced single-shot method for simultaneously measuring two different arbitrary pulses, which can potentially be complex and also have very different wavelengths. The method is a variation of cross-correlation frequency-resolved optical gating (XFROG) that we call double-blind (DB) FROG. It involves measuring two spectrograms, both of which are obtained simultaneously in a single apparatus. DB FROG retrieves both pulses robustly by using the standard XFROG algorithm, implemented alternately on each of the traces, taking one pulse to be "known" and solving for the other. We show both numerically and experimentally that DB FROG using a polarization-gating beam geometry works reliably and appears to have no nontrivial ambiguities.
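To clarify the alternating retrieval idea, the following is a schematic Python sketch only; xfrog_retrieve stands in for a standard XFROG phase-retrieval pass and is not a real library call, and the array sizes are arbitrary:

import numpy as np

def xfrog_retrieve(trace, known_pulse, unknown_guess):
    """Placeholder for one standard XFROG retrieval pass: given a measured
    spectrogram and a pulse assumed known, return an updated estimate of the
    other pulse. A real implementation would run the XFROG iterative algorithm."""
    return unknown_guess  # no-op stand-in for illustration

def double_blind_frog(trace_1, trace_2, n_iterations=100):
    """Alternate the XFROG step between the two measured spectrograms,
    treating one pulse as 'known' while solving for the other."""
    pulse_a = np.ones(256, dtype=complex)   # initial guesses for the two pulses
    pulse_b = np.ones(256, dtype=complex)
    for _ in range(n_iterations):
        pulse_b = xfrog_retrieve(trace_1, known_pulse=pulse_a, unknown_guess=pulse_b)
        pulse_a = xfrog_retrieve(trace_2, known_pulse=pulse_b, unknown_guess=pulse_a)
    return pulse_a, pulse_b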
Abstract:
Multilateration (MLT) systems were developed to provide an alternative surveillance system where radar cannot be used. However, many systems operate in the L-band (960-1215 MHz), which can lead to interference between them. At airports, interference has been detected between transmissions of MLT systems (1030 MHz and 1090 MHz) and Distance Measuring Equipment (DME) (960-1215 MHz).
Abstract:
The aim of this study is to evaluate the effects of applying two active learning methodologies (cooperative learning and project-based learning) on the achievement of the problem-solving competence. This study was carried out at the Technical University of Madrid, where these methodologies were applied in two Operating Systems courses. The first hypothesis tested was whether the implementation of active learning methodologies favours the achievement of "problem solving". The second hypothesis focused on testing whether students with higher scores in the problem-solving competence obtain better results in their academic performance. The results indicated that active learning methodologies did not produce any significant change in the generic competence "problem solving" during the period analysed. In this respect, we consider that students should work with these methodologies for a longer period, in addition to receiving specific training. Nevertheless, a close correlation between problem-solving self-appraisal and academic performance was detected.
Abstract:
Underpasses are common in modern railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behaviour has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge in order to achieve safe and cost-effective structures. As ballast operations are a key life-cycle cost, and excessive vibrations increase the need for ballast regulation in order to preserve track geometry, special attention is paid to accelerations, the values of which should be limited to avoid track instability according to Eurocode. In this paper, the data obtained during on-site measurements on culverts belonging to a Spanish high-speed train line are presented. A set of six rectangular-shaped, closed-frame underpasses were monitored under traffic loading. Acceleration records at different points of the structures are presented and discussed. They reveal a non-uniform dynamic response of the roof slab, with the highest observed values below the occupied track. They also indicate that the dynamic response is significant up to frequencies higher than those usually observed for standard simply supported bridges. Finally, the records are used to derive a heuristic rule to estimate acceleration levels on the roof slab.
Abstract:
Different parameters are used to quantify the maturity of fruits at or near harvest (shape, color, flesh texture and internal composition). Flesh firmness is a critical handling parameter for fruits such as peach, pear and apple. Results of previous studies conducted by different researchers have shown that impact techniques can be used to evaluate the firmness of fruits. A prototype impact system for firmness sorting of fruits was developed by Chen and Ruiz-Altisent (Chen et al., 1996). This sensor was mounted and tested successfully on a 3 m section of a commercial conveyor belt (Chen et al., 1998). This paper presents a further development of the on-line impact system for firmness sorting of fruits. The design of the sensor has been improved and it has been mounted on an experimental fruit packing line (Ortiz-Cañavate et al., 1999).
Abstract:
The Semantics Difficulty Model (SDM) is a model that measures the difficulty of introducing semantic technology into a company. SDM manages three descriptions of stages, which we refer to as "snapshots": a company semantic snapshot, a data snapshot and a semantic application snapshot. Understanding a priori the complexity of introducing semantics into a company is important because it allows the organization to take early decisions, thus saving time and money, mitigating risks and improving innovation, time to market and productivity. SDM works by measuring the Euclidean distance between each initial snapshot and its reference model (the company semantic snapshots reference model, the data snapshots reference model, and the semantic application snapshots reference model). The difficulty level is "not at all difficult" when the distance is small and becomes "extremely difficult" when the distance is large. SDM has been tested experimentally with 2000 simulated companies with different arrangements and several initial stages. The output is expressed using five linguistic values: "not at all difficult", "slightly difficult", "averagely difficult", "very difficult" and "extremely difficult". As the preliminary results of our SDM simulation model indicate, transforming a search application into one that integrates data from different sources with semantics is "slightly difficult", in contrast with data and opinion extraction applications, for which it is "very difficult".
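As an illustration of this distance-to-reference-model idea, here is a minimal Python sketch; the snapshot features, reference vector and thresholds are hypothetical placeholders, not values from the SDM paper:

import numpy as np

# Hypothetical feature vector describing a company's initial semantic snapshot
company_snapshot = np.array([0.2, 0.5, 0.1, 0.7])

# Hypothetical reference model for that snapshot type
reference_model = np.array([0.9, 0.8, 0.6, 0.9])

# Euclidean distance between the initial snapshot and its reference model
distance = np.linalg.norm(company_snapshot - reference_model)

# Map the distance to the five linguistic difficulty values (thresholds are illustrative)
levels = ["not at all difficult", "slightly difficult", "averagely difficult",
          "very difficult", "extremely difficult"]
thresholds = [0.3, 0.7, 1.1, 1.5]
label = levels[sum(distance > t for t in thresholds)]

print(f"distance = {distance:.2f} -> {label}")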
Abstract:
Measuring skin temperature (TSK) provides important information about the complex thermal control system and can be of interest when carrying out studies on thermoregulation. The most common method to record TSK involves thermocouples at specific locations; however, the use of infrared thermal imaging (IRT) has increased. The two methods use different physical processes to measure TSK, and each has advantages and disadvantages. Therefore, the objective of this study was to compare mean skin temperature (MTSK) measurements using thermocouples and IRT in three different situations: pre-exercise, exercise and post-exercise. Analysis of the residual scores in Bland-Altman plots showed poor agreement between the MTSK obtained using thermocouples and those obtained using IRT. The average error was -0.75 °C during pre-exercise, 1.22 °C during exercise and -1.16 °C during post-exercise, and the reliability between the methods was low in the pre-exercise (ICC = 0.75 [0.12 to 0.93]), exercise (ICC = 0.49 [-0.80 to 0.85]) and post-exercise (ICC = 0.35 [-1.22 to 0.81]) conditions. Thus, there is poor correlation between the values of MTSK measured by thermocouples and IRT pre-exercise, during exercise and post-exercise, and low reliability between the two forms of measurement.
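For reference, the Bland-Altman bias and limits of agreement mentioned above can be computed as in the following minimal Python sketch; the paired MTSK readings are hypothetical, not the study data:

import numpy as np

# Hypothetical paired mean skin temperature (MTSK) readings, in °C
tsk_thermocouple = np.array([33.1, 33.4, 34.0, 34.6, 33.8])
tsk_infrared = np.array([33.9, 34.1, 34.5, 33.5, 34.9])

diff = tsk_thermocouple - tsk_infrared       # per-subject difference between methods
bias = diff.mean()                           # Bland-Altman bias (average error)
half_width = 1.96 * diff.std(ddof=1)         # half-width of the 95% limits of agreement

print(f"bias = {bias:.2f} °C, limits of agreement = [{bias - half_width:.2f}, {bias + half_width:.2f}] °C")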
Abstract:
One of the main limiting factors in the development of new magnesium (Mg) alloys with enhanced mechanical behavior is the need for vast experimental campaigns for microstructure and property screening. For example, the influence of new alloying additions on the critical resolved shear stresses (CRSSs) is currently evaluated by a combination of macroscopic single-crystal experiments and crystal plasticity finite-element simulations (CPFEM). This time-consuming process could be considerably simplified by the introduction of high-throughput techniques for efficient property testing. The aim of this paper is to propose a new and fast methodology for the estimation of the CRSSs of hexagonal close-packed metals which, moreover, requires small amounts of material. The proposed method, which combines instrumented nanoindentation and CPFEM modeling, determines CRSS values by comparing the variation of hardness (H) with grain orientation against the outcome of CPFEM. This novel approach has been validated on a rolled and annealed pure Mg sheet, whose H variation with grain orientation has been successfully predicted using a set of CRSSs taken from recent crystal plasticity simulations of single-crystal experiments. Moreover, the proposed methodology has been used to infer the effect of the alloying elements of an MN11 (Mg–1% Mn–1% Nd) alloy. The results support the hypothesis that selected rare earth intermetallic precipitates help to bring the CRSS values of basal and non-basal slip systems closer together, thus contributing to the reduced plastic anisotropy observed in these alloys.
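The comparison step can be pictured as a fitting problem, sketched below in Python under strong assumptions: simulate_hardness is a placeholder surrogate for a CPFEM nanoindentation simulation, and the orientations, hardness data and CRSS search ranges are hypothetical, not taken from the paper:

import numpy as np

def simulate_hardness(orientations_deg, crss):
    """Placeholder for a CPFEM simulation returning indentation hardness (GPa)
    as a function of grain orientation for a given set of CRSS values."""
    basal, prismatic, pyramidal = crss
    tilt = np.radians(orientations_deg)
    # purely illustrative response surface
    return 0.6 + 0.4 * basal * np.cos(tilt) ** 2 + 0.2 * (prismatic + pyramidal) * np.sin(tilt) ** 2

measured_orientations = np.array([0.0, 30.0, 60.0, 90.0])   # angle between c-axis and indentation axis
measured_hardness = np.array([1.05, 0.98, 0.90, 0.86])      # hypothetical nanoindentation data (GPa)

# Grid search: keep the CRSS set whose simulated H(orientation) curve best matches the measurements
best_err, best_crss = np.inf, None
for basal in np.linspace(0.2, 1.0, 9):
    for prismatic in np.linspace(0.5, 2.0, 7):
        for pyramidal in np.linspace(0.5, 2.0, 7):
            err = np.sum((simulate_hardness(measured_orientations, (basal, prismatic, pyramidal))
                          - measured_hardness) ** 2)
            if err < best_err:
                best_err, best_crss = err, (basal, prismatic, pyramidal)

print("best-fitting CRSS set (basal, prismatic, pyramidal):", best_crss)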
Abstract:
The design and development of a new method for performing fracture toughness tests under impulsive loadings using explosives is presented. The experimental set-up was complemented with pressure transducers and strain gauges in order to measure, respectively, the blast wave that reached the specimen and the loading history. Fracture toughness tests on a 7017-T73 aluminium alloy were carried out with this device under impulsive loadings. Previous studies reported that this aluminium alloy has very little strain-rate sensitivity, which made it an ideal candidate for comparison at different loading rates. The fracture-initiation toughness values of the 7017-T73 aluminium alloy obtained under impulsive loadings did not exhibit a significant variation from the cases studied at lower loading rates. Therefore, the method and device developed for measuring the dynamic fracture-initiation toughness under impulsive loadings were considered suitable for such a purpose.
Abstract:
To achieve sustainability in the area of transport we need to view the decision-making process as a whole and consider all the most important socio-economic and environmental aspects involved. Improvements in transport infrastructures have a positive impact on regional development and significant repercussions on the economy, as well as affecting a large number of ecological processes. This article presents a DSS to assess the territorial effects of new linear transport infrastructures based on the use of GIS. The TITIM (Transport Infrastructure Territorial Impact Measurement) GIS tool allows these effects to be calculated by evaluating the improvement in accessibility, the loss of landscape connectivity, and the impact on other local territorial variables such as landscape quality, biodiversity and land-use quality. The TITIM GIS tool assesses these variables automatically, simply by entering the required inputs, thus avoiding the manual reiteration and execution of these multiple processes. TITIM allows researchers to use their own GIS databases as inputs, in contrast with other tools that use official or predefined maps. The TITIM GIS tool is tested by application to six HSR projects in the Spanish Strategic Transport and Infrastructure Plan 2005-2020 (PEIT). The tool creates all 65 possible combinations of these projects, which are the real test scenarios. For each one, the tool calculates the accessibility improvement, the landscape connectivity loss, and the impact on landscape, biodiversity and land-use quality. The results reveal which of the HSR projects cause the greatest benefit to the transport system and any potential synergies that exist, and help define a priority for implementing the infrastructures in the plan.
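The scenario-building step described above can be sketched in Python as follows; the project labels and the indicator functions are illustrative placeholders, not part of the TITIM tool:

from itertools import combinations

projects = ["HSR-A", "HSR-B", "HSR-C", "HSR-D", "HSR-E", "HSR-F"]  # hypothetical project labels

def accessibility_improvement(scenario):
    return 1.5 * len(scenario)          # placeholder indicator

def connectivity_loss(scenario):
    return 0.8 * len(scenario)          # placeholder indicator

# Build every non-empty combination of candidate projects as a test scenario
scenarios = [combo for r in range(1, len(projects) + 1) for combo in combinations(projects, r)]

for scenario in scenarios[:3]:          # show the first few scenarios
    print(scenario, accessibility_improvement(scenario), connectivity_loss(scenario))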
Abstract:
The first feasibility study of using dual-probe heated fiber optics with distributed temperature sensing to measure soil volumetric heat capacity and soil water content is presented. Although results using different combinations of cables demonstrate feasibility, further work is needed to gain accuracy, including a model to account for the finite dimension and the thermal influence of the probes. Implementation of the dual-probe heat-pulse (DPHP) approach for measurement of volumetric heat capacity (C) and water content (θ) with distributed temperature sensing heated fiber optic (FO) systems presents an unprecedented opportunity for environmental monitoring (e.g., simultaneous measurement at thousands of points). We applied uniform heat pulses along a FO cable and monitored the thermal response at adjacent cables. We tested the DPHP method in the laboratory using multiple FO cables at a range of spacings. The amplitude and phase shift in the heat signal with distance were found to be a function of the soil volumetric heat capacity. Estimations of C at a range of moisture contents (θ = 0.09–0.34 m3 m−3) suggest the feasibility of measurement via responsiveness to the changes in θ, although we observed error with decreasing soil water contents (up to 26% at θ = 0.09 m3 m−3). Optimization will require further models to account for the finite radius and thermal influence of the FO cables. Although the results indicate that the method shows great promise, further study is needed to quantify the effects of soil type, cable spacing, and jacket configurations on accuracy.
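For orientation, the classical DPHP relation for an instantaneous line-source heat pulse (an idealization that neglects the finite probe dimensions discussed above, and not necessarily the exact model used in the study) can be sketched as follows; all numerical values are hypothetical:

import math

def volumetric_heat_capacity(q, r, dT_max):
    """Classical DPHP estimate for an instantaneous line-source heat pulse:
    C = q / (e * pi * r**2 * dT_max)
    q      - heat input per unit length of the heated cable (J m-1)
    r      - spacing between heated and sensing fibers (m)
    dT_max - maximum temperature rise observed at the sensing fiber (K)"""
    return q / (math.e * math.pi * r ** 2 * dT_max)

# Hypothetical values for illustration
C = volumetric_heat_capacity(q=800.0, r=0.03, dT_max=0.05)
print(f"C ≈ {C / 1e6:.2f} MJ m-3 K-1")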
Abstract:
Since 2010 the Industrial Engineering School at Universidad Politécnica de Madrid (ETSII UPM) has had its study plan accredited by ABET. Since then, the management team has strongly encouraged teachers to work on measuring and strengthening students' competences. Generic skills and behaviours have acquired significant importance in the workplace, particularly in relation to project management. Because of this, and framed within the requirements of the European Higher Education Area (EHEA), the curricula of the new degrees are being developed under competence-based learning. This situation leads to the need for a clear tool for measuring skills as a basis for developing them within the curriculum. A group of multidisciplinary teachers has been working together for two years to design measuring instruments valid for engineering students.
Abstract:
There exist different ways of defining a welfare function. Traditionally, the foundation of welfare economic theory is based on the Net Present Value (NPV) calculation, where the time-dependent preferences of the agents considered are taken into account. However, time preference remains a controversial subject. Currently, the traditional approach employs a unique discount rate for the various agents. Nevertheless, this way of discounting appears inconsistent with sustainable development. New research suggests that the discount rate may not be a homogeneous value: discount rates may change according to individuals' preferences. A significant body of evidence suggests that people do not behave according to a constant discount rate. In fact, the UK Government has quickly recognized the power of the arguments for time-varying rates, as it has done in its official guidance to ministries on the appraisal of investments and policies. Other authors deal not just with time preference but also with uncertainty about future income (precautionary saving). In a situation in which economic growth rates are similar across time periods, the rationale for declining social optimal discount rates is driven by the preferences of the individuals in the economy rather than by expectations of growth. However, these approaches have mainly focused on long-term policies where intergenerational risks may appear. Traditional cost-benefit analysis (CBA) uses a unique discount rate, derived from market interest rates or investment rates of return, for discounting the costs and benefits of all social agents included in the CBA. However, recent literature has shown that a more adequate measure of social benefit is possible by using different discount rates, including the inter-temporal preference rate of users, the private investment discount rate and the inter-temporal preference rate of the government. In practice, opportunity costs may differ amongst individuals, firms, governments, or society in general, as do the returns on savings. In general, firms or operators require an investment rate linked to the current return on savings, while the discount rate of consumers-users depends on their time preferences with respect to current and future consumption; society, in turn, can take intergenerational well-being into account by adopting a lower discount rate for today's generation. The time discount rate of social actors (users, operators, government and society) places a lower value on a future gain, but uncertainty about future income strongly determines individual preferences. Both time and uncertainty depend on preferences and should be integrated into a transport policy formulation that may have significant social impacts. The discount rate of a user cannot be the same as the operator's discount rate: the preferences of the two are different. In addition, another school of thought suggests that people, as a social group, may have different attitudes towards future costs and benefits. In particular, users have different discount rates related to their income. Some research work has tried to modify user discount rates using a compensating weight which represents the inverse of the household income level. Inter-temporal preferences are a proxy of the willingness to pay over time. Their consideration is important in order to decide whether a policy or investment is acceptable or not.
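As a minimal illustration of NPV computed with agent-specific discount rates rather than a single rate for all agents (a Python sketch, not the authors' model; the cash flows and rates are hypothetical):

def npv(cash_flows, rate):
    """Net present value of yearly cash flows discounted at a constant annual rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical yearly net benefits of a transport project for each social agent
benefits = {
    "users":      [-5.0, 2.0, 2.0, 2.0, 2.0],
    "operator":   [-20.0, 6.0, 6.0, 6.0, 6.0],
    "government": [-10.0, 3.0, 3.0, 3.0, 3.0],
}

# Agent-specific discount rates instead of a unique rate for everyone
rates = {"users": 0.06, "operator": 0.08, "government": 0.035}

for agent in benefits:
    print(agent, round(npv(benefits[agent], rates[agent]), 2))

aggregate = sum(npv(benefits[agent], rates[agent]) for agent in benefits)
print("aggregate social NPV:", round(aggregate, 2))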