10 results for Fingerprints Bayesian decision theory Value of information Influence diagram
at Universidad Politécnica de Madrid
Abstract:
In this paper we propose a new method for the automatic detection and tracking of road traffic signs using a single on-board camera. The method aims to increase the reliability of the detections so that it can boost the performance of any traffic sign recognition scheme. The proposed approach exploits a combination of different features, such as color, appearance, and tracking information. This information is introduced into a recursive Bayesian decision framework in which prior probabilities are dynamically adapted to tracking results. This decision scheme obtains a number of candidate regions in the image according to their hue-saturation (HS) color values. Finally, a Kalman filter with adaptive noise tuning provides the required temporal and spatial coherence to the estimates. Results show that the proposed method achieves high detection rates in challenging scenarios, including illumination changes, rapid motion and significant perspective distortion.
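The temporal-coherence step described above can be sketched with a simple constant-velocity Kalman filter over one image coordinate. This is a minimal illustration, not the paper's implementation: the adaptive noise tuning is approximated here by a hypothetical rule that inflates the process noise when innovations are large, and the parameters `q` and `r` are illustrative.

```python
def kalman_track(measurements, q=1e-2, r=1.0):
    """1-D constant-velocity Kalman filter over an image coordinate.

    State is [position, velocity]; only the position is observed.
    The adaptive tuning rule below is a hypothetical stand-in for the
    adaptive noise tuning mentioned in the abstract.
    """
    pos, vel = measurements[0], 0.0
    # covariance P = [[p00, p01], [p01, p11]], initialized to identity
    p00, p01, p11 = 1.0, 0.0, 1.0
    estimates = []
    for z in measurements:
        # predict with F = [[1, 1], [0, 1]] (constant velocity)
        pos += vel
        p00, p01, p11 = p00 + 2 * p01 + p11 + q, p01 + p11, p11 + q
        # innovation and its variance
        y = z - pos
        S = p00 + r
        # hypothetical adaptive tuning: inflate q when innovations are large
        q = 0.9 * q + 0.1 * (y * y / S) * 1e-2
        # Kalman gain and update
        k0, k1 = p00 / S, p01 / S
        pos += k0 * y
        vel += k1 * y
        p00, p01, p11 = (1 - k0) * p00, (1 - k0) * p01, p11 - k1 * p01
        estimates.append(pos)
    return estimates
```

Fed with the x-coordinate of a detected sign in successive frames, the filter smooths the track and carries it through brief detection gaps.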
Abstract:
A land classification method was designed for the Community of Madrid (CM), which has lands suitable for either agricultural use or natural spaces. The process started from an extensive previous CM study that contains sets of land attributes with data for 122 land types, together with a minimum-requirements method providing a land quality classification (SQ) for each land type. Borrowing tools from Operations Research (OR) and Decision Science, that SQ has been complemented by an additive valuation method, which analyses a more restricted set of 13 representative attributes using attribute valuation functions to obtain a quality index (QI), and by an original composite method that uses a fuzzy-set procedure to obtain a combined quality index (CQI) containing relevant information from both the SQ and the QI methods.
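The additive valuation step can be sketched as a weighted sum of attribute valuation functions. The attributes, value functions and weights below are hypothetical placeholders, not the 13 representative attributes analysed in the study.

```python
def quality_index(attributes, value_functions, weights):
    """Additive valuation: QI = sum_i w_i * v_i(a_i).

    Each value function maps a raw attribute onto [0, 1]; weights are
    normalized so the index also lies in [0, 1].
    """
    total_w = sum(weights.values())
    return sum(weights[k] / total_w * value_functions[k](attributes[k])
               for k in attributes)

# Illustrative attributes and value functions (not from the study):
vfs = {
    "depth_cm": lambda d: min(d / 100.0, 1.0),       # deeper soil scores higher
    "slope_pct": lambda s: max(1.0 - s / 30.0, 0.0), # flatter land scores higher
}
w = {"depth_cm": 0.6, "slope_pct": 0.4}
qi = quality_index({"depth_cm": 80, "slope_pct": 6}, vfs, w)
# 0.6 * 0.8 + 0.4 * 0.8 = 0.80
```

The fuzzy composite CQI step of the paper would then combine this QI with the minimum-requirements SQ class; that combination is not reproduced here.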
Abstract:
The authors, all from UPM and of different generations, have worked on different academic and real cases on the subject at different times. Building on the precedent of E. Torroja and A. Páez in Madrid, Spain (probabilistic safety models for concrete, around 1957, a line now continued in the ICOSSAR conferences), author J.M. Antón, involved since autumn 1967 in euro-steel construction within CECM, produced a mathematical model for reductions under the superposition of independent loads and, using it, a load-coefficient pattern for codes (Rome, Feb. 1969) that was largely adopted for European constructions, followed at JCSS (Lisbon, Feb. 1974) by a suggestion to unify the concrete-steel-al. approaches. That model represents each type of load by a Gumbel Type I distribution over 50 years, reduces it to 1 year so it can be added to other independent loads, and sets the sum, again in Gumbel terms, to a 50-year return period; parallel models exist. A complete reliability system was produced, including nonlinear effects such as buckling, phenomena considered to some extent in the current Construction Eurocodes derived from the Model Codes. The author also examined the system within CEB in the presence of hydraulic effects from rivers, floods and the sea, with reference to actual practice. When drafting a Road Drainage Norm for MOPU (Spain), the authors built an optimization model that gives a way to determine the return period, from 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in the southeast of Spain modeled with a Gumbel Type I distribution and a paper by Ven Te Chow on the Mississippi at Keokuk using a Gumbel Type II distribution; the model can be modernized with a wider variety of extreme-value laws. In the MOPU drainage norm, the drafting commission also acted as an expert panel to set a table of return periods for road drainage elements, in effect a complex multi-criteria decision system. These precedent ideas were used, e.g., in wide-reaching codes and presented at symposia and meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications to agricultural and environmental planning, namely the selection of the criteria and utility functions involved in Bayesian, multi-criteria or mixed decision systems. Modest consideration is given to climate change, to production and commercial systems, and to other social and financial aspects.
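The return-period reasoning in the abstract rests on the Gumbel Type I (annual-maxima) model, for which the T-year design value has a closed form. The sketch below estimates the parameters by the method of moments; it illustrates the standard formula only, not the authors' MOPU optimization model, and the flow statistics in the usage example are invented.

```python
import math

def gumbel_design_value(mean, std, T):
    """Design value with a T-year return period under a Gumbel Type I
    model fitted to annual maxima by the method of moments.

    x_T = mu - beta * ln(-ln(1 - 1/T)), with
    beta = std * sqrt(6) / pi and mu = mean - gamma * beta.
    """
    beta = std * math.sqrt(6.0) / math.pi      # scale parameter
    mu = mean - 0.5772156649 * beta            # location (Euler-Mascheroni const.)
    p = 1.0 - 1.0 / T                          # annual non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Invented annual-maximum flow statistics for a small stream (m^3/s):
q10 = gumbel_design_value(mean=100.0, std=30.0, T=10)
q50 = gumbel_design_value(mean=100.0, std=30.0, T=50)
```

The choice between, say, `q10` and `q50` for a given drainage element is exactly the return-period decision the MOPU commission tabulated.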
Influence of origin of the beans on protein quality and nutritive value of commercial soybean meals.
Abstract:
Chemical composition and correlations between chemical analyses and protein quality of 454 batches of SBM of three different origins (USA, n = 168; Brazil (BRA), n = 139; and Argentina (ARG), n = 147) were studied. Samples were collected during a 6-yr period. SBM from the USA had more CP, sucrose and stachyose and less NDF (P < 0.001) than SBM from ARG and BRA. CP content was negatively related (P < 0.001) with sucrose for USA meals and with NDF for ARG and BRA meals. Also, P content was positively related (P < 0.01) with CP content of the meals. PDI and KOH solubility were higher (P < 0.001) for USA than for ARG or BRA SBM, values that were positively related (P < 0.001) with trypsin inhibitor activity of the meals. In addition, USA meals had more Lys, Met+Cys, Thr, and Trp than BRA and ARG meals (P < 0.001). Per unit of CP, Lys content was negatively related (P < 0.001) with CP content for USA meals, positively related for BRA meals, and no relation was found for ARG meals. It is concluded that the nutritive value and protein quality of the meals varied widely among soybean origins. Consequently, the origin of the beans should be considered in the evaluation of the nutritive value of commercial SBM for non-ruminant animals.
Abstract:
Study on the influence of the origin of the beans on the protein quality and nutritive value of commercial soybean meals
Abstract:
Personal data gathering in online markets is currently done on a far larger scale, and much more cheaply and quickly, than ever before. In this scenario, a number of highly relevant companies have emerged for whom personal data is the key factor of production. However, up to now the corresponding economic analysis has been restricted primarily to a qualitative perspective linked to privacy issues. This paper seeks to shed light precisely on the quantitative perspective, approximating the value of personal information for those companies that base their business model on this new type of asset. In the absence of any systematic research or methodology on the subject, an ad hoc procedure is developed in this paper. It starts with the examination of the accounts of a number of key players in online markets. This inspection aims, first, to determine whether the value of personal information databases is somehow reflected in the firms' books and, second, to define performance measures able to capture this value. After discussing the strengths and weaknesses of possible approaches, the method that performs best under several criteria (revenue per data record) is selected. From there, an estimation of the net present value of personal data is derived, along with a brief digression into regional differences in the economic value of personal information.
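The selected measure, revenue per data record, and the net-present-value estimation built on it can be sketched as follows. The revenue stream and discount rate are hypothetical illustrations, not figures from the paper.

```python
def value_per_record(annual_revenue, n_records):
    """Revenue attributed to one data record in one year (the paper's
    selected performance measure)."""
    return annual_revenue / n_records

def npv_of_record(yearly_revenue_per_record, discount_rate):
    """Net present value of the per-record revenue stream, discounted at
    `discount_rate`. Year indices start at 1 (first cash flow is one
    year out)."""
    return sum(r / (1.0 + discount_rate) ** t
               for t, r in enumerate(yearly_revenue_per_record, start=1))

# Hypothetical firm: 50 M in data-driven revenue over 10 M user records,
# held flat for two years and discounted at 10 %:
per_record = value_per_record(50_000_000, 10_000_000)   # 5.0 per record-year
npv = npv_of_record([per_record, per_record], 0.10)
```

Regional differences would enter simply by computing `value_per_record` separately per region, with region-specific revenues and record counts.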
Abstract:
One of the most promising areas in which probabilistic graphical models have shown an incipient activity is the field of heuristic optimization and, in particular, in Estimation of Distribution Algorithms. Due to their inherent parallelism, different research lines have been studied trying to improve Estimation of Distribution Algorithms from the point of view of execution time and/or accuracy. Among these proposals, we focus on the so-called distributed or island-based models. This approach defines several islands (algorithm instances) running independently and exchanging information with a given frequency. The information sent by the islands can be either a set of individuals or a probabilistic model. This paper presents a comparative study for a distributed univariate Estimation of Distribution Algorithm and a multivariate version, paying special attention to the comparison of two alternative methods for exchanging information, over a wide set of parameters and problems: the standard benchmark developed for the IEEE Workshop on Evolutionary Algorithms and other Metaheuristics for Continuous Optimization Problems of the ISDA 2009 Conference. Several analyses from different points of view have been conducted to analyze both the influence of the parameters and the relationships between them, including a characterization of the configurations according to their behavior on the proposed benchmark.
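The island scheme described above can be sketched with a minimal continuous univariate EDA (UMDA-style) in which islands periodically exchange their probabilistic models. Everything here is illustrative: the migration rule (averaging per-dimension Gaussian parameters) is one simple choice, not necessarily either of the two exchange methods the paper compares, and population sizes and frequencies are arbitrary.

```python
import random

def umda_island(fitness, dim, pop=50, gens=40, islands=2,
                migrate_every=10, seed=1):
    """Minimal island-based univariate EDA for continuous minimization.

    Each island keeps one Gaussian (mean, std) per dimension, samples a
    population from it, selects the best half, and re-estimates the
    model. Every `migrate_every` generations the islands exchange
    information by averaging their models (illustrative rule).
    """
    rng = random.Random(seed)
    models = [[(0.0, 5.0)] * dim for _ in range(islands)]
    best = None
    for g in range(gens):
        for i in range(islands):
            # sample a population from this island's univariate model
            popn = [[rng.gauss(m, s) for (m, s) in models[i]]
                    for _ in range(pop)]
            popn.sort(key=fitness)
            sel = popn[: pop // 2]                 # truncation selection
            # re-estimate the per-dimension Gaussian from the selected set
            new = []
            for d in range(dim):
                vals = [x[d] for x in sel]
                mean = sum(vals) / len(vals)
                var = sum((v - mean) ** 2 for v in vals) / len(vals)
                new.append((mean, max(var ** 0.5, 1e-3)))
            models[i] = new
            if best is None or fitness(popn[0]) < fitness(best):
                best = popn[0]
        if (g + 1) % migrate_every == 0 and islands > 1:
            # migration: islands exchange models by averaging parameters
            avg = [(sum(models[i][d][0] for i in range(islands)) / islands,
                    sum(models[i][d][1] for i in range(islands)) / islands)
                   for d in range(dim)]
            models = [list(avg) for _ in range(islands)]
    return best
```

Replacing the averaging step with the migration of elite individuals between islands gives the other exchange variant the paper compares.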
Abstract:
The spread of learning management systems (LMS) over the last decade has raised interest among the research community in the acceptance and use of these systems by both teachers and students. At first, the implementation of LMS was based on their technical design and the adaptation of the learning processes to the virtual environment, neglecting students' characteristics when the systems were deployed, which led to expensive and failed implementations. The Unified Theory of Acceptance and Use of Technology (UTAUT) proposes a framework for studying the acceptance and use of technology that takes into consideration the students' characteristics and how they affect the acceptance and degree of use of educational technology. This study questions the role of the user's attitude towards use of LMS and uses the UTAUT to examine the moderating effect of technological culture in the adoption of LMS in Spain. The results from the comparison and analysis of three different models confirm the relevance of attitude towards use as an antecedent of intention to use the system, as well as the important moderating effect of gender and technological culture. The discussion of results suggests the need for a more in-depth analysis of the interrelations of cultural dimensions in the adoption of educational technologies and learning management systems.
Abstract:
Land value bears significant weight in house prices in historical town centers. An essential aim for regulating the mortgage market, particularly amid the financial and property crisis that countries such as Spain are undergoing, is to have at hand objective procedures for its valuation, whatever the conditions (location, construction, planning). Of all the factors contributing to house price make-up, land is the only one whose value does not depend on acquisition cost but rather on the location-time binomial, that is, the specific circumstances at that point and at the exact moment of valuation. For this reason, the most commonly applied procedure for land valuation in town centers is the residual method: once the selling price of new housing in a district is known, the other necessary costs and expenses of development are deducted, including those of building and the developer's profit; the value left is that of the land. To apply this procedure it is vital to have figures such as building costs, technical fees, tax costs, etc. But, above all, it is essential to obtain the selling price of the new housing. This is not always feasible, on account of the lack of newbuild development in the location. This shortage of information occurs in historical town centers, where urban renewal is slight owing to heritage-protection policies and where, nevertheless, there is substantial activity in the secondary market. In these circumstances, an alternative for land valuation in consolidated urban areas is to adapt the residual method to the particular characteristics of the secondary market. To this end, we propose appreciating the dwelling by applying, in reverse, the traditional depreciation methods proposed by the various valuation manuals and guidelines.
The reliability of the results obtained is analyzed by contrasting them with published figures for newly built properties, according to the different rules applied in administrative appraisals in Spain, and by assessing the incidence of an eventual correction due to conservation state.
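The residual method and the reverse-depreciation ("appreciation") adaptation described above amount to simple arithmetic, which can be sketched as follows. The cost breakdown, profit margin and straight-line depreciation rate are hypothetical figures, not values prescribed by Spanish appraisal rules.

```python
def residual_land_value(selling_price, build_cost, fees, taxes, margin):
    """Static residual method: land value is what remains of the expected
    newbuild selling price after deducting construction cost, technical
    fees, taxes, and the developer's profit (here a fraction `margin`
    of the selling price). All per square meter."""
    return selling_price - build_cost - fees - taxes - margin * selling_price

def appreciate_to_new(secondhand_price, age_years, annual_dep=0.01):
    """Reverse of straight-line depreciation: estimate an equivalent
    newbuild price from a second-hand price, assuming a hypothetical
    1 %/year linear rate from the valuation manuals."""
    return secondhand_price / (1.0 - annual_dep * age_years)

# Hypothetical example: no newbuild comparables exist, so a 30-year-old
# dwelling sold at 2000 EUR/m2 is first "appreciated" to a newbuild
# equivalent, then the residual method is applied to that price.
equiv_new = appreciate_to_new(2000.0, 30)          # about 2857 EUR/m2
land = residual_land_value(equiv_new, 1200.0, 150.0, 200.0, 0.18)
```

This chains the two steps exactly in the order the abstract proposes: secondary-market price, backwards appreciation, then the residual deduction.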
Abstract:
Pig slurry is a valuable fertilizer for crop production, but at the same time its management may pose environmental risks. Slurry samples were collected from 77 commercial farms covering four animal categories (gestating and lactating sows, nursery piglets and growing pigs) and analyzed for macronutrients, micronutrients, heavy metals and volatile fatty acids. Emissions of ammonia (NH3) and biochemical methane potential (BMP) were quantified. Slurry electrical conductivity, pH, dry matter content and ash content were also determined. Data analysis included an analysis of correlations among variables, the development of prediction models for gaseous emissions and the analysis of the nutritional content of slurries for crop production. The descriptive information provided in this work shows a wide range of variability in all studied variables. Animal category affected some physicochemical parameters, probably as a consequence of differences in slurry management and in the use of cleaning water. Slurries from gestating sows and growing pigs tended to be more concentrated in nutrients, whereas slurry from lactating sows and nursery piglets tended to be more diluted. Relevant relationships were found between slurry characteristics expressed on a fresh basis and gas emissions. Predictive models using on-farm measurable parameters were obtained for NH3 (R2 = 0.51) and CH4
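A prediction model of the kind reported (e.g. NH3 emissions with R2 = 0.51) can be sketched as an ordinary least-squares fit of emissions on one on-farm measurable parameter. The data in the usage example are synthetic, purely to show the mechanics; they are not the study's measurements, and the study's actual model form is not reproduced here.

```python
def ols_fit(x, y):
    """Ordinary least-squares fit y = a + b*x, returning (a, b, R^2).

    R^2 is the fraction of the variance of y explained by the fit,
    the statistic quoted for the abstract's prediction models.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                       # slope
    a = my - b * mx                     # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic illustration: fit emissions against a measurable parameter
# such as slurry dry matter content (invented values).
a, b, r2 = ols_fit([2.0, 4.0, 6.0, 8.0], [1.1, 2.3, 2.9, 4.1])
```

Extending this to several on-farm parameters (multiple regression) would mirror the predictive models mentioned in the abstract.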