973 results for Weighted objectives method
Abstract:
The general objective of this study was to evaluate the ordered weighted averaging (OWA) method, integrated with a geographic information system (GIS), for defining priority areas for forest conservation in a Brazilian river basin, aiming to increase regional biodiversity. We demonstrated how a range of alternatives can be obtained by applying OWA, including the one produced by the weighted linear combination method, and how the analytic hierarchy process (AHP) can be used to structure the decision problem and to assign importance to each criterion. The criteria considered important to this study were: proximity to forest patches; proximity among forest patches with larger core areas; proximity to surface water; distance from roads; distance from urban areas; and vulnerability to erosion. OWA requires two sets of weights: the relative criterion importance weights and the order weights. A participatory technique was therefore used to define the criteria set and the criterion importance (based on AHP). To obtain the second set of weights, we considered both the influence and the importance of each criterion in this decision-making process. The sensitivity analysis indicated coherence among the criterion importance weights, the order weights, and the solution. According to this analysis, only the proximity to surface water criterion is not important for identifying priority areas for forest conservation. Finally, we can highlight that the OWA method is flexible, easy to implement and, above all, facilitates a better understanding of the alternative land-use suitability patterns. (C) 2008 Elsevier B.V. All rights reserved.
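As a minimal sketch of the mechanics described above, the following follows Malczewski's GIS formulation of OWA, in which criterion scores are ranked and each criterion's importance weight is blended with the order weight of its rank position. All numbers are illustrative, not values from the study; with uniform order weights the result reproduces the weighted linear combination, while putting all order-weight mass on the first position behaves like an OR (max) operator.

```python
def owa(scores, importance, order_weights):
    """Ordered weighted averaging for one location: rank the criterion
    scores (best first), then blend each criterion's importance weight
    with the order weight of its rank position."""
    ranked = sorted(zip(scores, importance), key=lambda p: p[0], reverse=True)
    combined = [u * v for (_, u), v in zip(ranked, order_weights)]
    total = sum(combined)
    return sum((c / total) * s for c, (s, _) in zip(combined, ranked))

# Uniform order weights reduce OWA to the weighted linear combination:
wlc_like = owa([0.9, 0.5, 0.2], [0.5, 0.3, 0.2], [1/3, 1/3, 1/3])
```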
Abstract:
Explicitly correlated coupled-cluster calculations of intermolecular interaction energies for the S22 benchmark set of Jurecka, Sponer, Cerny, and Hobza (Phys. Chem. Chem. Phys. 2006, 8, 1985) are presented. Results obtained with the recently proposed CCSD(T)-F12a method and augmented double-zeta basis sets are found to be in very close agreement with basis-set-extrapolated conventional CCSD(T) results. Furthermore, we propose a dispersion-weighted MP2 (DW-MP2) approximation that combines the good accuracy of MP2 for complexes with predominantly electrostatic bonding with that of SCS-MP2 for dispersion-dominated ones. The MP2-F12 and SCS-MP2-F12 correlation energies are weighted by a switching function that depends on the relative HF and correlation contributions to the interaction energy. For the S22 set, this yields a mean absolute deviation of 0.2 kcal/mol from the CCSD(T)-F12a results. The method, which allows accurate results to be obtained at low cost, is also tested for a number of dimers that are not in the training set.
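The switching idea can be sketched as below: the two correlation contributions are mixed by a smooth function of the ratio of HF to correlation interaction energy. The functional form (a tanh switch) and the parameters `a`, `b` are illustrative placeholders, not the fitted values from the paper.

```python
import math

def dw_mp2(e_hf, e_mp2_corr, e_scs_corr, a=0.0, b=1.0):
    """Dispersion-weighted MP2 sketch: blend the MP2 and SCS-MP2
    correlation contributions to the interaction energy with a smooth
    switching function of the HF/correlation ratio (illustrative form)."""
    x = e_hf / e_mp2_corr              # electrostatic vs dispersion character
    w = 0.5 * (1.0 + math.tanh(a + b * x))
    return e_hf + w * e_mp2_corr + (1.0 - w) * e_scs_corr
```

With `b = 0` the switch is exactly halfway between MP2 and SCS-MP2; a large positive `b * x` drives the mix toward pure MP2.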
Abstract:
In this work a superposition technique for designing gradient coils for magnetic resonance imaging is outlined. It uses an optimized weight function, superimposed upon an initial winding similar to that obtained from the target field method, to generate the final wire winding. This work builds on the preliminary work performed in Part I on designing planar insertable gradient coils for high-resolution imaging. The proposed superposition method results in coil patterns with relatively low inductances, and the gradient coils can be used as inserts in existing magnetic resonance imaging hardware. The new scheme has the capacity to obtain images faster and in more detail owing to the delivery of greater magnetic field gradients. The proposed method is compared with a variant of the state-of-the-art target field method for planar gradient coil designs, and it is shown that the weighted superposition approach outperforms the well-known classical method.
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This study has three main objectives. First, it develops a generalization of the commonly used EKS method for multilateral price comparisons. It is shown that the EKS system can be generalized so that weights can be attached to each of the link comparisons used in the EKS computations. These weights can account for differing levels of reliability of the underlying binary comparisons. Second, various reliability measures and corresponding weighting schemes are presented and their merits discussed. Third, these new methods are applied to an international data set of manufacturing prices from the ICOP project. Although theoretically superior, the weighted EKS method turns out to have a generally small empirical impact compared to the unweighted EKS; this impact is larger when the method is applied at lower levels of aggregation. Finally, the importance of using sector-specific PPPs in assessing relative levels of manufacturing productivity is indicated.
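The weighting idea can be sketched as follows: the transitive PPP between two countries is a reliability-weighted geometric mean of the binary comparisons chained through each link country. This is a simplified sketch (one weight per link country); with equal weights it reduces to the ordinary EKS geometric mean.

```python
import math

def weighted_eks(ppp, w):
    """ppp[j][k]: binary PPP of country k relative to country j;
    w[l]: reliability weight of links bridged through country l.
    Returns transitive PPPs as reliability-weighted geometric means
    of the chained binary comparisons."""
    n = len(ppp)
    total = sum(w)
    return [[math.exp(sum(wl * (math.log(ppp[j][l]) + math.log(ppp[l][k]))
                          for l, wl in enumerate(w)) / total)
             for k in range(n)] for j in range(n)]
```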
Abstract:
The country-product-dummy (CPD) method, originally proposed in Summers (1973), has recently been revisited in its weighted formulation to handle a variety of data-related situations (Rao and Timmer, 2000, 2003; Heravi et al., 2001; Rao, 2001; Aten and Menezes, 2002; Heston and Aten, 2002; Deaton et al., 2004). The CPD method is also increasingly used for hedonic modelling rather than for its original purpose in Summers (1973) of filling holes in incomplete data. However, the CPD method is seen, among practitioners, as a black box due to its regression formulation. The main objective of the paper is to establish equivalence of purchasing power parities and international prices derived from the application of the weighted-CPD method with those arising out of the Rao system for multilateral comparisons. A major implication of this result is that the weighted-CPD method would then be a natural method of aggregation at all levels of aggregation within the context of international comparisons.
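The Rao system mentioned above can be sketched as a fixed-point iteration: each country's PPP is a weighted geometric mean of its prices deflated by the international prices, and each international price is a weighted geometric mean of national prices deflated by the PPPs. This is a simplified illustration of the system's structure, not the paper's proof of equivalence with weighted CPD.

```python
import math

def rao_system(prices, weights, iters=100):
    """prices[c][i]: price of item i in country c; weights[c][i]:
    expenditure-share weight. Iterates the Rao system with country 0
    as numeraire (simplified sketch)."""
    C, I = len(prices), len(prices[0])
    ppp, intl = [1.0] * C, [1.0] * I
    for _ in range(iters):
        for c in range(C):
            wsum = sum(weights[c])
            ppp[c] = math.exp(sum(weights[c][i] * math.log(prices[c][i] / intl[i])
                                  for i in range(I)) / wsum)
        for i in range(I):
            wsum = sum(weights[c][i] for c in range(C))
            intl[i] = math.exp(sum(weights[c][i] * math.log(prices[c][i] / ppp[c])
                                   for c in range(C)) / wsum)
        base = ppp[0]                      # normalize: country 0 as numeraire
        ppp = [p / base for p in ppp]
        intl = [q * base for q in intl]
    return ppp, intl
```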
Abstract:
Objectives: To examine associations between nutrition screening checklists and the health of older women. Design: Cross-sectional postal survey including measures of health and health service utilisation, as well as the Australian Nutrition Screening Initiative (ANSI), adapted from the Nutrition Screening Initiative (NSI). Setting: Australia, 1996. Subjects: In total, 12 939 women aged 70-75 years randomly selected as part of the Australian Longitudinal Study on Women's Health. Results: Responses to individual items in the ANSI checklist, and ANSI and NSI scores, were associated with measures of health and health service utilisation. Women with high ANSI and NSI scores had poorer physical and mental health, higher health care utilisation and were less likely to be in the acceptable weight range. The performance of an unweighted score (TSI) was also examined and showed similar results. Whereas ANSI classified 30% of the women as 'high-risk', only 13% and 12% were classified as 'high-risk' by the NSI and TSI, respectively. However, for identifying women with body mass index outside the acceptable range, sensitivity, specificity and positive predictive values for all of these checklists were less than 60%. Conclusions: Higher scores on both the ANSI and NSI are associated with poorer health. The simpler unweighted method of scoring the ANSI (TSI) showed better discrimination for the identification of 'at risk' women than the weighted ANSI method. The predictive value of individual items and the checklist scores need to be examined longitudinally.
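The contrast between the weighted ANSI/NSI-style score and the simple unweighted TSI-style count can be sketched as below; the item weights shown are placeholders, not the published ANSI values.

```python
def checklist_scores(responses, item_weights):
    """responses[i]: 1 if risk item i is endorsed, else 0.
    Returns (weighted, unweighted): the weighted checklist score and
    the simple count of endorsed items."""
    weighted = sum(w * r for w, r in zip(item_weights, responses))
    return weighted, sum(responses)
```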
Abstract:
This master's thesis defines a simulation model of a backup system, i.e. a backup model. The operation of the backup system is optimized with the help of this backup model. The goal of the optimization is to improve the efficiency of the backup system, and the improvement is sought by making maximal use of the backup system's existing resources. The backup model is optimized with an evolutionary algorithm. The optimization has several mutually conflicting objectives. The multi-objective optimization problem is converted into a single-objective one by forming the objective function with the weighted-sum method. In parallel with this method, Pareto optimization is also used: the search for points on the Pareto-optimal front is steered towards the neighbourhood of the weighted-sum optimum. The implementation of the evolutionary algorithm exploits problem-specific knowledge about backup systems. The result of the work is a simulation and optimization tool for backup systems. The simulation tool is used to assess how well the current backup system works, and the optimization is used to make its operation more efficient. The tool can also be used in designing new backup systems and in extending existing ones.
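The two building blocks combined above, weighted-sum scalarization and Pareto optimality, can be sketched as below (a generic illustration, not the thesis's implementation; objectives are assumed normalized and minimized).

```python
def weighted_sum(objectives, weights):
    """Scalarize one candidate's objective vector (assumed normalized
    to comparable scales) into a single value to minimize."""
    return sum(w * f for w, f in zip(weights, objectives))

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse than b in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

A weighted-sum optimum is always Pareto-optimal, but for non-convex fronts some Pareto-optimal points cannot be reached by any weight choice, which is why the thesis runs Pareto optimization alongside it.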
Abstract:
Many emerging Internet applications, such as Internet TV, Internet radio and multi-point video streaming, have resource requirements in terms of consumed bandwidth, end-to-end delay, packet loss rate, etc. It is therefore necessary to formulate a proposal that specifies and provides, for this kind of application, the resources it needs to work well. In this thesis we propose a multi-objective traffic-engineering scheme that uses different distribution trees for many multicast flows. We use a multipath approach for each egress node, obtain a multi-tree approach from it, and in this way create different multicast trees. Our proposal also determines the fraction in which traffic is split across the multiple trees. The proposal can be applied in MPLS networks by establishing explicit routes for multicast events. In the first instance, the goal is to combine the following weighted objectives into a single aggregated metric: maximum link utilization, hop count, total consumed bandwidth, and total end-to-end delay. We have formulated this multi-objective function (the MHDB-S model), and the results obtained show that several weighted objectives are reduced and that the maximum link utilization is minimized. The problem is NP-hard, so an algorithm is proposed to optimize the different objectives; the behaviour obtained with this algorithm is similar to that obtained with the model. During a multicast transmission, egress nodes can normally join or leave the tree, and for this reason we also propose a multi-objective traffic-engineering scheme that uses different trees for dynamic multicast groups
(in which the egress nodes can change during the lifetime of the connection). If a multicast tree is recomputed from scratch, considerable CPU time may be consumed and, moreover, all communications using the multicast tree are temporarily interrupted. To alleviate these drawbacks, we propose an optimization model (the dynamic MHDB-D model) that reuses the multicast trees previously computed with the static MHDB-S model, adding the new egress nodes. Using the weighted-sum method to solve the analytical model is not necessarily correct, because the solution space may be non-convex and some solutions may therefore not be found. In addition, other kinds of objectives have been considered in other research works. For these reasons, a new model called GMM is proposed, and to solve it a new algorithm based on multi-objective evolutionary algorithms is proposed, inspired by the Strength Pareto Evolutionary Algorithm (SPEA). To handle the dynamic case with this generalized model, we propose a new dynamic model and a computational solution using probabilistic Breadth-First Search (BFS). Finally, to evaluate the proposed optimization scheme, we ran various tests and simulations. The main contributions of this thesis are the taxonomy; the multi-objective optimization models for the static and dynamic cases of multicast transmission (MHDB-S and MHDB-D); the algorithms that solve these models computationally; and, finally, the generalized models for the static and dynamic cases (GMM and dynamic GMM) together with the computational approaches that solve them using MOEA and probabilistic BFS.
Abstract:
The Extended Weighted Residuals Method (EWRM) is applied to investigate the effects of viscous dissipation on the thermal development of forced convection in a porous-saturated duct of rectangular cross-section with an isothermal boundary condition. The Brinkman flow model is employed to determine the velocity field. The temperature in the flow field is computed by utilizing the Green's function solution based on the EWRM. Following the computation of the temperature field, expressions are presented for the local Nusselt number and the bulk temperature as functions of the dimensionless longitudinal coordinate. In addition to the aspect ratio, the other parameters included in this computation are the Darcy number, the viscosity ratio, and the Brinkman number.
Abstract:
Heat transfer and entropy generation analysis of the thermally developing forced convection in a porous-saturated duct of rectangular cross-section, with walls maintained at a constant and uniform heat flux, is investigated based on the Brinkman flow model. The classical Galerkin method is used to obtain the fully developed velocity distribution. To solve the thermal energy equation, with the effects of viscous dissipation being included, the Extended Weighted Residuals Method (EWRM) is applied. The local (three dimensional) temperature field is solved by utilizing the Green’s function solution based on the EWRM where symbolic algebra is being used for convenience in presentation. Following the computation of the temperature field, expressions are presented for the local Nusselt number and the bulk temperature as a function of the dimensionless longitudinal coordinate, the aspect ratio, the Darcy number, the viscosity ratio, and the Brinkman number. With the velocity and temperature field being determined, the Second Law (of Thermodynamics) aspect of the problem is also investigated. Approximate closed form solutions are also presented for two limiting cases of MDa values. It is observed that decreasing the aspect ratio and MDa values increases the entropy generation rate.
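The weighted-residuals machinery used in the two abstracts above can be illustrated on a toy 1-D problem (not the duct problem itself): solve u''(x) = -1 on (0, 1) with u(0) = u(1) = 0 by a Galerkin projection onto sine modes. Projecting the residual onto each mode gives the coefficients a_n = 2(1 - (-1)^n)/(n*pi)^3, and the exact solution x(1 - x)/2 is recovered as more modes are added.

```python
import math

def galerkin_poisson_1d(n_modes=20):
    """Galerkin weighted residuals for u''(x) = -1, u(0) = u(1) = 0,
    with sine trial and weight functions; returns the approximate
    solution u as a callable."""
    coeffs = [2.0 * (1 - (-1) ** n) / (n * math.pi) ** 3
              for n in range(1, n_modes + 1)]
    def u(x):
        return sum(a * math.sin((n + 1) * math.pi * x)
                   for n, a in enumerate(coeffs))
    return u
```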
Abstract:
Agent-based computational economics is becoming widely used in practice. This paper explores the consistency of some of its standard techniques. We focus in particular on prevailing wholesale electricity trading simulation methods. We include different supply and demand representations and propose the Experience-Weighted Attraction method to include several behavioural algorithms. We compare the results across assumptions and to economic theory predictions. The match is good under best-response and reinforcement learning but not under fictitious play. The simulations perform well under flat and upward-sloping supply bidding, and also for plausible demand elasticity assumptions. Learning is influenced by the number of bids per plant and the initial conditions. The overall conclusion is that agent-based simulation assumptions are far from innocuous. We link their performance to underlying features, and identify those that are better suited to model wholesale electricity markets.
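A single Experience-Weighted Attraction update, in the Camerer-Ho form, can be sketched as below: attractions decay by phi, the chosen strategy is reinforced by its realized payoff, unchosen strategies by delta times their forgone payoff, and everything is normalized by the experience weight. The parameter values are illustrative, not the paper's calibration.

```python
def ewa_update(attractions, n_obs, chosen, payoffs,
               phi=0.9, delta=0.5, rho=0.9):
    """One EWA learning step. attractions[i]: attraction of strategy i;
    payoffs[i]: payoff strategy i would have earned this round;
    chosen: index of the strategy actually played; n_obs: experience
    weight carried over from previous rounds."""
    n_new = rho * n_obs + 1.0
    updated = []
    for i, (a, pay) in enumerate(zip(attractions, payoffs)):
        reinforcement = (delta + (1.0 - delta) * (i == chosen)) * pay
        updated.append((phi * n_obs * a + reinforcement) / n_new)
    return updated, n_new
```

With delta = 0 this reduces to reinforcement learning (only the chosen strategy is updated); with phi = rho and delta = 1 it behaves like weighted fictitious play.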
Abstract:
The goal of this project was to provide an objective methodology to support public agencies and railroads in making decisions related to consolidation of at-grade rail-highway crossings. The project team developed a weighted-index method and accompanying Microsoft Excel spreadsheet based tool to help evaluate and prioritize all public highway-rail grade crossings systematically from a possible consolidation impact perspective. Factors identified by stakeholders as critical were traffic volume, heavy-truck traffic volume, proximity to emergency medical services, proximity to schools, road system, and out-of-distance travel. Given the inherent differences between urban and rural locations, factors were considered, and weighted, differently, based on crossing location. Application of a weighted-index method allowed for all factors of interest to be included and for these factors to be ranked independently, as well as weighted according to stakeholder priorities, to create a single index. If priorities change, this approach also allows for factors and weights to be adjusted. The prioritization generated by this approach may be used to convey the need and opportunity for crossing consolidation to decision makers and stakeholders. It may also be used to quickly investigate the feasibility of a possible consolidation. Independently computed crossing risk and relative impact of consolidation may be integrated and compared to develop the most appropriate treatment strategies or alternatives for a highway-rail grade crossing. A crossing with limited- or low-consolidation impact but a high safety risk may be a prime candidate for consolidation. Similarly, a crossing with potentially high-consolidation impact as well as high risk may be an excellent candidate for crossing improvements or grade separation. The results of the highway-rail grade crossing prioritization represent a consistent and quantitative, yet preliminary, assessment. 
The results may serve as the foundation for more rigorous or detailed analyses and feasibility studies. Other pertinent site-specific factors, such as safety, maintenance costs, economic impacts, and location-specific access and characteristics, should also be considered.
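The weighted-index combination described above can be sketched as below; the factor names and weights are illustrative placeholders, not the stakeholder-calibrated values from the study, and each factor score is assumed already normalized to the 0-1 range.

```python
def crossing_index(factors, weights):
    """Combine normalized factor scores (0-1) for one grade crossing
    into a single priority index using weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * factors[name] for name in weights)

# Hypothetical rural weighting (urban crossings would use a different set):
rural_weights = {"traffic": 0.30, "trucks": 0.20, "ems_proximity": 0.20,
                 "schools": 0.10, "road_system": 0.10, "detour": 0.10}
```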
Abstract:
This thesis presents two graphical user interfaces for the project DigiQ - Fusion of Digital and Visual Print Quality, a project for computationally modelling the subjective human experience of print quality by measuring the image with certain metrics. After presenting the user interfaces, methods are given for reducing the computation time of several of the metrics and of the image registration process required to compute them, together with details of their performance. The weighted sample method for image registration decreased calculation times significantly, at the cost of some error. The random sampling method for the metrics greatly reduced calculation time while maintaining excellent accuracy, but worked with only two of the metrics.
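The random-sampling idea for the metrics can be sketched as below: estimate the mean of a per-pixel metric from a random sample of pixels instead of the whole image. The function and argument names are illustrative, not identifiers from the thesis.

```python
import random

def sampled_metric(pixels, metric, n_samples=1000, seed=0):
    """Estimate the mean of a per-pixel metric over an image by
    averaging it on a random sample of pixels; a fixed seed makes the
    estimate reproducible."""
    rng = random.Random(seed)
    sample = rng.sample(pixels, min(n_samples, len(pixels)))
    return sum(metric(p) for p in sample) / len(sample)
```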