987 results for multi-region


Relevance: 100.00%

Abstract:

The main objective of this work is to present an alternative boundary element method (BEM) formulation for the static analysis of three-dimensional non-homogeneous isotropic solids. These problems can be solved using the classical boundary element formulation, analyzing each subregion separately and then joining the subregions together by enforcing equilibrium and displacement compatibility at the interfaces. By establishing relations between the displacement fundamental solutions of the different domains, the alternative technique proposed in this paper allows analyzing all the domains as one single solid, requiring no equilibrium or compatibility equations. This formulation also leads to a smaller system of equations than the usual subregion technique, and the results obtained are even more accurate. (C) 2008 Elsevier Ltd. All rights reserved.
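The subregion coupling described above can be illustrated with a deliberately tiny stand-in: two 1D elastic bars joined at an interface node, coupled by exactly the two conditions the classical formulation introduces (interface equilibrium and displacement compatibility). The stiffnesses and load below are invented for illustration; this is not the paper's BEM system.

```python
import numpy as np

# Toy stand-in for classical subregion coupling: two 1D elastic bars
# (subregions) joined at an interface node. Compatibility: both bars share
# the interface displacement uI. Equilibrium: interface tractions cancel.
# k1, k2 and P are made-up illustrative values.
k1, k2, P = 2.0, 3.0, 6.0

# Unknowns: [uI, uR] (left end of bar 1 is fixed, load P at right end of bar 2).
# Row 1: interface equilibrium   k1*uI + k2*(uI - uR) = 0
# Row 2: load balance at the free end   k2*(uR - uI) = P
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
f = np.array([0.0, P])
uI, uR = np.linalg.solve(K, f)

# Springs in series: uI = P/k1 = 3, uR = P/k1 + P/k2 = 5
print(uI, uR)
```

Solving the coupled block system reproduces the series-spring result, which is the consistency check the subregion technique relies on.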

Relevance: 100.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering

Relevance: 60.00%

Abstract:

This work presents a non-linear boundary element formulation applied to the analysis of contact problems. The boundary element method (BEM) is known as a robust and accurate numerical technique for handling this type of problem, because the contact among the solids occurs along their boundaries. The proposed non-linear formulation is based on the use of singular or hyper-singular integral equations by BEM, for multi-region contact. When the contact occurs between crack surfaces, the formulation adopted is the dual version of BEM, in which singular and hyper-singular integral equations are defined along the opposite sides of the contact boundaries. The structural non-linear behaviour on the contact is modelled using Coulomb's friction law. The non-linear formulation is based on the tangent operator, in which the derivative of the set of algebraic equations is used to construct the corrections for the non-linear process. This implicit formulation has proved as accurate as the classical approach, while computing the solution faster. Examples of simple and multi-region contact problems are shown to illustrate the applicability of the proposed scheme. (C) 2011 Elsevier Ltd. All rights reserved.
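The tangent-operator idea (corrections built from the derivative of the algebraic equations, i.e. Newton's method with a consistent tangent) can be sketched on a single degree of freedom with a smoothed Coulomb friction law. The regularized law and all values below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Minimal sketch of the tangent-operator idea: Newton corrections are built
# from the analytic derivative of the residual. Here, a single block with a
# smoothed Coulomb friction law t(g) is solved for the slip g equilibrating
# an applied tangential force F. All values are illustrative.
mu_N = 10.0   # friction limit mu * N
k_t = 100.0   # tangential (penalty) stiffness
F = 8.0       # applied tangential force, below the friction limit

t  = lambda g: mu_N * np.tanh(k_t * g / mu_N)        # regularized friction law
dt = lambda g: k_t / np.cosh(k_t * g / mu_N) ** 2    # consistent tangent

g = 0.0
for _ in range(20):
    r = t(g) - F              # residual of the equilibrium equation
    if abs(r) < 1e-10:
        break
    g -= r / dt(g)            # Newton correction via the tangent operator

print(g)  # ~0.10986, i.e. atanh(0.8) * mu_N / k_t
```

Newton with the consistent tangent converges quadratically here, which is the speed advantage over classical fixed-point contact iterations that the abstract alludes to.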

Relevance: 60.00%

Abstract:

Inverse analysis is currently an important subject of study in several fields of science and engineering. The identification of physical and geometric parameters using experimental measurements is required in many applications. In this work a boundary element formulation to identify boundary and interface values, as well as material properties, is proposed. In particular, the proposed formulation is dedicated to identifying material parameters when a cohesive crack model is assumed for 2D problems. A computer code is developed and implemented using the BEM multi-region technique and regularisation methods to perform the inverse analysis. Several examples are shown to demonstrate the efficiency of the proposed model. (C) 2010 Elsevier Ltd. All rights reserved.
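As a rough sketch of the role regularisation plays in such inverse problems, the snippet below stabilizes an ill-conditioned linear identification problem with Tikhonov regularization; the synthetic Hilbert-type matrix stands in for the (unavailable) BEM system.

```python
import numpy as np

# Sketch of the regularisation step in inverse analysis: an ill-conditioned
# identification problem A x = b is stabilized by Tikhonov regularization,
# min ||A x - b||^2 + lam * ||x||^2, solved via the normal equations.
# The Hilbert-type matrix below is synthetic, not the paper's BEM system.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.sin(np.arange(n))   # "measurements" with a small perturbation

lam = 1e-8                                      # regularization parameter
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Despite a condition number of ~1e10, the regularized solution stays
# close to x_true, while the data residual remains tiny.
print(np.linalg.norm(x_reg - x_true), np.linalg.norm(A @ x_reg - b))
```

In practice the parameter `lam` is chosen by a method such as the L-curve or discrepancy principle; here a fixed small value suffices for illustration.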

Relevance: 60.00%

Abstract:

The applicability of a meshfree approximation method, namely the EFG method, to fully geometrically exact analysis of plates is investigated. Based on a unified nonlinear theory of plates, which allows for arbitrarily large rotations and displacements, a Galerkin approximation via MLS functions is established. A hybrid method of analysis is proposed, where the solution is obtained by the independent approximation of the generalized internal displacement fields and the generalized boundary tractions. A consistent linearization procedure is performed, resulting in a semi-definite generalized tangent stiffness matrix which, for hyperelastic materials and conservative loadings, is always symmetric (even for configurations far from the generalized equilibrium trajectory). Besides the total Lagrangian formulation, an updated version is also presented, which enables the treatment of rotations beyond the parameterization limit. An extension of the arc-length method that includes the generalized domain displacement fields, the generalized boundary tractions and the load parameter in the constraint equation of the hyper-ellipsoid is proposed to solve the resulting nonlinear problem. Extending the hybrid-displacement formulation, a multi-region decomposition is proposed to handle complex geometries. A criterion for the classification of the stability of equilibria, based on analysis of the bordered Hessian matrix, is suggested. Several numerical examples are presented, illustrating the effectiveness of the method. Unlike standard finite element methods (FEM), the resulting solutions are (arbitrarily) smooth generalized displacement and stress fields. (c) 2007 Elsevier Ltd. All rights reserved.
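The arc-length constraint mentioned above can be sketched in its simplest (Riks-type) form on a one-dimensional toy problem: the equilibrium residual F(u) − λ = 0 is augmented with the constraint (u − u0)² + (λ − λ0)² = Δl², so the solver can traverse limit points where pure load control fails. The snap-through curve F below is invented; the paper's constraint additionally includes the generalized boundary tractions.

```python
import numpy as np

# Minimal arc-length (Riks-type) continuation on a toy snap-through curve.
# Equilibrium F(u) = lam is augmented with a spherical constraint so the
# solver can pass the limit points where lam reverses. F is invented.
F  = lambda u: u**3 - 1.5 * u**2 + 0.6 * u
dF = lambda u: 3 * u**2 - 3.0 * u + 0.6

u, lam = 0.0, 0.0
du, dlam = 1.0, 0.6               # initial tangent: dF(0) * du = dlam
dl = 0.05                         # prescribed arc-length increment
path = [(u, lam)]
for _ in range(40):
    norm = np.hypot(du, dlam)
    up, lp = u + dl * du / norm, lam + dl * dlam / norm   # predictor
    u0, lam0 = u, lam
    for _ in range(50):                                    # Newton corrector
        r = np.array([F(up) - lp,
                      (up - u0)**2 + (lp - lam0)**2 - dl**2])
        if np.linalg.norm(r) < 1e-12:
            break
        J = np.array([[dF(up),       -1.0],
                      [2 * (up - u0), 2 * (lp - lam0)]])
        step = np.linalg.solve(J, -r)
        up, lp = up + step[0], lp + step[1]
    du, dlam = up - u, lp - lam    # secant direction for the next predictor
    u, lam = up, lp
    path.append((u, lam))

lams = [p[1] for p in path]
# lam rises to ~0.072, falls through the limit point to ~0.028, then rises again
print(lams[6], lams[15], lams[-1])
```

The augmented Jacobian remains nonsingular at limit points, which is precisely why the arc-length constraint is needed for snap-through problems.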

Relevance: 60.00%

Abstract:

In previous work we have applied the environmental multi-region input-output (MRIO) method proposed by Turner et al (2007) to examine the ‘CO2 trade balance’ between Scotland and the Rest of the UK. In McGregor et al (2008) we construct an interregional economy-environment input-output (IO) and social accounting matrix (SAM) framework that allows us to investigate methods of attributing responsibility for pollution generation in the UK at the regional level. This facilitates analysis of the nature and significance of environmental spillovers and of the existence of an environmental ‘trade balance’ between regions. While the existence of significant data problems means that the quantitative results of this study should be regarded as provisional, we argue that such a framework allows us to begin to consider questions such as the extent to which a devolved authority like the Scottish Parliament can and should be responsible for contributing to national targets for reductions in emissions levels (e.g. the UK commitment to the Kyoto Protocol) when it is limited in the ways it can control emissions, particularly with respect to changes in demand elsewhere in the UK. However, while such analysis is useful for accounting for pollution flows in the single time period that the accounts relate to, it is limited when the focus is on modelling the impacts of any marginal change in activity. This is because a conventional demand-driven IO model assumes an entirely passive supply side in the economy (i.e. all supply is infinitely elastic) and is further restricted by the assumption of universal Leontief (fixed proportions) technology implied by the use of the A and multiplier matrices. In this paper we argue that, where analysis of marginal changes in activity is required, a more flexible interregional computable general equilibrium approach that models behavioural relationships in a more realistic and theory-consistent manner is more appropriate and informative.
To illustrate our analysis, we compare the results of introducing a positive demand stimulus in the UK economy using both IO and CGE interregional models of Scotland and the rest of the UK. In the case of the latter, we demonstrate how more theory-consistent modelling of both demand- and supply-side behaviour at the regional and national levels affects model results, including the impact on the interregional CO2 ‘trade balance’.
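The demand-driven IO mechanics criticized in the passage above can be made concrete with a toy two-region model: gross output x solves x = Ax + f, so x = (I − A)⁻¹f, and the fixed-coefficients assumption makes every response exactly proportional to the demand change. All numbers below are invented for illustration.

```python
import numpy as np

# Toy two-region, one-sector-per-region input-output sketch: output x solves
# x = A x + f, i.e. x = (I - A)^{-1} f, and emissions are e * x for fixed
# emission intensities e. The A matrix and all coefficients are invented
# (think of the two regions as "Scotland" and "rest of the UK").
A = np.array([[0.20, 0.05],    # interregional technical coefficients
              [0.10, 0.30]])
e = np.array([0.5, 0.4])       # emissions per unit of gross output
f = np.array([100.0, 400.0])   # final demand by region

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse (multiplier matrix)
x = L @ f                          # gross output required to meet demand
emissions = e * x
print(x, emissions)

# The fixed-proportions (Leontief) assumption: doubling final demand exactly
# doubles output and emissions -- the entirely passive supply side that the
# text argues a CGE model relaxes.
assert np.allclose(L @ (2 * f), 2 * x)
```

A CGE treatment would replace the fixed A coefficients and infinitely elastic supply with behavioural demand and supply functions, which is exactly where the two modelling approaches diverge.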

Relevance: 60.00%

Abstract:

The application of multi-region environmental input-output (IO) analysis to the problem of accounting for emissions generation (and/or resource use) under different accounting principles has become increasingly common in the ecological and environmental economics literature in particular, with applications at the international and interregional subnational level. However, while environmental IO analysis is invaluable for accounting for pollution flows in the single time period that the accounts relate to, it is limited when the focus is on modelling the impacts of any marginal change in activity. This is because a conventional demand-driven IO model assumes an entirely passive supply side in the economy (i.e. all supply is infinitely elastic) and is further restricted by the assumption of universal Leontief (fixed proportions) technology implied by the use of the A and multiplier matrices. Where analysis of marginal changes in activity is required, extension from an IO accounting framework to a more flexible interregional computable general equilibrium (CGE) approach, where behavioural relationships can be modelled in a more realistic and theory-consistent manner, is appropriate. Our argument is illustrated by comparing the results of introducing a positive demand stimulus in the UK economy using IO and CGE interregional models of Scotland and the rest of the UK. In the case of the latter, we demonstrate how more theory-consistent modelling of both demand- and supply-side behaviour at the regional and national levels affects model results, including the impact on the interregional CO2 ‘trade balance’.

Relevance: 60.00%

Abstract:

In this paper we attempt an empirical application of the multi-region input-output (MRIO) method in order to enumerate the pollution content of interregional trade flows between five Mid-West regions/states in the US (Illinois, Indiana, Iowa, Michigan and Wisconsin) and the rest of the US. This allows us to analyse some very important issues concerning the nature and significance of interregional environmental spillovers within the US Mid-West and the existence of pollution ‘trade balances’ between states. Our results raise questions about the extent to which authorities at state level can control local emissions, given that they are limited in the way some emissions can be controlled, particularly with respect to changes in demand elsewhere in the Mid-West and the US. This implies a need for policy coordination between national and state-level authorities in the US to meet emissions reduction targets. The existence of environmental trade balances between states also raises the issue of net losses/gains of pollutants resulting from interregional trade within the US, and of whether, if certain activities can be carried out using less polluting technology in one region relative to others, it is better for the US as a whole if this type of relationship exists.

Relevance: 60.00%

Abstract:

This study evaluates the static effects of a recent proposal to reform Colombia's current applied tariff structure. The proposal, put forward by the government, aimed to rationalize that structure and to foster improvements in industrial productivity, but it was abandoned in the face of private-sector opposition. The evaluation is carried out with a multi-country computable general equilibrium model. The results indicate that not implementing the proposal means forgoing the welfare gains that could be expected from a combination of the proposal and the implementation of the agreements on the country's trade agenda.

Relevance: 60.00%

Abstract:

Using a simulation carried out with GTAP, this paper presents a preliminary assessment of the potential impact that the Free Trade Area of the Americas would have on the Andean Community of Nations. Maintained by Purdue University, GTAP is a multi-region general equilibrium model widely used for the analysis of international economic issues. The experiment is conducted in an environment of perfect competition and constant returns to scale, and consists of the complete elimination of tariffs on goods imports among the countries of the Western Hemisphere. The results show modest but positive net welfare gains for the Andean Community, generated mainly by improvements in resource allocation. Unfavourable movements in the terms of trade and trade diversion with respect to third countries considerably reduce the potential welfare gains. Likewise, the existence of economic distortions within the Andean Community has a negative effect on welfare. The pattern of trade becomes more concentrated in bilateral trade with the United States, and real factor remuneration improves with the implementation of the free trade area.

Relevance: 60.00%

Abstract:

This paper introduces the special issue of Climatic Change on the QUEST-GSI project, a global-scale multi-sectoral assessment of the impacts of climate change. The project used multiple climate models to characterise plausible climate futures with consistent baseline climate and socio-economic data and consistent assumptions, together with a suite of global-scale sectoral impacts models. It estimated impacts across sectors under specific SRES emissions scenarios, and also constructed functions relating impact to change in global mean surface temperature. This paper summarises the objectives of the project and its overall methodology, outlines how the project approach has been used in subsequent policy-relevant assessments of future climate change under different emissions futures, and summarises the general lessons learnt in the project about model validation and the presentation of multi-sector, multi-region impact assessments and their associated uncertainties to different audiences.

Relevance: 60.00%

Abstract:

The aim of this study is to evaluate, by means of a multi-sector, multi-region computable general equilibrium model, the impacts on the Brazilian economy of a reduction in tariffs on non-agricultural goods based on the Swiss formula with different coefficients. The general equilibrium model used is the Global Trade Analysis Project (GTAP), and the tariff cuts were estimated from MAcMap data. In addition to the macroeconomic and sectoral impacts, the sensitivity of the model to an increase in the Armington elasticities and to the implementation of agricultural tariff liberalization was tested.
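The Swiss formula mentioned above is the standard non-linear tariff-cutting rule t_new = a·t_old / (a + t_old): higher initial tariffs are cut proportionally more, and no resulting tariff can exceed the coefficient a. The coefficients and tariff levels below are illustrative, not those of the study.

```python
# The Swiss formula: t_new = (a * t_old) / (a + t_old).
# Higher initial tariffs are cut proportionally more, and the resulting
# tariff is always below the coefficient a. Values here are illustrative.
def swiss_formula(t_old: float, a: float) -> float:
    return a * t_old / (a + t_old)

for a in (8.0, 20.0):                 # two example coefficients
    for t in (5.0, 15.0, 35.0):       # initial ad valorem tariffs (%)
        print(a, t, round(swiss_formula(t, a), 2))

# e.g. with a = 8, a 35% tariff falls to 8 * 35 / 43 = 6.51%, below the cap a.
```

The coefficient a thus acts simultaneously as the harmonization knob and as a hard ceiling on post-cut tariffs, which is why studies test several values of it.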

Relevance: 60.00%

Abstract:

OBJECTIVES The SOURCE XT Registry (Edwards SAPIEN XT Aortic Bioprosthesis Multi-Region Outcome Registry) assessed the use and clinical outcomes of the SAPIEN XT (Edwards Lifesciences, Irvine, California) valve in the real-world setting. BACKGROUND Transcatheter aortic valve replacement is an established treatment for high-risk/inoperable patients with severe aortic stenosis. The SAPIEN XT is a balloon-expandable valve with enhanced features allowing delivery via a lower profile sheath. METHODS The SOURCE XT Registry is a prospective, multicenter, post-approval study. Data from 2,688 patients at 99 sites were analyzed. The main outcome measures were all-cause mortality, stroke, major vascular complications, bleeding, and pacemaker implantations at 30 days and 1 year post-procedure. RESULTS The mean age was 81.4 ± 6.6 years, 42.3% were male, and the mean logistic EuroSCORE (European System for Cardiac Operative Risk Evaluation) was 20.4 ± 12.4%. Patients had a high burden of coronary disease (44.2%), diabetes (29.4%), renal insufficiency (28.9%), atrial fibrillation (25.6%), and peripheral vascular disease (21.2%). Survival was 93.7% at 30 days and 80.6% at 1 year. At 30-day follow-up, the stroke rate was 3.6%, the rate of major vascular complications was 6.5%, the rate of life-threatening bleeding was 5.5%, the rate of new pacemakers was 9.5%, and the rate of moderate/severe paravalvular leak was 5.5%. Multivariable analysis identified nontransfemoral approach (hazard ratio [HR]: 1.84; p < 0.0001), renal insufficiency (HR: 1.53; p < 0.0001), liver disease (HR: 1.67; p = 0.0453), moderate/severe tricuspid regurgitation (HR: 1.47; p = 0.0019), porcelain aorta (HR: 1.47; p = 0.0352), and atrial fibrillation (HR: 1.41; p = 0.0014) as the factors with the highest hazard ratios for 1-year mortality. Major vascular complications and major/life-threatening bleeding were the most frequent complications associated with a significant increase in 1-year mortality.
CONCLUSIONS The SOURCE XT Registry demonstrated appropriate use of the SAPIEN XT THV in the first year post-commercialization in Europe. The safety profile is sustained, and clinical benefits have been established in the real-world setting. (SOURCE XT Registry; NCT01238497).

Relevance: 60.00%

Abstract:

The fight against climate change is one of the most important environmental challenges of the 21st century. Meeting the goal of reducing greenhouse gas emissions requires tools, applicable across all economic activities, with which to measure the impact generated by human activity. The Carbon Footprint (CF) is part of a set of indicators developed to meet this need. Our line of work starts from the premise that demand for a low CF can be a key factor in stimulating changes in consumption habits and in improving the efficiency of production processes. However, one of the main difficulties encountered is the difference in approach between calculating the CF of a product and the CF of an organization. Likewise, there are significant difficulties in setting the boundaries of the system under study. For the CF to be successfully adopted by society, the same criteria must be applied across studies; otherwise comparability is compromised, and with it consumer confidence. Advances in CF calculation build on two widely known approaches: Life Cycle Assessment and Environmentally Extended Input-Output Analysis. Both methodologies have significant strengths and weaknesses, so hybridizing the two approaches is a clear opportunity to seek synergies. In response to this demand, various hybrid tools are being developed. The research in this doctoral thesis builds on advances in the Compound Method Based on Financial Accounts (MC3). MC3 is a tiered hybrid analysis method that performs an exhaustive calculation of an organization's CF and, from it, calculates the CF of its products.
The general objective of this research is to evaluate MC3 as a CF calculation tool valid for both organizations and products. To this end, four case studies with innovative features are analysed in detail. Three of them apply MC3 to different units of study: an organization, a product, and an international scenario. The organizational application concerns a university centre and allows a detailed analysis of several methodological aspects. The product application compares MC3 results with those of a traditional Life Cycle Assessment. The international scenario is developed in Brazil, around energy production at a large wind farm. Finally, the fourth case study is based on environmentally extended Multi-Region Input-Output analysis and develops a new approach for analysing the impact of a hypothetical closure of international trade. These studies are discussed as a whole in order to highlight the strengths of the implemented innovations from an integrative standpoint. Future strategies are also proposed to improve the MC3 methodology, with a view to internationalization and harmonization with international CF standards. In the light of this experience, MC3 is a practical and valid CF calculation method for evaluating the direct and indirect greenhouse gas emissions of any type of activity. One of the main conclusions is that MC3 can be considered a valid tool for the global eco-labelling of goods and services, enabling both companies and consumers to act as drivers of change towards an economy energized by the pursuit of rational resource use.
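A heavily simplified sketch of the tiered-hybrid idea behind MC3: an organization's footprint is built from direct (measured) emissions plus indirect emissions estimated from accounting entries multiplied by sectoral emission intensities from an environmentally extended input-output model. Every figure below is invented (the diesel factor is merely a typical order of magnitude), and the real MC3 method is far more detailed.

```python
# Rough sketch of a tiered hybrid carbon footprint in the spirit of MC3:
# direct (scope 1) emissions from measured fuel use, plus indirect emissions
# estimated from accounting entries (purchases) times sectoral emission
# intensities from an environmentally extended input-output model.
# All numbers are invented for illustration.
direct_fuel_litres = {"diesel": 1200.0}
fuel_ef = {"diesel": 2.68}                 # kg CO2e per litre (typical order of magnitude)

purchases_eur = {"electricity": 15000.0, "paper": 2000.0, "travel": 8000.0}
io_intensity = {"electricity": 0.25,       # kg CO2e per euro, invented values
                "paper": 0.12,
                "travel": 0.30}

scope1 = sum(litres * fuel_ef[f] for f, litres in direct_fuel_litres.items())
indirect = sum(eur * io_intensity[s] for s, eur in purchases_eur.items())
footprint = scope1 + indirect
print(footprint)   # organizational footprint in kg CO2e for the period
```

Working from accounting entries is what lets the method cover an organization exhaustively; a product footprint is then obtained by allocating this total over the organization's outputs.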

Relevance: 60.00%

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in the past few years. Consequently, new computer vision algorithms will need to be parallel to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them interesting for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing subtasks and implementing them on a GPU, both applications attain their goals of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation.
First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usual in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided.
Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen limited use due to their large computational and memory requirements. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal.
Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
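The continuous piecewise linear approximation scheme described above can be sketched on the CPU with np.interp, which evaluates exactly the kind of interpolant that GPU texture filtering units compute in hardware. The test function and grid sizes below are arbitrary; the point is the second-order error behaviour that the thesis analyses.

```python
import numpy as np

# Sketch of function approximation by continuous piecewise linear
# interpolation, the scheme mapped onto GPU texture filtering units.
# For a function f with bounded second derivative, the max error on a
# uniform grid of spacing h is bounded by h^2 * max|f''| / 8, so doubling
# the number of samples roughly quarters the error.
f = np.cos  # example function; any smooth f works

def pwl_max_error(n_samples: int) -> float:
    xs = np.linspace(0.0, np.pi, n_samples)      # uniform partition
    dense = np.linspace(0.0, np.pi, 100_001)     # dense evaluation grid
    approx = np.interp(dense, xs, f(xs))         # the piecewise linear model
    return float(np.max(np.abs(approx - f(dense))))

e64, e128 = pwl_max_error(64), pwl_max_error(128)
print(e64, e128, e64 / e128)   # ratio close to 4: second-order accuracy
```

A non-uniform (quasi-optimal) partition, as proposed in the thesis, concentrates samples where |f''| is large; for cos on [0, π] that means near the endpoints, where the interpolation error of the uniform grid peaks.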