864 results for Local-Global topics


Relevance:

30.00%

Publisher:

Abstract:

In the present global era, in which firms choose plant locations beyond national borders, location characteristics are important for attracting multinational enterprises (MNEs). Better access to countries with large markets is clearly attractive for MNEs. For example, special tariff treatments such as the Generalized System of Preferences (GSP) are beneficial for MNEs whose home country does not enjoy such treatment. Not only such country characteristics but also region characteristics (i.e., province-level or city-level ones) matter, particularly when location characteristics differ widely between a nation's regions. The existence of industrial concentration, that is, agglomeration, is a typical regional characteristic. It is with consideration of these country-level and region-level characteristics that MNEs decide their locations abroad. A large number of academic studies have investigated in what kinds of countries MNEs locate, i.e., location choice analysis. Employing the usual new economic geography model (i.e., a constant elasticity of substitution (CES) utility function, Dixit-Stiglitz monopolistic competition, and iceberg trade costs), the literature derives the profit function, whose coefficients are estimated using maximum likelihood procedures. Recent studies are as follows: Head, Ries, and Swenson (1999) for Japanese MNEs in the US; Belderbos and Carree (2002) for Japanese MNEs in China; Head and Mayer (2004) for Japanese MNEs in Europe; Disdier and Mayer (2004) for French MNEs in Europe; Castellani and Zanfei (2004) for large MNEs worldwide; Mayer, Mejean, and Nefussi (2007) for French MNEs worldwide; Crozet, Mayer, and Mucchielli (2004) for MNEs in France; and Basile, Castellani, and Zanfei (2008) for MNEs in Europe. At present, three main topics can be found in this literature. The first introduces various location elements as independent variables.
The above-mentioned new economic geography model usually yields a profit function that depends on market size, productive factor prices, the price of intermediate goods, and trade costs. As a proxy for the price of intermediate goods, a measure of agglomeration is often used, particularly the number of manufacturing firms. Some studies employ more disaggregated counts of manufacturing firms, such as the number of manufacturing firms with the same nationality as the firms choosing the location (e.g., Head et al., 1999; Crozet et al., 2004) or the number of firms belonging to the same firm group (e.g., Belderbos and Carree, 2002). As part of trade costs, some investment climate measures have been examined: free trade zones in the US (Head et al., 1999), special economic zones and open coastal cities in China (Belderbos and Carree, 2002), and Objective 1 structural funds and cohesion funds in Europe (Basile et al., 2008). Second, the validity of proxy variables for location elements is further examined. Head and Mayer (2004) examine the validity of market potential for location choice. They propose the use of two measures: the Harris market potential index (Harris, 1954) and the Krugman-type index used in Redding and Venables (2004). The Harris-type index is simply the sum of distance-weighted real GDP. The Krugman-type market potential index is directly derived from the new economic geography model, as it takes into account the extent of competition (i.e., the price index), and is constructed using the estimated coefficients of importing-country dummy variables in the well-known gravity equation, as in Redding and Venables (2004). They find that "theory does not pay", in the sense that the Harris market potential outperforms Krugman's market potential in both the magnitude of its coefficient and the fit of the estimated model. The third topic explores the substitution of locations by examining inclusive values in the nested-logit model.
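As a rough numerical illustration of the Harris-type index described above (the sum of distance-weighted real GDP), here is a minimal sketch; the regions, coordinates, GDP figures, and the `self_distance` internal-distance convention are all invented for the example, not taken from the literature:

```python
import math

# Hypothetical regions: (name, real GDP, x, y); all figures invented.
regions = [
    ("A", 500.0, 0.0, 0.0),
    ("B", 200.0, 3.0, 4.0),   # distance 5 from A
    ("C", 100.0, 6.0, 8.0),   # distance 10 from A
]

def harris_market_potential(i, regions, self_distance=1.0):
    """Harris (1954) market potential of region i: the sum of
    distance-weighted real GDP over all regions, using an assumed
    internal distance for the region's own GDP term."""
    xi, yi = regions[i][2], regions[i][3]
    total = 0.0
    for j, (_, gdp, xj, yj) in enumerate(regions):
        d = self_distance if j == i else math.hypot(xi - xj, yi - yj)
        total += gdp / d
    return total

print(round(harris_market_potential(0, regions), 1))  # → 550.0
```

The Krugman-type index replaces the raw distance weights with terms estimated from a gravity equation, which is why it is harder to construct; the sketch above shows only the simpler Harris form.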
For example, using firm-level data on French investments both in France and abroad over the 1992-2002 period, Mayer et al. (2007) investigate the determinants of location choice and assess empirically whether the domestic economy has been losing attractiveness over the recent period. The estimated coefficient for the inclusive value is strongly significant and near unity, indicating that the national economy is not different from the rest of the world in terms of substitution patterns. Similarly, Disdier and Mayer (2004) investigate whether French MNEs consider Western and Eastern Europe as two distinct groups of potential host countries by examining the coefficient for the inclusive value in nested-logit estimation. They confirm the relevance of an East-West structure in the country location decision and furthermore show that this relevance decreases over time. The purpose of this paper is to investigate the location choice of Japanese MNEs in Thailand, Cambodia, Laos, Myanmar, and Vietnam; it is closely related to the third topic mentioned above. By examining region-level location choice with the nested-logit model, I investigate the relative importance of not only country characteristics but also region characteristics. Such an investigation is particularly valuable for location choice in these five countries: industrialization remains immature in those countries that have not yet succeeded in attracting enough MNEs, and as a result, crucial regional variation for MNEs is not yet expected within such nations, meaning that country characteristics are still relatively important for attracting MNEs. To illustrate, in the case of Cambodia and Laos, one crucial element for Japanese MNEs would be that least developed country (LDC) preferential tariff schemes are available for exports from Cambodia and Laos.
On the other hand, in Thailand and Vietnam, which have accepted relatively large numbers of MNEs and have thus seen regional inequality rise, regional characteristics such as the existence of agglomeration would become important elements in location choice. Our sample countries therefore seem to offer rich variation for analyzing the relative importance of country characteristics versus region characteristics. Our empirical strategy has a further advantage. As in the third topic of the location choice literature, the nested-logit model enables us to examine substitution patterns between country-based and region-based location decisions by MNEs in the countries concerned. For example, it is possible to investigate empirically whether Japanese multinational firms consider Thailand/Vietnam and the other three countries as two distinct groups of potential host countries, by examining the inclusive value parameters in nested-logit estimation. In particular, all our sample countries experienced dramatic changes in, for example, economic growth and trade cost reduction during the sample period; thus, we expect to uncover dramatic dynamics in such substitution patterns. Our rigorous analysis of the relative importance of country characteristics versus region characteristics is invaluable from the viewpoint of policy implications. First, while the former should be improved mainly by the central government of each country, there is sometimes room for even local governments or smaller institutions, such as private agencies, to improve the latter. Consequently, it becomes important for these smaller institutions to know just how crucial the improvement of region characteristics is for attracting foreign companies. Second, as economies grow, country characteristics become similar across countries. For example, the LDC preferential tariff schemes are available only while a country remains less developed.
Therefore, it is important, particularly for the least developed countries, to know what kinds of regional characteristics become important as the economy grows; in other words, after their country characteristics become similar to those of the more developed countries. I also incorporate one important characteristic of MNEs, namely productivity. The well-known Helpman-Melitz-Yeaple model indicates that only firms with higher productivity can afford overseas entry (Helpman et al., 2004). Beyond this argument, there may be differences in MNEs' productivity among our sample countries and regions. Such differences matter from the viewpoint of "spillover effects" from MNEs, which are among the most important benefits host countries expect from accepting their entry. Spillover effects arise when the presence of inward foreign direct investment (FDI) raises domestic firms' productivity through various channels such as imitation. Such positive effects might be larger in areas with more productive MNEs. Therefore, it becomes important for host countries to know how productive the firms likely to invest in them are. The rest of this paper is organized as follows. Section 2 takes a brief look at the worldwide distribution of Japanese overseas affiliates. Section 3 provides an empirical model to examine their location choice, and lastly, we discuss future work on estimating our model.
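The nested-logit machinery invoked throughout the abstract, with inclusive values governing substitution between nests (countries) versus within nests (regions), can be sketched as follows. The country names and utility values are hypothetical, and a single dissimilarity parameter `lam` across nests is assumed for simplicity:

```python
import math

def nested_logit_probs(utilities, lam):
    """Two-level nested-logit choice probabilities.

    `utilities` maps a nest (e.g. a country) to the deterministic
    utilities of its alternatives (e.g. regions); `lam` is the inclusive
    value (dissimilarity) parameter, assumed identical across nests."""
    # Inclusive value of each nest: log-sum of scaled exponentiated utilities.
    iv = {n: math.log(sum(math.exp(v / lam) for v in us))
          for n, us in utilities.items()}
    denom = sum(math.exp(lam * v) for v in iv.values())
    probs = {}
    for n, us in utilities.items():
        p_nest = math.exp(lam * iv[n]) / denom        # choice among nests
        within = sum(math.exp(v / lam) for v in us)   # choice within a nest
        probs[n] = [p_nest * math.exp(v / lam) / within for v in us]
    return probs

# Hypothetical utilities for regions nested within two countries.
p = nested_logit_probs({"Thailand": [1.0, 0.5], "Vietnam": [0.8]}, lam=0.9)
print({k: [round(x, 3) for x in v] for k, v in p.items()})
```

When `lam = 1` the model collapses to a plain multinomial logit, so countries are no more distinct than regions; estimates of `lam` well below one indicate that regions within a country are closer substitutes than regions across countries, which is the kind of substitution pattern the inclusive value parameter is used to test.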


Instability of the orthogonal swept attachment-line boundary layer has received attention from local [1,2] and global [3-5] analysis methods over several decades, owing to the significance of this model for transition to turbulence on the surface of swept wings. However, substantially less attention has been paid to the problem of laminar flow instability in the non-orthogonal swept attachment-line boundary layer; only a local analysis framework has been employed to date [6]. The present contribution addresses this issue from a linear global (BiGlobal) instability analysis point of view in the incompressible regime. Direct numerical simulations have also been performed in order to verify the analysis results and unravel the limits of validity of the Dorrepaal basic flow model [7] analyzed. Cross-validated results document the effect of the angle of attack (AoA) on the critical conditions identified by Hall et al. [1] and show linear destabilization of the flow with decreasing AoA, up to a limit at which the assumptions of the Dorrepaal model become questionable. Finally, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis et al. [4] is presented for the non-orthogonal flow. In this model, the symmetries of the three-dimensional disturbances are broken by the non-orthogonal flow conditions. Temporal and spatial one-dimensional linear eigenvalue codes were developed, obtaining results consistent with BiGlobal stability analysis and DNS. Beyond its computational advantages, the ODE-based model allows us to understand the functional dependence of the three-dimensional disturbances in the non-orthogonal case as well as their connections with the disturbances of the orthogonal stability problem.


The integration of powerful partial evaluation methods into practical compilers for logic programs is still far from reality. This is related both to (1) efficiency issues and (2) the complications of dealing with practical programs. Regarding efficiency, the most successful unfolding rules used nowadays are based on structural orders applied over (covering) ancestors, i.e., a subsequence of the atoms selected during a derivation. Unfortunately, maintaining the structure of the ancestor relation during unfolding introduces significant overhead. We propose an efficient, practical local unfolding rule based on the notion of covering ancestors which can be used in combination with any structural order and allows a stack-based implementation without losing any opportunities for specialization. Regarding the second issue, we propose assertion-based techniques which allow our approach to deal with real programs that include (Prolog) built-ins and external predicates in a very extensible manner. Finally, we report on our implementation of these techniques in a practical partial evaluator, embedded in a state-of-the-art compiler which uses global analysis extensively (the Ciao compiler and, specifically, its preprocessor CiaoPP). The performance analysis of the resulting system shows that our techniques, in addition to dealing with practical programs, are also significantly more efficient in time and somewhat more efficient in memory than traditional tree-based implementations.


This paper presents a study of the effectiveness of global analysis in the parallelization of logic programs using strict independence. A number of well-known approximation domains are selected and their usefulness for the application in hand is explained. Also, methods for using the information provided by such domains to improve parallelization are proposed. Local and global analyses are built using these domains, and such analyses are embedded in a complete parallelizing compiler. Then, the performance of the domains (and the system in general) is assessed for this application through a number of experiments. We argue that the results offer significant insight into the characteristics of these domains, the demands of the application, and the tradeoffs involved.


The studies carried out so far to determine the measurement quality of geodetic instruments have focused primarily on angle and distance measurements. In recent years, however, GNSS (Global Navigation Satellite System) equipment has come into widespread use in geomatic applications without a methodology having been established for obtaining the calibration correction and its uncertainty for such equipment. The purpose of this Thesis is to establish the requirements that a network must meet to be considered a Standard Network with metrological traceability, as well as the methodology for the verification and calibration of GNSS instruments in standard networks. To this end, a technical calibration procedure for GNSS equipment has been designed and developed in which the contributions to the measurement uncertainty are defined. The procedure, which has been applied in different networks to different equipment, has allowed the expanded uncertainty of that equipment to be obtained following the recommendations of the Guide to the Expression of Uncertainty in Measurement of the Joint Committee for Guides in Metrology. In addition, the three-dimensional coordinates of the bases that constitute the networks considered in the investigation have been determined by satellite observation techniques, and simulations have been developed for various values of the experimental standard deviations of the fixed points used in the least-squares adjustment of the vectors or baselines. The results have shown the importance of knowing the experimental standard deviations when calculating the uncertainties of the three-dimensional coordinates of the bases.

Based on earlier studies and observations of great technical quality carried out in these networks, an exhaustive analysis has been performed that has made it possible to determine the conditions a standard network must satisfy. In addition, technical calibration procedures have been designed that allow the expanded measurement uncertainty to be calculated for geodetic instruments providing angles and distances obtained by electromagnetic methods, since these instruments are the ones that disseminate metrological traceability to the standard networks used for the verification and calibration of GNSS equipment. In this way, it has been possible to determine local calibration corrections for high-accuracy GNSS equipment in the standard networks. In this Thesis, the uncertainty of the calibration correction has been obtained using two different methodologies: the first applies the law of propagation of uncertainty, while the second applies the propagation of distributions using the Monte Carlo method. The analysis of the results confirms the validity of both methodologies for determining the calibration uncertainty of GNSS equipment.
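The two uncertainty methodologies compared in the thesis (the GUM law of propagation versus Monte Carlo propagation of distributions) can be illustrated on a toy derived quantity; the baseline geometry and uncertainty values below are invented for the sketch and are not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical example: uncertainty of a 2-D baseline length d = sqrt(dx^2 + dy^2)
# from coordinate differences measured with known standard uncertainties.
dx, dy = 1200.0, 500.0      # measured coordinate differences (m), assumed values
u_dx, u_dy = 0.005, 0.005   # standard uncertainties (m), assumed values

# (a) Law of propagation of uncertainty (first-order Taylor, GUM approach).
d = np.hypot(dx, dy)
u_lpu = np.sqrt((dx / d * u_dx) ** 2 + (dy / d * u_dy) ** 2)

# (b) Propagation of distributions via Monte Carlo (GUM Supplement 1 approach):
# sample the inputs, push each draw through the model, take the sample spread.
n = 200_000
samples = np.hypot(rng.normal(dx, u_dx, n), rng.normal(dy, u_dy, n))
u_mc = samples.std(ddof=1)

print(f"LPU: d = {d:.3f} m, u = {u_lpu * 1000:.2f} mm")
print(f"MC:  d = {samples.mean():.3f} m, u = {u_mc * 1000:.2f} mm")
```

For a nearly linear model such as this one the two methods agree closely, which mirrors the thesis' conclusion that both methodologies are valid; Monte Carlo becomes the safer choice when the model is strongly non-linear or the input distributions are non-Gaussian.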


Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the analysis results still need to be investigated further. This thesis tackles this need from four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses that keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. This thesis presents two extensions to that approach: the first considers array accesses in addition to object fields, while the second handles cases in which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible.

(2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without re-analyzing fragments of code that are not affected by the changes. During software development programs are permanently modified, yet most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change to the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm that can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change.

(3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. This thesis instead focuses on developing a formal framework for verifying the resource guarantees obtained by the analyzers. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code: COSTA derives upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of this work is to show that the proposed cooperation between the tools can automatically produce verified resource guarantees.

(4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems: in this model, objects are the concurrency units and communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the different distributed components separately. This thesis proposes a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, keeps the cost of the different distributed components separate.


In this work, the dimensional synthesis of a spherical Parallel Manipulator (PM) with a -1S kinematic chain is presented. The goal of the synthesis is to find the set of parameters that defines the PM with the best performance in terms of workspace capabilities, dexterity and isotropy. The PM is parametrized in terms of a reference element, and a non-directed search of these parameters is carried out. First, the inverse kinematics and instantaneous kinematics of the mechanism are presented; the latter is found using the screw theory formulation. An algorithm that explores a bounded set of parameters and determines the corresponding values of global indexes is presented. The concepts of a novel global performance index and a compound index are introduced. Simulation results are shown and discussed. The best PMs found in terms of each evaluated performance index are analyzed locally in terms of their workspace and local dexterity. The relationship between the performance of the PM and its parameters is discussed, and a prototype with the best performance in terms of the compound index is presented and analyzed.


A unified solution framework is presented for one-, two- or three-dimensional complex non-symmetric eigenvalue problems, respectively governing linear modal instability of incompressible fluid flows in rectangular domains having two, one or no homogeneous spatial directions. The solution algorithm is based on subspace iteration in which the spatial discretization matrix is formed, stored and inverted serially. Results delivered by spectral collocation based on the Chebyshev-Gauss-Lobatto (CGL) points and a suite of high-order finite-difference methods, comprising the Dispersion-Relation-Preserving (DRP) and Padé finite-difference schemes previously employed for this type of work, as well as the Summation-by-parts (SBP) scheme and the new high-order finite-difference scheme of order q (FD-q), have been compared from the point of view of accuracy and efficiency in standard validation cases of temporal local and BiGlobal linear instability. The FD-q method has been found to significantly outperform all other finite-difference schemes in solving classic linear local, BiGlobal, and TriGlobal eigenvalue problems, as regards both memory and CPU time requirements. Results shown in the present study disprove the paradigm that spectral methods are superior to finite-difference methods in terms of computational cost at equal accuracy, the FD-q spatial discretization delivering a speedup of O(10^4). Consequently, three-dimensional (TriGlobal) eigenvalue problems may be solved accurately on typical desktop computers with modest computational effort.


Solar radiation is the most important source of renewable energy on the planet; it is important to solar engineers, designers and architects, and it is also fundamental for efficiently determining irrigation water needs and potential crop yields, among other uses. Complete and accurate solar radiation data for a specific region are indispensable. For locations where measured values are not available, several models have been developed to estimate solar radiation. The objective of this paper was to calibrate, validate and compare five representative models for predicting global solar radiation, adjusting the empirical coefficients to increase local applicability, and to develop a linear model. All models were based on easily available meteorological variables, without sunshine hours as input, and were used to estimate the daily solar radiation at Cañada de Luque (Córdoba, Argentina). For validation, measured and estimated solar radiation data were analyzed using several statistical coefficients. The results showed that all the analyzed models were robust and accurate (R2 values between 0.87 and 0.89, RMSE values between 2.05 and 2.14), so global radiation can be estimated properly from easily available meteorological variables when only temperature data are available. The Hargreaves-Samani, Allen and Bristow-Campbell models could be used with typical coefficient values to estimate solar radiation, while the Samani and Almorox models should be applied with calibrated coefficients. Although the new linear model presented the smallest R2 value (R2 = 0.87), it can be considered useful for its easy application. The daily global solar radiation values produced by these models can be used to estimate missing daily values, when only temperature data are available, and in hydrologic or agricultural applications.
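As an illustration of the simplest temperature-based model of the kind compared here, the Hargreaves-Samani form estimates Rs = k_rs * sqrt(Tmax - Tmin) * Ra. The temperature and extraterrestrial radiation values below are hypothetical, and the default coefficient follows the commonly cited FAO-56 guidance rather than the paper's local calibration:

```python
import math

def hargreaves_samani(t_max, t_min, ra, k_rs=0.16):
    """Estimate daily global solar radiation (same units as `ra`).

    Hargreaves-Samani: Rs = k_rs * sqrt(Tmax - Tmin) * Ra, where Ra is
    extraterrestrial radiation and k_rs is an empirical coefficient
    (~0.16 for interior sites, ~0.19 for coastal sites per FAO-56;
    site calibration is recommended, as the paper argues)."""
    return k_rs * math.sqrt(t_max - t_min) * ra

# Hypothetical daily values (not taken from the paper), Ra in MJ m-2 day-1.
rs = hargreaves_samani(t_max=28.0, t_min=14.0, ra=35.0)
print(round(rs, 2))  # → 20.95
```

The paper's finding that Hargreaves-Samani works with typical coefficient values, while other models need calibration, amounts to saying that `k_rs` can often be left at its default, whereas the coefficients of models such as Samani and Almorox must be fitted to local data.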


Accuracy estimation and adaptive refinement are nowadays among the main research topics in finite element computations [6,7,8,9,11]. Their extension to Boundary Elements has been tried as a means to better understand the method's capabilities as well as to improve its efficiency and its obvious advantages. The possibility of implementing adaptive techniques was shown [1,2] for h-convergence and p-convergence, respectively. Some later works [3,4,5,10] have shown the promising results that can be expected from those techniques. The main difficulty is associated with the reasonable establishment of "estimation" and "indication" factors related to the global and local errors in each refinement. Although some global measures have been used, it is clear that the reduction in dimension intrinsic to boundary elements (3D→2D; 2D→1D) could allow a direct comparison among residuals using the graphic possibilities of modern computers, allowing a point-to-point comparison in place of the classical global approaches. Nevertheless, an indicator generalizing the well-known Peano indicator has been produced.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The linear instability of the three-dimensional boundary layer over the HIFiRE-5 flight test geometry, i.e. a rounded-tip 2:1 elliptic cone at Mach 7, has been analyzed through spatial BiGlobal analysis, in an effort to understand transition and accurately predict local heat loads on next-generation flight vehicles. The results at an intermediate axial section of the cone, Re_x = 8×10^5, show three different families of spatially amplified linear global modes: the attachment-line and crossflow modes known from earlier analyses, and a new global mode, peaking in the vicinity of the minor axis of the cone, termed the "centerline mode". We discover that a sequence of symmetric and antisymmetric centerline modes exists and, for the basic flow at hand, is maximally amplified around F* = 130 kHz. The wavenumbers and spatial distributions of the amplitude functions of the centerline modes are documented.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Examples of global solutions of the shell equations are presented, such as those based on the well-known Levy series expansion. Also discussed are some natural extensions of the Levy method, as well as the inherent limitations of these methods concerning the shell model assumptions, boundary conditions and geometric regularity. Finally, some additional open design questions are noted, mainly related to the simultaneous use in analysis of these global techniques and local methods (such as finite elements), to finding the optimal shell shape, and to determining the reinforcement layout.
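As a reminder of the technique (standard notation, assumed here rather than taken from the paper), the Levy expansion for a domain with two opposite simply supported edges at x = 0 and x = a separates the solution into one-dimensional problems:

```latex
% Levy-type series: each term satisfies the supported-edge
% conditions at x = 0 and x = a identically
w(x, y) = \sum_{m=1}^{\infty} Y_m(y)\,\sin\frac{m\pi x}{a}
```

Substituting this series into the governing equations reduces the partial differential problem to ordinary differential equations for each Y_m(y), which explains both the method's efficiency and its restriction to geometrically regular domains with compatible boundary conditions, the very limitations the abstract notes.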

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In this paper, we study a system of partial differential equations describing the evolution of a population under chemotactic effects with non-local reaction terms. We consider an external application of chemoattractant in the system and study the cases of one and two populations in competition. By introducing global competitive/cooperative factors in terms of the total mass of the populations, we obtain, for a range of parameters, that any solution with positive and bounded initial data converges to a spatially homogeneous state with positive components. The proofs rely on the maximum principle for spatially homogeneous sub- and super-solutions.
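A generic one-population model of this type (the notation is assumed for illustration; the paper's exact system may differ) couples a chemotaxis equation with a non-local logistic term and an external chemoattractant source f(x):

```latex
\begin{aligned}
u_t &= \Delta u - \chi \nabla \cdot (u \nabla v)
       + u\Bigl(a - b \int_\Omega u \, dx\Bigr),\\
v_t &= \Delta v - v + u + f(x),
\end{aligned}
```

Because the reaction term depends on the total mass rather than on pointwise density, spatially homogeneous states satisfy an ordinary differential equation in that mass, which is what makes comparison with homogeneous sub- and super-solutions effective.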

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Small changes in agricultural practices have a large potential for reducing greenhouse gas emissions. However, the implementation of such practices at the local level is often limited by a range of barriers. Understanding the barriers is essential for defining effective measures, the actual mitigation potential of the measures, and the policy needs to ensure implementation. Here we evaluate behavioural, cultural, and policy barriers to the implementation of mitigation practices at the local level that imply small changes for farmers. The choice of potential mitigation practices relevant to the case study is based on a literature review of previous empirical studies. Two methods involving stakeholders (experts and farmers) are undertaken for the prioritization of these potential practices: (a) multi-criteria analysis (MCA) of the choices of an expert panel and (b) analysis of barriers to implementation based on a survey of farmers. The MCA considers two future climate scenarios: the current climate and a drier and warmer climate scenario. Results suggest that all the selected potential practices are suitable for mitigation under multiple criteria in both scenarios. Nevertheless, if all the barriers to implementation had the same influence, the preferred mitigation practices in the case study would be changes in fertilization management and use of cover crops. The identification of barriers to the implementation of the practices is based on an econometric analysis of surveys given to farmers. Results show that farmers' environmental concerns, financial incentives and access to technical advice are the main factors defining their barriers to implementation. These results may contribute to developing effective mitigation policy to be included in the 2020 review of the European Union Common Agricultural Policy.
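The weighted-sum form of multi-criteria analysis used in step (a) can be sketched as follows; the practice names, criteria, scores and weights below are purely illustrative placeholders, not the study's data.

```python
def mca_rank(practices, weights):
    """Rank mitigation practices by a weighted sum of expert scores.
    practices: {name: {criterion: score}}; weights: {criterion: weight}."""
    total_w = sum(weights.values())
    scores = {name: sum(weights[c] * s for c, s in crit.items()) / total_w
              for name, crit in practices.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical expert scores (1-5) on three criteria
practices = {
    "fertilization management": {"mitigation": 4, "cost": 3, "ease": 4},
    "cover crops":              {"mitigation": 5, "cost": 2, "ease": 3},
    "no-till":                  {"mitigation": 3, "cost": 4, "ease": 2},
}
weights = {"mitigation": 0.5, "cost": 0.3, "ease": 0.2}
ranking = mca_rank(practices, weights)
```

Running the MCA under the two climate scenarios, as the study does, would amount to re-scoring the same practices with scenario-specific expert judgments and comparing the resulting rankings.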

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Rural/local development requires people capable of managing their own activities while integrating them into a global vision of development for the territory they inhabit. The school is their first training ground and is therefore the key element in people's future development. In recent years, however, the school has lost ground as a builder of meaning; it has witnessed the systematic dismantling of its capacity to generate symbolic understanding, producing a persistent anomie throughout. Reality presents itself to the school in a fragmented way, and the school is unable to reconstruct a logical and symbolic order that contains and retains its students, nor is it able to structure change, a fundamental factor in rural/local development. A joint action proposal applicable to the Provincia de Buenos Aires (Argentina) is presented, built around an integrating instrument: the family vegetable garden. It aims to make a qualitative leap over the diagnosed difficulties and to contribute elements for rebuilding a shared representation of reality, based on an education that is at once emancipatory and knowledge-producing, preparing individuals for an active leading role.