882 results for formation of large scale structure


Relevance: 100.00%

Abstract:

This paper analyses local geographical contexts targeted by transnational large-scale land acquisitions (>200 ha per deal) in order to understand how emerging patterns of socio-ecological characteristics can be related to processes of large-scale foreign investment in land. Using a sample of 139 land deals georeferenced with high spatial accuracy, we first analyse their target contexts in terms of land cover, population density, accessibility, and indicators of agricultural potential. Three distinct patterns emerge from the analysis: densely populated and easily accessible croplands (35% of land deals); remote forestlands with lower population densities (34% of land deals); and moderately populated and moderately accessible shrub- or grasslands (26% of land deals). These patterns are consistent with processes described in the relevant case study literature, and they each involve distinct types of stakeholders and associated competition over land. We then repeat the often-cited analysis that postulates a link between land investments and target countries with abundant so-called “idle” or “marginal” lands, as measured by yield gap and available suitable but uncultivated land; our methods differ from the earlier approach, however, in that we examine the local context (10-km radius) rather than countries as a whole. The results show that the earlier findings are disputable in terms of concepts, methods, and content. Further, we reflect on methodologies for exploring linkages between socio-ecological patterns and land investment processes. Improving and expanding large datasets of georeferenced land deals is an important next step; at the same time, careful choice of the spatial scale of analysis is crucial for ensuring compatibility between the spatial accuracy of land deal locations and the resolution of available geospatial data layers. Finally, we argue that new approaches and methods must be developed to empirically link socio-ecological patterns in target contexts to key determinants of land investment processes. This would help to improve the validity and reach of our findings as an input for evidence-informed policy debates.
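A minimal sketch of the kind of local-context summary the paper describes, assuming a simple 1-km raster and hypothetical deal locations; it averages a synthetic population-density layer within a 10-km radius of each georeferenced deal:

```python
# Sketch of a local-context analysis around georeferenced land deals:
# for each deal, summarize a raster layer (here a synthetic population-
# density grid) within a 10-km radius. Grid and deal locations are
# hypothetical stand-ins for real geospatial data layers.
import numpy as np

CELL_KM = 1.0        # assumed raster resolution
RADIUS_KM = 10.0     # local-context radius used in the paper

rng = np.random.default_rng(0)
pop_density = rng.gamma(shape=2.0, scale=50.0, size=(500, 500))  # people/km^2

def local_mean(layer, row, col, radius_km=RADIUS_KM, cell_km=CELL_KM):
    """Mean of `layer` over all cells within `radius_km` of (row, col)."""
    r = int(radius_km / cell_km)
    rows, cols = np.ogrid[:layer.shape[0], :layer.shape[1]]
    mask = (rows - row) ** 2 + (cols - col) ** 2 <= r ** 2
    return layer[mask].mean()

# Hypothetical deal locations as (row, col) grid indices.
deals = [(120, 80), (300, 310), (450, 50)]
for i, (r, c) in enumerate(deals, 1):
    print(f"deal {i}: mean population density within 10 km "
          f"= {local_mean(pop_density, r, c):.1f} people/km^2")
```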

Relevance: 100.00%

Abstract:

Increasing commercial pressures on land are provoking fundamental and far-reaching changes in the relationships between people and land. Much knowledge on land-oriented investment projects currently comes from the media. Although this provides a good starting point, lack of transparency and rapidly changing contexts mean that such information is often unreliable. The International Land Coalition, in partnership with Oxfam Novib, the Centre de coopération internationale en recherche agronomique pour le développement (CIRAD), the University of Pretoria, the Centre for Development and Environment of the University of Bern (CDE), and GIZ, has started to compile an inventory of land-related investments. The project aims to better understand the extent, trends, and impacts of land-related investments by supporting an ongoing and systematic stocktaking of the various investment projects currently taking place worldwide. It involves a large number of organizations and individuals who work in areas where land transactions are being made and are able to provide details of such investments. The project monitors land transactions in rural areas that imply a transformation of land use rights from communities and smallholders to commercial use and that are made by both domestic and foreign investors (private actors, governments, and government-backed private investors). The focus is on investments for food or agrofuel production, timber extraction, carbon trading, mineral extraction, conservation, and tourism. A novel way of using ICT to document land acquisitions in a spatially explicit way, based on an approach called “crowdsourcing”, is being developed. This approach will allow actors to share information and knowledge directly and at any time on a public platform, where it will be scrutinized for reliability and cross-checked against other sources. To date, over 1,200 deals have been recorded across 96 countries. Details of these transactions have been classified in a matrix and distributed to over 350 contacts worldwide for verification. The verified information has been geo-referenced and represented in two global maps. This is an open database enabling a continued monitoring exercise and the improvement of data accuracy; more information will be released over time. The opportunity lies in overcoming the constraints of incomplete information by proposing a new way of collecting, enhancing, and sharing information and knowledge in a more democratic and transparent manner. The intention is to develop an interactive knowledge platform where any interested person can share and access information on land deals, their links to the stakeholders involved, and their embedding in a geographical context. By making use of new ICT technologies that are increasingly within the reach of local stakeholders, as well as open-access and web-based spatial information systems, it will become possible to create a dynamic database containing spatially explicit data. Data fed in by a large number of stakeholders, increasingly also by means of new mobile ICT technologies, will open up new opportunities to analyse, monitor, and assess the highly dynamic trends of land acquisition and rural transformation.
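A hypothetical sketch of the kind of record such a crowdsourcing platform might store; the field names and the two-source verification rule are assumptions for illustration, not the project's actual schema:

```python
# Hypothetical land-deal record for a crowdsourced inventory: each report
# carries its source, and a deal only counts as verified once independent
# sources agree. All field names and the threshold are assumptions.
from dataclasses import dataclass, field

@dataclass
class LandDeal:
    country: str
    area_ha: float
    intended_use: str                 # food, agrofuel, timber, carbon, ...
    lat: float | None = None          # geo-reference, if known
    lon: float | None = None
    sources: list[str] = field(default_factory=list)

    def add_report(self, source: str) -> None:
        if source not in self.sources:
            self.sources.append(source)

    @property
    def verified(self) -> bool:
        # Simple stand-in rule: at least two independent sources agree.
        return len(self.sources) >= 2

deal = LandDeal("Country X", 25_000, "agrofuel")
deal.add_report("media article")
deal.add_report("field partner")
print(deal.verified)  # True once two independent reports exist
```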

Relevance: 100.00%

Abstract:

This chapter aims to bridge the gap between case study research, which typically provides qualitative, process-based insights, and national or global inventories, which typically offer spatially explicit, quantitative analyses of broader patterns, and thus to present adequate evidence for policymaking regarding large-scale land acquisitions. To this end, the chapter links spatial patterns of land acquisitions to the underlying implementation processes of land allocation. Methodologically linking the described patterns and processes proved difficult, but we have identified indicators that could be added to inventories and monitoring systems to make such linkage possible. Combining complementary approaches in this way may help to determine where policy space exists for more sustainable governance of land acquisitions, both geographically and with regard to processes of agrarian transition. Our spatial analysis revealed two general patterns: (i) relatively large forestry-related acquisitions that target forested landscapes and often interfere with semi-subsistence farming systems; and (ii) smaller agriculture-related acquisitions that often target existing cropland and also interfere with semi-subsistence systems. Furthermore, our meta-analysis of land acquisition implementation processes shows that authoritarian, top-down processes dominate. Initially, the demands of powerful regional and domestic investors tend to override socio-ecological variables, local actors’ interests, and land governance mechanisms. As available land grows scarce, however, and local actors gain experience in dealing with land acquisitions, land investments appear to begin to fail or to give way to more inclusive, bottom-up investment models.

Relevance: 100.00%

Abstract:

This paper examines how the geospatial accuracy of samples and sample size influence the conclusions drawn from geospatial analyses. It does so using the example of a study investigating the global phenomenon of large-scale land acquisitions and the socio-ecological characteristics of the areas they target. First, we analysed land deal datasets of varying geospatial accuracy and varying sizes and compared the results in terms of land cover, population density, and two indicators of agricultural potential: yield gap and availability of uncultivated land that is suitable for rainfed agriculture. We found that an increase in geospatial accuracy led to a substantially greater change in conclusions about the land cover types targeted than an increase in sample size did, suggesting that using a sample of higher geospatial accuracy does more to improve results than using a larger sample. The same finding emerged for population density, yield gap, and the availability of uncultivated land suitable for rainfed agriculture. Furthermore, the statistical median proved to be more consistent than the mean when comparing the descriptive statistics of datasets of different geospatial accuracy. Second, we analysed the effects of geospatial accuracy on estimates of the potential for advancing agricultural development in target contexts. Our results show that the target contexts of the majority of land deals in our sample whose geolocation is known with high accuracy contain less suitable but uncultivated land than regional- and national-scale averages suggest. Consequently, the more target contexts vary within a country, the more detailed the spatial scale of analysis has to be in order to draw meaningful conclusions about the phenomena under investigation. We therefore advise against using national-scale statistics to approximate or characterize phenomena that have a local-scale impact, particularly if key indicators vary widely within a country.
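An illustrative simulation, not based on the paper's data, of why the median can be more stable than the mean under geolocation error: mislocated deals effectively sample background values from a skewed indicator distribution, which drags the mean much more than the median.

```python
# Toy model of geolocation error: a share of the sample draws values from a
# skewed "background" distribution instead of the true deal sites. Compare
# how the mean and the median of the sample shift. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(42)
true_values = rng.lognormal(mean=3.0, sigma=0.4, size=139)    # indicator at true sites
background = rng.lognormal(mean=4.0, sigma=1.0, size=10_000)  # values elsewhere

for mislocated_share in (0.0, 0.2, 0.5):
    n_bad = int(mislocated_share * true_values.size)
    sample = true_values.copy()
    sample[:n_bad] = rng.choice(background, size=n_bad)  # location error
    print(f"mislocated={mislocated_share:.0%}  "
          f"mean={sample.mean():7.1f}  median={np.median(sample):7.1f}")
```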

Relevance: 100.00%

Abstract:

Strict technical quality assurance procedures are essential for PV plant bankability. For large-scale PV plants, this is typically accomplished in three consecutive phases: an energy yield forecast, which is performed at the beginning of the project, typically by means of a simulation exercise with dedicated software; a reception test campaign, which is performed at the end of commissioning and consists of a set of tests for determining the efficiency and reliability of the PV plant devices; and a performance analysis of the first years of operation, which consists of comparing the real energy production with that calculated from the recorded operating conditions, taking the maintenance records into account. Over the last six years, IES-UPM has carried out both indoor and on-site quality control campaigns for more than 60 PV plants, with an accumulated power of more than 300 MW, in close contact with Engineering, Procurement and Construction contractors and financial entities. This paper presents the lessons learned from this experience.
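As a minimal sketch of the third phase, one common way to compare real production with the production expected from recorded irradiance is the performance ratio (PR) defined in IEC 61724; the plant figures below are hypothetical.

```python
# Performance ratio (IEC 61724): measured energy divided by the energy a
# plant would deliver at nameplate efficiency for the recorded in-plane
# irradiation. Plant size and monitoring numbers are hypothetical.
P_STC_KW = 10_000.0   # nominal plant power at STC, kW
G_STC = 1.0           # reference irradiance, kW/m^2

def performance_ratio(e_real_kwh: float, h_poa_kwh_m2: float) -> float:
    """PR = measured energy / energy expected at nameplate efficiency."""
    e_expected = P_STC_KW * h_poa_kwh_m2 / G_STC
    return e_real_kwh / e_expected

# One month of (hypothetical) monitoring data:
energy_measured = 1_310_000.0   # kWh delivered to the grid
irradiation_poa = 165.0         # kWh/m^2 in the plane of array
print(f"PR = {performance_ratio(energy_measured, irradiation_poa):.2f}")
# Values around 0.75-0.85 are typical for healthy plants.
```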

Relevance: 100.00%

Abstract:

Large-scale circulation patterns (ENSO, NAO) have been shown to have a significant impact on seasonal weather, and therefore on crop yield, over many parts of the world (Garnett and Khandekar, 1992; Aasa et al., 2004; Rozas and Garcia-Gonzalez, 2012). In this study, we analyse the influence of large-scale circulation patterns and regional climate on the principal components of maize yield variability in the Iberian Peninsula (IP) using reanalysis datasets. Additionally, we investigate the modulation of these relationships by multidecadal patterns. The study is performed by analysing long time series of maize yield that depend only on climate, computed with the crop model CERES-maize (Jones and Kiniry, 1986) included in the Decision Support System for Agrotechnology Transfer (DSSAT v.4.5).
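A brief sketch, with synthetic data standing in for the crop-model output and the reanalysis-derived index, of the kind of analysis described: extract the leading principal component of multi-site maize-yield variability and correlate it with a circulation index.

```python
# Leading principal component of multi-site yield anomalies, correlated with
# a large-scale circulation index (a stand-in for the NAO). Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_sites = 50, 12
nao = rng.standard_normal(n_years)                  # stand-in NAO index
# Yields share a common index-driven signal plus site-level noise.
yields = 0.8 * nao[:, None] + rng.standard_normal((n_years, n_sites))

anom = yields - yields.mean(axis=0)                 # yield anomalies
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pc1 = u[:, 0] * s[0]                                # leading PC time series
var_explained = s[0] ** 2 / (s ** 2).sum()

r = np.corrcoef(pc1, nao)[0, 1]
print(f"PC1 explains {var_explained:.0%} of variance; corr(PC1, NAO) = {r:.2f}")
```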

Relevance: 100.00%

Abstract:

With the rise of cloud computing, demand for data-processing applications has grown, and achieving greater efficiency in data centres has therefore become important. The aim of this work is to obtain tools for analysing the viability and profitability of designing data centres specialized for data processing, with adapted architectures, cooling systems, and so on. Some data-processing applications benefit from software architectures, while others may be processed more efficiently with a hardware architecture. Because software with very good graph-processing results already exists, such as the XPregel system, this project develops a hardware architecture in VHDL, implementing Google's PageRank algorithm in a scalable way. This algorithm was chosen because it may be more efficient in a hardware architecture, owing to specific characteristics described below. PageRank ranks pages by their relevance on the web using graph theory: each web page is a vertex of a graph, and the links between pages are its edges. The project first reviews the state of the art. The implementation in XPregel, a graph-processing system, is assumed to be among the most efficient, so that implementation is studied. However, because XPregel processes graph algorithms in general, it does not take into account certain characteristics of the PageRank algorithm, and its implementation is therefore not optimal: in PageRank, storing all the data sent by a single vertex is an unnecessary waste of memory, since all the messages a vertex sends are identical to one another and equal to its PageRank. The VHDL design takes this characteristic of the algorithm into account, avoiding storing identical messages repeatedly. Implementing PageRank in VHDL was chosen because current operating-system architectures do not scale adequately, and the goal is to evaluate whether a different architecture yields better results. The design is built from scratch, using the automatically generated ROM IP core from Xilinx (VHDL development software). Four types of modules are planned so that processing can be done in parallel. The XPregel structure is simplified in order to exploit the aforementioned particularity of PageRank, which XPregel does not take full advantage of. The code is then written with a scalable structure, since the computation involves millions of web pages. Next, the code is synthesized and tested on an FPGA. The final step is an evaluation of the implementation and of possible improvements in power consumption.
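A software sketch (in Python rather than VHDL) of the PageRank property the design exploits: every message a vertex sends equals its rank divided by its out-degree, so a single value per vertex suffices and per-edge message storage can be avoided.

```python
# PageRank with one stored value per vertex: since every message a vertex
# sends is identical, the rank share is computed once and "sent" along each
# out-edge, with no per-edge message buffer.
import numpy as np

def pagerank(out_links: list[list[int]], d: float = 0.85, iters: int = 50):
    n = len(out_links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        incoming = np.zeros(n)
        for v, targets in enumerate(out_links):
            if targets:
                share = rank[v] / len(targets)  # one value per vertex...
                for t in targets:               # ...reused for every edge
                    incoming[t] += share
            else:                               # dangling page: spread evenly
                incoming += rank[v] / n
        rank = (1.0 - d) / n + d * incoming
    return rank

# Toy 4-page web graph as an adjacency list of out-links.
web = [[1, 2], [2], [0], [0, 2]]
print(pagerank(web).round(3))
```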

Relevance: 100.00%

Abstract:

The exhaustion, absolute absence, or simply the uncertainty about the size of fossil-fuel reserves, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that additionally comprehends public concerns about pollution and greenhouse-gas emissions. Due to its excellent environmental impact, the public acceptance of the new energy vector will depend on the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel.

This thesis investigates the numerical modelling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is undertaken. It is concluded that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out the strengths, deficiencies, and suitability of each. This investigation concludes that, given the existing limitations, the only viable strategy for combustion modelling is the use of an expression for the turbulent burning velocity, as a function of various parameters, to close a balance equation for the combustion progress variable; models of this kind are known as turbulent flame speed models. It also concludes that, depending on the geometry and the resolution restrictions of each particular problem, the use of different turbulence simulation methodologies, LES or RANS, is the most adequate solution.

Based on these findings, a combustion model is created within the turbulent flame speed framework that is able to overcome the deficiencies of the available models for problems requiring calculations at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. On the one hand, the laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, pressure, and dilution with steam on the laminar burning velocity; the formulation obtained is valid over a larger domain of temperature, pressure, and steam dilution than any previously available formulation. On the other hand, a number of turbulent burning velocity correlations are available in the literature; to select the most suitable one, they were compared with experimental results and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied.

Subsequently, the role of flame instabilities in the propagation of combustion fronts is assessed. Their significance proves to be important for lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents at nuclear power plants. Therefore, a model is created to estimate the effect of the instabilities, and concretely of the acoustic-parametric instability, on the flame propagation velocity. This includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to complete the model of the acoustic-parametric instability.

The following task was to apply the developed model to several problems of significance for industrial safety, analyse the results, and compare them with the corresponding experimental data. Concretely, simulations of explosions in tunnels and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the model is validated, confirming its suitability for these problems. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that occurred in reactor one, in contrast with other studies on the subject, which have centred on the amount of hydrogen generated during the accident. The research determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small quantity of hydrogen can cause tremendous damage; this is an indication of the importance of these types of investigations. The industrial branches that can benefit from the model developed in this thesis span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with particular impact on the transport sector and on nuclear safety, for both fission and fusion technologies.
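For reference, the turbulent flame speed closure named in the abstract balances a transport equation for the combustion progress variable; in Zimont-type models the source term is written with the turbulent burning velocity. The notation below is the standard one for this family of models, assumed rather than taken verbatim from the thesis.

```latex
% Progress-variable balance with a turbulent-flame-speed (Zimont-type) closure.
% c: combustion progress variable; rho_u: unburnt-mixture density;
% S_T: turbulent burning velocity; mu_t, Sc_t: turbulent viscosity and
% Schmidt number. Standard notation, not necessarily the thesis's own.
\begin{equation}
  \frac{\partial (\rho c)}{\partial t}
  + \nabla \cdot (\rho \mathbf{u}\, c)
  = \nabla \cdot \left( \frac{\mu_t}{\mathrm{Sc}_t}\, \nabla c \right)
  + \rho_u\, S_T\, \lvert \nabla c \rvert
\end{equation}
```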