882 results for large scale data gathering
Abstract:
Well-known data mining algorithms rely on inputs in the form of pairwise similarities between objects. For large datasets it is computationally impossible to perform all pairwise comparisons. We therefore propose a novel approach that uses approximate Principal Component Analysis to efficiently identify groups of similar objects. The effectiveness of the approach is demonstrated in the context of binary classification using the supervised normalized cut as a classifier. For large datasets from the UCI repository, the approach significantly improves run times with minimal loss in accuracy.
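A minimal sketch of the general idea described above, assuming a Python/scikit-learn setting that the abstract does not specify: project the data with an approximate (randomized) PCA, group objects in the reduced space, and compute pairwise similarities only within each group rather than over all pairs. The dataset, group count, and similarity measure are illustrative choices, not details from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 200))     # toy dataset: 10,000 objects, 200 features

# Approximate PCA via randomized SVD keeps the projection cheap for large n.
Z = PCA(n_components=10, svd_solver="randomized", random_state=0).fit_transform(X)

# Group objects in the reduced space; expensive pairwise similarities are then
# computed only inside each group instead of over all n*(n-1)/2 pairs.
labels = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(Z)

within_group_sims = {}
for g in np.unique(labels):
    members = np.where(labels == g)[0]
    within_group_sims[g] = cosine_similarity(X[members])  # small per-group matrix
```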
Abstract:
The link between high precipitation in Dronning Maud Land (DML), Antarctica, and the large-scale atmospheric circulation is investigated using ERA-Interim data for 1979–2009. High-precipitation events are analyzed at Halvfarryggen situated in the coastal region of DML and at Kohnen Station located in its interior. This study further includes a comprehensive comparison of high precipitation in ERA-Interim with precipitation data from the Antarctic Mesoscale Prediction System (AMPS) and snow accumulation measurements from automatic weather stations (AWSs), with the limitations of such a comparison being discussed. The ERA-Interim and AMPS precipitation data agree very well. However, the correspondence between high precipitation in ERA-Interim and high snow accumulation at the AWSs is relatively weak. High-precipitation events at both Halvfarryggen and Kohnen are typically associated with amplified upper level waves. This large-scale atmospheric flow pattern is preceded by the downstream development of a Rossby wave train from the eastern South Pacific several days before the precipitation event. At the surface, a cyclone located over the Weddell Sea is the main synoptic ingredient for high precipitation both at Halvfarryggen and at Kohnen. A blocking anticyclone downstream is not a requirement for high precipitation per se, but a larger share of blocking occurrences during the highest-precipitation days in DML suggests that these blocks strengthen the vertically integrated water vapor transport (IVT) into DML. A strong link between high precipitation and the IVT perpendicular to the local orography suggests that IVT could be used as a “proxy” for high precipitation, in particular over DML's interior.
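For readers unfamiliar with the IVT diagnostic mentioned above, the sketch below computes it from a single pressure-level profile using the standard definition IVT = (1/g)·|(∫ q·u dp, ∫ q·v dp)|; the levels and values are synthetic placeholders, not ERA-Interim fields.

```python
import numpy as np

g = 9.81                                                  # m s^-2
p = np.array([1000, 925, 850, 700, 500, 300]) * 100.0     # pressure levels (Pa), surface first
q = np.array([3.0, 2.5, 2.0, 1.2, 0.5, 0.1]) * 1e-3       # specific humidity (kg/kg)
u = np.array([5.0, 8.0, 12.0, 18.0, 25.0, 30.0])          # zonal wind (m/s)
v = np.array([2.0, 4.0, 6.0, 10.0, 12.0, 8.0])            # meridional wind (m/s)

def column_integral(f):
    """Trapezoidal integral of f over pressure, surface to top, divided by g."""
    return np.sum(0.5 * (f[:-1] + f[1:]) * (p[:-1] - p[1:])) / g

ivt = np.hypot(column_integral(q * u), column_integral(q * v))
print(f"IVT ≈ {ivt:.0f} kg m^-1 s^-1")
```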
Abstract:
This paper analyses local geographical contexts targeted by transnational large-scale land acquisitions (>200 ha per deal) in order to understand how emerging patterns of socio-ecological characteristics can be related to processes of large-scale foreign investment in land. Using a sample of 139 land deals georeferenced with high spatial accuracy, we first analyse their target contexts in terms of land cover, population density, accessibility, and indicators for agricultural potential. Three distinct patterns emerge from the analysis: densely populated and easily accessible croplands (35% of land deals); remote forestlands with lower population densities (34% of land deals); and moderately populated and moderately accessible shrub- or grasslands (26% of land deals). These patterns are consistent with processes described in the relevant case study literature, and they each involve distinct types of stakeholders and associated competition over land. We then repeat the often-cited analysis that postulates a link between land investments and target countries with abundant so-called “idle” or “marginal” lands as measured by yield gap and available suitable but uncultivated land; our methods differ from the earlier approach, however, in that we examine local context (10-km radius) rather than countries as a whole. The results show that earlier findings are disputable in terms of concepts, methods, and contents. Further, we reflect on methodologies for exploring linkages between socioecological patterns and land investment processes. Improving and enhancing large datasets of georeferenced land deals is an important next step; at the same time, careful choice of the spatial scale of analysis is crucial for ensuring compatibility between the spatial accuracy of land deal locations and the resolution of available geospatial data layers. Finally, we argue that new approaches and methods must be developed to empirically link socio-ecological patterns in target contexts to key determinants of land investment processes. This would help to improve the validity and the reach of our findings as an input for evidence-informed policy debates.
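As a rough illustration of the typology step, and not the authors' exact method, the sketch below clusters synthetic per-deal context variables (land-cover share, population density, accessibility) with k-means to show how groupings like the three reported patterns can be derived; the variable values and the choice of three clusters are assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_deals = 139
context = np.column_stack([
    rng.uniform(0, 1, n_deals),      # cropland fraction within a 10-km radius
    rng.lognormal(3, 1, n_deals),    # population density (persons/km^2)
    rng.uniform(0.5, 48, n_deals),   # accessibility: travel time to nearest city (h)
])

X = StandardScaler().fit_transform(context)
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

for k in range(3):
    share = (labels == k).mean() * 100
    print(f"pattern {k}: {share:.0f}% of deals, "
          f"mean context = {context[labels == k].mean(axis=0).round(2)}")
```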
Abstract:
Increasing commercial pressures on land are provoking fundamental and far-reaching changes in the relationships between people and land. Much knowledge on land-oriented investment projects currently comes from the media. Although this provides a good starting point, lack of transparency and rapidly changing contexts mean that it is often unreliable. The International Land Coalition, in partnership with Oxfam Novib, the Centre de coopération internationale en recherche agronomique pour le développement (CIRAD), the University of Pretoria, the Centre for Development and Environment of the University of Bern (CDE), and GIZ, has started to compile an inventory of land-related investments. The project aims to better understand the extent, trends, and impacts of land-related investments by supporting an ongoing and systematic stocktaking of the various investment projects currently taking place worldwide. It involves a large number of organizations and individuals who work in areas where land transactions are being made and are able to provide details of such investments. The project monitors land transactions in rural areas that imply a transformation of land use rights from communities and smallholders to commercial use and that are made by both domestic and foreign investors (private actors, governments, and government-backed private investors). The focus is on investments for food or agrofuel production, timber extraction, carbon trading, mineral extraction, conservation, and tourism. A novel way of using ICT to document land acquisitions in a spatially explicit way, based on a crowdsourcing approach, is being developed. This approach will allow actors to share information and knowledge directly and at any time on a public platform, where it will be scrutinized for reliability and cross-checked against other sources. To date, over 1200 deals have been recorded across 96 countries. Details of these transactions have been classified in a matrix and distributed to over 350 contacts worldwide for verification. The verified information has been geo-referenced and represented in two global maps. This is an open database that enables continued monitoring and the improvement of data accuracy; more information will be released over time. The opportunity lies in overcoming the constraints of incomplete information by proposing a new way of collecting, enhancing, and sharing information and knowledge in a more democratic and transparent manner. The intention is to develop an interactive knowledge platform where any interested person can share and access information on land deals, their links to the stakeholders involved, and their embedding in a geographical context. By making use of new ICT that is increasingly within the reach of local stakeholders, as well as open-access and web-based spatial information systems, it will become possible to create a dynamic database containing spatially explicit data. Data fed in by a large number of stakeholders, increasingly also by means of new mobile ICT, will open up new opportunities to analyse, monitor, and assess highly dynamic trends of land acquisition and rural transformation.
Abstract:
Despite an increased scientific interest in the relatively new phenomenon of large-scale land acquisition (LSLA), data on processes at the local level remain sparse and superficial. However, knowledge about the concrete implementation of LSLA projects and the different impacts they have on the heterogeneous group of project-affected people is indispensable for a deeper understanding of the phenomenon. In order to address this research gap, a team of two anthropologists and a human geographer conducted in-depth fieldwork on the LSLA project of the Swiss-based Addax Bioenergy in Sierra Leone. After the devastating civil war, the Sierra Leonean government created favourable conditions for foreign investors willing to lease large areas of land and to bring “development” to the country. As one of the numerous investing companies, Addax Bioenergy has leased 57,000 hectares of land to develop a sugarcane plantation and an ethanol factory producing biofuel for export to the European market. Based on participant observation, qualitative interview techniques, and a network analysis, the research team aimed a) to identify the different actors that were necessary for the implementation of this project at the vertical level and b) to explore the various impacts of the project in the local context of two villages at the horizontal level. The network analysis reveals a complex pattern of companies, institutions, non-governmental organisations, and prominent personalities acting within a shifting technological and discursive framework linking global scales to a unique local context. Findings from the local level indicate that affected people initially welcomed the project but are now frustrated because many promises and expectations have not been fulfilled. Although some local people are able to benefit from the project, the loss of natural resources that comes with the land lease considerably affects the livelihoods of vulnerable groups, especially women and land users. However, this research does not only disclose impacts on local people’s previous lives but also addresses the strategies they adopt in the newly created situation, which has opened up alternative spaces for renegotiations of power and legitimation. This exploratory study thereby reveals new aspects of LSLA that have been adequately considered neither by the investing company nor by the general academic discourse on LSLA.
Abstract:
Despite increased scientific interest in the phenomenon of large-scale land acquisitions (LSLA), accurate data on implementation processes remain sparse. This paper aims to fill this gap by providing empirical in-depth knowledge on the case of the Swiss-based Addax Bioenergy Ltd. in Sierra Leone. Extensive fieldwork allowed the interdisciplinary research team 1) to identify the different actors that are necessary for implementation at the vertical level and 2) to document the perceptions and strategies of the heterogeneous group of project-affected people at the horizontal level. Findings reveal that even a project labelled a best-practice example by UN agencies triggers a number of problematic processes for the affected communities. The loss of natural resources that comes with the land lease and the lack of employment possibilities mostly affect already vulnerable groups. On the other hand, the strategies and resistance of local people also affect the project implementation. This shows that the horizontal and vertical levels are not separate entities: they are linked by social networks, social interactions, and means of communication, and both levels take part in shaping the project’s impacts.
Cerebellar mechanisms for motor learning: Testing predictions from a large-scale computer simulation
Abstract:
The cerebellum is the major brain structure that contributes to our ability to improve movements through learning and experience. We have combined computer simulations with behavioral and lesion studies to investigate how modification of synaptic strength at two different sites within the cerebellum contributes to a simple form of motor learning—Pavlovian conditioning of the eyelid response. These studies are based on the wealth of knowledge about the intrinsic circuitry and physiology of the cerebellum and the straightforward manner in which this circuitry is engaged during eyelid conditioning. Thus, our simulations are constrained by the well-characterized synaptic organization of the cerebellum and, further, the activity of cerebellar inputs during simulated eyelid conditioning is based on existing recording data. These simulations have allowed us to make two important predictions regarding the mechanisms underlying cerebellar function, which we have tested and confirmed with behavioral studies. The first prediction describes the mechanisms by which one of the sites of synaptic modification, the granule to Purkinje cell synapses (gr → Pkj) of the cerebellar cortex, could generate two time-dependent properties of eyelid conditioning—response timing and the ISI function. An empirical test of this prediction using small, electrolytic lesions of the cerebellar cortex revealed the pattern of results predicted by the simulations. The second prediction made by the simulations is that modification of synaptic strength at the other site of plasticity, the mossy fiber to deep nuclei synapses (mf → nuc), is under the control of Purkinje cell activity. The analysis predicts that this property should confer resistance to extinction on mf → nuc synapses. Thus, while extinction processes erase plasticity at the first site, residual plasticity at mf → nuc synapses remains. The residual plasticity at the mf → nuc site gives the cerebellum the capability for rapid relearning long after the learned behavior has been extinguished. We confirmed this prediction using a lesion technique that reversibly disconnected the cerebellar cortex at various stages during extinction and reacquisition of eyelid responses. The results of these studies represent significant progress toward a complete understanding of how the cerebellum contributes to motor learning.
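A hedged toy caricature of the two-site account, not the authors' large-scale simulation: a delta-rule model in which the cortical weight (gr → Pkj) is erased by extinction while the nuclear weight (mf → nuc), whose updates are gated by a trained cortex standing in for Purkinje-cell control, partly survives extinction and gives relearning a head start. All learning rates and the gate threshold are arbitrary choices.

```python
import numpy as np

def run(n_trials, us_present, w_ctx, w_nuc, lr_ctx=0.15, lr_nuc=0.05, gate=0.5):
    for _ in range(n_trials):
        cr = w_ctx + w_nuc                        # conditioned response strength
        err = (1.0 if us_present else 0.0) - cr   # simple delta-rule teaching signal
        w_ctx = float(np.clip(w_ctx + lr_ctx * err, 0.0, 1.0))
        if w_ctx > gate:                          # nuclear plasticity changes only while
            w_nuc = float(np.clip(w_nuc + lr_nuc * err, 0.0, 1.0))  # the cortex is trained
    return w_ctx, w_nuc

w_ctx, w_nuc = run(200, True, 0.0, 0.0)           # acquisition
print("after acquisition   :", round(w_ctx, 2), round(w_nuc, 2))
w_ctx, w_nuc = run(200, False, w_ctx, w_nuc)      # extinction erases w_ctx, spares some w_nuc
print("after extinction    :", round(w_ctx, 2), round(w_nuc, 2))
w_ctx, w_nuc = run(10, True, w_ctx, w_nuc)        # residual w_nuc gives reacquisition a head start
print("after 10 reacq trials:", round(w_ctx, 2), round(w_nuc, 2))
```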
Abstract:
Coastal communities around the world face increasing risk from flooding as a result of rising sea level, increasing storminess, and land subsidence. Salt marshes can act as natural buffer zones, providing protection from waves during storms. However, the effectiveness of marshes in protecting the coastline during extreme events, when water levels and waves are highest, is poorly understood. Here, we experimentally assess wave dissipation under storm surge conditions in a 300-m-long wave flume that contains a transplanted section of natural salt marsh. We find that the presence of marsh vegetation causes considerable wave attenuation, even when water levels and waves are high. From a comparison with experiments without vegetation, we estimate that up to 60% of observed wave reduction is attributed to vegetation. We also find that although waves progressively flatten and break vegetation stems and thereby reduce dissipation, the marsh substrate remained remarkably stable and resistant to surface erosion under all conditions. The effectiveness of storm wave dissipation and the resilience of tidal marshes even under extreme conditions suggest that salt marsh ecosystems can be a valuable component of coastal protection schemes.
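A back-of-envelope illustration of how the with/without-vegetation comparison yields an attributed share of wave reduction; the wave heights below are invented for illustration and are not flume measurements.

```python
# Compare wave-height reduction over the vegetated marsh section with the
# reduction over the same bare substrate; the difference is the vegetation share.
h_in = 0.50                         # incident wave height (m), made-up value
h_out_veg = 0.40                    # transmitted height over vegetated marsh (m)
h_out_bare = 0.44                   # transmitted height over bare substrate (m)

total_reduction = h_in - h_out_veg              # reduction with vegetation present
bare_reduction = h_in - h_out_bare              # reduction from substrate/friction alone
veg_share = (total_reduction - bare_reduction) / total_reduction
print(f"share of wave reduction attributable to vegetation: {veg_share:.0%}")
```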
Abstract:
The variable nature of irradiance can produce significant fluctuations in the power generated by large grid-connected photovoltaic (PV) plants. Experimental 1 s data were collected throughout a year from six PV plants, 18 MWp in total. The dependence of short (below 10 min) power fluctuations on PV plant size was then investigated. The analysis focuses on the frequency of fluctuations as well as on the maximum fluctuation value recorded. An analytic model able to describe the frequency of a given fluctuation on a given day is proposed.
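A small sketch, using assumed synthetic data, of the fluctuation statistics the abstract describes: normalized power changes over a short window, their maximum value, and how often they exceed a threshold. The window length, threshold, and random-walk power series are illustrative only, not values from the measured plants.

```python
import numpy as np

rng = np.random.default_rng(0)
p_rated = 1.0                                    # power normalized to rated capacity
power = np.clip(0.6 + np.cumsum(rng.normal(0, 0.005, 86_400)), 0, p_rated)  # 1 day at 1 s

dt = 60                                          # fluctuation window in seconds (< 10 min)
fluct = (power[dt:] - power[:-dt]) / p_rated     # normalized fluctuation over dt

max_fluct = np.abs(fluct).max()
freq_above_10pct = np.mean(np.abs(fluct) > 0.10) # fraction of samples exceeding 10%
print(f"max {dt}s fluctuation: {max_fluct:.1%}; "
      f"share of samples above 10%: {freq_above_10pct:.2%}")
```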
Abstract:
The depletion, absence, or simple uncertainty about the amount of fossil fuel reserves, added to the volatility of their prices and the growing instability of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier will depend on controlling the risks associated with its handling and storage. Among these, the undeniable risk of a severe explosion appears as the main drawback of this alternative fuel.
This thesis investigates the numerical modelling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is severely limited. The introduction gives a general description of explosion processes and concludes that the resolution constraints make it necessary to model both turbulence and combustion. A critical review of the available methodologies for turbulence and combustion follows, pointing out the strengths, deficiencies, and suitability of each. The conclusion of this review is that, given the existing constraints, the only viable strategy for combustion modelling is to use an expression for the turbulent burning velocity, as a function of several parameters, to close a balance equation for the combustion progress variable; such models are known as turbulent flame speed models. It is also concluded that, for turbulence, the most suitable solution is to use different methodologies, LES or RANS, depending on the geometry and the resolution constraints of each particular problem.
Based on these findings, a combustion model is developed within the turbulent flame speed framework that overcomes the deficiencies of the available models for problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation that accounts for the simultaneous influence of the equivalence ratio, temperature, pressure, and dilution with steam on the laminar burning velocity; the resulting formulation is valid over a wider range of temperature, pressure, and steam dilution than any previously available formulation. The turbulent burning velocity, in turn, can be obtained from correlations that express it as a function of several parameters; to select the most suitable one, a number of such correlations were compared against experiments, with the outcome that the formulation due to Schmidt is the most adequate for the conditions studied. The role of flame instabilities in the propagation of combustion fronts is then assessed. Their significance is found to be important for lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents in nuclear power plants. A model is therefore developed to account for the effect of instabilities, and in particular the acoustic-parametric instability, on the flame propagation speed. This includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the burning velocity enhancement due to flame instabilities, as well as an analysis of the stability of flames with respect to a cyclic velocity perturbation; the results are combined into a model of the acoustic-parametric instability.
The model developed is then applied to several problems of importance for industrial safety, and the results are analysed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers were simulated, with and without concentration gradients and venting. As a general outcome, the model is validated, confirming its suitability for these problems. As a final task, a thorough analysis of the Fukushima-Daiichi catastrophe was carried out. The aim of the analysis is to determine the amount of hydrogen that exploded in reactor one, in contrast with other studies that have focused on the amount of hydrogen generated during the accident. The investigation concludes that the most probable amount of hydrogen consumed in the explosion was 130 kg. It is remarkable that the combustion of such a small quantity of hydrogen can cause such significant damage, which underlines the importance of this type of investigation. The industrial branches for which the developed model is of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage), with a particular impact on the transport sector and on nuclear safety in both fission and fusion technology.
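To make the model family concrete, the sketch below evaluates a Zimont-type turbulent flame speed correlation, S_T = A·u'^(3/4)·S_L^(1/2)·α_u^(-1/4)·l_t^(1/4), of the kind used to close the progress-variable equation discussed above; the constant and the input values are illustrative and do not reproduce the thesis model or the Schmidt correlation it ultimately selects.

```python
def zimont_turbulent_flame_speed(u_prime, s_l, alpha_u, l_t, a_const=0.52):
    """Turbulent burning velocity (m/s) from a Zimont-type correlation.

    u_prime : turbulence rms velocity (m/s)
    s_l     : laminar burning velocity (m/s)
    alpha_u : thermal diffusivity of the unburned mixture (m^2/s)
    l_t     : turbulence integral length scale (m)
    """
    return a_const * u_prime**0.75 * s_l**0.5 * alpha_u**-0.25 * l_t**0.25

# Example with illustrative values for a lean hydrogen-air mixture.
s_t = zimont_turbulent_flame_speed(u_prime=2.0, s_l=0.6, alpha_u=2.2e-5, l_t=0.1)
print(f"S_T ≈ {s_t:.1f} m/s")
```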
Abstract:
The function of many of the uncharacterized open reading frames discovered by genomic sequencing can be determined at the level of expressed gene products, the proteome. However, identifying the cognate gene from minute amounts of protein has been one of the major problems in molecular biology. Using yeast as an example, we demonstrate here that mass spectrometric protein identification is a general solution to this problem given a completely sequenced genome. As a first screen, our strategy uses automated laser desorption ionization mass spectrometry of the peptide mixtures produced by in-gel tryptic digestion of a protein. Up to 90% of proteins are identified by searching sequence data bases by lists of peptide masses obtained with high accuracy. The remaining proteins are identified by partially sequencing several peptides of the unseparated mixture by nanoelectrospray tandem mass spectrometry followed by data base searching with multiple peptide sequence tags. In blind trials, the method led to unambiguous identification in all cases. In the largest individual protein identification project to date, a total of 150 gel spots—many of them at subpicomole amounts—were successfully analyzed, greatly enlarging a yeast two-dimensional gel data base. More than 32 proteins were novel and matched to previously uncharacterized open reading frames in the yeast genome. This study establishes that mass spectrometry provides the required throughput, the certainty of identification, and the general applicability to serve as the method of choice to connect genome and proteome.
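A simplified sketch of the peptide-mass-fingerprinting step described above: predict tryptic peptide masses for a candidate protein sequence and count how many observed masses match within a tolerance. The sequence, mass list, and 0.2 Da tolerance are toy values; real searches also consider missed cleavages, modifications, and whole-genome databases.

```python
MONO = {  # monoisotopic amino acid residue masses (Da)
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259, "M": 131.04049,
    "H": 137.05891, "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056

def tryptic_peptides(seq):
    """Cleave after K/R except when followed by P (no missed cleavages)."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in "KR" and (i + 1 == len(seq) or seq[i + 1] != "P"):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return peptides

def peptide_mass(pep):
    return sum(MONO[aa] for aa in pep) + WATER

def count_matches(observed_masses, seq, tol=0.2):
    predicted = [peptide_mass(p) for p in tryptic_peptides(seq)]
    return sum(any(abs(m - p) <= tol for p in predicted) for m in observed_masses)

candidate = "MKWVTFISLLFLFSSAYSRGVFRR"       # toy sequence, not a real yeast ORF
observed = [277.15, 477.27, 1000.00]        # toy peptide mass list (Da); two should match
print(count_matches(observed, candidate), "of", len(observed), "masses matched")
```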