859 results for "Large-scale gradient"


Relevance:

90.00%

Abstract:

The increasing availability of large, detailed digital representations of the Earth’s surface demands the application of objective and quantitative analyses. Given recent advances in the understanding of the mechanisms of formation of linear bedform features from a range of environments, objective measurement of their wavelength, orientation, crest and trough positions, height and asymmetry is highly desirable. These parameters are also of use when determining observation-based parameters for use in many applications such as numerical modelling, surface classification and sediment transport pathway analysis. Here, we (i) adapt and extend extant techniques to provide a suite of semi-automatic tools which calculate crest orientation, wavelength, height, asymmetry direction and asymmetry ratios of bedforms, and then (ii) undertake sensitivity tests on synthetic data, increasingly complex seabeds and a very large-scale (39,000 km²) aeolian dune system. The automated results are compared with traditional, manually derived measurements at each stage. This new approach successfully analyses different types of topographic data (from aeolian and marine environments) from a range of sources, with tens of millions of data points being processed in a semi-automated and objective manner within minutes rather than hours or days. The results from these analyses show there is significant variability in all measurable parameters in what might otherwise be considered uniform bedform fields. For example, the dunes of the Rub’ al Khali on the Arabian Peninsula are shown to exhibit deviations in dimensions from global trends. Morphological and dune asymmetry analysis of the Rub’ al Khali suggests parts of the sand sea may be adjusting to a wind regime that has changed since their formation 100 to 10 ka BP.
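A minimal sketch of the kind of measurement being automated here (not the paper's tool): locate crests and troughs on a synthetic 1D elevation profile with scipy, then derive mean wavelength, height and an asymmetry ratio. All profile parameters below are invented for illustration.

```python
# Minimal sketch (not the paper's tool): measure wavelength, height and
# asymmetry of bedforms on a synthetic 1D elevation profile.
import numpy as np
from scipy.signal import find_peaks

x = np.linspace(0, 2000, 4001)               # distance (m), 0.5 m spacing
z = 8 * np.sin(2 * np.pi * x / 250) + 0.5 * np.random.randn(x.size)

crests, _ = find_peaks(z, distance=100, prominence=4)
troughs, _ = find_peaks(-z, distance=100, prominence=4)

wavelength = np.mean(np.diff(x[crests]))     # mean crest-to-crest spacing
height = z[crests].mean() - z[troughs].mean()

# Asymmetry ratio: stoss- vs lee-side horizontal lengths around each crest
ratios = []
for c in crests:
    left = troughs[troughs < c]
    right = troughs[troughs > c]
    if left.size and right.size:
        ratios.append((x[c] - x[left[-1]]) / (x[right[0]] - x[c]))

print(f"wavelength ~ {wavelength:.0f} m, height ~ {height:.1f} m, "
      f"asymmetry ratio ~ {np.mean(ratios):.2f}")
```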

Relevance:

90.00%

Abstract:

Installed wind capacity in the European Union is expected to continue to increase due to renewable energy targets and obligations to reduce greenhouse gas emissions. Renewable energy sources such as wind power are variable sources of power. Energy storage technologies are useful for managing the issues associated with variable renewable energy sources and for aligning non-dispatchable renewable generation with load demands. Energy storage technologies can play different roles in electric power systems and can be used in each step of the electric power supply chain. Moreover, large-scale energy storage systems can act as renewable energy integrators by smoothing the variability of large penetrations of wind power. Compressed Air Energy Storage is one such technology. The aim of this paper is to examine the technical and economic feasibility of a combined gas storage and compressed air energy storage facility in the all-island Single Electricity Market of Northern Ireland and the Republic of Ireland, in order to optimise power generation and wind power integration. This analysis is undertaken using the electricity market software PLEXOS® for Power Systems, by developing a model of a combined facility in 2020.
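PLEXOS is a commercial package, so the paper's model cannot be reproduced here; the following toy linear program, in the same spirit, dispatches a generic storage unit against an hourly price series to illustrate the arbitrage role such a facility plays in a pool market. All names and numbers are hypothetical.

```python
# Illustrative only: a toy price-arbitrage dispatch LP for a generic storage
# unit (a stand-in for the paper's PLEXOS model; all numbers are invented).
import numpy as np
from scipy.optimize import linprog

price = np.array([30, 25, 20, 40, 80, 90, 60, 35], dtype=float)  # EUR/MWh
T, P_max, E_max, eta = len(price), 100.0, 400.0, 0.7  # MW, MWh, round-trip eff.

# Variables: x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}] (MW, 1 h steps)
c = np.concatenate([price, -price])   # pay when charging, earn when discharging
L = np.tril(np.ones((T, T)))          # cumulative-sum operator
# State of charge after step t: soc_t = eta*sum(charge) - sum(discharge),
# which must stay within [0, E_max].
A_ub = np.block([[ eta * L, -L],      #  soc_t <= E_max
                 [-eta * L,  L]])     # -soc_t <= 0
b_ub = np.concatenate([np.full(T, E_max), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, P_max)] * (2 * T))
charge, discharge = res.x[:T], res.x[T:]
print("profit (EUR):", -res.fun)
```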

Relevance:

90.00%

Abstract:

Renewable energy generation is expected to continue to increase globally due to renewable energy targets and obligations to reduce greenhouse gas emissions. Some renewable energy sources are variable power sources, for example wind, wave and solar. Energy storage technologies can manage the issues associated with variable renewable generation and align non-dispatchable renewable energy generation with load demands. Energy storage technologies can play different roles in each step of the electric power supply chain. Moreover, large-scale energy storage systems can act as renewable energy integrators by smoothing the variability. Compressed air energy storage is one such technology. This paper examines the impacts of a compressed air energy storage facility in a pool-based wholesale electricity market in a power system with a large renewable energy portfolio.

Relevance:

90.00%

Abstract:

We have used geophysics, microbiology, and geochemistry to link large-scale (30+ m) geophysical self-potential (SP) responses at a groundwater contaminant plume with the chemistry and microbial ecology of groundwater and soil in and around it. We found that microbially mediated transformation of ammonia to nitrite, nitrate, and nitrogen gas was likely to have promoted a well-defined electrochemical gradient at the edge of the plume, which dominated the SP response. Phylogenetic analysis demonstrated that the plume fringe, or anode of the geobattery, was dominated by electrogens and biodegradative microorganisms, including Proteobacteria alongside Geobacteraceae, Desulfobulbaceae, and Nitrosomonadaceae. The uncultivated candidate phylum OD1 dominated uncontaminated areas of the site. We defined the redox boundary at the plume edge using the calculated and observed SP geophysical measurements. Conductive soils and waste acted as an electronic conductor, dominated by abiotic iron-cycling processes that sequester electrons generated at the plume fringe. We suggest that such geoelectric phenomena can act as indicators of the natural attenuation processes that control groundwater plumes. Further work is required to monitor electron transfer across the geoelectric dipole in order to fully define this phenomenon as a geobattery. This approach can be used as a novel way of monitoring microbial activity during the degradation of contaminated groundwater plumes, or to monitor in situ bioelectric systems designed to manage groundwater plumes.
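The geobattery concept can be illustrated with a minimal, hypothetical forward model: the surface SP anomaly of a buried vertical current dipole in a uniform half-space, computed by superposing point current sources. None of the parameter values below are taken from the study; they are illustrative only.

```python
# Hypothetical forward model of a "geobattery": surface SP anomaly of a buried
# vertical current dipole in a uniform half-space (all parameters illustrative).
import numpy as np

rho = 50.0        # half-space resistivity (ohm*m)
I = 5e-3          # dipole current (A)
x0, z_pos, z_neg = 0.0, 2.0, 6.0   # pole depths: +I at 2 m, -I at 6 m

x = np.linspace(-30, 30, 121)      # surface profile (m)
r_pos = np.hypot(x - x0, z_pos)
r_neg = np.hypot(x - x0, z_neg)
# Point current source in a half-space: V = I * rho / (2*pi*r)
V = I * rho / (2 * np.pi) * (1.0 / r_pos - 1.0 / r_neg)   # volts
print("peak SP anomaly: %.1f mV" % (1e3 * V.max()))
```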

Relevance:

90.00%

Abstract:

This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and by the avoided costs of investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear program, suitable for solution with a widely available commercial package. Results of the proposed method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22,298 iterations demonstrates the ability of the proposed methodology to handle large-scale compensation problems efficiently.
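The paper's full formulation is not reproduced here, but the flavour of the mixed-integer model can be shown in miniature: binary variables choose at most one capacitor rating per candidate node so as to maximize linearized loss savings minus annualized cost. The data below are hypothetical; the sketch uses the PuLP modelling library with its bundled CBC solver.

```python
# Toy MILP in the spirit of the paper (not its formulation): pick nodes and
# discrete capacitor sizes to maximize linearized loss savings minus cost.
# All data are hypothetical. Requires: pip install pulp
import pulp

nodes = [1, 2, 3]
sizes = [300, 600, 900]                      # available ratings (kvar)
saving = {  # assumed linearized annual loss saving (USD) per node and size
    (1, 300): 900, (1, 600): 1500, (1, 900): 1700,
    (2, 300): 700, (2, 600): 1300, (2, 900): 1600,
    (3, 300): 400, (3, 600):  800, (3, 900): 1000,
}
cost = {300: 500, 600: 900, 900: 1300}       # annualized capacitor cost (USD)

prob = pulp.LpProblem("capacitor_placement", pulp.LpMaximize)
x = pulp.LpVariable.dicts("install", (nodes, sizes), cat="Binary")

# Objective: net savings; constraint: at most one capacitor bank per node
prob += pulp.lpSum(x[n][s] * (saving[n, s] - cost[s]) for n in nodes for s in sizes)
for n in nodes:
    prob += pulp.lpSum(x[n][s] for s in sizes) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for n in nodes:
    for s in sizes:
        if x[n][s].value() == 1:
            print(f"node {n}: install {s} kvar")
print("net annual saving:", pulp.value(prob.objective))
```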

Relevance:

90.00%

Abstract:

The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data and can therefore provide an efficient means of modelling the local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation of the method for implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of ¹³⁷Cs activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
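A minimal sketch of the multi-scale idea using scikit-learn: mix two RBF kernels with different length scales and pass the composite kernel to SVR via kernel="precomputed". Note that the paper learns the mixture from data, whereas here the weight w and both scales are fixed by hand, and the data are synthetic.

```python
# Minimal two-scale SVR sketch (the paper learns the kernel mixture; here the
# mixing weight w and both length scales are fixed). Requires scikit-learn.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 2))                 # spatial coordinates
y = np.sin(X[:, 0] / 20) + 0.3 * np.sin(X[:, 1] / 2)   # long- + short-scale signal

w, gamma_long, gamma_short = 0.7, 1e-3, 1e-1           # assumed scales and weight
K = (w * rbf_kernel(X, X, gamma=gamma_long)
     + (1 - w) * rbf_kernel(X, X, gamma=gamma_short))

model = SVR(kernel="precomputed", C=10.0, epsilon=0.05).fit(K, y)

X_new = rng.uniform(0, 100, size=(5, 2))
K_new = (w * rbf_kernel(X_new, X, gamma=gamma_long)
         + (1 - w) * rbf_kernel(X_new, X, gamma=gamma_short))
print(model.predict(K_new))
```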

Relevance:

90.00%

Abstract:

Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating the spread of pandemic disease or computer viruses. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive, and often outperforms existing strategies such as Google PageRank at minimizing pairwise connectivity.
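The DFSH ranking itself is specific to the thesis, but the objective it minimizes is easy to state in code. The sketch below computes residual pairwise connectivity with networkx and applies a naive highest-degree greedy baseline of the kind such heuristics are compared against; the graph and parameters are illustrative.

```python
# Illustration of the critical-node objective (not the thesis' DFSH algorithm):
# residual pairwise connectivity after removing k vertices, with a naive
# highest-degree greedy baseline. Requires networkx.
import networkx as nx

def pairwise_connectivity(G):
    """Number of connected vertex pairs: sum over components of C(|c|, 2)."""
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def greedy_critical_nodes(G, k):
    """Repeatedly remove the highest-degree vertex (simple baseline ranking)."""
    H = G.copy()
    removed = []
    for _ in range(k):
        v = max(H.degree, key=lambda dv: dv[1])[0]
        H.remove_node(v)
        removed.append(v)
    return removed, H

G = nx.barabasi_albert_graph(500, 2, seed=1)
removed, H = greedy_critical_nodes(G, k=10)
print("before:", pairwise_connectivity(G), "after:", pairwise_connectivity(H))
```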

Relevance:

90.00%

Abstract:

This research examines the longitudinal (upstream-downstream) and transverse (north shore - south shore) distribution patterns of planktonic crustacean communities along the St. Lawrence River between Lake Saint-François and the estuarine transition zone, at two hydroperiods, in May (flood) and in August (low flow). Zooplankton and environmental data were collected in 2006 at 52 stations distributed over 16 cross-river transects. In Chapter 1, we present the main river ecosystem models, a synthesis of the factors influencing zooplankton in rivers, and the research objectives and hypotheses. In Chapter 2, we describe the structure of the zooplankton communities in three biogeographic zones of the river and six longitudinal habitats, as well as the relationships between zooplankton structure, the spatial distribution of water masses, and environmental variables. In Chapter 3, we partition the variation between AEM spatial variables (based on the distribution of water masses) and environmental variables to assess how much of the zooplankton variation is explained by hydrological processes (AEM variables) and how much by local conditions (environmental factors). The salinity-conductivity gradient associated with the river-estuary discontinuity determined the large-scale distribution of the zooplankton. In the fluvial zones, zooplankton distribution was influenced more by the distribution of water masses than by local environmental factors. The distribution of water masses explained a larger share of the variation in zooplankton distribution in August than in May.

Relevance:

90.00%

Abstract:

Supervised learning of large-scale hierarchical networks is currently enjoying spectacular success. Despite this momentum, many researchers still consider unsupervised learning a key ingredient of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses several research topics related to density estimation with Boltzmann machines (BMs), the probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis opens with a new adaptive sampling algorithm that automatically adjusts the temperature of the simulated Markov chains so as to maintain a high convergence speed throughout learning. Used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to learning any probabilistic model that relies on Markov chain sampling. While the maximum likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose exploiting the learning dynamics by estimating the successive changes in the log-partition function incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but over a two-dimensional graph whose dimensions correspond to the time axis and to the temperature parameter. On the optimization side, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption was limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by combining a linear solver with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML; unfortunately, its implementation remains computationally inefficient. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify to model sparse binary distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (the "slabs"). This translates into increased invariance in the representation and a better classification rate when few labeled data are available.
We close the thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
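The core of MFNG, solving the linear system F d = g with a solver driven by Fisher-vector products rather than an explicit Fisher matrix, can be sketched generically. The code below is not the thesis' implementation: it applies conjugate gradient to a damped empirical Fisher built from toy per-example gradients.

```python
# Sketch of the metric-free natural gradient idea (not the thesis' code):
# solve F d = g by conjugate gradient using only Fisher-vector products.
# Here F is the empirical Fisher (1/n) * G^T G of toy per-example gradients
# G (n x p), so F itself is never formed.
import numpy as np

def conjugate_gradient(matvec, b, iters=50, tol=1e-8):
    x = np.zeros_like(b)
    r = b.copy()                 # residual b - A x (x starts at 0)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n, p = 1000, 50
G = rng.standard_normal((n, p))           # toy per-example gradients
g = G.mean(axis=0)                        # mean gradient
damping = 1e-3                            # keeps the system well conditioned

def fisher_vector_product(v):
    return G.T @ (G @ v) / n + damping * v

d = conjugate_gradient(fisher_vector_product, g)   # natural-gradient direction
print("residual:", np.linalg.norm(fisher_vector_product(d) - g))
```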

Relevance:

90.00%

Abstract:

Location decisions are often subject to dynamic aspects such as changes in customer demand. Adapting to them calls for increased flexibility in the location and capacity of facilities. Even when demand can be forecast, finding the optimal schedule for deploying and dynamically adjusting capacities remains a challenge. In this thesis, we focus on multi-period facility location problems that allow dynamic capacity adjustment, in particular those with complex cost structures. We study these problems from several operations research perspectives, presenting and comparing several mixed-integer linear programming (MILP) models, assessing their use in practice, and developing efficient solution algorithms. The thesis is divided into four parts. First, we present the industrial context that motivated this work: a forestry company that needs to locate camps to house forest workers. We present a MILP model allowing the construction of new camps and the expansion, relocation, and temporary partial closure of existing camps. The model uses particular capacity constraints together with a multi-level economy-of-scale cost structure. The model's usefulness is assessed through two case studies. The second part introduces the dynamic facility location problem with generalized modular capacities. The model generalizes several dynamic facility location problems and provides stronger linear relaxation bounds than their specialized formulations. It can solve location problems in which the costs of capacity changes are defined for every pair of capacity levels, as is the case in the industrial problem mentioned above. It is applied to three special cases: capacity expansion and reduction, temporary facility closure, and the combination of both. We prove dominance relations between our formulation and the existing models for these special cases. Computational experiments on a large set of randomly generated instances with up to 100 facilities and 1,000 customers show that our model can obtain optimal solutions faster than the existing specialized formulations. Given the complexity of the previous models for large instances, the third part of the thesis proposes Lagrangian heuristics. Based on subgradient and bundle methods, they find good-quality solutions even for large instances with up to 250 facilities and 1,000 customers. We then improve the solution quality by solving a restricted MILP model that exploits information collected while solving the Lagrangian dual. Computational results show that the heuristics quickly provide good-quality solutions, even for instances on which generic solvers fail to find feasible solutions. Finally, we adapt the heuristics to solve the industrial problem. Two different relaxations are proposed and compared. Extensions of the previous concepts are presented to ensure reliable solution of the problem within reasonable time.
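The thesis' multi-period models are too large to reproduce here, but the Lagrangian ingredient can be illustrated on the simplest static, uncapacitated facility location problem: relax the customer-assignment constraints and raise the dual lower bound with subgradient steps. All data below are randomly generated for illustration.

```python
# Minimal illustration of the Lagrangian ingredient (on static uncapacitated
# facility location, not the thesis' multi-period model): relax the assignment
# constraints sum_j x_ij = 1 and improve the bound by subgradient steps.
import numpy as np

rng = np.random.default_rng(0)
n_cust, n_fac = 30, 8
f = rng.uniform(50, 100, n_fac)            # facility opening costs
c = rng.uniform(1, 40, (n_cust, n_fac))    # assignment costs

lam = np.zeros(n_cust)                     # multipliers of sum_j x_ij = 1
step = 5.0
for it in range(200):
    red = np.minimum(c - lam[:, None], 0.0)       # reduced assignment costs
    open_val = f + red.sum(axis=0)                # value of opening facility j
    y = open_val < 0                              # open j only if profitable
    x = (red < 0) & y[None, :]                    # serve i from open, profitable j
    lower_bound = lam.sum() + open_val[y].sum()   # Lagrangian dual value
    subgrad = 1.0 - x.sum(axis=1)                 # violation of assignment
    lam += step / (1 + it) * subgrad              # diminishing step size
print("Lagrangian lower bound: %.1f" % lower_bound)
```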

Relevance:

90.00%

Abstract:

The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
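A toy version of the feature-scale ingredient may help make the mechanism concrete (this is not the paper's model): treat the pad as an elastic foundation that splits the nominal pressure between up- and down-features according to the step height, and evolve the step height with Preston's law. All parameter values are illustrative.

```python
# Toy feature-scale model (not the paper's): elastic-pad pressure split over
# up/down features plus Preston's law, dh/dt = -Kp * V * (p_up - p_down).
# All parameter values are illustrative.
Kp = 1e-11          # Preston coefficient (1/Pa)
V = 1.0             # relative pad-wafer velocity (m/s)
p0 = 2e4            # nominal applied pressure (Pa)
rho = 0.5           # pattern density (up-area fraction)
k_pad = 5e8         # pad elastic-foundation stiffness (Pa/m)

h = 800e-9          # initial step height (m)
dt, t = 0.1, 0.0
while h > 1e-9:
    # Force balance rho*p_up + (1-rho)*p_down = p0 with p_up - p_down = k_pad*h
    p_down = max(p0 - rho * k_pad * h, 0.0)   # pad may lose contact in recesses
    p_up = (p0 - (1 - rho) * p_down) / rho
    h -= Kp * V * (p_up - p_down) * dt        # Preston: step height decays
    t += dt
print("planarization time ~ %.0f s" % t)
```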


Relevance:

90.00%

Abstract:

This is a short presentation which introduces how models and modelling help us to solve large-scale problems in the real world. It introduces the idea that dynamic behaviour is caused by interacting components in the system. Feedback in the system makes behaviour prediction difficult unless we use modelling to support understanding.
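As a tiny illustration of that idea, the sketch below Euler-integrates a classic two-component feedback model (predator-prey); even two interacting components with feedback produce oscillations that are hard to anticipate without running the model. All parameters are arbitrary.

```python
# Two interacting components with feedback (classic predator-prey model):
# simple Euler integration shows oscillatory dynamics emerge from the loop.
a, b, c, d = 1.0, 0.1, 1.5, 0.075   # illustrative rate parameters
prey, pred = 10.0, 5.0
dt = 0.001
for step in range(int(15 / dt)):     # simulate 15 time units
    dprey = (a * prey - b * prey * pred) * dt      # growth minus predation
    dpred = (-c * pred + d * prey * pred) * dt     # decay plus feedback from prey
    prey, pred = prey + dprey, pred + dpred
    if step % int(3 / dt) == 0:
        print(f"t={step*dt:4.1f}  prey={prey:6.2f}  predator={pred:6.2f}")
```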

Relevance:

90.00%

Abstract:

We tested the general predictions of increased use of nest boxes and positive trends in local populations of Common Goldeneye (Bucephala clangula) and Bufflehead (Bucephala albeola) following the large-scale provision of nest boxes in a study area of central Alberta over a 16-year period. Nest boxes were rapidly occupied, primarily by Common Goldeneye and Bufflehead, but also by European Starling (Sturnus vulgaris). After 5 years of deployment, occupancy of large boxes by Common Goldeneye was 82% to 90% and occupancy of small boxes by Bufflehead was 37% to 58%. Based on a single-stage cluster design, experimental closure of nest boxes resulted in significant reductions in numbers of broods and brood sizes produced by Common Goldeneye and Bufflehead. Occurrence and densities of Common Goldeneye and Bufflehead increased significantly across years following nest box deployment at the local scale, but not at the larger regional scale. Provision of nest boxes may represent a viable strategy for increasing breeding populations of these two waterfowl species on landscapes where large trees and natural cavities are uncommon but wetland density is high.

Relevance:

90.00%

Abstract:

The banded organization of clouds and zonal winds in the atmospheres of the outer planets has long fascinated observers. Several recent studies in the theory and idealized modeling of geostrophic turbulence have suggested possible explanations for the emergence of such organized patterns, typically involving highly anisotropic exchanges of kinetic energy and vorticity within the dissipationless inertial ranges of turbulent flows dominated (at least at large scales) by ensembles of propagating Rossby waves. The results from an attempt to reproduce such conditions in the laboratory are presented here. Achievement of a distinct inertial range turns out to require an experiment on the largest feasible scale. Deep, rotating convection on small horizontal scales was induced by gently and continuously spraying dense, salty water onto the free surface of the 13-m-diameter cylindrical tank on the Coriolis platform in Grenoble, France. A “planetary vorticity gradient” or “β effect” was obtained by use of a conically sloping bottom and the whole tank rotated at angular speeds up to 0.15 rad s⁻¹. Over a period of several hours, a highly barotropic, zonally banded large-scale flow pattern was seen to emerge with up to 5–6 narrow, alternating, zonally aligned jets across the tank, indicating the development of an anisotropic field of geostrophic turbulence. Using particle image velocimetry (PIV) techniques, zonal jets are shown to have arisen from nonlinear interactions between barotropic eddies on a scale comparable to either a Rhines or “frictional” wavelength, which scales roughly as (β/U_rms)^(−1/2). This resulted in an anisotropic kinetic energy spectrum with a significantly steeper slope with wavenumber k for the zonal flow than for the nonzonal eddies, which largely follow the classical Kolmogorov k^(−5/3) inertial range. Potential vorticity fields show evidence of Rossby wave breaking and the presence of a “hyperstaircase” with radius, indicating instantaneous flows that are supercritical with respect to the Rayleigh–Kuo instability criterion and in a state of “barotropic adjustment.” The implications of these results are discussed in light of zonal jets observed in planetary atmospheres and, most recently, in the terrestrial oceans.
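A back-of-envelope check of the jet scale is straightforward; in the sketch below only the rotation rate and tank diameter come from the text, while the bottom slope, fluid depth and rms eddy velocity are assumed values chosen purely for illustration.

```python
# Back-of-envelope estimate of the Rhines jet scale for a tank-like setup.
# Only omega and the diameter come from the abstract; slope, depth and u_rms
# are assumed for the sake of the calculation.
import math

omega = 0.15        # rotation rate (rad/s), from the abstract
diameter = 13.0     # tank diameter (m), from the abstract
slope = 0.1         # assumed bottom slope
depth = 0.5         # assumed mean fluid depth (m)
u_rms = 0.005       # assumed rms eddy velocity (m/s)

f = 2 * omega                     # Coriolis parameter
beta = f * slope / depth          # topographic beta (1/(m*s))
L_rhines = 2 * math.pi * math.sqrt(2 * u_rms / beta)   # Rhines wavelength (m)
print(f"beta = {beta:.3f} 1/(m s), Rhines wavelength ~ {L_rhines:.1f} m")
print(f"-> roughly {diameter / L_rhines:.0f} jets across the {diameter:.0f} m tank")
```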