932 results for ENERGY BUDGET MODEL
Abstract:
Computer vision tasks such as object recognition remain unsolved to this day. Learning algorithms such as Artificial Neural Networks (ANNs) represent a promising approach for learning features useful for these tasks. This optimization process is nevertheless difficult. Deep networks based on Restricted Boltzmann Machines (RBMs) were recently proposed to guide the extraction of intermediate representations by means of an unsupervised learning algorithm. This thesis presents, through three articles, contributions to this field of research. The first article deals with the convolutional RBM. The use of local receptive fields, together with the grouping of hidden units into layers sharing the same parameters, considerably reduces the number of parameters to learn and yields local, translation-equivariant feature detectors. This leads to models with better likelihood than RBMs trained on image patches. The second article is motivated by recent findings in neuroscience. It analyzes the impact of quadratic units on visual classification tasks, as well as that of a new activation function. We observe that ANNs built from quadratic units using the softsign function give better generalization performance. The last article offers a critical view of popular RBM training algorithms. We show that the Contrastive Divergence (CD) algorithm and Persistent CD are not robust: both require a relatively flat energy surface for their negative chain to mix. "Fast-weight" PCD works around this problem by slightly perturbing the model; however, this generates noisy samples. The use of tempered chains in the negative phase is a robust way to address these problems and leads to better generative models.
Abstract:
In machine learning, the field concerned with using data to learn solutions to the problems we want to entrust to machines, the Artificial Neural Network (ANN) model is a precious tool. It was invented nearly sixty years ago, and yet it is still the subject of active research today. Recently, with deep learning, it has improved the state of the art in many application areas such as computer vision, speech processing and natural language processing. The ever-growing amount of available data and improvements in computer hardware have made it easier to train high-capacity models such as deep ANNs. However, difficulties inherent to training such models, such as local minima, still have a significant impact. Deep learning therefore seeks solutions, either by regularizing or by easing the optimization. Unsupervised pre-training and the "Dropout" technique are two examples. The first two works presented in this thesis follow this line of research. The first studies the vanishing/exploding gradient problems in deep architectures. It shows that simple choices, such as the activation function or the initialization of the network weights, have a great influence. We propose normalized initialization to ease learning. The second focuses on the choice of activation function and presents the rectifier, or rectified linear unit. This study was the first to emphasize piecewise-linear activation functions for deep neural networks in supervised learning. Today, this type of activation function is an essential component of deep neural networks.
The last two works presented focus on applications of ANNs to natural language processing. The first addresses domain adaptation for sentiment analysis, using Denoising Auto-Encoders; it remains the state of the art today. The second deals with learning from multi-relational data using an energy-based model, which can be applied to the task of word-sense disambiguation.
Abstract:
Agro-ecological resource use patterns in a traditional hill agricultural watershed in Garhwal Himalaya were analysed along an altitudinal transect. Thirty-one food crops were found, although only 0.5% of the agricultural land in the area is irrigated. Fifteen different tree species within agroforestry systems were recorded, with densities ranging from 30 to 90 trees/ha. Grain yield, fodder from agroforestry trees and crop residue were highest between 1200 and 1600 m a.s.l., as was the annual energy input-output ratio per hectare (1.46). This higher input-output ratio between 1200 and 1600 m a.s.l. was attributed to the fact that green fodder obtained from agroforestry trees was counted as farm produce. The energy budget across altitudinal zones revealed a 95% contribution from farmyard manure on the input side, while the maximum output was in the form of either crop residue (35%) or fodder (55%) from the agroforestry component. At present, on average 23%, 29% and 41% of cattle depend on stall feeding in villages located at higher, lower and middle altitudes, respectively. Similarly, fuel wood consumption was strongly influenced by altitude and family size. The efficiency and sustainability of the hill agroecosystem can be restored by strengthening the agroforestry component, an approach that local communities are likely to accept readily, ensuring their effective participation in the programme.
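The energy input-output ratio reported above is simply total energy output (grain, crop residue, fodder) divided by total energy input, per hectare per year. A minimal sketch; the figures below are hypothetical illustrations, not values from the study:

```python
def energy_ratio(outputs_mj, inputs_mj):
    """Annual energy input-output ratio per hectare (all values in MJ/ha)."""
    return sum(outputs_mj.values()) / sum(inputs_mj.values())

# Hypothetical figures chosen only to illustrate the calculation
outputs = {"grain": 30000.0, "crop_residue": 45000.0, "fodder": 55000.0}
inputs = {"farmyard_manure": 85000.0, "labour": 3000.0, "seed": 1000.0}
print(round(energy_ratio(outputs, inputs), 2))  # 1.46 with these numbers
```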
Abstract:
Previous work has demonstrated that observed and modeled climates show a near-time-invariant ratio of mean land to mean ocean surface temperature change under transient and equilibrium global warming. This study confirms this in a range of atmospheric models coupled to perturbed sea surface temperatures (SSTs), slab (thermodynamics only) oceans, and a fully coupled ocean. Away from equilibrium, it is found that the atmospheric processes that maintain the ratio cause a land-to-ocean heat transport anomaly that can be approximated using a two-box energy balance model. When climate is forced by increasing atmospheric CO2 concentration, the heat transport anomaly moves heat from land to ocean, constraining the land to warm in step with the ocean surface, despite the small heat capacity of the land. The heat transport anomaly is strongly related to the top-of-atmosphere radiative flux imbalance, and hence it tends to a small value as equilibrium is approached. In contrast, when climate is forced by prescribing changes in SSTs, the heat transport anomaly replaces "missing" radiative forcing over land by moving heat from ocean to land, warming the land surface. The heat transport anomaly remains substantial in steady state. These results are consistent with earlier studies that found that both land and ocean surface temperature changes may be approximated as local responses to global mean radiative forcing. The modeled heat transport anomaly has large impacts on surface heat fluxes but small impacts on precipitation, circulation, and cloud radiative forcing compared with the impacts of surface temperature change. No substantial nonlinearities are found in these atmospheric variables when the effects of forcing and surface temperature change are added.
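A generic two-box (land/ocean) energy balance model of the kind invoked above can be sketched as follows. This is not the authors' exact formulation; the heat capacities, feedback parameter and transport coefficient are illustrative assumptions:

```python
def two_box_ebm(forcing_land, forcing_ocean, years=50, dt=0.01,
                c_land=1e7, c_ocean=3e9, lam=1.2, k=2.0, f_land=0.3):
    """Integrate a simple two-box land/ocean energy balance model.

    c_land, c_ocean : areal heat capacities (J m-2 K-1), illustrative
    lam             : climate feedback parameter (W m-2 K-1)
    k               : land-ocean heat transport coefficient (W m-2 K-1)
    f_land          : land fraction of the surface
    Returns land and ocean temperature anomalies (K).
    """
    Tl = To = 0.0
    sec_per_yr = 3.15e7
    for _ in range(int(years / dt)):
        # transport anomaly moves heat from land to ocean when Tl > To
        H = k * (Tl - To)
        dTl = (forcing_land - lam * Tl - H) * dt * sec_per_yr / c_land
        dTo = (forcing_ocean - lam * To + H * f_land / (1 - f_land)) \
              * dt * sec_per_yr / c_ocean
        Tl, To = Tl + dTl, To + dTo
    return Tl, To

Tl, To = two_box_ebm(3.7, 3.7)
print(Tl > To)  # during transient warming the land box leads the ocean box
```

With CO2-like forcing applied to both boxes, the transport term H keeps the land warming tied to the slowly responding ocean, the behaviour the abstract describes.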
Abstract:
We study the global atmospheric budgets of mass, moisture, energy and angular momentum in the latest reanalysis from the European Centre for Medium-Range Weather Forecasts (ECMWF), ERA-Interim, for the period 1989–2008 and compare with ERA-40. Most of the measures we use indicate that the ERA-Interim reanalysis is superior in quality to ERA-40. In ERA-Interim the standard deviation of the monthly mean global dry mass of 0.7 kg m−2 (0.007%) is slightly worse than in ERA-40, and long time-scale variations in dry mass originate predominantly in the surface pressure field. The divergent winds are improved in ERA-Interim: the global standard deviation of the time-averaged dry mass budget residual is 10 kg m−2 day−1 and the quality of the cross-equatorial mass fluxes is improved. The temporal variations in the global evaporation minus precipitation (E − P) are too large but the global moisture budget residual is 0.003 kg m−2 day−1 with a spatial standard deviation of 0.3 kg m−2 day−1. Both the E − P over ocean and P − E over land are about 15% larger than the 1.1 Tg s−1 transport of water from ocean to land. The top of atmosphere (TOA) net energy losses are improved, with a value of 1 W m−2, but the meridional gradient of the TOA net energy flux is smaller than that from the Clouds and the Earth's Radiant Energy System (CERES) data. At the surface the global energy losses are worse, with a value of 7 W m−2. Over land, however, the energy loss is only 0.5 W m−2. The downwelling thermal radiation at the surface in ERA-Interim of 341 W m−2 is towards the higher end of previous estimates. The global mass-adjusted energy budget residual is 8 W m−2 with a spatial standard deviation of 11 W m−2, and the mass-adjusted atmospheric energy transport from low to high latitudes (the sum for the two hemispheres) is 9.5 PW.
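Global budget figures such as those above (residuals in kg m−2 day−1 or W m−2) are area-weighted global means; on a regular latitude-longitude grid the weight of each row is proportional to the cosine of latitude. A minimal sketch of that weighting, assuming a regular grid:

```python
import numpy as np

def global_mean(field, lats_deg):
    """Area-weighted global mean of a field(lat, lon) array on a
    regular latitude-longitude grid (weights ~ cos(latitude))."""
    w = np.cos(np.deg2rad(lats_deg))       # area weight per latitude row
    return np.average(field.mean(axis=1), weights=w)

lats = np.linspace(-89.5, 89.5, 180)
uniform = np.full((180, 360), 7.0)         # e.g. a uniform 7 W m-2 imbalance
print(global_mean(uniform, lats))          # 7.0
```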
Abstract:
A frequently used diagram summarizing the annual- and global-mean energy budget of the earth and atmosphere indicates that the irradiance reaching the top of the atmosphere from the surface, through the midinfrared atmospheric window, is 40 W m−2; this can be compared to the total outgoing longwave radiation (OLR) of about 235 W m−2. The value of 40 W m−2 was estimated in an ad hoc manner. A more detailed calculation of this component, termed here the surface transmitted irradiance (STI), is presented, using a line-by-line radiation code and 3D climatologies of temperature, humidity, cloudiness, etc. No assumption is made as to the wavelengths at which radiation from the surface can reach the top of the atmosphere. The role of the water vapor continuum is highlighted. In clear skies, if the continuum is excluded, the global- and annual-mean STI is calculated to be about 100 W m−2 with a broad maximum throughout the tropics and subtropics. When the continuum is included, the clear-sky STI is reduced to 66 W m−2, with a distinctly different geographic distribution, with a minimum in the tropics and local peaks over subtropical deserts. The inclusion of clouds reduces the STI to about 22 W m−2. The actual value is likely somewhat smaller due to processes neglected here, and an STI value of 20 W m−2 (with an estimated uncertainty of about ±20%) is suggested to be much more realistic than the previous estimate of 40 W m−2. This indicates that less than one-tenth of the OLR originates directly from the surface.
Abstract:
Nonlinear spectral transfers of kinetic energy and enstrophy, and stationary-transient interaction, are studied using global FGGE data for January 1979. It is found that the spectral transfers arise primarily from a combination, in roughly equal measure, of pure transient and mixed stationary-transient interactions. The pure transient interactions are associated with a transient eddy field which is approximately locally homogeneous and isotropic, and they appear to be consistently understood within the context of two-dimensional homogeneous turbulence. Theory based on spatial scale separation concepts suggests that the mixed interactions may be understood physically, to a first approximation, as a process of shear-induced spectral transfer of transient enstrophy along lines of constant zonal wavenumber. This essentially conservative enstrophy transfer generally involves highly nonlocal stationary-transient energy conversions. The observational analysis demonstrates that the shear-induced transient enstrophy transfer is mainly associated with intermediate-scale (zonal wavenumber m > 3) transients and is primarily to smaller (meridional) scales, so that the transient flow acts as a source of stationary energy. In quantitative terms, this transient-eddy rectification corresponds to a forcing timescale in the stationary energy budget which is of the same order of magnitude as most estimates of the damping timescale in simple stationary-wave models (5 to 15 days). Moreover, the nonlinear interactions involved are highly nonlocal and cover a wide range of transient scales of motion.
Abstract:
Many atmospheric constituents besides carbon dioxide (CO2) contribute to global warming, and it is common to compare their influence on climate in terms of radiative forcing, which measures their impact on the planetary energy budget. A number of recent studies have shown that many radiatively active constituents also have important impacts on the physiological functioning of ecosystems, and thus the 'ecosystem services' that humankind relies upon. CO2 increases have most probably increased river runoff and had generally positive impacts on plant growth where nutrients are non-limiting, whereas increases in near-surface ozone (O3) are very detrimental to plant productivity. Atmospheric aerosols increase the fraction of surface diffuse light, which is beneficial for plant growth. To illustrate these differences, we present the impact on net primary productivity and runoff of higher CO2, higher near-surface O3, and lower sulphate aerosols, and for equivalent changes in radiative forcing. We compare this with the impact of climate change alone, arising, for example, from a physiologically inactive gas such as methane (CH4). For equivalent levels of change in radiative forcing, we show that the combined climate and physiological impacts of these individual agents vary markedly and in some cases actually differ in sign. This study highlights the need to develop more informative metrics of the impact of changing atmospheric constituents that go beyond simple radiative forcing.
Abstract:
The study of the mechanical energy budget of the oceans using Lorenz available potential energy (APE) theory is based on knowledge of the adiabatically re-arranged Lorenz reference state of minimum potential energy. The compressible and nonlinear character of the equation of state for seawater has been thought to cause the reference state to be ill-defined, casting doubt on the usefulness of APE theory for investigating ocean energetics under realistic conditions. Using a method based on the volume frequency distribution of parcels as a function of temperature and salinity in the context of the seawater Boussinesq approximation, which we illustrate using climatological data, we show that compressibility effects are in fact minor. The reference state can be regarded as a well defined one-dimensional function of depth, which forms a surface in temperature, salinity and density space between the surface and the bottom of the ocean. For a very small proportion of water masses, this surface can be multivalued and water parcels can have up to two statically stable levels in the reference density profile, of which the shallowest is energetically more accessible. Classifying parcels from the surface to the bottom gives a different reference density profile than classifying in the opposite direction. However, this difference is negligible. We show that the reference state obtained by standard sorting methods is equivalent, though computationally more expensive, to the volume frequency distribution approach. The approach we present can be applied systematically and in a computationally efficient manner to investigate the APE budget of the ocean circulation using models or climatological data.
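The standard sorting construction of the Lorenz reference state mentioned above can be sketched directly: parcels are adiabatically re-arranged from lightest to densest and stacked by cumulative volume, giving a reference density profile that is monotonic in depth. A minimal sketch (the volume-frequency-distribution approach of the abstract is an efficient equivalent of this):

```python
import numpy as np

def reference_profile(density, volume):
    """Reference density profile from sorting: lightest parcels at the
    surface, densest at the bottom, stacked by cumulative volume."""
    order = np.argsort(density)
    rho_sorted = density[order]
    # fractional depth coordinate: 0 = surface, 1 = bottom
    depth_frac = np.cumsum(volume[order]) / volume.sum()
    return depth_frac, rho_sorted

# Four illustrative parcels (densities in kg m-3, volumes arbitrary)
rho = np.array([1027.0, 1025.5, 1026.2, 1028.1])
vol = np.array([1.0, 2.0, 1.0, 1.0])
z, rho_ref = reference_profile(rho, vol)
print(rho_ref)  # non-decreasing with depth, as a reference state requires
```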
Abstract:
Changes in the depth of Lake Viljandi between 1940 and 1990 were simulated using a lake water and energy-balance model driven by standard monthly weather data. Catchment runoff was simulated using a one-dimensional hydrological model, with a two-layer soil, a single-layer snowpack, a simple representation of vegetation cover and similarly modest input requirements. Outflow was modelled as a function of lake level. The simulated record of lake level and outflow matched observations of lake-level variations (r = 0.78) and streamflow (r = 0.87) well. The ability of the model to capture both intra- and inter-annual variations in the behaviour of a specific lake, despite the relatively simple input requirements, makes it extremely suitable for investigations of the impacts of climate change on lake water balance.
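The "outflow as a function of lake level" closure described above can be sketched as a simple reservoir model. The linear rating curve, sill height and flux values below are hypothetical, chosen only to illustrate the structure:

```python
def simulate_lake(precip, runoff, evap, h0=2.0, c_out=0.5, h_sill=1.5, dt=1.0):
    """Daily lake-level simulation. Depths in m, fluxes in m/day.

    Outflow is a hypothetical linear function of level above a sill:
    Q = c_out * max(h - h_sill, 0).
    """
    h = h0
    levels = []
    for p, r, e in zip(precip, runoff, evap):
        q_out = c_out * max(h - h_sill, 0.0)
        h += (p + r - e - q_out) * dt   # water balance update
        levels.append(h)
    return levels

# Constant forcing: net inflow 0.011 m/day; level relaxes toward equilibrium
levels = simulate_lake([0.01] * 100, [0.005] * 100, [0.004] * 100)
print(round(levels[-1], 3))  # 1.522 = h_sill + net_inflow / c_out
```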
Abstract:
Accurate knowledge of the location and magnitude of ocean heat content (OHC) variability and change is essential for understanding the processes that govern decadal variations in surface temperature, quantifying changes in the planetary energy budget, and developing constraints on the transient climate response to external forcings. We present an overview of the temporal and spatial characteristics of OHC variability and change as represented by an ensemble of dynamical and statistical ocean reanalyses (ORAs). Spatial maps of the 0–300 m layer show large regions of the Pacific and Indian Oceans where the interannual variability of the ensemble mean exceeds ensemble spread, indicating that OHC variations are well-constrained by the available observations over the period 1993–2009. At deeper levels, the ORAs are less well-constrained by observations with the largest differences across the ensemble mostly associated with areas of high eddy kinetic energy, such as the Southern Ocean and boundary current regions. Spatial patterns of OHC change for the period 1997–2009 show good agreement in the upper 300 m and are characterized by a strong dipole pattern in the Pacific Ocean. There is less agreement in the patterns of change at deeper levels, potentially linked to differences in the representation of ocean dynamics, such as water mass formation processes. However, the Atlantic and Southern Oceans are regions in which many ORAs show widespread warming below 700 m over the period 1997–2009. Annual time series of global and hemispheric OHC change for 0–700 m show the largest spread for the data sparse Southern Hemisphere and a number of ORAs seem to be subject to large initialization ‘shock’ over the first few years. In agreement with previous studies, a number of ORAs exhibit enhanced ocean heat uptake below 300 and 700 m during the mid-1990s or early 2000s. 
The ORA ensemble mean (±1 standard deviation) of rolling 5-year trends in full-depth OHC shows a relatively steady heat uptake of approximately 0.9 ± 0.8 W m−2 (expressed relative to Earth’s surface area) between 1995 and 2002, which reduces to about 0.2 ± 0.6 W m−2 between 2004 and 2006, in qualitative agreement with recent analysis of Earth’s energy imbalance. There is a marked reduction in the ensemble spread of OHC trends below 300 m as the Argo profiling float observations become available in the early 2000s. In general, we suggest that ORAs should be treated with caution when employed to understand past ocean warming trends—especially when considering the deeper ocean where there is little in the way of observational constraints. The current work emphasizes the need to better observe the deep ocean, both for providing observational constraints for future ocean state estimation efforts and also to develop improved models and data assimilation methods.
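Rolling 5-year trends of the kind quoted above are ordinary least-squares slopes over sliding windows, converted from an energy tendency to W m−2 relative to Earth's surface area. A minimal sketch on a synthetic OHC series (the constant-uptake series is fabricated for illustration):

```python
import numpy as np

EARTH_SURFACE_M2 = 5.1e14
SEC_PER_YR = 3.156e7

def rolling_trends_wm2(years, ohc_joules, window=5):
    """Least-squares OHC trend (J/yr) over rolling windows, expressed
    as W m-2 relative to Earth's total surface area."""
    trends = []
    for i in range(len(years) - window + 1):
        slope = np.polyfit(years[i:i + window], ohc_joules[i:i + window], 1)[0]
        trends.append(slope / SEC_PER_YR / EARTH_SURFACE_M2)
    return trends

yrs = np.arange(1995, 2010)
# Synthetic series with a constant uptake of 0.9 W m-2 over the whole surface
ohc = 0.9 * EARTH_SURFACE_M2 * SEC_PER_YR * (yrs - yrs[0])
trends = rolling_trends_wm2(yrs, ohc)
print(round(trends[0], 2))  # 0.9
```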
Abstract:
Considering the sea ice decline in the Arctic during the last decades, polynyas are of high research interest since these features are core areas of new ice formation. The determination of ice formation requires accurate retrieval of polynya area and thin-ice thickness (TIT) distribution within the polynya. We use an established energy balance model to derive TITs with MODIS ice surface temperatures (Ts) and NCEP/DOE Reanalysis II in the Laptev Sea for two winter seasons. Improvements of the algorithm mainly concern the implementation of an iterative approach to calculate the atmospheric flux components, taking the atmospheric stratification into account. Furthermore, a sensitivity study is performed to analyze the errors of the ice thickness. The results are the following: 1) 2-m air temperatures (Ta) and Ts have the highest impact on the retrieved ice thickness; 2) an overestimation of Ta yields smaller ice thickness errors than an underestimation of Ta; 3) NCEP Ta often shows a warm bias; and 4) the mean absolute error for ice thicknesses up to 20 cm is ±4.7 cm. Based on these results, we conclude that, despite the shortcomings of the NCEP data (coarse spatial resolution and no polynyas), this data set is appropriate in combination with MODIS Ts for the retrieval of TITs up to 20 cm in the Laptev Sea region. The TIT algorithm can be applied to other polynya regions and to past and future time periods. Our TIT product is a valuable data set for verification of other model and remote sensing ice thickness data.
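The core of such an energy-balance TIT retrieval is the steady-state assumption that the conductive heat flux through thin ice balances the net atmospheric heat loss at the surface, so the thickness follows from h = k_ice (T_f − T_s) / Q_atm. A heavily simplified sketch that neglects snow cover and the iterative stability correction described above (Q_atm is assumed given):

```python
def thin_ice_thickness(t_surface, q_atm, k_ice=2.03, t_freeze=-1.86):
    """Thin-ice thickness (m) from the steady-state flux balance
    k_ice * (T_freeze - T_surface) / h = Q_atm.

    t_surface : ice surface temperature (deg C), e.g. MODIS Ts
    q_atm     : net upward atmospheric heat loss (W m-2), assumed known
    k_ice     : thermal conductivity of sea ice (W m-1 K-1)
    t_freeze  : freezing point of seawater (deg C)
    """
    if q_atm <= 0:
        raise ValueError("retrieval is valid only for net surface heat loss")
    return k_ice * (t_freeze - t_surface) / q_atm

h = thin_ice_thickness(t_surface=-15.0, q_atm=200.0)
print(round(h, 3))  # about 0.133 m, i.e. within the <20 cm validity range
```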
Abstract:
An accurate estimate of the contribution of surface longwave fluxes is important for the calculation of the surface radiation budget, which in turn controls all the components of the surface energy budget, such as evaporation and the sensible heat fluxes. This study evaluates the performance of various downward longwave radiation parameterizations for clear and all-sky days applied to the Sertãozinho region in São Paulo, Brazil. Equations have been adjusted to the observations of longwave radiation. The adjusted equations were evaluated for every hour throughout the day, and the results showed good fits for most of the day, except near dawn and sunset, followed by nighttime. The seasonal variation was studied by comparing the dry period against the rainy period in the dataset. Least-squares linear regressions for the dry period and for the rainy period each yielded coefficients equal to those found for the complete period. It is expected that the equation best fitted to the observed data for this site can be used to produce estimates in other regions of the State of São Paulo, where such information is not available.
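Clear-sky downward longwave parameterizations of this kind typically express an effective atmospheric emissivity as a function of screen-level temperature and vapour pressure, with coefficients fitted by least squares. A sketch using the classical Brunt form Ld = σT⁴(a + b√e) as an example (the specific form and the synthetic "observations" are illustrative assumptions, not the parameterizations evaluated in the study):

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

def fit_brunt(t_air_k, vap_pressure_hpa, ld_obs):
    """Fit a, b in the Brunt form Ld = sigma*T^4*(a + b*sqrt(e)) by
    linear least squares on the effective emissivity Ld/(sigma*T^4)."""
    eps = ld_obs / (SIGMA * t_air_k**4)
    A = np.column_stack([np.ones_like(eps), np.sqrt(vap_pressure_hpa)])
    (a, b), *_ = np.linalg.lstsq(A, eps, rcond=None)
    return a, b

# Synthetic observations generated with a = 0.52, b = 0.065 (illustrative)
rng = np.random.default_rng(0)
T = rng.uniform(280.0, 305.0, 50)   # air temperature, K
e = rng.uniform(10.0, 35.0, 50)     # vapour pressure, hPa
ld = SIGMA * T**4 * (0.52 + 0.065 * np.sqrt(e))
a, b = fit_brunt(T, e, ld)
print(round(a, 3), round(b, 3))  # recovers 0.52 0.065
```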
Abstract:
A new class of accelerating cosmological models driven by a one-parameter version of the general Chaplygin-type equation of state is proposed. The simplified version is naturally obtained from causality considerations based on the adiabatic sound speed v_s, together with the observed accelerating stage of the universe. We show that very stringent constraints on the single free parameter alpha describing the simplified Chaplygin model can be obtained from a joint analysis involving the latest SNe Type Ia data and the recent Sloan Digital Sky Survey measurement of baryon acoustic oscillations (BAO). In our analysis we have considered separately the SNe Type Ia gold sample measured by [A.G. Riess et al., Astrophys. J. 607 (2004) 665] and the supernova legacy survey (SNLS) from [P. Astier et al., Astron. Astrophys. 447 (2006) 31]. At 95.4% c.l., we find for BAO + gold sample, 0.91 <= alpha <= 1.0 and Omega(m) = 0.28(-0.048)(+0.043), while the BAO + SNLS analysis provides 0.94 <= alpha <= 1.0 and Omega(m) = 0.27(-0.045)(+0.048). (C) 2008 Elsevier B.V. All rights reserved.
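For reference, the generalized Chaplygin-type equation of state that underlies such models is, in its standard form from the literature (the paper's "simplified" model restricts this to a single free parameter),

```latex
p = -\frac{A}{\rho^{\alpha}},
\qquad
v_s^2 = \frac{dp}{d\rho} = \frac{\alpha A}{\rho^{1+\alpha}},
```

where causality ($v_s \le c$) is what constrains the admissible range of $\alpha$, consistent with the bounds $\alpha \le 1$ quoted in the abstract.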
Abstract:
This work reports on the excited-state absorption spectrum of oxidized Cytochrome c (Fe(3+)) dissolved in water, measured with the Z-scan technique with femtosecond laser pulses. The excited-state absorption cross-sections between 460 and 560 nm were determined with the aid of a three-energy-level model. Reverse saturable absorption was observed below 520 nm, while a saturable absorption process occurs in the Q-band, located around 530 nm. Above 560 nm, a competition between saturable absorption and two-photon absorption was inferred. These results show that Cytochrome c presents distinct nonlinear behaviors, which may be useful for studying electron transfer chemistry in proteins by one- and two-photon absorption. In addition, owing to these nonlinear optical features, this molecule may be employed in applications involving photodynamic therapy and saturable absorbers. (C) 2009 Elsevier B.V. All rights reserved.