966 results for One-pass scheme
Abstract:
Site-specific meteorological forcing appropriate for applications such as urban outdoor thermal comfort simulations can be obtained using a newly coupled scheme that combines a simple slab convective boundary layer (CBL) model and urban land surface model (ULSM) (here two ULSMs are considered). The former simulates daytime CBL height, air temperature and humidity, and the latter estimates urban surface energy and water balance fluxes accounting for changes in land surface cover. The coupled models are tested at a suburban site and two rural sites, one irrigated and one unirrigated grass, in Sacramento, U.S.A. All the variables modelled compare well to measurements (e.g. coefficient of determination = 0.97 and root mean square error = 1.5 °C for air temperature). The current version is applicable to daytime conditions and needs initial state conditions for the CBL model in the appropriate range to obtain the required performance. The coupled model allows routine observations from distant sites (e.g. rural, airport) to be used to predict air temperature and relative humidity in an urban area of interest. This simple model, which can be rapidly applied, could provide urban data for applications such as air quality forecasting and building energy modelling, in addition to outdoor thermal comfort.
Abstract:
In recent years, ZigBee has been proven to be an excellent solution to create scalable and flexible home automation networks. In a home automation network, consumer devices typically collect data from a home monitoring environment and then transmit the data to an end user through multi-hop communication without the need for any human intervention. However, due to the presence of typical obstacles in a home environment, error-free reception may not be possible, particularly for power constrained devices. A mobile sink based data transmission scheme can be one solution but obstacles create significant complexities for the sink movement path determination process. Therefore, an obstacle avoidance data routing scheme is of vital importance to the design of an efficient home automation system. This paper presents a mobile sink based obstacle avoidance routing scheme for a home monitoring system. The mobile sink collects data by traversing through the obstacle avoidance path. Through ZigBee based hardware implementation and verification, the proposed scheme successfully transmits data through the obstacle avoidance path to improve network performance in terms of life span, energy consumption and reliability. This work can be applied to a wide range of intelligent pervasive consumer products and services, including robotic vacuum cleaners and personal security robots.
Abstract:
Changes in the depth of Lake Viljandi between 1940 and 1990 were simulated using a lake water and energy-balance model driven by standard monthly weather data. Catchment runoff was simulated using a one-dimensional hydrological model, with a two-layer soil, a single-layer snowpack, a simple representation of vegetation cover and similarly modest input requirements. Outflow was modelled as a function of lake level. The simulated record of lake level and outflow matched observations of lake-level variations (r = 0.78) and streamflow (r = 0.87) well. The ability of the model to capture both intra- and inter-annual variations in the behaviour of a specific lake, despite the relatively simple input requirements, makes it extremely suitable for investigations of the impacts of climate change on lake water balance.
Abstract:
This thesis is an empirically based study of the European Union’s Emissions Trading Scheme (EU ETS) and its implications in terms of corporate environmental and financial performance. The novelty of this study includes the extended scope of the data coverage, as most previous studies have examined only the power sector. The use of verified emissions data of ETS-regulated firms as the environmental compliance measure, and as a potential differentiating criterion in the stock-market valuation of EU ETS-exposed firms, is also an original aspect of this study. The study begins in Chapter 2 by introducing the background information on the emission trading system (ETS), which focuses on (i) the adoption of ETS as an environmental management instrument and (ii) the adoption of ETS by the European Union as one of its central climate policies. Chapter 3 surveys four databases that provide carbon emissions data in order to determine the most suitable source of the data to be used in the later empirical chapters. The first empirical chapter, which is also Chapter 4 of this thesis, investigates the determinants of the emissions compliance performance of the EU ETS-exposed firms through constructing the best possible performance ratio from verified emissions data and self-configuring models for a panel regression analysis. Chapter 5 examines the impacts on the EU ETS-exposed firms in terms of their equity valuation with customised portfolios and multi-factor market models. The research design takes into account the emissions allowance (EUA) price as an additional factor, as it has the most direct association with the EU ETS to control for the exposure. The final empirical Chapter 6 takes the investigation one step further, by specifically testing the degree of ETS exposure facing different sectors with sector-based portfolios and an extended multi-factor market model.
The findings from the emissions performance ratio analysis show that the business model of firms significantly influences emissions compliance, as the capital intensity has a positive association with the increasing emissions-to-emissions cap ratio. Furthermore, different sectors show different degrees of sensitivity towards the determining factors. The production factor influences the performance ratio of the Utilities sector, but not the Energy or Materials sectors. The results show that the capital intensity has a more profound influence on the Utilities sector than on the Materials sector. With regard to the financial performance impact, ETS-exposed firms as aggregate portfolios experienced a substantial underperformance during the 2001–2004 period, but not in the operating period of 2005–2011. The results of the sector-based portfolios show again the differentiating effect of the EU ETS on sectors, as one sector is priced indifferently against its benchmark, three sectors see a constant underperformance, and three sectors have altered outcomes.
Abstract:
This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that then is used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
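The perturbed-observation analysis step described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the function name, interface, and toy dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(ensemble, obs, H, R):
    """Ensemble Kalman filter analysis step with perturbed observations.

    ensemble: (n_state, n_members) forecast ensemble
    obs:      (n_obs,) observation vector
    H:        (n_obs, n_state) observation operator
    R:        (n_obs, n_obs) observation-error covariance
    """
    n_members = ensemble.shape[1]
    # Sample covariance of the forecast ensemble, interpreted as the
    # prediction error covariance
    P = np.atleast_2d(np.cov(ensemble))
    # Kalman gain built from the ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # Perturb the observations: each member gets its own noisy copy drawn
    # with covariance R, which keeps the analysis-ensemble variance from
    # collapsing below the correct value
    obs_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n_members).T
    return ensemble + K @ (obs_pert - H @ ensemble)
```

Updating each member against a shared, unperturbed observation vector (the traditional approach criticised in the abstract) would shrink the ensemble spread too much; the perturbation restores the correct analysis variance.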
Abstract:
The Surface Urban Energy and Water Balance Scheme (SUEWS) is evaluated at two locations in the UK: a dense urban site in the centre of London and a residential suburban site in Swindon. Eddy covariance observations of the turbulent fluxes are used to assess model performance over a two-year period (2011-2013). The distinct characteristics of the sites mean their surface energy exchanges differ considerably. The model suggests the largest differences can be attributed to surface cover (notably the proportion of vegetated versus impervious area) and the additional energy supplied by human activities. SUEWS performs better in summer than winter, and better at the suburban site than the dense urban site. One reason for this is the bias towards suburban summer field campaigns in observational data used to parameterise this (and other) model(s). The suitability of model parameters (such as albedo, energy use and water use) for the UK sites is considered and, where appropriate, alternative values are suggested. An alternative parameterisation for the surface conductance is implemented, which permits greater soil moisture deficits before evaporation is restricted at non-irrigated sites. Accounting for seasonal variation in the estimation of storage heat flux is necessary to obtain realistic wintertime fluxes.
Abstract:
The influence of the aspect ratio (building height/street canyon width) and the mean building height of cities on local energy fluxes and temperatures is studied by means of an Urban Canopy Model (UCM) coupled with a one-dimensional second-order turbulence closure model. The UCM presented is similar to the Town Energy Balance (TEB) model in most of its features but differs in a few important aspects. In particular, the street canyon walls are treated separately which leads to a different budget of radiation within the street canyon walls. The UCM has been calibrated using observations of incoming global and diffuse solar radiation, incoming long-wave radiation and air temperature at a site in São Paulo, Brazil. Sensitivity studies with various aspect ratios have been performed to assess their impact on urban temperatures and energy fluxes at the top of the canopy layer. In these simulations, it is assumed that the anthropogenic heat flux and latent heat fluxes are negligible. Results show that the simulated net radiation and sensible heat fluxes at the top of the canopy decrease and the stored heat increases as the aspect ratio increases. The simulated air temperature follows the behavior of the sensible heat flux. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
The General Ocean Turbulence Model (GOTM) is applied to the diagnostic turbulence field of the mixing layer (ML) over the equatorial region of the Atlantic Ocean. Two situations were investigated: rainy and dry seasons, defined, respectively, by the presence of the intertropical convergence zone and by its northward displacement. Simulations were carried out using data from a PIRATA buoy located on the equator at 23 degrees W to compute surface turbulent fluxes and from the NASA/GEWEX Surface Radiation Budget Project to close the surface radiation balance. A data assimilation scheme was used as a surrogate for the physical effects not present in the one-dimensional model. In the rainy season, results show that the ML is shallower due to the weaker surface stress and stronger stable stratification; the maximum ML depth reached during this season is around 15 m, with an averaged diurnal variation of 7 m depth. In the dry season, the stronger surface stress and the enhanced surface heat balance components enable higher mechanical production of turbulent kinetic energy and, at night, the buoyancy acts also enhancing turbulence in the first meters of depth, characterizing a deeper ML, reaching around 60 m and presenting an average diurnal variation of 30 m.
Abstract:
In this article, we present an analytical direct method, based on a Numerov three-point scheme, which is sixth order accurate and has a linear execution time on the grid dimension, to solve the discrete one-dimensional Poisson equation with Dirichlet boundary conditions. Our results should improve numerical codes used mainly in self-consistent calculations in solid state physics.
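The linear-time direct-solve idea behind this abstract can be illustrated with the standard three-point discretisation of u'' = f and a Thomas-algorithm sweep. This is only the second-order scheme (the paper's Numerov variant reaches sixth order at the same O(n) cost), and the function name and interface are assumptions:

```python
def solve_poisson_1d(f, a, b, n, ua, ub):
    """Solve u'' = f on (a, b) with u(a)=ua, u(b)=ub on n interior points,
    using the three-point scheme u_{i-1} - 2 u_i + u_{i+1} = h^2 f(x_i)
    and a linear-time tridiagonal (Thomas) solve."""
    h = (b - a) / (n + 1)
    x = [a + (i + 1) * h for i in range(n)]
    rhs = [h * h * f(xi) for xi in x]
    # Fold the known Dirichlet boundary values into the right-hand side
    rhs[0] -= ua
    rhs[-1] -= ub
    # Forward elimination (sub- and super-diagonals are 1, diagonal is -2)
    diag = [-2.0] * n
    for i in range(1, n):
        m = 1.0 / diag[i - 1]
        diag[i] -= m
        rhs[i] -= m * rhs[i - 1]
    # Back substitution
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - u[i + 1]) / diag[i]
    return x, u
```

For f = -1 on (0, 1) with zero boundary values the exact solution is u(x) = x(1-x)/2, which this scheme reproduces to rounding error because the truncation error vanishes for quadratics.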
Abstract:
In this work an efficient third-order non-linear finite difference scheme for adaptively solving hyperbolic systems of one-dimensional conservation laws is developed. The method is based on applying to the solution of the differential equation an interpolating wavelet transform at each time step, generating a multilevel representation for the solution, which is thresholded to produce a sparse point representation. The numerical fluxes obtained by a Lax-Friedrichs flux splitting are evaluated on the sparse grid by an essentially non-oscillatory (ENO) approximation, which chooses the locally smoothest stencil among all the possibilities for each point of the sparse grid. The time evolution of the differential operator is done on this sparse representation by a total variation diminishing (TVD) Runge-Kutta method. Four classical examples of initial value problems for the Euler equations of gas dynamics are accurately solved and their sparse solutions are analyzed with respect to the threshold parameters, confirming the efficiency of the wavelet transform as an adaptive grid generation technique. (C) 2008 IMACS. Published by Elsevier B.V. All rights reserved.
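The Lax-Friedrichs building block named in this abstract can be shown in isolation. The sketch below is the classical first-order Lax-Friedrichs update on a uniform periodic grid, not the paper's adaptive ENO/TVD implementation on a sparse wavelet grid; the function name is an assumption:

```python
import numpy as np

def lax_friedrichs_step(u, dt, dx, flux):
    """One classical Lax-Friedrichs step for u_t + f(u)_x = 0 on a
    periodic grid: average the neighbours, then apply the centred
    flux difference. Conservative, so sum(u) is preserved exactly."""
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return 0.5 * (up + um) - dt / (2.0 * dx) * (flux(up) - flux(um))
```

With the Burgers flux f(u) = u^2/2 and a smooth initial profile, a single step keeps the total mass unchanged to rounding error, which is the conservation property the higher-order ENO/TVD machinery in the paper also preserves.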
Abstract:
The traveling salesman problem (TSP) looks simple but is an important combinatorial problem. In this thesis I try to find the shortest tour in which each city is visited exactly once before returning to the starting city, using a multilevel graph partitioning approach. Although the TSP is NP-complete, and graph partitioning itself also belongs to the NP-complete problems, the thesis uses the k-means partitioning algorithm to divide the problem into multiple partitions; each partition is solved separately, and its solution is used to improve the overall tour by applying the Lin-Kernighan algorithm. The approach produced optimal solutions, which shows that solving the traveling salesman problem through a graph partitioning scheme works well for this NP-complete problem and can solve this intractable problem within a few minutes.
Keywords: Graph Partitioning Scheme, Traveling Salesman Problem.
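The tour-improvement step this abstract relies on can be illustrated with 2-opt, the simplest of the segment-exchange moves that Lin-Kernighan generalises. This is an illustrative sketch, not the thesis's implementation, and all names are assumptions:

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """2-opt local improvement: repeatedly reverse a tour segment
    whenever replacing edges (a,b) and (c,d) by (a,c) and (b,d)
    shortens the tour. Lin-Kernighan extends this with deeper,
    sequential k-opt moves."""
    tour = tour[:]
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):  # skip adjacent edges
                a, b = tour[i], tour[(i + 1) % n]
                c, d = tour[j], tour[(j + 1) % n]
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d]) <
                        math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d])
                        - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

For example, the crossing tour 0-2-1-3 on a unit square is uncrossed by a single 2-opt move into the optimal tour of length 4.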
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Traditional cutoff regularization schemes of the Nambu-Jona-Lasinio model limit the applicability of the model to energy-momentum scales much below the value of the regularizing cutoff. In particular, the model cannot be used to study quark matter with Fermi momenta larger than the cutoff. In the present work, an extension of the model to high temperatures and densities recently proposed by Casalbuoni, Gatto, Nardulli, and Ruggieri is used in connection with an implicit regularization scheme. This is done by making use of scaling relations of the divergent one-loop integrals that relate these integrals at different energy-momentum scales. Fixing the pion decay constant at the chiral symmetry breaking scale in the vacuum, the scaling relations predict a running coupling constant that decreases as the regularization scale increases, implementing in a schematic way the property of asymptotic freedom of quantum chromodynamics. If the regularization scale is allowed to increase with density and temperature, the coupling will decrease with density and temperature, extending in this way the applicability of the model to high densities and temperatures. These results are obtained without specifying an explicit regularization. As an illustration of the formalism, numerical results are obtained for the finite density and finite temperature quark condensate and applied to the problem of color superconductivity at high quark densities and finite temperature.
Abstract:
A renormalization scheme for the nucleon-nucleon (NN) interaction based on a subtracted T-matrix equation is proposed and applied to the one-pion-exchange potential supplemented by contact interactions. The singlet and triplet scattering lengths are given to fix the renormalized strengths of the contact interactions. With only one scaling parameter (μ), the results show an overall very good agreement with neutron-proton data, particularly for the observables related to the triplet channel. The agreement is qualitative in the 1S0 channel. Between the low-energy NN observables we have examined, the mixing parameter of the 3S1-3D1 states is the most sensitive to the scale. The scheme is renormalization group invariant for μ → ∞. © 1999 Elsevier Science B.V. All rights reserved.
Abstract:
The negative-dimensional integration method (NDIM) is revealing itself as a very useful technique for computing massless and/or massive Feynman integrals, covariant and noncovariant alike. Until now, however, the illustrative calculations done using this method have been mostly covariant scalar integrals without numerator factors. We show here how those integrals with tensorial structures also can be handled straightforwardly and easily. However, contrary to the absence of significant features in the usual approach, here the NDIM also allows us to come across surprising unsuspected bonuses. Toward this end, we present two alternative ways of working out the integrals and illustrate them by taking the easiest Feynman integrals in this category that emerge in the computation of a standard one-loop self-energy diagram. One of the novel and heretofore unsuspected bonuses is that there are degeneracies in the way one can express the final result for the referred Feynman integral.