988 results for integer disaggregation


Relevance: 10.00%

Abstract:

A new technique for the harmonic analysis of current observations is described. It consists of applying a linear band-pass filter which separates the various species and removes the contribution of non-tidal effects at intertidal frequencies. The tidal constituents are then evaluated through the method of least squares. In spite of the narrowness of the filter, only three days of data are lost through the filtering procedure, and the only requirement on the data is that the time interval between samples be an integer fraction of one day. This technique is illustrated through the analysis of a few French current observations from the English Channel within the framework of INOUT. The characteristics of the main tidal constituents are given.
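
The least-squares step described above reduces, for a single constituent of known frequency, to a linear fit of in-phase and quadrature amplitudes. A minimal sketch (not the authors' code; the frequency and data below are synthetic):

```python
import math

def fit_constituent(times, values, omega):
    """Least-squares fit of y(t) = A*cos(omega*t) + B*sin(omega*t).

    Solves the 2x2 normal equations directly; returns (A, B).
    """
    scc = scs = sss = sc = ss = 0.0
    for t, y in zip(times, values):
        c, s = math.cos(omega * t), math.sin(omega * t)
        scc += c * c
        scs += c * s
        sss += s * s
        sc += y * c
        ss += y * s
    det = scc * sss - scs * scs
    A = (sc * sss - ss * scs) / det
    B = (ss * scc - sc * scs) / det
    return A, B

# Hourly samples of a synthetic M2-like signal (period ~12.42 h)
omega = 2 * math.pi / 12.42
times = [float(i) for i in range(240)]          # 10 days of hourly data
values = [1.5 * math.cos(omega * t) + 0.5 * math.sin(omega * t) for t in times]
A, B = fit_constituent(times, values, omega)
amplitude = math.hypot(A, B)
```

The full method fits all filtered species simultaneously; the 2x2 normal equations shown here generalise to one cosine/sine pair per constituent.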

Relevance: 10.00%

Abstract:

We present the first high-resolution (500 m × 500 m) gridded methane (CH4) emission inventory for Switzerland, which integrates the national emission totals reported to the United Nations Framework Convention on Climate Change (UNFCCC) and recent CH4 flux studies conducted by research groups across Switzerland. In addition to anthropogenic emissions, we also include natural and semi-natural CH4 fluxes, i.e., emissions from lakes and reservoirs, wetlands, wild animals as well as uptake by forest soils. National CH4 emissions were disaggregated using detailed geostatistical information on source locations and their spatial extent and process- or area-specific emission factors. In Switzerland, the highest CH4 emissions in 2011 originated from the agricultural sector (150 Gg CH4/yr), mainly produced by ruminants and manure management, followed by emissions from waste management (15 Gg CH4/yr) mainly from landfills and the energy sector (12 Gg CH4/yr), which was dominated by emissions from natural gas distribution. Compared to the anthropogenic sources, emissions from natural and semi-natural sources were relatively small (6 Gg CH4/yr), making up only 3 % of the total emissions in Switzerland. CH4 fluxes from agricultural soils were estimated to be not significantly different from zero (between -1.5 and 0 Gg CH4/yr), while forest soils are a CH4 sink (approx. -2.8 Gg CH4/yr), partially offsetting other natural emissions. Estimates of uncertainties are provided for the different sources, including an estimate of spatial disaggregation errors deduced from a comparison with a global (EDGAR v4.2) and a European CH4 inventory (TNO/MACC). This new spatially-explicit emission inventory for Switzerland will provide valuable input for regional scale atmospheric modeling and inverse source estimation.
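
Spatial disaggregation of a national total using a geostatistical proxy can be sketched as a simple proportional allocation; the cell names, proxy weights and split below are illustrative assumptions, not values from the inventory:

```python
def disaggregate(national_total, proxy_weights):
    """Distribute a national emission total over grid cells in
    proportion to a spatial proxy (e.g. livestock counts per cell)."""
    total_weight = sum(proxy_weights.values())
    return {cell: national_total * w / total_weight
            for cell, w in proxy_weights.items()}

# Toy example: 150 Gg CH4/yr of agricultural emissions over three cells
gridded = disaggregate(150.0, {"cell_a": 2.0, "cell_b": 1.0, "cell_c": 1.0})
```

By construction the gridded values sum back to the reported national total, which is the consistency property such inventories rely on.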

Relevance: 10.00%

Abstract:

Copper porphyrins have been recognized as natural constituents of marine sediments only within the past 5 years (Palmer and Baker, 1978, Science 201, 49-51). In that report it was suggested that these pigments may derive from, and be markers for, oxidized terrestrial organic matter redeposited in the marine environment. In the present study we describe the distribution of copper porphyrins in sediments from several north Pacific and Gulf of California DSDP/IPOD sites (Legs 56, 63, 64). These allochthonous pigments have now been found to be accompanied by identical arrays of highly dealkylated nickel etioporphyrins. Evaluation of data from this and past studies clearly reveals that there is a strong carbon-number distribution similarity between coincident Cu and Ni etioporphyrins. This homology match is taken as reflecting a common source for the tetrapyrrole ligands of this population of Cu and Ni chelates. Predepositional generation of these highly dealkylated etioporphyrins is concluded from the occurrence of these pigments in sediments containing essentially all stages of in situ chlorophyll diagenesis (cf. Baker and Louda, 1983). That is, their presence is not regulated by the in situ diagenetic continuum. Thus, the highly dealkylated Cu and Ni etioporphyrins represent an 'allochthonous' background over which 'autochthonous' (viz. marine-produced) chlorophyll derivatives are deposited and are undergoing in situ diagenesis.

Relevance: 10.00%

Abstract:

We have studied the chemical zoning of plagioclase phenocrysts from the slow-spreading Mid-Atlantic Ridge and the intermediate-spreading-rate Costa Rica Rift to obtain the time scales of magmatic processes beneath these ridges. The anorthite content, Mg, and Sr in plagioclase phenocrysts from the Mid-Atlantic Ridge can be interpreted as recording initial crystallisation from a primitive magma (~11 wt% MgO) in an open system. This was followed by crystal accumulation in a mush zone and later entrainment of crystals into the erupted magma. The initial magma crystallised plagioclase more anorthitic than those in equilibrium with any erupted basalt. Evidence that the crystals accumulated in a mush zone comes from both: (1) plagioclase rims that were in equilibrium with a Sr-poor melt requiring extreme differentiation; and (2) different crystals found in the same thin section having different histories. Diffusion modelling shows that crystal residence times in the mush were <140 years, whereas the interval between mush disaggregation and eruption was ≤1.5 years. Zoning of anorthite content and Mg in plagioclase phenocrysts from the Costa Rica Rift shows that they partially or completely equilibrated with an MgO-rich melt (>11 wt%). Partial equilibration in some crystals can be modelled as starting <1 year prior to eruption, but for others longer times are required for complete equilibration. This variety of times is most readily explained if the mixing occurred in a mush zone. None of the plagioclase phenocrysts from the Costa Rica Rift that we studied have Mg contents in equilibrium with their host basalt even at their rims, requiring mixing into a much more evolved magma within days of eruption.
In combination these observations suggest that at both intermediate- and slow-spreading ridges: (1) the chemical environment to which crystals are exposed changes on annual to decadal time scales; (2) plagioclase crystals record the existence of melts unlike those erupted; and (3) disaggregation of crystal mush zones appears to precede eruption, providing an efficient mechanism by which evolved interstitial melt can be mixed into erupted basalts.
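
Diffusion modelling of this kind rests on the characteristic scaling t ≈ L²/D between zone width and diffusivity. A sketch with assumed, order-of-magnitude values (not the paper's calibrated numbers):

```python
import math

def diffusion_time_years(length_m, diffusivity_m2_s):
    """Characteristic diffusive equilibration time t ~ L^2 / D, in years."""
    seconds = length_m ** 2 / diffusivity_m2_s
    return seconds / (365.25 * 24 * 3600)

# Illustrative (assumed) values: a 100-micron zone and D = 1e-17 m^2/s
t = diffusion_time_years(100e-6, 1e-17)
```

Because t scales with L², halving the preserved zoning length scale shortens the inferred residence time by a factor of four, which is why sharp zoning implies short pre-eruptive intervals.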

Relevance: 10.00%

Abstract:

Late Jurassic-early Cretaceous black shales and an overlying sequence of Albian-Campanian zeolitic claystones from the Falkland Plateau (DSDP/IPOD Leg 71, Site 511) were analyzed for tetrapyrrole pigment type and abundance. The "black shale" sequence was found to be rich in DPEP-series-dominated free-base, nickel (Ni) and, to a lesser extent, vanadyl (V=O) porphyrins. A low level of organic maturity (i.e. precatagenesis) is indicated for these strata, as nickel chelation by free-base porphyrins is only 50-75% complete, proceeding down-hole to 627 meters sub-bottom. Electronic and mass spectral data reveal that the proposed benzo-DPEP (BD) and tetrahydrobenzo-DPEP (THBD) series are present in the free-base and Ni species, as well as in the more usual V=O porphyrin arrays. Highly reducing conditions are suggested by an abundance of the PAH perylene, substantial amounts of the THBD/BD series and a redox equilibrium between the free-base DPEP and 7,8-dihydro-DPEP series, which exist in a 7:1 molar ratio. The Albian-Campanian claystone strata were found to be tetrapyrrole-poor, and those pigments present were typed as Cu/Ni highly dealkylated (C26 max.) etioporphyrins, thought to be derived via redeposition and oxidation of terrestrial organic matter (OM). Results from the present study are correlated with our past analyses of Jurassic-Cretaceous sediments from Atlantic margins in an effort to relate tetrapyrrole quality and quantity to basin evolution and OM sources in the proto-Atlantic.

Relevance: 10.00%

Abstract:

Particles sinking out of the euphotic zone are important vehicles of carbon export from the surface ocean. Most of the particles produce heavier aggregates by coagulating with each other before they sink. We implemented an aggregation model into the biogeochemical model of the Regional Oceanic Modelling System (ROMS) to simulate the distribution of particles in the water column and their downward transport in the Northwest African upwelling region. Accompanying settling-chamber, sediment-trap and particle-camera measurements provide data for model validation. In situ aggregate settling velocities measured by the settling chamber were around 55 m d⁻¹. Aggregate sizes recorded by the particle camera hardly exceeded 1 mm. The model is based on a continuous size spectrum of aggregates, characterised by the prognostic aggregate mass and aggregate number concentration. Phytoplankton and detritus make up the aggregation pool, which has an averaged, prognostic, size-dependent sinking speed. Model experiments were performed with dense and porous approximations of aggregates with varying maximum aggregate size and stickiness, as well as with the inclusion of a disaggregation term. Similar surface productivity was generated in all experiments in order to find the best combination of parameters that reproduces the measured deep-water fluxes. Although the experiments failed to represent surface particle number spectra, in the deep water some of them gave very similar slope and spectrum range to the particle-camera observations. Particle fluxes at the mesotrophic sediment-trap site off Cape Blanc (CB) were successfully reproduced by the porous experiment with a disaggregation term when the particle remineralisation rate was 0.2 d⁻¹. The aggregation-disaggregation model improves the prediction capability of the original biogeochemical model significantly by giving much better estimates of fluxes for both the upper and lower trap.
The results also point to the need for more studies to enhance our knowledge of particle decay and its variation, and of the role that stickiness plays in the distribution of vertical fluxes.
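
The competition between aggregation and a disaggregation term can be illustrated with a minimal two-size-class model; the rate constants and the explicit Euler integration below are illustrative assumptions, not the ROMS implementation:

```python
def step(small, large, k_agg, k_dis, dt):
    """One Euler step: small particles coagulate into large aggregates
    (quadratic in concentration); large aggregates break up again."""
    agg = k_agg * small * small
    dis = k_dis * large
    return small + dt * (dis - agg), large + dt * (agg - dis)

small, large = 10.0, 0.0
for _ in range(1000):
    small, large = step(small, large, k_agg=0.01, k_dis=0.05, dt=0.1)
total = small + large   # mass is conserved by construction
```

The two fluxes balance at equilibrium (here small = large = 5 for these assumed rates), which is the steady state the disaggregation term pushes the size spectrum toward.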

Relevance: 10.00%

Abstract:

Environmental constraints imposed on hydropower operation are usually given in the form of minimum environmental flows and maximum and minimum rates of change of flows, or ramp rates. One solution proposed to mitigate the environmental impact caused by the flows discharged by a hydropower plant, while reducing the economic impact of the above-mentioned constraints, consists in building a re-regulation reservoir, or afterbay, downstream of the power plant. Adding pumping capability between the re-regulation reservoir and the main one could contribute both to reducing the size of the re-regulation reservoir, with the consequent environmental improvement, and to improving the economic feasibility of the project, always fulfilling the environmental constraints imposed on hydropower operation. The objective of this paper is to study the contribution of a re-regulation reservoir to fulfilling the environmental constraints while reducing the economic impact of said constraints. For that purpose, a revenue-driven optimization model based on mixed integer linear programming is used. Additionally, the advantages of adding pumping capability are analysed. In order to illustrate the applicability of the methodology, a case study based on a real hydropower plant is presented.
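
The revenue-driven scheduling idea can be illustrated on a toy horizon: maximise revenue subject to a minimum environmental flow, a ramp-rate limit and a water budget. The paper uses mixed integer linear programming; the sketch below simply enumerates a tiny discrete search space, and all prices, discharge levels and limits are assumed values:

```python
from itertools import product

def best_schedule(prices, levels, q_min, max_ramp, water_budget):
    """Enumerate discharge schedules and keep the feasible one with
    maximum revenue = sum(price[t] * q[t])."""
    best, best_rev = None, float("-inf")
    for q in product(levels, repeat=len(prices)):
        if any(x < q_min for x in q):                       # minimum environmental flow
            continue
        if any(abs(a - b) > max_ramp for a, b in zip(q, q[1:])):  # ramp-rate limit
            continue
        if sum(q) > water_budget:                           # available water
            continue
        rev = sum(p * x for p, x in zip(prices, q))
        if rev > best_rev:
            best, best_rev = q, rev
    return best, best_rev

schedule, revenue = best_schedule(
    prices=[1.0, 3.0, 2.0], levels=(5, 10),
    q_min=5, max_ramp=5, water_budget=20)
```

Brute force only works for a handful of periods; a MILP formulation expresses the same constraints with binary on/off variables and scales to realistic horizons.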

Relevance: 10.00%

Abstract:

We present a new free library for Constraint Logic Programming over Finite Domains, included with the Ciao Prolog system. The library is entirely written in Prolog, leveraging Ciao's module system and code transformation capabilities in order to achieve a highly modular design without compromising performance. We describe the interface, implementation, and design rationale of each modular component. The library meets several design goals: a high level of modularity, allowing the individual components to be replaced by different versions; high efficiency, being competitive with other FD implementations; a glass-box approach, so the user can specify new constraints at different levels; and a Prolog implementation, in order to ease the integration with Ciao's code analysis components. The core is built upon two small libraries which implement integer ranges and closures. On top of that, a finite domain variable datatype is defined, taking care of constraint re-execution depending on range changes. These three libraries form what we call the FD kernel of the library. This FD kernel is used in turn to implement several higher-level finite domain constraints, specified using indexicals. Together with a labeling module this layer forms what we name the FD solver. A final level integrates the CLP(FD) paradigm with our FD solver. This is achieved using attributed variables and a compiler from the CLP(FD) language to the set of constraints provided by the solver. It should be noted that the user of the library is encouraged to work at any of those levels as seen convenient: from writing a new range module to enriching the set of FD constraints by writing new indexicals.
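
The flavour of the range/indexical kernel can be suggested in Python (the library itself is Prolog): integer ranges as (lo, hi) pairs plus bounds propagation for a single offset constraint X = Y + c, run to a fixpoint. This is a hypothetical sketch, not the library's API:

```python
def propagate_offset(ranges, x, y, c):
    """Enforce X = Y + c on (lo, hi) integer ranges, narrowing both
    variables until nothing changes (a one-constraint fixpoint)."""
    changed = True
    while changed:
        changed = False
        xlo, xhi = ranges[x]
        ylo, yhi = ranges[y]
        nx = (max(xlo, ylo + c), min(xhi, yhi + c))   # X in Y + c
        ny = (max(ylo, xlo - c), min(yhi, xhi - c))   # Y in X - c
        if nx != (xlo, xhi) or ny != (ylo, yhi):
            ranges[x], ranges[y] = nx, ny
            changed = True
    return ranges

r = propagate_offset({"X": (0, 10), "Y": (4, 9)}, "X", "Y", c=2)
```

In an indexical formulation the two narrowing rules above would be written as `X in min(Y)+2..max(Y)+2` and `Y in min(X)-2..max(X)-2`, re-executed whenever a range changes.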

Relevance: 10.00%

Abstract:

This paper presents a new hazard-consistent ground motion characterization of the Itoiz dam site, located in Northern Spain. Firstly, we propose a methodology with different approximation levels to the expected ground motion at the dam site. Secondly, we apply this methodology taking into account the particular characteristics of the site and of the dam. Hazard calculations were performed following the Probabilistic Seismic Hazard Assessment method using a logic tree, which accounts for different seismic source zonings and different ground-motion attenuation relationships. The study was done in terms of peak ground acceleration and several spectral accelerations of periods coinciding with the fundamental vibration periods of the dam. In order to estimate these ground motions we consider two different dam conditions: when the dam is empty (T = 0.1 s) and when it is filled with water to its maximum capacity (T = 0.22 s). Additionally, seismic hazard analysis is done for two return periods: 975 years, related to the project earthquake, and 4,975 years, identified with an extreme event. Soil conditions were also taken into account at the site of the dam. Through the proposed methodology we deal with different forms of characterizing ground motion at the study site. In a first step, we obtain the uniform hazard response spectra for the two return periods. In a second step, a disaggregation analysis is done in order to obtain the controlling earthquakes that can affect the dam. Subsequently, we characterize the ground motion at the dam site in terms of specific response spectra for target motions defined by the expected values SA (T) of T = 0.1 and 0.22 s for the return periods of 975 and 4,975 years, respectively. Finally, synthetic acceleration time histories for earthquake events matching the controlling parameters are generated using the discrete wave-number method and subsequently analyzed. 
Because of the relatively short distances between the controlling earthquakes and the dam site, we considered finite sources in these computations. We conclude that directivity effects should be taken into account as an important variable in this kind of study of ground-motion characteristics.
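
The disaggregation step, picking the controlling earthquake as the modal magnitude-distance bin of the hazard contributions, can be sketched as follows; the bins and weights below are hypothetical, not the Itoiz results:

```python
def controlling_earthquake(contributions):
    """Normalise hazard contributions over (magnitude, distance_km) bins
    and return the modal bin, i.e. the controlling earthquake scenario."""
    total = sum(contributions.values())
    norm = {b: c / total for b, c in contributions.items()}
    modal = max(norm, key=norm.get)
    return modal, norm[modal]

# Hypothetical contributions to the 975-year hazard at a dam site
bins = {(5.0, 10.0): 0.15, (5.5, 10.0): 0.40,
        (6.0, 30.0): 0.30, (6.5, 50.0): 0.15}
scenario, weight = controlling_earthquake(bins)
```

The modal (M, R) pair is what then parameterises the synthetic acceleration time histories for each return period.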

Relevance: 10.00%

Abstract:

The electrostatic plasma waves excited by a uniform, alternating electric field of arbitrary intensity are studied on the basis of the Vlasov equation; their dispersion relation, which involves the determinant of either of two infinite matrices, is derived. For ω0 ≫ ωpi (ω0 being the applied frequency and ωpi the ion plasma frequency) the waves may be classified in two groups, each satisfying a simple condition; this allows writing the dispersion relation in closed form. Both groups coalesce (resonance) if (a) ω0 ≈ ωpe/r (r any integer) and (b) the wavenumber k is small. A nonoscillatory instability is found; its distinction from the DuBois-Goldman instability and its physical origin are discussed. Conditions for its excitation (in particular, upper limits to ω0, k, and k·vE, vE being the field-induced electron velocity) and simple equations for the growth rate are given off-resonance and at ω0 ≈ ωpi. The dependence of both threshold and maximum growth rate on various parameters is discussed, and the results are compared with those of Silin and Nishikawa. The threshold at ω0 ≈ ωpi/r, r ≠ 1, is studied.
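
The resonance condition ω0 ≈ ωpe/r can be checked numerically by finding the integer order r that best matches a given frequency pair; the values below are arbitrary illustrative units, not from the paper:

```python
def nearest_resonance_order(omega0, omega_pe, r_max=50):
    """Return the integer r in 1..r_max for which omega_pe / r is
    closest to the applied frequency omega0."""
    return min(range(1, r_max + 1), key=lambda r: abs(omega0 - omega_pe / r))

# Example: omega_pe = 3.1 (arbitrary units), omega0 = 1.0
r = nearest_resonance_order(1.0, 3.1)
```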

Relevance: 10.00%

Abstract:

The growth of the Internet has increased the need for scalable congestion control mechanisms in high speed networks. In this context, we propose a rate-based explicit congestion control mechanism with which the sources are provided with the rate at which they can transmit. These rates are computed with a distributed max-min fair algorithm, SLBN. The novelty of SLBN is that it combines two interesting features not simultaneously present in existing proposals: scalability and fast convergence to the max-min fair rates, even under high session churn. SLBN is scalable because routers only maintain a constant amount of state information (only three integer variables per link) and only incur a constant amount of computation per protocol packet, independently of the number of sessions that cross the router. Additionally, SLBN does not require processing any data packet, and it converges independently of sessions' RTT. Finally, by design, the protocol is conservative when assigning rates, even in the presence of high churn, which helps prevent link overshoots in transient periods. We claim that, with all these features, our mechanism is a good candidate to be used in real deployments.
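
Max-min fair rates of the kind SLBN converges to can be computed centrally by the classical progressive-filling algorithm; the sketch below is that textbook algorithm, not the SLBN protocol itself, and the topology is a made-up example:

```python
def max_min_fair(links, sessions):
    """Progressive-filling max-min fairness.

    links: {link: capacity}; sessions: {session: links it crosses}.
    Repeatedly saturate the bottleneck link (the one offering the
    smallest equal share to its unfixed sessions) and fix those sessions.
    """
    rates, cap = {}, dict(links)
    active = {s: set(ls) for s, ls in sessions.items()}
    while active:
        share = {}
        for link in cap:
            users = [s for s in active if link in active[s]]
            if users:
                share[link] = cap[link] / len(users)
        bottleneck = min(share, key=share.get)
        rate = share[bottleneck]
        for s in [s for s in active if bottleneck in active[s]]:
            rates[s] = rate
            for link in active[s]:
                cap[link] -= rate      # consume capacity on every link s crosses
            del active[s]
    return rates

rates = max_min_fair({"L1": 10.0, "L2": 3.0},
                     {"s1": ["L1", "L2"], "s2": ["L1"], "s3": ["L1"]})
```

Here s1 is capped at 3 by the narrow link L2, and the residual capacity of L1 is split evenly between s2 and s3; a distributed protocol like SLBN approximates the same fixed point with per-link state only.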

Relevance: 10.00%

Abstract:

The Nakagami-m distribution is widely used for the simulation of fading channels in wireless communications. A novel, simple and extremely efficient acceptance-rejection algorithm is introduced for the generation of independent Nakagami-m random variables. The proposed method uses another Nakagami density with a half-integer value of the fading parameter, mp = n/2 ≤ m, as proposal function, from which samples can be drawn exactly and easily. This novel rejection technique is able to work with arbitrary values of m ≥ 1 and of the average path energy, Ω, and provides a higher acceptance rate than all currently available methods. Summary: an extremely efficient method to generate Nakagami random variables (used to model fading in mobile communication channels), based on rejection sampling.
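
A rejection sampler in this spirit can be sketched as follows: the half-integer proposal mp = ⌊2m⌋/2 is drawn exactly as the norm of 2mp Gaussians, and the target-to-proposal density ratio gives the acceptance test. This follows the abstract's idea but is our own minimal reconstruction, not the paper's algorithm:

```python
import math
import random

def nakagami_rejection(m, omega, rng=random):
    """Draw one Nakagami-m sample (m >= 1) by acceptance-rejection.

    Proposal: a Nakagami density with half-integer mp = floor(2m)/2 <= m
    and the same average power omega, sampled exactly as the norm of
    n = 2*mp zero-mean Gaussians with variance omega/n.  The ratio of
    target to proposal densities, normalised to a maximum of 1, is
    (x^2/omega)^d * exp(d*(1 - x^2/omega)) with d = m - mp.
    """
    n = int(math.floor(2 * m))            # n = 2*mp, an integer >= 2
    d = m - n / 2.0                       # 0 <= d < 0.5
    sigma = math.sqrt(omega / n)
    while True:
        x = math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(n)))
        u = x * x / omega
        if rng.random() <= (u ** d) * math.exp(d * (1.0 - u)):
            return x

random.seed(1)
samples = [nakagami_rejection(m=2.3, omega=1.0) for _ in range(20000)]
mean_power = sum(x * x for x in samples) / len(samples)   # E[X^2] = omega
```

Because the proposal differs from the target only by the small exponent gap d < 0.5, the acceptance probability stays high for all m; the sample mean of X² should approach Ω.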

Relevance: 10.00%

Abstract:

Following recent accounting and ethical scandals within the telecom industry, like the Gowex case, old questions are back on the table: what kind of management and control do we exercise over our businesses, and what use do we make of the specific tools at our disposal? There are indicators that, in a specific, concise, accurate and brief manner, allow us to analyze and capture the complexity of a business, and they also constitute important support when making optimal decisions. These instruments or indicators show, a priori, all relevant data from a purely economic perspective, while there also exists the possibility of including factors that are not strictly of this nature; for instance, there are indicators that take into account customer satisfaction or corporate reputation, among others. Both kinds of performance indicators together form an integral dashboard, while the purely economic side of it could be considered a basic dashboard. Based on DuPont's methodology, we can calculate a company's ROI (Return on Investment) from the disaggregation of very useful and much-needed indicators like ROE (Return on Equity) or ROA (Return on Assets); thereby, we can get to know, control and, hence, optimize the company's leverage level, its liquidity ratio or its solvency ratio, among others, as well as the yield we can obtain if our decisions and our management of the bodies of assets are optimal. Bearing in mind and making the most of the above-mentioned management tools and indicators allows us to act knowingly and with full responsibility, and to obtain the maximum planned benefits instead of leaving them to chance.
We will be able to avoid errors that can lead the company to an unfortunate and undesirable situation and, of course, we will detect, well in advance, the actual needs of the business in terms of accounting and financial sanitation before irreversible situations are reached.
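
The DuPont disaggregation referred to above is the identity ROE = net margin × asset turnover × leverage (with ROA = margin × turnover). A sketch with made-up figures, not data from any real company:

```python
def dupont_roe(net_income, sales, assets, equity):
    """DuPont identity: ROE = net margin * asset turnover * leverage.

    Returns (roe, margin, turnover, leverage); ROA = margin * turnover.
    """
    margin = net_income / sales        # profitability of each unit of sales
    turnover = sales / assets          # efficiency of the asset base
    leverage = assets / equity         # financial leverage multiplier
    return margin * turnover * leverage, margin, turnover, leverage

# Illustrative figures (assumed)
roe, margin, turnover, leverage = dupont_roe(
    net_income=120.0, sales=1000.0, assets=800.0, equity=400.0)
roa = margin * turnover                # return on assets
```

The product of the three factors equals net income over equity by construction, so the decomposition lets a manager attribute a change in ROE to margin, efficiency or leverage separately.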