969 results for routing and wavelength assignment
Abstract:
With the fast development of the Internet, wireless communications and semiconductor devices, home networking has received significant attention. Consumer products can collect and transmit various types of data in the home environment. Typical consumer sensors are often equipped with tiny, irreplaceable batteries, and it is therefore of the utmost importance to design energy-efficient algorithms that prolong the home network lifetime and reduce the number of devices going to landfill. Sink mobility is an important technique for improving home network performance, including energy consumption, lifetime and end-to-end delay; it can also largely mitigate the hot spots near the sink node. The selection of an optimal moving trajectory for the sink node(s) is an NP-hard problem, and jointly optimizing routing algorithms with the mobile sink moving strategy is a significant and challenging research issue. The influence of multiple static sink nodes on energy consumption in networks of different scales is first studied, and an Energy-efficient Multi-sink Clustering Algorithm (EMCA) is proposed and tested. Then, the influence of mobile sink velocity, position and number on network performance is studied, and a Mobile-sink based Energy-efficient Clustering Algorithm (MECA) is proposed. Simulation results validate the performance of the two proposed algorithms, which can be deployed in a consumer home network environment.
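The cluster-based, mobile-sink approach described above can be sketched in a few lines. This is a minimal illustration only: the scoring rule (residual energy over distance to the sink) and both function names are hypothetical assumptions, not the actual EMCA/MECA algorithms.

```python
import math

def pick_cluster_heads(nodes, sink, n_heads):
    """Greedy cluster-head selection: favour nodes with high residual
    energy and a short distance to the (possibly mobile) sink.
    `nodes` is a list of dicts with 'pos' (x, y) and 'energy'."""
    def score(n):
        d = math.dist(n['pos'], sink)
        return n['energy'] / (1.0 + d)   # illustrative weighting
    return sorted(nodes, key=score, reverse=True)[:n_heads]

def assign_to_heads(nodes, heads):
    """Each remaining node joins its nearest cluster head."""
    clusters = {id(h): [] for h in heads}
    for n in nodes:
        if n in heads:
            continue
        nearest = min(heads, key=lambda h: math.dist(n['pos'], h['pos']))
        clusters[id(nearest)].append(n)
    return clusters
```

Re-running the selection as the sink moves would re-balance cluster heads around the sink's current position, which is the intuition behind mitigating hot spots near the sink.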
Abstract:
The chemical specificity of terahertz spectroscopy, when combined with techniques for sub-wavelength sensing, is giving new understanding of processes occurring at the nanometre scale in biological systems and offers the potential for single-molecule detection of chemical and biological agents and explosives. In addition, terahertz techniques are enabling the exploration of the fundamental behaviour of light when it interacts with nanoscale optical structures, and are being used to measure ultrafast carrier dynamics, transport and localisation in nanostructures. This chapter will explain how terahertz scale modelling can be used to explore the fundamental physics of nano-optics; it will discuss the terahertz spectroscopy of nanomaterials, terahertz near-field microscopy and other sub-wavelength techniques; and it will summarise recent developments in the terahertz spectroscopy and imaging of biological systems at the nanoscale. The potential of using these techniques for security applications will be considered.
Abstract:
Modeling the vertical penetration of photosynthetically active radiation (PAR) through the ocean, and its utilization by phytoplankton, is fundamental to simulating marine primary production. The variation of attenuation and absorption of light with wavelength suggests that photosynthesis should be modeled at high spectral resolution, but this is computationally expensive. To model primary production in global 3d models, a balance between computer time and accuracy is necessary. We investigate the effects of varying the spectral resolution of the underwater light field and the photosynthetic efficiency of phytoplankton (α∗), on primary production using a 1d coupled ecosystem ocean turbulence model. The model is applied at three sites in the Atlantic Ocean (CIS (∼60°N), PAP (∼50°N) and ESTOC (∼30°N)) to include the effect of different meteorological forcing and parameter sets. We also investigate three different methods for modeling α∗ – as a fixed constant, varying with both wavelength and chlorophyll concentration [Bricaud, A., Morel, A., Babin, M., Allali, K., Claustre, H., 1998. Variations of light absorption by suspended particles with chlorophyll a concentration in oceanic (case 1) waters. Analysis and implications for bio-optical models. J. Geophys. Res. 103, 31033–31044], and using a non-spectral parameterization [Anderson, T.R., 1993. A spectrally averaged model of light penetration and photosynthesis. Limnol. Oceanogr. 38, 1403–1419]. After selecting the appropriate ecosystem parameters for each of the three sites we vary the spectral resolution of light and α∗ from 1 to 61 wavebands and study the results in conjunction with the three different α∗ estimation methods. The results show modeled estimates of ocean primary productivity are highly sensitive to the degree of spectral resolution and α∗. 
For accurate simulations of primary production and chlorophyll distribution we recommend a spectral resolution of at least six wavebands if α∗ is a function of wavelength and chlorophyll, and three wavebands if α∗ is a fixed value.
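The spectral treatment of underwater light described above can be illustrated with a minimal sketch: each waveband is attenuated with depth by Beer–Lambert's law, and light-limited photosynthesis sums α∗(λ)·E(λ,z) over wavebands. The function names and the numbers in the usage note are illustrative assumptions, not the study's model or parameter values.

```python
import math

def par_at_depth(surface_irr, k, z):
    """Beer-Lambert attenuation per waveband: E(lambda, z) = E(lambda, 0) * exp(-k(lambda) * z).
    surface_irr and k are lists over wavebands; z is depth in metres."""
    return [e0 * math.exp(-ki * z) for e0, ki in zip(surface_irr, k)]

def light_limited_photosynthesis(surface_irr, k, alpha, z):
    """Spectral light-limited photosynthesis: sum over wavebands of
    alpha*(lambda) * E(lambda, z). With a single waveband this collapses to a
    non-spectral parameterization of the kind compared in the abstract."""
    return sum(a * e for a, e in zip(alpha, par_at_depth(surface_irr, k, z)))
```

Raising the number of wavebands refines both the attenuation and the α∗ weighting, which is why the choice of spectral resolution directly affects modelled primary production.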
Abstract:
The paper traces the evolution of the tally from a receipt for cash payments into the treasury, to proof of payments made by royal officials outside of the treasury, and finally to an assignment of revenue to be paid out by royal officials. Each of these processes is illustrated with examples drawn from the Exchequer records, and their significance for royal finance and for historians working on the Exchequer records is explained.
Processing reflexives in a second language: the timing of structural and discourse-level information
Abstract:
We report the results from two eye-movement monitoring experiments examining the processing of reflexive pronouns by proficient German-speaking learners of second language (L2) English. Our results show that the nonnative speakers initially tried to link English argument reflexives to a discourse-prominent but structurally inaccessible antecedent, thereby violating binding condition A. Our native speaker controls, in contrast, showed evidence of applying condition A immediately during processing. Together, our findings show that L2 learners’ initial focusing on a structurally inaccessible antecedent cannot be due to first language influence and is also independent of whether the inaccessible antecedent c-commands the reflexive. This suggests that unlike native speakers, nonnative speakers of English initially attempt to interpret reflexives through discourse-based coreference assignment rather than syntactic binding.
Abstract:
Bleaching spectra of the ‘fast’ and ‘medium’ optically stimulated luminescence (OSL) components of quartz are reported. A dependence of the photoionization cross-section, σ, on wavelength was observed for both the fast and medium components, and a significant difference in their responses to stimulation wavelength was found. The ratio of the fast and medium photoionization cross-sections, σfast/σmedium, varied from 30.6 under long-wavelength stimulation to 1.4 at shorter stimulation wavelengths. In the infrared, the fast and medium photoionization cross-sections were found to be sufficiently different that bleaching at raised temperatures allowed the selective removal of the fast component with negligible depletion of the medium. A method for optically separating the OSL components of quartz is suggested, based on the wavelength dependence of the photoionization cross-sections.
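The selective-bleaching idea above follows directly from first-order OSL depletion, n(t)/n(0) = exp(−σΦt): when σfast/σmedium is large, a stimulation dose can empty the fast trap while barely touching the medium one. The cross-section and dose values below are illustrative, chosen only to reproduce the 30.6 ratio quoted in the abstract; they are not the paper's measured units.

```python
import math

def remaining_fraction(sigma, photon_flux, t):
    """First-order OSL depletion: n(t)/n(0) = exp(-sigma * Phi * t)."""
    return math.exp(-sigma * photon_flux * t)

# Hypothetical cross-sections with the large fast/medium ratio reported
# for long-wavelength stimulation (arbitrary units, for illustration).
sigma_fast, sigma_medium = 30.6, 1.0
flux_time = 0.15   # Phi * t chosen so the fast trap is nearly emptied

fast_left = remaining_fraction(sigma_fast, 1.0, flux_time)      # ~1% remains
medium_left = remaining_fraction(sigma_medium, 1.0, flux_time)  # ~86% remains
```

With these numbers the fast component is reduced to about 1% of its initial population while the medium component retains roughly 86%, which is the mechanism behind optically separating the two components.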
Abstract:
We present a summary of the principal physical and optical properties of aerosol particles measured from the FAAM BAE-146 instrumented aircraft during ADRIEX between 27 August and 6 September 2004, augmented by sunphotometer, lidar and satellite retrievals. Observations of anthropogenic aerosol, principally from industrial sources, were concentrated over the northern Adriatic Sea and over the Po Valley close to the aerosol sources. An additional flight was also carried out over the Black Sea to compare east and west European pollution. Measurements show the single-scattering albedo of dry aerosol particles to vary considerably between 0.89 and 0.97 at a wavelength of 0.55 μm, with a campaign mean within the polluted lower free troposphere of 0.92. Although aerosol concentrations varied significantly from day to day and during individual days, the shape of the aerosol size distribution was relatively consistent throughout the experiment, with no detectable difference observed between land and sea. There is evidence to suggest that the pollution aerosol within the marine boundary layer was younger than that in the elevated layer. Trends in the aerosol volume distribution show consistency with multiple-site AERONET radiometric observations. The aerosol optical depths derived from aircraft measurements show a consistent bias to lower values than both the AERONET and lidar ground-based radiometric observations, differences which can be explained by local variations in the aerosol column loading and by some aircraft instrumental artefacts. Retrievals of the aerosol optical depth and the fine-mode (<0.5 μm radius) fraction contribution to the optical depth using MODIS data from the Terra and Aqua satellites show a reasonable level of agreement with the AERONET and aircraft measurements.
Abstract:
A global river routing scheme coupled to the ECMWF land surface model is implemented and tested within the framework of the Global Soil Wetness Project II, to evaluate the feasibility of modelling global river runoff at a daily time scale. The exercise is designed to provide the benchmark river runoff predictions needed to verify the land surface model. Ten years of daily runoff produced by the HTESSEL land surface scheme are input into the TRIP2 river routing scheme in order to generate daily river runoff, which is then compared to river runoff observations from the Global Runoff Data Centre (GRDC) in order to evaluate the potential and the limitations. A notable source of inaccuracy is the bias between observed and modelled discharges, which is not primarily due to the modelling system but rather to the forcing and the quality of the observations, and which seems uncorrelated with river catchment size. A global sensitivity analysis and a Generalised Likelihood Uncertainty Estimation (GLUE) uncertainty analysis are applied to the global routing model. The groundwater delay parameter is identified as the most sensitive calibration parameter. Significant uncertainties are found in the results, and those due to the parameterisation of the routing model are quantified. The difficulty involved in parameterising global river discharge models is discussed. Detailed river runoff simulations are shown for the river Danube, which match observed river runoff well in upstream river transects. Results show that although there are errors in the runoff predictions, model results are encouraging and certainly indicative of useful runoff predictions, particularly for the purpose of verifying the land surface scheme hydrologically. The potential of this modelling system for future applications such as river runoff forecasting and climate impact studies is highlighted. Copyright © 2009 Royal Meteorological Society.
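The GLUE analysis mentioned above can be sketched generically: sample parameter sets at random, score each simulation against observations with a likelihood measure (Nash–Sutcliffe efficiency is a common choice), and retain the "behavioural" sets above a threshold. This is a minimal sketch of the general method, not the study's actual routing-model setup; the function signature and threshold are assumptions.

```python
import random

def glue(simulate, observed, param_ranges, n_samples=500, threshold=0.3, seed=0):
    """Minimal GLUE sketch: Monte-Carlo sample parameters, score each run
    with Nash-Sutcliffe efficiency (NSE), and keep 'behavioural' parameter
    sets whose NSE exceeds the threshold, paired with their likelihood."""
    rng = random.Random(seed)
    mean_obs = sum(observed) / len(observed)
    var_obs = sum((o - mean_obs) ** 2 for o in observed)
    behavioural = []
    for _ in range(n_samples):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        sim = simulate(params)
        sse = sum((s - o) ** 2 for s, o in zip(sim, observed))
        nse = 1.0 - sse / var_obs
        if nse > threshold:
            behavioural.append((nse, params))
    return behavioural
```

The spread of the behavioural parameter sets (here, for instance, a groundwater delay parameter) is then what quantifies the parameterisation uncertainty in the model output.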
Abstract:
The hypothesis that pronouns can be resolved via either the syntax or the discourse representation has played an important role in linguistic accounts of pronoun interpretation (e.g. Grodzinsky & Reinhart, 1993). We report the results of an eye-movement monitoring study investigating the relative timing of syntactically-mediated variable binding and discourse-based coreference assignment during pronoun resolution. We examined whether ambiguous pronouns are preferentially resolved via either the variable binding or coreference route, and in particular tested the hypothesis that variable binding should always be computed before coreference assignment. Participants’ eye movements were monitored while they read sentences containing a pronoun and two potential antecedents, a c-commanding quantified noun phrase and a non c-commanding proper name. Gender congruence between the pronoun and either of the two potential antecedents was manipulated as an experimental diagnostic for dependency formation. In two experiments, we found that participants’ reading times were reliably longer when the linearly closest antecedent mismatched in gender with the pronoun. These findings fail to support the hypothesis that variable binding is computed before coreference assignment, and instead suggest that antecedent recency plays an important role in affecting the extent to which a variable binding antecedent is considered. We discuss these results in relation to models of memory retrieval during sentence comprehension, and interpret the antecedent recency preference as an example of forgetting over time.
Abstract:
In Kazakhstan, a transitional nation in Central Asia, the development of public–private partnerships (PPPs) is at its early stage and increasingly of strategic importance. This case study investigates risk allocation in an ongoing project: the construction and operation of 11 kindergartens in the city of Karaganda in the concession form for 14 years. Drawing on a conceptual framework of effective risk allocation, the study identifies principal PPP risks, provides a critical assessment of how and in what way each partner bears a certain risk, highlights the reasons underpinning risk allocation decisions and delineates the lessons learned. The findings show that the government has effectively transferred most risks to the private sector partner, whilst both partners share the demand risk of childcare services and the project default risk. The strong elements of risk allocation include clear assignment of parties’ responsibilities, streamlined financing schemes and incentives to complete the main project phases on time. However, risk allocation has missed an opportunity to create incentives for service quality improvements and take advantage of economies of scale. The most controversial element of risk allocation, as the study finds, is a revenue stream that an operator is supposed to receive from the provision of services unrelated to childcare, as neither partner is able to mitigate this revenue risk. The article concludes that in the kindergartens’ PPP, the government has achieved almost complete transfer of risks to the private sector partner. However, the costs of transfer are extensive government financial outlays that seriously compromise the PPP value for money.
Abstract:
Simultaneous scintillometer measurements at multiple wavelengths (pairing visible or infrared with millimetre or radio waves) have the potential to provide estimates of path-averaged surface fluxes of sensible and latent heat. Traditionally, the equations to deduce fluxes from measurements of the refractive index structure parameter at the two wavelengths have been formulated in terms of absolute humidity. Here, it is shown that formulation in terms of specific humidity has several advantages. Specific humidity satisfies the requirement for a conserved variable in similarity theory and inherently accounts for density effects misapportioned through the use of absolute humidity. The validity and interpretation of both formulations are assessed and the analogy with open-path infrared gas analyser density corrections is discussed. Original derivations using absolute humidity to represent the influence of water vapour are shown to misrepresent the latent heat flux. The errors in the flux, which depend on the Bowen ratio (larger for drier conditions), may be of the order of 10%. The sensible heat flux is shown to remain unchanged. It is also verified that use of a single scintillometer at optical wavelengths is essentially unaffected by these new formulations. Where it may not be possible to reprocess two-wavelength results, a density correction to the latent heat flux is proposed for scintillometry, which can be applied retrospectively to reduce the error.
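The two-wavelength retrieval described above amounts to solving a small linear system: the measured refractive index structure parameter at each wavelength is a weighted combination of the temperature and humidity structure parameters. The sketch below neglects the temperature–humidity cross term and uses made-up sensitivity coefficients, so it is only a schematic of the inversion, not the formulation derived in the paper.

```python
def structure_parameters(cn2_opt, cn2_mw, A):
    """Solve the simplified two-wavelength scintillometer system
        Cn2(i) = A_T(i)**2 * C_T2 + A_q(i)**2 * C_q2   (cross term neglected)
    for the temperature (C_T2) and humidity (C_q2) structure parameters.
    A = {'opt': (A_T, A_q), 'mw': (A_T, A_q)} holds wavelength-dependent
    sensitivity coefficients (hypothetical values in this sketch)."""
    at1, aq1 = A['opt']
    at2, aq2 = A['mw']
    det = at1 ** 2 * aq2 ** 2 - at2 ** 2 * aq1 ** 2
    ct2 = (cn2_opt * aq2 ** 2 - cn2_mw * aq1 ** 2) / det
    cq2 = (at1 ** 2 * cn2_mw - at2 ** 2 * cn2_opt) / det
    return ct2, cq2
```

The paper's point sits in the coefficients: writing them in terms of specific rather than absolute humidity changes the humidity weighting and hence the retrieved latent heat flux, by up to the order of 10% in dry conditions.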
Abstract:
A biomization method, which objectively assigns individual pollen assemblages to biomes (Prentice et al., 1996), was tested using modern pollen data from Japan and applied to fossil pollen data to reconstruct palaeovegetation patterns at 6000 and 18,000 14C yr bp. Biomization started with the assignment of 135 pollen taxa to plant functional types (PFTs), and nine possible biomes were defined by specific combinations of PFTs. Biomes were correctly assigned to 54% of the 94 modern sites. Incorrect assignments occur near the altitudinal limits of individual biomes, where pollen transport from lower altitudes blurs the local pollen signals or where continuous changes in species composition characterize the range limits of biomes. As a result, the reconstructed changes in the altitudinal limits of biomes at 6000 and 18,000 14C yr bp are likely to be conservative estimates of the actual changes. The biome distribution at 6000 14C yr bp was rather similar to today's, suggesting that changes in the bioclimate of Japan have been small since the mid-Holocene. At 18,000 14C yr bp the Japanese lowlands were covered by taiga and cool mixed forests. The southward expansion of these forests and the absence of broadleaved evergreen/warm mixed forests reflect a pronounced year-round cooling.
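The core of the biomization procedure can be sketched as an affinity-score calculation in the style of Prentice et al. (1996): each biome's score sums √(p − θ) over its assigned taxa whose pollen percentage p exceeds a threshold θ, and the sample is assigned to the highest-scoring biome. The taxon-to-biome mapping in the usage test is invented for illustration; the real method works through PFTs, which this sketch collapses into direct biome taxon lists.

```python
import math

def biome_affinities(pollen_pct, biome_taxa, threshold=0.5):
    """Affinity scores: for each biome, sum sqrt(p_k - theta) over its
    assigned taxa whose pollen percentage exceeds the threshold theta
    (0.5% is a commonly used value)."""
    return {
        biome: sum(
            math.sqrt(pollen_pct.get(t, 0.0) - threshold)
            for t in taxa
            if pollen_pct.get(t, 0.0) > threshold
        )
        for biome, taxa in biome_taxa.items()
    }

def assign_biome(pollen_pct, biome_taxa, threshold=0.5):
    """Assign the sample to the biome with the highest affinity score."""
    scores = biome_affinities(pollen_pct, biome_taxa, threshold)
    return max(scores, key=scores.get)
</```

The square root damps the dominance of heavy pollen producers, and the threshold filters out trace percentages of the kind produced by long-distance transport from lower altitudes, which the abstract identifies as a source of misassignment.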
Abstract:
Pollen data from China for 6000 and 18,000 14C yr bp were compiled and used to reconstruct palaeovegetation patterns, using complete taxon lists where possible and a biomization procedure that entailed the assignment of 645 pollen taxa to plant functional types. A set of 658 modern pollen samples spanning all biomes and regions provided a comprehensive test for this procedure and showed convincing agreement between reconstructed biomes and present natural vegetation types, both geographically and in terms of the elevation gradients in mountain regions of north-eastern and south-western China. The 6000 14C yr bp map confirms earlier studies in showing that the forest biomes in eastern China were systematically shifted northwards and extended westwards during the mid-Holocene. Tropical rain forest occurred on mainland China at sites characterized today by either tropical seasonal or broadleaved evergreen/warm mixed forest. Broadleaved evergreen/warm mixed forest occurred further north than today, and at higher elevation sites within the modern latitudinal range of this biome. The northern limit of temperate deciduous forest was shifted c. 800 km north relative to today. The 18,000 14C yr bp map shows that steppe and even desert vegetation extended to the modern coast of eastern China at the last glacial maximum, replacing today’s temperate deciduous forest. Tropical forests were excluded from China and broadleaved evergreen/warm mixed forest had retreated to tropical latitudes, while taiga extended southwards to c. 43°N.
Abstract:
This letter tests the canopy height profile (CHP) methodology as a means of retrieving effective leaf area index (LAIe) and vertical vegetation profiles at the single-tree level. Waveform and discrete airborne LiDAR data from six swaths, as well as the combined data of all six swaths, were used to extract the LAIe of a single live Callitris glaucophylla tree. LAIe was extracted from the raw waveform as an intermediate step in the CHP methodology, using two different vegetation-ground reflectance ratios. Discrete-point LAIe estimates were derived from the gap probability using: 1) single ground returns and 2) all ground returns. LiDAR LAIe retrievals were subsequently compared to hemispherical photography estimates, yielding mean values within ±7% of the latter, depending on the method used. The CHP of a single dead Callitris glaucophylla tree, representing the distribution of vegetation material, was verified against a field profile manually reconstructed from convergent photographs taken with a fixed-focal-length camera. A binwise comparison of the two profiles showed very high correlation between the data, reaching an R2 of 0.86 for the CHP from the combined swaths. Using a study-area-adjusted reflectance ratio improved the correlation between the profiles, but only marginally in comparison to using an arbitrary ratio of 0.5 for the laser wavelength of 1550 nm.
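The discrete-return route to LAIe mentioned above rests on a standard gap-probability relation: P_gap is estimated as the fraction of pulses reaching the ground, and LAIe = −ln(P_gap)/k. The sketch below uses this general relation with an assumed extinction coefficient; it is not the letter's exact processing chain, and the choice between "single ground returns" and "all ground returns" simply changes which counts go into the ratio.

```python
import math

def laie_from_returns(n_ground, n_total, k=0.5):
    """Effective LAI from discrete-return LiDAR gap probability:
    P_gap = ground returns / total returns, LAIe = -ln(P_gap) / k,
    where k is an extinction coefficient (0.5 corresponds to a
    spherical leaf-angle distribution; an assumption here)."""
    if not 0 < n_ground <= n_total:
        raise ValueError("need 0 < n_ground <= n_total")
    p_gap = n_ground / n_total
    return -math.log(p_gap) / k
```

For example, if 100 of 1000 pulses reach the ground, P_gap = 0.1 and LAIe ≈ 4.6 with k = 0.5; counting "all ground returns" rather than "single ground returns" raises P_gap and therefore lowers the LAIe estimate.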
Abstract:
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. This modelling chain forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry–climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain comprises a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. 
Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to demonstrate the plausibility of the climate cost functions.
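The final step above, using the climate cost functions to optimise trajectories, reduces at evaluation time to a weighted sum: the climate impact of a candidate route is the emission at each time-region grid point multiplied by that point's cost-function value. The sketch below shows only this evaluation step with a hypothetical grid-point keying; it is not the SAAM/AEM optimisation itself.

```python
def trajectory_climate_impact(waypoints, emissions, ccf):
    """Evaluate a candidate trajectory against precomputed climate cost
    functions: total impact = sum over waypoints of CCF(grid point) * emission.
    `ccf` maps (lat_idx, lon_idx, level_idx, time_idx) grid points to a
    climate impact per unit emission (hypothetical indexing and units)."""
    return sum(ccf[p] * e for p, e in zip(waypoints, emissions))

def pick_least_impact_route(candidates, ccf):
    """Choose, among candidate (waypoints, emissions) pairs, the route
    with the smallest total climate impact."""
    return min(candidates, key=lambda c: trajectory_climate_impact(c[0], c[1], ccf))
```

An optimiser would trade this climate impact off against fuel and time costs; the four-dimensional keying reflects that the same emission has a different impact depending on where and when it is released.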