41 results for "Threshold cryptographic schemes and algorithms"
Abstract:
The Weather Research and Forecasting model was applied to analyze variations in the planetary boundary layer (PBL) structure over Southeast England, including central and suburban London. The parameterizations and predictive skills of two nonlocal mixing PBL schemes, YSU and ACM2, and two local mixing PBL schemes, MYJ and MYNN2, were evaluated over a variety of stability conditions, with model predictions at a 3 km grid spacing. The PBL height predictions, which are critical for scaling turbulence and diffusion in meteorological and air quality models, show significant inter-scheme variance (> 20%), and the reasons are presented. ACM2 diagnoses the PBL height thermodynamically using the bulk Richardson number method, which leads to good agreement with the lidar data for both unstable and stable conditions. The modeled vertical profiles in the PBL, such as wind speed, turbulent kinetic energy (TKE), and heat flux, exhibit large spreads across the PBL schemes. The TKE predicted by MYJ was found to be too small and showed much less diurnal variation than the observations over London. MYNN2 produces better TKE predictions at low levels than MYJ, but its turbulent length scale increases with height in the upper part of the strongly convective PBL, where it should decrease. The local PBL schemes considerably underestimate the entrainment heat fluxes for convective cases. The nonlocal PBL schemes exhibit stronger mixing in the mean wind fields under convective conditions than the local PBL schemes and agree better with large-eddy simulation (LES) studies.
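As a concrete illustration of the bulk Richardson number diagnosis mentioned in this abstract, here is a minimal Python sketch. The critical value of 0.25 and the surface-based formulation are common textbook choices, not necessarily those used in ACM2, and all names are illustrative:

```python
import numpy as np

RI_CRIT = 0.25  # assumed critical bulk Richardson number (a common choice)
G = 9.81        # gravitational acceleration (m s^-2)

def pbl_height_bulk_richardson(z, theta_v, u, v, ri_crit=RI_CRIT):
    """Diagnose PBL height as the lowest level where the bulk Richardson
    number, computed from the surface upward, first exceeds ri_crit
    (with linear interpolation between model levels).

    z       : height above ground of each model level (m), ascending
    theta_v : virtual potential temperature at each level (K)
    u, v    : horizontal wind components at each level (m s^-1)
    """
    ri = np.zeros_like(z, dtype=float)
    wind_sq = u**2 + v**2
    # Bulk Richardson number between the surface level (index 0) and level k
    ri[1:] = (G / theta_v[0]) * (theta_v[1:] - theta_v[0]) * (z[1:] - z[0]) \
             / np.maximum(wind_sq[1:], 1e-6)
    above = np.nonzero(ri > ri_crit)[0]
    if len(above) == 0:
        return z[-1]  # PBL top not found below model top
    k = above[0]
    # Interpolate between levels k-1 and k for a smoother estimate
    frac = (ri_crit - ri[k - 1]) / (ri[k] - ri[k - 1])
    return z[k - 1] + frac * (z[k] - z[k - 1])
```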
Abstract:
Wireless video sensor networks have been a hot topic in recent years. Monitoring is the central capability of such networks, and the services they offer can be classified into three major categories: monitoring, alerting, and information on demand. These services have been applied to a large number of applications related to the environment (agriculture, water, forests and fire detection), the military, buildings, health (elderly people and home monitoring), disaster relief, and area and industrial monitoring. Security applications oriented toward critical infrastructures and disaster relief are particularly important, and many countries have identified them as critical for the near future. This paper aims to design a cross-layer protocol to provide the required quality of service for security-related applications using wireless video sensor networks. Energy saving, delay and reliability of the delivered data are crucial in the proposed application. Simulation results show that the proposed cross-layer protocol performs well in terms of providing the required quality of service for the proposed application.
Abstract:
This study investigates the quality of retail milk labelled as Jersey & Guernsey (JG) compared with milk without breed specification (NS), and the repeatability of differences over seasons and years. Sixteen brands of milk (4 Jersey & Guernsey, 12 of non-specified breed) were sampled on 4 occasions over 2 years. JG milk was associated with traits favourable for human health, such as higher total protein, total casein, α-casein, β-casein, κ-casein and α-tocopherol contents, and with unfavourable traits, such as higher concentrations of saturated fat, C12:0 and C14:0 and lower concentrations of monounsaturated fatty acids. In summer, JG milk had a higher omega-3:omega-6 ratio than NS milk. Also, the relative increase in omega-3 fatty acids and α-tocopherol from winter to summer was greater in JG milk. The latter characteristic could be of use in breeding schemes and farming systems producing niche dairy products. Seasonality had a more marked impact on the fatty acid composition of JG milk than on that of NS milk, while the opposite was found for protein composition. Potential implications of the findings for human health, producers, industry and consumers are considered.
Abstract:
Sweetness is generally a desirable taste; however, consumers can be grouped into sweet likers and dislikers according to their optimally preferred sucrose concentrations. Understanding the levels of sweetness that are acceptable and unacceptable to both consumer groups is important for product development and for influencing dietary habits. This study investigated the concentrations at which sucrose decreases liking (the rejection threshold; RjT) in liquid and semi-solid matrices. Thirty-six consumers rated their liking of 5 aqueous sucrose solutions; this identified 36% sweet likers (SL), whose liking ratings increased with increasing sucrose, and 64% sweet dislikers (SD), whose liking ratings decreased above 6% (w/v) sucrose. We hypothesized that SL and SD would have different RjT for sucrose in products. This was tested by preparing 8 levels of sucrose in orange juice and orange jelly and presenting each against the lowest level in forced-choice preference tests. In orange juice, as sucrose increased from 33 g/L to 75 g/L, the proportion of people preferring the sweeter sample increased in both groups. However, at higher sucrose levels, the proportion of consumers preferring the sweeter sample decreased. For SD, a RjT was reached at 380 g/L, whereas a significant RjT for SL was not reached. RjT were not reached in jelly, as the rated sweetness of orange jelly was significantly lower than that of orange juice (p<0.001). Despite statistically significant differences in rated sweetness between SL and SD (p=0.019), the extent of the difference between the two groups was minor, implying that sweet liker status was not substantially related to differences in sweetness perception. Self-reported dietary intakes of carbohydrate, sugars and sucrose were not significantly affected by sweet liker status; however, the failure to find an effect may be due to the small sample size, and future studies within a larger, more representative population sample are justified by the results of this study.
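The rejection threshold methodology lends itself to a short worked example. The Python sketch below applies a one-sided binomial test to forced-choice counts: the RjT is the lowest concentration at which preference for the sweeter sample falls significantly below chance. The counts here are purely illustrative, not the study's data:

```python
from scipy.stats import binomtest

# Hypothetical forced-choice counts: for each sucrose level (g/L), how many
# of n consumers preferred the sweeter sample over the lowest-level control.
n = 36
prefer_sweeter = {75: 27, 150: 22, 230: 17, 380: 10, 500: 8}  # illustrative only

for conc, k in prefer_sweeter.items():
    # One-sided test: is the proportion preferring the sweeter sample
    # significantly BELOW chance (0.5)? The lowest concentration where it
    # is defines the rejection threshold (RjT).
    p = binomtest(k, n, p=0.5, alternative="less").pvalue
    flag = "  <- candidate RjT" if p < 0.05 else ""
    print(f"{conc} g/L: {k}/{n} prefer sweeter, p = {p:.3f}{flag}")
```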
Abstract:
Two wavelet-based control variable transform schemes are described and used to model some important features of forecast error statistics for use in variational data assimilation. The first is a conventional wavelet scheme and the other is an approximation of it. Their ability to capture the position- and scale-dependent aspects of covariance structures is tested in a two-dimensional latitude-height context. This is done by comparing the covariance structures implied by the wavelet schemes with those found from the explicit forecast error covariance matrix, and with a non-wavelet-based covariance scheme currently used in an operational assimilation scheme. Qualitatively, the wavelet-based schemes show potential for modeling forecast error statistics well without giving preference to either position- or scale-dependent aspects. The degree of spectral representation can be controlled by changing the number of spectral bands in the schemes, and the smallest number of bands that achieves adequate results is found for the model domain used. Evidence is found of a trade-off between the localization of features in positional and spectral spaces when the number of bands is changed. By examining implied covariance diagnostics, the wavelet-based schemes are found, on the whole, to give results that are closer to diagnostics found from the explicit matrix than the non-wavelet scheme does. Even though the covariances have the right qualities in spectral space, variances are found to be too low at some wavenumbers and vertical correlation length scales are found to be too long at most scales. The wavelet schemes are found to be good at resolving variations in position- and scale-dependent horizontal length scales, although the length scales reproduced are usually too short. The second of the wavelet-based schemes is often found to be better than the first in some important respects, but, unlike the first, it has no exact inverse transform.
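To make the notion of an "implied covariance" concrete, the following schematic (generic notation, not necessarily the authors' exact formulation) shows how a band-by-band wavelet transform induces a covariance model:

```latex
% A control variable transform U maps uncorrelated control variables chi
% to increments, so the implied background error covariance is
\[
  \delta\mathbf{x} = \mathbf{U}\boldsymbol{\chi},
  \qquad
  \mathbf{B}_{\mathrm{implied}} = \mathbf{U}\mathbf{U}^{\mathrm{T}} .
\]
% For a wavelet-based transform acting band by band over J spectral bands,
\[
  \mathbf{U}\boldsymbol{\chi}
    = \sum_{j=1}^{J} \mathbf{W}_{j}\,\boldsymbol{\Sigma}_{j}^{1/2}\,\boldsymbol{\chi}_{j},
\]
% where W_j reconstructs band j and Sigma_j carries the position-dependent
% variances and vertical covariances assigned to that band, giving B both
% scale and position dependence.
```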
Abstract:
Three naming strategies are discussed that allow the processes of a distributed application to continue being addressed by their original logical names throughout all the migrations they may be forced to undertake for performance-improvement goals. A simple centralised solution is discussed first; it exhibits a software bottleneck as the number of processes increases. Two other solutions are then considered, entailing different communication schemes and different communication overheads for the naming protocol. All these strategies are based on the facility that each process is allowed to survive after migration, even at its original site, solely to provide a forwarding service for communications that still use its obsolete address.
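A toy sketch of the forwarding facility described above may help; all class and variable names are illustrative, not taken from the paper:

```python
# After migration, the process leaves behind a stub at its old address whose
# only job is to forward messages and let senders learn the fresh binding.

class Forwarder:
    def __init__(self, new_address):
        self.new_address = new_address

    def deliver(self, message, network):
        # Forward to the current address and report it back to the sender,
        # so future messages can bypass this stub.
        network[self.new_address].deliver(message, network)
        return self.new_address  # the sender updates its cache with this

class Process:
    def __init__(self, logical_name):
        self.logical_name = logical_name
        self.inbox = []

    def deliver(self, message, network):
        self.inbox.append(message)
        return None  # the address used was already current

    def migrate(self, old_address, new_address, network):
        # Survive at the old site only as a forwarding stub.
        network[new_address] = self
        network[old_address] = Forwarder(new_address)

# Usage: a sender holding a stale address still reaches the process,
# and learns the fresh address as a side effect.
network = {}
p = Process("worker-1")
network["site-A"] = p
p.migrate("site-A", "site-B", network)
updated = network["site-A"].deliver("hello", network)  # forwarded to site-B
assert p.inbox == ["hello"] and updated == "site-B"
```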
Abstract:
Threshold error correction models are used to analyse the term structure of interest rates. The paper develops and uses a generalisation of existing models that encompasses both the Band and Equilibrium threshold models of Balke and Fomby ((1997) Threshold cointegration. Int Econ Rev 38(3):627–645), and estimates this model using a Bayesian approach. Evidence is found for threshold effects in pairs of longer rates but not in pairs of short rates. The Band threshold model is supported in preference to the Equilibrium model.
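For readers unfamiliar with the two threshold specifications, the following illustrative forms (generic notation, after Balke and Fomby 1997, not the paper's exact specification) show the distinction for a cointegrating residual z_t, such as the spread between two rates:

```latex
% Equilibrium-TAR: mean reversion (toward zero) switches on outside the band:
\[
  z_t =
  \begin{cases}
    \rho\, z_{t-1} + \varepsilon_t, & |z_{t-1}| > \theta,\\[2pt]
    z_{t-1} + \varepsilon_t,        & |z_{t-1}| \le \theta .
  \end{cases}
\]
% Band-TAR: outside the band, z_t reverts to the band edge rather than zero:
\[
  z_t =
  \begin{cases}
    \theta + \rho\,(z_{t-1} - \theta) + \varepsilon_t,  & z_{t-1} > \theta,\\[2pt]
    z_{t-1} + \varepsilon_t,                            & |z_{t-1}| \le \theta,\\[2pt]
    -\theta + \rho\,(z_{t-1} + \theta) + \varepsilon_t, & z_{t-1} < -\theta .
  \end{cases}
\]
```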
Abstract:
The paper presents the method and findings of a Delphi expert survey to assess the impact of UK government farm animal welfare policy, farm assurance schemes and major food retailer specifications on the welfare of animals on farms. Two case-study livestock production systems are considered: dairy and cage egg production. The method identifies how well the various standards perform in terms of their effects on a number of key farm animal welfare variables, and provides estimates of the impact of the three types of standard on the welfare of animals on farms, taking account of producer compliance. The study highlights that there remains considerable scope for government policy, together with farm assurance schemes, to improve the welfare of farm animals by introducing standards that address key factors affecting animal welfare and by increasing compliance among livestock producers. There is a need for more comprehensive, regular and random surveys of on-farm welfare to monitor compliance with welfare standards (legislation and welfare codes) and the welfare of farm animals over time, and a need to collect farm data on the costs of compliance with standards.
Abstract:
The World Bank, United Nations and UK Department for International Development (DfID) have spearheaded a recent global drive to regularize artisanal and small-scale mining (ASM) and provide assistance to its predominantly impoverished participants. To date, millions of dollars have been pledged toward the design of industry-specific policies and regulations, the implementation of mechanized equipment, extension services, and the launch of alternative livelihood (AL) programmes aimed at diversifying local economies. Much of this funding, however, has failed to facilitate marked improvements and has, in many cases, exacerbated problems. This paper argues that a poor understanding of artisanal mine-community dynamics and operators' needs has, in a number of cases, led to the design and implementation of inappropriate industry support schemes and interventions. The discussion focuses upon experiences from sub-Saharan Africa, where ASM is in the most rudimentary of states.
Abstract:
Inferring the spatial expansion dynamics of invading species from molecular data is notoriously difficult due to the complexity of the processes involved. For these demographic scenarios, genetic data obtained from highly variable markers may be profitably combined with specific sampling schemes and information from other sources using a Bayesian approach. The geographic range of the introduced toad Bufo marinus is still expanding in eastern and northern Australia, in each case from isolates established around 1960. A large amount of demographic and historical information is available on both expansion areas. In each area, samples were collected along a transect representing populations of different ages and genotyped at 10 microsatellite loci. Five demographic models of expansion, differing in the dispersal pattern for migrants and founders and in the number of founders, were considered. Because the demographic history is complex, we used an approximate Bayesian method, based on a rejection-regression algorithm, to formally test the relative likelihoods of the five models of expansion and to infer demographic parameters. A stepwise migration-foundation model with founder events was statistically better supported than the other four models in both expansion areas. Posterior distributions supported different dynamics of expansion in the two areas: populations in the eastern expansion area have a lower stable effective population size and were founded by a smaller number of individuals than those in the northern expansion area. Once demographically stabilized, populations exchange a substantial number of effective migrants per generation in both expansion areas, and such exchanges are larger in northern than in eastern Australia. The effective number of migrants appears to be considerably lower than that of founders in both expansion areas. We found our inferences to be relatively robust to various assumptions on marker, demographic, and historical features. The method presented here is the only robust, model-based method available so far that allows inference of complex population dynamics over such a short time scale. It also provides a basis for investigating the interplay between population dynamics, drift, and selection in invasive species.
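The rejection-regression machinery can be sketched compactly. The Python example below shows rejection sampling from the prior followed by the local linear regression adjustment of Beaumont et al. (2002); the simulator, prior and observed summaries are stand-ins rather than the toad data, and the model-choice step comparing the five expansion models is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summaries(theta):
    """Stand-in simulator: draw data under parameter theta and return summary
    statistics. In the study this role is played by simulation of
    microsatellite data under a demographic expansion model."""
    data = rng.normal(theta, 1.0, size=100)
    return np.array([data.mean(), data.std()])

# Observed summaries (illustrative values only)
s_obs = np.array([2.0, 1.0])

# 1. Rejection step: simulate from the prior, keep the closest fraction.
n_sims, keep = 50_000, 500
theta_prior = rng.uniform(-5, 5, size=n_sims)
s_sim = np.array([simulate_summaries(t) for t in theta_prior])
dist = np.linalg.norm(s_sim - s_obs, axis=1)
idx = np.argsort(dist)[:keep]
theta_acc, s_acc = theta_prior[idx], s_sim[idx]

# 2. Regression step: locally regress accepted parameters on the summary
#    discrepancies and correct them toward s_obs.
X = np.column_stack([np.ones(keep), s_acc - s_obs])
beta, *_ = np.linalg.lstsq(X, theta_acc, rcond=None)
theta_adj = theta_acc - (s_acc - s_obs) @ beta[1:]

print(f"posterior mean ~ {theta_adj.mean():.2f}")
```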
Abstract:
A finite difference scheme based on flux difference splitting is presented for the solution of the Euler equations for the compressible flow of an ideal gas. A linearised Riemann problem is defined, and a scheme based on numerical characteristic decomposition is presented for obtaining approximate solutions to the linearised problem. An average of the flow variables across the interface between cells is required, and this average is chosen to be the arithmetic mean for computational efficiency, leading to arithmetic averaging. This is in contrast to the usual 'square root' averages found in this type of Riemann solver, where the computational expense can be prohibitive. The method of upwind differencing is used for the resulting scalar problems, together with a flux limiter, to obtain a second-order scheme that avoids non-physical, spurious oscillations. The scheme is applied to a shock tube problem and a blast wave problem. Each approximate solution compares well with those given by other schemes, and for the shock tube problem it is in agreement with the exact solution.
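Schematically, the interface flux described in this abstract takes the following form (generic flux-difference-splitting notation, not necessarily the paper's exact expressions):

```latex
\[
  \mathbf{F}_{i+\frac{1}{2}}
    = \tfrac{1}{2}\left(\mathbf{F}(\mathbf{u}_i) + \mathbf{F}(\mathbf{u}_{i+1})\right)
    - \tfrac{1}{2}\sum_{k} |\tilde{\lambda}_k|\,\tilde{\alpha}_k\,\tilde{\mathbf{e}}_k ,
\]
% where the wave speeds \tilde{\lambda}_k, wave strengths \tilde{\alpha}_k and
% eigenvectors \tilde{\mathbf{e}}_k of the linearised problem are evaluated at
% the arithmetic mean state
\[
  \tilde{\mathbf{u}} = \tfrac{1}{2}\left(\mathbf{u}_i + \mathbf{u}_{i+1}\right)
\]
% instead of the usual Roe (`square root') average.
```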
Abstract:
We assessed the vulnerability of blanket peat to climate change in Great Britain using an ensemble of 8 bioclimatic envelope models. We used 4 published models, ranging from simple threshold models based on total annual precipitation to generalised linear models (GLMs) based on mean annual temperature. In addition, 4 new models were developed that included measures of water deficit, implemented as threshold, classification tree, GLM and generalised additive (GAM) models. Models that included measures of both hydrological conditions and maximum temperature provided a better fit to the mapped peat area than models based on hydrological variables alone. Under UKCIP02 projections for high (A1FI) and low (B1) greenhouse gas emission scenarios, 7 out of the 8 models showed a decline in the bioclimatic space associated with blanket peat. Eastern regions (Northumbria, the North York Moors, Orkney) were shown to be more vulnerable than higher-altitude, western areas (the Highlands, the Western Isles, and Argyll, Bute and The Trossachs). These results suggest a long-term decline in the distribution of actively growing blanket peat, especially under the high emissions scenario, although it is emphasised that existing peatlands may well persist for decades under a changing climate. Observational data from long-term monitoring and manipulation experiments, in combination with process-based models, are required to explore more fully the nature and magnitude of climate change impacts on these vulnerable areas.
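A minimal sketch of one such envelope model, a binomial GLM combining a hydrological variable with maximum temperature, is given below using Python and statsmodels; the variable names, coefficients and climate shift are all made up for illustration, not taken from the study:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data: presence/absence of blanket peat on a grid, with two
# hypothetical bioclimatic predictors.
rng = np.random.default_rng(1)
n = 1000
annual_precip = rng.uniform(600, 3000, n)   # mm
max_temp = rng.uniform(12, 28, n)           # warmest-month temperature (deg C)
logit = 0.004 * (annual_precip - 1500) - 0.8 * (max_temp - 18)
peat_present = rng.random(n) < 1 / (1 + np.exp(-logit))

# GLM with a binomial family (logistic link): the envelope combines a
# hydrological variable with maximum temperature, as in the better models.
X = sm.add_constant(np.column_stack([annual_precip, max_temp]))
model = sm.GLM(peat_present.astype(float), X,
               family=sm.families.Binomial()).fit()
print(model.params)  # fitted envelope coefficients

# Projecting under a climate scenario: shift the predictors and recompute
# the mean predicted probability of suitable bioclimatic space.
X_future = sm.add_constant(np.column_stack([annual_precip * 0.95,
                                            max_temp + 2.0]))
decline = model.predict(X).mean() - model.predict(X_future).mean()
print(f"mean suitability decline under the (made-up) scenario: {decline:.3f}")
```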
Abstract:
Many natural and technological applications generate time-ordered sequences of networks defined over a fixed set of nodes; for example, time-stamped information about 'who phoned who' or 'who came into contact with who' arises naturally in studies of communication and the spread of disease. Concepts and algorithms for static networks do not immediately carry through to this dynamic setting. For example, suppose A and B interact in the morning, and then B and C interact in the afternoon. Information, or disease, may then pass from A to C, but not vice versa. This subtlety is lost if we simply summarize using the daily aggregate network given by the chain A-B-C. However, using a natural definition of a walk on an evolving network, we show that classic centrality measures from the static setting can be extended in a computationally convenient manner. In particular, communicability indices can be computed to summarize the ability of each node to broadcast and receive information. The computations involve basic operations in linear algebra, and the asymmetry caused by time's arrow is captured naturally through the non-commutativity of matrix-matrix multiplication. Illustrative examples are given for both synthetic and real-world communication data sets. We also discuss the use of the new centrality measures for real-time monitoring and prediction.
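The construction alluded to here can be written down compactly. The Python sketch below follows the Katz-style dynamic communicability of Grindrod et al. (2011), which is assumed to be the construction this abstract describes, and reproduces the morning/afternoon A-B-C example:

```python
import numpy as np

def dynamic_communicability(adjacency_seq, a=0.1):
    """Katz-style communicability for a time-ordered sequence of adjacency
    matrices A_1, ..., A_M:

        Q = (I - a A_1)^{-1} (I - a A_2)^{-1} ... (I - a A_M)^{-1}

    Q[i, j] weights walks from i to j that respect time's arrow; because
    matrix products do not commute, reversing the time order changes Q.
    Row sums rank broadcasters, column sums rank receivers.
    """
    n = adjacency_seq[0].shape[0]
    Q = np.eye(n)
    for A in adjacency_seq:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)  # needs a < 1/rho(A)
    return Q.sum(axis=1), Q.sum(axis=0)  # broadcast, receive scores

# The example from the abstract: A-B in the morning, B-C in the afternoon.
A_morning = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
A_afternoon = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
broadcast, receive = dynamic_communicability([A_morning, A_afternoon])
# Node A can reach C (via B), but C cannot reach A: broadcast[0] > broadcast[2].
print(broadcast, receive)
```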