793 results for Probabilistic Aggregation Criteria
Abstract:
The efficiency of a Wireless Power Transfer (WPT) system is greatly dependent on both the geometry and operating frequency of the transmitting and receiving structures. By using Coupled Mode Theory (CMT), the figure of merit is calculated for resonantly-coupled loop and dipole systems. An in-depth analysis of the figure of merit is performed with respect to the key geometric parameters of the loops and dipoles, along with the resonant frequency, in order to identify the key relationships leading to high-efficiency WPT. For systems consisting of two identical single-turn loops, it is shown that the choice of both the loop radius and resonant frequency is essential in achieving high-efficiency WPT. For the dipole geometries studied, it is shown that the choice of length is largely irrelevant and that, as a result of their capacitive nature, low-MHz frequency dipoles are able to produce significantly higher figures of merit than those of the loops considered. The results of the figure of merit analysis are used to propose and subsequently compare two mid-range loop and dipole WPT systems of equal size and operating frequency, where it is shown that the dipole system is able to achieve higher efficiencies than the loop system over the distance range examined.
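For reference, the coupled mode theory figure of merit invoked here is conventionally defined, together with the corresponding optimum transfer efficiency, as below (standard CMT expressions; the abstract does not spell out the exact form used):

$$ U = \frac{\kappa}{\sqrt{\Gamma_1 \Gamma_2}}, \qquad \eta_{\max} = \frac{U^2}{\left(1 + \sqrt{1 + U^2}\right)^2}, $$

where $\kappa$ is the coupling rate between the transmitting and receiving resonators and $\Gamma_1$, $\Gamma_2$ are their intrinsic decay rates, so that high-efficiency WPT requires $U \gg 1$.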
Abstract:
The C-type lectin receptor CLEC-2 is expressed primarily on the surface of platelets, where it is present as a dimer, and is found at low levels on a subpopulation of other hematopoietic cells, including mouse neutrophils [1–4]. Clustering of CLEC-2 by the snake venom toxin rhodocytin, specific antibodies or its endogenous ligand, podoplanin, elicits powerful activation of platelets through a pathway that is similar to that used by the collagen receptor glycoprotein VI (GPVI) [4–6]. The cytosolic tail of CLEC-2 contains a conserved YxxL sequence preceded by three upstream acidic amino acid residues, which together form a novel motif known as a hemITAM. Ligand engagement induces tyrosine phosphorylation of the hemITAM sequence, providing docking sites for the tandem-SH2 domains of the tyrosine kinase Syk across a CLEC-2 receptor dimer [3]. Tyrosine phosphorylation of Syk by Src family kinases and through autophosphorylation leads to stimulation of a downstream signaling cascade that culminates in activation of phospholipase C γ2 (PLCγ2) [4,6]. Recently, CLEC-2 has been proposed to play a major role in supporting activation of platelets at arteriolar rates of flow [1]. Injection of a CLEC-2 antibody into mice causes a sustained depletion of the C-type lectin receptor from the platelet surface [1]. The CLEC-2-depleted platelets were unresponsive to rhodocytin but underwent normal aggregation and secretion responses after stimulation of other platelet receptors, including GPVI [1]. In contrast, there was a marked decrease in aggregate formation relative to controls when CLEC-2-depleted blood was flowed at arteriolar rates of shear over collagen (1000 s−1 and 1700 s−1) [1]. Furthermore, antibody treatment significantly increased tail bleeding times and mice were unable to occlude their vessels after ferric chloride injury [1]. These data provide evidence for a critical role for CLEC-2 in supporting platelet aggregation at arteriolar rates of flow. The underlying mechanism is unclear, as platelets do not express podoplanin, the only known endogenous ligand of CLEC-2. In the present study, we have investigated the role of CLEC-2 in platelet aggregation and thrombus formation using platelets from a novel mutant mouse model that lacks functional CLEC-2.
Abstract:
The archaeological site of Kharaneh IV in Jordan's Azraq Basin and its relatively near neighbour Jilat 6 show evidence of sustained occupation of substantial size through the Early to Middle Epipalaeolithic (c. 24,000 - 15,000 cal BP). Here we review the geomorphological evidence for the environmental setting in which Kharaneh IV was established. The on-site stratigraphy is clearly differentiated from surrounding sediments, marked visually as well as by higher magnetic susceptibility values. Dating and analysis of off-site sediments show that a significant wetland existed at the site prior to and during early site occupation (~ 23,000 - 19,000 BP). This may explain why such a substantial site existed at this location. This wetland, dating to the Last Glacial Maximum, also provides important information on the palaeoenvironments and potential palaeoclimatic scenarios for today's eastern Jordanian desert, from where such evidence is scarce.
Abstract:
An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but previous work suggests that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, a lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures from both peak flood wrack mark data and aerial photography captured post-peak. These measures are used in weighting the parameter space to produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method assessing the binary pattern of flooding, and a method developed in this paper to assess predictions of water surface elevation. This study finds that the water surface elevation method has better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
Abstract:
Forecasting wind power is an important part of a successful integration of wind power into the power grid. Forecasts with lead times longer than 6 h are generally made by using statistical methods to post-process forecasts from numerical weather prediction systems. Two major problems that complicate this approach are the non-linear relationship between wind speed and power production and the limited range of power production between zero and nominal power of the turbine. In practice, these problems are often tackled by using non-linear non-parametric regression models. However, such an approach ignores valuable and readily available information: the power curve of the turbine's manufacturer. Much of the non-linearity can be directly accounted for by transforming the observed power production into wind speed via the inverse power curve so that simpler linear regression models can be used. Furthermore, the fact that the transformed power production has a limited range can be taken care of by employing censored regression models. In this study, we evaluate quantile forecasts from a range of methods: (i) using parametric and non-parametric models, (ii) with and without the proposed inverse power curve transformation and (iii) with and without censoring. The results show that with our inverse (power-to-wind) transformation, simpler linear regression models with censoring perform as well as or better than non-linear models with or without the frequently used wind-to-power transformation.
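A minimal sketch of the proposed two-step idea, assuming an illustrative manufacturer power curve and synthetic data (the power-curve points, names and the simple two-sided Tobit fit below are not taken from the paper): observed power is mapped back to an equivalent wind speed through the inverse power curve, and a censored linear regression of that speed on the forecast wind speed is fitted by maximum likelihood.

```python
# Minimal sketch (not the authors' code): hypothetical power-curve points,
# synthetic data and a simple two-sided Tobit fit for illustration only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative manufacturer power curve: wind speed [m/s] -> power [kW]
pc_speed = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0])
pc_power = np.array([0.0, 150.0, 600.0, 1400.0, 1900.0, 2000.0])

def inverse_power_curve(power):
    """Equivalent wind speed from observed power (monotone part of the curve)."""
    return np.interp(power, pc_power, pc_speed)

def tobit_negloglik(params, x, y, lower, upper):
    """Negative log-likelihood of a linear model censored at both ends."""
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = a + b * x
    ll = np.where(y <= lower, norm.logcdf((lower - mu) / sigma),
         np.where(y >= upper, norm.logsf((upper - mu) / sigma),
                  norm.logpdf(y, mu, sigma)))
    return -ll.sum()

# x: forecast wind speed from NWP; y: equivalent wind speed from observed power
rng = np.random.default_rng(0)
x = rng.uniform(2.0, 14.0, 500)
wind_true = np.maximum(x + rng.normal(0.0, 1.0, x.size), 0.0)
obs_power = np.clip(4.0 * wind_true ** 2.5, 0.0, 2000.0)   # crude turbine response
y = inverse_power_curve(obs_power)

lower, upper = pc_speed[0], pc_speed[-1]   # censoring bounds in wind-speed space
res = minimize(tobit_negloglik, x0=[0.0, 1.0, 0.0], args=(x, y, lower, upper))
print("intercept, slope, sigma:", res.x[0], res.x[1], np.exp(res.x[2]))
```

Censoring at the cut-in and rated ends of the curve is what keeps the linear model sensible where power is pinned at zero or nominal.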
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
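As an illustration only (synthetic data and invented predictor values, not the hindcast system itself), a regression of this type and its probabilistic scoring can be sketched as:

```python
# Illustrative sketch of a multiple linear regression forecast with the
# CO2-equivalent concentration as leading predictor plus a climate-mode index,
# issuing a Gaussian probabilistic forecast scored with the closed-form CRPS.
import numpy as np
from scipy.stats import norm

def crps_gaussian(y, mu, sigma):
    """CRPS of a Gaussian forecast N(mu, sigma^2) against observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

rng = np.random.default_rng(1)
n = 53                                   # e.g. one season per year, 1961-2013
co2e = np.linspace(320.0, 480.0, n)      # hypothetical CO2-equivalent predictor
enso = rng.normal(0.0, 1.0, n)           # hypothetical ENSO index predictor
temp = 0.01 * co2e + 0.3 * enso + rng.normal(0.0, 0.3, n)   # predictand

X = np.column_stack([np.ones(n), co2e, enso])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
mu = X @ beta
sigma = np.std(temp - mu, ddof=X.shape[1])    # forecast spread from residuals

print("mean CRPS of the hindcasts:", crps_gaussian(temp, mu, sigma).mean())
print("correlation of seasonal means:", np.corrcoef(mu, temp)[0, 1])
```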
Abstract:
The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
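Schematically, and in generic notation rather than the paper's, the weak-constraint penalty function for Gaussian errors takes the form

$$ \mathcal{J}[\mathbf{x}_{0:K}] = \lVert \mathbf{x}_0 - \mathbf{x}_b \rVert^2_{\mathbf{B}^{-1}} + \sum_{k=1}^{K} \lVert \mathbf{x}_k - \mathcal{M}_k(\mathbf{x}_{k-1}) \rVert^2_{\mathbf{Q}_k^{-1}} + \sum_{k=0}^{K} \lVert \mathbf{y}_k - \mathcal{H}_k(\mathbf{x}_k) \rVert^2_{\mathbf{R}_k^{-1}}, $$

where the model $\mathcal{M}_k$ is allowed to be in error (the "weak" constraint, with covariance $\mathbf{Q}_k$) and $\mathcal{H}_k$ maps states to observations $\mathbf{y}_k$; the minimizer of $\mathcal{J}$ is the mode of the Gaussian posterior, which is the equivalence with the maximum-likelihood estimate that the paper rederives.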
Abstract:
In the global construction context, the best value or most economically advantageous tender is becoming a widespread approach for contractor selection, as an alternative to other traditional awarding criteria such as the lowest price. In these multi-attribute tenders, the owner or auctioneer solicits proposals containing both a price bid and additional technical features. Once the proposals are received, each bidder’s price bid is given an economic score according to a scoring rule, generally called an economic scoring formula (ESF), and a technical score according to pre-specified criteria. Finally, the contract is awarded to the bidder with the highest weighted overall score (economic + technical). However, economic scoring formula selection by auctioneers is invariably and paradoxically a highly intuitive process in practice, involving few theoretical or empirical considerations, despite having been considered traditionally and mistakenly as objective, due to its mathematical nature. This paper provides a taxonomic classification of a wide variety of ESFs and abnormally low bids criteria (ALBC) gathered in several countries with different tendering approaches. Practical implications concern the optimal design of price scoring rules in construction contract tenders, as well as future analyses of the effects of the ESF and ALBC on competitive bidding behaviour.
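As a purely illustrative example of the kind of rule such a taxonomy covers (not a formula from the paper), a common proportional ESF gives bidder $i$ with price bid $b_i$ the economic score

$$ S_i = S_{\max}\,\frac{b_{\min}}{b_i}, $$

where $b_{\min}$ is the lowest admissible bid and $S_{\max}$ the maximum economic score; the contract then goes to the highest weighted total $w_e S_i + w_t T_i$, with $T_i$ the technical score and $w_e + w_t = 1$.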
Abstract:
Idealized explicit convection simulations of the Met Office Unified Model exhibit spontaneous self-aggregation in radiative-convective equilibrium, as seen in other models in previous studies. This self-aggregation is linked to feedbacks between radiation, surface fluxes, and convection, and the organization is intimately related to the evolution of the column water vapor field. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy (MSE), following Wing and Emanuel [2014], reveals that the direct radiative feedback (including significant cloud longwave effects) is dominant in both the initial development of self-aggregation and the maintenance of an aggregated state. A low-level circulation at intermediate stages of aggregation does appear to transport MSE from drier to moister regions, but this circulation is mostly balanced by other advective effects of opposite sign and is forced by horizontal anomalies of convective heating (not radiation). Sensitivity studies with either fixed prescribed radiative cooling, fixed prescribed surface fluxes, or both do not show full self-aggregation from homogeneous initial conditions, though fixed surface fluxes do not disaggregate an initialized aggregated state. A sensitivity study in which rain evaporation is turned off shows more rapid self-aggregation, while a run with this change plus fixed radiative cooling still shows strong self-aggregation, supporting a “moisture memory” effect found in Muller and Bony [2015]. Interestingly, self-aggregation occurs even in simulations with sea surface temperatures (SSTs) of 295 K and 290 K, with direct radiative feedbacks dominating the budget of MSE variance, in contrast to results in some previous studies.
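The variance budget referred to has, following Wing and Emanuel [2014], the schematic form

$$ \frac{1}{2}\,\frac{\partial \widehat{h}'^{\,2}}{\partial t} = \widehat{h}'F_K' + \widehat{h}'N_{\mathrm{LW}}' + \widehat{h}'N_{\mathrm{SW}}' - \widehat{h}'\,\nabla_h\!\cdot\!\widehat{\mathbf{u}h}, $$

where $\widehat{h}$ is the column-integrated frozen MSE, primes denote anomalies from the horizontal mean, $F_K$ is the surface enthalpy flux, $N_{\mathrm{LW}}$ and $N_{\mathrm{SW}}$ are the column longwave and shortwave radiative source terms, and the last term is the convergence of the horizontal flux of $\widehat{h}$; the direct radiative feedback discussed above corresponds to the radiative covariance terms.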
Abstract:
Lack of access to insurance exacerbates the impact of climate variability on smallholder farmers in Africa. Unlike traditional insurance, which compensates proven agricultural losses, weather index insurance (WII) pays out in the event that a weather index is breached. In principle, WII could be provided to farmers throughout Africa. There are two data-related hurdles to this. First, most farmers do not live close enough to a rain gauge with sufficiently long record of observations. Second, mismatches between weather indices and yield may expose farmers to uncompensated losses, and insurers to unfair payouts – a phenomenon known as basis risk. In essence, basis risk results from complexities in the progression from meteorological drought (rainfall deficit) to agricultural drought (low soil moisture). In this study, we use a land-surface model to describe the transition from meteorological to agricultural drought. We demonstrate that spatial and temporal aggregation of rainfall results in a clearer link with soil moisture, and hence a reduction in basis risk. We then use an advanced statistical method to show how optimal aggregation of satellite-based rainfall estimates can reduce basis risk, enabling remotely sensed data to be utilized robustly for WII.
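A toy sketch of the aggregation argument, assuming a deliberately crude bucket model in place of the land-surface model and synthetic rainfall (all names and numbers below are illustrative): correlating soil moisture with rainfall accumulated over progressively longer windows shows the tightening rainfall/soil-moisture link that reduces basis risk.

```python
# Toy sketch (synthetic data): temporal aggregation of rainfall versus a
# soil-moisture-like quantity produced by a simple exponential bucket model.
import numpy as np

rng = np.random.default_rng(2)
days = 3 * 365
rain = rng.gamma(shape=0.3, scale=5.0, size=days)     # noisy daily rainfall

# Crude stand-in for the land-surface model: soil moisture as exponentially
# weighted past rainfall with a steady drydown.
sm = np.zeros(days)
for t in range(1, days):
    sm[t] = 0.97 * sm[t - 1] + rain[t]

for window in (1, 10, 30, 90):
    agg = np.convolve(rain, np.ones(window), mode="valid")   # running sums
    r = np.corrcoef(agg, sm[window - 1:])[0, 1]
    print(f"{window:3d}-day aggregation: correlation with soil moisture = {r:.2f}")
```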
Abstract:
Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is twofold, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partly to the difficulty of transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called “How much are you prepared to pay for a forecast?”. The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydrometeorology. The aim of this game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants’ willingness-to-pay for a forecast, the results of the game show that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performances as decision-makers.
Abstract:
The objective of this study was to evaluate the possible use of biometric testicular traits as selection criteria for young Nellore bulls using Bayesian inference to estimate heritability coefficients and genetic correlations. Multitrait analysis was performed including 17,211 records of scrotal circumference obtained during andrological assessment (SCAND) and 15,313 records of testicular volume and shape. In addition, 50,809 records of scrotal circumference at 18 mo (SC18), used as an anchor trait, were analyzed. The (co)variance components and breeding values were estimated by Gibbs sampling using the Gibbs2F90 program under an animal model that included contemporary groups as fixed effects, age of the animal as a linear covariate, and direct additive genetic effects as random effects. Heritabilities of 0.42, 0.43, 0.31, 0.20, 0.04, 0.16, 0.15, and 0.10 were obtained for SC18, SCAND, testicular volume, testicular shape, minor defects, major defects, total defects, and satisfactory andrological evaluation, respectively. The genetic correlations between SC18 and the other traits were 0.84 (SCAND), 0.75 (testicular shape), 0.44 (testicular volume), -0.23 (minor defects), -0.16 (major defects), -0.24 (total defects), and 0.56 (satisfactory andrological evaluation). Genetic correlations of 0.94 and 0.52 were obtained between SCAND and testicular volume and shape, respectively, and of 0.52 between testicular volume and testicular shape. In addition to favorable genetic parameter estimates, SC18 was found to be the most advantageous testicular trait due to its easy measurement before andrological assessment of the animals, even though the utilization of biometric testicular traits as selection criteria was also found to be possible. In conclusion, SC18 and biometric testicular traits can be adopted as selection criteria to improve the fertility of young Nellore bulls.
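For orientation, the animal model sketched in the abstract (contemporary groups fixed, age as a linear covariate, direct additive genetic effects random) has the conventional multitrait form below, written in generic notation rather than copied from the paper:

$$ \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{e}, \qquad \mathbf{a} \sim N(\mathbf{0},\, \mathbf{A} \otimes \mathbf{G}_0), \qquad \mathbf{e} \sim N(\mathbf{0},\, \mathbf{I} \otimes \mathbf{R}_0), $$

where $\mathbf{A}$ is the numerator relationship matrix and $\mathbf{G}_0$, $\mathbf{R}_0$ are the genetic and residual (co)variance matrices sampled with Gibbs2F90; each reported heritability is then $h^2 = \sigma^2_a/(\sigma^2_a + \sigma^2_e)$ for the corresponding trait.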
Abstract:
To plan testing activities, testers face the challenge of determining a strategy, including a test coverage criterion that offers an acceptable compromise between the available resources and test goals. Known theoretical properties of coverage criteria do not always help and, thus, empirical data are needed. The results of an experimental evaluation of several coverage criteria for finite state machines (FSMs) are presented, namely state and transition coverage, and initialisation fault and transition fault coverage. The first two criteria focus on FSM structure, whereas the other two focus on potential faults in FSM implementations. The authors elaborate a comparison approach that includes random generation of FSMs, construction of an adequate test suite and test minimisation for each criterion to ensure that tests are obtained in a uniform way. The last step uses an improved greedy algorithm.
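The minimisation step can be pictured with the plain greedy heuristic below (a sketch only: the paper uses an improved greedy algorithm, and the test and coverage-item names are invented). Each test is mapped to the set of coverage items it exercises, e.g. the transitions it covers or the faulty FSM implementations it distinguishes.

```python
# Sketch of greedy test-suite minimisation against a coverage criterion.
from typing import Dict, Set, List

def greedy_minimise(coverage: Dict[str, Set[str]]) -> List[str]:
    """Pick tests until every coverage item of the full suite is covered,
    always taking the test that adds the most still-uncovered items."""
    remaining = set().union(*coverage.values())
    chosen: List[str] = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gain = coverage[best] & remaining
        if not gain:            # no test adds anything new -> stop
            break
        chosen.append(best)
        remaining -= gain
    return chosen

# Toy example: four tests covering transitions t1..t5 of some FSM
suite = {
    "test_a": {"t1", "t2"},
    "test_b": {"t2", "t3", "t4"},
    "test_c": {"t4"},
    "test_d": {"t1", "t5"},
}
print(greedy_minimise(suite))   # a small adequate subset, e.g. ['test_b', 'test_d', ...]
```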
Abstract:
We investigate the critical behaviour of a probabilistic mixture of cellular automata (CA) rules 182 and 200 (in Wolfram's enumeration scheme) by mean-field analysis and Monte Carlo simulations. We find that as one CA is switched off and the other switched on by varying the single parameter of the model, the probabilistic CA (PCA) goes through an extinction-survival-type phase transition, and the numerical data indicate that it belongs to the directed percolation universality class of critical behaviour. The PCA displays a characteristic stationary density profile and a slow, diffusive dynamics close to the pure CA 200 point, which we discuss briefly. Remarks on an interesting related stochastic lattice gas are addressed in the conclusions.
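A minimal sketch, not the authors' code, of such a probabilistic mixture: at every site and time step rule 182 is applied with probability p and rule 200 with probability 1 - p on a ring, and the stationary density of occupied sites is monitored as p is varied across the transition.

```python
# Probabilistic mixture of two elementary CA rules on a ring of N sites.
import numpy as np

def rule_table(rule_number: int) -> np.ndarray:
    """Output bit of an elementary CA rule for neighbourhood codes 0..7
    (Wolfram numbering: bit k is the output for neighbourhood value k)."""
    return np.array([(rule_number >> code) & 1 for code in range(8)], dtype=np.uint8)

R182, R200 = rule_table(182), rule_table(200)

def step(state: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    left, right = np.roll(state, 1), np.roll(state, -1)
    code = 4 * left + 2 * state + right              # neighbourhood code 0..7
    use_182 = rng.random(state.size) < p             # per-site choice of rule
    return np.where(use_182, R182[code], R200[code]).astype(np.uint8)

rng = np.random.default_rng(3)
state = rng.integers(0, 2, size=10_000, dtype=np.uint8)
p = 0.6
for _ in range(2_000):
    state = step(state, p, rng)
print("stationary density of occupied sites ~", state.mean())
```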
Abstract:
We describe in this article the application of a high-density gas aggregation nanoparticle gun to the production and characterization of high anisotropy SmCo nanoparticles. We give a detailed description of the simple but efficient experimental apparatus with a focus on the microscopic processes of the gas aggregation technique. Using high values of gas flux (~45 sccm) we are able to operate in regimes of high collimation of material. In this regime, as we explain in terms of a phenomenological model, the power applied to the sputtering target becomes the main variable to change the size of the clusters. Also presented are the morphological, structural, and magnetic characterizations of SmCo nanoparticles produced using 10 and 50 W of sputtering power. These values resulted in mean sizes of ~12 and ~20 nm, respectively. Significant differences are seen in the structural and magnetic properties of the samples, with the 50 W sample showing a largely enhanced crystalline structure and magnetic anisotropy.