860 results for two-stage sampling
Abstract:
Many automated negotiation models have been developed to resolve conflicts in distributed computational systems. However, the problem of finding win-win outcomes in multiattribute negotiation has not been tackled well. To address this issue, this paper presents a negotiation model, based on an evolutionary method of multiobjective optimization, that can find win-win solutions over multiple attributes without requiring the negotiating agents to reveal their private utility functions to their opponents or to a third-party mediator. Moreover, we equip our agents with a general type of utility function over interdependent attributes, which captures human intuitions well. We also develop a novel time-dependent concession strategy model that helps both sides select a final agreement from a set of win-win ones. Finally, extensive experiments confirm that our negotiation model outperforms recently developed models, and show that it is stable and efficient in finding fair win-win outcomes, a problem seldom solved by existing models. © 2012 Wiley Periodicals, Inc.
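A win-win outcome in the sense above is one that no other feasible offer Pareto-dominates. As a minimal illustration (the offers and utilities are invented; this is not the paper's evolutionary algorithm), a dominance filter over candidate offers looks like:

```python
# Hedged illustration of the Pareto-dominance test behind "win-win"
# outcomes: an offer is win-win (Pareto-efficient) if no other feasible
# offer makes one agent better off without making the other worse off.
def dominates(u, v):
    """u, v: utility tuples (one entry per agent). True if u Pareto-dominates v."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(offers):
    return [u for u in offers if not any(dominates(v, u) for v in offers)]

offers = [(0.9, 0.2), (0.6, 0.6), (0.3, 0.8), (0.5, 0.5)]
front = pareto_front(offers)   # (0.5, 0.5) is dominated by (0.6, 0.6)
```

An evolutionary multiobjective method maintains and refines such a front without either agent disclosing its utility function to the other.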
Abstract:
The visual system pools information from local samples to compute textural properties. We used a novel stimulus to investigate how signals are combined to improve estimates of global orientation. Stimuli were 29 × 29 element arrays of 4 c/deg log Gabors, spaced 1° apart. A proportion of these elements had a coherent orientation (horizontal/vertical), with the remainder assigned random orientations. The observer's task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard pattern of square checks containing potential signal elements. The other locations contained either randomly oriented elements ("noise check") or were blank ("blank check"). The distribution of signal elements was manipulated by varying the size and location of the checks within a fixed-diameter stimulus. An ideal detector would pool responses only from potential signal elements. Humans did this for medium check sizes, and for large check sizes when a signal was presented in the fovea. For small check sizes, however, pooling occurred indiscriminately over relevant and irrelevant locations. For these check sizes, thresholds for the noise-check and blank-check conditions were similar, suggesting that the limiting noise is not induced by the response to the noise elements. The results are described by a model that filters the stimulus at the potential target orientations and then combines the signals over space in two stages. The first is a mandatory integration of local signals over a fixed area, limited by internal noise at each location. The second is a task-dependent combination of the outputs from the first stage. © 2014 ARVO.
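The two-stage pooling scheme can be caricatured in a few lines; the region size, noise level, filter read-out, and relevance rule below are our assumptions, not the fitted model:

```python
import random

# Caricature of the two-stage pooling model described above; all parameter
# values are invented for illustration.
random.seed(0)
ELEMENTS = 29 * 29           # stimulus elements
REGION = 29                  # stage-1 integration region (fixed, mandatory)
LOCAL_NOISE = 0.5            # internal noise at each location

# local filter responses: 1 where an element matches the target orientation
responses = [1.0 if random.random() < 0.25 else 0.0 for _ in range(ELEMENTS)]

# stage 1: mandatory integration over fixed areas, with noise per location
stage1 = [
    sum(r + random.gauss(0, LOCAL_NOISE) for r in responses[i:i + REGION])
    for i in range(0, ELEMENTS, REGION)
]

# stage 2: task-dependent combination -- pool only the regions that could
# contain signal (every other region, mimicking the checkerboard checks)
relevant = stage1[::2]
decision_variable = sum(relevant)
```

Indiscriminate pooling at small check sizes corresponds to stage 2 summing over all of `stage1` rather than only the relevant entries.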
Abstract:
Parameter optimization of a two-stage Raman fibre converter (RFC) based on a phosphosilicate-core fibre is presented. The optimal operational regime was determined, and the tolerance of the converter to variations in laser parameters was analyzed. The converter was pumped by an ytterbium-doped double-clad fibre laser with a maximum output power of 3.8 W at 1061 nm. A phosphosilicate-core RFC with enhanced performance was fabricated using the results of the numerical modelling.
Abstract:
Two-stage data envelopment analysis (DEA) efficiency models identify the efficient frontier of a two-stage production process. In some two-stage processes, the inputs to the first stage are also used by the second stage; these are known as shared inputs. This paper proposes a new relational linear DEA model for measuring the efficiency score of two-stage processes with shared inputs under the constant returns-to-scale assumption. Two case studies, one from the banking industry and one from university operations, illustrate the potential applications of the proposed approach.
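For intuition, in the single-input, single-intermediate, single-output special case, CRS DEA efficiency reduces to a ratio against the best observed ratio, and the relational overall score is the product of the stage scores. A hedged toy with invented DMU data (the paper's model handles multiple and shared inputs via a linear program, which this sketch does not):

```python
# Toy two-stage efficiency under constant returns to scale, one dimension
# per stage; DMU data are hypothetical.
dmus = {
    "BankA": (10.0, 6.0, 9.0),   # (input x, intermediate z, output y)
    "BankB": (8.0, 6.4, 8.0),
    "BankC": (12.0, 6.0, 10.8),
}

best1 = max(z / x for x, z, _ in dmus.values())   # frontier ratio, stage 1
best2 = max(y / z for _, z, y in dmus.values())   # frontier ratio, stage 2

for name, (x, z, y) in dmus.items():
    e1 = (z / x) / best1          # stage-1 efficiency
    e2 = (y / z) / best2          # stage-2 efficiency
    print(f"{name}: stage1={e1:.3f} stage2={e2:.3f} overall={e1 * e2:.3f}")
```

The relational idea is visible even here: a unit can be overall-inefficient because either stage sits below its frontier.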
Abstract:
In our investigation we expand a Bertrand-Edgeworth duopoly into a two-stage game in which, during the first stage, the firms can select their rationing rule. We show that under certain conditions the efficient rationing rule is an equilibrium action of the first stage.
Abstract:
Two direct-sampling correlator-type receivers for differential chaos shift keying (DCSK) communication systems under frequency non-selective fading channels are proposed. These receivers operate on the same hardware platform with different architectures. In the first scheme, namely the sum-delay-sum (SDS) receiver, the sum of all samples in a chip period is correlated with its delayed version. The correlation value obtained in each bit period is then compared with a fixed threshold to decide the binary value of the recovered bit at the output. In the second scheme, namely the delay-sum-sum (DSS) receiver, the correlation of all samples with their delayed versions is calculated within each chip period. The sum of the correlation values in each bit period is then compared with the threshold to recover the data. The conventional DCSK transmitter, the frequency non-selective Rayleigh fading channel, and the two proposed receivers are modelled mathematically in the discrete-time domain. The authors evaluated the bit error rate performance of the receivers by means of both theoretical analysis and numerical simulation. The comparison shows that both proposed receivers perform well under the studied channel, that their performance improves as the number of paths increases, and that the DSS receiver outperforms the SDS one.
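The ordering difference between the two schemes can be sketched as follows; this is a simplified single-path caricature with invented parameters and function names, not the authors' discrete-time model:

```python
import random

# One bit = N reference samples followed by N data samples equal to
# bit * reference; the bit is read from the sign of the correlation
# between the two halves.
random.seed(1)
N = 64                       # samples per half-bit period
CHIP = 8                     # samples per chip
bit = 1                      # transmitted bit, mapped to +1 / -1
ref = [random.gauss(0, 1) for _ in range(N)]               # chaotic reference
rx = ref + [bit * s + random.gauss(0, 0.3) for s in ref]   # received frame

def sds_decision(rx, n, chip):
    # sum-delay-sum: sum the samples inside each chip first, then correlate
    # the chip sums of the data half with those of the reference half
    sums = [sum(rx[i:i + chip]) for i in range(0, 2 * n, chip)]
    half = n // chip
    corr = sum(sums[half + j] * sums[j] for j in range(half))
    return 1 if corr > 0 else -1

def dss_decision(rx, n, chip):
    # delay-sum-sum: correlate sample-by-sample with the delayed version
    # within each chip, then sum the per-chip correlations over the bit
    per_chip = [sum(rx[n + i + k] * rx[i + k] for k in range(chip))
                for i in range(0, n, chip)]
    return 1 if sum(per_chip) > 0 else -1
```

In this noiseless-fading caricature both receivers recover the bit; the performance gap the authors report emerges under multipath fading, which the sketch does not model.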
Abstract:
In a team of multiple agents, the pursuit of a common goal is a defining characteristic. Since agents may have different capabilities, and the effects of actions may be uncertain, a common goal can generally only be achieved through careful cooperation between the agents. In this work, we propose a novel two-stage planner that combines online planning at both the team level and the individual level through a subgoal delegation scheme. The proposal brings the advantages of online planning approaches to the multi-agent setting. A number of modifications are made to the classical UCT approximation algorithm to (i) adapt it to the application domains considered, (ii) reduce the branching factor in the underlying search process, and (iii) effectively manage uncertain information about action effects using information fusion mechanisms. The proposed online multi-agent planner reduces the cost of planning and decreases the time needed to reach a goal, while significantly increasing the chance of achieving the common goal.
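At the core of UCT is the UCB1 child-selection rule; a minimal sketch follows (the planner's own modifications to UCT, such as branching-factor reduction and information fusion, are not reproduced here):

```python
import math

# UCB1 selection: balance exploitation (mean reward) against exploration
# (an uncertainty bonus that shrinks as a child is visited more often).
def ucb1_select(children, c=math.sqrt(2)):
    """children: list of (visit count, total reward); returns the index of
    the child to descend into."""
    total = sum(v for v, _ in children)
    best, best_score = 0, float("-inf")
    for i, (visits, reward) in enumerate(children):
        if visits == 0:
            return i                     # always try unvisited actions first
        score = reward / visits + c * math.sqrt(math.log(total) / visits)
        if score > best_score:
            best, best_score = i, score
    return best
```

With `ucb1_select([(10, 7.0), (5, 4.0), (2, 1.0)])` the rarely tried third child (index 2) wins despite its lower mean reward, which is the exploration behaviour UCT relies on.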
Abstract:
It is known that most problems encountered in real life involve uncertainty. In the first part of the dissertation, basic concepts and properties of Stochastic Programming, also known as Optimization under Uncertainty, are introduced to the reader. Moreover, since stochastic programs are complex to compute, we present some alternative models such as wait-and-see, the expected value problem, and the expected result of using the expected value solution. Two measures, the expected value of perfect information and the value of the stochastic solution, quantify the benefit of Stochastic Programming with respect to these alternative models. In the second part, an application that optimizes the distribution of non-perishable products, guaranteeing certain nutritional requirements at minimum cost, has been designed and implemented with the modelling system GAMS and the optimizer CPLEX. It was developed within the Hazia project, managed by the Sortarazi association and associated with the Food Bank of Biscay and the Basic Social Services of several districts of Biscay.
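The measures mentioned can be computed directly for a toy two-stage problem. In a hedged newsvendor-style sketch (all data invented), WS is the wait-and-see value, RP the recourse (stochastic) optimum, EEV the expected result of using the expected-value solution, EVPI = WS − RP, and VSS = RP − EEV:

```python
# Toy two-stage problem: choose an order quantity q before demand is known.
COST, PRICE = 1.0, 2.5
scenarios = [(80, 0.4), (100, 0.3), (120, 0.3)]   # (demand, probability)

def profit(q, d):
    return PRICE * min(q, d) - COST * q

def expected_profit(q):
    return sum(p * profit(q, d) for d, p in scenarios)

candidates = [d for d, _ in scenarios]
# RP: best here-and-now decision of the recourse (stochastic) problem
rp = max(expected_profit(q) for q in candidates)
# WS: wait-and-see, decide after observing demand (optimal q equals d)
ws = sum(p * profit(d, d) for d, p in scenarios)
# EEV: fix q at the expected-value solution, then evaluate under uncertainty
mean_d = sum(p * d for d, p in scenarios)
eev = expected_profit(mean_d)

evpi = ws - rp          # expected value of perfect information (>= 0)
vss = rp - eev          # value of the stochastic solution (>= 0)
```

Both quantities are nonnegative by construction: perfect information can only help, and the stochastic solution can only beat the expected-value shortcut.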
Abstract:
We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage, where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which does not require crack-path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge-split operations driven by a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
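In one dimension, a screened-Poisson localization limiter amounts to a tridiagonal solve; a hedged sketch follows (the discretization and boundary treatment are our choices, not the paper's):

```python
# Solve the screened Poisson equation -l^2 u'' + u = d on a uniform grid
# with zero-flux boundaries, which spreads a sharp damage spike d over
# the length scale l.
def screened_poisson_1d(d, l, h):
    d = list(d)                            # keep the caller's data intact
    n = len(d)
    a = (l / h) ** 2
    # tridiagonal system: -a*u[i-1] + (1 + 2a)*u[i] - a*u[i+1] = d[i]
    diag = [1 + 2 * a] * n
    diag[0] = diag[-1] = 1 + a             # zero-flux (natural) boundaries
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n):
        w = -a / diag[i - 1]
        diag[i] -= w * -a
        d[i] -= w * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] + a * u[i + 1]) / diag[i]
    return u

# a unit damage spike is diffused over its neighbourhood
u = screened_poisson_1d([0.0] * 10 + [1.0] + [0.0] * 10, l=2.0, h=1.0)
```

With zero-flux boundaries the smoothed field conserves the total of the source, so the limiter regularizes the damage without adding or removing it.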
Abstract:
Introduction: The National Oceanic and Atmospheric Administration's Biogeography Branch has conducted surveys of reef fish in the Caribbean since 1999. Surveys were initially undertaken to identify essential fish habitat, but later were used to characterize and monitor reef fish populations and benthic communities over time. The Branch's goals are to develop knowledge and products on the distribution and ecology of living marine resources and provide resource managers, scientists and the public with an improved ecosystem basis for making decisions. The Biogeography Branch monitors reef fishes and benthic communities in three study areas: (1) St. John, USVI, (2) Buck Island, St. Croix, USVI, and (3) La Parguera, Puerto Rico. In addition, the Branch has characterized the reef fish and benthic communities in the Flower Garden Banks National Marine Sanctuary, Gray's Reef National Marine Sanctuary and around the island of Vieques, Puerto Rico. Reef fish data are collected using a stratified random sampling design and stringent measurement protocols. Over time, the sampling design has changed in order to meet different management objectives (i.e. identification of essential fish habitat vs. monitoring), but the designs have always remained:
• Probabilistic – to allow inferences to a larger targeted population,
• Objective – to satisfy management objectives, and
• Stratified – to reduce sampling costs and obtain population estimates for strata.
There are two aspects of the sampling design which are now under consideration and are the focus of this report: first, the application of a sample frame, identified as a set of points or grid elements from which a sample is selected; and second, the application of subsampling in a two-stage sampling design. To evaluate these considerations, the pros and cons of implementing a sampling frame and subsampling are discussed.
Particular attention is paid to the impacts of each design on accuracy (bias), feasibility and sampling cost (precision). Further, this report presents an analysis of data to determine the optimal number of subsamples to collect if subsampling were used. (PDF contains 19 pages)
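If subsampling were adopted, the classic cost-variance trade-off gives a starting point for choosing the number of subsamples per primary unit. A hedged sketch under textbook assumptions (the report's own analysis may use a different model; all numbers are invented):

```python
import math

# Cochran-style optimum for two-stage sampling: with total cost
# C = n*c1 + n*m*c2 and variance of the mean approximately
# s_b^2/n + s_w^2/(n*m), the cost-optimal number of subsamples per
# primary unit is m* = (s_w/s_b) * sqrt(c1/c2).
def optimal_subsamples(s_between2, s_within2, c1, c2):
    """c1: cost per primary unit (e.g. reaching a site),
    c2: cost per subsample (e.g. one survey at the site)."""
    return math.sqrt((c1 * s_within2) / (c2 * s_between2))

# e.g. reaching a site costs 16x one survey, and within-site variance is
# 4x the between-site variance
m_star = optimal_subsamples(s_between2=1.0, s_within2=4.0, c1=16.0, c2=1.0)
```

The rule captures the intuition in the report: subsampling pays off when primary units are expensive to reach relative to measurements, and when most variability sits within rather than between primary units.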
Abstract:
Methods of collecting samples for the purpose of estimating the numbers and weights of fish caught, by length interval, are described. Several models for two-stage sampling are described, and the equations for the estimators and their variances are given. The results of a brief simulation study are used to show the differences between estimates made with the different models. Estimators for the average weights of fish in the catch and their variances are also described. These average weights are used to provide improved estimates of the total annual catches of yellowfin taken from the eastern Pacific Ocean, east of 150°W, between 1955 and 1990. (PDF contains 41 pages.)
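A standard two-stage expansion estimator of a total illustrates the kind of model involved; this is a generic textbook sketch, not necessarily the exact estimator used for the yellowfin catches, and all numbers are invented:

```python
# Two-stage expansion: n of N primary units (e.g. landings) are sampled;
# in unit i, a subsample of the M_i fish is weighed. Each subsample mean
# is expanded to its landing, then the sampled landings to the population.
N = 50                        # primary units in the population
landings = [                  # (M_i = fish in landing, weights of sampled fish)
    (200, [2.0, 2.4, 1.8]),
    (150, [2.2, 2.1]),
    (300, [1.9, 2.5, 2.3, 2.1]),
]
n = len(landings)             # primary units actually sampled

y_hat = (N / n) * sum(M * sum(w) / len(w) for M, w in landings)
print(f"estimated total catch weight: {y_hat:.1f}")
```

The variance of such an estimator has a between-landing and a within-landing component, which is what the different two-stage models in the paper trade off.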