976 results for fair value hierarchy
Abstract:
Attributing a dollar value to a keyword is an essential part of running any profitable search engine advertising campaign. When an advertiser has complete control over the interaction with and monetization of each user arriving on a given keyword, the value of that term can be accurately tracked. However, in many instances, the advertiser may monetize arrivals indirectly through one or more third parties. In such cases, it is typical for the third party to provide only coarse-grained reporting: rather than report each monetization event, users are aggregated into larger channels and the third party reports aggregate information such as total daily revenue for each channel. Examples of third parties that use channels include Amazon and Google AdSense. In such scenarios, the number of channels is generally much smaller than the number of keywords whose value per click (VPC) we wish to learn. However, the advertiser has flexibility as to how to assign keywords to channels over time. We introduce the channelization problem: how do we adaptively assign keywords to channels over the course of multiple days to quickly obtain accurate VPC estimates of all keywords? We relate this problem to classical results in weighing design, devise new adaptive algorithms for this problem, and quantify the performance of these algorithms experimentally. Our results demonstrate that adaptive weighing designs that exploit statistics of term frequency, variability in VPCs across keywords, and flexible channel assignments over time provide the best estimators of keyword VPCs.
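A minimal sketch of the estimation step described above, assuming a simple linear model in which each channel's daily revenue is the click-weighted sum of the VPCs of the keywords assigned to it plus noise. The assignment here is random rather than adaptive, and all names and parameters (`n_keywords`, `true_vpc`, the noise level) are illustrative; the adaptive weighing designs discussed in the abstract would instead choose the daily assignment to make this linear system well conditioned as quickly as possible.

```python
import numpy as np

rng = np.random.default_rng(0)

n_keywords, n_channels, n_days = 8, 3, 30
true_vpc = rng.uniform(0.1, 2.0, n_keywords)          # hypothetical ground-truth VPCs
clicks = rng.poisson(50, size=(n_days, n_keywords))    # daily clicks per keyword

# Random (non-adaptive) keyword-to-channel assignment, re-drawn each day.
assignment = rng.integers(0, n_channels, size=(n_days, n_keywords))

# Each day the third party reports only the total revenue per channel.
rows, revenues = [], []
for d in range(n_days):
    for c in range(n_channels):
        mask = (assignment[d] == c).astype(float)
        row = mask * clicks[d]                  # clicks each keyword contributed to this channel
        rows.append(row)
        revenues.append(row @ true_vpc + rng.normal(0.0, 1.0))  # aggregate revenue plus noise

A = np.array(rows)
y = np.array(revenues)

# Least-squares estimate of per-keyword VPC from aggregate channel revenue.
vpc_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("mean absolute VPC error:", np.mean(np.abs(vpc_hat - true_vpc)))
```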
Abstract:
It is shown that determining whether a quantum computation has a non-zero probability of accepting is at least as hard as the polynomial time hierarchy. This hardness result also applies to determining in general whether a given quantum basis state appears with nonzero amplitude in a superposition, or whether a given quantum bit has positive expectation value at the end of a quantum computation.
Abstract:
In this paper we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling periodic tasks with highly variable execution times and statistical QoS requirements. Similar to RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test for SRMS ensures that, using SRMS' scheduling algorithms, it is possible for a given periodic task set to share a given resource (e.g. a processor, communication medium, switching device, etc.) in such a way that such sharing does not result in the violation of any of the periodic tasks' QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows for fixed-priority scheduling that keeps the tasks' value (or importance) independent of their periods. Second, it allows for job admission control, which enables the rejection of jobs that are not guaranteed to finish by their deadlines as soon as they are released, thus enabling the system to take the necessary compensating actions. Also, admission control allows the preservation of resources, since no time is spent on jobs that will miss their deadlines anyway. Third, SRMS integrates reservation-based and best-effort resource scheduling seamlessly. Reservation-based scheduling ensures the delivery of the minimal requested QoS; best-effort scheduling ensures that unused, reserved bandwidth is not wasted but rather used to improve QoS further. Fourth, SRMS allows a system to deal gracefully with overload conditions by ensuring a fair deterioration in QoS across all tasks, as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has a constant overhead, in the sense that the complexity of the scheduler does not depend on the number of tasks in the system. We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g. RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we have compared it to an inefficient, yet optimal scheduler for task sets with harmonic periods.
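A small sketch of the job admission-control idea, not the SRMS algorithm itself: under an assumed per-period budget produced by a feasibility test, a released job is admitted only if its execution-time demand fits within the task's remaining budget, and is rejected immediately otherwise so that no time is wasted on jobs that would miss their deadlines. The `Task` fields and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    period: float        # task period
    budget: float        # time reserved per period (assumed output of a feasibility test)
    used: float = 0.0    # budget consumed in the current period

def admit(task: Task, job_exec_time: float) -> bool:
    """Admit a job at release time only if it fits in the task's remaining budget.

    Rejecting it immediately, rather than letting it miss its deadline later,
    frees the resource for jobs that can still meet their QoS targets.
    """
    if task.used + job_exec_time <= task.budget:
        task.used += job_exec_time
        return True
    return False

def start_new_period(task: Task) -> None:
    task.used = 0.0  # the budget is replenished at each period boundary

# Example: a task reserved 3 time units per period of 10.
t = Task(period=10.0, budget=3.0)
print(admit(t, 2.0))  # True  - fits within the budget
print(admit(t, 2.0))  # False - would exceed the remaining budget
```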
Abstract:
The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random Early Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low, since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
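A toy sketch of the class-based isolation described above, assuming the access router classifies a flow as short until its cumulative size crosses a threshold and that core routers keep only one queue per class. The 20 kB cutoff and flow identifiers are illustrative.

```python
from collections import defaultdict, deque

SHORT_FLOW_THRESHOLD = 20_000  # bytes; illustrative cutoff between the two classes

bytes_seen = defaultdict(int)                 # per-flow byte counters at the access router
queues = {"short": deque(), "long": deque()}  # per-class (not per-flow) queues at core routers

def enqueue(flow_id, packet_size):
    """Classify a packet by its flow's cumulative size and enqueue it in the matching class."""
    bytes_seen[flow_id] += packet_size
    cls = "short" if bytes_seen[flow_id] <= SHORT_FLOW_THRESHOLD else "long"
    queues[cls].append((flow_id, packet_size))
    return cls

# Example: a small web transfer stays in the short class, a bulk transfer migrates.
print(enqueue("web-1", 1500))    # 'short'
for _ in range(20):
    enqueue("bulk-1", 1500)
print(enqueue("bulk-1", 1500))   # 'long'
```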
Abstract:
We present a procedure to infer a typing for an arbitrary λ-term M in an intersection-type system that translates into exactly the call-by-name (resp., call-by-value) evaluation of M. Our framework is the recently developed System E which augments intersection types with expansion variables. The inferred typing for M is obtained by setting up a unification problem involving both type variables and expansion variables, which we solve with a confluent rewrite system. The inference procedure is compositional in the sense that typings for different program components can be inferred in any order, and without knowledge of the definition of other program components. Using expansion variables lets us achieve a compositional inference procedure easily. Termination of the procedure is generally undecidable. The procedure terminates and returns a typing if the input M is normalizing according to call-by-name (resp., call-by-value). The inferred typing is exact in the sense that the exact call-by-name (resp., call-by-value) behaviour of M can be obtained by a (polynomial) transformation of the typing. The inferred typing is also principal in the sense that any other typing that translates the call-by-name (resp., call-by-value) evaluation of M can be obtained from the inferred typing for M using a substitution-based transformation.
Abstract:
Animals are motivated to choose environmental options that can best satisfy current needs. To explain such choices, this paper introduces the MOTIVATOR (Matching Objects To Internal Values Triggers Option Revaluations) neural model. MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation. The amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices. The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination. Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure.
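A schematic sketch, not the model's equations, of the core valuation idea above: the expected value of a conditioned stimulus is its learned outcome associations weighted by the current drive (deprivation/satiation) state, so the same CS is valued differently depending on current needs. All numbers are illustrative.

```python
import numpy as np

# Learned CS -> outcome associations (how strongly each CS predicts each outcome).
cs_outcome = {
    "tone":  np.array([0.9, 0.1]),   # mostly predicts outcome A
    "light": np.array([0.2, 0.8]),   # mostly predicts outcome B
}

def expected_value(cs, drive):
    """Drive-weighted value: outcomes the animal is sated on contribute little."""
    return float(cs_outcome[cs] @ drive)

hungry_for_A = np.array([1.0, 0.2])   # deprived of outcome A, nearly sated on B
print(expected_value("tone", hungry_for_A))   # high: the tone predicts the needed outcome
print(expected_value("light", hungry_for_A))  # low: the light predicts the sated outcome
```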
Abstract:
Most associative memory models perform one-level mapping between predefined sets of input and output patterns and are unable to represent hierarchical knowledge. Complex AI systems allow hierarchical representation of concepts, but generally do not have learning capabilities. In this paper, a memory model is proposed which forms a concept hierarchy by learning sample relations between concepts. All concepts are represented in a concept layer. Relations between a concept and its defining lower-level concepts are chunked as cognitive codes represented in a coding layer. By updating memory contents in the concept layer through code firing in the coding layer, the system is able to perform an important class of commonsense reasoning, namely recognition and inheritance.
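A toy sketch of the chunking idea, under the assumption that a cognitive code fires when all of its defining lower-level concepts are active and that its firing activates the chunked higher-level concept; inheritance then follows the chain of defining concepts. The concept names are illustrative.

```python
# Each code chunks a higher-level concept together with its defining concepts.
codes = {
    "bird":   {"has_wings", "has_feathers", "lays_eggs"},
    "canary": {"bird", "is_yellow", "sings"},
}

def recognize(active):
    """Fire any code whose defining concepts are all active; repeat to closure."""
    active = set(active)
    changed = True
    while changed:
        changed = False
        for concept, defining in codes.items():
            if defining <= active and concept not in active:
                active.add(concept)      # code firing activates the chunked concept
                changed = True
    return active

def inherits(concept, prop):
    """Inheritance: a concept inherits the properties of the concepts that define it."""
    frontier, seen = {concept}, set()
    while frontier:
        c = frontier.pop()
        seen.add(c)
        frontier |= codes.get(c, set()) - seen
    return prop in seen

# Recognition: low-level features activate 'bird'.
print(sorted(recognize({"has_wings", "has_feathers", "lays_eggs"})))
# Inheritance: canary -> bird -> has_wings
print(inherits("canary", "has_wings"))  # True
```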
Abstract:
The central research question that this thesis addresses is whether there is a significant gap between fishery stakeholder values and the principles and policy goals implicit in an Ecosystem Approach to Fisheries Management (EAFM). The implications of such a gap for fisheries governance are explored. Furthermore, an assessment is made of what may be practically achievable in the implementation of an EAFM in fisheries in general and in a case study fishery in particular. The research was mainly focused on a particular case study, the Celtic Sea Herring fishery and its management committee, the Celtic Sea Herring Management Advisory Committee (CSHMAC). The Celtic Sea Herring fishery exhibits many aspects of an EAFM and the fish stock has successfully recovered to healthy levels in the past 5 years. However, there are increasing levels of governance-related conflict within the fishery which threaten the future sustainability of the stock. Previous research on EAFM governance has tended to focus either on higher levels of EAFM governance or on individual behaviour, but very little research has attempted to link the two spheres or explore the relationship between them. Two main themes within this study aimed to address this gap. The first was what role governance could play in facilitating EAFM implementation. The second theme concerned the degree of convergence between high-level EAFM goals and stakeholder values. The first method applied was governance benchmarking to analyse systemic risks to EAFM implementation. This found that there are no real EU or national level policies which provide stakeholders or managers with clear targets for EAFM implementation. The second method applied was the use of cognitive mapping to explore stakeholders' understandings of the main ecological, economic and institutional driving forces in the Celtic Sea Herring fishery. The main finding from this was that a long-term outlook can be, and has been, incentivised through a combination of policy drivers and participatory management. However, the fundamental principle of EAFM, accounting for ecosystem linkages rather than target stocks, was not reflected in stakeholders' cognitive maps. This was confirmed in a prioritisation of stakeholders' management priorities using the Analytic Hierarchy Process, which found that the overriding concern is for protection of target stock status but that wider ecosystem health was not a priority for most management participants. The conclusion reached is that moving to sustainable fisheries may be a more complex process than envisioned in much of the literature and may consist of two phases. The first phase is a transition to a long-term but still target-stock-focused approach. This achievable transition is mainly a strategic change, which can be incentivised by policies and supported by stakeholders. In the Celtic Sea Herring fishery, and in an increasing number of global and European fisheries, such transitions have contributed to successful stock recoveries. The second phase, however, the implementation of an ecosystem approach, may present a greater challenge in terms of governability, as this research highlights some fundamental conflicts between stakeholder perceptions and values and those inherent in an EAFM. This phase may involve the setting aside of fish for non-valued ecosystem elements and will require either a pronounced mind-set and value change or some strong top-down policy incentives in order to succeed.
Fisheries governance frameworks will need to carefully explore the most effective balance between such endogenous and exogenous solutions. This finding of low prioritisation of wider ecosystem elements has implications for rights-based management within an ecosystem approach, regardless of whether those rights are individual or collective.
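As an illustration of the Analytic Hierarchy Process step mentioned above, the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector; the criteria and judgments are invented for illustration and are not the thesis data.

```python
import numpy as np

criteria = ["target stock status", "wider ecosystem health", "economic return"]

# Pairwise comparison matrix (Saaty 1-9 scale); entry [i, j] states how much more
# important criterion i is judged to be than criterion j. Illustrative judgments only.
A = np.array([
    [1.0, 5.0, 3.0],
    [1/5, 1.0, 1/2],
    [1/3, 2.0, 1.0],
])

# Priority weights = normalised principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
weights = principal / principal.sum()

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.2f}")
```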
Abstract:
The pervasive use of mobile technologies has provided new opportunities for organisations to achieve competitive advantage by using a value network of partners to create value for multiple users. The delivery of a mobile payment (m-payment) system is an example of a value network as it requires the collaboration of multiple partners from diverse industries, each bringing their own expertise, motivations and expectations. Consequently, managing partnerships has been identified as a core competence required by organisations to form viable partnerships in an m-payment value network and an important factor in determining the sustainability of an m-payment business model. However, there is evidence that organisations lack this competence which has been witnessed in the m-payment domain where it has been attributed as an influencing factor in a number of failed m-payment initiatives since 2000. In response to this organisational deficiency, this research project leverages the use of design thinking and visualisation tools to enhance communication and understanding between managers who are responsible for managing partnerships within the m-payment domain. By adopting a design science research approach, which is a problem solving paradigm, the research builds and evaluates a visualisation tool in the form of a Partnership Management Canvas. In doing so, this study demonstrates that when organisations encourage their managers to adopt design thinking, as a way to balance their analytical thinking and intuitive thinking, communication and understanding between the partners increases. This can lead to a shared understanding and a shared commitment between the partners. In addition, the research identifies a number of key business model design issues that need to be considered by researchers and practitioners when designing an m-payment business model. As an applied research project, the study makes valuable contributions to the knowledge base and to the practice of management.
Abstract:
This work illustrates the influence of wind forecast errors on system costs, wind curtailment and generator dispatch in a system with high wind penetration. Realistic wind forecasts of different specified accuracy levels are created using an auto-regressive moving average model and these are then used in the creation of day-ahead unit commitment schedules. The schedules are generated for a model of the 2020 Irish electricity system with 33% wind penetration using both stochastic and deterministic approaches. Improvements in wind forecast accuracy are demonstrated to deliver: (i) clear savings in total system costs for deterministic and, to a lesser extent, stochastic scheduling; (ii) a decrease in the level of wind curtailment, with close agreement between stochastic and deterministic scheduling; and (iii) a decrease in the dispatch of open cycle gas turbine generation, evident with deterministic, and to a lesser extent, with stochastic scheduling.
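A sketch of how a synthetic forecast of a specified accuracy might be generated, assuming an AR(1) error process (a special case of ARMA) added to the realised wind; the persistence and noise parameters are illustrative, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_forecast(actual_wind, phi=0.9, sigma=0.05):
    """Add an AR(1) error series to the realised wind to form a day-ahead forecast.

    Larger sigma (or phi) gives a less accurate forecast; tuning them is one way
    to generate forecast series with specified accuracy levels.
    """
    err = np.zeros_like(actual_wind)
    for t in range(1, len(actual_wind)):
        err[t] = phi * err[t - 1] + rng.normal(0.0, sigma)
    return np.clip(actual_wind + err, 0.0, 1.0)   # keep within normalised capacity limits

hours = np.arange(24)
actual = 0.5 + 0.3 * np.sin(hours / 24 * 2 * np.pi)   # toy normalised wind profile
forecast = make_forecast(actual)
print("MAE of synthetic forecast:", np.mean(np.abs(forecast - actual)))
```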
Abstract:
This research investigates whether a reconfiguration of maternity services, which collocates consultant- and midwifery-led care, reflects demand and value for money in Ireland. Qualitative and quantitative research is undertaken to investigate demand and an economic evaluation is performed to evaluate the costs and benefits of the different models of care. Qualitative research is undertaken to identify women’s motivations when choosing place of delivery. These data are further used to inform two stated preference techniques: a discrete choice experiment (DCE) and contingent valuation method (CVM). These are employed to identify women’s strengths of preferences for different features of care (DCE) and estimate women’s willingness to pay for maternity care (CVM), which is used to inform a cost-benefit analysis (CBA) on consultant- and midwifery-led care. The qualitative research suggests women do not have a clear preference for consultant or midwifery-led care, but rather a hybrid model of care which closely resembles the Domiciliary Care In and Out of Hospital (DOMINO) scheme. Women’s primary concern during care is safety, meaning women would only utilise midwifery-led care when co-located with consultant-led care. The DCE also finds women’s preferred package of care closely mirrors the DOMINO scheme with 39% of women expected to utilise this service. Consultant- and midwifery-led care would then be utilised by 34% and 27% of women, respectively. The CVM supports this hierarchy of preferences where consultant-led care is consistently valued more than midwifery-led care – women are willing to pay €956.03 for consultant-led care and €808.33 for midwifery-led care. A package of care for a woman availing of consultant- and midwifery-led care is estimated to cost €1,102.72 and €682.49, respectively. The CBA suggests both models of care are cost-beneficial and should be pursued in Ireland. This reconfiguration of maternity services would maximise women’s utility, while fulfilling important objectives of key government policy.
Abstract:
Creativity is often defined as developing something novel or new that fits its context and has value. To achieve this, the creative process itself has gained increasing attention as organizational leaders seek competitive advantages through developing new products, services, processes, or business models. In this paper, we explore the notion that the creative process includes a series of “filters”, or ways of processing information, as a critical component. We use the metaphor of coffee making and filters because many of our examples come from Vietnam, which is one of the world’s top coffee exporters and which has created a coffee culture rivaling that of many other countries. We begin with a brief review of the creative process and its connection to information processing, propose a tentative framework for integrating the two ideas, and provide examples of how it might work. We close with implications for further practical and theoretical directions for this idea.
Abstract:
This paper challenges the common assumption that economic agents know their tastes. After reviewing previous research showing that valuation of ordinary products and experiences can be manipulated by non-normative cues, we present three studies showing that in some cases people do not have a pre-existing sense of whether an experience is good or bad, even when they have experienced a sample of it.
Abstract:
Humans make decisions in highly complex physical, economic and social environments. In order to choose adaptively, the human brain has to learn about, and attend to, sensory cues that provide information about the potential outcomes of different courses of action. Here I present three event-related potential (ERP) studies in which I evaluated the role of the interactions between attention and reward learning in economic decision-making. I focused my analyses on three ERP components (Chap. 1): (1) the N2pc, an early lateralized ERP response reflecting the lateralized focus of visual attention; (2) the feedback-related negativity (FRN), which reflects the process by which the brain extracts utility from feedback; and (3) the P300 (P3), which reflects the amount of attention devoted to feedback processing. I found that learned stimulus-reward associations can influence the rapid allocation of attention (N2pc) towards outcome-predicting cues, and that differences in this attention allocation process are associated with individual differences in economic decision performance (Chap. 2). Such individual differences were also linked to differences in neural responses reflecting the amount of attention devoted to processing monetary outcomes (P3) (Chap. 3). Finally, the relative amount of attention devoted to processing rewards for oneself versus others (as reflected by the P3) predicted both charitable giving and self-reported engagement in real-life altruistic behaviors across individuals (Chap. 4). Overall, these findings indicate that attention and reward processing interact and can influence each other in the brain. Moreover, they indicate that individual differences in economic choice behavior are associated both with biases in the manner in which attention is drawn towards sensory cues that inform subsequent choices, and with biases in the way that attention is allocated to learn from the outcomes of recent choices.
Abstract:
To maintain a strict balance between demand and supply in the US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. This model determines, for each time period and power plant, the startup and shutdown times, the amount of power produced, and the provision of spinning and non-spinning power generation reserves. Such a deterministic optimization model takes as input the characteristics of all the generating units, such as their installed power generation capacity, ramp rates, minimum up and down time requirements, and marginal costs of production, as well as the forecast of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the level of forecast errors in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than using fixed reserve targets as an input, stochastic market clearing models take different scenarios of wind power into consideration and determine reserve schedules as output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from BPA (Bonneville Power Administration) data, this paper compares the performance of stochastic and deterministic models in market clearing. The two models are compared in their ability to contribute to the affordability, reliability and sustainability of the electricity system, measured in terms of total operational costs, load shedding and air emissions. The process of building and testing the models indicates that a fair comparison is difficult to obtain due to the multi-dimensional performance metrics considered here, and the difficulty of setting up the parameters of the models in a way that does not advantage or disadvantage one modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, value of lost load (VOLL) and wind spillage costs have on the comparison of the performance of stochastic vs deterministic market clearing models.
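A deliberately stylised, single-period toy illustrating the difference between the two clearing approaches, not the PJM-scale unit commitment models compared in the study: the deterministic variant commits the cheap unit to cover forecast net load plus a fixed reserve margin, while the stochastic variant picks the commitment that minimises expected cost over wind scenarios with open cycle gas turbine and load-shedding recourse. All costs, capacities, scenarios and the VOLL are illustrative.

```python
import numpy as np

# Toy single-period system: one cheap unit committed day-ahead, one fast
# open-cycle gas turbine (OCGT) for recourse, and load shedding priced at VOLL.
LOAD = 100.0
CHEAP_MC, CHEAP_CAP = 30.0, 120.0       # marginal cost ($/MWh), capacity (MW)
OCGT_MC, OCGT_CAP = 120.0, 40.0
VOLL = 3000.0                            # value of lost load, $/MWh
wind_scenarios = np.array([10.0, 30.0, 50.0])
probs = np.array([0.3, 0.4, 0.3])
wind_forecast = probs @ wind_scenarios

def expected_cost(cheap_commit):
    """Expected real-time cost for a given day-ahead commitment of the cheap unit."""
    costs = []
    for w in wind_scenarios:
        residual = max(LOAD - w - cheap_commit, 0.0)
        ocgt = min(residual, OCGT_CAP)   # recourse dispatch of the fast unit
        shed = residual - ocgt           # remaining shortfall is shed at VOLL
        costs.append(cheap_commit * CHEAP_MC + ocgt * OCGT_MC + shed * VOLL)
    return float(probs @ costs)

# Deterministic clearing: cover forecast net load plus a fixed reserve margin.
RESERVE = 15.0
det_commit = min(LOAD - wind_forecast + RESERVE, CHEAP_CAP)

# Stochastic clearing: choose the commitment that minimises expected scenario cost.
grid = np.linspace(0.0, CHEAP_CAP, 241)
sto_commit = grid[np.argmin([expected_cost(c) for c in grid])]

print(f"deterministic commit {det_commit:.1f} MW, expected cost {expected_cost(det_commit):.0f}")
print(f"stochastic commit {sto_commit:.1f} MW, expected cost {expected_cost(sto_commit):.0f}")
```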