134 results for EPCglobal Network Standards
Abstract:
Background: Choosing an adequate measurement instrument depends on the proposed use of the instrument, the concept to be measured, the measurement properties (e.g. internal consistency, reproducibility, content and construct validity, responsiveness, and interpretability), the requirements, the burden for subjects, and the costs of the available instruments. As far as measurement properties are concerned, there are no sufficiently specific standards for the evaluation of measurement properties of instruments to measure health status, and also no explicit criteria for what constitutes good measurement properties. In this paper we describe the protocol for the COSMIN study, the objective of which is to develop a checklist that contains COnsensus-based Standards for the selection of health Measurement INstruments, including explicit criteria for satisfying these standards. We will focus on evaluative health-related patient-reported outcomes (HR-PROs), i.e. patient-reported health measurement instruments used in a longitudinal design as an outcome measure, excluding health care related PROs, such as satisfaction with care or adherence. The COSMIN standards will be made available in the form of an easily applicable checklist. Method: An international Delphi study will be performed to reach consensus on which measurement properties should be assessed and how, and on criteria for good measurement properties. Two sources of input will be used for the Delphi study: (1) a systematic review of properties, standards and criteria of measurement properties found in systematic reviews of measurement instruments, and (2) an additional literature search of methodological articles presenting a comprehensive checklist of standards and criteria. The Delphi study will consist of four (written) Delphi rounds, with approximately 30 expert panel members with different backgrounds in clinical medicine, biostatistics, psychology, and epidemiology. The final checklist will subsequently be field-tested by assessing the inter-rater reproducibility of the checklist. Discussion: Since the study will mainly be anonymous, problems that are commonly encountered in face-to-face group meetings, such as the dominance of certain persons in the communication process, will be avoided. By performing a Delphi study and involving many experts, the likelihood that the checklist will have sufficient credibility to be accepted and implemented will increase.
Abstract:
It is well known that multiple-input multiple-output (MIMO) techniques can bring numerous benefits, such as higher spectral efficiency, to point-to-point wireless links. More recently, there has been interest in extending MIMO concepts to multiuser wireless systems. Our focus in this paper is on network MIMO, a family of techniques whereby each end user in a wireless access network is served through several access points within its range of influence. By tightly coordinating the transmission and reception of signals at multiple access points, network MIMO can transcend the limits on spectral efficiency imposed by cochannel interference. Taking prior information-theoretic analyses of network MIMO to the next level, we quantify the spectral efficiency gains obtainable under realistic propagation and operational conditions in a typical indoor deployment. Our study relies on detailed simulations and, for specificity, is conducted largely within the physical-layer framework of the IEEE 802.16e Mobile WiMAX system. Furthermore, to facilitate the coordination between access points, we assume that a high-capacity local area network, such as Gigabit Ethernet, connects all the access points. Our results confirm that network MIMO stands to provide a multiple-fold increase in spectral efficiency under these conditions.
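As a rough illustration of the mechanism (a schematic sketch with assumed notation, not the paper's system model), the per-user spectral efficiency without coordination is limited by cochannel interference from neighbouring access points,

    C_k = \log_2\!\left(1 + \frac{P_k \, |h_{k,k}|^2}{\sigma^2 + \sum_{j \neq k} P_j \, |h_{k,j}|^2}\right),

whereas joint transmission and reception across coordinated access points turns the cross-channel terms h_{k,j} into useful signal dimensions rather than interference, which is the source of the spectral-efficiency gains quantified in the study.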
Abstract:
This paper describes a Computer-Supported Collaborative Learning (CSCL) case study in engineering education carried out within the context of a network management course. The case study shows that the use of two computing tools developed by the authors and based on Free- and Open-Source Software (FOSS) provides significant educational benefits over traditional engineering pedagogical approaches, in terms of the acquisition of both concepts and engineering competencies. First, the Collage authoring tool guides and supports the course teacher in the process of authoring computer-interpretable representations (using the IMS Learning Design standard notation) of effective collaborative pedagogical designs. In addition, the Gridcole system supports the enactment of that design by guiding the students throughout the prescribed sequence of learning activities. The paper introduces the goals and context of the case study, elaborates on how Collage and Gridcole were employed, describes the applied evaluation methodology, and discusses the most significant findings derived from the case study.
Abstract:
The article examines the structure of the collaboration networks of research groups in which Slovenian and Spanish PhD students are pursuing their doctorates. The units of analysis are student-supervisor dyads. We use duocentred networks, a novel network structure appropriate for networks that are centred around a dyad. A cluster analysis reveals three typical clusters of research groups. Those that are large and belong to several institutions are given a bridging social capital label. Those that are small and centred in a single institution but have high cohesion are labelled as bonding social capital. Those that are small and have low cohesion are called weak social capital groups. The academic performance of both PhD students and supervisors is highest in bridging groups and lowest in weak groups. Other variables are also found to differ according to the type of research group. Finally, some recommendations regarding academic and research policy are drawn.
Abstract:
Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, by each itinerary and fare combination for an airline), as sale restrictions and demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. In this paper, we propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit and that our upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid price control policy. Computational experiments indicate that our approach is quite fast, able to scale to industrial problems, and can provide significant improvements over standard benchmarks.
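For orientation, a schematic deterministic counterpart of such a model (a sketch with assumed notation, not the authors' exact randomized formulation) is: with y_j the accepted bookings of product j, f_j its fare, p_j its show-up probability, a_{ij} = 1 if product j uses leg i, c_i the leg capacity, and \theta_i a per-unit denied-boarding cost,

    \max_{y \ge 0} \ \sum_j f_j y_j \;-\; \sum_i \theta_i \Big( \sum_j a_{ij} \, p_j \, y_j - c_i \Big)^{+} \quad \text{s.t.} \quad y_j \le E[D_j] \ \ \forall j,

so capacity is consumed only by the show-up-adjusted bookings p_j y_j. In a randomized variant of this type, the expected demands E[D_j] are typically replaced by sampled demand realizations and the resulting optimal values averaged, which is what tightens the bound relative to the purely deterministic linear program.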
Abstract:
Models incorporating more realistic representations of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, to solving the CDLP based on segments and their consideration sets. SDCP is a relaxation of the CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with the CDLP for the case of non-overlapping segments. If the number of elements in a consideration set for a segment is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound by (i) simulations, called the randomized concave programming (RCP) method, and (ii) adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). This latter approach turns out to be very effective, essentially obtaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why the CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets from the literature.
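For reference, the CDLP mentioned here is, in its standard form from the choice-based network RM literature (shown as a sketch; the notation is assumed, not taken from this paper), a linear program over the time t(S) each offer set S is made available:

    \max_{t \ge 0} \ \sum_{S} \lambda \, t(S) \sum_{j \in S} P_j(S) f_j
    \text{s.t.} \ \sum_{S} \lambda \, t(S) \sum_{j \in S} P_j(S) a_{ij} \le c_i \ \ \forall i, \qquad \sum_{S} t(S) \le T,

where \lambda is the arrival rate, P_j(S) the probability that product j is chosen from offer set S, f_j the fare, a_{ij} the leg-incidence, c_i the leg capacity, and T the length of the horizon. The exponential number of columns comes from the sum ranging over all possible offer sets S.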
Abstract:
Models incorporating more realistic representations of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. When there are products that are being considered for purchase by more than one customer segment, the CDLP is difficult to solve, since column generation is known to be NP-hard. However, recent research indicates that a formulation based on segments, with cuts imposing consistency (SDCP+), is tractable and approximates the CDLP value very closely. In this paper we investigate the structure of the consideration sets that makes the two formulations exactly equal. We show that if the segment consideration sets follow a tree structure, CDLP = SDCP+. We give a counterexample to show that cycles can induce a gap between the CDLP and the SDCP+ relaxation. We derive two classes of valid inequalities, called flow and synchronization inequalities, to further improve SDCP+, based on cycles in the consideration set structure. We give a numerical study showing the performance of these cycle-based cuts.
Abstract:
The network revenue management (RM) problem arises in the airline, hotel, media, and other industries where the products sold use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable among these, both for their revenue performance and for their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions and piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
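To make the claimed equivalence concrete (a sketch using standard notation, assumed rather than taken from the paper): with remaining leg capacities x = (x_i), the affine approach approximates the value function as V_t(x) \approx \theta_t + \sum_i v_{t,i} \, x_i, while the piecewise-linear (separable) approach uses V_t(x) \approx \theta_t + \sum_i v_{t,i}(x_i) with one single-dimensional function per leg. The Lagrangian relaxation instead attaches multipliers to the constraints that couple the legs, so that the network dynamic program decomposes into independent single-leg dynamic programs. The result described in the abstract is that optimizing over the piecewise-linear approximations and optimizing over the Lagrange multipliers yield the same upper bound.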
Abstract:
This paper argues that in the presence of intersectoral input-output linkages, microeconomic idiosyncratic shocks may lead to aggregate fluctuations. In particular, it shows that, as the economy becomes more disaggregated, the rate at which aggregate volatility decays is determined by the structure of the network capturing such linkages. Our main results provide a characterization of this relationship in terms of the importance of different sectors as suppliers to their immediate customers, as well as their role as indirect suppliers to chains of downstream sectors. Such higher-order interconnections capture the possibility of "cascade effects" whereby productivity shocks to a sector propagate not only to its immediate downstream customers, but also indirectly to the rest of the economy. Our results highlight that sizable aggregate volatility is obtained from sectoral idiosyncratic shocks only if there exists significant asymmetry in the roles that sectors play as suppliers to others, and that the "sparseness" of the input-output matrix is unrelated to the nature of aggregate fluctuations.
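A schematic version of the mechanism (notation assumed for illustration): if sectoral log outputs are linked through an input-share matrix W and hit by independent shocks \epsilon, aggregate output can be written, to a first-order approximation, as

    y_{agg} = v' \epsilon, \qquad v = \frac{\alpha}{n} \left( I - (1-\alpha) W' \right)^{-1} \mathbf{1},

so aggregate volatility scales with \lVert v \rVert_2 rather than at the 1/\sqrt{n} rate suggested by a pure diversification argument; the "influence" vector v is large precisely when some sectors are disproportionately important suppliers, either directly or through chains of intermediate suppliers.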
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than the CDLP, provably lying between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts, developed for the CDLP, remain valid for our new formulation. Finally, we perform extensive numerical comparisons on the various bounds to evaluate their performance.
Abstract:
This paper analyzes the flow of intermediate inputs across sectors by adopting a network perspective on sectoral interactions. I apply these tools to show how fluctuations in aggregate economic activity can be obtained from independent shocks to individual sectors. First, I characterize the network structure of input trade in the U.S. On the demand side, a typical sector relies on a small number of key inputs, and sectors are homogeneous in this respect. However, in their role as input suppliers, sectors do differ: many specialized input suppliers coexist alongside general-purpose sectors functioning as hubs of the economy. I then develop a model of intersectoral linkages that can reproduce these connectivity features. In a standard multisector setup, I use this model to provide analytical expressions linking aggregate volatility to the network structure of input trade. I show that the presence of sectoral hubs, by coupling production decisions across sectors, leads to fluctuations in aggregates.
Abstract:
In this paper a p-median-like model is formulated to address the issue of locating new facilities under uncertainty. Several possible future scenarios with respect to demand and/or the travel time/distance parameters are presented. The planner will want a positioning strategy that will do as "well as possible" over the future scenarios. This paper presents a discrete location model formulation to address this p-median problem under uncertainty. The model is applied to the location of fire stations in Barcelona.
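One common way to write such a scenario-based model (a sketch with an expected-cost objective; the paper's criterion over scenarios, e.g. minimax regret, may differ) is: with scenarios s of probability \pi_s, demands w_{is}, and travel distances d_{ijs},

    \min \ \sum_s \pi_s \sum_i \sum_j w_{is} \, d_{ijs} \, x_{ijs}
    \text{s.t.} \ \sum_j x_{ijs} = 1 \ \ \forall i, s, \qquad x_{ijs} \le y_j, \qquad \sum_j y_j = p, \qquad x, y \in \{0, 1\},

so the p facility sites y_j must be fixed before the scenario is revealed, while the demand allocations x_{ijs} can adapt to whichever scenario materializes.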
Abstract:
There is a gap between the importance given to accounting and the low level of bookkeeping and accounting practice in the agricultural sector. Current general accounting rules do not adapt very well to the particularities of farming and are difficult and expensive to implement. The Farm Accountancy Data Network (FADN) and IASC's Proposed International Accounting Standard on Agriculture (PIASA) could be key elements for improving the use of accounting on European farms. The PIASA provides a strong conceptual framework but might need further instruments for its implementation in practice. FADN is an experienced network that has elaborated very detailed farm accounting procedures. Empirical data indicate that current FADN reports are already considered useful by farmers for different purposes. Some changes in the FADN procedures are suggested, while some aspects of FADN are worth considering for the future IAS on agriculture.
Abstract:
We argue the importance both of developing simple sufficient conditions for the stability of general multiclass queueing networks and also of assessing such conditions under a range of assumptions on the weight of the traffic flowing between service stations. To achieve the former, we review a peak-rate stability condition and extend its range of application; for the latter, we introduce a generalisation of the Lu-Kumar network on which the stability condition may be tested for a range of traffic configurations. The peak-rate condition is close to exact when the between-station traffic is light, but degrades as this traffic increases.
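For context (standard background, not a statement of the paper's condition): in a multiclass network the usual nominal-load condition requires, at every station j,

    \rho_j = \sum_k \lambda_k \, m_{k,j} < 1,

where \lambda_k is the arrival rate of class k and m_{k,j} its total mean service requirement at station j. The Lu-Kumar network is the classical example showing that this condition alone is not sufficient for stability under certain priority policies, which is why sufficient conditions such as the peak-rate condition reviewed here are of interest.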
Abstract:
The network revenue management problem can be formulated as a stochastic dynamic programming problem (the DP, with "optimal" solution value V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (DLP and PHLP respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently, Adelman [1] and Topaloglu [18] have proposed alternative upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as it appears from empirical studies and practical experience that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
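For reference, the DLP referred to here is the standard deterministic linear program of network RM (a sketch in standard notation, assumed rather than quoted from this paper):

    \max_{y} \ \sum_j f_j y_j \quad \text{s.t.} \ \sum_j a_{ij} y_j \le c_i \ \ \forall i, \qquad 0 \le y_j \le E[D_j] \ \ \forall j,

where f_j is the fare of product j, a_{ij} the leg-incidence, c_i the leg capacity, and E[D_j] the expected demand. The optimal dual prices \pi_i of the capacity constraints serve as bid prices: a request for product j is accepted only if f_j \ge \sum_i a_{ij} \pi_i. The RLP replaces E[D_j] with sampled demand realizations and averages the optimal values over samples, which corresponds to the perfect-hindsight (PHLP) bound mentioned in the abstract.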