44 results for Stair Nested Designs
Abstract:
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. In order to obtain information about each possible data division, we carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. Robustness and power are studied under several experimental conditions: different autocorrelation levels and different effect sizes, as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. Satisfactory Type II error rates were obtained only for large treatment effects. The relationship between the lengths of the four phases appeared to be an important factor for both the robustness and the power of the randomization test.
Abstract:
N = 1 designs involve repeated registrations of the behaviour of the same experimental unit; the measurements obtained are often few due to time limitations and are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different procedures for analysing data from single-case AB designs are discussed, presenting their main features and reviewing the results reported by previous studies. Randomization tests represent one of the statistical methods that seemed to perform well in terms of controlling false alarm rates. In the experimental part of the study a new simulation approach is used to test the performance of randomization tests, and the results suggest that the technique is not always robust against violation of the independence assumption. Moreover, sensitivity proved to be generally unacceptably low for series lengths of 30 and 40. Considering the evidence available, there does not seem to be an optimal technique for single-case data analysis.
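As a minimal sketch of the kind of randomization test discussed above, the following function tests a single-case AB design by comparing the observed phase-mean difference against the differences obtained at all admissible intervention points. The test statistic (difference of phase means), the minimum phase length of three, and all names and data are illustrative assumptions, not the simulation procedure used in the study:

```python
from statistics import mean

def ab_randomization_test(y, actual_start, min_phase=3):
    """Randomization test for a single-case AB design.

    Test statistic: treatment-phase mean minus baseline-phase mean.
    The p value is the proportion of admissible intervention points
    whose statistic is at least as extreme as the observed one.
    """
    def stat(start):
        return mean(y[start:]) - mean(y[:start])

    observed = stat(actual_start)
    # Admissible intervention points leave at least `min_phase`
    # observations in each phase.
    candidates = range(min_phase, len(y) - min_phase + 1)
    stats = [stat(s) for s in candidates]
    p = sum(abs(t) >= abs(observed) for t in stats) / len(stats)
    return observed, p

# A series with a clear level change at the true intervention point.
obs, p = ab_randomization_test([2, 3, 2, 3, 2, 8, 9, 8, 9, 8],
                               actual_start=5)
```

Note how the number of admissible intervention points puts a floor under the attainable p value (here 1/5 = .20), which illustrates why sensitivity is low for short series.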
Abstract:
The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions regarding intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and one eliminating autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approximate the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false alarm rates, but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
Abstract:
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions, and the choice of method affects the results obtained once the randomization test has been applied. The goal of the study was therefore to examine these effects in more detail. The discrepancies between the approaches become evident when data with zero treatment effect are considered, and they have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
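The kind of data generation described above can be sketched generically as follows: an ABAB series built from first-order autoregressive errors plus a level change in the B phases. The AR(1) error model, the function name, and all parameter values are illustrative assumptions, not the study's exact generation procedure:

```python
import random

def simulate_abab(phase_lengths, effect, phi, seed=None):
    """Simulate an ABAB series: AR(1) errors with coefficient `phi`
    plus a level change of size `effect` in the two B phases.

    phase_lengths: (nA1, nB1, nA2, nB2)
    """
    rng = random.Random(seed)
    # Phase indicator: 0 for the A phases, 1 for the B phases.
    indicator = []
    for i, length in enumerate(phase_lengths):
        indicator += [i % 2] * length
    y, prev = [], 0.0
    for z in indicator:
        e = phi * prev + rng.gauss(0.0, 1.0)  # AR(1) error term
        prev = e
        y.append(effect * z + e)
    return y

series = simulate_abab((5, 5, 5, 5), effect=10.0, phi=0.3, seed=1)
```

Running the generator many times at a given effect size and autocorrelation level, and applying the randomization test to each replicate, yields the empirical error rates the abstract refers to.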
Abstract:
In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis in order to add consistency to decisions made about the existence of a functional relation, without losing sight of the need for a methodological evaluation of which stimuli and which reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications involve comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications include nine data patterns, defined by the presence and type of effect, for both ABAB and multiple baseline designs. Although none of the techniques is completely flawless in detecting a functional relation only when it is present and not when it is absent, an option based on projecting split-middle trend and considering data variability as in exploratory data analysis proves to be the best performer for most data patterns. We suggest that information on whether a functional relation has been demonstrated should be included in meta-analyses. It is also possible to weight each study by the inverse of the data variability measure used in the quantification assessing the functional relation. We offer easy-to-use code for open-source software implementing some of the quantifications.
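One of the quantification options mentioned, projecting a split-middle baseline trend into the treatment phase, can be sketched as follows. This is a generic illustration, not the authors' code: the half-splitting convention for odd-length baselines and all names are assumptions.

```python
from statistics import median

def split_middle_trend(baseline):
    """Split-middle trend: the line through the (median time,
    median value) points of the first and second halves of the
    baseline. For odd lengths the middle observation is dropped
    (one common convention among several)."""
    n = len(baseline)
    half = n // 2
    first, second = baseline[:half], baseline[n - half:]
    x1, y1 = median(range(half)), median(first)
    x2, y2 = median(range(n - half, n)), median(second)
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

def project(slope, intercept, t):
    """Projected baseline value at time point t."""
    return intercept + slope * t

slope, intercept = split_middle_trend([2, 4, 6, 8])
projected = project(slope, intercept, 4)  # first treatment point
```

Each projected value is then compared with the actual treatment measurement, optionally allowing for data variability as in exploratory data analysis.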
Abstract:
The present study builds on a previous proposal for assigning probabilities to the outcomes computed with different primary indicators in single-case studies. These probabilities are obtained by comparing the outcome to previously tabulated reference values and reflect the likelihood of the results if there were no intervention effect. The current study explores how well different metrics are translated into p values in the context of simulated data. Furthermore, two published multiple baseline data sets are used to illustrate how well the probabilities reflect the intervention effectiveness as assessed by the original authors. Finally, the importance of which primary indicator is used in each data set to be integrated is explored, with two ways of combining probabilities: a weighted average and a binomial test. The results indicate that the translation into p values works well for the two nonoverlap procedures, with the results for the regression-based procedure diverging due to some undesirable features of its performance. These p values, both individually and combined, were well aligned with the effectiveness observed for the real-life data. The results suggest that assigning probabilities can be useful for translating the primary measure into a common metric, using these probabilities as additional evidence of the importance of behavioral change, complementing visual analysis and professionals' judgments.
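The two combination methods mentioned can be sketched generically as follows. The significance threshold of .05 and the use of series lengths as weights are illustrative assumptions, not necessarily the study's exact choices:

```python
from math import comb

def binomial_combination(p_values, alpha=0.05):
    """Combine independent p values by counting how many fall below
    `alpha` and computing the probability of at least that many
    significant results occurring under the null hypothesis."""
    n = len(p_values)
    k = sum(p < alpha for p in p_values)
    return sum(comb(n, i) * alpha**i * (1 - alpha)**(n - i)
               for i in range(k, n + 1))

def weighted_average(p_values, weights):
    """Weighted mean of the p values, e.g. weighting each study's
    p value by its series length."""
    return sum(p * w for p, w in zip(p_values, weights)) / sum(weights)

combined = binomial_combination([0.01, 0.03, 0.20, 0.60])
averaged = weighted_average([0.04, 0.10], weights=[30, 10])
```

The binomial test asks whether significant individual results occur more often than chance would allow; the weighted average instead summarizes the overall evidence, letting longer (more informative) series count for more.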
Abstract:
Prevention has been a central issue in recent health care policy orientations. This renews the interest in how different organizational designs and the definition of payment schemes to providers may affect the incentives to provide preventive health care. We present both normative and positive analyses of the change from independent providers to integrated services. We show that the evaluation of that change depends on the particular way providers are paid. We focus on the externality resulting from referral decisions from primary to acute care providers. This makes our analysis complementary to most work in the literature and allows us to address the issue of preventive health care more directly.
Abstract:
The decisions of many individuals and social groups, taken according to well-defined objectives, are causing serious social and environmental problems, in spite of following the dictates of economic rationality. There are many examples of serious problems for which there are not yet appropriate solutions, such as the management of scarce natural resources, including aquifer water, or the distribution of space among incompatible uses. In order to address these problems, the paper first characterizes the resources and goods involved from an economic perspective. Then, for each case, the paper notes that there is a serious divergence between individual and collective interests and, where possible, designs a procedure for resolving the conflict of interests. This procedure shows the real opportunities for applying economic theory, especially the theory of collective goods and externalities. The limitations of conventional economic analysis are shown and the opportunity to correct its shortfalls is examined. Many environmental problems, such as climate change, have an impact on different generations that do not participate in present decisions. The paper shows that for these cases the solutions suggested by economic theory are not valid. Furthermore, conventional methods of economic valuation (which usually assist decision-makers) are unable to account for the existence of different generations and tend to ignore long-term impacts. The paper analyzes how economic valuation methods could account for the costs and benefits enjoyed by present and future generations. It studies an appropriate consideration of preferences for future consumption and the incorporation of sustainability as a requirement in social decisions, which implies not only more efficiency but also a fairer distribution between generations than the one implied by conventional economic analysis.
Abstract:
In this project we extended the itinerary-protection language for mobile agents and its implementation in order to extract the local tasks. These tasks are encrypted within the agent's itinerary and describe the behaviour the agent will have, so a class capable of extracting them is needed. Our compiler creates this class from the extraction architecture. We have tested it with the simple and nested architectures, and the results with real itineraries were as expected. In practical terms, this project further completes MARISM-A, the integrated development environment for mobile agents.
Abstract:
The aim of this research is to define a theoretical and methodological framework for the study of technological change in Archaeology. The model emphasizes characterizing the compromises that shape a technology and evaluating them in terms of situational factors (technical, economic, political, social and ideological). The model has been applied to a specific case study: the production of Roman amphorae around the turn of the Era in the province of Tarraconensis. The technological study of the vessels was carried out using several analytical techniques: X-ray fluorescence (XRF), X-ray diffraction (XRD), optical microscopy (OM) and scanning electron microscopy (SEM). The data obtained also make it possible to establish reference groups for each amphora production centre and thus to identify the provenance of the individual vessels recovered at consumption centres. Since the amphorae under study are artefacts specifically designed to be stowed aboard a ship and to serve as transport containers, the study includes the characterization of their mechanical properties of fracture resistance and toughness. In this regard, and for the first time, Finite Element Analysis (FEA) was applied to determine the behaviour of the different amphora designs when subjected to various forces arising from use. FEA makes it possible to simulate on a computer the activities in which the amphorae would have been involved during their use and to evaluate their technical behaviour. The results show a close fit between the theoretical formulations and the analytical programme implemented for this study. Regarding the case study, the results show great variability in the technological choices made by the potters of different workshops, but also over the working life of a single workshop. The application of the model has made it possible to propose an explanation for the change in the design of Roman amphorae.
Abstract:
Research project carried out during a stay at the Laboratory of Archaeometry of the National Centre of Scientific Research "Demokritos" in Athens, Greece, between June and September 2006. This study forms part of a broader investigation of the technological change documented in the production of Roman-type amphorae during the 1st century BC and the 1st century AD in the coastal territories of Catalonia. One part of this study involves calculating the mechanical properties of these amphorae and evaluating them according to amphora typology, using Finite Element Analysis (FEA). FEA is a numerical approach that originated in the engineering sciences and has been used to estimate the mechanical behaviour of a model in terms of, for example, deformation and stress. An object, or rather its model, is divided into sub-domains called finite elements, to which the mechanical properties of the material under study are assigned. These finite elements are connected to form a mesh whose constraints can be defined. When a given force is applied to the model, the behaviour of the object can be estimated by means of the set of linear equations that define the performance of the finite elements, providing a good approximation for describing the structural deformation. This computer simulation is therefore an important tool for understanding the functionality of archaeological ceramics. The procedure represents a quantitative model for predicting the failure of a ceramic object when subjected to different loading conditions. The model has been applied to different amphora typologies. Preliminary results show significant differences between the pre-Roman typology and the Roman typologies, as well as among the Roman amphora designs themselves, with important archaeological implications.
Abstract:
This paper analyzes the delegation of contracting capacity in a moral hazard environment with sequential production in a project involving a principal and two agents. The agent in charge of the final production can obtain soft information about the other agent's effort choice by investing in monitoring. I investigate the circumstances under which it is optimal for the principal to use a centralized organization, in which she designs the contracts with both agents, or a decentralized organization, in which she contracts only one agent and delegates to him the power to contract the other agent. It is shown that in this setting a decentralized organization can be superior to a centralized one. This is because the principal is better off under monitoring and the incentives for an agent to invest in monitoring can be higher in a decentralized organization. The circumstances under which this is true are related to the monitoring costs and the importance of each agent for production. The results explain the recent application of the design-build method in public procurement. Journal of Economic Literature Classification Numbers: D23, D82, L14, L22. Keywords: Decentralization of Contracting, Monitoring, Moral Hazard.
Abstract:
The emergence of a new paradigm for the design of multiprocessor systems, Networks-on-Chip (NoCs), requires a way to adapt existing IP cores and allow them to be connected to the network. This project presents the design of an interface that adapts an existing IP core, the LEON3, from the AMBA bus protocol to the network protocol. In this way, building on interface ideas discussed in the state of the art, we decouple the processor from the design and topology of the network.
Abstract:
The goal of this paper is to reexamine the optimal design and efficiency of loyalty rewards in markets for final consumption goods. While the literature has emphasized the role of loyalty rewards as endogenous switching costs (which distort the efficient allocation of consumers), in this paper I analyze the ability of alternative designs to foster consumer participation and increase total surplus. First, the efficiency of loyalty rewards depends on their specific design. A commitment to the price of repeat purchases can involve substantial efficiency gains by reducing price-cost margins. However, discount policies imply higher future regular prices and are likely to reduce total surplus. Second, firms may prefer to set up inefficient rewards (discounts), especially in circumstances where a commitment to the price of repeat purchases triggers Coasian dynamics.
Abstract:
We show that standard expenditure multipliers capture the economy-wide effects of new government projects only when financing constraints are not binding. In actual policy making, however, new projects usually need financing. Under liquidity constraints, new projects are subject to two opposite effects: an income effect and a set of spending substitution effects. The former is the traditional, unrestricted multiplier effect; the latter results from the expenditure reallocation needed to uphold effective financing constraints. Unrestricted multipliers will therefore, as a general rule, be upward biased, and policy designs based upon them should be reassessed in light of the countervailing substitution effects.