996 results for Instrument selection
Abstract:
The last decade has seen growing interest in the problems raised by weak instrumental variables in the econometric literature, that is, situations where the instrumental variables are weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood-ratio and Lagrange-multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on models of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on consumption-based asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], where the instrumental variables are weakly correlated with the variable to be instrumented, have shown that using these statistics often leads to unreliable results. One remedy to this problem is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables that are assumed exogenous are not in fact exogenous. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if the endogeneity of instrumental variables poses major difficulties for statistical inference, can one propose test procedures that select instruments when they are both strong and valid? Is it possible to propose instrument-selection procedures that remain valid even in the presence of weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays. The first essay is published in Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In this essay, we analyze the effects of instrument endogeneity on two identification-robust test statistics, the Anderson and Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that these procedures are in general consistent against the presence of invalid instruments (i.e. they detect the presence of invalid instruments) regardless of instrument quality (strong or weak). We also describe cases where this consistency may not hold, but where the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid.
Second, when the instruments are locally exogenous (i.e. the endogeneity parameter converges to zero as the sample size increases), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterize the situations where the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as in the case of valid instruments (despite the presence of invalid instruments). The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) type specification tests as well as on the Revankar and Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis provides several new insights as well as extensions of earlier procedures. Indeed, the characterization of the finite-sample distribution of these statistics allows the construction of exact Monte Carlo exogeneity tests even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we provide a characterization of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. Guggenberger's (2008) conclusion concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, we present a Monte Carlo analysis indicating that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when the instruments are weak and endogeneity is moderate [a conclusion similar to that of Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental variables method should only be applied when one is confident of having strong instruments. Hence, Guggenberger's (2008) conclusions are qualified and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relation between trade openness and economic growth, and the well-known problem of returns to education. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even in the presence of non-Gaussian errors. Unlike the usual exogeneity test procedures (Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test makes it possible to address a common problem in empirical work, namely testing the partial exogeneity of a subset of variables.
We propose two new pre-test estimators based on the Wald test which perform better (in terms of mean squared error) than the usual IV estimator when the instrumental variables are weak and endogeneity is moderate. We also show that this test can serve as an instrumental-variable selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that the mother's education explains her son's dropping out of school, that output is an endogenous variable in the estimation of the firm's cost function, and that the price of fuel is a valid instrument for output. The fourth essay addresses two important problems in the econometric literature. First, although the initial or extended Wald test allows the construction of confidence regions and tests of linear restrictions on covariances, it assumes that the model parameters are identified. When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument) inference procedure for building confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. The results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control size and have power even when the instruments are weak. This allows us to propose a valid instrumental-variable selection procedure even when there is an identification problem. The instrument selection procedure is based on two new pre-test estimators that combine the usual IV estimator and partial IV estimators. Our simulations show that: (1) like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and endogeneity is moderate; (2) the pre-test estimators have overall excellent performance compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relation between trade openness and economic growth, and the returns-to-education model. In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are strongly weak in the second [Bound (1995), Doko and Dufour (2009)]. Consistent with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
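As context for the simulation findings summarized above (ordinary least squares outperforming two-stage least squares under weak instruments and moderate endogeneity), here is a minimal Monte Carlo sketch. It is not the thesis's experimental design: the data-generating process, parameter values and sample size are illustrative assumptions only.

```python
# Minimal weak-instrument Monte Carlo sketch (illustrative, not the thesis design):
# y = x*beta + u,  x = pi*z + v,  with corr(u, v) = rho (endogeneity)
# and a small first-stage coefficient pi (weak instrument).
import numpy as np

rng = np.random.default_rng(0)

def simulate_once(n=200, beta=1.0, pi=0.05, rho=0.3):
    z = rng.normal(size=n)                        # single instrument
    u, v = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
    x = pi * z + v                                # weakly identified regressor
    y = beta * x + u
    b_ols = (x @ y) / (x @ x)                     # OLS slope
    xhat = z * ((z @ x) / (z @ z))                # first-stage fitted values
    b_2sls = (xhat @ y) / (xhat @ x)              # 2SLS slope
    return b_ols, b_2sls

draws = np.array([simulate_once() for _ in range(2000)])
mse = ((draws - 1.0) ** 2).mean(axis=0)
print(f"MSE  OLS: {mse[0]:.3f}   2SLS: {mse[1]:.3f}")
# With a weak instrument and moderate endogeneity, the empirical MSE of OLS is
# typically smaller than that of 2SLS, in line with the comparison discussed above.
```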
Abstract:
The paper addresses the question of which factors drive the formation of policy preferences when there are remaining uncertainties about the causes and effects of the problem at stake. To answer this question we examine policy preferences for reducing aquatic micropollutants, a specific case of water protection policy, across different actor groups (e.g. state, science, target groups). Here, we contrast two types of policy preferences: a) preventive or source-directed policies, which mitigate pollution at the source so that pollutants do not reach the water; and b) reactive or end-of-pipe policies, which filter water already contaminated by pollutants. In a second step, we analyze the drivers of actors’ policy preferences by focusing on three sets of explanations, i.e. participation, affectedness and international collaborations. The analysis of our survey data, qualitative interviews and regression analysis of the Swiss political elite shows that participation in the policy-making process leads to knowledge exchange and reduces uncertainties about the policy problem, which promotes preferences for preventive policies. Likewise, actors who are affected by the consequences of micropollutants, such as consumer or environmental associations, opt for anticipatory policies. Interestingly, we find that uncertainties about the effectiveness of preventive policies can promote preferences for end-of-pipe policies. While preventive measures often rely on (uncertain) behavioral changes of target groups, reactive policies are more reliable when it comes to fulfilling defined policy goals. Finally, we find that in a transboundary water management context, actors with international collaborations prefer policies that produce immediate and reliable outcomes.
Abstract:
A wide range of metrology processes are involved in the manufacture of large products. In addition to the traditional tool-setting and product-verification operations, increasingly flexible metrology-enabled automation is also being used. Faced with many possible measurement problems and a very large number of metrology instruments employing diverse technologies, the selection of the appropriate instrument for a given task can be highly complex. Also, as metrology has become a key manufacturing process, it should be considered in the early stages of design, and there is currently very little research to support this. This paper provides an overview of the important selection criteria for typical measurement processes and presents some novel selection strategies. Metrics that can be used to assess measurability are also discussed. A prototype instrument selection and measurability analysis application is also presented, with discussion of how this can be used as the basis for development of a more sophisticated measurement planning tool. © 2010 Authors.
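To illustrate the kind of criteria-based filtering and ranking that such a selection tool might perform, a hedged sketch follows. The instrument catalogue, criteria names and weights are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch of criteria-based instrument selection: filter candidates
# that satisfy hard task requirements, then rank the survivors by a weighted score.
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str
    uncertainty_mm: float    # measurement uncertainty over the working volume
    range_m: float           # maximum working range
    acquisition_rate_hz: float
    cost_relative: float     # normalized 0..1 (1 = most expensive)

CANDIDATES = [
    Instrument("laser tracker", 0.05, 40.0, 1000.0, 0.9),
    Instrument("photogrammetry", 0.10, 10.0, 10.0, 0.4),
    Instrument("structured light", 0.03, 2.0, 1.0, 0.5),
]

def select(task_uncertainty_mm, task_range_m, weights=(0.5, 0.3, 0.2)):
    """Return feasible instruments ranked by a simple weighted score."""
    feasible = [i for i in CANDIDATES
                if i.uncertainty_mm <= task_uncertainty_mm and i.range_m >= task_range_m]
    w_u, w_r, w_c = weights
    def score(i):
        return (w_u * (task_uncertainty_mm - i.uncertainty_mm) / task_uncertainty_mm
                + w_r * min(i.acquisition_rate_hz / 100.0, 1.0)
                - w_c * i.cost_relative)
    return sorted(feasible, key=score, reverse=True)

for inst in select(task_uncertainty_mm=0.1, task_range_m=5.0):
    print(inst.name)
```

In a real measurability analysis the hard constraints and weights would come from the task definition and design model rather than being fixed in code.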
Abstract:
Metrology processes used in the manufacture of large products include tool setting, product verification and flexible metrology enabled automation. The range of applications and instruments available makes the selection of the appropriate instrument for a given task highly complex. Since metrology is a key manufacturing process it should be considered in the early stages of design. This paper provides an overview of the important selection criteria for typical measurement processes and presents some novel selection strategies. Metrics which can be used to assess measurability are also discussed. A prototype instrument selection and measurability analysis application is presented with discussion of how this can be used as the basis for development of a more sophisticated measurement planning tool. © Springer-Verlag Berlin Heidelberg 2010.
Abstract:
The goal of this paper is to provide clinicians and researchers, who may not be experts in psychometrics, with a guide for the selection and adaptation of an instrument for clinical research. Issues related to the concept to be measured, the targeted clientele, the selection criteria for the instrument (algorithm), the strategies for translation and adaptation, as well as potential bias related to the administration of an instrument are reviewed and discussed.
Abstract:
Collaborative efforts between the Neutronics and Target Design Group at the Instituto de Fusión Nuclear and the Molecular Spectroscopy Group at the ISIS Pulsed Neutron and Muon Source date back to 2012 in the context of the ESS-Bilbao project. The rationale for these joint activities was twofold, namely: to assess the realm of applicability of the low-energy neutron source proposed by ESS-Bilbao; and to explore instrument capabilities for pulsed-neutron techniques in the range 0.05-3 ms, a time range where ESS-Bilbao and ISIS could offer a significant degree of synergy and complementarity. As part of this collaboration, J.P. de Vicente has spent a three-month period within the ISIS Molecular Spectroscopy Group, to gain hands-on experience on the practical aspects of neutron-instrument design and the requisite neutron-transport simulations. To date, these activities have resulted in a joint MEng thesis as well as a number of publications and contributions to national and international conferences. Building upon these previous works, the primary aim of this report is to provide a self-contained discussion of general criteria for instrument selection at ESS-Bilbao, the first accelerator-driven, low-energy neutron source designed in Spain. To this end, Chapter 1 provides a brief overview of the current design parameters of the accelerator and target station. Neutron moderation is covered in Chapter 2, where we take a closer look at two possible target-moderator-reflector configurations and pay special attention to the spectral and temporal characteristics of the resulting neutron pulses. This discussion provides a necessary starting point to assess the operation of ESSB in short- and long-pulse modes. These considerations are further explored in Chapter 3, dealing with the primary characteristics of ESS-Bilbao as a short- or long-pulse facility in terms of accessible dynamic range and spectral resolution. Other practical aspects including background suppression and the use of fast choppers are also discussed. The guiding principles introduced in the first three chapters are put to use in Chapter 4, where we analyse in some detail the capabilities of a small-angle scattering instrument, as well as how specific scientific requirements can be mapped onto the optimal use of ESS-Bilbao for condensed-matter research. Part 2 of the report contains additional supporting documentation, including a description of the ESSB McStas component, a detailed characterisation of moderator response and neutron pulses, and estimates of parameters associated with the design and operation of neutron choppers. In closing this brief foreword, we wish to thank both ESS-Bilbao and ISIS for their continuing encouragement and support along the way.
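As an illustration of the kind of back-of-the-envelope analysis behind the dynamic-range and resolution discussion mentioned for Chapter 3, the sketch below estimates the usable wavelength band per source pulse and a time of flight from source frequency and flight-path length. The frequency and distance used are illustrative assumptions, not ESS-Bilbao design values.

```python
# Hedged sketch: wavelength bandwidth available per source pulse before frame
# overlap, using the de Broglie relation v [m/s] ~= 3956 / lambda [angstrom].
H_OVER_M = 3956.0  # h / m_n in units of angstrom * m / s

def bandwidth_angstrom(source_frequency_hz: float, flight_path_m: float) -> float:
    """Wavelength band (in angstroms) that fits within one source period."""
    period_s = 1.0 / source_frequency_hz
    return H_OVER_M * period_s / flight_path_m

def tof_ms(wavelength_angstrom: float, flight_path_m: float) -> float:
    """Time of flight in milliseconds for a given wavelength and flight path."""
    velocity = H_OVER_M / wavelength_angstrom      # m/s
    return 1e3 * flight_path_m / velocity

if __name__ == "__main__":
    freq, path = 20.0, 20.0                        # Hz, metres (assumed values)
    print(f"bandwidth: {bandwidth_angstrom(freq, path):.2f} A")
    print(f"TOF of a 5 A neutron: {tof_ms(5.0, path):.2f} ms")
```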
Abstract:
Background: Patients who play musical instruments (especially wind and stringed instruments) and vocalists are prone to particular types of orofacial problems. Some problems are caused by playing and some are the result of dental treatment. This paper proposes to give an insight into these problems and practical guidance to general practice dentists. Method: Information in this paper is gathered from studies published in dental, music and occupational health journals, and from discussions with career musicians and music teachers. Results: Orthodontic problems, soft tissue trauma, focal dystonia, denture retention, herpes labialis, dry mouth and temporomandibular joint (TMJ) disorders were identified as orofacial problems of career musicians. Options available for prevention and palliative treatment as well as instrument selection are suggested to overcome these problems. Conclusions: Career musicians express reluctance to attend dentists who are not sensitive to their specific needs. General practitioner dentists who understand how the instruments impact on the orofacial structures and are aware of potential problems faced by musicians are able to offer preventive advice and supportive treatment to these patients, especially those in the early stages of their career.
Abstract:
This study deals with cognitive competences and abilities that are relevant to selection and education in Information Technology (IT). These competences relate to problem solving, decision making, and practical intelligence involving the mobilization of scholastic and extracurricular knowledge. The research aimed to contribute to the improvement of a selection instrument consisting of five matrices of skills (dealing with objectives and prospection), as well as to the development and understanding of the skills involved in IT education. This was done through an analysis of the selection instrument used in the first selection process held at the Metropole Digital Institute of the Federal University of Rio Grande do Norte in Brazil; the instrument was evaluated with respect to IT education (basic training with emphasis on Web programming and electronics). The methodology was quantitative, involving performance scores related to course delivery. An ANOVA analysis of variance was carried out along with a descriptive analysis of socioeconomic data; no meaningful relation between parental education and student performance in the course was observed. These analyses pointed to the importance of, and need for, policies reserving places for public school students. A Spearman correlation analysis was carried out between performance on the selection instrument and performance in the training course; the instrument proved to be a significant, moderate predictor of performance in the course as a whole. Cluster and regression analyses were also performed. The cluster analysis identified performance groups ranging from average to below average. The regression analysis pointed to associations between the criterion variables (average performance in the basic and advanced modules) and the explanatory variables (the five matrices), with matrices 1 and 3 emerging as the strongest predictors. In all the above analyses, the correlation between the instrument and the course was moderate. This can be related to aspects of the course itself, such as its emphasis on assessment, on technical content and practical (educational) skills, and on the competences and skills targeted by the selection. It is known that the mediation of technological artifacts in a cultural context can foster the development of skills and abilities relevant to IT training. This study provides material for reflecting on the adoption of the selection instrument and on IT training at the Institute, and offers grounds for an interdisciplinary discussion enriching areas such as Psychology and Information Technology with respect to the competences and skills relevant to IT training.
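A minimal sketch of the kind of rank-correlation check reported above (selection-instrument score versus course performance) is shown below; the scores are fabricated toy data, not data from the study.

```python
# Hedged sketch: Spearman rank correlation between selection-instrument scores
# and course performance. The numbers are toy data, not the study's data.
from scipy.stats import spearmanr

selection_scores = [62, 71, 55, 90, 47, 68, 83, 74, 59, 80]              # entrance instrument
course_averages = [6.1, 7.0, 5.2, 8.8, 5.9, 6.4, 8.1, 7.5, 5.5, 7.2]     # basic + advanced modules

rho, p_value = spearmanr(selection_scores, course_averages)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A rho around 0.4-0.6 would correspond to the "moderate" predictive value
# described in the abstract.
```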
Abstract:
This paper describes how dimensional variation management could be integrated throughout design, manufacture and verification, to improve quality while reducing cycle times and manufacturing cost in the Digital Factory environment. Initially, variation analysis is used to optimize tolerances during product and tooling design, and also results in the creation of a simplified representation of product key characteristics. This simplified representation can then be used to carry out measurability analysis and process simulation. The link established between the variation analysis model and measurement processes can subsequently be used throughout the production process to automatically update the variation analysis model in real time with measurement data. This ‘live’ simulation of variation during manufacture will allow early detection of quality issues and facilitate autonomous measurement assisted processes such as predictive shimming. A study is described showing how these principles can be demonstrated using commercially available software combined with a number of prototype applications operating as discrete modules. The commercially available modules include Catia/Delmia for product and process design, 3DCS for variation analysis and Spatial Analyzer for measurement simulation. Prototype modules are used to carry out measurability analysis and instrument selection. Realizing the full potential of metrology in the Digital Factory will require that these modules are integrated, and a software architecture to facilitate this is described. Crucially, this integration must facilitate the use of real-time metrology data describing the emerging assembly to update the digital model.
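To make the "measurement assisted process" idea concrete, here is a hedged sketch of a predictive-shimming style calculation in which measured interface gaps are compared against a nominal model and per-location shim thicknesses are derived. The point names, nominal gaps and tolerance are illustrative assumptions, not part of the software modules described in the paper.

```python
# Hedged sketch of a predictive-shimming style update: measured gaps at interface
# points are compared with nominal values and shim thicknesses derived per point.
nominal_gap_mm = {"P1": 1.00, "P2": 1.00, "P3": 1.00, "P4": 1.00}
measured_gap_mm = {"P1": 1.32, "P2": 1.05, "P3": 0.97, "P4": 1.48}
gap_tolerance_mm = 0.10

def shim_plan(nominal, measured, tolerance):
    """Return a shim thickness for every point whose measured gap exceeds tolerance."""
    plan = {}
    for point, nominal_value in nominal.items():
        deviation = measured[point] - nominal_value
        if deviation > tolerance:
            plan[point] = round(deviation, 2)   # fill the excess gap with a shim
    return plan

print(shim_plan(nominal_gap_mm, measured_gap_mm, gap_tolerance_mm))
# {'P1': 0.32, 'P4': 0.48}
```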
Abstract:
There are many steps involved in developing a drug candidate into a formulated medicine and many involve analysis of chemical interaction or physical change. Calorimetry is particularly suited to such analyses as it offers the capacity to observe and quantify both chemical and physical changes in virtually any sample. Differential scanning calorimetry (DSC) is ubiquitous in pharmaceutical development, but the related technique of isothermal calorimetry (IC) is complementary and can be used to investigate a range of processes not amenable to analysis by DSC. Typically, IC is used for longer-term stability indicating or excipient compatibility assays because both the temperature and relative humidity (RH) in the sample ampoule can be controlled. However, instrument design and configuration, such as titration, gas perfusion or ampoule-breaking (solution) calorimetry, allow quantification of more specific values, such as binding enthalpies, heats of solution and quantification of amorphous content. As ever, instrument selection, experiment design and sample preparation are critical to ensuring the relevance of any data recorded. This article reviews the use of isothermal, titration, gas-perfusion and solution calorimetry in the context of pharmaceutical development, with a focus on instrument and experimental design factors, highlighted with examples from the recent literature. © 2011 Elsevier B.V.
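As a concrete illustration of one of the quantifications mentioned above (amorphous content), the hedged sketch below assumes the common approach of ratioing the measured crystallization heat of a sample against that of a fully amorphous reference; the enthalpy values are made up for illustration.

```python
# Hedged sketch: amorphous content estimated from isothermal calorimetry by
# comparing the sample's crystallization heat with a 100% amorphous reference.
def amorphous_content_percent(sample_heat_j_per_g: float,
                              reference_heat_j_per_g: float) -> float:
    """Amorphous fraction (%) as the ratio of crystallization enthalpies."""
    return 100.0 * sample_heat_j_per_g / reference_heat_j_per_g

reference = 48.0   # J/g measured on a fully amorphous reference (assumed value)
sample = 1.2       # J/g measured on a nominally crystalline batch (assumed value)
print(f"Amorphous content: {amorphous_content_percent(sample, reference):.1f} %")  # 2.5 %
```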
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Background: Choosing an adequate measurement instrument depends on the proposed use of the instrument, the concept to be measured, the measurement properties (e.g. internal consistency, reproducibility, content and construct validity, responsiveness, and interpretability), the requirements, the burden for subjects, and costs of the available instruments. As far as measurement properties are concerned, there are no sufficiently specific standards for the evaluation of measurement properties of instruments to measure health status, and also no explicit criteria for what constitutes good measurement properties. In this paper we describe the protocol for the COSMIN study, the objective of which is to develop a checklist that contains COnsensus-based Standards for the selection of health Measurement INstruments, including explicit criteria for satisfying these standards. We will focus on evaluative health-related patient-reported outcomes (HR-PROs), i.e. patient-reported health measurement instruments used in a longitudinal design as an outcome measure, excluding health care related PROs, such as satisfaction with care or adherence. The COSMIN standards will be made available in the form of an easily applicable checklist. Method: An international Delphi study will be performed to reach consensus on which and how measurement properties should be assessed, and on criteria for good measurement properties. Two sources of input will be used for the Delphi study: (1) a systematic review of properties, standards and criteria of measurement properties found in systematic reviews of measurement instruments, and (2) an additional literature search of methodological articles presenting a comprehensive checklist of standards and criteria. The Delphi study will consist of four (written) Delphi rounds, with approximately 30 expert panel members with different backgrounds in clinical medicine, biostatistics, psychology, and epidemiology. The final checklist will subsequently be field-tested by assessing the inter-rater reproducibility of the checklist. Discussion: Since the study will mainly be anonymous, problems that are commonly encountered in face-to-face group meetings, such as the dominance of certain persons in the communication process, will be avoided. By performing a Delphi study and involving many experts, the likelihood that the checklist will have sufficient credibility to be accepted and implemented will increase.
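The field test mentioned above (inter-rater reproducibility of the checklist) is typically summarized with an agreement statistic such as Cohen's kappa; a minimal sketch with invented ratings follows. It is not part of the COSMIN protocol itself, only an illustration of the statistic.

```python
# Hedged sketch: Cohen's kappa for two raters scoring the same checklist items.
# The ratings are invented for illustration; they are not COSMIN data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

rater_1 = ["adequate", "adequate", "poor", "good", "good", "adequate", "poor", "good"]
rater_2 = ["adequate", "good",     "poor", "good", "good", "adequate", "poor", "poor"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```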
Abstract:
This study was initiated in response to the Junior Division Review (1985) published by the Ministry of Education for the Province of Ontario. Curriculum integration is an element used within the educational paradigm designed by the Ontario Ministry of Education. It is a term frequently verbalized by educators in this province, but because of limited resource support regarding this methodology, it was open to broad interpretation, resulting in extreme variation in its implementation. Indeed, the Ministry intimated that it was not occurring to any significant degree across the province. The objective of this thesis was to define integration in the junior classroom and design a measurement instrument which would in turn highlight indicators of curriculum integration. The study made a preliminary, field-based survey of educational professionals in order to generate a relevant description of integrated curriculum programming as defined in the junior classroom. The description was a compilation of views expressed by a random selection of teachers, consultants, supervisory officers and principals. The survey revealed a much more comprehensive view of the attributes of integrated programming than tradition would dictate and resulted in a functional definition that was broader than past practices. Based on the information generated by this survey, an instrument outlining program criteria of an integrated junior classroom was devised. This measurement instrument, designed for all levels of educators, was named "The Hansson Instrument for the Measurement of Program Integration in the Junior Classroom". It reflected five categories intrinsic to the methodology of integration: Teacher Behaviour, Student Behaviour, Classroom Layout, Classroom Environment and Programming. Each category and the items therein were successfully tested in validity and reliability checks. Interestingly, the individual class was found to be the major variable in the measurement of integrated programming in the junior division. The instrument demonstrated potential not only as an initial measure of the degree of integrated curriculum, but as a guide to strategies for implementing such a methodology.
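The reliability checks mentioned for each category are commonly reported as an internal-consistency coefficient such as Cronbach's alpha; a minimal sketch with invented item scores is given below. It is only an illustration of the statistic, not the thesis's actual analysis or data.

```python
# Hedged sketch: Cronbach's alpha for the internal consistency of the items in
# one category of an observation instrument. The item scores are invented.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = classrooms observed, columns = items."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Five classrooms rated on four items of a single category (e.g. "Classroom Layout").
ratings = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```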
Abstract:
The cooled infrared filters and dichroic beam splitters manufactured for the Mid-Infrared Instrument are key optical components for the selection and isolation of wavelengths in the study of astrophysical properties of stars, galaxies, and other planetary objects. We describe the spectral design and manufacture of the precision cooled filter coatings for the spectrometer (7 K) and imager (9 K). Details of the design methods used to achieve the spectral requirements, selection of thin film materials, deposition technique, and testing are presented together with the optical layout of the instrument. (C) 2008 Optical Society of America.