978 results for Identification problem


Relevance: 30.00%

Abstract:

The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of a posteriori densities of the unknown parameters and data. Unlike in the above-cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers based on Sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
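The Bayesian recursions referred to above have no closed form, which is why the authors fall back on SMC approximations. As a rough, generic illustration of the particle-filtering machinery (not the paper's hybrid random-set recursion), a minimal bootstrap filter looks like the sketch below; the transition, likelihood, and init functions are placeholders.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles, transition, likelihood, init):
    """Minimal bootstrap (SIR) particle filter for a generic 1-D state-space model.

    transition(particles): propagate particles through the state dynamics.
    likelihood(y, particles): per-particle observation weights.
    init(n): draw the initial particle set.
    """
    particles = init(n_particles)
    estimates = []
    for y in observations:
        particles = transition(particles)            # predict
        weights = likelihood(y, particles)           # update with the observation
        weights = weights / weights.sum()
        estimates.append(float(np.sum(weights * particles)))  # posterior mean
        idx = np.random.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]                   # resample to fight weight degeneracy
    return np.array(estimates)

# Toy linear-Gaussian tracking run (placeholder dynamics, not the CDMA model):
obs = np.cumsum(np.random.randn(50)) + 0.5 * np.random.randn(50)
est = bootstrap_particle_filter(
    obs, 1000,
    transition=lambda p: p + np.random.randn(p.size),
    likelihood=lambda y, p: np.exp(-0.5 * (y - p) ** 2 / 0.25),
    init=lambda n: np.random.randn(n),
)
```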

Relevance: 30.00%

Abstract:

Bridge approach settlement and the formation of the bump is a common problem in Iowa that draws upon considerable resources for maintenance and creates a negative perception in the minds of transportation users. This research study was undertaken to investigate bridge approach problems and develop new concepts for design, construction, and maintenance that will reduce this costly problem. As a result of the research described in this report, the following changes are suggested for implementation on a pilot test basis:
• Use porous backfill behind the abutment and/or geocomposite drainage systems to improve drainage capacity and reduce erosion around the abutment.
• On a pilot basis, connect the approach slab to the bridge abutment. Change the expansion joint at the bridge to a 2-inch construction joint. Use a more effective joint sealing system at the CF joint. Change the abutment wall rebar from #5 to #7 for non-integral abutments.
• For bridges with soft foundation or embankment soils, implement practices of better compaction, preloading, ground improvement, soil removal and replacement, or soil reinforcement to reduce time-dependent post-construction settlements.

Relevance: 30.00%

Abstract:

Cannabis use among adolescents and young adults has become a major public health challenge. Several European countries are currently developing short screening instruments to identify 'problematic' forms of cannabis use in general population surveys. One such instrument is the Cannabis Use Disorders Identification Test (CUDIT), a 10-item questionnaire based on the Alcohol Use Disorders Identification Test. Previous research found that some CUDIT items did not perform well psychometrically. In the interests of improving the psychometric properties of the CUDIT, this study replaces the poorly performing items with new items that specifically address cannabis use. Analyses are based on a sub-sample of 558 recent cannabis users from a representative population sample of 5722 individuals (aged 13-32) who were surveyed in the 2007 Swiss Cannabis Monitoring Study. Four new items were added to the original CUDIT. Psychometric properties of all 14 items, as well as the dimensionality of the supplemented CUDIT were then examined using Item Response Theory. Results indicate the unidimensionality of CUDIT and an improvement in its psychometric performance when three original items (usual hours being stoned; injuries; guilt) are replaced by new ones (motives for using cannabis; missing out leisure time activities; difficulties at work/school). However, improvements were limited to cannabis users with a high problem score. For epidemiological purposes, any further revision of CUDIT should therefore include a greater number of 'easier' items.
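The abstract does not state which IRT model was fitted; a two-parameter logistic (2PL) model is a common choice for this kind of item analysis. The sketch below, with simulated data and abilities treated as known (a simplification of real IRT estimation), shows the 2PL response function and how an item's difficulty parameter relates to the "easier items" remark above.

```python
import numpy as np
from scipy.optimize import minimize

def p_endorse(theta, a, b):
    """2PL item response function: P(endorse | ability theta,
    discrimination a, difficulty b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_lik(params, theta, responses):
    a, b = params
    p = np.clip(p_endorse(theta, a, b), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Simulated data for one item (hypothetical values, not the CUDIT data):
rng = np.random.default_rng(0)
theta = rng.normal(size=558)                 # latent problem severity
responses = (rng.random(558) < p_endorse(theta, a=1.3, b=0.8)).astype(float)
fit = minimize(neg_log_lik, x0=[1.0, 0.0], args=(theta, responses))
print(fit.x)   # recovered (a, b); an 'easier' item has a lower difficulty b
```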

Relevance: 30.00%

Abstract:

We have performed a detailed study of the zenith angle dependence of the regeneration factor and distributions of events at SNO and SK for different solutions of the solar neutrino problem. In particular, we discuss the oscillatory behavior and the synchronization effect in the distribution for the LMA solution, the parametric peak for the LOW solution, etc. A physical interpretation of the effects is given. We suggest a new binning of events which emphasizes the distinctive features of the zenith angle distributions for the different solutions. We also find the correlations between the integrated day-night asymmetry and the rates of events in different zenith angle bins. The study of these correlations strengthens the identification power of the analysis.
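For reference, the integrated day-night asymmetry mentioned above is built from the day (D) and night (N) event rates; one common convention (signs and normalizations vary between analyses) is

```latex
A_{D\text{-}N} \;=\; 2\,\frac{N - D}{N + D}
```

A nonzero asymmetry signals Earth-matter regeneration of the solar neutrino flux, which is what the zenith-angle binning above is designed to resolve.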

Relevance: 30.00%

Abstract:

Recognition and identification processes for deceased persons. Determining the identity of deceased persons is a routine task performed essentially by police departments and forensic experts. This thesis highlights the processes necessary for the proper and transparent determination of the civil identities of deceased persons. The identity of a person is defined as the establishment of a link between that person ("the source") and information pertaining to the same individual ("identifiers"). Various forms of identity can emerge, depending on the nature of the identifiers; two distinct types are considered, namely civil identity and biological identity. The thesis examines four processes: identification by witnesses (the recognition process) and comparisons of fingerprints, dental data and DNA profiles (the identification processes). For the recognition process, the functioning of memory is examined, which helps to clarify the circumstances that may give rise to errors; to make the process more rigorous, a body presentation procedure is proposed for investigators. Before examining the other processes, three general concepts specific to forensic science are considered with regard to the identification of a deceased person, namely matter divisibility (Inman and Rudin), transfer (Locard) and uniqueness (Kirk). These concepts can be applied to the task at hand, although some require a slightly broader scope of application. A cross comparison of common forensic fields and the identification of deceased persons reveals certain differences, including (1) reverse positioning of the source (i.e., the source is not sought from traces; rather, the identifiers are obtained from the source); (2) the need for civil identity determination in addition to the individualisation stage; and (3) a more restricted population (closed set), rather than an open one. For fingerprints, dental and DNA data, intravariability and intervariability are examined, as well as post-mortem (PM) changes in these identifiers; ante-mortem (AM) identifiers are located and AM-PM comparisons are made. For DNA, it is shown that direct identifiers (taken from a person whose civil identity has been alleged) tend to lead to a determination of civil identity, whereas indirect identifiers (obtained from a close relative) point towards a determination of biological identity. For each process, a Bayesian model is presented which includes the sources of uncertainty deemed to be relevant, and the results of the different processes are combined into an overall outcome and methodology. The modelling of dental data presents a specific difficulty with respect to intravariability, which in itself is not quantifiable; the concept of "validity" is therefore suggested as a possible solution, drawing on various parameters that have an acknowledged impact on dental intravariability. In cases where identifying deceased persons proves extremely difficult due to the limited discrimination of certain procedures, the Bayesian approach is of great value in providing a transparent, synthesized overall assessment.
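As a toy illustration of the Bayesian combination of identifiers described above (not the thesis's actual models, which also handle intravariability and "validity"), the following sketch updates a prior over a closed set of candidate identities with per-identifier likelihoods assumed independent.

```python
import numpy as np

def posterior_over_candidates(prior, likelihoods):
    """Bayes update over a closed set of candidate identities.

    prior: prior probability for each candidate.
    likelihoods: list of arrays; each gives P(evidence | candidate)
                 for one identifier (fingerprints, dental, DNA, ...).
    """
    post = np.asarray(prior, dtype=float)
    for lik in likelihoods:            # identifiers treated as independent
        post = post * np.asarray(lik)
        post = post / post.sum()       # renormalize over the closed set
    return post

# Three candidates, uniform prior, two identifiers (hypothetical numbers):
prior = [1/3, 1/3, 1/3]
dental = [0.9, 0.1, 0.2]     # P(dental findings | candidate i)
dna    = [0.99, 0.01, 0.05]  # P(DNA comparison result | candidate i)
print(posterior_over_candidates(prior, [dental, dna]))
```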

Relevance: 30.00%

Abstract:

Trenchless technologies are methods used for the construction and rehabilitation of underground utility pipes. These methods are growing increasingly popular due to their versatility and their potential to lower project costs. However, the use of trenchless technologies in Iowa and their effects on surrounding soil and nearby structures has not been adequately documented. Surveys of and interviews with professionals working in trenchless-related industries in Iowa were conducted, and the results were analyzed and compared to survey results from the United States as a whole. The surveys focused on method familiarity, pavement distress observed, reliability of trenchless methods, and future improvements. Results indicate that pavement distress and other trenchless-related issues are an ongoing problem in the industry, with inadequate soil information and quality control/quality assurance (QC/QA) partially to blame. Fieldwork involving the observation of trenchless construction projects was undertaken with the purpose of documenting current practices and applications of trenchless technology in the United States and Iowa. Field tests were performed in which push-in pressure cells were used to measure the soil stresses induced by trenchless construction methods. A program of laboratory soil testing was carried out in conjunction with the field testing. Soil testing showed that the installations were made in sandy clay or well-graded sand with silt and gravel. Pipes were installed primarily using horizontal directional drilling, with pipe diameters from 3 to 12 inches. Pressure cell monitoring was conducted during the following construction phases: pilot bore, pre-reaming, and combined pipe pulling and reaming. The greatest increase in lateral earth pressure was 5.6 psi, detected 2.1 feet from the centerline of the bore during a pilot hole operation in sandy lean clay; measurements from 1.0 to 2.5 psi were common. Comparisons were made between field measurements and analytical and finite element calculation methods.

Relevance: 30.00%

Abstract:

BACKGROUND: Excessive drinking is a major problem in Western countries. AUDIT (Alcohol Use Disorders Identification Test) is a 10-item questionnaire developed as a transcultural screening tool to detect excessive alcohol consumption and dependence in primary health care settings. OBJECTIVES: The aim of the study is to validate a French version of the Alcohol Use Disorders Identification Test (AUDIT). METHODS: We conducted a validation cross-sectional study in three French-speaking areas (Paris, Geneva and Lausanne). We examined psychometric properties of AUDIT, such as its internal consistency, and its capacity to correctly diagnose alcohol abuse or dependence as defined by DSM-IV and to detect hazardous drinking (defined as alcohol intake >30 g pure ethanol per day for men and >20 g pure ethanol per day for women). We calculated sensitivity, specificity, positive and negative predictive values, and Receiver Operating Characteristic curves. Finally, we compared the ability of AUDIT to accurately detect "alcohol abuse/dependence" with that of CAGE and MAST. RESULTS: 1207 patients presenting to outpatient clinics (Switzerland, n = 580) or general practitioners' offices (France, n = 627) successively completed the CAGE, MAST and AUDIT self-administered questionnaires, and were independently interviewed by a trained addiction specialist. AUDIT showed a good capacity to discriminate dependent patients (with AUDIT > or =13 for males: sensitivity 70.1%, specificity 95.2%, PPV 85.7%, NPV 94.7%; for females: sensitivity 94.7%, specificity 98.2%, PPV 100%, NPV 99.8%) and hazardous drinkers (with AUDIT > or =7 for males: sensitivity 83.5%, specificity 79.9%, PPV 55.0%, NPV 82.7%; with AUDIT > or =6 for females: sensitivity 81.2%, specificity 93.7%, PPV 64.0%, NPV 72.0%). AUDIT gives better results than MAST and CAGE for detecting "alcohol abuse/dependence", as shown by the comparative ROC curves. CONCLUSIONS: The AUDIT questionnaire remains a good screening instrument for French-speaking primary care.
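The figures above follow from standard 2x2 screening-table arithmetic; a minimal helper, shown with hypothetical counts rather than the study's raw data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 screening table."""
    return {
        "sensitivity": tp / (tp + fn),  # cases detected among true positives
        "specificity": tn / (tn + fp),  # non-cases cleared among true negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for an AUDIT cutoff (illustration only, not the study's data):
print(screening_metrics(tp=47, fp=25, fn=20, tn=500))
```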

Relevance: 30.00%

Abstract:

This work, "Identification of a Research Portfolio for the Development of Filtration Equipment", presents a novel approach to identifying promising research topics in the field of design and development of filtration equipment and processes. The proposed approach consists of identifying technological problems often encountered in filtration processes. The sources of information for the problem retrieval were patent documents and scientific papers that discussed filtration equipment and processes. The problem identification method adopted in this work focused on the semantic nature of a sentence in order to generate series of subject-action-object structures. This was achieved with software called Knowledgist. Lists of problems often encountered in filtration processes, as mentioned in patent documents and scientific papers, were generated. These problems were carefully studied and categorized. Suggestions were made on the various classes of these problems that need further investigation in order to propose a research portfolio. The uses and importance of other methods of information retrieval were also highlighted in this work.
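Knowledgist's internals are proprietary, so as a rough sketch of the subject-action-object idea, the toy heuristic below pairs subjects and objects that share a verb in pre-parsed dependency edges; a real system would run a full dependency parser over the patent and paper text.

```python
from typing import NamedTuple

class Dep(NamedTuple):
    head: str       # governing word (e.g., the verb)
    rel: str        # dependency relation label
    dependent: str  # dependent word

def extract_sao(deps):
    """Pair nominal subjects with objects that share the same verb."""
    subjects = {d.head: d.dependent for d in deps if d.rel == "nsubj"}
    objects = {d.head: d.dependent for d in deps if d.rel == "obj"}
    return [(subjects[v], v, objects[v]) for v in subjects if v in objects]

# "The membrane clogs the filter" -> (membrane, clogs, filter)
deps = [Dep("clogs", "nsubj", "membrane"), Dep("clogs", "obj", "filter")]
print(extract_sao(deps))   # [('membrane', 'clogs', 'filter')]
```

Triples like these can then be clustered into the problem classes the abstract describes.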

Relevance: 30.00%

Abstract:

The present study was done with two different servo-systems. In the first system, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second servo-system, an electro-magnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described below. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. One of the most promising novel evolutionary algorithms for solving global optimization problems is Differential Evolution (DE). In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits of variables to find the best parameters of a servo-hydraulic system with flexible load. DE offers fast convergence and accurate solutions regardless of the initial conditions of the parameters. The control of hydraulic servo-systems has been the focus of intense research over the past decades. These kinds of systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains will cause overshoot or even loss of system stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for positioning control of a flexible-load servo-hydraulic system using fuzzy gain-scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in a hydraulic system. To compare the results, a P controller with feed-forward acceleration and different gains in extension and retraction is used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). Here, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented in a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track a flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed in the controller are estimated using a Kalman filter.
The proposed controller is implemented and tested in a linear motor test drive and responses are presented.
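The exact DE variant and cost function used in the thesis are not given here; a minimal DE/rand/1/bin loop for bounded parameter identification might look like the following, with a placeholder quadratic cost standing in for the servo-hydraulic model-fit error.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=30, F=0.8, CR=0.9, iters=200):
    """Minimal DE/rand/1/bin with box constraints on the parameters."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    fit = np.array([cost(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation + bound repair
            cross = rng.random(lo.size) < CR            # binomial crossover mask
            cross[rng.integers(lo.size)] = True         # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial)
            if f < fit[i]:                              # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], fit.min()

# Placeholder cost: squared distance to 'true' model parameters.
best, err = differential_evolution(lambda x: np.sum((x - [2.0, 0.5]) ** 2),
                                   bounds=[(0, 5), (0, 5)])
```

In the identification setting, the cost would instead simulate the servo-hydraulic model with candidate parameters and return the error against measured responses.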

Relevance: 30.00%

Abstract:

The objectives of this research work, "Identification of the Emerging Issues in Recycled Fiber Processing", are to discover emerging research issues and to present new approaches for identifying promising research themes in recovered paper application and production. The projected approach consists of identifying technological problems often encountered in wastepaper preparation processes, as well as improving the quality of recovered paper and increasing its proportion in the composition of paper and board. The source of information for the problem retrieval is scientific publications in which waste paper application and production were discussed. The study exploited several research methods to understand the changes related to the utilization of recovered paper. All the assembled data was carefully studied and categorized by applying software called RefViz and CiteSpace. Suggestions were made on the various classes of these problems that need further investigation in order to propose emerging research trends in recovered paper.

Relevance: 30.00%

Abstract:

This paper deals with the use of the conjugate gradient method of function estimation for the simultaneous identification of two unknown boundary heat fluxes in parallel plate channels. The fluid flow is assumed to be laminar and hydrodynamically developed. Temperature measurements taken inside the channel are used in the inverse analysis. The accuracy of the present solution approach is examined by using simulated measurements containing random errors, for strict cases involving functional forms with discontinuities and sharp corners for the unknown functions. Three different types of inverse problems are addressed in the paper, involving the estimation of: (i) spatially dependent heat fluxes; (ii) time-dependent heat fluxes; and (iii) time- and spatially dependent heat fluxes.
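As a schematic of conjugate-gradient function estimation, here reduced to its simplest discrete least-squares form rather than the paper's adjoint-based formulation, one can minimize ||Ax - b||^2, where x discretizes the unknown heat flux and A is an assumed linear sensitivity operator mapping flux to sensor temperatures; stopping the iterations early acts as the regularization that copes with ill-posedness and noisy data.

```python
import numpy as np

def cg_least_squares(A, b, iters=50, tol=1e-10):
    """Conjugate gradient on the normal equations (CGNR) for min ||Ax - b||^2."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)          # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (Ap @ Ap)     # exact line search along direction p
        x += alpha * p
        r -= alpha * (A.T @ Ap)
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

# Tiny synthetic test: recover a step-like 'flux' from noisy data.
rng = np.random.default_rng(1)
A = rng.normal(size=(120, 40))                     # stand-in sensitivity matrix
x_true = np.where(np.arange(40) < 20, 1.0, 0.0)    # discontinuous unknown function
b = A @ x_true + 0.01 * rng.normal(size=120)
x_hat = cg_least_squares(A, b)
```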

Relevance: 30.00%

Abstract:

This study probed for an answer to the question, "How do you identify as early as possible those students who are at risk of failing or dropping out of college so that intervention can take place?" by field testing two diagnostic instruments with a group of first-semester Seneca College Computer Studies students. In some respects, the research approach was such as might be taken in a pilot study. Because of the complexity of the issue, this study did not attempt to go beyond discovery, understanding and description. Although some inferences may be drawn from the results of the study, no attempt was made to establish any causal relationship between or among the factors or variables represented here. Both quantitative and qualitative data were gathered during four research phases: background, early identification, intervention, and evaluation. To gain a better understanding of the problem of student attrition within the School of Computer Studies at Seneca College, several methods were used, including retrospective analysis of enrollment statistics, faculty and student interviews and questionnaires, and tracking of the sample population. The significance of the problem was confirmed by the results of this study. The findings further confirmed the importance of the role of faculty in student retention and support the need to improve the quality of teacher/student interaction. As well, the need for skills assessment followed by supportive counselling and placement was supported by the findings from this study. Strategies for reducing student attrition were identified by faculty and students. As part of this study, a project referred to as "A Student Alert Project" (ASAP) was undertaken at the School of Computer Studies at Seneca College. Two commercial diagnostic instruments, the Noel/Levitz College Student Inventory (CSI) and the Learning and Study Strategies Inventory (LASSI), provided quantitative data which were subsequently analyzed in Phase 4 in order to assess their usefulness as early identification tools. The findings show some support for using these instruments in a two-stage approach to early identification and intervention: the CSI as an early identification instrument and the LASSI as a counselling tool for those students who have been identified as being at risk. The findings from the preliminary attempts at intervention confirmed the need for a structured student advisement program where faculty are selected, trained, and recognized for their advisor role. Based on the finding that very few students acted on the diagnostic results and recommendations, the need for institutional intervention by way of intrusive measures was confirmed.


Relevance: 30.00%

Abstract:

The last decade has seen growing interest in the problems posed by weak instrumental variables in the econometric literature, that is, situations where the instruments are only weakly correlated with the variable to be instrumented. It is well known that when instruments are weak, the distributions of the Student, Wald, likelihood-ratio and Lagrange-multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on returns-to-education models [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], where the instruments are weakly correlated with the instrumented variable, have shown that using these statistics often leads to unreliable results. One remedy is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instruments assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrument is added to a set of valid instruments? Do these procedures behave differently? And if instrument endogeneity poses major difficulties for statistical inference, can one propose test procedures that select instruments that are both strong and valid? Is it possible to propose instrument-selection procedures that remain valid even under weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays.
The first essay is published in Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In this essay, we analyze the effects of instrument endogeneity on two identification-robust test statistics, the Anderson-Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are in general consistent against the presence of invalid instruments (that is, they detect invalid instruments) regardless of instrument quality (strong or weak). We also describe cases where this consistency may fail, but the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid. Next, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size grows), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterize the situations in which the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as with valid instruments (despite the presence of invalid instruments).
The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) specification tests and the Revankar and Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis offers several insights as well as extensions of earlier procedures. Indeed, characterizing the finite-sample distribution of these statistics allows the construction of exact Monte Carlo tests of exogeneity even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we provide a characterization of the power of the tests which clearly exhibits the factors that determine power. We show that the tests have no power when all the instruments are weak [similar to Guggenberger (2008)]; however, power exists as long as at least one instrument is strong. The conclusion of Guggenberger (2008) concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, we present a Monte Carlo analysis indicating that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when the instruments are weak and endogeneity is moderate [a conclusion similar to Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental-variables method should be applied only when one is confident of having strong instruments; the conclusions of Guggenberger (2008) are therefore mixed and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relationship between trade openness and economic growth, and the well-known problem of returns to education.
The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even in the presence of non-Gaussian errors. Unlike the usual exogeneity test procedures (Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test makes it possible to address a common problem in empirical work, namely testing the partial exogeneity of a subset of variables. We propose two new pre-test estimators based on the Wald test that perform better (in terms of mean squared error) than the usual IV estimator when the instruments are weak and endogeneity is moderate. We also show that this test can serve as an instrument-selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage-equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that the mother's education explains the son's dropping out, that output is an endogenous variable in the estimation of the firm's cost, and that the fuel price is a valid instrument for output.
The fourth essay solves two very important problems in the econometric literature. First, although the initial or extended Wald test allows one to construct confidence regions and test linear restrictions on covariances, it assumes that the model parameters are identified. When identification is weak (instruments weakly correlated with the instrumented variable), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument) inference procedure for constructing confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. The results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control size and have power even when the instruments are weak. This allows us to propose a valid instrument-selection procedure even when identification is a problem. The instrument-selection procedure is based on two new pre-test estimators that combine the usual IV estimator and partial IV estimators. Our simulations show that: (1) like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and endogeneity is moderate; (2) the pre-test estimators overall perform very well compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relationship between trade openness and economic growth, and the returns-to-education model. In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are very weak in the second [Bound (1995), Doko and Dufour (2009)]. In line with our theoretical results, we find unbounded confidence regions for the covariance when the instruments are sufficiently weak.
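For concreteness, the Anderson-Rubin (1949) statistic discussed in the first essay tests H0: beta = beta0 by regressing y - X beta0 on the instruments and F-testing the joint nullity of the instrument coefficients; its null distribution does not depend on instrument strength. A minimal sketch, assuming homoskedastic errors and no included exogenous regressors:

```python
import numpy as np

def anderson_rubin_F(y, X, Z, beta0):
    """AR statistic for H0: beta = beta0 in y = X beta + u with instruments Z.

    Regress y - X beta0 on Z and F-test the joint nullity of the coefficients;
    under H0 the statistic is F(k, n - k) regardless of instrument strength
    (homoskedastic case, no other regressors).
    """
    n, k = Z.shape
    v = y - X @ beta0
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the span of Z
    explained = v @ P @ v
    residual = v @ v - explained
    return (explained / k) / (residual / (n - k))

# Simulated weak-instrument example (hypothetical data-generating process):
rng = np.random.default_rng(2)
n = 500
z = rng.normal(size=(n, 3))
u = rng.normal(size=n)
x = 0.05 * z[:, 0] + 0.9 * u + rng.normal(size=n)   # weak, endogenous regressor
y = 1.0 * x + u
print(anderson_rubin_F(y, x.reshape(-1, 1), z, np.array([1.0])))
```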

Relevance: 30.00%

Abstract:

Traditionally, legacy object-oriented applications integrate different functional aspects, which can be scattered throughout the code. There are different kinds of aspects: aspects that represent business functionality, and aspects that address non-functional requirements or other design concerns such as robustness, distribution, security, and so on. The code that implements these aspects generally cuts across several class hierarchies. Several researchers have addressed the problem of modularizing these aspects in code: subject-oriented programming, aspect-oriented programming, and view-oriented programming. All of these methods propose techniques and tools for designing object-oriented applications as compositions of code fragments that address different aspects. Separating aspects in the code has benefits for reuse and maintenance. It is therefore important to identify and locate these aspects in legacy object-oriented code. We are particularly interested in functional aspects. Assuming that the code implementing a functional aspect, or feature, exhibits a certain functional cohesion (dependencies among its elements), we propose to identify such features from the code. The idea is to identify, in the absence of aspect-oriented programming paradigms, the techniques that allow the implementation of the different functional aspects in object-oriented code. Our approach consists of identifying the techniques developers use to integrate a feature in the absence of aspect-oriented techniques, characterizing the footprint these techniques leave on the code, and developing tools to identify those footprints. We thus present two approaches for identifying the features present in object-oriented code. The first identifies various design patterns that allow these features to be integrated into the code. The second uses formal concept analysis to identify recurring features in the code. We experiment with both approaches on open-source object-oriented systems to identify the features in their code. The results obtained show the effectiveness of our approaches for identifying features in legacy object-oriented code and allow refactoring opportunities to be suggested.
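As a minimal illustration of the formal concept analysis used in the second approach (the thesis's actual object/attribute encoding and tooling are not given here), the sketch below enumerates the formal concepts of a tiny binary context relating methods to the features they touch; each concept groups the methods that share exactly the same set of features.

```python
from itertools import combinations

def concepts(objects, attributes, incidence):
    """Enumerate formal concepts (extent, intent) of a binary context by
    closing every subset of objects. Exponential; fine for small sketches."""
    def common_attrs(objs):
        return {a for a in attributes if all((o, a) in incidence for o in objs)}
    def common_objs(attrs):
        return {o for o in objects if all((o, a) in incidence for a in attrs)}
    found = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            intent = frozenset(common_attrs(objs))
            extent = frozenset(common_objs(intent))   # closure of the subset
            found.add((extent, intent))
    return found

# Methods as objects, touched features as attributes (hypothetical example):
objs = ["draw", "save", "load"]
attrs = ["ui", "io"]
inc = {("draw", "ui"), ("save", "io"), ("load", "io")}
for extent, intent in sorted(concepts(objs, attrs, inc), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))   # e.g. ['load', 'save'] ['io']
```

Concepts whose extents recur across releases or subsystems are the candidate features the approach would flag for refactoring.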