874 results for Subjective and objective hearing protection evaluation method
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. More cost-effective complementary techniques therefore need to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping, and for the characterization of soil properties relevant to a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km²) in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were used as evidential datalayers. The methods also required point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale.
Slightly better results were achieved using a Radial Basis Function (RBF)-based ANN than with the Radial Basis Functional Link Net (RBFLN) method, narrowing down the most probable areas for a.s. soil occurrence more accurately and defining the least probable areas more properly. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive land for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to target strategic areas more precisely for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale; mapping at this scale would be extremely time-consuming through manual assessment. The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all covering c. 21,300 km²), which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by independent data on water chemistry, suggesting that the a.s. soil probability maps created with different methods are reliable and comparable.
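The fuzzy-logic overlay described above can be illustrated with a short sketch: evidential layers (geology, aerogeophysics, slope) are converted to fuzzy memberships and combined with the fuzzy gamma operator. This is a generic sketch of the technique, not the thesis's actual model; the layer names, membership values and gamma parameter are invented for illustration.

```python
# Minimal sketch of a fuzzy-logic overlay for a.s. soil probability mapping.
# Membership values and gamma are hypothetical, not the thesis's configuration.

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma operator: (algebraic sum)**gamma * (algebraic product)**(1-gamma)."""
    prod = 1.0
    for m in memberships:
        prod *= m
    complement = 1.0
    for m in memberships:
        complement *= (1.0 - m)
    algebraic_sum = 1.0 - complement
    return (algebraic_sum ** gamma) * (prod ** (1.0 - gamma))

# One grid cell: memberships derived from Quaternary geology, aerogeophysics
# and slope (hypothetical values on a 0-1 scale).
cell = {"geology": 0.8, "aerogeophysics": 0.7, "slope": 0.9}
score = fuzzy_gamma(cell.values(), gamma=0.9)
print(round(score, 3))
```

A gamma near 1 pulls the combined score toward the optimistic algebraic sum; a gamma near 0 pulls it toward the conservative product, which is why the parameter is usually tuned against known a.s. soil profiles.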
Abstract:
A gravimetric method was evaluated as a simple, sensitive, reproducible, low-cost alternative to quantify the extent of brain infarct after occlusion of the medial cerebral artery in rats. In ether-anesthetized rats, the left medial cerebral artery was occluded for 1, 1.5 or 2 h by inserting a 4-0 nylon monofilament suture into the internal carotid artery. Twenty-four hours later, the brains were processed for histochemical triphenyltetrazolium chloride (TTC) staining and quantitation of the ischemic infarct. In each TTC-stained brain section, the ischemic tissue was dissected with a scalpel and fixed in 10% formalin at 0ºC until its total mass could be estimated. The mass (mg) of the ischemic tissue was weighed on an analytical balance and compared to its volume (mm³), estimated either by plethysmometry using platinum electrodes or by computer-assisted image analysis. Infarct size as measured by the weighing method (mg), and reported as a percent (%) of the affected (left) hemisphere, correlated closely with volume (mm³, also reported as %) estimated by computerized image analysis (r = 0.88; P < 0.001; N = 10) or by plethysmometry (r = 0.97-0.98; P < 0.0001; N = 41). This degree of correlation was maintained between different experimenters. The method was also sensitive enough to detect the effect of different ischemia durations on infarct size (P < 0.005; N = 23) and the effect of drug treatments in reducing the extent of brain damage (P < 0.005; N = 24). The data suggest that, in addition to being simple and low cost, the weighing method is a reliable alternative for quantifying brain infarct in animal models of stroke.
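The agreement between the weighing method and the volumetric methods rests on Pearson's correlation coefficient, which can be sketched in a few lines. The paired measurements below are invented for illustration; only the analysis mirrors the abstract.

```python
# Hedged sketch: Pearson's r between infarct mass (% of hemisphere, weighing
# method) and infarct volume (% of hemisphere, image analysis). Data invented.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

mass_pct   = [12.0, 18.5, 25.0, 31.2, 40.1]   # weighing method (hypothetical)
volume_pct = [11.5, 19.0, 24.2, 32.0, 39.5]   # image analysis (hypothetical)
print(round(pearson_r(mass_pct, volume_pct), 3))
```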
Abstract:
The objective of the present study was to translate the Kidney Disease Quality of Life - Short Form (KDQOL-SF™1.3) questionnaire into Portuguese, to adapt it culturally and to validate it for the Brazilian population. The KDQOL-SF was translated into Portuguese and back-translated twice into English. Patient difficulties in understanding the questionnaire were evaluated by a panel of experts and resolved. Measurement properties such as reliability and validity were determined by applying the questionnaire to 94 end-stage renal disease patients on chronic dialysis. The Nottingham Health Profile Questionnaire, the Karnofsky Performance Scale and the Kidney Disease Questionnaire were administered to test validity. Some activities included in the original instrument were considered incompatible with the activities usually performed by the Brazilian population and were replaced. The mean scores for the 19 components of the KDQOL-SF questionnaire in Portuguese ranged from 22 to 91. The components "Social support" and "Dialysis staff encouragement" had the highest scores (86.7 and 90.8, respectively). The test-retest reliability and the inter-observer reliability of the instrument were evaluated by the intraclass correlation coefficient. The coefficients for both reliability tests were statistically significant for all scales of the KDQOL-SF (P < 0.001), ranging from 0.492 to 0.936 for test-retest reliability and from 0.337 to 0.994 for inter-observer reliability. The Cronbach's alpha coefficient was higher than 0.80 for most of the components. The Portuguese version of the KDQOL-SF questionnaire proved to be valid and reliable for the evaluation of quality of life of Brazilian patients with end-stage renal disease on chronic dialysis.
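The test-retest analysis above relies on the intraclass correlation coefficient. The abstract does not state which ICC form was used, so the sketch below shows the simple one-way ICC(1,1) computed from a between/within-subject variance decomposition; the scores are invented.

```python
# Hedged sketch of a one-way intraclass correlation coefficient, ICC(1,1).
# The test-retest scores below are hypothetical, not the study's data.

def icc_oneway(ratings):
    """ratings: one list per subject, each holding k repeated measurements."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    ms_between = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for r, m in zip(ratings, subj_means) for x in r
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Five subjects, each scored twice (test and retest), hypothetical values.
test_retest = [[70, 72], [55, 53], [88, 90], [40, 41], [63, 60]]
print(round(icc_oneway(test_retest), 3))
```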
Abstract:
The total number of CD34+ cells is the most relevant clinical parameter when selecting human umbilical cord blood (HUCB) for transplantation. The objective of the present study was to compare the two most commonly used CD34+ cell quantification methods (ISHAGE protocol and ProCount™ - BD) and to analyze the CD34+ bright cells, which 7-amino actinomycin D (7AAD) analysis suggests are apoptotic or dead cells. Twenty-six HUCB samples obtained at the Placental Blood Program of New York Blood Center were evaluated. The absolute numbers of CD34+ cells evaluated by ISHAGE (with exclusion of 7AAD+ cells) and ProCount™ (with exclusion of CD34+ bright cells) were determined. Using the ISHAGE protocol we found 35.6 ± 19.4 CD34+ cells/µL and with the ProCount™ method we found 36.6 ± 23.2 CD34+ cells/µL. With the ProCount™ method, CD34+ bright cell counts were 9.3 ± 8.2 cells/µL. CD34+ bright and regular cells were analyzed individually by the ISHAGE protocol. Only about 1.8% of the bright CD34+ cells were alive, whereas a small fraction (19.0%) were undergoing apoptosis and most (79.2%) were dead cells. Our study showed that the two methods produced similar results and that 7AAD is important for excluding CD34+ bright cells. These results will be of value in the correct counting of CD34+ cells and in choosing the best HUCB unit for transplantation, i.e., the unit with the greatest number of potentially viable stem cells for the reconstitution of bone marrow. This increases the likelihood of success of the transplant and, therefore, the survival of the patient.
Abstract:
Several methods are used to estimate the anaerobic threshold (AT) during exercise. The aim of the present study was to compare AT obtained by a graphic visual method for the estimation of ventilatory and metabolic variables (gold standard) to a bi-segmental linear regression mathematical model based on Hinkley's algorithm applied to heart rate (HR) and carbon dioxide output (VCO2) data. Thirteen young (24 ± 2.63 years old) and 16 postmenopausal (57 ± 4.79 years old) healthy and sedentary women were submitted to a continuous ergospirometric incremental test on an electromagnetically braked cycle ergometer with 10 to 20 W/min increases until physical exhaustion. The ventilatory variables were recorded breath-to-breath and HR was obtained beat-to-beat in real time. Data were analyzed by the nonparametric Friedman test and Spearman correlation test with the level of significance set at 5%. Power output (W), HR (bpm), oxygen uptake (VO2; mL kg-1 min-1), VO2 (mL/min), VCO2 (mL/min), and minute ventilation (VE; L/min) data observed at the AT level were similar for both methods and groups studied (P > 0.05). The VO2 (mL kg-1 min-1) data showed significant correlation (P < 0.05) between the gold standard method and the mathematical model when applied to HR (r_s = 0.75) and VCO2 (r_s = 0.78) data for the subjects as a whole (N = 29). The proposed mathematical method for the detection of changes in the response patterns of VCO2 and HR was adequate and promising for AT detection in young and middle-aged women, representing a semi-automatic, non-invasive and objective AT measurement.
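The idea behind the bi-segmental model can be sketched as a brute-force search for the breakpoint that minimizes the total squared error of two least-squares lines. This illustrates the principle behind the Hinkley-style change-point fit, not the study's exact implementation; the workload/VCO2 series is synthetic.

```python
# Sketch of bi-segmental linear regression for anaerobic-threshold detection:
# find the split index whose two least-squares lines give the smallest total
# squared error. Data below are synthetic, with a known slope change.

def segment_sse(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def breakpoint(xs, ys):
    best = None
    for i in range(2, len(xs) - 2):   # each segment keeps at least two points
        sse = segment_sse(xs[:i], ys[:i]) + segment_sse(xs[i:], ys[i:])
        if best is None or sse < best[1]:
            best = (i, sse)
    return best[0]

power = list(range(10))   # workload steps (synthetic)
vco2 = [1.0 * x for x in range(6)] + [5.0 + 3.0 * (x - 5) for x in range(6, 10)]
i = breakpoint(power, vco2)
print(power[i])   # workload at which the VCO2 slope changes
```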
Abstract:
Acquiring a new information system is not only a technical but also a social change. Scientific theories and studies show that the successful introduction of a new information system requires socio-technical systems thinking to carry the change through. The aim of this study was to contribute to the public discussion on the importance of change management when new technology is introduced, and to examine the success of change management in the case company's ERP project. The context of the study is socio-technical change, in which change management takes center stage. The study identified the change-management elements central to the successful introduction of a new ERP system: the need for change, the project team, vision and goals, communication, training, participation, commitment, support from management and the project group, and the evaluation and consolidation of the change. In the empirical part of the study, the change management of the case company's ERP project was assessed from the personnel's perspective. The study was carried out as a qualitative case study in which the main data-collection method was an electronic semi-structured questionnaire. Based on the analysis of the results, the introduction of the ERP system in the case company went satisfactorily.
Abstract:
The subject of this thesis is automatic sentence compression with machine learning, such that the compressed sentences remain grammatical and retain their essential meaning. Compression of natural language sentences has multiple possible uses; the focus in this thesis is the generation of television program subtitles, which are often compressed versions of the program's original script. The main part of the thesis consists of machine learning experiments for automatic sentence compression using different approaches to the problem. The machine learning methods used are linear-chain conditional random fields (CRFs) and support vector machines. We also examine which automatic text analysis methods provide useful features for the task. The data used for machine learning, supplied by Lingsoft Inc., consists of subtitles in both compressed and uncompressed form. The models are compared to a baseline system, both automatically and with human evaluation, because of the potentially subjective nature of the output. The best result is achieved with a CRF sequence classifier using a rich feature set. All of the text analysis methods help classification; the most useful is morphological analysis.
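The decoding step behind linear-chain sequence labeling for compression can be sketched as follows: each token receives a KEEP/DROP tag, and Viterbi search finds the best tag sequence under per-token and transition scores. A trained CRF would learn such scores from the subtitle data; the scores and the filler-word list below are invented for illustration.

```python
# Illustrative Viterbi decoding for token-level sentence compression.
# Emission and transition scores are hypothetical, not learned from data.

def viterbi(tokens, emit, trans, tags=("KEEP", "DROP")):
    # best[t] = (score of best path ending in tag t, that path)
    best = {t: (emit(tokens[0], t), [t]) for t in tags}
    for tok in tokens[1:]:
        best = {
            t: max(
                (best[p][0] + trans[(p, t)] + emit(tok, t), best[p][1] + [t])
                for p in tags
            )
            for t in tags
        }
    return max(best.values())[1]

FILLERS = {"very", "quite", "really"}   # hypothetical droppable words

def emit(tok, tag):
    if tok in FILLERS:
        return 2.0 if tag == "DROP" else 0.0
    return 2.0 if tag == "KEEP" else 0.0

trans = {("KEEP", "KEEP"): 0.5, ("KEEP", "DROP"): 0.0,
         ("DROP", "KEEP"): 0.0, ("DROP", "DROP"): 0.5}

sentence = ["the", "weather", "is", "very", "cold"]
tags = viterbi(sentence, emit, trans)
compressed = [w for w, t in zip(sentence, tags) if t == "KEEP"]
print(compressed)
```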
Abstract:
The objective of this work was the development of gluten-free breads and muffins using rice flour and maize and cassava starches. Sensory and instrumental analyses of specific volume, elasticity, and firmness were performed on the seven samples resulting from a Simplex-Centroid design. For the sensory analysis, the optimized formulation contained 50% rice flour and 50% cassava starch; for the instrumental evaluation, the simultaneous optimum for the three analyses was 20% rice flour, 30% cassava starch, and 50% maize starch. A comparative analysis of specific volume, elasticity, firmness, and a triangular test was performed with pre-baked, baked, and frozen bread. Physicochemical, nutritional, and microbiological analyses were performed for both the bread and the muffins according to Brazilian legislation.
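The seven samples follow from the standard three-component Simplex-Centroid construction: every non-empty subset of components blended in equal proportions. The sketch below generates those blends; the component names come from the abstract, the construction is the textbook design rather than anything specific to this study.

```python
# Generating the seven blends of a three-component Simplex-Centroid design.
from itertools import combinations

def simplex_centroid(components):
    """Every non-empty subset of components, blended in equal proportions."""
    points = []
    for r in range(1, len(components) + 1):
        for subset in combinations(components, r):
            share = 1.0 / len(subset)
            points.append({c: (share if c in subset else 0.0) for c in components})
    return points

design = simplex_centroid(["rice flour", "cassava starch", "maize starch"])
for point in design:
    print({c: round(p, 3) for c, p in point.items()})
```

Three pure components, three binary 50/50 blends, and the ternary centroid give the 3 + 3 + 1 = 7 runs mentioned in the abstract.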
Abstract:
The aim of this work was to calibrate the material properties, including strength and strain values, for the different material zones of ultra-high strength steel (UHSS) welded joints under monotonic static loading. UHSS is heat sensitive and is softened by the heat input of welding; the affected region is the heat-affected zone (HAZ). Cylindrical specimens were cut from welded joints of Strenx® 960 MC and Strenx® Tube 960 MH and examined by tensile testing. The hardness values across the specimens' cross sections were measured, and initial material properties were obtained from correlations between hardness and strength. Specimens of the same size, with the same material zones as the real specimens, were created and defined in the finite element method (FEM) software Abaqus 6.14-1. The loading and boundary conditions were defined to match the tensile tests. Using the initial material properties derived from the hardness-strength correlations (true stress-strain values) as the main Abaqus input, FEM was used to simulate the tensile test. By comparing the FEM results with the measured tensile test results, the initial material properties were revised and reused as software input until fully calibrated, so that the deviation between FEM and tensile test results was minimized. Two different types of S960 were used: 960 MC plates and a structural hollow section 960 MH X-joint, welded with Böhler™ X96 filler material. In welded joints, the following zones typically appear: weld (WEL), coarse-grained (HCG) and fine-grained (HFG) heat-affected zone, annealed zone, and base material (BaM). The results showed that the HAZ is softened by the heat input of welding. For all specimens, the softened zone's strength is reduced, making it the weakest zone, where fracture occurs under loading. The stress concentration of a notched specimen can represent the properties of the notched zone. With the material properties calibrated by compromising between the two hardness-strength correlations, the load-displacement diagram from the FEM model matches the experiments.
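The hardness-to-strength step above can be sketched with the widely used empirical rule that ultimate tensile strength in MPa is roughly 3.2 times the Vickers hardness (HV). Both the factor and the zone hardness readings below are illustrative assumptions; the thesis calibrates its own correlations against tensile tests and FEM results.

```python
# Hedged sketch: estimating zone strengths from Vickers hardness readings.
# The factor 3.2 and the hardness values are illustrative, not measured data.

def strength_from_hardness(hv, factor=3.2):
    """Approximate ultimate tensile strength [MPa] from Vickers hardness."""
    return factor * hv

zones = {  # hypothetical hardness readings across an S960 welded joint
    "base material (BaM)": 330,
    "HAZ fine grained (HFG)": 280,
    "HAZ coarse grained (HCG)": 360,
    "softened (annealed) zone": 250,
    "weld (WEL)": 340,
}
estimates = {z: strength_from_hardness(hv) for z, hv in zones.items()}
weakest = min(estimates, key=estimates.get)
print(weakest, round(estimates[weakest]))
```

Mapping hardness to strength zone by zone is what flags the softened annealed zone as the likely fracture location, consistent with the tensile-test observations reported above.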
Abstract:
This paper aims to reconcile the evidence that sophisticated valuation models are increasingly used by companies in their investment appraisal with the literature on bounded rationality, according to which objective optimization is impracticable in the real world because it would demand an immense level of sophistication of the analytical and computational processes of human beings. We show how normative valuation models should rather be viewed as forms of reality representation: frameworks according to which the real world is perceived, fragmented for better understanding, and recomposed, providing an orderly method for undertaking a task as complex as the investment decision.
Abstract:
This qualitative study is an exploration of transformation theory, the Western tradition, and a critical evaluation of a graduate studies class at a university. It is an exploration of assumptions that are embedded in experience, that influence the experience, and that provide meaning about the experience. An attempt has been made to identify assumptions that are embedded in Western experience and connect them with assumptions that shape the graduate class experience. The focus is on assumptions that facilitate and impede large group discussions. Jungian psychology of personality type and archetype, together with developmental psychology, is used to analyze the group experience. The pragmatic problem-solving model developed by Knoop is used to guide thinking about the Western tradition, and to guide the analysis, synthesis and writing of the experience of the graduate studies class members. A search through Western history, philosophy, and science revealed assumptions about the nature of truth, reality, and the self. Assumptions embedded in Western thinking about the subject-object relationship, unity and diversity are made explicit, and an attempt is made to identify the Western tradition assumptions underlying transformation theory. The critical evaluation of the graduate studies class experience focuses on issues associated with group process, self-directed learning, the educator-learner transaction and the definition of adult education. The advantages of making implicit assumptions explicit are explored.
Abstract:
Wine produced using an appassimento-type process represents a new and exciting innovation for the Ontario wine industry. This process involves drying grapes that have already been picked from the vine, which increases the sugar content due to dehydration and induces a variety of changes both within and on the surface of the grapes. Increased sugar content in musts subjects wine yeast to conditions of high osmolarity during alcoholic fermentation. Under these conditions, yeast growth can be inhibited, target alcohol levels may not be attained, and metabolic by-products of the hyperosmotic stress response, including glycerol and acetic acid, may affect wine composition. The further metabolism of acetic acid to acetyl-CoA by yeast facilitates the synthesis of ethyl acetate, a volatile compound that can also affect wine quality if present in sufficiently high concentrations. The first objective of this project was to understand the effect of yeast strain and sugar concentration on fermentation kinetics and metabolite formation, notably acetic acid and ethyl acetate, during fermentation in appassimento-type must. Our working hypotheses were that (1) the natural isolate Saccharomyces bayanus would produce less acetic acid and ethyl acetate than Saccharomyces cerevisiae strain EC-1118 when fermenting the high and low sugar juices; (2) the wine produced using the appassimento process would contain higher levels of acetic acid and lower levels of ethyl acetate than table wine; and (3) the strains would show similar fermentation kinetics in the high sugar must. This study determined that the S. bayanus strain produced significantly less acetic acid and ethyl acetate in both the appassimento wine and table wine fermentations. Differences in acetic acid and ethyl acetate production were also observed within strains fermenting the two sugar conditions. Acetic acid production was higher in table wine fermented by S. bayanus, as no acetic acid was produced in the appassimento-style wine, and 1.4-times higher in appassimento wine fermented by EC-1118 than in the corresponding table wine. Ethyl acetate production was 27.6-times higher in table wine fermented by S. bayanus, and 5.2-times higher by EC-1118, compared to that in appassimento wine. Sugar utilization and ethanol production were comparable between strains, as no significant differences were found. The second objective of this project was to bring a method in-house for measuring the concentration of the pyridine nucleotides NAD+, NADP+, NADH and NADPH in yeast cytosolic extract. Development of this method is of practical interest for our lab group, as it will enable the redox balance of the NAD+/NADH and NADP+/NADPH systems to be assessed during high sugar fermentations to determine their respective roles as metabolic triggers for acetic acid production. Two methods were evaluated: a UV-endpoint method using a set of enzymatic assay protocols outlined in Bergmeyer (1974), and a colorimetric enzyme cycling method developed by Sigma-Aldrich® using commercial kits. The former was found to be limited by its low sensitivity when applied to yeast extract and subsequent coenzyme analyses, while the latter exhibited greater sensitivity. The results obtained from the kits indicated high linearity, accuracy and precision of the analytical method for measuring NADH and NADPH, and showed that it was sensitive enough to measure the low coenzyme concentrations present in yeast extract samples. Total NAD and total NADP concentrations were above the lower limit of quantification and within the range of the respective calibration curves, making this method suitable for our research purposes.
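The calibration-curve step of the coenzyme assay can be sketched as an ordinary least-squares fit to standards, followed by quantifying an unknown extract from its signal. The standard concentrations and assay readings below are invented; only the workflow mirrors the one described above.

```python
# Hedged sketch of a calibration curve for a cycling-type coenzyme assay.
# Standard concentrations (pmol) and readings (arbitrary units) are invented.

def fit_calibration(conc, signal):
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / \
            sum((x - mx) ** 2 for x in conc)
    intercept = my - slope * mx
    return slope, intercept

def quantify(reading, slope, intercept):
    """Invert the calibration line to get concentration from a reading."""
    return (reading - intercept) / slope

standards = [0.0, 25.0, 50.0, 100.0, 200.0]   # NADH standards (pmol)
readings  = [0.02, 0.27, 0.52, 1.02, 2.02]    # assay readings (a.u.)
slope, intercept = fit_calibration(standards, readings)
print(round(quantify(0.77, slope, intercept), 1))   # unknown sample
```

Checking that unknowns fall within the range of the standards, as the abstract notes for the total NAD and NADP values, is what keeps the inverted line a valid estimate.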
Abstract:
Thesis submitted to the Faculté des études supérieures in partial fulfillment of the requirements for the degree of Maîtrise en droit, commercial law option. This thesis was accepted unanimously and ranked among the top 10% of theses in the discipline.
Abstract:
Salmonella Enteritidis (SE) infections in humans are associated with the consumption of contaminated eggs or egg products. Vaccination is a tool used to reduce the risk of SE infection in poultry, but with variable results. In Canada, two bacterins, MBL SE4C and Layermune, are commonly used against SE. However, their efficacy has not been fully determined in older laying hens, and the ability of these vaccines to prevent vertical and horizontal transmission has not yet been studied. The main objective of this study was to evaluate the effect of the two bacterins on the immune response of laying hens, to verify the protection conferred by these vaccines against experimental SE infection, and to identify immunogenic proteins for the development of a subunit vaccine. Birds were vaccinated during rearing under two immunization protocols (at 12 and 18, or at 16 weeks of age). The control group was injected with saline. Birds were inoculated per os with 2 x 10^9 CFU of SE phage type 4 at 55 or 65 weeks of age. Antibodies (IgG and IgA) were measured at different time points with an in-house ELISA using whole SE antigen. Phagocytosis, oxidative burst, and B and T splenocyte populations were analyzed by flow cytometry. Clinical signs, fecal shedding, egg yolk contamination and Salmonella invasion of organs were examined to evaluate protective efficacy. Horizontal transmission was also studied by assessing SE infection in birds placed in contact with inoculated birds. Immunogenic proteins were identified by SDS-PAGE and Western blot using antisera collected after vaccination and/or experimental or natural infection, and then characterized by mass spectrometry.
The two-immunization protocol generated a high level of seroconversion from 3 until 32-34 weeks post-vaccination compared with the single-immunization protocol (p < 0.02), but there was no longer any difference between groups at 54 and 64 weeks of age. There was no correlation between IgG levels and Salmonella isolation rates from organs and egg yolks. IgA production was observed only in birds vaccinated with two injections of MBL SE4C (p ≤ 0.04). After experimental infection, IgA production was significantly higher on days 1 and 7 p.i. in the oviduct of vaccinated birds (except for the group vaccinated with two injections of Layermune) compared with the control group (p ≤ 0.03). Only the MBL SE4C bacterin had a protective effect against egg yolk contamination in infected birds. With two immunizations, this vaccine partially reduced the rate of fecal Salmonella shedding in inoculated birds and in horizontally infected birds (p ≤ 0.02). Five of the proteins identified by mass spectrometry are considered potential candidates for further study of their immunogenicity: lipoamide dehydrogenase, enolase (2-phosphoglycerate dehydratase; 2-phospho-D-glycerate hydro-lyase), elongation factor Tu (EF-Tu), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), and the DNA protection during starvation protein. Overall, the bacterins induced humoral immunity (IgG and IgA) in laying hens. This immune response partially protected the birds with respect to Salmonella clearance, egg yolk contamination, and horizontal transmission. In this study, the MBL SE4C bacterin (with two immunizations) proved more effective in protecting birds than the Layermune bacterin.
Our results provide objective, complementary information on the potential of the two bacterins for controlling SE in laying hens. Given the partial protection obtained with these vaccines, the identification of immunogenic antigens made it possible to select specific proteins for the eventual development of a more effective vaccine against SE in poultry.
Abstract:
The goal of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity between individuals and between time periods, and thus to conduct richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields more precise inference than classical asymptotic theory, or makes inference possible where nuisance parameters otherwise preclude it. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. The three chapters analyze its validity and its application. The first chapter postulates a simple model with a single parameter and tackles the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual dimension and the time dimension, is valid in these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity, and resampling only in the time dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model.
Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary over both time and individuals. Using a two-way error-components model, the ordinary least squares estimator and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the time dimension is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling, for its part, is valid for inference on the full parameter vector. The third chapter re-examines the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence leads to substantial size distortions in tests that evaluate the impact of public policies using panel data. One recommended solution is the bootstrap. The double-resampling method developed in this thesis corrects the test-size problem and thus allows the impact of public policies to be evaluated correctly.
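The double resampling described above can be sketched for the simplest case of the first chapter: draw individuals with replacement and time periods with replacement, and recompute the statistic on each pseudo-panel. The panel below is synthetic and the statistic is the overall mean; this is an illustration of the scheme, not the thesis's formal procedure.

```python
# Illustrative double (individual x time) resampling bootstrap for panel data.
# The panel values and the number of replicates are invented for illustration.
import random

def double_bootstrap_means(panel, reps=200, seed=0):
    """panel: list of individuals, each a list of T period observations."""
    rng = random.Random(seed)
    n, t = len(panel), len(panel[0])
    means = []
    for _ in range(reps):
        ids = [rng.randrange(n) for _ in range(n)]       # resample individuals
        periods = [rng.randrange(t) for _ in range(t)]   # resample time periods
        draws = [panel[i][s] for i in ids for s in periods]
        means.append(sum(draws) / len(draws))
    return means

# Synthetic panel with both individual and time effects (no noise).
panel = [[10 * i + s for s in range(4)] for i in range(6)]
boot = double_bootstrap_means(panel)
print(round(sum(boot) / len(boot), 2))
```

Because both dimensions are resampled, the bootstrap distribution reflects individual heterogeneity and temporal heterogeneity at once, which is exactly the failure mode of the one-dimensional schemes discussed above.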