935 results for 340402 Econometric and Statistical Methods
Abstract:
The purpose of this study was to examine the relationship between the structure of jobs and burnout, and to assess to what extent, if any, this relationship was moderated by individual coping methods. The study was grounded in Karasek's (1998) Job Demand-Control-Support theory of work stress and in Maslach and Leiter's (1993) theory of burnout. Coping was examined as a moderator based on the conceptualization of Lazarus and Folkman (1984). Two overarching questions framed this study: (a) What is the relationship between job structure, as operationalized by job title, and burnout across different occupations in support services in a large municipal school district? (b) To what extent do individual differences in coping methods moderate this relationship? This was a cross-sectional study of county public school bus drivers, bus aides, mechanics, and clerical workers (N = 253) at three bus depot locations within the same district, using validated survey instruments for data collection. Hypotheses were tested using simultaneous regression analyses. Findings indicated statistically significant and relevant relationships among the variables of interest: job demands, job control, burnout, and ways of coping. There was a relationship between job title and physical job demands, but no evidence of a relationship between job title and psychological demands. Furthermore, there was a relationship between physical demands and both emotional exhaustion and personal accomplishment, two key indicators of burnout. Results showed significant correlations supporting individual ways of coping as a moderator between job structure, operationalized by job title, and individual employee burnout, adding empirical evidence to the occupational stress literature. Based on the findings, there are implications for theory, research, and practice.
For theory and research, the findings suggest the importance of incorporating transactional models in the study of occupational stress. For practice, the findings highlight the importance of enriching jobs, increasing job control, and providing individual-level training related to stress reduction.
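As a hedged illustration of the moderation analysis described above (all variable names and data here are hypothetical, not the study's), a simultaneous regression with a demands-by-coping interaction term can be sketched as:

```python
import numpy as np

# Hypothetical illustration of a moderated regression: burnout regressed on
# job demands, coping, and their interaction (the moderation term), entered
# simultaneously.
rng = np.random.default_rng(0)
n = 253  # sample size matching the study
demands = rng.normal(size=n)
coping = rng.normal(size=n)
# Simulated outcome: coping dampens the demands -> burnout slope.
burnout = (0.5 * demands + 0.2 * coping
           - 0.3 * demands * coping + rng.normal(scale=0.5, size=n))

# Design matrix: intercept, main effects, interaction.
X = np.column_stack([np.ones(n), demands, coping, demands * coping])
beta, *_ = np.linalg.lstsq(X, burnout, rcond=None)

# A non-zero coefficient on the interaction term indicates moderation.
print(dict(zip(["intercept", "demands", "coping", "interaction"], beta.round(2))))
```

A significant interaction coefficient is what "coping moderates the demands-burnout relationship" means operationally.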
Abstract:
This study examined the occurrence of pharmaceuticals and personal care products (PPCPs) in surface waters of Florida and their potential to be used as indicators of wastewater contamination. Previous studies have shown that the elimination of pharmaceuticals in municipal sewage treatment plants is often incomplete. Aquatic ecosystems are under increased stress from human activities, particularly in heavily populated areas. The purpose of this study was to find an ideal indicator of wastewater. The applied methods, GC/MS and LC/MS, were suitable for the determination of pharmaceuticals and personal care products in aqueous environmental samples down to the lower parts-per-trillion (ng/L) level. This study produced a snapshot view of the occurrence of pharmaceuticals and personal care products in south Florida. PPCPs were commonly detected in coastal environments of South Florida at relatively low concentrations. In general, PPCP concentrations were higher inside canals and contained bodies of water than in open water systems. Caffeine was successfully used to distinguish impacted from pristine locations. However, no particular correlation was observed between caffeine and other traditional water quality parameters.
Abstract:
In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for compensating nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we elucidate the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of nonlinear spectral data (the direct PNFT), in particular taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computational time.
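As a sketch of the kind of computation involved (normalization and sign conventions vary across the literature, so this is not necessarily the paper's exact scheme), an Ablowitz-Ladik-type discretization builds the monodromy matrix of the periodic Zakharov-Shabat problem sample by sample; the main spectrum is then read off as the points where the Floquet discriminant equals ±1:

```python
import numpy as np

# Hedged sketch: Ablowitz-Ladik-style transfer-matrix computation of the
# Floquet discriminant Delta(lam) for the focusing periodic Zakharov-Shabat
# problem. The main spectrum consists of the points where Delta(lam) = +/-1.
def floquet_discriminant(q, T, lam):
    """q: samples of one period of the waveform; T: period; lam: spectral point."""
    N = len(q)
    h = T / N
    z = np.exp(-1j * lam * h)
    M = np.eye(2, dtype=complex)
    for qn in q:
        Q = qn * h
        # Per-sample Ablowitz-Ladik transfer matrix (one common convention).
        Tn = np.array([[z, Q], [-np.conj(Q), 1 / z]]) / np.sqrt(1 + abs(Q) ** 2)
        M = Tn @ M
    return 0.5 * np.trace(M)

# Sanity check against the zero field, where Delta(lam) = cos(lam * T) exactly.
T = 2 * np.pi
print(abs(floquet_discriminant(np.zeros(256), T, 0.3) - np.cos(0.3 * T)))  # ~0
```

In practice one evaluates the discriminant over a grid of spectral points and root-finds the main-spectrum condition; the comparison in the paper concerns the accuracy and cost of exactly this direct-PNFT step.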
Abstract:
The dissertation consists of three chapters related to the low-price guarantee marketing strategy and energy efficiency analysis. A low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper, and paperboard industry.
Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conduct a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-differences model, which quantifies the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find the response in competitors' prices to be significant. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly; however, the significance vanishes if I use station-clustered standard errors. Comparing my observations with the predictions of different theories of low-price guarantees, I conclude that the empirical evidence supports the view that the low-price guarantee is a simple commitment device and induces lower prices.
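A minimal sketch of the difference-in-differences setup described above, with simulated data (the real analysis uses station-level panel data, price zones, and clustered standard errors):

```python
import numpy as np

# Hypothetical difference-in-differences sketch: posted price regressed on a
# treatment indicator (store offered the guarantee), a period indicator
# (after adoption), and their interaction, whose coefficient is the DiD effect.
rng = np.random.default_rng(1)
n = 400
treated = rng.integers(0, 2, n)  # 1 = store offered the guarantee
post = rng.integers(0, 2, n)     # 1 = observation after adoption
# Simulated posted prices (cents/liter) with a -0.7 treatment effect, as in the text.
price = (90 + 1.5 * treated + 0.5 * post
         - 0.7 * treated * post + rng.normal(scale=0.4, size=n))

X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(round(beta[3], 2))  # DiD estimate of the guarantee's price effect
```

The interaction coefficient isolates the price change for guarantee stores relative to the contemporaneous change at other stores.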
Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential government regulation, and explains firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimate consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, allowing firms to pre-commit to charging the lowest price among their competitors. A counterfactual analysis under a Bertrand competition setting shows that the stores that offered the guarantee attracted substantially more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to attract more consumers to the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product about which their consumers are most price-sensitive, while earning a profit from the products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about or regulate low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees would change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.
Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through a comparison with similar plants in the same industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, access to energy performance data for conducting industry benchmarking is usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In developing the energy performance indicator tools, consideration is given to the role that performance-based indicators play in motivating change; the steps necessary for indicator development, from interacting with an industry and securing adequate data to the actual application and use of an indicator once complete. How indicators are employed in the EPA's efforts to encourage industries to voluntarily improve their use of energy is discussed as well. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper & paperboard mills.
The individual equations are presented, as are the instructions for using those equations as implemented in an associated Microsoft Excel-based spreadsheet tool.
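As a hedged sketch of the general EPI idea (not the chapter's actual equations or data), a plant's benchmark score can be framed as the probability that a comparable plant would use more energy than this plant does, given its predicted energy use:

```python
import numpy as np
from math import erf, sqrt

# Hypothetical EPI-style benchmarking sketch: regress plant energy use on
# production, then score each plant by where its residual falls in the
# estimated residual distribution (using less energy than predicted = better).
rng = np.random.default_rng(2)
n = 120
output = rng.uniform(50, 500, n)                       # hypothetical output (tons)
energy = 4.0 * output + rng.normal(scale=80, size=n)   # hypothetical energy use

X = np.column_stack([np.ones(n), output])
beta, *_ = np.linalg.lstsq(X, energy, rcond=None)
resid = energy - X @ beta
sigma = resid.std(ddof=2)

def epi_score(r, sigma=sigma):
    """Score in (0, 1): probability a comparable plant uses MORE energy,
    assuming normally distributed residuals."""
    return 1 - 0.5 * (1 + erf(r / (sigma * sqrt(2))))

# A plant using less energy than predicted (negative residual) scores above 0.5.
print(epi_score(-sigma) > 0.5, epi_score(sigma) < 0.5)
```

The real EPI controls for plant-level factors beyond output and is distributed as a spreadsheet tool, as the text notes; this sketch only illustrates the residual-to-percentile scoring step.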
Abstract:
Marine- and terrestrial-derived biomarkers (alkenones, brassicasterol, dinosterol, and long-chain n-alkanes), as well as carbonate, biogenic opal, and ice-rafted debris (IRD), were measured in two sediment cores in the Sea of Okhotsk, which is located in the northwestern Pacific rim and characterized by high primary productivity. Down-core profiles of phytoplankton markers suggest that primary productivity abruptly increased during the global Meltwater Pulse events 1A (about 14 ka) and 1B (about 11 ka) and stayed high in the Holocene. Spatial and temporal distributions of the phytoplankton productivity were found to be consistent with changes in the reconstructed sea ice distribution on the basis of the IRD. This demonstrates that the progress and retreat of sea ice regulated primary productivity in the Sea of Okhotsk with minimum productivity during the glacial period. The mass accumulation rates of alkenones, CaCO3, and biogenic opal indicate that the dominant phytoplankton species during deglaciation was the coccolithophorid, Emiliania huxleyi, which was replaced by diatoms in the late Holocene. Such a phytoplankton succession was probably caused by an increase in silicate supply to the euphotic layer, possibly associated with a change in surface hydrography and/or linked to enhanced upwelling of North Pacific Deep Water.
Abstract:
With the growing pressure of eutrophication in tropical regions, the Mauritian shelf provides a natural setting in which to understand the variability of mesotrophic assemblages. Site-specific dynamics occur throughout the 1200 m depth gradient. The shallow assemblages divide into three types of warm-water mesotrophic foraminiferal assemblages, a consequence not only of high primary productivity restricting light to the benthos but also of low pore-water oxygenation, shelf geomorphology, and sediment partitioning. At intermediate depth (approx. 500 m), the increase in foraminiferal diversity is due to the cold-water coral habitat providing a greater range of micro-niches. Planktonic species characterise the lower bathyal zone, which emphasises the reduced benthic carbonate production at depth. However, owing to the strong hydrodynamics within the gulf, planktonic species occur in notable abundances throughout the whole depth gradient. Overall, this study can readily be compared with other tropical marine settings when investigating the long-term effects of tropical eutrophication and the biogeographic distribution of carbonate-producing organisms.
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease where the heart muscle is partially thickened and blood flow is - potentially fatally - obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and Echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests due to considerations of cost and time involved in interpreting the results of these tests by an expert cardiologist. Initially we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past for classifying individual heartbeats into different types of arrhythmia as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients into two groups: HCM vs. non-HCM. The challenges and issues we address include missing ECG waves in one or more leads and the dimensionality of a large feature-set. We address these by proposing imputation and feature-selection methods. We develop heartbeat-classifiers by employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM. 
The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus the two contributions of this thesis are the utilization of computational and statistical methods for discovering shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
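Two of the steps described above can be sketched as follows (hypothetical minimal versions; the dissertation's actual imputation and feature-selection methods, and its Random Forest and SVM classifiers, are richer):

```python
import numpy as np

# Minimal sketches of two steps described above, with hypothetical names:
# (1) mean-imputation of features missing because of absent ECG waves;
# (2) the patient-level rule: call a 12-lead recording HCM when the
#     proportion of beats labelled HCM exceeds a cutoff.
def impute_missing(features):
    """Replace NaN entries (missing waves) by the per-feature column mean."""
    col_means = np.nanmean(features, axis=0)
    return np.where(np.isnan(features), col_means, features)

def classify_patient(beat_labels, threshold=0.5):
    """beat_labels: per-beat predictions, 1 = HCM-like, 0 = normal."""
    return "HCM" if np.mean(beat_labels) > threshold else "non-HCM"

features = np.array([[0.9, np.nan], [1.1, 0.4], [1.0, 0.6]])
print(impute_missing(features)[0, 1])  # missing value -> mean of 0.4 and 0.6
print(classify_patient([1, 1, 0, 1]))
```

The proportion-based patient rule is what turns a heartbeat-level classifier into the full 12-lead HCM vs. non-HCM decision the text describes.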
Abstract:
The quality of a heuristic solution to an NP-hard combinatorial problem is hard to assess. A few studies have advocated and tested statistical bounds as a method for assessment. These studies indicate that statistical bounds are superior to the more widely known and used deterministic bounds. However, the previous studies have been limited to a few metaheuristics and combinatorial problems and, hence, the general performance of statistical bounds in combinatorial optimization remains an open question. This work complements the existing literature on statistical bounds by testing them on the metaheuristic Greedy Randomized Adaptive Search Procedures (GRASP) and four combinatorial problems. Our findings confirm previous results that statistical bounds are reliable for the p-median problem, and we note that they also seem reliable for the set covering problem. For the quadratic assignment problem, statistical bounds had previously been found reliable when obtained from a genetic algorithm, whereas in this work they are found to be less reliable. Finally, we provide statistical bounds for four 2-path network design problem instances for which the optimum is currently unknown.
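For illustration, one simple point estimate used in the statistical-bounds literature (a jackknife-style extreme-value estimate based on the two best order statistics; the paper's estimator may differ) works directly on the objective values from independent heuristic restarts:

```python
import numpy as np

# Hedged sketch of a statistical bound for a minimisation problem: a simple
# extreme-value point estimate of the optimum computed from the best and
# second-best objective values over repeated heuristic runs.
def estimate_optimum(values):
    """Jackknife-style estimate of the minimum: a_hat = x(1) - (x(2) - x(1)),
    where x(1) <= x(2) are the two smallest sampled values."""
    x = np.sort(np.asarray(values, dtype=float))
    return x[0] - (x[1] - x[0])

# Hypothetical objective values from independent GRASP restarts.
runs = [1034.0, 1041.0, 1037.0, 1029.0, 1046.0]
print(estimate_optimum(runs))  # an estimated lower bound on the optimum
```

By construction the estimate never exceeds the best value found, and the gap between them gives a sense of how far the heuristic may be from the (unknown) optimum; more refined versions fit a Weibull distribution to the run minima to attach a confidence level.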
Abstract:
My thesis focuses on health policies designed to encourage the supply of health care services. Access to health care is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait time between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, up from 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers observing rising wait times for health care, it is important to understand the structure of physicians' labour supply and how it affects the supply of health care services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health care services. Second, I examine how physicians' productivity is affected by their experience, through the mechanism of learning-by-doing, and use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace a retiring experienced physician while keeping the supply of health care services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring the variation in physicians' productivity over the course of their careers), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labour supply, the different types of services provided, and their prices.
These data cover a period during which the Quebec government changed the relative prices of health care services. I use a model-based approach to develop and estimate a structural model of labour supply in which physicians are multitasking. In my model, physicians choose the number of hours worked as well as the allocation of those hours across the different services provided, while service prices are set by the government. The model generates an earnings equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services provided and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on Quebec physicians' earnings merged with data on the same physicians' time use. I use the model to examine two dimensions of the supply of health care services. First, I analyze the use of monetary incentives to induce physicians to modify their production of different services. Although previous studies have often compared physician behaviour across compensation systems, relatively little is known about how physicians respond to changes in the prices of health care services. Current debates in Canadian health policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health care services.
My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health care services. Second, I analyze how experience affects physicians' productivity. This has important implications for the recruitment of physicians to meet the growing demand of an aging population, particularly as the most experienced (and most productive) physicians retire. In the first essay, I estimate the earnings function conditional on hours worked, using the instrumental variables method to control for possible endogeneity of hours worked. As instruments, I use indicator variables for physicians' ages, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the direct price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds on the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives. A change in relative prices leads physicians to allocate more hours of work to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analyzing variation in physicians' hours worked, the volume of services provided, and physicians' earnings. To do so, I use the simulated method of moments estimator. The results show that the direct substitution price elasticities are large and significantly positive, reflecting a tendency for physicians to increase the volume of the service whose price has risen the most.
The cross substitution price elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (mean elasticity of -0.02) as well as their clinical hours worked (mean elasticity of -0.07). They would also reduce the volume of services provided (mean elasticity of -0.05). Third, I exploit the natural link between the earnings of a fee-for-service physician and his productivity to establish the productivity profile of physicians. To do so, I modify the model specification to account for the relationship between a physician's productivity and his experience. I estimate the earnings equation using unbalanced panel data, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions. In addition, one more year of experience increases a physician's production of services by CAN$1,003. I use the estimated parameters of the model to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
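A minimal sketch of the instrumental-variables strategy used in the first essay (all variable names and data here are hypothetical, and the real specification includes many more instruments and controls):

```python
import numpy as np

# Illustrative two-stage least squares: earnings regressed on hours worked,
# with hours instrumented because an unobserved factor (here "ability")
# drives both hours and earnings and would bias naive OLS.
rng = np.random.default_rng(3)
n = 1000
instrument = rng.normal(size=n)   # e.g. a tax-rate shifter; affects hours only
ability = rng.normal(size=n)      # unobserved, makes hours endogenous
hours = 1.0 * instrument + ability + rng.normal(size=n)
earnings = 2.0 * hours + 2.0 * ability + rng.normal(size=n)

def two_sls(y, x, z):
    """2SLS with one endogenous regressor and one instrument (plus constants)."""
    Z = np.column_stack([np.ones(len(z)), z])
    # Stage 1: project the endogenous regressor on the instrument.
    g, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ g
    # Stage 2: regress the outcome on the fitted values.
    X = np.column_stack([np.ones(len(y)), x_hat])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[1]

print(round(two_sls(earnings, hours, instrument), 1))  # near the true slope of 2.0
```

Naive OLS is biased upward in this simulation because ability raises both hours and earnings; the instrument isolates the variation in hours that is unrelated to ability.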
Abstract:
This is a book on general issues to do with assessment, not a statistics book, although it touches on statistical issues. Although it is intended for an international readership, including teachers, educationalists, policy makers, and personnel managers, and would be of interest to such groups, it has been written mainly from the viewpoint of developed countries and is not obviously essential reading or an essential purchase for anyone. Short biographies of the 15 contributors are given at the start of the book, and references from all the chapters are brought together at the end. The order of chapters appears to be somewhat arbitrary, and there is little cross-referencing from one chapter to another.
Abstract:
In Czech schools, two methods of teaching reading are used: the analytic-synthetic (conventional) method and the genetic method (created in the 1990s). They differ in their theoretical foundations and in their methodology. The aim of this paper is to describe these theoretical approaches and present the results of a study that followed the differences in the development of initial reading skills under the two methods. A total of 452 first-grade children (ages 6-8) were assessed with a battery of reading tests at the beginning and end of the first grade and at the beginning of the second grade; 350 pupils participated at all three points. Based on the data analysis, the developmental dynamics of reading skills under both methods and the main differences in several aspects of reading ability (e.g., reading speed, reading technique, error rate) are described. The main focus is on the development of reading comprehension. Results show that pupils instructed using the genetic approach scored significantly better on the reading comprehension tests used, especially in the first grade. Statistically significant differences also occurred between classes independently of method; therefore, other factors such as the teacher's role and class composition are discussed.
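For illustration, a group comparison of the kind underlying these results can be sketched with Welch's t statistic (the scores below are made up, not the study's data):

```python
import numpy as np

# Hypothetical illustration of comparing reading comprehension scores between
# the two teaching methods using Welch's t statistic (unequal variances).
def welch_t(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

genetic = [14, 16, 15, 17, 18, 16]    # made-up comprehension scores
analytic = [12, 13, 14, 12, 15, 13]
print(round(welch_t(genetic, analytic), 2))
```

A large positive statistic favours the first group; the study's actual analyses also account for repeated measurement across the three testing points and for class-level clustering.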
Study of white spot disease in native species in the Persian Gulf by histopathology and PCR methods
Abstract:
After a serious disease outbreak caused by a new virus (WSV) occurred among cultured penaeid shrimps in Asian countries such as China beginning in 1993, and later in Latin American countries, a rapid and high mortality in cultured Penaeus indicus, with typical signs and symptoms of White Spot Syndrome Virus, occurred in the Abadan region of southern Iran during June and July 2002 and was confirmed by histopathology, PCR, TEM, and virology studies. The present study was conducted in 2005 to determine the prevalence (rate of infection, ROI) and severity of infection (SOI) of WSD in five species: Penaeus indicus, P. semisulcatus, P. merguiensis, Parapenaopsis styliferus, and Metapenaeus affinis (150 samples of captured shrimp and 90 samples of cultured shrimp). Of the 240 samples, 136 showed clinical and macroscopic signs and symptoms, including white spots on the carapace (0.5-2 mm), easy removal of the cuticle, fragility of the hepatopancreas, and red coloration of the motile limbs. Histopathological changes, such as specific intranuclear inclusion bodies (Cowdry type A), were observed in all target tissues (gill, epidermis, haemolymph, and midgut) but not in the hepatopancreas, among shrimp collected from various farms in the south as well as shrimp captured from the Persian Gulf, even those without clinical signs. ROI among species was estimated using the formula of NATIVIDAD & LIGHTNER (1992b), and SOI was graded using a generalized scheme for assigning a numerical qualitative value to the severity grade of infection provided by LIGHTNER (1996), based on histopathology and counts of specific inclusion bodies at different stages (as modified by B. Gholamhoseini). Samples with clinical signs showed grades above 2. Most P. semisulcatus and M. affinis samples showed grade 3, whereas in most P. styliferus samples grade 4 was observed, suggesting different sensitivities among species.
All samples were also tested by nested PCR with the IQ2000 WSSV detection kit; 183 of the 240 samples were positive, and the three levels of infection distinguished by this PCR confirmed our SOI grades while being more specific.