864 results for Production Inventory Model with Switching Time
Abstract:
A first stage collision database is assembled which contains electron-impact excitation, ionization, and recombination rate coefficients for B, B+, B2+, B3+, and B4+. The first stage database is constructed using the R-matrix with pseudostates, time-dependent close-coupling, and perturbative distorted-wave methods. A second stage collision database is then assembled which contains generalized collisional-radiative ionization, recombination, and power loss rate coefficients as a function of both temperature and density. The second stage database is constructed by solution of the collisional-radiative equations in the quasi-static equilibrium approximation using the first stage database. Both collision database stages reside in electronic form at the IAEA Labeled Atomic Data Interface (ALADDIN) database and the Atomic Data Analysis Structure (ADAS) open database.
Abstract:
Maritime transport is the main means of transporting goods worldwide, and fuels and petroleum products account for a large share of the goods carried by sea. Since Cabo Verde is an archipelago, sea transport plays a highly relevant role in the country's economy. We consider the fuel distribution problem in Cabo Verde, where a company is responsible for coordinating the distribution of petroleum products and managing the corresponding inventory levels at each port, so as to satisfy the demand for the various products. The objective is to determine fuel distribution policies that minimize the total distribution cost (transportation and operations) while keeping inventories at the desired levels. For convenience, and in accordance with the planning horizon, the problem is divided into two interconnected subproblems: a short-term one and a medium-term one. For the short-term problem, mixed integer programming models are discussed that simultaneously consider a continuous and a discrete measure of time, in order to model multiple time windows and consumption rates that vary daily. The models are strengthened with the inclusion of valid inequalities, and the problem is then solved using commercial software. For the medium-term problem, several mixed integer programming models are first discussed and compared for a short time horizon, now assuming a constant consumption rate, and new valid inequalities are introduced. Based on the chosen model, heuristic strategies combining three well-known heuristics, Rolling Horizon, Feasibility Pump, and Local Branching, are compared in order to generate good feasible solutions for planning horizons of several months.

Finally, in order to deal with unforeseen but important situations in maritime transport, such as bad weather and port congestion, we present a stochastic model for a short-term problem in which travel times and waiting times at ports are random. The problem is formulated as a two-stage model: in the first stage, the decisions concerning ship routes and the quantities to load and unload are taken; in the second stage (called the subproblem), the recourse decisions concerning the scheduling of operations are considered. The problem is solved by a decomposition method that uses an efficient algorithm to separate the inequalities violated in the subproblem.
Abstract:
Although the genetic code is generally viewed as immutable, alterations to its standard form occur in all three domains of life. A remarkable alteration to the standard genetic code occurs in many fungi of the Saccharomycotina CTG clade, where the leucine CUG codon has been reassigned to serine by a novel transfer RNA (Ser-tRNACAG). The host laboratory made a major breakthrough by reversing this atypical genetic code alteration in the human pathogen Candida albicans using a combination of tRNA engineering, gene recombination and forced evolution. These results raised the hypothesis that synthetic codon ambiguities combined with experimental evolution may release codons from their frozen state. In this thesis we tested this hypothesis using S. cerevisiae as a model system. We generated ambiguity at specific codons in a two-step approach, involving deletion of tRNA genes followed by expression of non-cognate tRNAs that are able to compensate for the deleted tRNA. Driven by the notion that rare codons are more susceptible to reassignment than those that are frequently used, we used two deletion strains in which there is no cognate tRNA to decode the rare CUC-Leu codon and the rare AGG-Arg codon. We exploited this vulnerability by engineering mutant tRNAs that misincorporate Ser at these sites. These recombinant strains were then subjected to experimental evolution. Although there was a strong negative impact on the growth rate of strains expressing mutant tRNAs at a high level, expression at a low level had little effect on cell fitness. We found that not only codon ambiguity, but also destabilization of the endogenous tRNA pool, has a strong negative impact on growth rate. After evolution, strains expressing the mutant tRNA at a high level recovered significantly in several growth parameters, showing that these strains adapt and exhibit higher tolerance to codon ambiguity.
A fluorescent reporter system allowing the monitoring of Ser misincorporation showed that serine was indeed incorporated and that codon reassignment was possibly achieved. Besides the overall negative consequences of codon ambiguity, we demonstrated that codons that tolerate the loss of their cognate tRNA can also tolerate high Ser misincorporation. This raises the hypothesis that these codons can be reassigned to standard, and eventually to new, amino acids for the production of proteins with novel properties, contributing to the fields of synthetic biology and biotechnology.
Abstract:
We consider a Bertrand duopoly model with unknown costs. Each firm aims to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium, and the choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unit production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production cost being dominated by the effect of the firm's own expected production cost.
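The abstract does not specify the demand system, so the following is only a minimal sketch of how a Bayesian Nash equilibrium with two private cost types per firm can be computed numerically, assuming a hypothetical linear demand q_i = a - b*p_i + d*p_j; all numbers are illustrative, not taken from the paper.

```python
# Sketch: Bayesian Nash equilibrium in a Bertrand duopoly with two cost
# types per firm, via fixed-point iteration on type-contingent best responses.
# Demand form and parameter values are assumptions for illustration only.

def bayesian_nash(a=10.0, b=1.0, d=0.5,
                  costs=((1.0, 3.0), (1.0, 3.0)),
                  probs=((0.5, 0.5), (0.5, 0.5)), iters=200):
    """Firm i of cost type c maximizes (p - c) * (a - b*p + d*E[p_j]),
    giving the best response p = (a + d*E[p_j] + b*c) / (2*b)."""
    exp_price = [a / b, a / b]            # initial guess for expected rival prices
    prices = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(iters):
        for i in (0, 1):
            j = 1 - i
            prices[i] = [(a + d * exp_price[j] + b * c) / (2 * b)
                         for c in costs[i]]
        # Expected price of each firm, averaging over its cost types.
        exp_price = [sum(w * p for w, p in zip(probs[i], prices[i]))
                     for i in (0, 1)]
    return prices, exp_price
```

With symmetric firms the sketch yields a symmetric equilibrium, and each type's price is increasing in its own cost, in line with the comparative statics stated in the abstract.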
Abstract:
Forecasting future sales is one of the most important issues underlying all strategic and planning decisions in effective retail operations. For profitable retail businesses, accurate demand forecasting is crucial for organizing and planning production, purchasing, transportation and the labor force. Retail sales series belong to a special type of time series that typically contain trend and seasonal patterns, presenting challenges for developing effective forecasting models. This work compares the forecasting performance of state space models and ARIMA models. The forecasting performance is demonstrated through a case study of retail sales of five categories of women's footwear: Boots, Booties, Flats, Sandals and Shoes. For both methodologies, the model with the minimum value of Akaike's Information Criterion over the in-sample period was selected from all admissible models for further out-of-sample evaluation. Both one-step and multiple-step forecasts were produced. The results show that, when an automatic algorithm is used, the overall out-of-sample forecasting performance of state space and ARIMA models, evaluated via RMSE, MAE and MAPE, is quite similar for both one-step and multi-step forecasts. We also conclude that state space and ARIMA models produce coverage probabilities that are close to the nominal rates for both one-step and multi-step forecasts.
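The three out-of-sample accuracy measures named in the abstract (RMSE, MAE and MAPE) can be sketched in a few lines; the sales figures below are made up for illustration.

```python
# Illustrative implementations of the forecast accuracy measures used in the
# comparison above; the sample data are hypothetical.
import math

def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    # Mean absolute percentage error; assumes no zero actual values.
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

sales = [120, 135, 150, 160]   # hypothetical monthly sales of one category
preds = [118, 140, 148, 155]   # hypothetical out-of-sample forecasts
print(rmse(sales, preds), mae(sales, preds), mape(sales, preds))
```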
Abstract:
Conventional thermoelectric power plants convert only part of the fuel they consume into electric energy, with another part lost as heat. Cogeneration, or Combined Heat and Power (CHP), units emerged in this context: they recover the energy dissipated as heat and make it available, together with the electric energy generated, for domestic or industrial consumption, which makes them more efficient than conventional units. The electricity and heat production costs of CHP units are represented by a nonlinear function, and their feasible operating region may be convex or non-convex, depending on the characteristics of each unit. For these reasons, modeling CHP units within the Unit Commitment Problem (UCP) is particularly relevant for companies that also own this type of unit. These companies aim to decide which units, among the CHP units and the units that generate only electricity or only heat, should be switched on, and at what production levels, in order to satisfy the demand for electricity and heat at minimum cost. In this document two mixed integer programming models are proposed for the UCP with cogeneration units: a nonlinear model that includes the true production cost function of the CHP units, and a model that linearizes this function through a convex combination of a predefined number of extreme points. In both models the non-convex feasible operating region is modeled by dividing it into two distinct convex areas. Computational tests performed with both models on several instances showed the efficiency of the proposed linear model, which yielded the optimal solutions of the nonlinear model in significantly smaller computation times.

In addition, both models were tested with and without the inclusion of load pickup and load shedding constraints, allowing us to conclude that this type of constraint increases the complexity of the problem, with the computation time required to solve it growing significantly.
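The convex-combination linearization mentioned above can be illustrated on a one-dimensional cost curve: the nonlinear cost is replaced by a piecewise-linear interpolation between a predefined set of extreme points. The quadratic cost and the breakpoints below are stand-in assumptions, not the actual CHP cost function of the thesis.

```python
# Sketch of the convex-combination ("lambda") linearization of a nonlinear
# production cost. The quadratic cost below is a hypothetical example.

def quad_cost(p):
    return 0.5 * p * p + 2.0 * p + 10.0      # assumed nonlinear cost of power p

breakpoints = [0.0, 5.0, 10.0, 15.0, 20.0]    # predefined extreme points
costs = [quad_cost(p) for p in breakpoints]

def linearized_cost(p):
    """Evaluate the piecewise-linear approximation at power level p,
    expressing p as a convex combination of two adjacent extreme points."""
    for k in range(len(breakpoints) - 1):
        lo, hi = breakpoints[k], breakpoints[k + 1]
        if lo <= p <= hi:
            lam = (hi - p) / (hi - lo)        # convex weights lam and 1 - lam
            return lam * costs[k] + (1 - lam) * costs[k + 1]
    raise ValueError("p outside the operating range")
```

The approximation is exact at the extreme points and, for a convex cost, overestimates the cost between them, so refining the set of extreme points tightens the linear model.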
Abstract:
Different oil-containing substrates, namely used cooking oil (UCO), a fatty acid byproduct of biodiesel production (FAB) and olive oil deodorizer distillate (OODD), were tested as inexpensive carbon sources for the production of polyhydroxyalkanoates (PHA) using twelve bacterial strains in batch experiments. OODD and FAB were exploited for the first time as alternative substrates for PHA production. Among the tested bacterial strains, Cupriavidus necator and Pseudomonas resinovorans exhibited the most promising results, producing poly-3-hydroxybutyrate, P(3HB), from UCO and OODD, and mcl-PHA mainly composed of 3-hydroxyoctanoate (3HO) and 3-hydroxydecanoate (3HD) monomers from OODD, respectively. Afterwards, these bacterial strains were cultivated in bioreactors. C. necator was cultivated in a bioreactor using UCO as carbon source, and different feeding strategies were tested, namely batch, exponential feeding and DO-stat mode. The highest overall PHA productivity (12.6±0.78 g L-1 day-1) was obtained using the DO-stat mode. Apparently, the different feeding regimes had no impact on the polymer's thermal properties; however, differences in the polymer's molecular mass distribution were observed. C. necator was also tested in batch and fed-batch modes using a different type of oil-containing substrate, extracted from spent coffee grounds (SCG) by supercritical carbon dioxide (sc-CO2). Under fed-batch mode (DO-stat), the overall PHA productivity was 4.7 g L-1 day-1 with a storage yield of 0.77 g g-1. The results showed that SCG can be a bioresource for the production of PHA with interesting properties. Furthermore, P. resinovorans was cultivated using OODD as substrate in a bioreactor under fed-batch mode (pulse feeding regime). The polymer was highly amorphous, as shown by its low crystallinity of 6±0.2%, with low melting and glass transition temperatures of 36±1.2 and -16±0.8 ºC, respectively.

Due to its sticky behavior at room temperature, its adhesiveness and mechanical properties were also studied. Its shear bond strength on wood (67±9.4 kPa) and glass (65±7.3 kPa) suggests it may be used for the development of biobased glues. Bioreactor operation and monitoring with oil-containing substrates is very challenging, since these substrates are water-immiscible. Thus, near-infrared spectroscopy (NIR) was implemented for online monitoring of the C. necator cultivation with UCO, using a transflectance probe. Partial least squares (PLS) regression was applied to relate the NIR spectra to the biomass, UCO and PHA concentrations in the broth, and the NIR predictions were compared with values obtained by offline reference methods. The prediction errors were 1.18 g L-1, 2.37 g L-1 and 1.58 g L-1 for biomass, UCO and PHA, respectively, which indicates the suitability of NIR spectroscopy for online monitoring and as a method to assist bioreactor control. UCO and OODD are low-cost substrates with potential to be used in batch and fed-batch PHA production. The use of NIR in this bioprocess also opens an opportunity for the optimization and control of the PHA production process.
Abstract:
This study investigates three questions related to medical practice variation. First, it tests whether average length of stay varies across Portuguese National Health Service hospitals when controlling for differences in patients' characteristics. Second, it looks at hospital-level characteristics in order to find out whether these can explain differences in average length of stay across hospitals. Finally, it proposes a best-practice average length of stay for each of the six episodes of care analyzed. The analysis uses administrative data from the Diagnosis-Related Groups data set for the year 2012. A replication of a hierarchical two-stage model with hospital fixed effects was carried out. The results show that, after taking patients' characteristics into account, variation in average length of stay across hospitals exists. This variation cannot be explained by hospital-level characteristics.
Abstract:
Polyhydroxyalkanoates (PHAs) are natural, biologically synthesized polymers that have been the subject of much interest in recent decades due to their biodegradability. Thus far, their microbial production has been associated with high operational costs, which increases PHA prices and limits their marketability. To address this situation, this thesis proposes the utilization of photosynthetic mixed cultures (PMCs) as a new PHA production system that may lead to a reduction in operational costs. In fact, the operational strategies developed in this work led to the selection of PHA-accumulating PMCs that, unlike traditional mixed microbial cultures, do not require aeration, thus permitting savings on this significant operational cost. In particular, the first PHA-accumulating PMC tested in this work was selected under non-aerated illuminated conditions in a feast-and-famine regime, yielding a consortium of bacteria and algae in which photosynthetic bacteria accumulated PHA during the feast phase and consumed it for growth during the famine phase, using the oxygen produced by the algae. In this symbiotic system, a maximum PHA content of 20% cell dry weight (cdw) was reached, proving for the first time the capacity of a PMC to accumulate PHA. During adaptation to alternating dark/light conditions, the culture decreased its algae content but maintained its viability, achieving a PHA content of 30% cdw. The PMC was also found to be able to utilize different volatile fatty acids for PHA production, accumulating up to 20% cdw of a PHA co-polymer composed of 3-hydroxybutyrate (3HB) and 3-hydroxyvalerate (3HV) monomers. Finally, a new selective approach for the enrichment of PMCs in PHA-accumulating bacteria was tested.
Instead of imposing a feast-and-famine regime, a permanent feast regime was used, thus selecting a PMC that was capable of simultaneously growing and accumulating PHA and attaining a maximum PHA content of 60% cdw, the highest value reported for a PMC thus far. The results presented in this thesis point to the utilization of cheap, VFA-rich fermented wastes as substrates for PHA production, which, combined with this new photosynthetic technology, opens up the possibility of direct sunlight illumination, leading to a more cost-effective and environmentally sustainable PHA production process.
Abstract:
Animal models of infective endocarditis (IE) induced by high-grade bacteremia revealed the pathogenic roles of Staphylococcus aureus surface adhesins and platelet aggregation in the infection process. In humans, however, S. aureus IE possibly occurs through repeated bouts of low-grade bacteremia from a colonized site or intravenous device. Here we used a rat model of IE induced by continuous low-grade bacteremia to explore further the contributions of S. aureus virulence factors to the initiation of IE. Rats with aortic vegetations were inoculated by continuous intravenous infusion (0.0017 ml/min over 10 h) with 10^6 CFU of Lactococcus lactis pIL253 or a recombinant L. lactis strain expressing an individual S. aureus surface protein (ClfA, FnbpA, BCD, or SdrE) conferring a particular adhesive or platelet aggregation property. Vegetation infection was assessed 24 h later. Plasma was collected at 0, 2, and 6 h postinoculation to quantify the expression of tumor necrosis factor (TNF), interleukin 1α (IL-1α), IL-1β, IL-6, and IL-10. The percentage of vegetation infection relative to that with strain pIL253 (11%) increased when binding to fibrinogen was conferred on L. lactis (ClfA strain) (52%; P = 0.007) and increased further with adhesion to fibronectin (FnbpA strain) (75%; P < 0.001). Expression of fibronectin binding alone was not sufficient to induce IE (BCD strain) (10% of infection). Platelet aggregation increased the risk of vegetation infection (SdrE strain) (30%). Conferring adhesion to fibrinogen and fibronectin favored IL-1β and IL-6 production. Our results, with a model of IE induced by low-grade bacteremia, resembling human disease, extend the essential role of fibrinogen binding in the initiation of S. aureus IE. Triggering of platelet aggregation or an inflammatory response may contribute to or promote the development of IE.
Abstract:
Fifty-six percent of Canadians, 20 years of age and older, are inactive (Canadian Community Health Survey, 2000/2001). Research has indicated that one of the most dramatic declines in population physical activity occurs between adolescence and young adulthood (Malina, 2001; Stephens, Jacobs, & White, 1985), a time when individuals are entering or attending college or university. Colleges and universities have generally been seen as environments where physical activity and sport can be promoted and accommodated as a result of the available resources and facilities (Archer, Probert, & Gagne, 1987; Suminski, Petosa, Utter, & Zhang, 2002). Intramural sports, one of the most common campus recreational sports options available for post-secondary students, enable students to participate in activities that are suited to different levels of ability and interest (Lewis, Jones, Lamke, & Dunn, 1998). While intramural sports can positively affect the physical activity levels and sport participation rates of post-secondary students, their true value lies in their ability to encourage sport participation after school ends and during the post-school lives of graduates (Forrester, Ross, Geary, & Hall, 2007). This study used the Sport Commitment Model (Scanlan et al., 1993a) and the Theory of Planned Behaviour (Ajzen, 1991) with post-secondary intramural volleyball participants in an effort to examine students' commitment to intramural sport and intentions to participate in intramural sports. More specifically, the research objectives of this study were to: (1.) test the Sport Commitment Model with a sample of post-secondary intramural sport participants; (2.) determine the utility of the sixth construct, social support, in explaining the sport commitment of post-secondary intramural sport participants; (3.) determine if there are any significant differences in the six constructs of the SCM and sport commitment between: gender, level of competition (competitive A vs. B), and number of different intramural sports played; (4.) determine if there are any significant differences between sport commitment levels and constructs from the Theory of Planned Behaviour (attitudes, subjective norms, perceived behavioural control, and intentions); (5.) determine the relationship between sport commitment and intention to continue participation in intramural volleyball, to continue participating in intramurals, and to continue participating in sport and physical activity after graduation; and (6.) determine if the level of sport commitment changes the relationship between the constructs from the Theory of Planned Behaviour. Of the 318 surveys distributed, there were 302 participants who completed a usable survey from the sample of post-secondary intramural sport participants. There was a fairly even split of males and females; the average age of the students was twenty-one; 90% were undergraduate students; for approximately 25% of the students, volleyball was the only intramural sport they participated in at Brock; and most were part of the volleyball competitive B division. Based on the post-secondary students' responses, there are indications of intent to continue participation in sport and physical activity. The participation of the students is predominantly influenced by subjective norms, high sport commitment, and high sport enjoyment. This implies that students expect, intend and want to participate in intramurals in the future, that they are very dedicated to playing on an intramural team and would be willing to do a lot to keep playing, and that students want to participate when they perceive their pursuits as enjoyable and fun and it makes them happy. These are key areas that should be targeted and pursued by sport practitioners.
Abstract:
This paper studies the transition between exchange rate regimes using a Markov chain model with time-varying transition probabilities. The probabilities are parameterized as nonlinear functions of variables suggested by the currency crisis and optimal currency area literature. Results using annual data indicate that inflation, and to a lesser extent, output growth and trade openness help explain the exchange rate regime transition dynamics.
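The abstract says the transition probabilities are parameterized as nonlinear functions of macroeconomic variables; a common concrete choice is a logistic function, which the sketch below assumes. The coefficients and the inflation series are invented for illustration, not estimates from the paper.

```python
# Sketch of a two-state Markov chain whose transition probabilities vary with
# a covariate (here a hypothetical inflation series) through a logistic link.
# Coefficients are illustrative assumptions, not estimated values.
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def stay_prob(beta0, beta1, covariate):
    # P(remain in the current regime | covariate) = logistic(beta0 + beta1 * x)
    return logistic(beta0 + beta1 * covariate)

inflation = [2.0, 3.5, 8.0, 12.0]   # hypothetical annual inflation rates (%)
# With beta1 < 0, higher inflation makes remaining in a fixed regime less likely.
probs = [stay_prob(4.0, -0.5, x) for x in inflation]
print(probs)
```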
Abstract:
This paper constructs and estimates a sticky-price, Dynamic Stochastic General Equilibrium model with heterogeneous production sectors. Sectors differ in price stickiness, capital-adjustment costs and production technology, and use output from each other as material and investment inputs following an Input-Output Matrix and Capital Flow Table that represent the U.S. economy. By relaxing the standard assumption of symmetry, this model allows different sectoral dynamics in response to monetary policy shocks. The model is estimated by Simulated Method of Moments using sectoral and aggregate U.S. time series. Results indicate 1) substantial heterogeneity in price stickiness across sectors, with quantitatively larger differences between services and goods than previously found in micro studies that focus on final goods alone, 2) a strong sensitivity to monetary policy shocks on the part of construction and durable manufacturing, and 3) similar quantitative predictions at the aggregate level by the multi-sector model and a standard model that assumes symmetry across sectors.
Abstract:
Avec les avancements de la technologie de l'information, les données temporelles économiques et financières sont de plus en plus disponibles. Par contre, si les techniques standard de l'analyse des séries temporelles sont utilisées, une grande quantité d'information est accompagnée du problème de dimensionnalité. Puisque la majorité des séries d'intérêt sont hautement corrélées, leur dimension peut être réduite en utilisant l'analyse factorielle. Cette technique est de plus en plus populaire en sciences économiques depuis les années 90. Étant donnée la disponibilité des données et des avancements computationnels, plusieurs nouvelles questions se posent. Quels sont les effets et la transmission des chocs structurels dans un environnement riche en données? Est-ce que l'information contenue dans un grand ensemble d'indicateurs économiques peut aider à mieux identifier les chocs de politique monétaire, à l'égard des problèmes rencontrés dans les applications utilisant des modèles standards? Peut-on identifier les chocs financiers et mesurer leurs effets sur l'économie réelle? Peut-on améliorer la méthode factorielle existante et y incorporer une autre technique de réduction de dimension comme l'analyse VARMA? Est-ce que cela produit de meilleures prévisions des grands agrégats macroéconomiques et aide au niveau de l'analyse par fonctions de réponse impulsionnelles? Finalement, est-ce qu'on peut appliquer l'analyse factorielle au niveau des paramètres aléatoires? Par exemple, est-ce qu'il existe seulement un petit nombre de sources de l'instabilité temporelle des coefficients dans les modèles macroéconomiques empiriques? Ma thèse, en utilisant l'analyse factorielle structurelle et la modélisation VARMA, répond à ces questions à travers cinq articles. Les deux premiers chapitres étudient les effets des chocs monétaire et financier dans un environnement riche en données. Le troisième article propose une nouvelle méthode en combinant les modèles à facteurs et VARMA. 
Cette approche est appliquée dans le quatrième article pour mesurer les effets des chocs de crédit au Canada. La contribution du dernier chapitre est d'imposer la structure à facteurs sur les paramètres variant dans le temps et de montrer qu'il existe un petit nombre de sources de cette instabilité. Le premier article analyse la transmission de la politique monétaire au Canada en utilisant le modèle vectoriel autorégressif augmenté par facteurs (FAVAR). Les études antérieures basées sur les modèles VAR ont trouvé plusieurs anomalies empiriques suite à un choc de la politique monétaire. Nous estimons le modèle FAVAR en utilisant un grand nombre de séries macroéconomiques mensuelles et trimestrielles. Nous trouvons que l'information contenue dans les facteurs est importante pour bien identifier la transmission de la politique monétaire et elle aide à corriger les anomalies empiriques standards. Finalement, le cadre d'analyse FAVAR permet d'obtenir les fonctions de réponse impulsionnelles pour tous les indicateurs dans l'ensemble de données, produisant ainsi l'analyse la plus complète à ce jour des effets de la politique monétaire au Canada. Motivée par la dernière crise économique, la recherche sur le rôle du secteur financier a repris de l'importance. Dans le deuxième article nous examinons les effets et la propagation des chocs de crédit sur l'économie réelle en utilisant un grand ensemble d'indicateurs économiques et financiers dans le cadre d'un modèle à facteurs structurel. Nous trouvons qu'un choc de crédit augmente immédiatement les diffusions de crédit (credit spreads), diminue la valeur des bons de Trésor et cause une récession. Ces chocs ont un effet important sur des mesures d'activité réelle, indices de prix, indicateurs avancés et financiers. Contrairement aux autres études, notre procédure d'identification du choc structurel ne requiert pas de restrictions temporelles entre facteurs financiers et macroéconomiques. 
De plus, elle donne une interprétation des facteurs sans restreindre l'estimation de ceux-ci. Dans le troisième article nous étudions la relation entre les représentations VARMA et factorielle des processus vectoriels stochastiques, et proposons une nouvelle classe de modèles VARMA augmentés par facteurs (FAVARMA). Notre point de départ est de constater qu'en général les séries multivariées et facteurs associés ne peuvent simultanément suivre un processus VAR d'ordre fini. Nous montrons que le processus dynamique des facteurs, extraits comme combinaison linéaire des variables observées, est en général un VARMA et non pas un VAR comme c'est supposé ailleurs dans la littérature. Deuxièmement, nous montrons que même si les facteurs suivent un VAR d'ordre fini, cela implique une représentation VARMA pour les séries observées. Alors, nous proposons le cadre d'analyse FAVARMA combinant ces deux méthodes de réduction du nombre de paramètres. Le modèle est appliqué dans deux exercices de prévision en utilisant des données américaines et canadiennes de Boivin, Giannoni et Stevanovic (2010, 2009) respectivement. Les résultats montrent que la partie VARMA aide à mieux prévoir les importants agrégats macroéconomiques relativement aux modèles standards. Finalement, nous estimons les effets de choc monétaire en utilisant les données et le schéma d'identification de Bernanke, Boivin et Eliasz (2005). Notre modèle FAVARMA(2,1) avec six facteurs donne les résultats cohérents et précis des effets et de la transmission monétaire aux États-Unis. Contrairement au modèle FAVAR employé dans l'étude ultérieure où 510 coefficients VAR devaient être estimés, nous produisons les résultats semblables avec seulement 84 paramètres du processus dynamique des facteurs. L'objectif du quatrième article est d'identifier et mesurer les effets des chocs de crédit au Canada dans un environnement riche en données et en utilisant le modèle FAVARMA structurel. 
Dans le cadre théorique de l'accélérateur financier développé par Bernanke, Gertler et Gilchrist (1999), nous approximons la prime de financement extérieur par les credit spreads. D'un côté, nous trouvons qu'une augmentation non-anticipée de la prime de financement extérieur aux États-Unis génère une récession significative et persistante au Canada, accompagnée d'une hausse immédiate des credit spreads et taux d'intérêt canadiens. La composante commune semble capturer les dimensions importantes des fluctuations cycliques de l'économie canadienne. L'analyse par décomposition de la variance révèle que ce choc de crédit a un effet important sur différents secteurs d'activité réelle, indices de prix, indicateurs avancés et credit spreads. De l'autre côté, une hausse inattendue de la prime canadienne de financement extérieur ne cause pas d'effet significatif au Canada. Nous montrons que les effets des chocs de crédit au Canada sont essentiellement causés par les conditions globales, approximées ici par le marché américain. Finalement, étant donnée la procédure d'identification des chocs structurels, nous trouvons des facteurs interprétables économiquement. Le comportement des agents et de l'environnement économiques peut varier à travers le temps (ex. changements de stratégies de la politique monétaire, volatilité de chocs) induisant de l'instabilité des paramètres dans les modèles en forme réduite. Les modèles à paramètres variant dans le temps (TVP) standards supposent traditionnellement les processus stochastiques indépendants pour tous les TVPs. Dans cet article nous montrons que le nombre de sources de variabilité temporelle des coefficients est probablement très petit, et nous produisons la première évidence empirique connue dans les modèles macroéconomiques empiriques. L'approche Factor-TVP, proposée dans Stevanovic (2010), est appliquée dans le cadre d'un modèle VAR standard avec coefficients aléatoires (TVP-VAR). 
We find that a single factor explains the majority of the variability in the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out on data that include the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients exhibits an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the time instability in almost 700 coefficients.
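The core idea, that a large set of time-varying coefficients may be driven by very few common factors, can be sketched with a simulated example (a minimal illustration, not the thesis's estimation procedure; the single-factor data-generating process, loadings, and sample sizes are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_coef = 200, 20

# Assume one persistent common factor drives all coefficient paths
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.95 * f[t - 1] + rng.normal()

loadings = rng.normal(size=n_coef)
# Coefficient paths: factor component plus small idiosyncratic noise
paths = np.outer(f, loadings) + 0.1 * rng.normal(size=(T, n_coef))

# Principal components of the coefficient paths via SVD of the centered matrix
X = paths - paths.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
share = s[0] ** 2 / np.sum(s ** 2)
print(f"variance share of the first principal component: {share:.2f}")
```

When the single-factor structure holds, the first principal component absorbs most of the variability across all coefficient paths, which is the pattern the article reports for the TVP-VAR coefficients.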
Abstract:
The goal of this thesis is to extend bootstrap theory to panel-data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity across individuals and across time periods, and hence to conduct richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it allows more precise inference than classical asymptotic theory, or inference that would otherwise be impossible in the presence of nuisance parameters. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification, or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. The three chapters analyse its validity and its application. The first chapter posits a simple model with a single parameter and addresses the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual and the temporal dimension, is valid in these models. Resampling in the individual dimension alone is not valid in the presence of temporal heterogeneity. Resampling in the temporal dimension alone is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model.
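The double resampling scheme described above can be sketched for the panel mean (a minimal illustration with simulated data; the heterogeneity structure, sample sizes, and number of bootstrap draws are illustrative, not the thesis's exact design):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 30, 10

# Panel with both individual and temporal heterogeneity (simulated)
alpha = rng.normal(size=N)            # individual effects
gamma = rng.normal(size=T)            # time effects
y = alpha[:, None] + gamma[None, :] + rng.normal(size=(N, T))

theta_hat = y.mean()                  # estimator of the mean

# Double resampling: draw individuals AND time periods with replacement,
# then re-estimate on the crossed subsample
B = 500
boot = np.empty(B)
for b in range(B):
    ii = rng.integers(0, N, size=N)   # resampled individuals
    tt = rng.integers(0, T, size=T)   # resampled time periods
    boot[b] = y[np.ix_(ii, tt)].mean()

se = boot.std(ddof=1)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean={theta_hat:.3f}, bootstrap SE={se:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

Resampling only `ii` (rows) or only `tt` (columns) would ignore the heterogeneity in the other dimension, which is exactly the failure mode the first chapter documents.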
Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary across both time and individuals. Using a two-way error-components model, the ordinary least squares estimator, and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the temporal dimension alone is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling, for its part, is valid for inference on the full parameter vector. The third chapter re-examines the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 U.S. states from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence leads to severe size distortions in tests evaluating the impact of public policies with panel data. One of the recommended solutions is to use the bootstrap. The double resampling method developed in this thesis corrects the test-size problem and thus allows a correct evaluation of the impact of public policies.
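The placebo exercise can be sketched in miniature (a simulation under assumed parameters rather than the CPS data; it only illustrates the over-rejection caused by ignoring serial correlation, the size distortion the chapter's double bootstrap is designed to correct):

```python
import numpy as np

rng = np.random.default_rng(2)
S, T = 50, 21          # states and years, mimicking 50 states over 1979-1999
reps, rej = 200, 0

for _ in range(reps):
    # Serially correlated state-level errors: the feature Bertrand et al. stress
    e = np.zeros((S, T))
    for t in range(1, T):
        e[:, t] = 0.8 * e[:, t - 1] + rng.normal(size=S)
    y = e                                   # no true policy effect

    # Placebo intervention: half the states "treated" from a random mid-sample year
    treated = rng.permutation(S)[: S // 2]
    start = rng.integers(5, T - 5)
    D = np.zeros((S, T))
    D[treated, start:] = 1.0

    # Two-way within transformation, then a naive OLS t-test on the DiD coefficient
    yd = y - y.mean(1, keepdims=True) - y.mean(0, keepdims=True) + y.mean()
    Dd = D - D.mean(1, keepdims=True) - D.mean(0, keepdims=True) + D.mean()
    beta = (Dd * yd).sum() / (Dd ** 2).sum()
    resid = yd - beta * Dd
    se = np.sqrt((resid ** 2).sum() / (S * T - S - T) / (Dd ** 2).sum())
    rej += int(abs(beta / se) > 1.96)

print(f"placebo rejection rate at nominal 5%: {rej / reps:.2f}")
```

With no true effect, a correctly sized 5% test should reject about 5% of the time; the naive test rejects far more often because it treats serially correlated observations as independent.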