922 results for Data replication processes
Abstract:
The effects of body condition recovery (BC), carcass electrical stimulation (ES), aging time (AT, 7-14 days), and calcium chloride injection on the meat characteristics of Santa Inês ewes (±5 years old) slaughtered immediately after weaning or after the body condition recovery period were studied. The carcass temperature, pH, shear force (SF), cooking loss (CL), meat color (L*, a*, b*), and meat tenderness were evaluated. A completely randomized design in a 2 × 2 × 2 × 3 (BC × ES × CaCl2 × AT) factorial arrangement (24 treatment combinations) was used, and the sensory tenderness data were analyzed using the table of Minimum Number of Correct Answers for the Duo-Trio test. Body condition recovery reduced the shear force by 8%, increasing tenderness. Electrical stimulation reduced the shear force (24%) and did not change the other parameters. Aging (7 or 14 days) decreased the shear force (18-26%), an effect that was enhanced by electrical stimulation, and darkened the meat, reducing lightness (L*) and increasing yellowness (b*). The treatment with CaCl2 was the most effective in tenderizing the meat, reducing the shear force (35%), increasing the cooking loss (4.5%), and increasing L* and b*, lightening the meat. The sensory evaluation of tenderness corroborated the instrumental findings regarding the effect of the CaCl2 treatment on meat quality improvement. It was concluded that the treatments improve meat characteristics, achieving better results when applied together.
Abstract:
Sustainability and recycling are core values in today’s industrial operations. New materials, products and processes need to be designed in such a way as to consume fewer of the diminishing resources we have available and to put as little strain on the environment as possible. An integral part of this is cleaning and recycling, and new processes need to be designed to improve efficiency in this respect. Wastewater, including municipal wastewater, is treated in several steps, including chemical and mechanical cleaning; well-cleaned water can be recycled and reused. Clean water for everyone is one of the greatest challenges we face today. Ferric sulphate, made by oxidation from ferrous sulphate, is used in water purification. The oxidation of ferrous sulphate, FeSO4, to ferric sulphate in acidic aqueous solutions of H2SO4 over finely dispersed active carbon particles was studied in a vigorously stirred batch reactor. Molecular oxygen was used as the oxidation agent and several catalysts were screened: active carbon and active carbon impregnated with Pt, Rh, Pd and Ru. Both active carbon and noble metal-active carbon catalysts enhanced the oxidation rate considerably. The order of the noble metals according to their effect was: Pt >> Rh > Pd, Ru. By using catalysts, the production capacities of existing oxidation units can be increased considerably. Good coagulants have a high charge on a long polymer chain, effectively capturing dirt particles of the opposite charge. Analysis of the reaction product indicated that it is possible to obtain polymeric iron-based products with good coagulation properties. Systematic kinetic experiments were carried out in the temperature and pressure ranges of 60-100°C and 4-10 bar, respectively. The results revealed that both non-catalytic and catalytic oxidation of Fe2+ to Fe3+ take place simultaneously. The experimental data were fitted to rate equations based on a plausible reaction mechanism: adsorption of dissolved oxygen on active carbon, electron transfer from Fe2+ ions to the adsorbed oxygen, and formation of surface hydroxyls (a schematic rate law is sketched after this abstract). A comparison of the Fe2+ concentrations predicted by the kinetic model with the experimentally observed concentrations indicated that the mechanistic rate equations were able to describe the intrinsic oxidation kinetics of Fe2+ over active carbon and active carbon-noble metal catalysts. Engineering aspects were closely considered, and effort was directed toward utilizing existing equipment in the production of the new coagulant. Ferrous sulphate can be catalytically oxidized to produce a novel long-chained polymeric iron-based flocculant in an easy and affordable way in existing facilities. The results can be used for modelling the reactors and for scale-up. Ferric iron (Fe3+) was successfully applied for the dissolution of sphalerite. Sphalerite contains indium, gallium and germanium, among others, and the application can promote their recovery. Understanding the reduction of ferric to ferrous iron can be used to further develop the understanding of the dissolution mechanisms and of the oxidation of ferrous sulphate. Indium, gallium and germanium face ever-increasing demand in the electronics industry, among others, but the supply is very limited. Since most of the material is obtained through secondary production, the real production quota depends on primary material production, which also sets the pricing.
The primary production material is in most cases zinc or aluminium. Recycling of scrap material and the utilization of industrial waste containing indium, gallium and germanium is therefore a necessity with no real alternatives. As part of this study, plausible methods for the recovery of indium, gallium and germanium were studied. The results were encouraging and provided information about the precipitation of these valuables from highly acidic solutions. Indium and gallium were separated from acidic sulphuric acid solutions by precipitation with basic sulphates such as alunite, or they were precipitated as basic sulphates of their own, as galliunite and indiunite. Germanium may precipitate as a basic sulphate of mixed composition. The precipitation is rapid and the selectivity is good. When the solutions contain both indium and gallium, the results show that gallium should be separated before indium to achieve better selectivity. Germanium was separated from highly acidic sulphuric acid solutions, also containing other metals, by precipitation with tannic acid. This is a highly selective method; according to the study, other metals commonly found in the solution do not affect germanium precipitation. The reduction of ferric iron to ferrous iron, the precipitation of indium, gallium and germanium, and the dissolution of the raw materials depend strongly on temperature and pH. The effects of temperature and pH were therefore studied, which contributed to the understanding and design of the different process steps; increased temperature and reduced pH improve the reduction rate. Finally, the understanding gained in the studied areas can be employed to develop better industrial processes, not only on a large scale but increasingly also on a smaller scale. The small amounts of indium, gallium and germanium may favour smaller, more locally bound recovery.
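As an illustration of the kind of mechanism-based rate law referred to in the oxidation study above, a Langmuir-type expression consistent with the listed steps (oxygen adsorption on the carbon surface followed by electron transfer from Fe2+) could take the form

-r_{\mathrm{Fe^{2+}}} = \frac{k\,[\mathrm{Fe^{2+}}]\,K_{\mathrm{O_2}}\,p_{\mathrm{O_2}}}{1 + K_{\mathrm{O_2}}\,p_{\mathrm{O_2}}} + k'\,[\mathrm{Fe^{2+}}]\,p_{\mathrm{O_2}},

where k is the rate constant of the surface electron-transfer step, K_{O2} the adsorption equilibrium constant of oxygen on active carbon, p_{O2} the oxygen partial pressure, and the second term a parallel non-catalytic route. This is an illustrative sketch only; the abstract does not give the authors' fitted rate equations, which may differ in form.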
Abstract:
This study uses the Life Cycle Assessment (LCA) methodology to evaluate and compare the environmental impacts caused by the artisanal and the industrial manufacturing processes of "Minas cheese". This is a traditional cheese produced in the state of Minas Gerais (Brazil), and it is considered a "cultural patrimony" in the country. The high share of artisanal producers in the market justifies this research, and the analysis can help identify opportunities to improve the environmental performance of several stages of the production system. The functional unit adopted was 1 kilogram (kg) of cheese. The system boundaries considered were the production process, conservation of the product (before sale), and transport to the consumer market. The milk production process was considered similar in both cases and was therefore not included in the assessment. The data were collected through interviews with the producers, observation, and a literature review; they were ordered and processed using the SimaPro 7 LCA software. According to the impact categories analyzed, artisanal production exerted lower environmental impacts. This can be explained mainly by the fact that the industrial process includes the pasteurization stage, which uses dry wood as an energy source, as well as refrigeration.
Abstract:
In the new age of information technology, big data has grown to be a prominent phenomenon. As information technology evolves, organizations have begun to adopt big data and apply it as a tool throughout their decision-making processes. Research on big data has grown in recent years, but mainly from a technical stance, and there is a void in business-related cases. This thesis fills that gap by addressing big data challenges and failure cases. The Technology-Organization-Environment (TOE) framework was applied to carry out a literature review on trends in Business Intelligence and Knowledge Management information system failures. A review of extant literature was carried out using a collection of leading information systems journals. Academic papers and articles on big data, Business Intelligence, Decision Support Systems, and Knowledge Management systems were studied from both failure and success perspectives in order to build a model for big data failure. I then delineate the contribution of the Information Systems failure literature, as it provides the principal dynamics behind the technology-organization-environment framework. The gathered literature was categorised and a failure model was developed from the identified critical failure points. The failure constructs were further categorized, defined, and tabulated into a contextual diagram. The developed model and table are designed to act as a comprehensive starting point and as general guidance for academics, CIOs, and other system stakeholders, facilitating decision-making in the big data adoption process by measuring the effect of technological, organizational, and environmental variables on perceived benefits, dissatisfaction, and discontinued use.
Abstract:
The purpose of this thesis is to find out how an outbound logistics process can be improved by reducing unnecessary waste in a globally dispersed make-to-order (MTO) supply chain. The research problem was posed by a multinational corporation (MNC) that aims to find a solution for reducing unnecessary waste in its outbound logistics process. The focus is on customized products that are delivered via sea transportation. A theoretical framework for improving outbound logistics processes in a globally dispersed MTO supply chain was created based on business process management (BPM), Porter’s value chain theory, value stream mapping, and the current reality tree (CRT). The empirical research was conducted using a constructive approach, owing to its ability to address a practical problem and to improve existing practices. The data were collected from ten semi-structured interviews and three non-participant observations. By analysing the data and applying the theoretical framework, five types of waste were detected in the process, deriving from six root causes. A practical solution was constructed to reduce the waste in the process by combining the existing literature with ideas arising from the empirical data. The results of this thesis suggest that an MNC with a globally dispersed MTO supply chain can improve its outbound logistics process by applying activities that enhance internal and external integration, collaboration and coordination, and increase the predictability of the process. This research has practical relevance both for the case company and for other MNCs with globally dispersed MTO supply chains that aim to improve their outbound logistics processes. It contributes to BPM and CRT research by providing evidence of their applicability in a new context.
Abstract:
This study is motivated by the question of how resource-scarce innovative entrepreneurial companies seek and leverage global resources. The study takes the resource-seeking perspective a step forward and suggests that the resources enabling entrepreneurial internationalisation are largely accrued from the early stages of entrepreneurial life, that is, from innovation development. Consequently, this study seeks to explain how innovation and internationalisation processes are interrelated in entrepreneurial internationalisation. This main objective is approached through three research questions: (1) What role do inter-organisational relationships in innovation have in the entrepreneurial internationalisation process? (2) What kind of inward–outward links do inter-organisational relationships create in the resource-seeking-based entrepreneurial internationalisation process? (3) What kind of capability to collaborate forms in the interaction of inter-organisational relationship deployment? The research design is a mixed-methods design consisting of a quantitative pilot study and a qualitative multiple case study of five entrepreneurial life science companies from Finland and Austria. The findings show that innovation and internationalisation processes are tightly interwoven in the pre-internationalisation state. They also reveal that more experienced companies are able to take advantage of complex cross-border inter-organisational relationship structures better than starting companies. However, only very minor evidence was found of inward links translating into outward links in the entrepreneurial internationalisation process, despite the expectation of observing more of these links in the data. Combined intangible-tangible resource-seeking was the most preferred way to build links between inward and outward internationalisation, but also to develop the competence to collaborate. By adopting a resource-seeking instead of a market-seeking approach, this study illustrates that internationalisation extends to the early stages of innovative companies and that, in high-technology companies, potentially significant cross-border relationships start to form long before incorporation. These observations justify the firmer inclusion of pre-company history in innovative entrepreneurship studies. The study offers a conceptualisation of entrepreneurial internationalisation perceived as a process. The main theoretical contributions are in the areas of international entrepreneurship and the behavioural process studies of entrepreneurial internationalisation and resource-based internationalisation. The inclusion of the innovation-based discussion, namely the innovation process, in internationalisation process theories contributes clearly to the understanding of entrepreneurial internationalisation in the context of international entrepreneurship. Innovation development is a central act of entrepreneurial companies, and neglecting the innovation process in the study of entrepreneurial internationalisation leaves potentially influential mechanisms unexplored.
Abstract:
The purpose of this study is to find out how laser-based Directed Energy Deposition (DED) processes can benefit from different types of monitoring. DED is a type of additive manufacturing process in which parts are manufactured in layers using metallic powder or metallic wire. DED processes can be used to manufacture parts that cannot be made with conventional manufacturing processes, to add new geometries to existing parts, or to minimize the scrap material that would result from machining a part. The aim of this study is to find out why laser-based DED processes are monitored, how they are monitored, and what devices are used for monitoring. The study was conducted as a literature review. During manufacturing, the DED process is highly sensitive to disturbances such as fluctuations in laser absorption, powder feed rate, temperature, humidity, or the reflectivity of the melt pool. These disturbances can cause fluctuations in the size of the melt pool or its temperature. Variations in the size of the melt pool affect the thickness of individual layers, which has a direct impact on the final surface quality and dimensional accuracy of the parts. By collecting data on these fluctuations and adjusting the laser power in real time, the size of the melt pool and its temperature can be kept within a specified range, which leads to significant improvements in manufacturing quality. The main areas of monitoring are the powder feed rate, the temperature of the melt pool, the height of the melt pool, and the geometry of the melt pool. Monitoring the powder feed rate is important when depositing different material compositions. Monitoring the temperature of the melt pool can give information about the microstructure and mechanical properties of the part. Monitoring the height and the geometry of the melt pool is important for achieving the desired dimensional accuracy. By combining multiple monitoring devices, the number of disturbances that can be controlled increases. In addition, by combining additive manufacturing with machining, the benefits of both processes could be utilized.
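The real-time laser power adjustment described in this abstract amounts to a feedback control loop. The Python sketch below shows one minimal version of such a loop, a simple proportional controller; the sensor interface, set-point, gain, and power limits are all hypothetical placeholders, not values from the reviewed literature.

# Illustrative sketch of closed-loop melt pool control in a DED process.
# All names and numeric values are hypothetical, for orientation only.

TARGET_TEMP_C = 1900.0       # desired melt pool temperature (hypothetical)
KP = 0.05                    # proportional gain: watts per degree of error
P_MIN, P_MAX = 200.0, 800.0  # allowed laser power window in watts

def control_step(measured_temp_c: float, current_power_w: float) -> float:
    """One proportional update: nudge the laser power toward the value
    that drives the melt pool temperature back to the target."""
    error = TARGET_TEMP_C - measured_temp_c
    new_power = current_power_w + KP * error
    # Clamp to the machine's safe operating window.
    return max(P_MIN, min(P_MAX, new_power))

# Example: pool running 40 degrees cold -> power is raised slightly.
power = control_step(measured_temp_c=1860.0, current_power_w=500.0)
print(f"adjusted laser power: {power:.1f} W")  # 502.0 W

In practice such a loop would run at the sensor's sampling rate, and the monitored quantity could equally be melt pool size or height rather than temperature.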
Abstract:
After-sales business is an effective way to create profit and increase customer satisfaction in manufacturing companies. Despite this, some of the special business characteristics linked to these functions make it exceptionally challenging in its own way. This Master’s thesis examines the current state of data and inventory management in the case company, with regard to the possibilities and challenges related to the consolidation of current business operations. The research examines the process steps, procedures, data requirements, data mining practices, and data storage management of the spare part sales process, whereas the part focusing on inventory management reviews the current stock value and examines current practices and operational principles. There are two global after-sales units that supply spare parts, and the issues reviewed in this study are examined from both units’ perspectives. The analysis focuses on the operations of the unit where functions would be centralized by default if the change decisions are carried out. It was discovered that both data and inventory management have clear shortcomings, which result from a lack of internal instructions and established processes, as well as a lack of cooperation with other stakeholders involved in the product’s lifecycle. The main outcome on the data management side was a guideline for consolidating the functions, tailored to the company’s needs. Additionally, spare parts that could potentially be scrapped were listed, and a proposal for inventory management instructions was drafted. If the suggested spare part materials are scrapped, the stock value will decrease by 46 percent. A guideline that was reviewed and commented on in this thesis was chosen as the basis of the inventory management instructions.
Abstract:
The relationship between the child's cognitive development and neurological maturation has been of theoretical interest for many years. Due to difficulties such as the lack of sophisticated techniques for measuring neurological changes and a paucity of normative data, few studies exist that have attempted to correlate the two factors. Recent theory on intellectual development has proposed that neurological maturation may be a factor in the increase of short-term memory storage space. Improved technology has allowed reliable recordings of neurological maturation. In an attempt to correlate cognitive development and neurological maturation, this study tested 3- and 11-year-old children. Fine motor and gross motor short-term memory tests were used to index cognitive development. Somatosensory evoked potentials elicited by median nerve stimulation were used to measure the time required for the sensation to pass along the nerve to specific points on the somatosensory pathway. Times were recorded for N14, N20, and P22 interpeak latencies. Maturation of the central nervous system (brain and spinal cord) and the peripheral nervous system (outside the brain and spinal cord) was indicated by the recorded times. Significant developmental differences occurred between 3- and 11-year-olds in memory levels, peripheral conduction velocity, and central conduction times. Linear regression analyses showed that as age increased, memory levels increased and central conduction times decreased. Between the 11-year-old groups, there were no significant differences in central or peripheral nervous system maturation between subjects who achieved a score of 12 or more on the digit span test of the WISC-R and those who scored 7 or lower on the same test. Levels achieved on the experimental gross and fine motor short-term memory tests differed significantly within the 11-year-old group.
Abstract:
While there has been a recent shift away from isolated, institutionalized living conditions, persons with Intellectual Disabilities (ID) may still experience restricted access to choice when it comes to making decisions about the basic aspects of their lives. A tension remains between protecting individuals from harm and promoting their right to independence and personal liberties. This tension creates complex questions and ethical concerns for care providers supporting persons with ID. This study explored the ethical decision-making processes of care providers and, specifically, how care providers describe the balance between protecting supported individuals from harm and promoting their right to self-determination. Semi-structured interviews were conducted with six care providers employed by a local community agency that supports young and older adults with ID. Data were analysed using thematic analysis, and broader themes were developed following phases of open and selective coding. Results indicated that care providers described ethical decision-making processes as frequent, complex, subjective, and uncomfortable. All participants described the importance of promoting independent decision-making among the individuals they support and of assisting supported individuals in making informed decisions. Participants also reported work colleagues and supervisors as primary sources of information when resolving ethical concerns. This suggests that complex ethical decision-making processes are being taken seriously by care providers and supervising staff. The results of this study are well positioned to be applied to the development of a training program for frontline care-providing staff supporting individuals in community care settings.
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
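To make the two-sided autoregression concrete, consider the Gaussian AR(1) case (the notation here is illustrative and ours, not necessarily the paper's):

u_t = \rho\, u_{t-1} + \varepsilon_t, \qquad \varepsilon_t \overset{\mathrm{iid}}{\sim} N(0, \sigma^2).

By the Markov property, the odd-indexed observations are mutually independent conditional on the even-indexed ones (intercalary independence), and

E[\,u_t \mid u_{t-1}, u_{t+1}\,] = \frac{\rho}{1+\rho^2}\,\bigl(u_{t-1} + u_{t+1}\bigr),

so regressing u_t on (u_{t-1} + u_{t+1}) over the conditioned subsample yields a two-sided autoregression that satisfies the classical linear regression assumptions, to which standard finite-sample inference applies.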
Abstract:
With advances in information technology, economic and financial time series data are increasingly available. However, when standard time series techniques are used, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown in popularity in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help in impulse response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous VAR-based studies have found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying monetary policy transmission, and that it helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the latest economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and the propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model.
We find that a credit shock immediately increases credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have an important effect on measures of real activity, price indices, leading indicators, and financial indicators. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors. Moreover, it gives an interpretation of the factors without restricting their estimation. In the third article we study the relationship between VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using US and Canadian data from Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps to better forecast the major macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model employed in that study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters for the factor dynamics. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using the structural FAVARMA model. Within the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market.
Finally, given the structural shock identification procedure, we find economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g., changes in monetary policy strategies, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variability in the coefficients is probably very small, and we produce the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the time instability in almost 700 coefficients.
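For orientation, the common core of the FAVAR and FAVARMA frameworks discussed above can be written compactly (notation ours):

X_t = \Lambda F_t + e_t,

where X_t is the large panel of observed indicators, F_t a small vector of latent factors, and \Lambda the loading matrix. In a FAVAR, the factors (possibly augmented with observed policy variables) follow a finite-order VAR, \Phi(L) F_t = u_t; the FAVARMA class instead specifies \Phi(L) F_t = \Theta(L) u_t, the VARMA dynamics that the third article argues arise naturally when factors are extracted as linear combinations of the observables.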
Abstract:
The hepatitis C virus (HCV) affects 3% of the world population, and about 30% of chronically infected patients will develop liver fibrosis. Its genome is a positive-sense single-stranded RNA containing an open reading frame flanked by two highly conserved untranslated regions. Several factors can influence the HCV replication cycle; two of them were studied in this thesis. First, we examined the effect of the secondary and tertiary structures of the genome on HCV replication. The 5' and 3' ends of the genome contain RNA structures that regulate HCV translation and replication. The 3'UTR is a structural element that is very important for viral replication; it consists of a variable region, a poly(U/C) sequence, and a highly conserved domain called the X region. In vitro studies have shown that the 3'UTR contains several double-stranded RNA structures. However, the RNA structures as they exist in the 3'UTR in the context of the full-length genome and under biological conditions were unknown. To address this question, we developed an in situ method to localize single-stranded and double-stranded RNA regions in the 3'UTR of the HCV genome. As predicted by earlier studies, we observed that, in situ, the X region of the genome's 3'UTR presents double-stranded RNA elements. Surprisingly, when the poly(U/UC) sequence is in the context of the full-length genome, this region forms a double-stranded RNA structure with a sequence located outside the 3'UTR, suggesting a long-range RNA-RNA interaction. Some studies have shown that RNA structures present at the 5' and 3' ends of the HCV genome regulate both HCV translation and replication, suggesting an interaction between the ends of the genome that would modulate these two processes. In this context, we demonstrated the existence of a long-range RNA-RNA interaction involving domain II of the 5'UTR and the NS5B coding sequence of the HCV genome, and we showed that this interaction plays a role in viral RNA replication. In parallel, we studied the impact of an immunomodulatory molecule on HCV replication. Liver fibrosis is a major manifestation of HCV infection, and an immunomodulatory molecule called thalidomide has been shown to attenuate fibrosis in HCV-infected patients; its impact on viral replication, however, was unknown. We therefore studied the effect of this molecule on HCV replication in vitro and demonstrated that thalidomide activates viral replication by inhibiting the NF-kB signalling pathway. These results underline the importance of the NF-kB signalling pathway in the control of HCV replication, and should be taken into account when establishing a treatment against liver fibrosis.
Abstract:
This article examines the processes of professional role clarification during the integration of a specialized nurse practitioner into primary care teams in Quebec.