963 results for Link variable method
Abstract:
In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
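A minimal sketch of the core quantity the split-and-merge criterion relies on: the mutual information of the channel between region labels (input) and intensity histogram bins (output), estimated from joint counts. The synthetic image, the two-region partition, and the bin count are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mutual_information(regions, intensities, n_bins=16):
    """Estimate I(R; B) for the channel between region labels and
    intensity histogram bins from their joint co-occurrence counts."""
    bins = np.minimum((intensities * n_bins).astype(int), n_bins - 1)
    n_regions = regions.max() + 1
    joint = np.zeros((n_regions, n_bins))
    np.add.at(joint, (regions.ravel(), bins.ravel()), 1.0)
    joint /= joint.sum()                       # joint distribution p(r, b)
    p_r = joint.sum(axis=1, keepdims=True)     # marginal over regions
    p_b = joint.sum(axis=0, keepdims=True)     # marginal over bins
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (p_r * p_b), 1.0)
    return float(np.sum(joint * np.log2(ratio)))

# Toy example: a 2-region vertical split of a synthetic image in [0, 1).
rng = np.random.default_rng(0)
image = rng.random((64, 64))
image[:, 32:] *= 0.5                 # darker right half
labels = np.zeros((64, 64), dtype=int)
labels[:, 32:] = 1                   # partition along that boundary
print(f"I(R;B) = {mutual_information(labels, image):.3f} bits")
```

A split step in the spirit of the paper would greedily choose, among candidate partitions, the one that maximizes the gain in this quantity; merging would discard the merge that loses the least of it.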
Abstract:
An oscillating overvoltage has become a common phenomenon at the motor terminal in inverter-fed variable-speed drives. The problem has emerged since modern insulated gate bipolar transistors became the standard choice as the power switch component in low-voltage frequency converter drives. The overvoltage phenomenon is a consequence of the pulse shape of the inverter output voltage and impedance mismatches between the inverter, motor cable, and motor. The overvoltages are harmful to the electric motor and may cause, for instance, insulation failure in the motor. Several methods have been developed to mitigate the problem. However, most of them are based on filtering with lossy passive components, the drawbacks of which are typically their cost and size. In this doctoral dissertation, the application of a new active du/dt filtering method based on a low-loss LC circuit and active control to eliminate motor overvoltages is discussed. The main benefits of the method are the controllability of the output voltage du/dt within certain limits, considerably smaller inductances in the filter circuit resulting in a smaller physical component size, and excellent filtering performance when compared with typical traditional du/dt filtering solutions. Moreover, no additional components are required, since the active control of the filter circuit takes place in the process of the upper-level PWM modulation using the same power switches as the inverter output stage. Further, the active du/dt method will benefit from the development of semiconductor power switch modules, as new technologies and materials emerge, because the method requires additional switching in the output stage of the inverter and the generation of narrow voltage pulses. Since additional switching is required in the output stage, additional losses are generated in the inverter as a result of the application of the method. Considerations on the application of the active du/dt filtering method in electric drives are presented together with experimental data in order to verify the potential of the method.
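The overvoltage mechanism itself is a transmission-line reflection at the cable-motor interface: a steep voltage edge travelling down the cable is reflected by the motor's much higher surge impedance, and the incident and reflected waves add at the terminal. A minimal sketch of that worst-case estimate follows; the impedance and DC link values are illustrative assumptions, not figures from the dissertation.

```python
# Worst-case motor-terminal voltage for a steep pulse edge on a lossless
# cable, from the reflection coefficient at the cable-motor interface.
# All values below are illustrative assumptions.
Z_cable = 80.0      # characteristic impedance of the motor cable (ohm)
Z_motor = 2000.0    # high-frequency surge impedance of the motor (ohm)
V_dc = 540.0        # DC link voltage (V)

gamma = (Z_motor - Z_cable) / (Z_motor + Z_cable)   # reflection coefficient
V_peak = (1.0 + gamma) * V_dc   # incident + reflected wave at the terminal
print(f"gamma = {gamma:.2f}, worst-case terminal voltage ~ {V_peak:.0f} V")
```

With these assumed values the edge nearly doubles at the terminal, which is why slowing du/dt, so that the edge's rise time exceeds the cable's round-trip time, suppresses the overshoot.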
Abstract:
This thesis considers modeling and analysis of noise and interconnects in on-chip communication. Besides transistor count and speed, the capabilities of a modern design are often limited by on-chip communication links. These links typically consist of multiple interconnects that run parallel to each other for long distances between functional or memory blocks. Due to the scaling of technology, the interconnects have considerable electrical parasitics that affect their performance, power dissipation and signal integrity. Furthermore, because of electromagnetic coupling, the interconnects in the link need to be considered as an interacting group instead of as isolated signal paths. There is a need for accurate and computationally efficient models in the early stages of the chip design process to assess or optimize issues affecting these interconnects. For this purpose, a set of analytical models is developed for on-chip data links in this thesis. First, a model is proposed for crosstalk and intersymbol interference. The model takes into account the effects of inductance, initial states and bit sequences. Intersymbol interference is shown to affect crosstalk voltage and propagation delay depending on bus throughput and the amount of inductance. Next, a model is proposed for the switching current of a coupled bus. The model is combined with an existing model to evaluate power supply noise. The model is then applied to reduce both functional crosstalk and power supply noise caused by a bus as a trade-off with time. The proposed reduction method is shown to be effective in reducing long-range crosstalk noise. The effects of process variation on encoded signaling are then modeled. In encoded signaling, the input signals to a bus are encoded using additional signaling circuitry. The proposed model includes variation in both the signaling circuitry and in the wires to calculate the total delay variation of a bus. The model is applied to study level-encoded dual-rail and 1-of-4 signaling. In addition to regular voltage-mode and encoded voltage-mode signaling, current-mode signaling is a promising technique for global communication. A model for energy dissipation in RLC current-mode signaling is proposed in the thesis. The energy is derived separately for the driver, wire and receiver termination.
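As a hedged illustration of the kind of first-order estimate such analytical models refine, the sketch below computes a classic Elmore-style delay for a distributed RC wire whose coupling capacitance is scaled by a Miller factor depending on neighbor switching activity. The wire parasitics, the 0.38RC coefficient, and the Miller-factor convention are textbook assumptions, not the thesis' RLC models.

```python
def coupled_wire_delay(R_w, C_g, C_c, miller=1.0):
    """First-order 50% delay of a distributed RC wire with coupling.

    R_w: total wire resistance (ohm); C_g: capacitance to ground (F);
    C_c: coupling capacitance to one neighbor (F). miller = 0, 1, or 2
    approximates a neighbor switching in phase, staying quiet, or
    switching in opposite phase, respectively.
    """
    C_eff = C_g + miller * C_c       # switching-dependent capacitance
    return 0.38 * R_w * C_eff        # 0.38RC: distributed-line delay

# Illustrative 1 mm global wire (assumed parasitics).
R_w, C_g, C_c = 200.0, 100e-15, 80e-15
for m, case in [(0.0, "in-phase"), (1.0, "quiet"), (2.0, "opposite")]:
    d = coupled_wire_delay(R_w, C_g, C_c, m)
    print(f"{case:9s} neighbor: {d * 1e12:.1f} ps")
```

The spread between the in-phase and opposite-phase cases is the functional crosstalk effect on delay; the thesis' models add inductance, initial states, and bit-sequence effects on top of this picture.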
Abstract:
The electrocardiography (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of the ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs regarding different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination and QT intervals were determined in triplicate from the bipolar limb II lead and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the different formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, the QTcV was considered the most appropriate for the correction of the QT interval in dogs.
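The three correction styles can be written directly as code. A minimal sketch follows; the linear formula is the one quoted in the abstract, while the quadratic and cubic forms are assumed here to be the familiar Bazett square-root and Fridericia cube-root corrections (the study's exact regressions are not given in the abstract).

```python
import math

def qtc_bazett(qt_s, rr_s):
    """Assumed 'quadratic' correction: QTc = QT / RR**(1/2)."""
    return qt_s / math.sqrt(rr_s)

def qtc_fridericia(qt_s, rr_s):
    """Assumed 'cubic' correction: QTc = QT / RR**(1/3)."""
    return qt_s / rr_s ** (1.0 / 3.0)

def qtc_van_de_water(qt_s, rr_s):
    """Linear correction reported most consistent:
    QTcV = QT + 0.087 * (1 - RR), with QT and RR in seconds."""
    return qt_s + 0.087 * (1.0 - rr_s)

# Example: QT = 0.21 s at HR = 120 bpm, so RR = 60/120 = 0.5 s.
qt, rr = 0.21, 60.0 / 120.0
for name, f in [("Bazett", qtc_bazett), ("Fridericia", qtc_fridericia),
                ("Van de Water", qtc_van_de_water)]:
    print(f"{name:12s}: QTc = {f(qt, rr):.3f} s")
```

For this example the three corrections disagree by roughly 40 ms, which is the kind of between-formula spread the study quantifies before recommending the linear one.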
Abstract:
The Mathematica system (version 4.0) is employed in the solution of nonlinear diffusion and convection-diffusion problems, formulated as transient one-dimensional partial differential equations with potential-dependent equation coefficients. The Generalized Integral Transform Technique (GITT) is first implemented for the hybrid numerical-analytical solution of such classes of problems, through the symbolic integral transformation and elimination of the space variable, followed by the utilization of the built-in Mathematica function NDSolve for handling the resulting transformed ODE system. This approach offers an error-controlled final numerical solution, through the simultaneous control of local errors in this reliable ODE solver and of the proposed eigenfunction expansion truncation order. For co-validation purposes, the same built-in function NDSolve is employed in the direct solution of these partial differential equations, as made possible by the algorithms implemented in Mathematica (versions 3.0 and up), based on the application of the method of lines. Various numerical experiments are performed and the relative merits of each approach are critically pointed out.
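A minimal method-of-lines sketch of the co-validation route, in Python rather than Mathematica: a nonlinear diffusion equation with a potential-dependent coefficient is discretized in space and handed to an error-controlled ODE solver, mirroring what NDSolve does internally. The equation form, boundary conditions, and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# u_t = d/dx( k(u) u_x ) on x in [0,1], u(0,t)=1, u(1,t)=0, u(x,0)=0,
# with an assumed potential-dependent diffusivity k(u) = 1 + b*u.
N, b = 101, 1.0
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def rhs(t, u):
    k = 1.0 + b * u
    k_half = 0.5 * (k[:-1] + k[1:])        # diffusivity at cell faces
    flux = k_half * np.diff(u) / dx
    dudt = np.zeros_like(u)
    dudt[1:-1] = np.diff(flux) / dx        # conservative second difference
    return dudt                            # boundaries held fixed (dudt=0)

u0 = np.zeros(N)
u0[0] = 1.0                                # Dirichlet boundary values
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="LSODA",
                rtol=1e-8, atol=1e-10, t_eval=[0.01, 0.05, 0.1])
print(sol.y[N // 2, :])                    # midpoint potential history
```

The GITT route replaces the spatial grid with an eigenfunction expansion, so the error is controlled jointly by the solver tolerances (as above) and the expansion truncation order.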
Abstract:
This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated. The nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables. It is a fairly recent addition to the available du/dt reduction methods. This thesis improves on the existing control method for the filter, and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. The ADUDT uses narrow voltage pulses, with durations on the order of a microsecond, from an IGBT (insulated gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has increased slope transition times at the rising and falling edges, with the possibility of no overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable. Lower du/dt values result in a reduction in the overvoltage effects on the motor terminals. Compared with traditional output filtering methods for this task, active du/dt filtering provides lower inductance values and a smaller physical size of the filter itself. The filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation. This deviation introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead-time-induced zero-current clamping (ZCC) effect in the pulse pattern. It gives more flexibility to the pattern structure, which could help in the timing deviation compensation design. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead time values between the IGBT control pulses. Moreover, the various switching timing distortions, present in real-world electronics when operating on a microsecond timescale, bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, thus increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude. IGBT nonlinearities were observed to cause a smaller overshoot, on the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for phase leg blanking times, giving flexibility to the pulse pattern structure and dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach of using a fixed delay compensation value was tried in the test setup measurements.
The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation, the method should ultimately enable an output voltage performance and a du/dt reduction that are free from residual overshoot effects. The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table inside the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase leg output voltage), and as such is not able to remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned in the control pattern, the precalculated iteration method could prove simpler and equally good (or even better) compared with the mathematical solution with a separate timing compensation module. One of the key findings in this thesis is the conclusion that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations. The usefulness of the correctly calculated pattern is reduced by the voltage edge timing errors. The doctoral thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed. The basic operation principle and design of the filter have been studied previously. The effect of load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except for solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with the dead time handling capability and a case study of the test setup timing deviations are the main contributions of this doctoral thesis. The simulation and experimental setup results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented in the work. Two different ADUDT filter designs are included, with ferrite-core and air-core inductors. Other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a 43 A load current motor drive system and was able to bring the filter output peak voltage from 980 V (with the previous control principle) down to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111 W–126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter.
With air-core inductors, the active du/dt filter weighed 2.4 kg, 17% of the 14 kg of the passive du/dt method filter. Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V with the all-passive du/dt filter, and 700 V with active du/dt filtering applying the new control principle.
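A hedged sketch of the physical idea behind ADUDT (not the thesis' control algorithm): for an ideal lossless series LC filter, energizing the input for one sixth of the resonance period, holding it at zero for another sixth, and then reapplying it lands the output exactly at the DC link voltage with zero slope, i.e. no overshoot, while stretching the edge to about T/3. Component values below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Ideal series LC driven by the inverter phase leg (lossless sketch).
L, C, V_dc = 3.3e-6, 100e-9, 540.0          # assumed filter values
w0 = 1.0 / np.sqrt(L * C)
T = 2.0 * np.pi / w0                        # filter resonance period

def u_in(t):
    """Charge pulse pattern: V_dc for T/6, zero for T/6, then V_dc held.
    For a lossless LC this lands the output at V_dc with zero slope."""
    if t < T / 6 or t >= T / 3:
        return V_dc
    return 0.0

def rhs(t, y):
    v_c, i_l = y                            # capacitor voltage, inductor current
    return [i_l / C, (u_in(t) - v_c) / L]

sol = solve_ivp(rhs, (0.0, 3 * T), [0.0, 0.0], max_step=T / 1000)
print(f"T = {T * 1e6:.2f} us, peak output = {sol.y[0].max():.1f} V "
      f"(ideal: {V_dc:.0f} V, no overshoot)")
```

Dead times, ZCC, and switching-edge timing errors skew exactly these T/6 instants, which is why the uncompensated filter overshoots; the thesis' contribution is solving the pulse pattern with those nonlinearities included.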
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as from other fields of science including linguistics and the behavioral sciences, is also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication: natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model, one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. It is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented and the main guidelines for building linguistic models for real-life decision support, which are the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of representation of the meaning of linguistic terms, computations with these representations and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence of the linguistic and mathematical levels of the models and on the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support, particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background and the necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice and the necessity of obtaining outputs that are easy to interpret (and available in a very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies: first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes, particularly the integration of peer review into the evaluation of R&D outputs.
In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities: psychological diagnostics is considered and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications in which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide a closer insight into the issues of the practical applications considered in the second part of the thesis.
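As a hedged illustration of the AHP machinery the case studies build on: the sketch below derives priorities from a pairwise comparison matrix via the principal eigenvector and computes Saaty's consistency ratio. This is standard textbook AHP with made-up judgments, not the thesis' weak consistency condition or its fuzzified variant.

```python
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector priorities of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

def consistency_ratio(A, lam):
    """Saaty's CR = CI / RI, with tabulated random indices for n = 3..5."""
    n = A.shape[0]
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]
    return (lam - n) / (n - 1) / RI

# Preference intensities among 3 alternatives (illustrative judgments;
# A[i, j] is how strongly alternative i is preferred over j).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
w, lam = ahp_weights(A)
print("weights:", np.round(w, 3), " CR =", round(consistency_ratio(A, lam), 3))
```

For large matrices of preference intensities, full pairwise comparison becomes impractical, which motivates the weaker consistency condition and the adaptation the thesis proposes.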
Abstract:
Cloning of the T-cell receptor genes is a critical step when generating T-cell receptor transgenic mice. Because T-cell receptor molecules are clonotypical, isolation of their genes requires reverse transcriptase-assisted PCR using primers specific for each different Vα or Vβ gene, or the screening of cDNA libraries generated from RNA obtained from each individual T-cell clone. Although feasible, these approaches are laborious and costly. The aim of the present study was to test the application of the non-palindromic adaptor-PCR method as an alternative to isolate the genes encoding the T-cell receptor of an antigen-specific T-cell hybridoma. For this purpose, we established hybridomas specific for trans-sialidase, an immunodominant Trypanosoma cruzi antigen. These T-cell hybridomas were characterized with regard to their ability to secrete interferon-gamma, IL-4, and IL-10 after stimulation with the antigen. A CD3+, CD4+, CD8- interferon-gamma-producing hybridoma was selected for the identification of the variable regions of the T-cell receptor by the non-palindromic adaptor-PCR method. Using this methodology, we were able to rapidly and efficiently determine the variable regions of both T-cell receptor chains. The results obtained by the non-palindromic adaptor-PCR method were confirmed by the isolation and sequencing of the complete cDNA genes and by recognition with a specific antibody against the T-cell receptor variable β chain. We conclude that the non-palindromic adaptor-PCR method can be a valuable tool for the identification of the T-cell receptor transcripts of T-cell hybridomas and may facilitate the generation of T-cell receptor transgenic mice.
Abstract:
Intercellular adhesion molecule-1 (ICAM-1) is an important factor in the progression of inflammatory responses in vivo. To develop a new anti-inflammatory drug to block the biological activity of ICAM-1, we produced a monoclonal antibody (Ka = 4.19×10⁻⁸ M) against human ICAM-1. The anti-ICAM-1 single-chain variable antibody fragment (scFv) was expressed at a high level as inclusion bodies in Escherichia coli. We refolded the scFv (Ka = 2.35×10⁻⁷ M) by ion-exchange chromatography, dialysis, and dilution. The results showed that column chromatography refolding by high-performance Q Sepharose had remarkable advantages over the conventional dilution and dialysis methods. Furthermore, this method gave a higher anti-ICAM-1 scFv yield of about 60 mg/L. The purity of the final product was greater than 90%, as shown by denaturing gel electrophoresis. Enzyme-linked immunosorbent assay, cell culture, and animal experiments were used to assess the immunological properties and biological activities of the renatured scFv.
Abstract:
The food industry has been developing products to meet the demands of an increasing number of consumers who are concerned with their health and who seek food products that satisfy their needs. Therefore, the development of processed foods that contain functional components has become important for this industry. Microencapsulation can be used to reduce the effects of processing on functional components and to preserve their bioactivity. The present study investigated the production of lipid microparticles containing phytosterols by spray chilling. The matrices comprised mixtures of stearic acid and hydrogenated vegetable fat, and the ratio of the matrix components to phytosterols was defined by an experimental design using the mean diameter of the microparticles as the response variable. The melting point of the matrices ranged from 44.5 to 53.4 °C. The process yield was melting-point dependent; the particles with a lower melting point had greater losses than those with a higher melting point. The microparticles' mean diameters ranged from 13.8 to 32.2 µm and were influenced by the amounts of phytosterols and stearic acid. The microparticles exhibited a spherical shape and the polydispersity typical of atomized products. From a technological and practical (handling, yield, and agglomeration) point of view, lipid microparticles with a higher melting point proved promising as phytosterol carriers.
Abstract:
Preventive maintenance of frequency converters has been based on the pre-planned replacement of wearing or ageing components. Exchange intervals follow component lifetime expectations, which are based on empirical knowledge or on schedules defined by the manufacturer. However, the lifetime of a component can vary significantly, because drives are used in very different operating environments and applications. The main objective of the research was to provide information on methods, i.e., how an inverter's operating condition can be measured reliably under field conditions. At first, the research focused on critical components such as current transducers, IGBTs and the DC link capacitor bank, because their aging has already been identified. Of these, the DC link capacitor measurement method was selected for closer examination. With this method, the total capacitance and its total series resistance can be measured. The suitability of the measuring procedure was estimated on the basis of practical measurements. The research used a so-called triangulation method, including a literature review, simulations and practical measurements. Based on the results, the new measurement method seems suitable, with some reservations, for practical measurements. However, the measuring method should be further developed in order to improve its reliability.
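A hedged sketch of one common way such a condition measurement can work (illustrative only; the thesis' actual procedure is not detailed in the abstract): when a known load current steps onto the DC link, the instantaneous voltage drop reveals the series resistance, and the slope of the subsequent linear sag reveals the total capacitance. All values below are assumptions.

```python
import numpy as np

fs = 1e6                          # sample rate (Hz), assumed
t = np.arange(0, 5e-3, 1 / fs)
C_true, ESR_true, I_load = 2.2e-3, 50e-3, 10.0   # assumed bank values
V0 = 540.0                        # link voltage just before the step (V)

# Synthetic measurement record starting right after the load step:
# an instantaneous ESR drop, then a linear discharge at I/C volts/s.
v = V0 - ESR_true * I_load - (I_load / C_true) * t
v += np.random.default_rng(1).normal(0, 0.01, t.size)  # sensor noise

esr_est = (V0 - v[0]) / I_load    # series resistance from the initial step
slope = np.polyfit(t, v, 1)[0]    # dv/dt of the sag; equals -I/C
c_est = -I_load / slope
print(f"ESR ~ {esr_est * 1e3:.1f} mohm, C ~ {c_est * 1e3:.2f} mF")
```

Tracking these two estimates over the drive's service life is what turns the measurement into a condition indicator: rising series resistance and falling capacitance are the classic electrolytic-capacitor aging signatures.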
Abstract:
The heterogeneity of responses within a group of patients subjected to the same therapeutic regimen must be reduced during a treatment or a clinical trial. Two approaches are commonly used to achieve this goal. One essentially aims to build active compliance; this approach is interactive and based on the "physician-patient", "pharmacist-patient" or "veterinarian-breeder" exchange. The other, more passive and based on the characteristics of the drug, aims to control this irregularity upstream. The main objective of this thesis was to develop new strategies for evaluating and controlling the impact of irregular drug intake on the therapeutic outcome. More specifically, the first part of this research consisted in proposing mathematical algorithms to efficiently estimate drug effect in a context of interindividual variability of pharmacokinetic (PK) profiles. This new method is based on the concomitant use of in vitro and in vivo data. It quantifies the efficiency (i.e., efficacy plus the fluctuation of in vivo concentrations) of each PK profile by incorporating into current models of in vivo efficacy estimation the function that relates the in vitro drug concentration to the pharmacodynamic effect. Compared with traditional approaches, this combination of functions explicitly captures the fluctuation of in vivo plasma concentrations due to the dynamics of drug intake. Moreover, it raises, through a few examples, questions about the relevance of using traditional static efficacy indices (Cmax, AUC, etc.) as tools for controlling antibiotic resistance. The second part of this doctoral work was to estimate the best blood sampling times in a collective therapy initiated in pigs. To this end, we developed a model of collective feeding behavior, which was then coupled to a classical PK model. Using this combined model, it was possible to generate a typical PK profile for each particular feeding strategy. The data thus generated were used to estimate appropriate sampling times in order to reduce the uncertainties due to irregular drug intake in the estimation of PK and PD parameters. Among the algorithms proposed for this purpose, the median method seems to give sampling times that are convenient both for the staff and for the animals. Finally, the last part of the research project consisted in proposing a rational approach to characterizing and classifying drugs according to their ability to tolerate sporadic missed doses. Methodologically, through a global sensitivity analysis, we quantified the correlation between the PK/PD parameters of a drug and the effect of irregular drug intake. This approach evaluated the influence of all PK/PD parameters concomitantly while taking into account the complex relationships that may exist between these parameters. This study was performed for calcium channel blockers, which are antihypertensives acting according to an indirect effect model.
Taking into account the correlation values thus computed, we estimated and proposed a comparative index specific to each drug. This index can characterize and rank drugs acting through the same pharmacodynamic mechanism in terms of their forgiveness of missed doses. It was applied to four calcium channel blockers. The results obtained were in agreement with experimental data, reflecting the relevance and robustness of this new approach. The strategies developed in this doctoral project are essentially based on the analysis of the complex relationships between drug intake history, pharmacokinetics and pharmacodynamics. From this analysis, they can evaluate and control the impact of irregular drug intake with acceptable accuracy. More generally, the algorithms underlying these approaches will undoubtedly constitute efficient tools in the monitoring and treatment of patients. Furthermore, they will help control the adverse effects of non-compliance with treatment through the development of drugs that are forgiving of missed doses.
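A hedged sketch of the kind of simulation underlying these strategies: a one-compartment oral PK model dosed once daily, with sporadic missed doses, showing how intake irregularity propagates into the concentration profile. The model form, parameters, and the 20% miss probability are illustrative assumptions, not the thesis' models.

```python
import numpy as np

def concentration(t, dose_times, D=100.0, ka=1.0, ke=0.1, V=50.0):
    """Superposed one-compartment oral doses (first-order absorption).
    D: dose (mg), ka/ke: absorption/elimination rates (1/h), V: volume (L)."""
    c = np.zeros_like(t)
    for td in dose_times:
        dt = t - td
        m = dt >= 0
        c[m] += (D * ka / (V * (ka - ke))) * (
            np.exp(-ke * dt[m]) - np.exp(-ka * dt[m]))
    return c

rng = np.random.default_rng(42)
tau, n_doses = 24.0, 14                        # once daily for two weeks
scheduled = tau * np.arange(n_doses)
taken = scheduled[rng.random(n_doses) > 0.2]   # ~20% of doses missed
t = np.linspace(0.0, tau * n_doses, 2000)
print(f"trough, full compliance: {concentration(t, scheduled)[-1]:.2f} mg/L")
print(f"trough, missed doses:    {concentration(t, taken)[-1]:.2f} mg/L")
```

A drug's "forgiveness" in the sense used above is essentially how little the pharmacodynamic outcome degrades between these two profiles; the global sensitivity analysis asks which PK/PD parameters drive that gap.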
Abstract:
The regime-switching GARCH model is the foundation of this thesis. This model offers rich dynamics for modeling financial data by combining a GARCH structure with time-varying parameters. Unfortunately, this flexibility gives rise to a path dependence problem, which has prevented maximum likelihood estimation of the model since its introduction almost 20 years ago. The first half of this thesis provides a solution to this problem by developing two methodologies for computing the maximum likelihood estimator of the regime-switching GARCH model. The first proposed estimation technique is based on the Monte Carlo EM algorithm and on importance sampling, while the second consists of a generalization of the model approximations introduced over the last two decades, known as collapsing procedures. This generalization establishes a methodological link between these approximations and the particle filter. The discovery of this relationship is important because it justifies the validity of the collapsing approach for estimating the regime-switching GARCH model. The second half of this thesis draws its motivation from the financial crisis of the late 2000s, during which poor risk assessment within several financial companies led to numerous institutional failures. Using a wide range of 78 econometric models, including several generalizations of the regime-switching GARCH model, it is shown that model risk plays a very important role in the assessment and management of long-term investment risk in the context of segregated funds. Although the financial literature has devoted much research to advancing econometric models in order to improve the pricing and hedging of financial products, approaches for measuring the effectiveness of a dynamic hedging strategy have evolved little. This thesis offers a methodological contribution in this area by proposing a regression-based statistical framework to better measure this effectiveness.
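A hedged sketch of the path dependence problem itself: in a two-regime GARCH(1,1), the conditional variance recursion depends on the entire unobserved regime path, so the exact likelihood is a sum over 2^T paths. The brute-force enumeration below is feasible only for a toy series; the parameters are illustrative, and this is not the thesis' estimator (collapsing procedures and the particle filter are precisely ways to avoid this enumeration).

```python
import numpy as np
from itertools import product

# Two-regime GARCH(1,1): h_t = omega[s_t] + alpha[s_t] * r_{t-1}**2
#                              + beta[s_t] * h_{t-1},
# so h_t depends on the whole regime path s_1..s_t (path dependence).
omega = np.array([0.05, 0.20])
alpha = np.array([0.05, 0.15])
beta = np.array([0.90, 0.70])
P = np.array([[0.95, 0.05],          # regime transition matrix
              [0.10, 0.90]])
pi0 = np.array([0.5, 0.5])           # initial regime distribution

rng = np.random.default_rng(7)
r = rng.normal(0, 1, 10)             # toy return series, T = 10

def norm_pdf(x, h):
    return np.exp(-0.5 * x * x / h) / np.sqrt(2 * np.pi * h)

T, h0 = len(r), 1.0                  # variance initialized at h0 (assumed)
L_exact = 0.0
for path in product((0, 1), repeat=T):   # all 2**T regime paths
    p_path, h, lik = pi0[path[0]], h0, 1.0
    for t, s in enumerate(path):
        if t > 0:
            p_path *= P[path[t - 1], s]
            h = omega[s] + alpha[s] * r[t - 1] ** 2 + beta[s] * h
        lik *= norm_pdf(r[t], h)
    L_exact += p_path * lik
print(f"exact likelihood over {2 ** T} paths: {L_exact:.3e}")
```

A collapsing procedure replaces the exponentially many path-specific variances at each step with one or two representative variances per current regime, which is what links it to the particle filter's finite set of weighted trajectories.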
Abstract:
Doctoral essay presented to the Faculté des arts et des sciences in partial fulfillment of the requirements for the degree of Doctorate (D.Psy) in psychology, clinical psychology option.
Abstract:
The goal of our research is to answer the following question: What are the sources of influence on the employment practices established by MNCs originating from European countries in their Quebec subsidiaries? Since MNCs are our object of study, we first surveyed their main characteristics. It should be noted that MNCs have a different profile from that of companies that are not multinational. Since Quebec is where our research took place, we also described the socioeconomic characteristics of the Quebec market. We found that the Quebec market differs from the rest of Canada in its hybridity, which results from a mix of liberal and coordinated characteristics. Since the MNCs studied are of European origin, we also described the characteristics of European countries with coordinated and liberal economies. It should be noted that coordinated-economy and liberal-economy countries have different, even opposite, characteristics. Second, we surveyed the studies that have attempted to answer our research question. The literature identifies four sources of influence on the employment practices that MNCs establish in their foreign subsidiaries: the host country, the country of origin, hybrid sources of influence, and global sources of influence. Sources of influence from the host country shape the employment practices of foreign subsidiaries by emphasizing isomorphism, the calculative and collaborative principles, and the ability of subsidiaries to modify the markets in which they operate. Sources of influence from the country of origin shape employment practices by emphasizing cultural isomorphism, the country-of-origin effect, and the country-of-management effect. Hybrid sources of influence combine factors from the host country, the country of origin, and the global market to determine the employment practices of foreign subsidiaries. Finally, global sources of influence emphasize the pressures of integration into the world market to explain the convergence of the employment practices of foreign subsidiaries toward a universal Anglo-Saxon model. To answer our research question, we identified the coordination levels of the countries of origin as the independent variable, and the coordination levels of employment practices as the dependent variable. Seven hypotheses, with their respective indicators, addressed the relationships between our independent and dependent variables. We prepared a research questionnaire and interviewed HR staff members of ten European MNCs with at least one subsidiary in Quebec. The subsidiaries in our sample belong to MNCs originating from various European countries, from both liberal and coordinated markets. We described in detail the characteristics of each of these MNCs and their Quebec subsidiaries. We identified explanatory factors (Hall and Gingerich's coordination index, industry sector, subsidiary size, and degree of globalization of the MNCs) that could also have played a role in determining the nature of the subsidiaries' employment practices.
In terms of results, we found a link between the type of market of the country of origin and the degree of coordination of employment practices only for compensation practices, thereby confirming our first hypothesis. Employment stability, training, and labor relations practices are linked to the industry sector, that is, goods production versus services. Thus, subsidiaries in the goods production sector show more coordination in these three practices than subsidiaries in the service sector. Finally, career development and information-sharing and consultation practices are coordinated in nature in all subsidiaries, but no explanatory factor accounts for this result. Given that the Quebec host market is common to all the subsidiaries, Quebec as a host province could explain the high degree of coordination of these two practices. Besides the host market, the multinational character of the MNCs to which these subsidiaries belong could also explain similar results in employment practices. Our research has strengths and weaknesses. Regarding its strengths, our research method allowed us to obtain first-hand data, since we directly questioned the people concerned with the employment practices. This lends a certain validity to our research. Regarding its weaknesses, the limited size of our sample does not allow us to generalize the results; further research would be needed to improve reliability. Moreover, the country of origin of some subsidiaries remains ambiguous, since they have changed owners several times; others have at least two owners from different countries.