968 results for Four-leg voltage sources
Abstract:
Voltage-gated potassium channels form tetramers in which each subunit contains six transmembrane segments (S1 to S6). The pore, formed by the S5-S6 segments of each subunit, is surrounded by four domains responsible for sensitivity to the membrane potential, the voltage sensors (VS; S1-S4). Upon membrane depolarization, the movement of the charged residues located in the VS produces a movement of charge that is detectable by electrophysiology, the gating current. Activation of the VS leads to pore opening, which corresponds to a conformational change at the C-terminal end of the S6 segment. To elucidate the principles underlying the electromechanical coupling between these two domains, we studied two regions presumed to be responsible for coupling in Shaker-type K+ channels: the carboxy-terminal region of the S6 segment and the peptide linker connecting the transmembrane segments S4 and S5 (S4-5L). Using cut-open voltage clamp fluorometry (COVCF), we determined that the inter-subunit RELY interaction, formed by amino acids located on the S4-5L linker and the S6 segment of two neighbouring subunits, is involved in the development of the slow component observed when the gating charges return to their resting state, the OFF-gating. We observed that introducing mutations into the RELY region modulates the strength of these molecular interactions and eliminates the asymmetry observed in wild-type gating currents. Moreover, we show that this inter-subunit coupling is responsible for stabilizing the pore in the open state. We also identified an intra-subunit interaction between residue I384, located on the S4-5L linker, and F484, on the S6 segment of the same subunit. Destabilizing this hydrophobic interaction completely uncouples the movement of the voltage sensors from pore opening. Without this interaction, less energy is required to activate the VS because the mechanical load applied by the pore is absent. In addition, abolishing the electromechanical coupling also eliminates the mode shift, i.e. the displacement of the voltage dependence of charge transfer (QV) toward hyperpolarizing potentials. This indicates that the mechanical load imposed on the VS by the pore causes the mode shift by modulating the intrinsic conformation of the VS through an allosteric process.
Abstract:
The full version of this thesis is available only for individual consultation at the Music Library of the Université de Montréal (http://www.bib.umontreal.ca/MU).
Abstract:
In this article, based on a talk given at the Leg@l.IT conference (www.legalit.ca), the author offers a quick overview of the features provided by the electronic filing systems of the Federal Court and the Tax Court of Canada in order to identify the advantages and drawbacks of each of the proposed technologies. This exercise is part of a broader reflection on the consequences of the gradual migration of certain jurisdictions toward electronic filing. While this attempt to modernize the judicial process is meant to be beneficial, a technological change of such magnitude is not without risks or without impact on the customs and practices of the judicial system. The author thus questions the practice adopted by some courts of developing, in silos, solutions for computerizing the management of court records. The lack of system compatibility and the retreat toward proprietary models are causes for concern. Moreover, by entrusting the development of these systems to firms that retain ownership of the source code, the courts contribute to a certain privatization of the process, making the networking of the judicial system all the more difficult. Yet, insofar as the systems of different courts will be called upon to communicate and exchange data, the adoption of compatible and open technological solutions is in order. Another problem lies in the apparent inability of the legislator to keep pace with the virtualization of the judicial process. Technological change sometimes imposes a conceptual change that is difficult to reconcile with the applicable legislation. This observation points to the need for deeper reflection on whether the law should be adapted to the technology, or the technology to the law, in order to ensure a coherent and effective coexistence of these two worlds.
Abstract:
The goal of our research is to answer the following question: What are the sources of influence on the employment practices established by MNCs originating from European countries in their Quebec subsidiaries? Since MNCs are our object of study, we first reviewed their main characteristics. It should be noted that MNCs have a profile that differs from that of companies that are not multinational. Since Quebec is where our research took place, we also described the socioeconomic characteristics of the Quebec market. We found that the Quebec market differs from the rest of Canada in its hybridity, which results from a mixture of liberal and coordinated characteristics. Since the MNCs studied are of European origin, we also described the characteristics of European countries with coordinated and liberal market economies. It should be noted that coordinated-market and liberal-market countries have different, even opposite, characteristics. Second, we reviewed the studies that have attempted to answer our research question. The literature identifies four sources of influence on the employment practices that MNCs establish in their foreign subsidiaries: the host country, the country of origin, hybrid sources of influence, and global sources of influence. Host-country sources of influence shape the employment practices of foreign subsidiaries through isomorphism, calculative and collaborative principles, and the ability of subsidiaries to modify the markets in which they operate. Country-of-origin sources of influence shape employment practices through cultural isomorphism, the country-of-origin effect, and the country-of-management effect. Hybrid sources of influence combine factors from the host country, the country of origin, and the global market to determine the employment practices of foreign subsidiaries. Finally, global sources of influence emphasize the pressures of integration into the world market to explain the convergence of foreign subsidiaries' employment practices toward a universal Anglo-Saxon model. To answer our research question, we identified the level of coordination of the country of origin as the independent variable and the level of coordination of the employment practices as the dependent variable. Seven hypotheses, with their respective indicators, captured the relationships between our independent and dependent variables. We prepared a research questionnaire and interviewed HR staff members of ten European MNCs with at least one subsidiary in Quebec. The subsidiaries in our sample belong to MNCs originating from various European countries, with both liberal and coordinated markets. We described in detail the characteristics of each of these MNCs and of their Quebec subsidiaries. We identified explanatory factors (the Hall and Gingerich coordination index, sector of activity, subsidiary size, and degree of globalization of the MNCs) that could also have played a role in determining the nature of the subsidiaries' employment practices.
In terms of results, we found a link between the type of market of the country of origin and the degree of coordination of employment practices only for compensation practices, thus confirming our first hypothesis. Practices related to employment stability, training, and labour relations are linked to the sector of activity, that is, goods production versus services: subsidiaries in the goods-producing sector show more coordination in these three practices than subsidiaries in the service sector. Finally, career development and information-sharing and consultation practices are coordinated in nature in all subsidiaries, but no explanatory factor accounts for this result. Given that the Quebec host market is common to all subsidiaries, Quebec as the host province could explain the high degree of coordination of these two practices. Besides the host market, the multinational character of the MNCs to which these subsidiaries belong could also explain such similar results in employment practices. Our research has strengths and weaknesses. Regarding strengths, our research method allowed us to obtain first-hand data, since we directly questioned the people concerned with the employment practices, which lends a certain validity to our research. Regarding weaknesses, the limited size of our sample does not allow us to generalize the results; further research would be needed to improve reliability. In addition, the country of origin of some subsidiaries remains ambiguous, as they have changed owners several times, and others have at least two owners originating from different countries.
Abstract:
Low phosphorus (P) in acid sandy soils of the West African Sudano-Sahelian zone is a major limitation to crop growth. To compare treatment effects on total dry matter (TDM) of crops and plant-available P (P-Bray and isotopically exchangeable P), field experiments were carried out for 2 years at four sites where annual rainfall ranged from 560 to 850 mm and topsoil pH varied between 4.2 and 5.6. Main treatments were: (i) crop residue (CR) mulch at 500 and 2000 kg ha^-1, (ii) eight different rates and sources of P and (iii) cereal/legume rotations including millet (Pennisetum glaucum L.), sorghum [Sorghum bicolor (L.) Moench], cowpea (Vigna unguiculata Walp.) and groundnut (Arachis hypogaea L.). For the two Sahelian sites with large CR-induced differences in TDM, mulching did not significantly modify the soils' buffering capacity for phosphate ions but led to large increases in the intensity factor (C_p) and in the quantity of directly available soil P (E_1min). In the wetter Sudanian zone, the lack of CR mulching effects on TDM mirrored a decline of E_1min with CR. Broadcast application of soluble single superphosphate (SSP) at 13 kg P ha^-1 led to large increases in C_p and E_1min at all sites, which translated into respective TDM increases. The high agronomic efficiency of SSP placement (4 kg P ha^-1) across sites could be explained by consistent increases in the quantity factor, which confirms the power of the isotopic exchange method in explaining management effects on crop growth across the region.
Abstract:
Four perfluorocarbon tracer dispersion experiments were carried out in central London, United Kingdom, in 2004. These experiments were supplementary to the Dispersion of Air Pollution and Penetration into the Local Environment (DAPPLE) campaign and consisted of ground-level releases, roof-level releases and mobile releases; the latter are believed to be the first such experiments to be undertaken. A detailed description of the experiments, including release, sampling, analysis and wind observations, is given. The characteristics of dispersion from the fixed and mobile sources are discussed and contrasted, in particular the decay in concentration levels away from the source location and the additional variability that results from the non-uniformity of vehicle speed. Copyright © 2009 Royal Meteorological Society
Abstract:
Human consumption of long-chain n-3 polyunsaturated fatty acids (LC n-3 PUFA) is below recommendations, and enriching chicken meat (by incorporating LC n-3 PUFA into broiler diets) is a viable means of increasing consumption. Fish oil is the most common LC n-3 PUFA supplement used but is unsustainable and reduces the oxidative stability of the meat. The objective of this experiment was to compare fresh fish oil (FFO) with fish oil encapsulated (EFO) in a gelatin matrix (to maintain its oxidative stability) and algal biomass at a low (LAG, 11), medium (MAG, 22), or high (HAG, 33 g/kg of diet) level of inclusion. The C22:6n-3 contents of the FFO, EFO, and MAG diets were equal. A control (CON) diet using blended vegetable oil was also made. As-hatched 1-d-old Ross 308 broilers (144) were reared (21 d) on a common starter diet then allocated to treatment pens (4 pens per treatment, 6 birds per pen) and fed treatment diets for 21 d before being slaughtered. Breast and leg meat was analyzed (per pen) for fatty acids, and cooked samples (2 pens per treatment) were analyzed for volatile aldehydes. Concentrations (mg/100 g of meat) of C20:5n-3, C22:5n-3, and C22:6n-3 were (respectively) CON: 4, 15, 24; FFO: 31, 46, 129; EFO: 18, 27, 122; LAG: 9, 19, 111; MAG: 6, 16, 147; and HAG: 9, 14, 187 (SEM: 2.4, 3.6, 13.1) in breast meat and CON: 4, 12, 9; FFO: 58, 56, 132; EFO: 63, 49, 153; LAG: 13, 14, 101; MAG: 11, 15, 102; HAG: 37, 37, 203 (SEM: 7.8, 6.7, 14.4) in leg meat. Cooked EFO and HAG leg meat was more oxidized (5.2 mg of hexanal/kg of meat) than the other meats (mean 2.2 mg/kg, SEM 0.63). It is concluded that algal biomass is as effective as fish oil at enriching broiler diets with C22:6 LC n-3 PUFA, and at equal C22:6n-3 contents, there is no significant difference between these 2 supplements on the oxidative stability of the meat that is produced.
Abstract:
Scope: Fibers and prebiotics represent a useful dietary approach for modulating the human gut microbiome. Therefore, the aim of the present study was to investigate the impact of four flours (wholegrain rye, wholegrain wheat, chickpeas and lentils 50:50, and barley milled grains), characterized by a naturally high content of dietary fibers, on the intestinal microbiota composition and metabolomic output. Methods and results: A validated three-stage continuous fermentative system simulating the human colon was used to reproduce the complexity and diversity of the intestinal microbiota. Fluorescence in situ hybridization was used to evaluate the impact of the flours on the composition of the microbiota, while the small-molecule metabolome was assessed by NMR analysis followed by multivariate pattern recognition techniques. An HT29 cell-growth curve assay was used to evaluate the modulatory properties of the bacterial metabolites on the growth of intestinal epithelial cells. All four flours showed positive modulation of the microbiota composition and metabolic activity. Furthermore, none of the flours influenced the growth-modulatory potential of the metabolites toward HT29 cells. Conclusion: Our findings support the use of the tested ingredients in the development of a variety of potentially prebiotic food products aimed at improving gastrointestinal health.
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations onto a network of regularly spaced grid points. Necessitated by the requirement of numerical weather prediction models to solve the governing finite difference equations on such a grid lattice, objective analysis is a three-dimensional (or, mostly, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years. Weather services have thus based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified for the conventional observations as well. We have fairly good coverage of surface observations 8 times a day, and several upper-air stations make radiosonde and radiowind observations 4 times a day. If we use a 3-hour step in the analysis-forecasting cycle instead of the 12 hours that is most often applied, we may without any difficulty treat all observations as synoptic. No observation would then be more than 90 minutes off time, and even during strong transient motion the observations would fall within a horizontal mesh of 500 km × 500 km.
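A minimal sketch of the 3-hour windowing argument above, under purely illustrative assumptions (the helper function and timestamps are hypothetical, not part of any operational system): each asynoptic observation is assigned to the nearest 3-hourly analysis time, so that no observation is more than 90 minutes off time.

```python
# Minimal illustrative sketch (assumption, not from the original text):
# bin asynoptic observations into 3-hour analysis windows so that each
# observation is at most 90 minutes from its assigned analysis time.
from datetime import datetime, timedelta

CYCLE = timedelta(hours=3)  # analysis-forecasting cycle step

def nearest_analysis_time(obs_time: datetime) -> datetime:
    """Round an observation time to the nearest 3-hourly analysis time."""
    day_start = obs_time.replace(hour=0, minute=0, second=0, microsecond=0)
    cycles = round((obs_time - day_start) / CYCLE)
    return day_start + cycles * CYCLE

obs = datetime(2024, 1, 1, 13, 25)          # an asynoptic observation
t_analysis = nearest_analysis_time(obs)     # -> 2024-01-01 12:00
assert abs(obs - t_analysis) <= timedelta(minutes=90)
```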
Abstract:
Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that identifying and correcting short-term climate model errors has the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal-to-decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale over which the bias emerges and how it propagates, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of bias onset, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific that are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears within a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step toward moving from the current random, ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.
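To make step (2) of the protocol concrete, here is a minimal, hypothetical sketch of estimating the onset time scale of an SST bias from a set of initialised hindcasts; the file names, dimension names, region box and the use of xarray are assumptions, not details taken from the study.

```python
# Hypothetical sketch: mean SST bias as a function of forecast lead time,
# averaged over many hindcast start dates. Assumes two xarray DataArrays
# with dimensions (start_date, lead, lat, lon), already on a common grid
# with ascending lat/lon coordinates.
import xarray as xr

hindcast_sst = xr.open_dataarray("hindcast_sst.nc")   # assumed file
observed_sst = xr.open_dataarray("observed_sst.nc")   # assumed file

# Bias at each lead time, averaged over start dates; the growth of this
# curve toward the long-term coupled-model bias gives the onset time scale.
bias_vs_lead = (hindcast_sst - observed_sst).mean(dim="start_date")

# Regional mean over, e.g., a Southeast Pacific warm-bias box (assumed bounds).
sep_bias = bias_vs_lead.sel(lat=slice(-30, -5), lon=slice(250, 290)).mean(dim=["lat", "lon"])
print(sep_bias.values)  # bias (K) as a function of lead time
```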
Abstract:
Clustering methods are increasingly being applied to residential smart meter data, providing a number of important opportunities for distribution network operators (DNOs) to manage and plan the low voltage networks. Clustering has a number of potential advantages for DNOs, including identifying suitable candidates for demand response and improving energy profile modelling. However, due to the high stochasticity and irregularity of household-level demand, detailed analytics are required to define appropriate attributes to cluster. In this paper we present an in-depth analysis of customer smart meter data to better understand peak demand and the major sources of variability in customer behaviour. We find four key time periods in which the data should be analysed and use these to form relevant attributes for our clustering. We present a finite mixture model based clustering in which we discover 10 distinct behaviour groups describing customers based on their demand and their variability. Finally, using an existing bootstrapping technique, we show that the clustering is reliable. To the authors' knowledge, this is the first time in the power systems literature that the sample robustness of the clustering has been tested.
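As a generic illustration of this kind of finite-mixture clustering (a sketch under simplifying assumptions, not the authors' implementation), per-customer attributes built from four daily periods can be clustered with scikit-learn's GaussianMixture; the synthetic demand data and the attribute choices below are assumptions.

```python
# Illustrative sketch only: Gaussian mixture clustering of smart-meter
# attributes. `demand` is a synthetic (customers x half-hours) array;
# the four time-period attributes are a simplified stand-in.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
demand = rng.gamma(shape=2.0, scale=0.25, size=(500, 48))  # synthetic half-hourly demand

# Example attributes: mean demand in four daily periods plus its variability.
periods = [slice(0, 12), slice(12, 28), slice(28, 38), slice(38, 48)]
features = np.column_stack(
    [demand[:, p].mean(axis=1) for p in periods]
    + [demand[:, p].std(axis=1) for p in periods]
)

gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features)   # behaviour group assigned to each customer
print(np.bincount(labels))           # size of each of the 10 groups
```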
Abstract:
This paper assesses the impact of the location and configuration of Battery Energy Storage Systems (BESS) on Low-Voltage (LV) feeders. BESS are now being deployed on LV networks by Distribution Network Operators (DNOs) as an alternative to conventional reinforcement (e.g. upgrading cables and transformers) in response to increased electricity demand from new technologies such as electric vehicles. By storing energy during periods of low demand and then releasing that energy at times of high demand, the peak demand of a given LV substation on the grid can be reduced therefore mitigating or at least delaying the need for replacement and upgrade. However, existing research into this application of BESS tends to evaluate the aggregated impact of such systems at the substation level and does not systematically consider the impact of the location and configuration of BESS on the voltage profiles, losses and utilisation within a given feeder. In this paper, four configurations of BESS are considered: single-phase, unlinked three-phase, linked three-phase without storage for phase-balancing only, and linked three-phase with storage. These four configurations are then assessed based on models of two real LV networks. In each case, the impact of the BESS is systematically evaluated at every node in the LV network using Matlab linked with OpenDSS. The location and configuration of a BESS is shown to be critical when seeking the best overall network impact or when considering specific impacts on voltage, losses, or utilisation separately. Furthermore, the paper also demonstrates that phase-balancing without energy storage can provide much of the gains on unbalanced networks compared to systems with energy storage.
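The systematic node-by-node evaluation described above can be outlined as a simple placement sweep. The sketch below is a hypothetical outline only: run_power_flow stands in for the Matlab/OpenDSS engine used in the paper and is not a real library call.

```python
# Hypothetical sketch of the placement sweep: install a BESS of a given
# configuration at each node in turn, run a power flow, and record metrics.
from typing import Callable, Dict, List

def sweep_bess_locations(
    nodes: List[str],
    configurations: List[str],
    run_power_flow: Callable[[str, str], Dict[str, float]],
) -> List[Dict[str, object]]:
    """Evaluate every (node, configuration) pair on one LV feeder model."""
    results = []
    for config in configurations:            # e.g. "single-phase", "linked-3ph+storage"
        for node in nodes:
            metrics = run_power_flow(node, config)   # e.g. {"max_voltage_pu": ..., "losses_kwh": ..., "utilisation": ...}
            results.append({"node": node, "config": config, **metrics})
    # Rank candidate locations by, for example, feeder losses.
    return sorted(results, key=lambda r: r["losses_kwh"])
```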
Abstract:
We present a map of the spiral structure of the Galaxy, as traced by molecular carbon monosulphide (CS) emission associated with IRAS sources which are believed to be compact H II regions. The CS line velocities are used to determine the kinematic distances of the sources in order to investigate their distribution in the galactic plane. This allows us to use 870 objects to trace the arms, a number larger than that of previous studies based on classical H II regions. The distance ambiguity of the kinematic distances, when it exists, is solved by different procedures, including the latitude distribution and an analysis of the longitude-velocity diagram. The study of the spiral structure is complemented with other tracers: open clusters, Cepheids, methanol masers and H II regions. The well-defined spiral arms are seen to be confined inside the corotation radius, as is often the case in spiral galaxies. We identify a square-shaped sub-structure in the CS map with that predicted by stellar orbits at the 4:1 resonance (four epicycle oscillations in one turn around the galactic centre). The sub-structure is found at the expected radius, based on the known pattern rotation speed and epicycle frequency curve. An inner arm presents an end with strong inwards curvature and intense star formation that we tentatively associate with the region where this arm surrounds the extremity of the bar, as seen in many barred galaxies. Finally, a new arm with concave curvature is found in the Sagitta to Cepheus region of the sky. The observed arms are interpreted in terms of perturbations similar to grooves in the gravitational potential of the disc, produced by crowding of stellar orbits.
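For readers unfamiliar with kinematic distances, the textbook relation behind such line-velocity surveys can be sketched as follows, assuming a flat rotation curve and round Galactic constants (the constants and the code are illustrative, not taken from the paper); it also shows where the near/far distance ambiguity mentioned above arises inside the solar circle.

```python
# Sketch of the standard kinematic-distance calculation for a source at
# Galactic longitude l with LSR velocity v_lsr, assuming a flat rotation
# curve. R0 and Theta0 are assumed round values, not the paper's.
import math

R0 = 8.0        # kpc, Sun-Galactic-centre distance (assumed)
THETA0 = 220.0  # km/s, circular rotation speed (assumed, flat curve)

def kinematic_distances(l_deg: float, v_lsr: float):
    """Return the galactocentric radius and the (near, far) heliocentric distances in kpc."""
    sinl = math.sin(math.radians(l_deg))
    cosl = math.cos(math.radians(l_deg))
    # Flat rotation curve: v_lsr = Theta0 * sin(l) * (R0/R - 1)
    R = R0 / (1.0 + v_lsr / (THETA0 * sinl))
    disc = R**2 - (R0 * sinl) ** 2
    if disc < 0:
        return R, ()                  # velocity outside the allowed range
    root = math.sqrt(disc)
    d_near, d_far = R0 * cosl - root, R0 * cosl + root
    # Inside the solar circle (R < R0) both roots are positive: this is the
    # distance ambiguity resolved in the paper via latitude distributions
    # and the longitude-velocity diagram.
    return R, tuple(d for d in (d_near, d_far) if d > 0)

print(kinematic_distances(30.0, 60.0))   # example: l = 30 deg, v_lsr = 60 km/s
```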
Abstract:
The purpose of this study was to evaluate the influence of different light sources and photo-activation methods on the degree of conversion (DC%) and polymerization shrinkage (PS) of a nanocomposite resin (Filtek (TM) Supreme XT, 3M/ESPE). Two light-curing units (LCUs), one halogen lamp (QTH) and one light-emitting diode (LED), and two photo-activation methods (continuous and gradual) were investigated. The specimens were divided into four groups: group 1: power density (PD) of 570 mW/cm² for 20 s (QTH); group 2: PD from 0 to 570 mW/cm² for 10 s + 10 s at 570 mW/cm² (QTH); group 3: PD of 860 mW/cm² for 20 s (LED); and group 4: PD of 125 mW/cm² for 10 s + 10 s at 860 mW/cm² (LED). An EMIC testing machine with rectangular steel bases (6 × 1 × 2 mm) was used to record the polymerization shrinkage forces (MPa) over a period that started with the photo-activation and ended after two minutes of measurement. For each group, ten repetitions (n = 40) were performed. For the DC% measurements, five specimens per group (n = 20) were made in a metallic mold (2 mm thickness and 4 mm diameter, ISO 4049), then pulverized, pressed with potassium bromide (KBr) and analyzed by FT-IR spectroscopy. The PS data were analyzed by analysis of variance (ANOVA) with Welch's correction and Tamhane's test. The PS means (MPa) were 0.60 (G1), 0.47 (G2), 0.52 (G3) and 0.45 (G4), showing significant differences between the two photo-activation methods regardless of the light source used; the continuous method produced the highest PS values. The DC% data were analyzed by ANOVA and showed significant differences for the QTH LCU regardless of the photo-activation method used; the QTH unit produced the lowest DC% values. The gradual method thus provides lower polymerization shrinkage with either the halogen lamp or the LED, while the degree of conversion (%) for both photo-activation methods was influenced by the LCU. Thus, the results suggest that gradual photo-activation with the LED LCU would suffice to ensure an adequate degree of conversion and minimum polymerization shrinkage.
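For context, the degree of conversion of methacrylate composites is commonly derived from FT-IR spectra by comparing the aliphatic C=C absorbance (near 1638 cm⁻¹), normalized to the aromatic C=C reference (near 1608 cm⁻¹), before and after curing; the band positions and the formula below are the standard textbook form and are not quoted from the abstract.

```latex
% Standard degree-of-conversion relation for Bis-GMA-based composites
% (typical reference form, not taken from the paper):
\[
\mathrm{DC\%} \;=\; \left( 1 \;-\;
\frac{\left( A_{1638}/A_{1608} \right)_{\text{cured}}}
     {\left( A_{1638}/A_{1608} \right)_{\text{uncured}}}
\right) \times 100
\]
```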
Abstract:
We present the first results of a study investigating the processes that control the concentrations and sources of Pb and particulate matter in the atmosphere of São Paulo City, Brazil. Aerosols were collected with high temporal resolution (3 hours) during a four-day period in July 2005. The highest Pb concentrations measured coincided with large fireworks during celebration events and were associated with high traffic occurrence. Our high-resolution data highlight the impact that a singular transient event can have on air quality, even in a megacity. Under meteorological conditions non-conducive to pollutant dispersion, Pb and particulate matter concentrations accumulated during the night, leading to the highest concentrations in aerosols collected early in the morning of the following day. The stable isotopes of Pb suggest that emissions from traffic remain an important source of Pb in São Paulo City due to the large traffic fleet, despite low Pb concentrations in fuels. © 2010 Elsevier B.V. All rights reserved.