890 results for Reactive power flow


Relevance:

30.00%

Publisher:

Abstract:

Endothelial dysfunction represents an early stage of atherosclerosis, a stage at which patients are generally asymptomatic. There is therefore clear value in detecting endothelial dysfunction. We developed a technique for measuring variations in arterial flow in the upper limbs, based on near-infrared spectroscopy (NIRS). This approach should make it possible to study the degree of vascular impairment and probably to quantify the degree of peripheral endothelial dysfunction during reactive hyperemia. The experiment was carried out on two cohorts of 13 and 15 patients and compared with strain-gauge plethysmography (SGP), which is considered a reference method. We then characterized the endothelial response by modelling the hyperemic arterial flow curve. Preliminary studies had shown that the hyperemic response mostly takes a bimodal form. We attempted to separate the endothelium-dependent and endothelium-independent components of the hyperemia. Quantifying the two components of the hyperemic reaction makes it possible to compute an index of the 'health' of the local endothelial system, named the ηfactor. The results show a strong correlation between the flow measurements of the developed technique and those of the reference method (r = 0.91). We conclude that NIRS is an accurate approach for the non-invasive measurement of arterial flow. We obtained good repeatability (ICC = 0.9313) for the ηfactor, indicating its robustness. Further studies are nevertheless required to validate the diagnostic value of the defined factor. Keywords: reactive hyperemia, myogenic response, nitric oxide, atherosclerosis, near-infrared spectroscopy

Relevance:

30.00%

Publisher:

Abstract:

"In a humorous tone, the authors admit to being thieves… of ideas. What purpose do copyright laws serve, they ask? Originally, it was not the rights of creators that the sovereign or the State sought to protect, but the privileges of publishers. And why were publishers granted the exclusive right to publish? Because from the invention of the printing press onward, men of power saw clearly the threat that the dissemination of ideas posed to them: the calculation they made benefited the printers. The phenomenon is not unrelated to the radio and television broadcasting licences that exist today in our democratic states; and history repeats itself, as we observe today with the regulation of the Internet. When publishers realized that they could no longer control everything that was published, they invoked the rights of creators as a pretext to protect their own interests. Neither ethics nor aesthetics motivated the publishers, but solely their commercial interests, legitimate though these may be. Two factions are opposed today on the question of authors' rights in the digital era. The old guard fights to preserve more or less the status quo, while its opponents proclaim the death of copyright as it has existed. And what new model do the latter advocate? In fact, they do not oppose all forms of protection for those who have traditionally benefited from it, but envisage new mechanisms…, so that the old guard need not worry unduly. The heart of the problem lies elsewhere, argue Benyekhlef and Tresvant: even if lawyers will plead that it is not ideas but the particular form a creator has chosen to express them that copyright laws protect, this changes nothing. As soon as an idea is expressed and fixed in a certain way, it becomes more difficult to express it anew, since part of the virtual field it could occupy has already been claimed, lawfully under current law. One must conclude that the new copyright, like traditional copyright, is an obstacle to the free circulation of ideas."

Relevance:

30.00%

Publisher:

Abstract:

Thesis completed under joint supervision (cotutelle) with Michèle Prévost (Ph.D.), Full Professor in the Department of Civil, Geological and Mining Engineering at École Polytechnique de Montréal.

Relevance:

30.00%

Publisher:

Abstract:

A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is the nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year, and in doing so they consume enormous amounts of energy. Tremendous growth in demand from the IT industry has made it customary to develop newer technologies for better data centre operation. Energy conservation activities in data centres concentrate mainly on the air conditioning system, since it is the major mechanical sub-system and accounts for a considerable share of the total power consumption of the data centre. Data centre energy performance is best represented by the power usage effectiveness (PUE), defined as the ratio of the total facility power to the IT equipment power. Its value is always greater than one, and a large PUE indicates that the sub-systems draw more power from the facility and that the data centre performs poorly from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable through proper design and management techniques. Optimizing the air conditioning system offers an enormous opportunity to bring down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems have now been introduced by some companies, but they are highly sophisticated and costly and have not attracted much attention in practice.
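The PUE definition above can be expressed directly; a minimal sketch, where the 700 kW / 500 kW figures are illustrative rather than taken from the abstract:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    Always >= 1; lower is better from an energy-conservation standpoint."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# An illustrative facility drawing 700 kW in total for a 500 kW IT load:
print(pue(700.0, 500.0))  # → 1.4, within the 1.4-1.6 range cited above
```

The gap between the result and 1.0 is the overhead drawn by cooling, power distribution and other sub-systems.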

Relevance:

30.00%

Publisher:

Abstract:

In Wireless Sensor Networks (WSN), neglecting the effects of varying channel quality can lead to unnecessary wastage of precious battery resources, which in turn can result in the rapid depletion of sensor energy and the partitioning of the network. Fairness is a critical issue when accessing a shared wireless channel, and fair scheduling must be employed to provide the proper flow of information in a WSN. In this paper, we develop a channel adaptive MAC protocol with a traffic-aware dynamic power management algorithm for efficient packet scheduling and queuing in a sensor network, with the time-varying characteristics of the wireless channel also taken into consideration. The proposed protocol calculates a combined weight value based on the channel state and link quality. Transmission is then allowed only for those nodes with weights greater than a minimum quality threshold, and nodes attempting to access the wireless medium with a low weight may transmit only when their weight becomes high. This results in many poor-quality nodes being deprived of transmission for a considerable amount of time. To avoid buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction algorithm. We also design a traffic-aware dynamic power management scheme to minimize energy consumption by turning off the radio interface of all unnecessary nodes that are not included in the routing path. Simulation results show that our proposed protocol achieves higher throughput and fairness while also reducing delay.
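The weight-threshold admission rule described above can be sketched as follows. The mixing parameter, the threshold and the node values are assumed for illustration; the paper's actual weight formula is not given in the abstract.

```python
# Hypothetical sketch of weight-threshold scheduling: each node gets a
# combined weight from channel state and link quality, and only nodes
# at or above a minimum quality threshold may transmit.

def combined_weight(channel_state: float, link_quality: float,
                    alpha: float = 0.5) -> float:
    # alpha is an assumed mixing parameter, not taken from the paper
    return alpha * channel_state + (1 - alpha) * link_quality

def eligible(nodes: dict, threshold: float = 0.6) -> list:
    """Return the ids of nodes admitted to the medium this round."""
    return [nid for nid, (cs, lq) in nodes.items()
            if combined_weight(cs, lq) >= threshold]

nodes = {"n1": (0.9, 0.8), "n2": (0.3, 0.4), "n3": (0.7, 0.6)}
print(eligible(nodes))  # n1 (0.85) and n3 (0.65) pass; n2 (0.35) must wait
```

Nodes like n2 that wait repeatedly are exactly the ones the paper's load prediction algorithm is designed to protect from buffer overflow.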

Relevance:

30.00%

Publisher:

Abstract:

High aspect ratio polymeric micro-patterns are ubiquitous in many fields, ranging from sensors, actuators and optics to fluidics and medical devices. Second generation PDMS molds are replicated against first generation silicon molds created by deep reactive ion etching. In order to ensure successful demolding, the silicon molds are coated with a thin layer of C4F8 plasma polymer to reduce the adhesion force. Peel force and demolding status are used to determine whether delamination is successful. The response surface method is employed to provide insights into how changes in coil power, passivation time and gas flow conditions affect plasma polymerization of C4F8.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Reduced flow in the coronary vessels in the absence of occlusion is known as the no-reflow phenomenon. It is observed after reperfusion, and its incidence ranges from 5% to 50% depending on the population and the diagnostic criteria. The event carries a poor prognosis: it increases the risk of death in the first 30 days after angioplasty (RR 2.1, p = 0.038) and is associated with heart failure and arrhythmias. Identifying the factors with which it is associated would therefore allow preventive therapies to be implemented. Methods: A case-control study matched by the physician who assessed the event (to rule out inter-observer variation), with a 1:4 ratio (18:72), conducted to identify factors associated with no-reflow in patients undergoing angioplasty between November 2010 and May 2014 at the Clínica San Rafael in Bogotá, D.C. Results: The frequency of no-reflow was 2.89%. ST-elevation acute myocardial infarction (STEMI) was the only variable that showed a statistically significant association with this event (p = 0.002, OR 8.7, 95% CI 2.0-36.7). Discussion: The no-reflow phenomenon behaved in this population as described in the literature, with STEMI a strongly associated factor.
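The case-control arithmetic behind a reported odds ratio and confidence interval can be sketched with Woolf's log-odds method. The cell counts below are hypothetical, since the study's 2×2 table is not reproduced in the abstract.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and 95% CI (Woolf's log method) for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Illustrative counts only, not the study's data:
or_, lower, upper = odds_ratio_ci(10, 8, 20, 52)
print(round(or_, 2), round(lower, 2), round(upper, 2))
```

An association is conventionally read as statistically significant when the resulting interval excludes 1, as the study's interval of 2.0-36.7 does.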

Relevance:

30.00%

Publisher:

Abstract:

The kinetics of uptake of gaseous N2O5 on submicron aerosols containing NaCl and natural sea salt have been investigated in a flow reactor as a function of relative humidity (RH) in the range 30-80% at 295 ± 2 K and a total pressure of 1 bar. The measured uptake coefficients, γ, were larger on the aerosols containing sea salt than on those of pure NaCl, and in both cases increased with increasing RH. These observations are explained in terms of the variation in the size of the salt droplets, which leads to a limitation in the uptake rate into small particles. After correction for this effect the uptake coefficients are independent of relative humidity, and agree with those measured previously on larger droplets. A value of γ = 0.025 is recommended for the reactive uptake coefficient for N2O5 on deliquesced sea salt droplets at 298 K and RH > 40%.
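The recommended γ can be converted into a first-order atmospheric loss rate via the standard free-molecular expression k_het = γ·c̄·Sa/4, with c̄ the mean molecular speed. The aerosol surface area density below is an assumed illustrative value, not a number from the study.

```python
import math

R = 8.314        # gas constant, J mol^-1 K^-1
M_N2O5 = 0.108   # molar mass of N2O5, kg mol^-1
T = 298.0        # temperature, K

# Mean molecular speed: c_bar = sqrt(8RT / (pi * M))
c_bar = math.sqrt(8 * R * T / (math.pi * M_N2O5))  # ~240 m s^-1

gamma = 0.025    # recommended reactive uptake coefficient from the abstract
S_a = 1e-4       # assumed aerosol surface area density, m^2 per m^3 of air

# First-order heterogeneous loss rate of gaseous N2O5, s^-1
k_het = gamma * c_bar * S_a / 4
print(f"{k_het:.2e}")
```

The linear dependence of k_het on γ is why the RH- and composition-dependence of the uptake coefficient matters for modelling nighttime N2O5 loss.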

Relevance:

30.00%

Publisher:

Abstract:

A finite element numerical study has been carried out on the isothermal flow of power law fluids in lid-driven cavities with axial throughflow. The effects of the tangential flow Reynolds number (Re-U), axial flow Reynolds number (Re-W), cavity aspect ratio and shear thinning property of the fluids on tangential and axial velocity distributions and the frictional pressure drop are studied. Where comparison is possible, very good agreement is found between current numerical results and published asymptotic and numerical results. For shear thinning materials in long thin cavities in the tangential flow dominated flow regime, the numerical results show that the frictional pressure drop lies between two extreme conditions, namely the results for duct flow and analytical results from lubrication theory. For shear thinning materials in a lid-driven cavity, the interaction between the tangential flow and axial flow is very complex because the flow is dependent on the flow Reynolds numbers and the ratio of the average axial velocity and the lid velocity. For both Newtonian and shear thinning fluids, the axial velocity peak is shifted and the frictional pressure drop is increased with increasing tangential flow Reynolds number. The results are highly relevant to industrial devices such as screw extruders and scraped surface heat exchangers.
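The shear-thinning behaviour referred to above follows the power-law (Ostwald-de Waele) constitutive model; a minimal sketch, with an assumed consistency index K and flow index n for illustration:

```python
def apparent_viscosity(K: float, n: float, shear_rate: float) -> float:
    """Power-law (Ostwald-de Waele) model: mu_app = K * gamma_dot**(n - 1).
    n < 1 gives shear thinning: apparent viscosity falls as shear rate rises;
    n = 1 recovers a Newtonian fluid with constant viscosity K."""
    return K * shear_rate ** (n - 1)

# Assumed shear-thinning fluid (K = 2.0 Pa s^n, n = 0.5):
for rate in (1.0, 10.0, 100.0):
    print(rate, apparent_viscosity(K=2.0, n=0.5, shear_rate=rate))
```

Near a moving lid the shear rate is high and the apparent viscosity low, which is one reason the tangential/axial flow interaction in such cavities is harder to predict than for Newtonian fluids.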

Relevance:

30.00%

Publisher:

Abstract:

When a computer program requires legitimate access to confidential data, the question arises whether such a program may illegally reveal sensitive information. This paper proposes a policy model to specify what information flow is permitted in a computational system. The security definition, which is based on a general notion of information lattices, allows various representations of information to be used in the enforcement of secure information flow in deterministic or nondeterministic systems. A flexible semantics-based analysis technique is presented, which uses the input-output relational model induced by an attacker's observational power, to compute the information released by the computational system. An illustrative attacker model demonstrates the use of the technique to develop a termination-sensitive analysis. The technique allows the development of various information flow analyses, parametrised by the attacker's observational power, which can be used to enforce what declassification policies.
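The lattice-based flow condition underlying such policies can be sketched minimally: with a two-point lattice Low ⊑ High, information may only flow upward. This toy check is an illustration of the general idea, not the paper's formalism, which is parametrised by the attacker's observational power.

```python
# Minimal two-point security lattice (Low ⊑ High) and a flow check:
# information may flow from src to dst only if src ⊑ dst.
LEVELS = {"Low": 0, "High": 1}

def flow_allowed(src: str, dst: str) -> bool:
    """Permitted iff src is below or equal to dst in the lattice."""
    return LEVELS[src] <= LEVELS[dst]

print(flow_allowed("Low", "High"))  # True: public data may reach a secret sink
print(flow_allowed("High", "Low"))  # False: would leak secret data
```

Declassification policies, as discussed above, relax exactly this downward prohibition in controlled, policy-specified ways.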

Relevance:

30.00%

Publisher:

Abstract:

The authors discuss an implementation of an object oriented (OO) fault simulator and its use within an adaptive fault diagnostic system. The simulator models the flow of faults around a power network, reporting switchgear indications and protection messages that would be expected in a real fault scenario. The simulator has been used to train an adaptive fault diagnostic system; results and implications are discussed.

Relevance:

30.00%

Publisher:

Abstract:

It has been known for decades that the metabolic rate of animals scales with body mass with an exponent that is almost always <1, >2/3, and often very close to 3/4. The 3/4 exponent emerges naturally from two models of resource distribution networks, radial explosion and hierarchically branched, which incorporate a minimum of specific details. Both models show that the exponent is 2/3 if velocity of flow remains constant, but can attain a maximum value of 3/4 if velocity scales with its maximum exponent, 1/12. Quarter-power scaling can arise even when there is no underlying fractality. The canonical “fourth dimension” in biological scaling relations can result from matching the velocity of flow through the network to the linear dimension of the terminal “service volume” where resources are consumed. These models have broad applicability for the optimal design of biological and engineered systems where energy, materials, or information are distributed from a single source.
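The exponent arithmetic in the abstract can be made explicit: the surface-limited exponent 2/3 plus the maximal velocity exponent 1/12 gives the canonical 3/4.

```python
from fractions import Fraction

# If flow velocity is constant, metabolic rate scales as M**(2/3);
# if velocity itself scales as M**(1/12), the exponents add:
surface_exponent = Fraction(2, 3)
velocity_exponent = Fraction(1, 12)
print(surface_exponent + velocity_exponent)  # → 3/4, quarter-power scaling
```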

Relevance:

30.00%

Publisher:

Abstract:

Knowledge is recognised as an important source of competitive advantage and hence there has been increasing academic and practitioner interest in understanding and isolating the factors that contribute to effective knowledge transfer between supply chain actors. The literature identifies power as a salient contributor to the effective operation of a supply chain partnership. However, there is a paucity of empirical research examining how power among actors influences knowledge acquisition and in turn the performance of supply chain partners. The aim of this research is to address this gap by examining the relationship between power, knowledge acquisition and supply chain performance among the supply chain partners of a focal Chinese steel manufacturer. A structured survey was used to collect the necessary data. Two conceptually independent variables – ‘availability of alternatives’ and ‘restraint in the use of power’ – were used to assess actual and realised power, respectively. Controlling for contingencies, we found that the flow of knowledge increased when supply chain actors had limited alternatives and when the more powerful actor exercised restraint in the use of power. Moreover, we found a positive relationship between knowledge acquisition and supply chain performance. This paper enriches the literature by empirically extending our understanding of how power affects knowledge acquisition and performance.

Relevance:

30.00%

Publisher:

Abstract:

As the integration of vertical axis wind turbines in the built environment is a promising alternative to horizontal axis wind turbines, a 2D computational investigation of an augmented wind turbine is proposed and analysed. In the initial CFD analysis, three parameters are carefully investigated: mesh resolution, turbulence model and time step size. It appears that the mesh resolution and the turbulence model affect result accuracy, while the time step size examined has, given the unsteady nature of the flow, a small impact on the numerical results. In the CFD validation of the open rotor against secondary data, the numerical results are in good agreement in terms of shape; however, a discrepancy by a factor of 2 is observed between the numerical and experimental data. Subsequently, the introduction of an omnidirectional stator around the wind turbine increases the power and torque coefficients by around 30–35% compared to the open case, but attention needs to be given to the orientation of the stator blades for optimum performance. It is found that the power and torque coefficients of the augmented wind turbine are independent of the incident wind speed considered.
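The power coefficient discussed above is the standard dimensionless measure Cp = P / (½ρAV³); a minimal sketch with illustrative numbers (not from the study), applying the roughly 30% stator augmentation reported:

```python
def power_coefficient(power_w: float, rho: float,
                      area_m2: float, v_ms: float) -> float:
    """Cp = P / (0.5 * rho * A * V**3): the fraction of the wind's
    kinetic power extracted by the rotor (dimensionless)."""
    return power_w / (0.5 * rho * area_m2 * v_ms ** 3)

# Illustrative values only: a rotor producing 150 W from a 1 m^2 swept
# area in a 10 m/s wind at sea-level air density (1.225 kg/m^3).
cp_open = power_coefficient(150.0, 1.225, 1.0, 10.0)
print(round(cp_open, 3))          # open-rotor coefficient
print(round(cp_open * 1.30, 3))   # with a ~30% stator augmentation
```

Because Cp is normalised by V³, the abstract's finding that the augmented coefficients are independent of incident wind speed means the extracted power simply scales with the cube of the wind speed.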