926 results for Scaling Of Chf
Abstract:
Final Master's project submitted for the degree of Master in Mechanical Engineering
Abstract:
The ATLAS experiment at the LHC has measured the Higgs boson couplings and mass, and searched for invisible Higgs boson decays, using multiple production and decay channels with up to 4.7 fb⁻¹ of pp collision data at √s = 7 TeV and 20.3 fb⁻¹ at √s = 8 TeV. In the current study, the measured production and decay rates of the observed Higgs boson in the γγ, ZZ, WW, Zγ, bb, ττ, and μμ decay channels, along with results from the associated production of a Higgs boson with a top-quark pair, are used to probe the scaling of the couplings with mass. Limits are set on parameters in extensions of the Standard Model including a composite Higgs boson, an additional electroweak singlet, and two-Higgs-doublet models. Together with the measured mass of the scalar Higgs boson in the γγ and ZZ decay modes, a lower limit is set on the pseudoscalar Higgs boson mass of m_A > 370 GeV in the “hMSSM” simplified Minimal Supersymmetric Standard Model. Results from direct searches for heavy Higgs bosons are also interpreted in the hMSSM. Direct searches for invisible Higgs boson decays in the vector-boson fusion and associated production of a Higgs boson with W/Z (Z → ℓℓ, W/Z → jj) modes are statistically combined to set an upper limit on the Higgs boson invisible branching ratio of 0.25. The use of the measured visible decay rates in a more general coupling fit improves the upper limit to 0.23, constraining a Higgs portal model of dark matter.
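For context, coupling-versus-mass fits of this kind are often phrased in terms of a two-parameter (M, ε) modification of the Standard Model couplings; one commonly used parametrization (quoted here as background, not necessarily the exact one adopted in this analysis) is

\kappa_{F,i} = v\,\frac{m_{F,i}^{\epsilon}}{M^{1+\epsilon}}, \qquad \kappa_{V,j} = v\,\frac{m_{V,j}^{2\epsilon}}{M^{1+2\epsilon}},

where v ≈ 246 GeV is the vacuum expectation value and the choice (ε, M) = (0, v) recovers the Standard Model couplings.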
Abstract:
BACKGROUND: Lower body negative pressure (LBNP) has been shown to induce a progressive activation of neurohormonal systems, and a renal tubular and hemodynamic response that mimics the renal adaptation observed in congestive heart failure (CHF). As beta-blockers play an important role in the management of CHF patients, the effects of metoprolol on the renal response were examined in healthy subjects during sustained LBNP. METHODS: Twenty healthy male subjects were randomized, in this double-blind study, to placebo or metoprolol 200 mg once daily. After 10 days of treatment, each subject was exposed to 3 levels of LBNP (0, -10, and -20 mbar) for 1 hour, with each level of LBNP separated by 2 days. Neurohormonal profiles, systemic and renal hemodynamics, as well as renal sodium handling were measured before, during, and after LBNP. RESULTS: Blood pressure and heart rate were significantly lower in the metoprolol group throughout the study (P < 0.01). Glomerular filtration rate (GFR) and renal plasma flow (RPF) were similar in both groups at baseline, and no change in renal hemodynamic values was detected at any level of LBNP. However, a reduction in sodium excretion was observed in the placebo group at -20 mbar, whereas no change was detected in the metoprolol group. An increase in plasma renin activity was also observed at -20 mbar in the placebo group that was not observed with metoprolol. CONCLUSION: The beta-blocker metoprolol prevents the sodium retention induced by lower body negative pressure in healthy subjects despite a lower blood pressure. The prevention of sodium retention may be due to a blunting of the neurohormonal response. These effects of metoprolol on the renal response to LBNP may in part explain the beneficial effects of this agent in heart failure patients.
Abstract:
Background: Leptin is produced primarily by adipocytes. Although originally associated with the central regulation of satiety and energy metabolism, increasing evidence indicates that leptin may be an important factor in congestive heart failure (CHF). In this study, we aimed to test the hypothesis that leptin may influence CHF pathophysiology via a pathway of increasing body mass index (BMI). Methods: We studied 2,389 elderly participants aged 70 and older (M: 1,161; F: 1,228) without CHF and with serum leptin measured in the Health, Aging, and Body Composition study. We analyzed the association between serum leptin level and risk of incident CHF using Cox proportional hazards regression models. An elevated leptin level was defined as a level in the highest quartile (Q4) of the leptin distribution in the total sample for each gender. Adjusted covariates included demographic, behavioral, lipid, and inflammation variables (partially adjusted models), and further included BMI (fully adjusted models). Results: Over a mean 9-year follow-up, 316 participants (13.2%) developed CHF. The partially adjusted models indicated that men and women with elevated serum leptin levels (>=9.89 ng/ml in men and >=25 ng/ml in women) had significantly higher risks of developing CHF than those with leptin levels below Q4. The adjusted hazard ratios (95% CI) for incident CHF were 1.49 (1.04-2.13) in men and 1.71 (1.12-2.58) in women. However, these associations became non-significant after further adjustment for BMI in each gender. The fully adjusted hazard ratios (95% CI) were 1.43 (0.94-2.18) in men and 1.24 (0.77-1.99) in women. Conclusion: Subjects with elevated leptin levels have a higher risk of CHF. The study supports the hypothesis that the influence of leptin level on CHF risk may operate through a pathway related to increasing BMI.
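A minimal sketch of the modelling strategy described above (partially adjusted versus fully adjusted Cox models) is given below using the Python lifelines package; the file name, column names, and covariate set are hypothetical placeholders, not the study's actual data.

# Sketch of the partially vs fully adjusted Cox models described above.
# All file, column, and covariate names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("habc_leptin.csv")  # follow-up time, CHF event indicator, covariates
# Top quartile of leptin (Q4); in the study this cut-off was defined per gender.
df["high_leptin"] = (df["leptin"] >= df["leptin"].quantile(0.75)).astype(int)

partial_covars = ["high_leptin", "age", "smoking", "ldl", "crp"]  # demographic/behaviour/lipid/inflammation
full_covars = partial_covars + ["bmi"]                            # further adjusted for BMI

for label, covars in [("partially adjusted", partial_covars), ("fully adjusted", full_covars)]:
    cph = CoxPHFitter()
    cph.fit(df[covars + ["time_years", "chf_event"]],
            duration_col="time_years", event_col="chf_event")
    print(label, "HR for elevated leptin:", round(cph.hazard_ratios_["high_leptin"], 2))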
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with the points' contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
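Both this and the following abstract build on the singular-value decomposition; as a brief reminder (standard background, not specific to these papers), a biplot of a suitably centred and standardized matrix X uses the rank-k truncation

X \approx U_k \Sigma_k V_k^{\mathsf T}, \qquad F = U_k \Sigma_k^{\alpha}, \qquad G = V_k \Sigma_k^{1-\alpha}, \qquad F G^{\mathsf T} \approx X,

and the choice of α, together with any row/column weighting, defines the particular scaling of the display.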
Abstract:
In order to interpret the biplot it is necessary to know which points (usually the variables) are the important contributors to the solution, and this information is available separately as part of the biplot's numerical results. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic directly into the graphical display, showing visually the important contributors and thus facilitating the biplot interpretation and often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. In the contribution biplot one set of points, usually the rows of the data matrix, optimally represents the spatial positions of the cases or sample units, according to some distance measure that usually incorporates some form of standardization unless all data are comparable in scale. The other set of points, usually the columns, is represented by vectors that are related to their contributions to the low-dimensional solution. A fringe benefit is that usually only one common scale for row and column points is needed on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, this version of the biplot also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important, when they are in fact contributing minimally to the solution.
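A minimal numerical sketch of this idea for the plain PCA case with equal weights is given below: rows are plotted in principal coordinates and columns in coordinates whose squared values are their contributions to each axis. It is an illustration under these simplifying assumptions, not the authors' implementation, and the data are synthetic.

# Sketch of a contribution-style biplot for a column-standardized PCA (equal weights).
# Illustration only; the data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))                      # 50 cases, 6 variables (hypothetical data)
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # centre and standardize the columns
Z /= np.sqrt(X.shape[0] - 1)                      # so singular values match component standard deviations

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
rows = U[:, :2] * s[:2]   # cases in principal coordinates (approximate their configuration)
cols = Vt[:2, :].T        # variables in contribution coordinates: squared values sum to 1 per axis

contributions = cols**2   # contribution of each variable to axes 1 and 2
print(np.round(contributions, 3))  # large values flag the important contributors to display prominently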
Abstract:
The wing of the fruit fly, Drosophila melanogaster, with its simple, two-dimensional structure, is a model organ well suited for a systems biology approach. The wing arises from an epithelial sac referred to as the wing imaginal disc, which undergoes a phase of massive growth and concomitant patterning during larval stages. The Decapentaplegic (Dpp) morphogen plays a central role in wing formation with its ability to coordinately regulate patterning and growth. Here, we asked whether the Dpp signaling activity scales, i.e. expands proportionally, with the growing wing imaginal disc. Using new methods for spatial and temporal quantification of Dpp activity and its scaling properties, we found that the Dpp response scales with the size of the growing tissue. Notably, scaling is not perfect at all positions in the field and the scaling of target gene domains is ensured specifically where they define vein positions. We also found that the target gene domains are not defined at constant concentration thresholds of the downstream Dpp activity gradients P-Mad and Brinker. Most interestingly, Pentagone, an important secreted feedback regulator of the pathway, plays a central role in scaling and acts as an expander of the Dpp gradient during disc growth.
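For reference, "scaling" here has the usual operational meaning (a generic definition, not the specific quantification method developed in the paper): the activity profile depends on position only through the relative coordinate x/L, so that for an approximately exponential gradient

C(x, L) \approx C_0\, e^{-x/\lambda(L)}, \qquad \lambda(L) \propto L,

i.e. the decay length λ grows in proportion to the size L of the tissue.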
Abstract:
The paper reports a detailed experimental study on magnetic relaxation of natural horse-spleen ferritin. ac susceptibility measurements performed on three samples of different concentration show that dipole-dipole interactions between uncompensated moments play no significant role. Furthermore, the distribution of relaxation times in these samples has been obtained from a scaling of experimental χ″ data, obtained at different frequencies. The average uncompensated magnetic moment per protein is compatible with a disordered arrangement of atomic spins throughout the core, rather than with surface disorder. The observed field dependence of the blocking temperature suggests that magnetic relaxation is faster at zero field than at intermediate field values. This is confirmed by the fact that the magnetic viscosity peaks at zero field, too. Using the distribution of relaxation times obtained independently, we show that these results cannot be explained in terms of classical relaxation theory. The most plausible explanation of these results is the existence, near zero field, of resonant magnetic tunneling between magnetic states of opposite orientation, which are thermally populated.
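A common way to turn multi-frequency χ″(ω, T) data into a distribution of relaxation times or energy barriers (stated here as general background; the exact weighting used for these samples may differ) assumes an Arrhenius law τ = τ₀ exp(E/k_B T), so that curves taken at different frequencies collapse when plotted against the scaling variable

E_b = k_B T \ln\!\frac{1}{\omega \tau_0}, \qquad \chi''(\omega, T) \propto T\, f(E_b) \;\; \text{(up to the equilibrium-susceptibility weighting)},

where f(E) is the distribution of energy barriers.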
Abstract:
One of the main problems of bridge maintenance in Iowa is the spalling and scaling of the decks. This problem stems from the continued use of deicing salts during the winter months. Since bridges will frost or freeze more often than roadways, the use of deicing salts on bridges is more frequent. The salt which is spread onto the bridge dissolves in water and permeates into the concrete deck. When the salt reaches the depth of the reinforcing steel and the concentration at that depth reaches the threshold concentration for corrosion (1.5 lbs./yd³), the steel will begin to oxidize. The oxidizing steel must then expand within the concrete. This expansion eventually forces undersurface fractures and spalls in the concrete. The spalling increases maintenance problems on bridges and in some cases has forced resurfacing after only a few years of service. There are two possible solutions to this problem. One solution is discontinuing the use of salts as the deicing agent on bridges and the other is preventing the salt from reaching or attacking the reinforcing steel. This report deals with one method which stops the salt from reaching the reinforcing steel. The method utilizes a waterproof membrane on the surface of a bridge deck. The waterproof membrane stops the water-salt solution from entering the concrete so the salt cannot reach the reinforcing steel.
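The time for the salt to reach the reinforcing steel is commonly estimated with a one-dimensional diffusion model (given as general background, not as the analysis performed in this report):

C(x, t) = C_s \left[ 1 - \operatorname{erf}\!\left( \frac{x}{2\sqrt{D_c\, t}} \right) \right],

where C_s is the surface chloride concentration and D_c the apparent diffusion coefficient; corrosion is assumed to begin once C at the rebar depth reaches the threshold value (here 1.5 lbs./yd³).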
Abstract:
This work sought to identify the causes of gypsum scaling on the surfaces of the tube heat exchangers of an ammonium sulphate crystallization plant and to determine how the unwanted scaling could be prevented. The heat exchangers carry an ammonium sulphate solution containing calcium as an impurity, which precipitates on the surfaces as calcium sulphate. The literature part examined the mechanism of crystallization and the factors affecting gypsum crystallization. Scale inhibitors and their effect on gypsum crystallization, as well as the solubility of gypsum in ammonium sulphate solution, were also covered in the literature part. The factors affecting gypsum crystallization were studied in laboratory experiments in which a heat exchanger was simulated with an electric heating element. Different scale inhibitors were tested in the laboratory experiments with the aim of finding the most effective gypsum crystallization inhibitor for the process. The performance of the heat exchangers, i.e. the effect of the scale formed on heat transfer, was studied through the heat flow released by the water. The fouling of the heat exchangers was assessed on the basis of how often the tubes had to be replaced. Variations in the calcium concentration of the process streams were examined with a calcium balance. In addition to the factors affecting scaling, it was determined where calcium enters the process, in what quantities, and where it leaves. The aim of the work was to find a solution by which the unwanted scaling on the heat-exchanger surfaces could be reduced either chemically or by changing the process variables. Most of the calcium was found in the internal ammonium sulphate solution circuits of the reduction plant. According to the calcium balance, most of the calcium leaves the reduction plant as gypsum together with the ammonium sulphate product. In the laboratory experiments, polycarboxylates were found to inhibit gypsum growth best, although the correct dosage of the inhibitor was found to have a large effect. The scale in the heat exchangers was found to be a mixture of gypsum and glauberite. The fouling of the tubes could be monitored on the basis of the heat-flow values released by the shell side.
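The shell-side heat flow referred to above is the usual single-stream energy balance, and a fouling resistance can be tracked alongside it (standard heat-exchanger relations, stated here for orientation rather than taken from the thesis):

Q = \dot m\, c_p\, (T_{\text{in}} - T_{\text{out}}), \qquad R_f = \frac{1}{U_{\text{fouled}}} - \frac{1}{U_{\text{clean}}},

where a falling Q at comparable flow and temperature conditions, or a rising R_f, indicates scale build-up on the tubes.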
Abstract:
The aim of the work was to review the principles and procedures used in the scaling of thermal-hydraulic test facilities and to compare the results of two test-facility models and an EPR model calculated with the Apros simulation code. The purpose was to gain an understanding of how well the examined test facilities represent the behaviour of the EPR plant type in an accident situation. The models were used to study the effect of the coolant inventory on the behaviour of the primary circuit. The results of the test-facility models reproduce the same phenomena as the results of the EPR model. The calculated results of the PKL test-facility model were also compared with an experiment performed on the facility. The PKL model was found to reproduce the experimental results well. Test-facility results are used to validate the computer codes employed in nuclear power plant safety research. Particular care must be taken when using test-facility results, since the scale affects which phenomena occur.
Abstract:
In this thesis, we present a new smoothed particle hydrodynamics (SPH) method for solving the incompressible Navier-Stokes equations, even in the presence of singular forces. The singular source terms are treated in a manner similar to that found in the Immersed Boundary (IB) method of Peskin (2002) or in the method of regularized Stokeslets (Cortez, 2001). In our numerical scheme, we implement a second-order pressure-free projection method inspired by Kim and Moin (1985). This scheme completely avoids the difficulties that can arise from prescribing Neumann boundary conditions on the pressure. We present two variants of this approach: a Lagrangian one, which is commonly used, and an Eulerian one, in which the SPH particles are simply regarded as quadrature points where the fluid properties are computed and can therefore be kept fixed in time. Our SPH method is first tested on the two-dimensional Poiseuille flow problem between two infinite plates, and we carry out a detailed error analysis of the computations. For this problem, the results are similar whether the SPH particles are free to move or kept fixed. We also treat the dynamics of a membrane immersed in a viscous incompressible fluid with our SPH method. The membrane is represented by a cubic spline along which the membrane tension is computed and transmitted to the surrounding fluid. The Navier-Stokes equations, with a singular force arising from the membrane, are then solved to determine the velocity of the fluid in which the membrane is immersed. The fluid velocity obtained in this way is interpolated onto the interface in order to determine its displacement. We discuss the advantages of keeping the SPH particles fixed rather than letting them move freely. We then apply our SPH method to the simulation of confined flows of non-dilute polymer solutions with hydrodynamic interaction and excluded-volume forces. The starting point of the algorithm is the coupled system of Langevin equations for the polymers and the solvent (CLEPS) (see, for example, Oono and Freed (1981) and Öttinger and Rabin (1989)), which here describes the microscopic dynamics of a flowing polymer solution with a bead-spring representation of the macromolecules. Numerical tests of some two-dimensional channel flows show that the second-order projection method combined with fixed SPH quadrature points yields second-order convergence of the velocity and approximately second-order convergence of the pressure, provided the solution is sufficiently smooth. For large-scale computations with dumbbells and bead-spring chains, an appropriate choice of the number of SPH particles as a function of the number of beads N shows that, in the absence of excluded-volume forces, the cost of our algorithm is of order O(N). Finally, we initiate three-dimensional computations with our SPH model. To that end, we solve the three-dimensional Poiseuille flow problem between two infinite parallel plates and the Poiseuille flow problem in an infinitely long rectangular duct. In addition, we simulate, in three dimensions, confined flows of non-dilute polymer solutions between two infinite plates with hydrodynamic interaction and excluded-volume forces.
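For orientation, a second-order pressure-free (Kim-Moin-type) projection step of the kind referred to above typically has the following generic structure; this is a sketch of the family of schemes, not necessarily the exact discretization used in the thesis:

\frac{u^{*} - u^{n}}{\Delta t} = -\,(u \cdot \nabla u)^{n+1/2} + \frac{\nu}{2}\,\nabla^{2}\!\left(u^{*} + u^{n}\right) + f^{n+1/2}, \qquad
\nabla^{2}\phi^{n+1} = \frac{\nabla \cdot u^{*}}{\Delta t}, \qquad
u^{n+1} = u^{*} - \Delta t\, \nabla \phi^{n+1}.

Because only the pseudo-pressure φ appears, no Neumann boundary condition on the physical pressure has to be prescribed, which is the property exploited in the scheme described above; the physical pressure can be recovered from φ afterwards.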
Abstract:
Scanning Probe Microscopy (SPM) has become of fundamental importance for research in the area of micro- and nano-technology. Continuous progress in these fields requires ultra-sensitive measurements at high speed. The imaging speed of conventional Tapping Mode SPM is limited by the actuation time constant of the piezotube feedback loop that keeps the tapping amplitude constant. To avoid this limit, a deflection sensor and an actuator have to be integrated into the cantilever. In this work, the possibility of realising a piezoresistive cantilever with an embedded actuator has been demonstrated. Piezoresistive detection provides a good alternative to the usual optical laser-beam deflection technique. Within the frame of this thesis, the piezoresistive effect in bulk silicon (3D case) has been investigated and modelled for both n- and p-type silicon. Moving towards ultra-sensitive measurements, it is necessary to realise ultra-thin piezoresistors that are well localised at the surface, where the stress magnitude is maximal. New physical effects such as quantum confinement, which arise due to the scaling of the piezoresistor thickness, were taken into account in order to model the piezoresistive effect and its modification in the case of an ultra-thin piezoresistor (2D case). The two-dimensional character of the electron gas in n-type piezoresistors leads to a decrease of the piezoresistive coefficients with increasing degree of electron localisation. Moreover, for p-type piezoresistors the predicted values of the piezoresistive coefficients are higher in the case of localised holes. In addition to the integration of the piezoresistive sensor, an actuator integrated into the cantilever is considered fundamental for realising fast SPM imaging. Actuation of the beam is achieved thermally, relying on the difference in the coefficients of thermal expansion between aluminum and silicon. In addition, the aluminum layer forms the heating micro-resistor, which is able to accept heating pulses at frequencies up to one megahertz. This directly driven, oscillating thermal bimorph actuator was also studied with respect to its actuation efficiency. Higher eigenmodes of the cantilever are used in order to increase the operating frequencies. As a result, the scanning speed has been increased owing to the reduced actuation time constant. The fundamental limits to force sensitivity imposed by the piezoresistive deflection-sensing technique are discussed. For imaging in ambient conditions, the force sensitivity is limited by the thermo-mechanical cantilever noise; additional noise sources connected with the piezoresistive detection are negligible.
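The piezoresistive detection discussed above relies on the standard relation between the relative resistance change and the stress in the resistor (a textbook relation, not a result of this thesis):

\frac{\Delta R}{R} = \pi_{l}\,\sigma_{l} + \pi_{t}\,\sigma_{t},

where π_l and π_t are the longitudinal and transverse piezoresistive coefficients and σ_l, σ_t the corresponding stress components; the thesis models how confinement in ultra-thin (2D) resistors modifies these coefficients.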
Abstract:
Introduction. X-ray images guide clinical decision-making in orthopaedics and are analysed by the surgeon before performing a surgical procedure. Preoperative planning based on hip radiographs makes it possible to predict the size of the prosthetic components to be used in a hip replacement. With the advent of digital radiographs, there is a false perception that their magnification factor has been corrected. Correcting this factor requires an image calibration protocol, which has not yet been implemented at the Fundación Santa Fe de Bogotá (FSFB). As a consequence, hip radiographs are currently magnified. Materials and methods. Seventy-three patients who underwent total hip replacement at the FSFB were selected. For each patient, the dimension of the prosthetic head was measured on the hip radiograph obtained with the digital radiology system (PACS-IMPAX) and compared with the size of the implanted femoral head. Results. Inter-observer agreement in measuring the radiological dimension of the prosthetic components was excellent, and the average magnification coefficient was 1.2 (20%). This coefficient will be entered into PACS-IMPAX to adjust the final size of the radiograph. Conclusion. Adjusting PACS-IMPAX yields radiographs that more accurately reflect the size of the anatomical segments and the prosthetic components.
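As a worked example of the reported calibration (illustrative numbers only): with the average magnification coefficient of 1.2, true dimensions are recovered by dividing measured ones by that factor,

d_{\text{true}} = \frac{d_{\text{measured}}}{1.2}, \qquad \text{e.g. } \frac{38.4\ \text{mm}}{1.2} = 32\ \text{mm},

so a femoral head measuring 38.4 mm on an uncalibrated image would correspond to a 32 mm implant.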