972 results for "proportionality constant"
Abstract:
For drip irrigation design and management, it is necessary to know the relation between the flow rate and the pressure acting on emitters. In subsurface drip irrigation, the backpressure phenomenon may change the hydraulic characteristics of emitters. Thus, this study aimed to determine the flow-pressure relationship of different driplines under surface and subsurface conditions, in order to identify possible differences in hydraulic behavior. We tested four emitter types: two pressure-compensating (D5000 and Hydro PCND) and two non-pressure-compensating (TalDrip and Jardiline). Emitter flow rates were obtained under atmospheric conditions and submerged in water, with submergence levels representing backpressure. Assays were performed at inlet pressures of 80, 100, 120, and 150 kPa for the Hydro PCND dripline and 25, 50, 100, and 150 kPa for the others; backpressures were 0.49, 1.47, 2.45, 4.41, and 6.37 kPa, with four replications. Submergence changed the emitters' proportionality constants and discharge exponents, reflecting the backpressure effect. Non-pressure-compensating emitters had their discharge exponents decreased, while pressure-compensating emitters had theirs increased. Backpressure reduced emitter flow rates at all evaluated pressures.
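The flow-pressure relation referred to here is the usual power-law emitter equation q = k·h^x, where k is the proportionality constant and x the discharge exponent; a minimal sketch (the k and x values below are illustrative, not the study's fitted constants):

```python
def emitter_flow(h_kpa, k, x):
    """Emitter flow rate (L/h) at inlet pressure h_kpa (kPa): q = k * h**x."""
    return k * h_kpa ** x

# A non-pressure-compensating emitter typically has x near 0.5,
# so flow rises with pressure; a compensating one has x near 0.
q_npc = emitter_flow(100.0, k=0.16, x=0.5)   # non-compensating
q_pc  = emitter_flow(100.0, k=1.55, x=0.02)  # compensating: nearly flat in h
```

Backpressure, as reported in the abstract, shows up as a change in the fitted k and x rather than in the functional form.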
Abstract:
Rheology studies the flow and deformation of materials subjected to stress or other external mechanical loading. In practice, the scientific field covered by rheology is largely restricted to the behavior of homogeneous fluids, which include liquids, particle suspensions, and emulsions. Viscosity (η) and yield stress (τ0) are the two basic quantities that define a fluid's behavior: the first is the proportionality constant relating the shear rate (γ) to the applied shear stress (τ), while the second indicates the minimum stress at which flow begins. Fluids obeying Newton's relation (Newtonian fluids) display constant viscosity and zero yield stress. This is the case for dilute suspensions and most pure liquids (water, acetone, alcohol, etc.), for which viscosity is an intrinsic property that depends on temperature and, less significantly, on pressure. The suspension known as cement paste is a mixture of water and cement, with or without a superplasticizer admixture. Cement paste behaves as a non-Newtonian (pseudoplastic) fluid: its viscosity varies with the applied shear stress, and significant deformation occurs only beyond a well-defined yield stress. In some cases, the system also reflects the influence of chemical admixtures used to modify fluid-particle interactions, as well as modifications introduced by entrained air. Rheometric tests on the cement pastes were performed with a Brookfield R/S rheometer, which controls shear stress and shear rate, and the results were interpreted with the Herschel-Bulkley rheological model, which seems best suited to the behavior of this kind of suspension.
This paper presents the results of rheometric tests on cement pastes produced with HOLCIM MC-20 RS and CPV-ARI RS cements and naphthalene- and polycarboxylate-based superplasticizer admixtures, with and without constant agitation of the mixture. The superplasticizer dosages and the water/cement ratio were obtained from fluidity measurements, for a total of 12 different mixtures. The rheological parameters are observed to vary with the cement type, the superplasticizer type, and the methodology applied in determining the fluidity.
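The Herschel-Bulkley model mentioned above is τ = τ0 + K·γ^n; a minimal sketch with hypothetical parameter values:

```python
def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress (Pa) at shear rate gamma_dot (1/s): tau = tau0 + K * gamma_dot**n."""
    return tau0 + K * gamma_dot ** n

# Newtonian fluid as the special case tau0 = 0, n = 1 (K is then the viscosity).
tau_newtonian = herschel_bulkley(100.0, tau0=0.0, K=0.5, n=1.0)
# A pseudoplastic paste: non-zero yield stress, shear-thinning exponent n < 1.
tau_paste = herschel_bulkley(100.0, tau0=15.0, K=2.0, n=0.6)
```

Below the yield stress (γ = 0) the model returns τ0, which matches the abstract's description of flow beginning only above a minimum stress.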
Abstract:
Neurally adjusted ventilatory assist (NAVA) delivers airway pressure (Paw) in proportion to the electrical activity of the diaphragm (EAdi) using an adjustable proportionality constant (the NAVA level, cmH2O/μV). During systematic increases in the NAVA level, feedback-controlled down-regulation of the EAdi results in a characteristic two-phased response in Paw and tidal volume (Vt). The transition from the first to the second response phase allows identification of adequate unloading of the respiratory muscles with NAVA (NAVA_AL). We aimed to develop and validate a mathematical algorithm to identify NAVA_AL. Paw, Vt, and EAdi were recorded while systematically increasing the NAVA level in 19 adult patients. In a multistep approach, inspiratory Paw peaks were first identified by dividing the EAdi into inspiratory portions using Gaussian mixture modeling. Two polynomials were then fitted onto the curves of the Paw peaks and of Vt. The beginning of the Paw and Vt plateaus, and thus NAVA_AL, was identified at the minimum of the squared polynomial derivatives and the polynomial fitting errors. A graphical user interface was developed in the Matlab computing environment. The median NAVA_AL visually estimated by 18 independent physicians was 2.7 (range 0.4 to 5.8) cmH2O/μV, and that identified by our model was 2.6 (range 0.6 to 5.0) cmH2O/μV. NAVA_AL identified by our model was below the range of the visual estimates in two instances and above it in one. We conclude that our model identifies NAVA_AL in most instances with acceptable accuracy for application in clinical routine and research.
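The published algorithm relies on Gaussian mixture modeling and polynomial fits; as a much-simplified stand-in for the plateau-detection idea, the onset can be located where the local slope of Paw versus NAVA level drops below a fraction of its initial value (synthetic data; the threshold fraction is a hypothetical choice):

```python
def plateau_onset(levels, paw, frac=0.25):
    """Return the first NAVA level at which the local slope of Paw vs. level
    falls below frac * (maximum slope), marking the start of the plateau.
    A simplified stand-in for the published polynomial-fit algorithm."""
    slopes = [(paw[i + 1] - paw[i]) / (levels[i + 1] - levels[i])
              for i in range(len(levels) - 1)]
    ref = max(slopes)
    for i, s in enumerate(slopes):
        if s < frac * ref:
            return levels[i]
    return levels[-1]

# Synthetic two-phase response: Paw rises linearly, then plateaus at level 2.5.
levels = [i * 0.5 for i in range(11)]            # 0.0 .. 5.0 cmH2O/uV
paw = [min(l, 2.5) * 8.0 for l in levels]        # cmH2O
onset = plateau_onset(levels, paw)
```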
Abstract:
We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence at a fixed point if the density has a negative derivative. The same rate is obtained by a kernel estimator, but the limit distributions are different. If the density is both differentiable and known to be monotone, then a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence and compare the limit distributions of the three types of estimators. It is shown that both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behavior of a kernel estimator with a larger bandwidth, in the case that the density is known to have more than one derivative.
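The isotonization step referred to here is typically carried out with the pool-adjacent-violators algorithm (PAVA); a minimal sketch for projecting a kernel estimate, evaluated on a grid, onto non-increasing sequences:

```python
def isotonize_decreasing(y):
    """Pool-adjacent-violators: least-squares projection of the sequence y
    onto non-increasing sequences. Applied to a kernel density estimate
    when the density is known to be decreasing."""
    out = []  # blocks of (mean, weight)
    for v in y:
        mean, w = float(v), 1
        # Merge backwards while the new block violates monotonicity.
        while out and out[-1][0] < mean:
            m_prev, w_prev = out.pop()
            mean = (m_prev * w_prev + mean * w) / (w_prev + w)
            w += w_prev
        out.append((mean, w))
    return [m for m, w in out for _ in range(w)]
```

Each violating pair is pooled into its weighted mean, so the output is the closest non-increasing sequence in the least-squares sense.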
Abstract:
This study evaluated the response to increasing levels of neurally adjusted ventilatory assist (NAVA), a mode converting the electrical activity of the diaphragm (EAdi) into pressure, regulated by a proportionality constant called the NAVA level. Fourteen rabbits were studied during baseline, resistive loading, and ramp increases of the NAVA level. EAdi, airway pressure (Paw), esophageal pressure (Pes), the Pes pressure-time product (PTPes), breathing pattern, and blood gases were measured. Resistive loading increased PTPes and EAdi. PaCO2 increased with the high load but not with the low load. Increasing NAVA levels increased Paw until a breakpoint, beyond which the Paw increase was reduced despite the increasing NAVA level. At this breakpoint, Pes, PTPes, EAdi, and PaCO2 were similar to baseline. Further increases of the NAVA level reduced Pes, PTPes, and EAdi without changes in ventilation. In conclusion, observing the trend in Paw during a ramp increase of the NAVA level allows determination of a level at which the inspiratory effort matches unloaded conditions.
Abstract:
The rheoencephalogram (REG) is the change in the electrical impedance of the head that occurs with each heart beat. Without knowledge of the relationship between cerebral blood flow (Q) and the REG, the utility of the REG in the study of the cerebral vasculature is greatly limited. The hypothesis is that, when venous outflow is nonpulsatile, the REG is related to Q by an expression (equation omitted in the source) in which K is a proportionality constant and Q̄ is the mean Q. Pulsatile cerebral blood flow was measured in the goat via a chronically implanted electromagnetic flowmeter. Electrodes were implanted in the ipsilateral cerebral hemisphere, and the REG was measured with a two-electrode impedance plethysmograph. Measurements were made with the animal's head elevated so that venous flow pulsations were not transmitted from the heart to the cerebral veins, under conditions of varied cerebrovascular resistance induced by altering blood CO2 levels and under high and low cerebrospinal fluid pressures. There was a high correlation (r = .922-.983) between the REG calculated from the hypothesized relationship and the measured REG under all conditions. Other investigators have proposed that the REG results from linear changes in blood resistivity proportional to blood velocity; however, there was little to no correlation between the measured REG and the flow velocity (r = .022-.306). A linear combination of the flow velocity and the hypothesized relationship did not predict the measured REG significantly better than the hypothesized relationship alone in 37 out of 50 experiments. Jacquy proposed an index (F) of cerebral blood flow calculated from amplitudes and latencies of the REG.
The F index was highly correlated (r = .929) with measured cerebral blood flow under control and hypercapnic conditions, but was less highly correlated under hypocapnia (r = .723) and arterial hypotension (r = .681). The results demonstrate that the REG is determined not by mean cerebral blood flow but by the pulsatile flow only. Thus, the utility of the REG in the determination of mean cerebral blood flow is limited.
Abstract:
The content of ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco; Et; EC 4.1.1.39), measured in different-aged leaves of sunflower (Helianthus annuus) and other plants grown under different light intensities, varied from 2 to 75 μmol active sites m−2. Mesophyll conductance (μ) was measured under 1.5% O2, as was postillumination CO2 uptake (the assimilatory charge, a gas-exchange measure of the ribulose-1,5-bisphosphate pool). The dependence of μ on Et saturated at Et = 30 μmol active sites m−2 and μ = 11 mm s−1 in high-light-grown leaves. In low-light-grown leaves the dependence tended toward saturation at a similar Et but reached a μ of only 6 to 8 mm s−1. μ was proportional to the assimilatory charge, with the proportionality constant (specific carboxylation efficiency) between 0.04 and 0.075 μm−1 s−1. Our data show that the saturation of the relationship between Et and μ is caused by three limiting components: (a) the physical diffusion resistance (a minor limitation), (b) less than full activation of Rubisco (related to Rubisco activase and the slower diffusibility of Rubisco at high protein concentrations in the stroma), and (c) chloroplast metabolites, especially 3-phosphoglyceric acid and free inorganic phosphate, which control the reaction kinetics of ribulose-1,5-bisphosphate carboxylation by competitive binding to active sites.
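The proportionality reported above (conductance proportional to assimilatory charge, with the specific carboxylation efficiency as proportionality constant) can be sketched as follows; the charge value is hypothetical, and k is taken within the reported 0.04-0.075 range:

```python
def mesophyll_conductance(charge, k=0.06):
    """mu = k * charge in the linear region below Rubisco saturation;
    k is the specific carboxylation efficiency (hypothetical mid-range value)."""
    return k * charge

mu = mesophyll_conductance(100.0)
```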
Abstract:
This thesis consists of four articles and an introductory section. The main research questions in all the articles concern proportionality and party success in Europe, at the European, national, or district level. Proportionality in this thesis denotes the proximity of the seat shares parties receive to their respective vote shares after the electoral system's allocation process. This proportionality can be measured through numerous indices that illustrate either the overall proportionality of an electoral system or that of a particular election. The correspondence of a single party's seat share to its vote share can also be measured. Overall proportionality is essential in three of the articles (1, 2, and 4), where the system's performance is studied by means of plots. In article 3, minority party success is measured by advantage ratios that reveal a single party's winnings or losses in the votes-to-seats allocation process. The first article asks how proportional the European Parliament (EP) electoral systems are, how they compare with results from earlier studies, and how the EP electoral systems treat parties of different sizes. The reasons for different outcomes are sought in the explanations given by traditional electoral studies, i.e., electoral system variables. The countries studied (the EU15) apply electoral systems that vary in many important aspects, even though a certain amount of uniformity has been aspired to for decades. Since the electoral systems of the EP elections closely resemble those of the national elections, the same kinds of profiles emerge as in the national elections. The electoral systems indeed treat the parties differently, and six different profile types can be found. The counting method seems to somewhat determine the profile group, but the strongest variables determining the shape of a country's profile appear to be the average district magnitude and the number of seats allocated to each country.
The second article also focuses on the overall proportionality performance of an electoral system, but here the focus is on the impact of electoral system changes. I have developed a new method of visualizing some previously used indices, and some new indices, for this purpose. The aim is to draw a comparable picture of these electoral systems' changes and their effects. The cases that illustrate this method are four electoral systems in which a change occurred in one of the system variables while the rest remained unchanged. The studied cases include the French, Greek, and British European parliamentary systems and the Swedish national parliamentary system. The changed variables are the electoral type (plurality changed to PR in the UK), magnitude (France splitting the nationwide district into eight smaller districts), legal threshold (Greece introducing a three percent threshold), and counting method (d'Hondt changed to modified Sainte-Laguë in Sweden). Radar plots from elections before and after the changes are drawn for all country cases. To quantify the change, the change in the area enclosed by the plots has also been calculated. Using these radar plots, we can observe that the changes in electoral system type, magnitude, and, to some extent, legal threshold had an effect on overall proportionality and on accessibility for small parties, while the change between the two highest-averages counting methods had none. The third article studies the success minority parties have had in nine electoral systems in heterogeneous European countries. This article aims to add further motivation as to why we should care how parties of different sizes are treated by electoral systems. Since many of the parties that aspire to represent minorities in European countries are small, the possibilities for small parties are highlighted.
The theory of consociational (or power-sharing) democracy suggests that, in heterogeneous societies, a proportional electoral system will provide the fairest treatment of minority parties. The OSCE Lund Recommendations propose a number of electoral system features that would improve minority representation. In this article, some party variables, namely the unity of the minority parties and the geographical concentration of the minorities, were included among the possible explanations. The conclusion is that the central points affecting minority success were indeed these non-electoral-system variables rather than the electoral system itself. Moreover, the size of the party was a major factor governing success in all the systems investigated; large parties benefited in all the studied electoral systems. In the fourth article, the proportionality profiles are again applied, this time to district-level results in Finnish parliamentary elections. The level of proportionality distortion is also studied by way of indices. The average magnitudes during the studied period range from 7.5 to 26.2 in the Finnish electoral districts, which opens up unequal opportunities for parties in different districts and affects the shape of the profiles. The intra-country case allows the focus to be placed on the effect of district magnitude, since all other electoral system features are kept constant in an intra-country study. The time span of the study is from 1962 to 2007, i.e., the period during which the districts have largely remained geographically the same. The plots and indices tell the same story: district magnitude and electoral alliances matter. District magnitude is connected to the overall proportionality of the electoral districts according to both indices, and the profiles are, as expected, also closer to perfect proportionality in large districts.
Alliances have helped some small parties gain a much higher seat share than their respective vote share, and these successes affect some of the profiles. The profiles also show a consistent pattern of benefits for small parties that ally with larger parties.
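The advantage ratios used in article 3, and an overall disproportionality index of the kind used elsewhere in the thesis, can be illustrated as follows; the Gallagher least-squares index is one common overall measure, and all vote and seat shares below are hypothetical:

```python
def advantage_ratio(seat_share, vote_share):
    """A = s/v: above 1 the party is over-represented, below 1 under-represented."""
    return seat_share / vote_share

def gallagher_index(vote_shares, seat_shares):
    """Gallagher least-squares index of disproportionality (percentage shares):
    sqrt(0.5 * sum((v_i - s_i)**2))."""
    return (0.5 * sum((v - s) ** 2
                      for v, s in zip(vote_shares, seat_shares))) ** 0.5

votes = [45.0, 35.0, 15.0, 5.0]   # hypothetical vote shares (%)
seats = [50.0, 36.0, 12.0, 2.0]   # hypothetical seat shares (%)
ratios = [advantage_ratio(s, v) for v, s in zip(votes, seats)]
lsq = gallagher_index(votes, seats)
```

In this toy example the large party is over-represented (ratio above 1) and the smallest is under-represented, the pattern the thesis reports across the studied systems.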
Abstract:
The global temperature response to increasing atmospheric CO2 is often quantified by metrics such as equilibrium climate sensitivity and transient climate response [1]. These approaches, however, do not account for carbon cycle feedbacks and therefore do not fully represent the net response of the Earth system to anthropogenic CO2 emissions. Climate–carbon modelling experiments have shown that: (1) the warming per unit CO2 emitted does not depend on the background CO2 concentration [2]; (2) the total allowable emissions for climate stabilization do not depend on the timing of those emissions [3-5]; and (3) the temperature response to a pulse of CO2 is approximately constant on timescales of decades to centuries [3, 6-8]. Here we generalize these results and show that the carbon–climate response (CCR), defined as the ratio of temperature change to cumulative carbon emissions, is approximately independent of both the atmospheric CO2 concentration and its rate of change on these timescales. From observational constraints, we estimate the CCR to be in the range 1.0–2.1 °C per trillion tonnes of carbon (Tt C) emitted (5th to 95th percentiles), consistent with twenty-first-century CCR values simulated by climate–carbon models. Uncertainty in land-use CO2 emissions and aerosol forcing, however, means that higher observationally constrained values cannot be excluded. The CCR, when evaluated from climate–carbon models under idealized conditions, represents a simple yet robust metric for comparing models, aggregating both climate feedbacks and carbon cycle feedbacks. The CCR is also likely to be a useful concept for climate change mitigation and policy: by combining the uncertainties associated with climate sensitivity, carbon sinks, and climate–carbon feedbacks into a single quantity, it allows CO2-induced global mean temperature change to be inferred directly from cumulative carbon emissions.
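The CCR's practical appeal is that warming follows directly from cumulative emissions; a back-of-envelope sketch using the quoted 5th-95th percentile range:

```python
def warming(cumulative_emissions_ttc, ccr):
    """Global mean temperature change (deg C) = CCR (deg C per Tt C)
    times cumulative carbon emissions (Tt C)."""
    return ccr * cumulative_emissions_ttc

# One trillion tonnes of carbon emitted, bracketed by the abstract's CCR range.
low  = warming(1.0, 1.0)   # lower percentile estimate
high = warming(1.0, 2.1)   # upper percentile estimate
```

Because the CCR is approximately independent of the CO2 concentration and its rate of change, the same linear relation applies regardless of the emissions pathway on these timescales.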
Abstract:
We demonstrate that the short-range spin correlator ⟨S_i · S_j⟩, a fundamental measure of the interaction between adjacent spins, can be directly measured in certain insulating magnets. We present magnetostriction data for the insulating organic compound NiCl2·4SC(NH2)2 and show that the magnetostriction as a function of field is proportional to the dominant short-range spin correlator. Furthermore, the constant of proportionality between the magnetostriction and the spin correlator gives information about the spin-lattice interaction. Combining these results with the measured Young's modulus, we are able to extract dJ/dz, the dependence of the superexchange constant J on the Ni interionic distance z.
Abstract:
de Souza Jr, TP, Fleck, SJ, Simao, R, Dubas, JP, Pereira, B, de Brito Pacheco, EM, da Silva, AC, and de Oliveira, PR. Comparison between constant and decreasing rest intervals: influence on maximal strength and hypertrophy. J Strength Cond Res 24(7): 1843-1850, 2010. Most resistance training programs use constant rest period lengths between sets and exercises, but some programs use decreasing rest period lengths as training progresses. The aim of this study was to compare the effect on strength and hypertrophy of 8 weeks of resistance training using constant rest intervals (CIs) and decreasing rest intervals (DIs) between sets and exercises. Twenty young men recreationally trained in strength training were randomly assigned to either a CI or DI training group. During the first 2 weeks of training, 3 sets of 10-12 repetition maximum (RM) with 2-minute rest intervals between sets and exercises were performed by both groups. During the next 6 weeks of training, the CI group trained using 2 minutes between sets and exercises (4 sets of 8-10RM), and the DI group trained with DIs (2 minutes decreasing to 30 seconds) as the 6 weeks of training progressed (4 sets of 8-10RM). Total training volume of the bench press and squat was significantly lower for the DI than for the CI group (bench press 9.4%, squat 13.9%), and weekly training volume of these same exercises was lower in the DI group from weeks 6 to 8 of training. Strength (1RM) in the bench press and squat, knee extensor and flexor isokinetic measures of peak torque, and muscle cross-sectional area (CSA) using magnetic resonance imaging were assessed pretraining and posttraining. No significant differences (p ≤ 0.05) were shown between the CI and DI training protocols for CSA (arm 13.8 vs. 14.5%, thigh 16.6 vs. 16.3%), 1RM (bench press 28 vs. 37%, squat 34 vs. 34%), and isokinetic peak torque.
In conclusion, the results indicate that a training protocol with DI is just as effective as a CI protocol over short training periods (6 weeks) for increasing maximal strength and muscle CSA; thus, either type of program can be used over a short training period to cause strength and hypertrophy.
Abstract:
The adaptive process in motor learning was examined in terms of the effects of varying amounts of constant practice performed before random practice. Participants pressed five response keys sequentially, the last one coincident with the lighting of a final visual stimulus provided by a complex coincident-timing apparatus. Different visual stimulus speeds were used during the random practice. Thirty-three children (M age = 11.6 yr.) were randomly assigned to one of three experimental groups: constant-random, constant-random 33%, and constant-random 66%. The constant-random group practiced constantly until reaching a performance stabilization criterion of three consecutive trials within 50 msec. of error. The other two groups had additional constant practice of 33% and 66%, respectively, of the number of trials needed to achieve the stabilization criterion. All three groups then performed 36 trials under random practice; in the adaptation phase, they practiced at a visual stimulus speed different from that adopted in the stabilization phase. Global performance measures were absolute, constant, and variable errors, and movement pattern was analyzed by relative timing and overall movement time. There was no group difference in the global performance measures or in overall movement time. However, differences between the groups were observed in movement pattern, since the constant-random 66% group changed its relative timing performance in the adaptation phase.
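The three global performance measures named above (absolute, constant, and variable error) are standard timing-error statistics; a minimal sketch with hypothetical error values:

```python
def timing_errors(errors_ms):
    """Constant error (mean signed error), absolute error (mean magnitude),
    and variable error (SD of the errors about the constant error)."""
    n = len(errors_ms)
    ce = sum(errors_ms) / n
    ae = sum(abs(e) for e in errors_ms) / n
    ve = (sum((e - ce) ** 2 for e in errors_ms) / n) ** 0.5
    return ce, ae, ve

# Hypothetical signed timing errors (ms) over four trials.
ce, ae, ve = timing_errors([-20.0, 10.0, 40.0, -30.0])
```

A performer can have zero constant error (no bias) yet a large variable error (inconsistency), which is why the three measures are reported separately.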
Abstract:
Despite the frequent use of stepping motors in robotics, automation, and a variety of precision instruments, they can hardly be found in rotational viscometers. This paper proposes the use of a stepping motor to drive a conventional constant-shear-rate laboratory rotational viscometer, avoiding the use of a velocity sensor and gearbox and thus simplifying the instrument design. To investigate this driving technique, a commercial rotating viscometer was adapted to be driven by a bipolar stepping motor controlled via a personal computer. Special circuitry was added to microstep the stepping motor at selectable step sizes and to condition the torque signal. Tests were carried out using the prototype to produce flow curves for two standard Newtonian fluids (920 and 12,560 mPa·s, both at 25 °C). The flow curves were obtained employing several distinct microstep sizes within the shear rate range of 50-500 s⁻¹. The results indicate the feasibility of the proposed driving technique.
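The appeal of the stepping motor here is that shaft speed, and hence shear rate, is set purely by pulse timing, so no velocity sensor is needed; a sketch assuming a typical 1.8-degree motor and 16 microsteps per full step (both assumed values, not the paper's specification):

```python
import math

def spindle_speed_rad_s(pulse_rate_hz, full_step_deg=1.8, microsteps=16):
    """Angular velocity of a microstepped motor: each pulse advances the
    rotor by full_step_deg / microsteps degrees, so speed follows directly
    from the commanded pulse rate."""
    return math.radians(full_step_deg / microsteps) * pulse_rate_hz

def newtonian_shear_stress(viscosity_pa_s, shear_rate):
    """For a Newtonian standard fluid, stress = eta * shear rate."""
    return viscosity_pa_s * shear_rate

omega = spindle_speed_rad_s(1600.0)  # rad/s at 1600 pulses per second
```

Selecting a different microstep size changes the angular increment per pulse, which is how the prototype sweeps the flow curve across shear rates.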
Abstract:
The micro-scale abrasive wear test by rotating ball has gained wide acceptance in universities and research centers and is widely used in studies of the abrasive wear of materials. Two wear modes are usually observed in this type of test: "rolling abrasion" results when the abrasive particles roll on the surface of the tested specimen, while "grooving abrasion" is observed when the abrasive particles slide; the wear mode has a significant effect on the overall behaviour of a tribological system. Several works on the friction coefficient during abrasive wear tests are available in the literature, but only a few are dedicated to the friction coefficient in micro-abrasive wear tests conducted with a rotating ball. Additionally, recent works have identified that results may also be affected by the change in contact pressure that occurs when tests are conducted with constant applied force. Thus, the purpose of this work is to study the relationship between the friction coefficient and the abrasive wear modes in ball-cratering wear tests conducted at "constant normal force" and "constant pressure". Micro-scale abrasive wear tests were conducted with a ball of AISI 52100 steel and a specimen of AISI H10 tool steel. The abrasive slurry was prepared with black silicon carbide (SiC) particles (average particle size of 3 μm) and distilled water. Two constant normal force values and two constant pressure values were selected for the tests. The tangential and normal loads were monitored throughout the tests, and their ratio was calculated to provide an indication of the friction coefficient. In all cases, optical microscopy analysis of the worn craters revealed only the presence of grooving abrasion. However, a more detailed analysis conducted by SEM indicated that different degrees of rolling abrasion had also occurred along the grooves.
The results have also shown that: (i) for the selected values of constant normal force and constant pressure, the friction coefficient presents approximately the same range of values, and (ii) loading conditions play an important role in the occurrence of rolling abrasion or grooving abrasion and, consequently, in the average value and scatter of the friction coefficient in micro-abrasive wear tests.
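The friction-coefficient indication described above is simply the ratio of the monitored tangential load to the normal load; a sketch with hypothetical load samples:

```python
def friction_coefficient(tangential_n, normal_n):
    """Per-sample friction coefficient indication: tangential / normal load."""
    return [t / n for t, n in zip(tangential_n, normal_n)]

# Hypothetical monitored loads (N) at three instants of a test.
mu = friction_coefficient([0.30, 0.33, 0.36], [1.0, 1.0, 1.0])
mean_mu = sum(mu) / len(mu)       # average value reported per test
scatter = max(mu) - min(mu)       # simple measure of scatter
```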
Abstract:
Due to its outstanding flexibility, batch distillation is still widely used in many separation processes. In the present work, a comparison between constant- and variable-reflux operations is studied. First, a mathematical model is developed and validated through comparison between predicted results and experimental results obtained in a lab-scale apparatus. Case studies are then performed through mathematical simulations. It is noted that the most economical form of batch distillation is operation at constant overhead product composition, keeping the flow rate of vapor from the top of the column constant.
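The paper's model is a full column simulation; as a minimal related illustration of batch-distillation material balance, the Rayleigh equation for a simple still with constant relative volatility gives the fraction of charge remaining as the pot composition falls:

```python
import math

def charge_remaining(x0, x1, alpha):
    """Rayleigh equation for a binary batch still with constant relative
    volatility alpha: returns W/W0, the fraction of the initial charge left
    when the still-pot mole fraction of the light component drops from x0 to x1.
    ln(W/W0) = [ln(x1/x0) + alpha*ln((1-x0)/(1-x1))] / (alpha - 1)."""
    return math.exp((math.log(x1 / x0)
                     + alpha * math.log((1 - x0) / (1 - x1))) / (alpha - 1))

frac = charge_remaining(0.5, 0.25, 2.0)
```

This closed form follows from integrating dW/W = dx/(y − x) with the equilibrium relation y = αx / (1 + (α − 1)x); a real column adds stages and reflux, which is what the paper's model handles.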