984 results for proportional mixer
Abstract:
Exchange reactions between molecular complexes and excess acid
or base are well known and have been extensively surveyed in the
literature (1). Since the exchange mechanism will, in some way,
involve the breaking of the labile donor-acceptor bond, it follows
that a discussion of the factors relating to bonding in molecular complexes
will be relevant.
In general, a strong Lewis base and a strong Lewis acid form a
stable adduct provided that certain stereochemical requirements are
met.
A strong Lewis base has the following characteristics (1), (2):
(i) high electron density at the donor site.
(ii) a non-bonded electron pair which has a low ionization potential.
(iii) electron-donating substituents at the donor atom site.
(iv) facile approach of the site of the Lewis base to the
acceptor site as dictated by the steric hindrance of the
substituents.
Examples of typical Lewis bases are ethers, nitriles, ketones,
alcohols, amines and phosphines.
For a strong Lewis acid, the following properties are important:
(i) low electron density at the acceptor site.
(ii) electron-withdrawing substituents.
(iii) substituents which do not interfere with the close approach of the Lewis base.
(iv) availability of a vacant orbital capable of accepting the lone electron pair of the donor atom.
Examples of Lewis acids are the group III and IV halides such as MX3 (M = B, Al, Ga, In) and MX4 (M = Si, Ge, Sn, Pb).
The relative bond strengths of molecular complexes have been investigated by:
(i) dipole moment measurements (3);
(ii) shifts of the carbonyl peaks in the I.R. (4), (5), (6);
(iii) NMR chemical shift data (4), (7), (8), (9);
(iv) U.V. and visible spectrophotometric shifts (10), (11);
(v) equilibrium constant data (12), (13);
(vi) heats of dissociation and heats of reaction (14), (15), (16), (17), (18), (19).
Many experiments have been carried out on boron trihalides in order to determine their relative acid strengths. Using pyridine, nitrobenzene, acetonitrile and trimethylamine as reference Lewis bases, it was found that the acid strength varied in the order BBr3 > BCl3 > BF3. For the acetonitrile-boron trihalide and trimethylamine-boron trihalide complexes in nitrobenzene, an NMR study (7) showed that the shift to lower field was greatest for the BBr3 adduct and smallest for the BF3 adduct, which is in agreement with the acid strengths. If the electronegativities of the substituents were the only important effect, and since these decrease in the order F > Cl > Br, one would expect the electron density at the boron nucleus to vary as BF3
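The contrast this abstract is building toward can be summarized compactly. As a hedged aside (these are the standard orderings implied by the text above, not wording taken from the thesis): if substituent electronegativity alone governed the electron density at boron, the predicted Lewis acidity would be the reverse of what the reference-base and NMR studies cited above actually find,
\[
\underbrace{\mathrm{BF_3 > BCl_3 > BBr_3}}_{\text{predicted from electronegativity alone}}
\qquad\text{versus}\qquad
\underbrace{\mathrm{BBr_3 > BCl_3 > BF_3}}_{\text{observed acid strength}} .
\]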
Abstract:
Cytochrome c oxidase inserted into proteoliposomes translocates protons with a stoichiometry of approximately 0.4-0.6 H+/e- in the presence of valinomycin plus potassium. The existence of such proton translocation is supported by experiments with lauryl maltoside, which abolished the pulses but did not inhibit cyt. c binding or oxidase turnover. Pulses with K3Fe(CN)6 did not induce acidification, further supporting vectorial proton transport by cyt. aa3. Upon lowering the ionic strength and pulsing with ferrocytochrome c, H+/e- ratios increased. This increase is attributed to scalar proton release consequent upon cyt. c-phospholipid binding. Oxygen pulses at low ionic strength, however, did not exhibit this large scalar increase in H+/e- ratios. A small increase was observed upon O2 pulsing at low ionic strength; this increase was KCN- and FCCP-sensitive and thus possibly due to a redox-linked scalar deprotonation. Increases in the H+/e- ratio also occurred if pulses were performed in the presence of nonactin rather than valinomycin. The fluorescent pH indicator pyranine was internally trapped in aa3-containing proteoliposomes. Internal alkalinization, as monitored by pyranine fluorescence, amounted to approximately 0.35 pH units and was proportional to electron flux. This internal alkalinization was also DCCD-sensitive, being inhibited by approximately 50%. This 50% inhibition of internal alkalinization supports the existence of vectorial proton transport.
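As a quick illustration of the stoichiometry being reported (a hypothetical calculation, not taken from the thesis), the H+/e- ratio for a reductant pulse is simply the protons appearing outside the vesicles divided by the electrons delivered; the Python sketch below uses invented numbers and names:

# Hypothetical H+/e- stoichiometry for a ferrocytochrome c pulse:
# each ferrocytochrome c oxidized by the oxidase donates one electron.
def h_per_e_ratio(nmol_h_released, nmol_ferrocyt_c_oxidized):
    return nmol_h_released / nmol_ferrocyt_c_oxidized

# e.g. 4.8 nmol H+ detected for a 10 nmol ferrocytochrome c pulse -> 0.48 H+/e-
print(h_per_e_ratio(nmol_h_released=4.8, nmol_ferrocyt_c_oxidized=10.0))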
Abstract:
Median lethal temperatures (LT50's) were determined for rainbow trout, Salmo gairdnerii, acclimated for a minimum of 21 days at 5 constant temperatures between 4 and 20 °C and 2 diel temperature fluctuations (sine-wave curves of amplitudes ±4 and ±7 °C about a mean temperature of 12 °C). Twenty-four-, 48-, and 96-hour LT50 estimates were calculated following standard flow-through aquatic bioassay techniques and probit transformation of mortality data. The phenomenon of delayed thermal mortality was also investigated. Shifts in upper incipient lethal temperature occurred as a result of previous thermal conditioning. It was shown that increases in constant acclimation temperature result in proportional linear increases in thermal tolerances. The increase in estimated 96-hour LT50's was approximately 0.13 °C per 1 °C between 8 and 20 °C. The effect of acclimation to both cyclic temperature regimes was an increase in LT50 to values between the mean and maximum constant-equivalent daily temperatures of the cycles. Twenty-four-, 48-, and 96-hour LT50 estimates for both cycles corresponded approximately to the LT50 values of the 16 °C constant-temperature equivalent. This increase in thermal tolerance was further demonstrated by the delayed thermal mortality experiments. Cycle amplitudes appeared to influence thermal resistance through alterations in initial mortality, since mortality patterns characteristic of base-temperature acclimations re-appeared after approximately 68 hours of exposure to test temperatures for the 12 ± 4 °C group, whereas mortality patterns stabilized and remained constant for a period greater than 192 hours with the larger thermal cycle (12 ± 7 °C). No significant correlations between specimen weight and time-to-death were apparent. Data are discussed in relation to the establishment of thermal criteria for important commercial and sport fishes, such as the salmonids, as is the question whether previously reported values of lethal temperatures may have been underestimated.
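The LT50 estimates above rest on probit analysis of mortality data; as a rough illustration of that standard technique (with made-up temperatures and mortality proportions, not the thesis's data), the Python sketch below transforms mortality to probits, fits a line against temperature, and solves for the temperature giving 50% mortality:

# Illustrative probit-based LT50 estimate (hypothetical data, standard method).
import numpy as np
from scipy.stats import norm

temps = np.array([22.0, 23.0, 24.0, 25.0, 26.0])       # test temperatures, degrees C
mortality = np.array([0.05, 0.20, 0.55, 0.85, 0.98])   # 96-h mortality proportions

probits = norm.ppf(mortality)                      # probit transform of mortality
slope, intercept = np.polyfit(temps, probits, 1)   # probit = intercept + slope * T
lt50 = -intercept / slope                          # probit = 0 is 50% mortality
print(f"Estimated 96-h LT50: {lt50:.2f} degrees C")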
Abstract:
The frequency dependence of the electron-spin fluctuation spectrum, P(Q), is calculated in the finite bandwidth model. We find that for Pd, which has a nearly full d-band, the magnitude, the range, and the peak frequency of P(Q) are greatly reduced from those in the standard spin fluctuation theory. The electron self-energy due to spin fluctuations is calculated within the finite bandwidth model. Vertex corrections are examined, and we find that Migdal's theorem is valid for spin fluctuations in the nearly full band. The conductance of a normal metal-insulator-normal metal tunnel junction is examined when spin fluctuations are present in one electrode. We find that for the nearly full band, the momentum independent self-energy due to spin fluctuations enters the expression for the tunneling conductance with approximately the same weight as the self-energy due to phonons. The effect of spin fluctuations on the tunneling conductance is slight within the finite bandwidth model for Pd. The effect of spin fluctuations on the tunneling conductance of a metal with a less full d-band than Pd may be more pronounced. However, in this case the tunneling conductance is not simply proportional to the self-energy.
Abstract:
A strain of Drosophila melanogaster deficient in amylase activity (Amylase-null) was isolated from a wild population of flies. The survivorship of Amylase-null homozygous flies is very low when the principal dietary carbohydrate source is starch. However, the survivorship of the Amylase-null genotype is comparable to the wild type when the dietary starch is replaced by glucose. In addition, the viability of the amylase-producing and Amylase-null strains is comparable, and very low, on a medium with no carbohydrates. Furthermore, amylase-producing genotypes were shown to excrete enzymatically active amylase protein into the food medium. The excreted amylase causes the external breakdown of dietary starch to sugar. These results led to the following prediction: the viability of the Amylase-null genotype (fed on a starch-rich diet) might increase in the presence of individuals which were amylase-producing. It was shown experimentally that such an increase in viability did in fact occur and that this increase was proportional to the number of amylase-producing flies present. These results provide a unique example of a non-competitive inter-genotype interaction, and one where the underlying physiological and biochemical mechanism has been fully understood.
Abstract:
BACKGROUND: Dyslipidemia is recognized as a major cause of coronary heart disease (CHD). Emerging evidence suggests that the combination of triglycerides (TG) and waist circumference can be used to predict the risk of CHD. However, considering the known limitations of TG, a non-high-density lipoprotein (non-HDL = total cholesterol - HDL cholesterol) cholesterol and waist circumference model may be a better predictor of CHD. PURPOSE: The Framingham Offspring Study data were used to determine if combined non-HDL cholesterol and waist circumference is equivalent to or better than TG and waist circumference (the hypertriglyceridemic waist phenotype) in predicting risk of CHD. METHODS: A total of 3,196 individuals from the Framingham Offspring Study, aged ≥ 40 years, who fasted overnight for ≥ 9 hours, and had no missing information on non-HDL cholesterol, TG levels, and waist circumference measurements, were included in the analysis. Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) was used to compare the predictive ability of non-HDL cholesterol and waist circumference versus TG and waist circumference. Cox proportional-hazards models were used to examine the association between the joint distributions of non-HDL cholesterol, waist circumference, and non-fatal CHD; TG, waist circumference, and non-fatal CHD; and the joint distribution of non-HDL cholesterol and TG by waist circumference strata, after adjusting for age, gender, smoking, alcohol consumption, diabetes, and hypertension status. RESULTS: The ROC AUCs associated with non-HDL cholesterol and waist circumference and with TG and waist circumference are 0.6428 (CI: 0.6183, 0.6673) and 0.6299 (CI: 0.6049, 0.6548), respectively. The difference in the ROC AUC is 1.29%. The p-value testing whether the difference in the ROC AUCs between the two models is zero is 0.10. There was a stronger positive association between non-HDL cholesterol and the risk of non-fatal CHD within each TG level than for TG within each level of non-HDL cholesterol, especially in individuals with high waist circumference status. CONCLUSION: The results suggest that the model including non-HDL cholesterol and waist circumference may be superior at predicting CHD compared to the model including TG and waist circumference.
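As a hedged sketch of the two analyses described in the Methods (not the study's actual code; the file, column names, and numeric coding of covariates are assumptions), the ROC AUC comparison and an adjusted Cox proportional-hazards fit could look roughly like this in Python:

# Sketch of the two comparisons in the abstract; all names below are hypothetical
# and covariates are assumed to be numerically coded.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from lifelines import CoxPHFitter

df = pd.read_csv("framingham_offspring_subset.csv")   # hypothetical extract
y = df["chd_event"]

auc = {}
for label, cols in {"non-HDL + waist": ["non_hdl", "waist"],
                    "TG + waist": ["tg", "waist"]}.items():
    model = LogisticRegression(max_iter=1000).fit(df[cols], y)
    auc[label] = roc_auc_score(y, model.predict_proba(df[cols])[:, 1])
print(auc)   # compare the two areas under the ROC curve

# Cox proportional-hazards model adjusted for the covariates listed in the abstract
covars = ["non_hdl", "waist", "age", "gender", "smoking", "alcohol", "diabetes", "hypertension"]
cph = CoxPHFitter()
cph.fit(df[covars + ["time_to_chd", "chd_event"]],
        duration_col="time_to_chd", event_col="chd_event")
cph.print_summary()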
Abstract:
The purpose of this study was to test the hypothesis that the potentiation of dynamic function was dependent upon both length change speed and direction. Mouse EDL was cycled in vitro (25 °C) about optimal length (Lo) with constant peak strain (± 2.5% Lo) at 1.5, 3.3 and 6.9 Hz before and after a conditioning stimulus. A single pulse was applied during shortening or lengthening and peak dynamic (concentric or eccentric) forces were assessed at Lo. Stimulation increased peak concentric force at all frequencies (range: 19 ± 1 to 30 ± 2%) but this increase was proportional to shortening speed, as were the related changes to concentric work/power (range: -15 ± 1 to 39 ± 1%). In contrast, stimulation did not increase eccentric force, work or power at any frequency. Thus, the results reveal a unique hysteresis-like effect for the potentiation of dynamic output wherein concentric and eccentric forces increase and decrease, respectively, with work cycle frequency.
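For orientation, concentric work and power in a work-loop protocol like this are typically obtained by integrating force over the length change during shortening and scaling by cycle frequency; the sketch below (hypothetical signal names, not the study's analysis code) shows the arithmetic:

# Concentric work and power from a sampled work-loop record (illustrative only).
import numpy as np

def concentric_work_and_power(force_N, length_m, cycle_freq_hz):
    dL = np.diff(length_m)                 # length change between samples
    dW = force_N[:-1] * dL                 # incremental work at each sample
    shortening = dL < 0                    # samples where the muscle is shortening
    work_J = -np.sum(dW[shortening])       # positive work done during shortening
    power_W = work_J * cycle_freq_hz       # work per cycle times cycles per second
    return work_J, power_W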
Abstract:
Despite its importance to postsecondary students' success, little is known about academic advisement in Canada. Academic advising can be a very intensive and demanding job, yet it is not well understood which advising duties or student populations make it so. On a practical level, this study sought to learn more about academic advisement in Ontario universities and provide a general overview of who advisors are and what they do. This study also investigated academic advising duties and time allocation for these responsibilities in an attempt to relate theory to practice, drawing on Vilfredo Pareto's theoretical underpinnings to confirm or refute the applicability of the Pareto Principle to advisors' time utilization. Essentially, this study sought to discover which students require the greatest advisement time and effort, and how advisors could apply these findings to their work. Academic advising professionals in Ontario universities were asked to complete a researcher-designed electronic survey. Quantitative data from the responses were analyzed to describe generalized features of academic advising at Ontario universities. The discussion and implications for practice will prompt advisors and institutions using the results of this study to measure themselves against a provincial assessment. Advisors' awareness of time allocation to different student groups can help focus attention where new strategies are needed to maximize time and effort. This study found that caseload and time spent with student populations were proportional: regular undergraduate students accounted for the greatest share of caseload and time, followed by students struggling academically. This study highlights the need for further evaluation, education, and research on academic advising in Canadian higher education.
Abstract:
Volume(density)-independent pair-potentials cannot describe metallic cohesion adequately as the presence of the free electron gas renders the total energy strongly dependent on the electron density. The embedded atom method (EAM) addresses this issue by replacing part of the total energy with an explicitly density-dependent term called the embedding function. Finnis and Sinclair proposed a model where the embedding function is taken to be proportional to the square root of the electron density. Models of this type are known as Finnis-Sinclair many body potentials. In this work we study a particular parametrization of the Finnis-Sinclair type potential, called the "Sutton-Chen" model, and a later version, called the "Quantum Sutton-Chen" model, to study the phonon spectra and the temperature variation of thermodynamic properties of fcc metals. Both models give poor results for thermal expansion, which can be traced to rapid softening of transverse phonon frequencies with increasing lattice parameter. We identify the power law decay of the electron density with distance assumed by the model as the main cause of this behaviour and show that an exponentially decaying form of charge density improves the results significantly. Results for Sutton-Chen and our improved version of Sutton-Chen models are compared for four fcc metals: Cu, Ag, Au and Pt. The calculated properties are the phonon spectra, thermal expansion coefficient, isobaric heat capacity, adiabatic and isothermal bulk moduli, atomic root-mean-square displacement and Grüneisen parameter. For the sake of comparison we have also considered two other models where the distance-dependence of the charge density is an exponential multiplied by polynomials. None of these models exhibits the instability against thermal expansion (premature melting) exhibited by the Sutton-Chen model. We also present results obtained via pure pair potential models, in order to identify advantages and disadvantages of methods used to obtain the parameters of these potentials.
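For orientation, the Sutton-Chen parametrization of the Finnis-Sinclair form discussed above is conventionally written as follows (the standard published form with element-specific parameters ε, a, c and integer exponents n, m, quoted here for context rather than taken from this thesis):
\[
E_{\mathrm{tot}} \;=\; \varepsilon \sum_i \left[ \frac{1}{2}\sum_{j \neq i} \left(\frac{a}{r_{ij}}\right)^{n} \;-\; c\,\sqrt{\rho_i} \right],
\qquad
\rho_i \;=\; \sum_{j \neq i} \left(\frac{a}{r_{ij}}\right)^{m},
\]
so the embedding function is the square root of a pairwise electron density that decays as a power law in distance, which is exactly the feature the abstract identifies as the source of the transverse-mode softening.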
Abstract:
This paper studies the proposition that an inflation bias can arise in a setup where a central banker with asymmetric preferences targets the natural unemployment rate. Preferences are asymmetric in the sense that positive unemployment deviations from the natural rate are weighted more (or less) severely than negative deviations in the central banker's loss function. The bias is proportional to the conditional variance of unemployment. The time-series predictions of the model are evaluated using data from G7 countries. Econometric estimates support the prediction that the conditional variance of unemployment and the rate of inflation are positively related.
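One standard way to formalize the asymmetry described here (a sketch of the usual linex-preferences argument, not necessarily the paper's exact specification) is to give the central banker the period loss
\[
L_t \;=\; \tfrac{1}{2}\,(\pi_t-\pi^*)^2 \;+\; \phi\,\frac{e^{\gamma\,(u_t-u^{n})}-\gamma\,(u_t-u^{n})-1}{\gamma^{2}},
\]
where \(\gamma > 0\) penalizes positive unemployment gaps more heavily than negative ones. Because \(E_t\!\left[e^{\gamma(u_t-u^{n})}\right] \approx 1 + \tfrac{\gamma^{2}}{2}\operatorname{Var}_t(u_t)\) for a conditionally normal gap, the optimal policy carries an inflation bias roughly proportional to \(\operatorname{Var}_t(u_t)\), with a coefficient collecting \(\gamma\), \(\phi\) and the slope of the Phillips curve; this is the positive variance-inflation relation the G7 estimates are checking.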
Abstract:
We survey recent axiomatic results in the theory of cost-sharing. In this literature, a method computes the individual cost shares assigned to the users of a facility for any profile of demands and any monotonic cost function. We discuss two theories taking radically different views of the asymmetries of the cost function. In the full responsibility theory, each agent is accountable for the part of the costs that can be unambiguously separated and attributed to her own demand. In the partial responsibility theory, the asymmetries of the cost function have no bearing on individual cost shares, only the differences in demand levels matter. We describe several invariance and monotonicity properties that reflect both normative and strategic concerns. We uncover a number of logical trade-offs between our axioms, and derive axiomatic characterizations of a handful of intuitive methods: in the full responsibility approach, the Shapley-Shubik, Aumann-Shapley, and subsidy-free serial methods, and in the partial responsibility approach, the cross-subsidizing serial method and the family of quasi-proportional methods.
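As a concrete reference point for one of the full-responsibility methods named above, the Aumann-Shapley method charges each user her demand times her average marginal cost along the diagonal of the demand profile (the standard formula, quoted for orientation rather than derived in the survey):
\[
x_i(q) \;=\; q_i \int_0^1 \partial_i C(t\,q)\,\mathrm{d}t .
\]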
Abstract:
Affiliation: Mark Daniel: Département de médecine sociale et préventive, Faculté de médecine, Université de Montréal et Centre de recherche du Centre hospitalier de l'Université de Montréal
Abstract:
We ask how the three known mechanisms for solving cost sharing problems with homogeneous cost functions - the value, the proportional, and the serial mechanisms - should be extended to arbitrary problems. We propose the Ordinality axiom, which requires that cost shares be invariant under all transformations preserving the nature of a cost sharing problem.
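For a single homogeneous good with demands q_1 ≤ … ≤ q_n, the proportional and (Moulin-Shenker) serial mechanisms referred to above take the following standard forms (given here for orientation; the abstract's question is precisely how to extend such formulas to arbitrary, non-homogeneous problems):
\[
x_i^{\mathrm{prop}} \;=\; \frac{q_i}{\sum_j q_j}\; C\!\Bigl(\sum_j q_j\Bigr),
\qquad
x_i^{\mathrm{serial}} \;=\; \sum_{k=1}^{i}\frac{C(s_k)-C(s_{k-1})}{n-k+1},
\quad s_k = q_1+\dots+q_{k-1}+(n-k+1)\,q_k,\;\; s_0=0 .
\]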
Abstract:
Context. Case-control studies are very frequently used by epidemiologists to assess the impact of certain exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, the conventional method for analyzing case-control data, does not directly account for changes in covariate values over time. By contrast, survival analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the over-sampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of risk sets for the analysis of case-control data remains to be elucidated, and to be studied for time-dependent variables. Objective: The general objective is to propose and study new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods. I identified new, potentially optimal, risk-set definitions (the Weighted Cox model and the Simple weighted Cox model), in which different weights were assigned to cases and controls in order to reflect the proportions of cases and non-cases in the source population. The properties of the estimators of the exposure effects were studied by simulation. Different aspects of exposure were generated (intensity, duration, cumulative exposure). The generated case-control data were then analyzed with different versions of the Cox model, including the existing and new risk-set definitions, as well as with conventional logistic regression for comparison. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of the various smoking variables obtained with the different methods were compared with one another, and with the simulation results. Results. The simulation results show that the estimates from the proposed new weighted Cox models, especially those of the Weighted Cox model, are much less biased than the estimates from existing Cox models that simply include or exclude the future cases from each risk set. Moreover, the estimates of the Weighted Cox model were slightly, but systematically, less biased than those of logistic regression. The application to the real data shows larger differences between the estimates of logistic regression and of the weighted Cox models for some time-dependent smoking variables. Conclusions. The results suggest that the proposed new weighted Cox model could be an interesting alternative to the logistic regression model for estimating the effects of time-dependent exposures in case-control studies.
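As a rough sketch of the weighting idea described in the Methods (not the thesis's implementation, and leaving aside the time-dependent-covariate machinery it actually uses), one could give cases and controls sampling weights that restore the case/non-case proportions of the source population and pass them to a weighted Cox fit; all file, column, and parameter values below are hypothetical:

# Hedged sketch: weighted Cox model for case-control data using lifelines.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("lung_cancer_case_control.csv")   # hypothetical dataset

# Suppose cases make up 1% of the source population but 50% of the sample:
p_case_pop, p_case_sample = 0.01, 0.50
df["weight"] = df["case"].map({
    1: p_case_pop / p_case_sample,               # down-weight over-sampled cases
    0: (1 - p_case_pop) / (1 - p_case_sample),   # up-weight controls
})

cph = CoxPHFitter()
cph.fit(
    df[["time", "case", "smoking_intensity", "smoking_duration", "weight"]],
    duration_col="time",
    event_col="case",
    weights_col="weight",
    robust=True,   # sandwich variance, often recommended with sampling weights
)
cph.print_summary()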