923 results for Mean Value Theorem
Abstract:
Background and Aims: The objective of the study was to compare data obtained from the Cosmed K4 b2 and the Deltatrac II™ metabolic cart in order to determine the validity of the Cosmed K4 b2 for measuring resting energy expenditure. Methods: Nine adult subjects (four male, five female) were measured. Resting energy expenditure was measured in consecutive sessions, performed in random order, using the Cosmed K4 b2 and the Deltatrac II™ metabolic cart separately and then both devices simultaneously. Resting energy expenditure (REE) data from both devices were then compared with values obtained from predictive equations. Results: Bland-Altman analysis revealed mean biases between the Cosmed K4 b2 and the Deltatrac II™ metabolic cart of 268 ± 702 kcal/day for REE, -0.0 ± 0.2 for respiratory quotient (RQ), 26.4 ± 118.2 ml/min for VCO2 and 51.6 ± 126.5 ml/min for VO2; the corresponding limits of agreement for all four variables were large. Bland-Altman analysis also revealed a larger mean bias between predicted and measured REE when using Cosmed K4 b2 data (-194 ± 603 kcal/day) than when using Deltatrac II™ metabolic cart data (73 ± 197 kcal/day). Conclusions: Variability between the two devices was very high and a degree of measurement error was detected. Data from the Cosmed K4 b2 were variable in comparison with predicted values, and the device would therefore appear to be invalid for measuring REE in adults. © 2002 Elsevier Science Ltd. All rights reserved.
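The Bland-Altman comparison used above reduces, for each variable, to the mean of the paired differences (the bias) and the 95% limits of agreement (bias ± 1.96 × SD of the differences). A minimal sketch in Python, using hypothetical paired REE values rather than the study data:

```python
import numpy as np

def bland_altman(a, b):
    """Return Bland-Altman bias and 95% limits of agreement for paired measurements."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b                                   # paired differences (device A minus device B)
    bias = diff.mean()                             # mean bias
    sd = diff.std(ddof=1)                          # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)     # 95% limits of agreement
    return bias, sd, loa

# Hypothetical REE values (kcal/day) from two devices for nine subjects; NOT the study data.
cosmed    = [1650, 1820, 1500, 1710, 1930, 1600, 1750, 1480, 1880]
deltatrac = [1400, 1600, 1450, 1500, 1700, 1350, 1550, 1300, 1650]
bias, sd, (lo, hi) = bland_altman(cosmed, deltatrac)
print(f"bias = {bias:.0f} kcal/day, SD = {sd:.0f}, limits of agreement = ({lo:.0f}, {hi:.0f})")
```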
Abstract:
Malnutrition is a common problem in children with end-stage liver disease (ESLD), and accurate assessment of nutritional status is essential in managing these children. In a retrospective study, we compared nutritional assessment by anthropometry with that by body composition. We analyzed all consecutive measurements of total body potassium (TBK, n = 186) of children less than 3 years old with ESLD awaiting transplantation found in our database. The TBK values obtained by whole body counting of ⁴⁰K were compared with reference TBK values of healthy children. The prevalence of malnutrition, as assessed by weight (weight Z score < -2), was 28%, which was significantly lower (chi-square test, p < 0.0001) than the prevalence of malnutrition (76%) assessed by TBK (< 90% of expected TBK for age). These results demonstrated that body weight underestimated the nutritional deficit and stressed the importance of measuring body composition as part of assessing nutritional status of children with ESLD.
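The prevalence comparison above (28% malnourished by weight-for-age versus 76% by TBK) amounts to a chi-square test on a 2 × 2 table of counts. A sketch with counts reconstructed approximately from the reported percentages of the 186 measurements (illustrative only; the abstract does not report the raw table, and a paired test such as McNemar's could also be argued for):

```python
import numpy as np
from scipy.stats import chi2

# Approximate counts out of n = 186 TBK measurements, reconstructed from the
# reported prevalences (28% by weight Z score, 76% by TBK); illustrative only.
n = 186
malnourished = np.array([round(0.28 * n), round(0.76 * n)])   # by weight, by TBK
observed = np.column_stack([malnourished, n - malnourished])  # 2 x 2 table

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# with E taken from the product of row and column margins.
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()
stat = ((observed - expected) ** 2 / expected).sum()
p_value = chi2.sf(stat, df=1)                                 # (2-1)*(2-1) = 1 dof

print(f"chi2 = {stat:.1f}, p = {p_value:.2e}")
```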
Abstract:
Background: The success of orthotopic liver transplantation as treatment for end-stage liver disease has prompted investigation of strategies to maintain or improve nutrition and growth in children awaiting transplantation, because malnutrition is an adverse prognostic factor. The purpose of this study was to evaluate the effect of recombinant human growth hormone therapy on body composition and indices of liver function in patients awaiting transplant. Methods: The study was designed as a placebo-controlled, double-blind, crossover trial. Patients received 0.2 U/kg growth hormone subcutaneously, or placebo, daily for 28 days during two treatment periods separated by a 2-week washout period. Ten patients (mean age, 3.06 ± 1.15 years; range, 0.51-11.65 years; five male) with end-stage liver disease due to extrahepatic biliary atresia (n = 8) or Alagille's syndrome (n = 2) completed the trial while awaiting orthotopic liver transplantation. Height, weight, total body potassium, total body fat, resting energy expenditure, respiratory quotient, hematologic and multiple biochemical profiles, number of albumin infusions, insulin-like growth factor-1 (IGF-1), growth hormone binding protein (GHBP), insulin-like growth factor binding protein-1 (IGFBP-1) and insulin-like growth factor binding protein-3 (IGFBP-3) were measured at the beginning and end of each treatment period. Results: Growth hormone treatment was associated with a significant decline in serum bilirubin (-34.6 ± 16.5 μmol/l vs. 18.2 ± 11.59 μmol/l; p < 0.02), but there was no significant effect on any anthropometric or body composition measurement, or on any other biochemical or hematologic parameter. Conclusions: These children with end-stage liver disease displayed growth hormone resistance, particularly in relation to the somatomedin axis. Exogenous growth hormone administration may be of limited value in these patients.
Abstract:
This article presents the results of probabilistic seismic hazard analysis (PSHA) for Bangalore, South India. Analyses have been carried out considering the seismotectonic parameters of the region covering a radius of 350 km with Bangalore as the center. The seismic hazard parameter 'b' has been evaluated from the available earthquake data using (1) the Gutenberg-Richter (G-R) relationship and (2) the Kijko and Sellevoll (1989, 1992) method utilizing extreme and complete catalogs. The 'b' parameter was estimated to be 0.62 to 0.98 from the G-R relation and 0.87 ± 0.03 from the Kijko and Sellevoll method. The results obtained are a little higher than the 'b' values published earlier for southern India. Further, probabilistic seismic hazard analysis for the Bangalore region has been carried out considering six seismogenic sources. From the analysis, the mean annual rate of exceedance and cumulative probability hazard curves for peak ground acceleration (PGA) and spectral acceleration (Sa) have been generated. The quantified hazard values in terms of rock-level peak ground acceleration (PGA) are mapped for 10% probability of exceedance in 50 years on a grid size of 0.5 km × 0.5 km. In addition, a Uniform Hazard Response Spectrum (UHRS) at rock level has also been developed for 5% damping corresponding to 10% probability of exceedance in 50 years. The peak ground acceleration (PGA) value of 0.121 g obtained from the present investigation is slightly lower than (but comparable to) the PGA values obtained from deterministic seismic hazard analysis (DSHA) for the same area. However, the PGA value obtained in the current investigation is higher than the PGA values reported in the global seismic hazard assessment program (GSHAP) maps of Bhatia et al. (1999) for the shield area.
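The Gutenberg-Richter relation invoked above is log10 N(≥M) = a − b·M, so 'b' is the negative slope of the cumulative frequency-magnitude curve; it can also be estimated with the Aki-Utsu maximum-likelihood formula b = log10(e)/(M̄ − Mmin). A sketch of both estimators on a hypothetical catalogue (not the Bangalore data, and not the Kijko-Sellevoll method itself):

```python
import numpy as np

def b_value_least_squares(mags, bin_width=0.5):
    """Fit log10 N(>=M) = a - b*M over magnitude bins (Gutenberg-Richter)."""
    mags = np.asarray(mags, dtype=float)
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    n_cum = np.array([(mags >= m).sum() for m in bins])
    mask = n_cum > 0
    slope, intercept = np.polyfit(bins[mask], np.log10(n_cum[mask]), 1)
    return -slope, intercept                       # b, a

def b_value_aki(mags, m_min):
    """Aki-Utsu maximum-likelihood estimate: b = log10(e) / (mean(M) - Mmin)."""
    mags = np.asarray(mags, dtype=float)
    mags = mags[mags >= m_min]
    return np.log10(np.e) / (mags.mean() - m_min)

# Hypothetical magnitudes drawn from an exponential (G-R) distribution with b ~ 0.9.
rng = np.random.default_rng(0)
m_min, b_true = 3.0, 0.9
mags = m_min + rng.exponential(scale=np.log10(np.e) / b_true, size=500)
print("least-squares b:", round(b_value_least_squares(mags)[0], 2))
print("maximum-likelihood b:", round(b_value_aki(mags, m_min), 2))
```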
Abstract:
The work reported here was motivated by a desire to verify the existence of structure - specifically MP-rich clusters induced by sodium bromide (NaBr) - in the ternary liquid mixture 3-methylpyridine (MP) + water (W) + NaBr. We present small-angle X-ray scattering (SAXS) measurements in this mixture. These measurements were obtained at room temperature (~298 K) in the one-phase region (below the relevant lower consolute points, T_L) at different values of X (X = 0.02-0.17), where X is the weight fraction of NaBr in the mixture. The cluster-size distribution, estimated on the assumption that the clusters are spherical, shows systematic behaviour in that the peak of the distribution shifts towards larger values of cluster radius as X increases. The largest spatial extent of the clusters (~4.5 nm) is seen at X = 0.17. Data analysis assuming arbitrary shapes and sizes of clusters gives a limiting value of cluster size (~4.5 nm) that is not very sensitive to X. It is suggested that the clusters observed may not be the same as the usual critical-point fluctuations, since the measurements are far removed from the critical point (T_L). The influence of the additional length scale due to clustering is discussed from the standpoint of crossover from Ising to mean-field critical behaviour when moving away from T_L.
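The abstract does not state how cluster sizes were extracted from the SAXS curves; a common first-pass estimate for roughly spherical scatterers is a Guinier fit, ln I(q) = ln I(0) − q²Rg²/3, with the sphere radius R = √(5/3)·Rg. A sketch under that assumption, on synthetic data:

```python
import numpy as np

def guinier_radius(q, intensity, q_rg_max=1.3):
    """Estimate the radius of gyration from the Guinier region of a SAXS curve.

    Fits ln I(q) = ln I(0) - (Rg^2 / 3) q^2 and iterates once so that only
    points with q*Rg <= q_rg_max are kept in the fit.
    """
    q, intensity = np.asarray(q, dtype=float), np.asarray(intensity, dtype=float)
    mask = np.ones_like(q, dtype=bool)
    for _ in range(2):                              # crude self-consistency loop
        slope, _ = np.polyfit(q[mask] ** 2, np.log(intensity[mask]), 1)
        rg = np.sqrt(-3.0 * slope)
        mask = q * rg <= q_rg_max
    return rg

# Synthetic Guinier-like curve for spherical clusters of radius R = 4.5 nm.
R_true = 4.5                                        # nm
rg_true = np.sqrt(3.0 / 5.0) * R_true               # sphere: Rg^2 = (3/5) R^2
q = np.linspace(0.05, 0.5, 60)                      # nm^-1
I = 100.0 * np.exp(-(q * rg_true) ** 2 / 3.0)

rg = guinier_radius(q, I)
print(f"Rg = {rg:.2f} nm, sphere radius = {np.sqrt(5 / 3) * rg:.2f} nm")
```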
Abstract:
Detailed molecular dynamics simulations of Lennard-Jones ellipsoids have been carried out to investigate the emergence of criticality in the single-particle orientational relaxation near the isotropic-nematic (IN) phase transition. The simulations show a sudden appearance of power-law behavior in the decay of the second-rank orientational relaxation as the IN transition is approached. The simulated value of the power-law exponent is 0.56, which is larger than the mean-field value (0.5) but less than the observed value (0.63); the difference may be due to the finite size of the simulated system. The decay of the first-rank orientational time correlation function, on the other hand, is nearly exponential, but its decay becomes very slow near the isotropic-nematic transition. The zero-frequency rotational friction, calculated from the simulated angular velocity correlation function, shows a marked increase near the IN transition.
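Assuming the reported exponent describes a decay of the form C2(t) ∝ t^(−α) over an intermediate time window, it can be extracted by a straight-line fit on log-log axes. A sketch on synthetic data (the actual simulation trajectories are not reproduced here):

```python
import numpy as np

def power_law_exponent(t, c, t_min, t_max):
    """Fit C(t) ~ t^(-alpha) over [t_min, t_max] by linear regression in log-log space."""
    t, c = np.asarray(t, dtype=float), np.asarray(c, dtype=float)
    window = (t >= t_min) & (t <= t_max) & (c > 0)
    slope, _ = np.polyfit(np.log(t[window]), np.log(c[window]), 1)
    return -slope

# Synthetic correlation data with a power-law window of exponent 0.56 plus weak noise.
rng = np.random.default_rng(1)
t = np.logspace(-1, 2, 200)
c2 = t ** -0.56 * np.exp(0.02 * rng.standard_normal(t.size))

print(f"fitted exponent: {power_law_exponent(t, c2, t_min=1.0, t_max=50.0):.2f}")
```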
Abstract:
Part I (Manjunath et al., 1994, Chem. Engng Sci. 49, 1451-1463) of this paper showed that the random particle numbers and size distributions in precipitation processes in very small drops obtained by stochastic simulation techniques deviate substantially from the predictions of conventional population balance. The foregoing problem is considered in this paper in terms of a mean field approximation obtained by applying a first-order closure to an unclosed set of mean field equations presented in Part I. The mean field approximation consists of two mutually coupled partial differential equations featuring (i) the probability distribution for residual supersaturation and (ii) the mean number density of particles for each size and supersaturation from which all average properties and fluctuations can be calculated. The mean field equations have been solved by finite difference methods for (i) crystallization and (ii) precipitation of a metal hydroxide both occurring in a single drop of specified initial supersaturation. The results for the average number of particles, average residual supersaturation, the average size distribution, and fluctuations about the average values have been compared with those obtained by stochastic simulation techniques and by population balance. This comparison shows that the mean field predictions are substantially superior to those of population balance as judged by the close proximity of results from the former to those from stochastic simulations. The agreement is excellent for broad initial supersaturations at short times but deteriorates progressively at larger times. For steep initial supersaturation distributions, predictions of the mean field theory are not satisfactory thus calling for higher-order approximations. The merit of the mean field approximation over stochastic simulation lies in its potential to reduce expensive computation times involved in simulation. More effective computational techniques could not only enhance this advantage of the mean field approximation but also make it possible to use higher-order approximations eliminating the constraints under which the stochastic dynamics of the process can be predicted accurately.
Abstract:
With the introduction of the PCEHR (Personally Controlled Electronic Health Record), the Australian public is being asked to accept greater responsibility for the management of their health information. However, the implementation of the PCEHR has seen poor adoption rates, underscored by criticism from stakeholders with concerns about transparency, accountability, privacy, confidentiality, governance, and limited capabilities. This study adopts an ethnographic lens to observe how information is created and used during the patient journey, and to examine the social factors impacting on the adoption of the PCEHR at the micro-level, in order to develop a conceptual model that will encourage the sharing of patient information within the cycle of care. Objective: This study aims, first, to establish a basic understanding of healthcare professionals' attitudes toward a national platform for sharing patient summary information in the form of a PCEHR and, second, to map the flow of patient-related information as it traverses a patient's personal cycle of care. An ethnographic approach was therefore used to bring a "real world" lens to information flow in a series of case studies in the Australian healthcare system, to discover themes and issues that are important from the patient's perspective. Design: Qualitative study utilising ethnographic case studies. Setting: Case studies were conducted with primary and allied healthcare professionals located in Brisbane, Queensland, between October 2013 and July 2014. Results: In the first dimension, healthcare professionals' concerns about trust and medico-legal issues related to patient control and information quality, together with the lack of clinical value available with the PCEHR, emerged as significant barriers to use. The second dimension of the study, which mapped patient information flow, identified information quality issues, clinical workflow inefficiencies and interoperability misconceptions, resulting in duplication of effort, unnecessary manual processes, data quality and integrity issues and an over-reliance on the understanding and communication skills of the patient. Conclusion: Opportunities for process efficiencies, improved data quality and increased patient safety emerge with the adoption of an appropriate information sharing platform. More importantly, large-scale eHealth initiatives must be aligned with the value proposition of individual stakeholders in order to achieve widespread adoption. Leveraging the Australian national eHealth infrastructure and the PCEHR, we offer a practical example of a service-driven digital ecosystem suitable for co-creating value in healthcare.
Abstract:
We report an experimental study of a new type of turbulent flow that is driven purely by buoyancy. The flow is due to an unstable density difference, created using brine and water, across the ends of a long (length/diameter = 9) vertical pipe. The Schmidt number Sc is 670, and the Rayleigh number (Ra), based on the density gradient and the diameter, is about 10^8. Under these conditions the convection is turbulent, and the time-averaged velocity at any point is 'zero'. The Reynolds number based on the Taylor microscale, Re_lambda, is about 65. The pipe is long enough for there to be an axially homogeneous region, with a linear density gradient, about 6-7 diameters long in the midlength of the pipe. In the absence of a mean flow and, therefore, mean shear, turbulence is sustained just by buoyancy. The flow can thus be considered to be an axially homogeneous turbulent natural convection driven by a constant (unstable) density gradient. We characterize the flow using flow visualization and particle image velocimetry (PIV). Measurements show that the mean velocities and the Reynolds shear stresses are zero across the cross-section; the root mean squared (r.m.s.) vertical velocity is larger than the lateral ones (by about one and a half times at the pipe axis). We identify some features of the turbulent flow using velocity correlation maps and the probability density functions of velocities and velocity differences. The flow away from the wall, affected mainly by buoyancy, consists of vertically moving fluid masses continually colliding and interacting, while the flow near the wall appears similar to that in wall-bounded shear-free turbulence. The turbulence is anisotropic, with the anisotropy increasing to large values as the wall is approached. A mixing length model with the diameter of the pipe as the length scale predicts well the scalings for the velocity fluctuations and the flux. This model implies that the Nusselt number would scale as Ra^{1/2}Sc^{1/2}, and the Reynolds number as Ra^{1/2}Sc^{-1/2}. The velocity and flux measurements appear to be consistent with the Ra^{1/2} scaling, although it must be pointed out that the Rayleigh number range was less than 10. The Schmidt number was not varied to check the Sc scaling. The fluxes and the Reynolds numbers obtained in the present configuration are much higher than what would be obtained in Rayleigh-Bénard (R-B) convection for similar density differences.
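Assuming Ra is defined on the imposed density gradient and pipe diameter d as written below (the abstract does not give the exact definition), a mixing-length argument with d as the length scale recovers both scalings quoted above:

```latex
% Assumed definitions (not stated explicitly in the abstract):
%   d = pipe diameter, D = scalar diffusivity, Sc = nu/D,
%   Ra built on the imposed mean density gradient and d.
\[
  Ra = \frac{g}{\rho}\,\frac{d\bar\rho}{dz}\,\frac{d^{4}}{\nu D}, \qquad
  Sc = \frac{\nu}{D}.
\]
% A mixing length of order d acting on the mean gradient gives the velocity scale
\[
  \Delta\rho \sim d\,\frac{d\bar\rho}{dz}, \qquad
  w \sim \left(\frac{g}{\rho}\,\Delta\rho\,d\right)^{1/2}
    = \left(\frac{g}{\rho}\,\frac{d\bar\rho}{dz}\right)^{1/2} d ,
\]
% so that
\[
  Re = \frac{w d}{\nu}
     \sim \left(\frac{g}{\rho}\,\frac{d\bar\rho}{dz}\,\frac{d^{4}}{\nu^{2}}\right)^{1/2}
     = \left(\frac{Ra}{Sc}\right)^{1/2} = Ra^{1/2} Sc^{-1/2},
  \qquad
  Nu \sim \frac{w\,\Delta\rho}{D\,(d\bar\rho/dz)} \sim \frac{w d}{D}
     = Re\,Sc = (Ra\,Sc)^{1/2} = Ra^{1/2} Sc^{1/2}.
\]
```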
Abstract:
Architects regularly employ design as a problem-solving tool in the built environment. Within the design process, architects apply design thinking to reframe problems as opportunities, take advantage of contradictory information to develop new solutions, and differentiate outcomes based on context. This research aims to investigate how design can be better positioned to add greater differentiated value to an architect's current service offering, and how design as a strategy could be applied as a driver of business innovation within the Australian architecture industry. The research will explore literature relating to the future of architecture, the application of design thinking, and the benefits of strategic design. The future intent of the research is to develop strategies that improve the value offering of architects, and to develop design-led solutions that could be applied successfully to the business of architecture.
Abstract:
The light distribution in the disks of many galaxies is ‘lopsided’ with a spatial extent much larger along one half of a galaxy than the other, as seen in M101. Recent observations show that the stellar disk in a typical spiral galaxy is significantly lopsided, indicating asymmetry in the disk mass distribution. The mean amplitude of lopsidedness is 0.1, measured as the Fourier amplitude of the m=1 component normalized to the average value. Thus, lopsidedness is common, and hence it is important to understand its origin and dynamics. This is a new and exciting area in galactic structure and dynamics, in contrast to the topic of bars and two-armed spirals (m=2) which has been extensively studied in the literature. Lopsidedness is ubiquitous and occurs in a variety of settings and tracers. It is seen in both stars and gas, in the outer disk and the central region, in the field and the group galaxies. The lopsided amplitude is higher by a factor of two for galaxies in a group. The lopsidedness has a strong impact on the dynamics of the galaxy, its evolution, the star formation in it, and on the growth of the central black hole and on the nuclear fuelling. We present here an overview of the observations that measure the lopsided distribution, as well as the theoretical progress made so far to understand its origin and properties. The physical mechanisms studied for its origin include tidal encounters, gas accretion and a global gravitational instability. The related open, challenging problems in this emerging area are discussed.
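The lopsidedness amplitude quoted above is conventionally the m = 1 azimuthal Fourier coefficient of the surface brightness (or surface density) at a given radius, normalized to the m = 0 term, i.e. A1/A0. A sketch on a synthetic azimuthal profile with a 10% lopsided distortion:

```python
import numpy as np

def fourier_amplitude(phi, intensity, m):
    """Normalized m-th azimuthal Fourier amplitude A_m / A_0 of a brightness profile."""
    phi, intensity = np.asarray(phi, dtype=float), np.asarray(intensity, dtype=float)
    a_m = np.abs(np.sum(intensity * np.exp(1j * m * phi)))
    a_0 = np.sum(intensity)
    return a_m / a_0

# Synthetic azimuthal brightness profile at one radius with an m=1 (lopsided) distortion.
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
I = 1.0 + 0.2 * np.cos(phi)        # I(phi) = I0 * (1 + 2*A1*cos(phi))  =>  A1 = 0.1
print(f"A1/A0 = {fourier_amplitude(phi, I, m=1):.3f}")
```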
Abstract:
Let G = (V, E) be a finite, simple and undirected graph. For S ⊆ V, let δ(S, G) = {(u, v) ∈ E : u ∈ S and v ∈ V \ S} be the edge boundary of S. Given an integer i, 1 ≤ i ≤ |V|, let the edge isoperimetric value of G at i be defined as b_e(i, G) = min_{S ⊆ V, |S| = i} |δ(S, G)|. The edge isoperimetric peak of G is defined as b_e(G) = max_{1 ≤ j ≤ |V|} b_e(j, G). Let b_v(G) denote the vertex isoperimetric peak, defined in a corresponding way. The problem of determining a lower bound for the vertex isoperimetric peak in complete t-ary trees was recently considered in [Y. Otachi, K. Yamazaki, A lower bound for the vertex boundary-width of complete k-ary trees, Discrete Mathematics, in press (doi: 10.1016/j.disc.2007.05.014)]. In this paper we provide bounds which improve those in the above cited paper. Our results can be generalized to arbitrary (rooted) trees. The depth d of a tree is the number of nodes on the longest path starting from the root and ending at a leaf. In this paper we show that for a complete binary tree of depth d (denoted T_d^2), c_1 d ≤ b_e(T_d^2) ≤ d and c_2 d ≤ b_v(T_d^2) ≤ d, where c_1, c_2 are constants. For a complete t-ary tree of depth d (denoted T_d^t) and d ≥ c log t, where c is a constant, we show that c_1 √t · d ≤ b_e(T_d^t) ≤ td and c_2 d/√t ≤ b_v(T_d^t) ≤ d, where c_1, c_2 are constants. At the heart of our proof we have the following theorem, which works for an arbitrary rooted tree and not just for a complete t-ary tree. Let T = (V, E, r) be a finite, connected and rooted tree, the root being the vertex r. Define a weight function w : V → N, where the weight w(u) of a vertex u is the number of its successors (including itself), and let the weight index η(T) be defined as the number of distinct weights in the tree, i.e. η(T) = |{w(u) : u ∈ V}|. For a positive integer k, let ℓ(k) = |{i ∈ N : 1 ≤ i ≤ |V|, b_e(i, G) ≤ k}|. We show that ℓ(k) ≤ 2·C(2η + k, k), where C(n, k) denotes the binomial coefficient.
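For small graphs the quantities defined above can be computed directly: enumerate all vertex subsets of size i, take the minimum boundary size to get b_e(i, G), and maximize over i for the peak. A brute-force sketch (exponential in |V|, so only feasible for tiny trees such as T_3^2 with 7 vertices):

```python
from itertools import combinations

def complete_binary_tree_edges(depth):
    """Edges of a complete binary tree with `depth` levels of nodes (root = 0)."""
    n = 2 ** depth - 1
    return n, [(v, (v - 1) // 2) for v in range(1, n)]

def edge_boundary(subset, edges):
    """|delta(S, G)|: number of edges with exactly one endpoint in S."""
    s = set(subset)
    return sum((u in s) != (v in s) for u, v in edges)

def edge_isoperimetric_values(n, edges):
    """b_e(i, G) for i = 1..n by exhaustive enumeration (exponential in n)."""
    return [min(edge_boundary(S, edges) for S in combinations(range(n), i))
            for i in range(1, n + 1)]

n, edges = complete_binary_tree_edges(3)          # T_3^2: 7 vertices
b_e = edge_isoperimetric_values(n, edges)
print("b_e(i) for i = 1..7:", b_e)
print("edge isoperimetric peak:", max(b_e))
```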
Abstract:
Doppler weather radars with fast scanning rates must estimate spectral moments based on a small number of echo samples. This paper concerns the estimation of mean Doppler velocity in a coherent radar using a short complex time series. Specific results are presented based on 16 samples. A wide range of signal-to-noise ratios are considered, and attention is given to ease of implementation. It is shown that FFT estimators fare poorly in low SNR and/or high spectrum-width situations. Several variants of a vector pulse-pair processor are postulated and an algorithm is developed for the resolution of phase angle ambiguity. This processor is found to be better than conventional processors at very low SNR values. A feasible approximation to the maximum entropy estimator is derived as well as a technique utilizing the maximization of the periodogram. It is found that a vector pulse-pair processor operating with four lags for clear air observation and a single lag (pulse-pair mode) for storm observation may be a good way to estimate Doppler velocities over the entire gamut of weather phenomena.
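The conventional (pulse-pair) estimator referred to above recovers the mean Doppler velocity from the phase of the lag-one autocorrelation of the complex I/Q samples, v = −(λ/(4πTs))·arg R(Ts); the sign convention below is an assumption. A sketch on 16 synthetic samples (the vector pulse-pair, maximum-entropy and periodogram variants of the paper are not reproduced):

```python
import numpy as np

def pulse_pair_velocity(iq, wavelength, prt):
    """Mean Doppler velocity from the lag-1 autocorrelation of complex I/Q samples."""
    iq = np.asarray(iq, dtype=complex)
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))        # lag-1 autocorrelation estimate
    return -wavelength / (4.0 * np.pi * prt) * np.angle(r1)

# 16 synthetic samples: a 6 m/s target plus weak noise, 10 cm wavelength, 1 ms PRT.
rng = np.random.default_rng(2)
wavelength, prt, v_true = 0.10, 1e-3, 6.0
n = np.arange(16)
f_d = 2.0 * v_true / wavelength                    # Doppler shift (Hz)
iq = (np.exp(-2j * np.pi * f_d * prt * n)
      + 0.1 * (rng.standard_normal(16) + 1j * rng.standard_normal(16)))
print(f"estimated velocity: {pulse_pair_velocity(iq, wavelength, prt):.2f} m/s")
```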
Abstract:
Using the dimensional reduction regularization scheme, we show that radiative corrections to the anomaly of the axial current, which is coupled to the gauge field, are absent in a supersymmetric U(1) gauge model for both 't Hooft-Veltman and Bardeen prescriptions for γ5. We also discuss the results with reference to conventional dimensional regularization. This result has significant implications with respect to the renormalizability of supersymmetric models.