Abstract:
Multifrequency bioimpedance analysis has the potential to provide a non-invasive technique for determining body composition in live cattle. A bioimpedance meter developed for use in clinical medicine was adapted and evaluated in 2 experiments using a total of 31 cattle. Prediction equations were obtained for total body water, extracellular body water, intracellular body water, carcass water and carcass protein. There were strong correlations between the results obtained through chemical markers and bioimpedance analysis when determined in cattle that had a wide range of liveweights and conditions. The r² values obtained were 0.87 and 0.91 for total body water and extracellular body water respectively. Bioimpedance also correlated with carcass water, measured by chemical analysis (r² = 0.72), but less well with carcass protein (r² = 0.46). These correlations were improved by inclusion of liveweight and sex as variables in multiple regression analysis. However, the resultant equations were poor predictors of protein and water content in the carcasses of a group of small underfed beef cattle that had a narrow range of liveweights. In this case, although there was no statistical difference between the predicted and measured values overall, bioimpedance analysis did not detect the differences in carcass protein between the 2 groups that were apparent following chemical analysis. Further work is required to determine the sensitivity of the technique in small underfed cattle, and its potential use in heavier well-fed cattle close to slaughter weight.
Abstract:
Multi-frequency bioimpedance analysis (MFBIA) was used to determine the impedance, reactance and resistance of 103 lamb carcasses (17.1-34.2 kg) immediately after slaughter and evisceration. Carcasses were halved, frozen and one half subsequently homogenized and analysed for water, crude protein and fat content. Three measures of carcass length were obtained. Diagonal length between the electrodes (right side biceps femoris to left side of neck) explained a greater proportion of the variance in water mass than did estimates of spinal length and was selected for use in the index L²/Z to predict the mass of chemical components in the carcass. Use of impedance (Z) measured at the characteristic frequency (Z_c) instead of at 50 kHz (Z_50) did not improve the power of the model to predict the mass of water, protein or fat in the carcass. While L²/Z_50 explained a significant proportion of the variation in the masses of body water (r² = 0.64), protein (r² = 0.34) and fat (r² = 0.35), its inclusion in multivariate indices offered small or no increases in predictive capacity when hot carcass weight (HCW) and a measure of rib fat depth (GR) were present in the model. Optimized equations were able to account for 65-90% of the variance observed in the weight of chemical components in the carcass. It is concluded that single-frequency impedance data do not provide better prediction of carcass composition than can be obtained from measures of HCW and GR. Indices of intracellular water mass derived from impedance at zero frequency and the characteristic frequency explained a similar proportion of the variance in carcass protein mass as did the index L²/Z_50.
Abstract:
Phylogenies of trematodes based on characters derived from morphology and life cycles have been controversial. Here, we add molecular data to the phylogenetic study of a group of trematodes, members of the superfamily Hemiuroidea Looss, 1899. DNA sequences from the V4 domain of the nuclear small subunit (18S) rRNA gene and a matrix of morphological characters modified from a previous study were used. There was no significant incongruence between the molecular and the morphological data. However, this was probably due largely to the limited resolving power of the morphological data. Analyses support a monophyletic Hemiuroidea containing at least the families Accacoeliidae, Derogenidae, Didymozoidae, Hirudinellidae, Sclerodistomidae, Syncoeliidae, Isoparorchiidae, Lecithasteridae, and Hemiuridae. These families fall into two principal clades. One contains the first six families and the other the Hemiuridae and lecithasterine lecithasterids. The positions of the hysterolecithine lecithasterids and the Isoparorchiidae were poorly resolved. The Ptychogonimidae may be the sister group of the remaining Hemiuroidea, but there was no support from the molecular data for the placement of the Azygiidae within the superfamily. (C) 1998 Academic Press.
Abstract:
The World Health Organization (WHO) MONICA Project is a 10-year study monitoring trends and determinants of cardiovascular disease in geographically defined populations. Data were collected from over 100 000 randomly selected participants in two risk factor surveys conducted approximately 5 years apart in 38 populations using standardized protocols. The net effects of changes in the risk factor levels were estimated using risk scores derived from longitudinal studies in the Nordic countries. The prevalence of cigarette smoking decreased among men in most populations, but the trends for women varied. The prevalence of hypertension declined in two-thirds of the populations. Changes in the prevalence of raised total cholesterol were small but highly correlated between the genders (r = 0.8). The prevalence of obesity increased in three-quarters of the populations for men and in more than half of the populations for women. In almost half of the populations there were statistically significant declines in the estimated coronary risk for both men and women, although for Beijing the risk score increased significantly for both genders. The net effect of the changes in the risk factor levels in the 1980s in most of the study populations of the WHO MONICA Project is that the rates of coronary disease are predicted to decline in the 1990s.
Abstract:
The performance of three analytical methods for multiple-frequency bioelectrical impedance analysis (MFBIA) data was assessed. The methods were the established method of Cole and Cole, the newly proposed method of Siconolfi and co-workers and a modification of this procedure. Method performance was assessed from the adequacy of the curve-fitting techniques, as judged by the correlation coefficient and standard error of the estimate, and the accuracy of the different methods in determining the theoretical values of impedance parameters describing a set of model electrical circuits. The experimental data were well fitted by all curve-fitting procedures (r = 0.9 with SEE 0.3 to 3.5% or better for most circuit-procedure combinations). Cole-Cole modelling provided the most accurate estimates of circuit impedance values, generally within 1-2% of the theoretical values, followed by the Siconolfi procedure using a sixth-order polynomial regression (1-6% variation). None of the methods, however, accurately estimated circuit parameters when the measured impedances were low (< 20 Ω), reflecting the electronic limits of the impedance meter used. These data suggest that Cole-Cole modelling remains the preferred method for the analysis of MFBIA data.
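The kind of Cole-model curve fitting assessed above can be sketched as a nonlinear least-squares problem. The following is a minimal illustration only, not the authors' implementation: the frequency range, the "measured" spectrum and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def cole_impedance(f_hz, r0, rinf, f_c, alpha):
    """Cole model: Z(f) = Rinf + (R0 - Rinf) / (1 + (j f / f_c)^alpha)."""
    return rinf + (r0 - rinf) / (1.0 + (1j * f_hz / f_c) ** alpha)

# Invented "measured" spectrum generated from known parameters
freqs = np.logspace(3, 6, 40)              # 1 kHz .. 1 MHz
true = (100.0, 40.0, 5.0e4, 0.85)          # R0, Rinf (ohm), f_c (Hz), alpha
z_meas = cole_impedance(freqs, *true)

def residuals(p):
    # Stack real and imaginary parts so the complex fit is a real LSQ problem
    z = cole_impedance(freqs, *p)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

fit = least_squares(residuals, x0=[90.0, 50.0, 3.0e4, 0.9],
                    bounds=([1, 1, 1e3, 0.5], [500, 500, 1e6, 1.0]),
                    x_scale='jac')
r0_hat, rinf_hat, fc_hat, alpha_hat = fit.x
```

With noise-free synthetic data the fit recovers the generating parameters almost exactly; the goodness-of-fit criteria mentioned in the abstract (r, SEE) would be computed from these residuals.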
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
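The partition/multi-assignment/filter pipeline can be illustrated with a toy single-process simulation (each grid bucket stands in for one processor's task; the geometry, grid size and function names are invented, and the paper's actual parallel algorithms are more sophisticated):

```python
def intersects(a, b):
    """MBR overlap test; a, b are (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def overlapped_cells(mbr, cell=10.0):
    """Grid cells an MBR overlaps; multi-assignment places a copy of the
    object in every such cell."""
    for i in range(int(mbr[0] // cell), int(mbr[2] // cell) + 1):
        for j in range(int(mbr[1] // cell), int(mbr[3] // cell) + 1):
            yield (i, j)

def partitioned_join(r, s):
    """Filter step of a partitioned spatial join on MBRs."""
    buckets = {}
    for tag, objs in (("r", r), ("s", s)):
        for k, mbr in enumerate(objs):
            for c in overlapped_cells(mbr):
                buckets.setdefault(c, {"r": [], "s": []})[tag].append(k)
    pairs = set()                   # the set removes duplicate candidate
    for b in buckets.values():      # pairs created by multi-assignment
        for i in b["r"]:
            for j in b["s"]:
                if intersects(r[i], s[j]):
                    pairs.add((i, j))
    return pairs                    # refinement on exact geometry would follow

R = [(0, 0, 5, 5), (12, 12, 28, 28), (30, 0, 40, 8)]
S = [(4, 4, 9, 9), (15, 15, 25, 25), (50, 50, 60, 60)]
pairs = partitioned_join(R, S)      # R[1]/S[1] co-occur in four cells but
                                    # appear only once in the result
```

The deduplication step is exactly the extra CPU and communication cost the abstract refers to: locality-preserving decomposition reduces how often the same candidate pair is generated in different partitions.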
Abstract:
We present models for the optical functions of 11 metals used as mirrors and contacts in optoelectronic and optical devices: noble metals (Ag, Au, Cu), aluminum, beryllium, and transition metals (Cr, Ni, Pd, Pt, Ti, W). We used two simple phenomenological models, the Lorentz-Drude (LD) and the Brendel-Bormann (BB), to interpret both the free-electron and the interband parts of the dielectric response of metals in a wide spectral range from 0.1 to 6 eV. Our results show that the BB model was needed to describe appropriately the interband absorption in noble metals, while for Al, Be, and the transition metals both models exhibit good agreement with the experimental data. A comparison with measurements on surface normal structures confirmed that the reflectance and the phase change on reflection from semiconductor-metal interfaces (including the case of metallic multilayers) can be accurately described by use of the proposed models for the optical functions of metallic films and the matrix method for multilayer calculations. (C) 1998 Optical Society of America.
Abstract:
Physiological and kinematic data were collected from elite under-19 rugby union players to provide a greater understanding of the physical demands of rugby union. Heart rate, blood lactate and time-motion analysis data were collected from 24 players (mean ± s: body mass 88.7 ± 9.9 kg, height 185 ± 7 cm, age 18.4 ± 0.5 years) during six competitive premiership fixtures. Six players were chosen at random from each of four groups: props and locks, back row forwards, inside backs, outside backs. Heart rate records were classified based on the percentage of time spent in four zones (>95%, 85-95%, 75-84%, <75% HRmax). Blood lactate concentration was measured periodically throughout each match, with movements being classified as standing, walking, jogging, cruising, sprinting, utility, rucking/mauling and scrummaging. The heart rate data indicated that props and locks (58.4%) and back row forwards (56.2%) spent significantly more time in high exertion (85-95% HRmax) than inside backs (40.5%) and outside backs (33.9%) (P < 0.001). Inside backs (36.5%) and outside backs (38.5%) spent significantly more time in moderate exertion (75-84% HRmax) than props and locks (22.6%) and back row forwards (19.8%) (P < 0.05). Outside backs (20.1%) spent significantly more time in low exertion (<75% HRmax) than props and locks (5.8%) and back row forwards (5.6%) (P < 0.05). Mean blood lactate concentration did not differ significantly between groups (range: 4.67 mmol·l⁻¹ for outside backs to 7.22 mmol·l⁻¹ for back row forwards; P > 0.05). The motion analysis data indicated that outside backs (5750 m) covered a significantly greater total distance than either props and locks or back row forwards (4400 and 4080 m, respectively; P < 0.05).
Inside backs and outside backs covered significantly greater distances walking (1740 and 1780 m, respectively; P < 0.001), in utility movements (417 and 475 m, respectively; P < 0.001) and sprinting (208 and 340 m, respectively; P < 0.001) than either props and locks or back row forwards (walking: 1000 and 991 m; utility movements: 106 and 154 m; sprinting: 72 and 94 m, respectively). Outside backs also covered a significantly greater distance sprinting than inside backs (340 vs 208 m; P < 0.001). Forwards maintained a higher level of exertion than backs, owing to more constant motion and a large involvement in static high-intensity activities. Mean blood lactate concentrations of 4.8-7.2 mmol·l⁻¹ indicated a need for 'lactate tolerance' training to improve hydrogen ion buffering and to facilitate removal following high-intensity efforts. Furthermore, the large distances (4.2-5.6 km) covered during, and the intermittent nature of, match-play indicated a need for sound aerobic conditioning in all groups (particularly backs) to minimize fatigue and facilitate recovery between high-intensity efforts.
Abstract:
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (Arnoldi and Lanczos processes), which is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where, furthermore, the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
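The core operation described here, acting with a sparse matrix exponential on a vector without ever forming exp(tQ) explicitly, is also exposed in SciPy as `expm_multiply`. A small sketch on an invented 3-state Markov generator (not Expokit itself, but the same idea):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply
from scipy.linalg import expm

# Invented generator of a 3-state continuous-time Markov chain
# (off-diagonal rates >= 0, each row sums to zero).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  1.0, -2.0]])
p0 = np.array([1.0, 0.0, 0.0])       # start in state 0

# Transient distribution p(t) = p0 exp(tQ): act with (tQ)^T on the
# column vector p0, matrix-free in the large sparse case.
t = 1.0
pt = expm_multiply(csr_matrix(t * Q).T, p0)
```

The probabilistic constraints mentioned above are visible in the result: the transient vector remains a probability distribution (non-negative, summing to one).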
Abstract:
Two major factors are likely to impact the utilisation of remotely sensed data in the near future: (1) an increase in the number and availability of commercial and non-commercial image data sets with a range of spatial, spectral and temporal dimensions, and (2) increased access to image display and analysis software through GIS. A framework was developed to provide an objective approach to selecting remotely sensed data sets for specific environmental monitoring problems. Preliminary applications of the framework have provided successful approaches for monitoring disturbed and restored wetlands in southern California.
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
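A toy version of such a calibration can be written in a few lines: fit each port's pressure as a polynomial in angle of attack from sweep data, then invert the fitted model by least squares against measured pressures. Everything here is invented for illustration (the port angles, pressure levels, and the cosine-squared surrogate standing in for the CFD solution); it is not the HYFLEX algorithm.

```python
import numpy as np

ports = np.deg2rad([-30.0, -15.0, 0.0, 15.0, 30.0])  # hypothetical port angles
q, p_inf = 5.0e3, 1.0e3                              # dynamic, static pressure (Pa)

def pressures(alpha):
    """Surrogate surface-pressure law standing in for the CFD solution."""
    return p_inf + q * np.cos(ports - alpha) ** 2

# "CFD" calibration sweep: fit each port's pressure as a quartic in alpha
sweep = np.deg2rad(np.linspace(-10.0, 20.0, 31))
table = np.array([pressures(a) for a in sweep])       # shape (sweep, ports)
coeffs = [np.polyfit(sweep, table[:, k], 4) for k in range(len(ports))]

def estimate_alpha(p_meas):
    """Invert the calibrated model by least squares over an alpha grid."""
    grid = np.deg2rad(np.linspace(-10.0, 20.0, 3001))
    model = np.array([np.polyval(c, grid) for c in coeffs])  # (ports, grid)
    sse = ((model - p_meas[:, None]) ** 2).sum(axis=0)
    return grid[sse.argmin()]

alpha_hat = estimate_alpha(pressures(np.deg2rad(7.3)))
```

The same inversion extends to sideslip and dynamic pressure by adding those variables to the fitted model, which is conceptually what a multi-parameter flush air data calibration does.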
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials and especially the transient solutions of Markov chains. The attractiveness of these methods results from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute, explicitly, the matrix exponential in isolation. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer on three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS classification: 65F99; 65L05; 65U05.
Abstract:
In an investigation intended to determine the training needs of flight crews, Bowers et al. (1998, this issue) report two studies showing that the patterning of communication is a better discriminator of good and poor crews than is the content of communication. Bowers et al. characterize their studies as intended to generate hypotheses for training needs and draw connections with Exploratory Sequential Data Analysis (ESDA). Although applauding the intentions of Bowers et al., we point out some concerns with their characterization and implementation of ESDA. Our principal concern is that the Bowers et al. exploration of the data does not convincingly lead them back to a better fundamental understanding of the original phenomena they are investigating.
Abstract:
It is recognized that vascular dispersion in the liver is a determinant of the high first-pass extraction of solutes by that organ. Such dispersion is also required for translation of in-vitro microsomal activity into in-vivo predictions of hepatic extraction for any solute. We therefore investigated the relative dispersion of albumin transit times (CV²) in the livers of adult and weanling rats and in elasmobranch livers. The mean and normalized variance of the hepatic transit time distribution of albumin were estimated using parametric non-linear regression (with a correction for catheter influence) after an impulse (bolus) input of labelled albumin into a single-pass liver perfusion. The mean ± s.e. of CV² for albumin determined in each of the liver groups was 0.85 ± 0.20 (n = 12), 1.48 ± 0.33 (n = 7) and 0.90 ± 0.18 (n = 4) for the livers of adult rats, weanling rats and elasmobranchs, respectively. These CV² values are comparable with that reported previously for the dog and suggest that the CV² of the liver is of a similar order of magnitude irrespective of the age and morphological development of the species. It might, therefore, be justified, in the absence of other information, to predict the hepatic clearances and availabilities of highly extracted solutes by scaling within and between species' livers using hepatic elimination models such as the dispersion model with a CV² of approximately unity.
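The quantity CV² (variance of transit times normalized by the squared mean) can be computed from any transit-time density by numerical moments. A sketch using an invented gamma-shaped density in place of real outflow data (the study itself fits a parametric model to perfusion curves, which this does not reproduce):

```python
import numpy as np
from math import gamma
from scipy.integrate import trapezoid

# Invented transit-time density: a gamma density chosen so that the
# mean transit time is 12 s and the relative dispersion CV^2 is 0.9.
mtt, cv2_true = 12.0, 0.9
k = 1.0 / cv2_true                  # gamma shape: CV^2 = 1/k
theta = mtt / k                     # gamma scale
t = np.linspace(1e-3, 200.0, 20000) # time grid, s
f = t ** (k - 1) * np.exp(-t / theta) / (gamma(k) * theta ** k)

m0 = trapezoid(f, t)                        # area under the density (≈ 1)
m1 = trapezoid(t * f, t) / m0               # mean transit time
var = trapezoid((t - m1) ** 2 * f, t) / m0  # variance of transit times
cv2 = var / m1 ** 2                         # normalized variance, CV^2
```

A CV² near unity, as recovered here by construction, is the value the abstract suggests using when scaling hepatic elimination models across species.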
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
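A minimal greedy heuristic of the kind evaluated in such studies can be sketched on an invented site-by-feature data set (the site names, features and areas are hypothetical, and the study's actual algorithms and data are more elaborate):

```python
def greedy_reserves(sites, targets):
    """Greedy 'richness' heuristic: repeatedly add the site covering the
    most still-unrepresented features, breaking ties by smaller area."""
    uncovered, chosen = set(targets), []
    while uncovered:
        best = max(sites, key=lambda s: (len(sites[s]["features"] & uncovered),
                                         -sites[s]["area"]))
        if not sites[best]["features"] & uncovered:
            raise ValueError("remaining features occur in no site")
        chosen.append(best)
        uncovered -= sites[best]["features"]
    return chosen

# Invented candidate sites: land types present and site area
sites = {
    "A": {"features": {1, 2, 3}, "area": 10},
    "B": {"features": {3, 4},    "area": 4},
    "C": {"features": {4, 5},    "area": 6},
    "D": {"features": {1, 5},    "area": 5},
}
plan = greedy_reserves(sites, targets={1, 2, 3, 4, 5})
```

Efficiency, in the abstract's sense, is the number or total area of the chosen sites; suboptimality would be measured by comparing this greedy result with the minimum obtainable from an optimizing (integer-programming) algorithm.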