969 results for Coefficient of determination


Relevance: 90.00%

Abstract:

The distributions of different forms of nitrogen in the surface sediments of the southern Huanghai Sea differ and are affected by various factors. The contents of IEF-N, SOEF-N and TN gradually decrease eastward, those of SAEF-N northward, and those of WAEF-N westward. Around the seaport of the old Huanghe (Yellow) River, the contents of both SOEF-N and TN are highest. Among all the factors, the content of fine sediment is the predominant one affecting the distributions of the different forms of nitrogen. The contents of IEF-N, SOEF-N, and TN show clear positive correlations with the content of fine sediments, with correlation coefficients of 0.68, 0.58, and 0.71, respectively, indicating that the contents of these three forms of nitrogen increase with those of fine sediments. The content of WAEF-N is related to that of fine sediments to a certain extent, with a correlation coefficient of 0.35, while the content of SAEF-N is uncorrelated with that of fine sediments, showing that SAEF-N is not controlled by the fine grain-size fractions of the sediments. In addition, the distributions of the different forms of nitrogen also interact with one another: the contents of IEF-N and SOEF-N are markedly affected by TN, whereas those of the inorganic forms (WAEF-N, SAEF-N and IEF-N) are not markedly affected by SOEF-N or TN, although they interact with each other.
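
The correlations reported above are standard Pearson coefficients between grain-size and nitrogen-content data. A minimal sketch of the computation, using hypothetical stand-in values rather than the study's data:

```python
# Pearson correlation between fine-sediment content and the content of
# one nitrogen form. All values here are hypothetical stand-ins.
import numpy as np
from scipy.stats import pearsonr

fine_sediment = np.array([12.1, 18.4, 25.0, 31.2, 40.5, 47.8])  # % fine fraction
ief_n = np.array([21.0, 25.3, 30.1, 33.8, 41.2, 44.9])          # IEF-N content

r, p = pearsonr(fine_sediment, ief_n)
print(f"r = {r:.2f} (p = {p:.3f})")
# The study reports r = 0.68 (IEF-N), 0.58 (SOEF-N), 0.71 (TN),
# 0.35 (WAEF-N), and no significant correlation for SAEF-N.
```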

Relevance: 90.00%

Abstract:

A series of experiments was conducted to identify the factors that affect the growth and survival of settling flounder larvae Paralichthys olivaceus. Settling larvae 24 days after hatching (DAH) were reared in 10-l experimental tanks up to 40 DAH, and two of the following factors were varied as controlled factors in each experiment: light regime (24L:0D or 12L:12D), prey density (1500, 3000, or 5000 Artemia l(-1)), shelter (sand or no sand) and stocking density (5, 10, or 15 fish l(-1)). Early settling larvae (24-35 DAH) experienced little mortality (less than 10% of the overall mortality), which was not significantly affected by the above factors. In contrast, late settling larvae (36-40 DAH) suffered high cannibalistic mortality that was significantly influenced by each of the above factors. Larvae experienced significantly lower mortality at 10 fish l(-1) than at the other densities, and larvae at 15 fish l(-1) had higher mortality than those at 5 fish l(-1) when all other factors were identical. Larvae in the 3000 and 5000 Artemia l(-1) treatments survived significantly better than those at 1500 Artemia l(-1), but no significant difference in larval mortality was found between the two higher densities. Larvae suffered higher mortality at low prey density or in the absence of sand when they were exposed to the longer photoperiod. Low stocking density significantly improved the growth of the settling larvae: the average daily instantaneous growth rates (G) in the 5 and 15 fish l(-1) treatments were 0.050 and 0.034, with coefficients of variation (CV) in final length of 16.4 and 23.5, respectively. The daily instantaneous growth rate increased significantly from 0.033 at 1500 Artemia l(-1) to 0.041 and 0.045 at 3000 and 5000 Artemia l(-1), respectively, but no significant difference in larval growth existed between the two higher prey densities. These findings suggest that the optimal prey density for growth and survival of settling flounder larvae at stocking densities of 5-15 fish l(-1) is around 3000 Artemia l(-1). Larvae exposed to 24L showed a 20% higher growth rate (G = 0.046, CV = 18.7) than those exposed to 12L (G = 0.037, CV = 20.5); longer exposure to light significantly improved larval growth, provided sufficient food was available. The sand substrate did not show significant effects on larval growth, possibly because the larvae spent most of their time swimming or feeding in the water column during this stage.
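
For readers unfamiliar with the two statistics quoted, here is a minimal sketch, assuming the standard definition of the daily instantaneous growth rate, G = (ln Lf - ln L0)/t; the lengths below are hypothetical, not the study's data:

```python
# Daily instantaneous growth rate G and coefficient of variation (CV)
# of final length, from hypothetical initial/final length samples.
import numpy as np

days = 16                                            # 24 to 40 DAH
L0 = 9.0                                             # mean initial length (mm), assumed
Lf = np.array([13.1, 14.8, 12.6, 15.2, 13.9, 14.4])  # final lengths (mm), assumed

G = (np.log(Lf.mean()) - np.log(L0)) / days
cv = Lf.std(ddof=1) / Lf.mean() * 100
print(f"G = {G:.3f} per day, CV = {cv:.1f}%")
```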

Relevance: 90.00%

Abstract:

The meadow ecosystem on the Qinghai-Tibetan Plateau is considered sensitive to climate change, so an understanding of the alpine meadow ecosystem is important for predicting the response of ecosystems to climate change. In this study, we use the coefficient of variation (Cv) and a stability coefficient (E) obtained from the Haibei Alpine Meadow Ecosystem Research Station to characterize ecosystem stability. The results suggest that the net primary production of the alpine meadow ecosystem was more stable (Cv = 13.18%) than annual precipitation (Cv = 16.55%) or annual mean air temperature (Cv = 28.82%). Net primary production was insensitive to both precipitation (E = 0.0782) and air temperature (E = 0.1113). In summary, the alpine meadow ecosystem on the Qinghai-Tibetan Plateau is highly stable. Comparison of the alpine meadow ecosystem with five other natural grassland ecosystems in Israel and southern Africa indicates that the alpine meadow ecosystem on the Qinghai-Tibetan Plateau is the most stable of them. That an alpine meadow ecosystem with a relatively simple structure has high stability indicates that community stability is correlated not only with biodiversity and community complexity but also with environmental stability. An average oscillation cycle of 3-4 years was found in annual precipitation, annual mean air temperature, net primary production and the population sizes of consumers in the Haibei natural ecosystem. The high stability of the alpine meadow ecosystem may also result from the adaptation of the ecosystem to the alpine environment.
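
The coefficient of variation used here is the standard Cv = σ/μ. The study's stability coefficient E is not defined in the abstract; the sketch below computes Cv and, as an assumed reading of E, the regression slope of relative NPP deviations on relative precipitation deviations. All series are hypothetical, not Haibei station data.

```python
# Cv for interannual series, plus an assumed form of the stability
# coefficient E (slope of relative NPP change vs. relative climate
# change). Series below are hypothetical stand-ins.
import numpy as np

npp    = np.array([310, 345, 330, 362, 318, 341, 355, 327], float)  # NPP
precip = np.array([426, 512, 468, 590, 430, 520, 555, 480], float)  # mm/yr

def cv(x):
    return x.std(ddof=1) / x.mean() * 100

d_npp = (npp - npp.mean()) / npp.mean()
d_pre = (precip - precip.mean()) / precip.mean()
E = np.polyfit(d_pre, d_npp, 1)[0]   # sensitivity of NPP to precipitation

print(f"Cv(NPP) = {cv(npp):.1f}%, Cv(precip) = {cv(precip):.1f}%, E = {E:.3f}")
```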

Relevance: 90.00%

Abstract:

Unstable arterial plaque is likely the key component of atherosclerosis, a disease responsible for two-thirds of heart attacks and strokes, leading to approximately 1 million deaths in the United States. Ultrasound imaging is able to detect plaque but as yet cannot distinguish unstable plaque from stable plaque. In this work a scanning acoustic microscope (SAM) was implemented and validated as a tool to measure the acoustic properties of a sample. The goal for the SAM is to provide quantitative measurements of the acoustic properties of different plaque types, in order to understand the physical basis by which plaque may be identified acoustically. The SAM consists of a spherically focused transducer that operates in pulse-echo mode and is scanned in a 2D raster pattern over a sample. A plane-wave analysis is presented which allows the impedance, attenuation and phase velocity of a sample to be determined from measurements of the echoes from the front and back of the sample. After the measurements, the attenuation and phase velocity were analysed to ensure that they were consistent with causality. The backscatter coefficient of the samples was obtained using the technique outlined by Chen et al. [8]. The transducer used here was able to determine acoustic properties over 10-40 MHz. The results for the impedance, attenuation and phase velocity were validated against published results for high- and low-density polyethylene. The plane-wave approximation was validated by measuring the properties throughout the focal region and over a range of incidence angles of the transducer. The SAM was used to characterize a set of recipes for tissue-mimicking phantoms that demonstrate independent control over the impedance, attenuation, phase velocity and backscatter coefficient. An initial feasibility study on a human artery was also performed.
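
The plane-wave analysis mentioned above reduces to a few textbook relations. Here is a minimal sketch, assuming a water-coupled flat sample and echo amplitudes normalized to a perfect reflector; all numbers are hypothetical:

```python
# Plane-wave extraction of impedance and attenuation from front/back
# echoes in pulse-echo mode. Dispersion and diffraction are ignored.
import numpy as np

Z_w = 1.48e6    # Rayl, impedance of the coupling water
d = 1.0e-3      # sample thickness (m), assumed
A_front = 0.35  # front echo, normalized to a perfect reflector
A_back = 0.10   # back-wall echo, same normalization

# Impedance from the front-surface reflection coefficient R:
R = A_front
Z_s = Z_w * (1 + R) / (1 - R)

# Attenuation from the back echo, corrected for two-way transmission
# through the front interface (1+R)(1-R) and the back reflection (R):
alpha = -np.log(A_back / ((1 + R) * (1 - R) * R)) / (2 * d)  # Np/m
print(f"Z = {Z_s/1e6:.2f} MRayl, alpha = {alpha:.0f} Np/m")
```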

Relevance: 90.00%

Abstract:

It is well documented that the presence of even a few air bubbles in water can significantly alter the propagation and scattering of sound. Air bubbles are both naturally and artificially generated in all marine environments, especially near the sea surface. The ability to measure the acoustic propagation parameters of bubbly liquids in situ has long been a goal of the underwater acoustics community. One promising solution is a submersible, thick-walled, liquid-filled impedance tube. Recent water-filled impedance tube work was successful at characterizing low-void-fraction bubbly liquids in the laboratory [1]. This work details the modifications made to the existing impedance tube design to allow for submerged deployment in a controlled environment, such as a large tank or a test pond. As well as being submersible, the new device has a usable frequency range increased from 5-9 kHz to 1-16 kHz and requires no calibration. The opening of the new impedance tube is fitted with a large stainless steel flange to better define the boundary condition on the plane of the tube opening. The new device was validated against the classic theoretical result for the complex reflection coefficient of a tube opening fitted with an infinite flange. The complex reflection coefficient was then measured with a bubbly liquid (bubble radius of order 250 microns, void fraction 0.1-0.5%) outside the tube opening. Results from the bubbly liquid experiments were inconsistent with flanged-tube theory using current bubbly liquid models; they more closely matched unflanged-tube theory, suggesting that the high attenuation and phase speeds in the bubbly liquid made the tube opening appear as if it were radiating into free space.
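
The classic flanged-opening result used for validation can be approximated by the radiation impedance of a baffled circular piston. The sketch below uses that standard approximation; it is not the authors' code.

```python
# Complex reflection coefficient at a flanged tube opening, modeled
# with the baffled-piston radiation impedance zr = R1 + i*X1, where
# R1(x) = 1 - 2*J1(x)/x and X1(x) = 2*H1(x)/x evaluated at x = 2*ka.
import numpy as np
from scipy.special import j1, struve

def flanged_reflection(ka):
    """Reflection coefficient looking out of a flanged opening."""
    zr = (1 - j1(2*ka)/ka) + 1j*struve(1, 2*ka)/ka
    return (zr - 1) / (zr + 1)

for ka in (0.1, 0.5, 1.0):
    R = flanged_reflection(ka)
    print(f"ka = {ka}: |R| = {abs(R):.3f}, phase = {np.angle(R):.3f} rad")
```

At low ka the magnitude is close to 1, as expected for a nearly totally reflecting open end.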

Relevance: 90.00%

Abstract:

There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged. In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web. Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics meeting these criteria that each capture a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality. We use this framework to analyze a diverse set of Web reference traces. We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of the temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
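
To make the two metrics concrete, here is a minimal sketch computing the entropy of the popularity distribution and the coefficient of variation of inter-reference distances for a toy reference stream; the exact windowing and normalization used in the paper may differ.

```python
# Popularity component: Shannon entropy of reference frequencies.
# Correlation component: CV of gaps between successive references
# to the same object. The trace is a toy stand-in.
from collections import Counter
import math

trace = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "a"]

counts = Counter(trace)
n = len(trace)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())

last_seen, gaps = {}, []
for i, obj in enumerate(trace):
    if obj in last_seen:
        gaps.append(i - last_seen[obj])
    last_seen[obj] = i
mean = sum(gaps) / len(gaps)
std = (sum((g - mean) ** 2 for g in gaps) / len(gaps)) ** 0.5
print(f"entropy = {entropy:.2f} bits, interreference CV = {std / mean:.2f}")
```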

Relevance: 90.00%

Abstract:

Dry mixing of binary food powders was conducted in a 2-l lab-scale paddle mixer. Different types of food powders, such as paprika, oregano, black pepper, onion powder and salt, were used for the studies. A novel method based on a digital colour imaging (DCI) system was developed to measure the mixture quality (MQ) of binary food powder mixtures; the salt conductivity method was also used as an alternative measure of MQ. In the first part of the study the DCI method was developed, and it showed potential for assessing the MQ of binary powder mixes provided there was a large colour difference between the powders. In the second and third parts of the study the effects of composition, water content, particle size and bulk density on MQ were studied, and the flowability of the powders at various moisture contents was also investigated. Mixing behaviour was assessed using the coefficient of variation. Results showed that water content and composition influence the mixing behaviour of powders. Good mixing was observed up to size ratios of 4.45; at higher ratios MQ deteriorated. Bulk density had a larger influence on MQ. In the final study the MQ evaluation of binary and ternary powder mixtures was compared using two methods, the salt conductivity method and the DCI method. Two binary and two quaternary food powder mixtures with differently coloured ingredients were studied. Overall, the results showed that the DCI method has potential for industrial use and can analyse powder mixtures whose components differ in colour and are not segregating in nature.
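
Mixture quality by coefficient of variation is computed from the tracer concentration of spot samples drawn from the mixer; a minimal sketch with hypothetical samples (lower CV means better mixing):

```python
# Mixture quality as the CV of tracer concentration across spot
# samples from the mixer. Sample values are hypothetical.
import numpy as np

spot_samples = np.array([0.21, 0.18, 0.22, 0.19, 0.20, 0.23, 0.17, 0.20])

cv = spot_samples.std(ddof=1) / spot_samples.mean() * 100
print(f"mixture-quality CV = {cv:.1f}%")
```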

Relevance: 90.00%

Abstract:

Anti-neutrophil cytoplasmic antibodies (ANCA) are diagnostic markers for systemic vasculitis. They are classically detected by an indirect immunofluorescence (IIF) test using normal donor neutrophils as substrate; this assay lacks antigenic specificity and is not quantitative. The 'EC/BCR Project for ANCA Assay Standardization' is an international collaborative study with the aim of developing and standardizing solid-phase assays for ANCA detection. In this part of the study, the isolation and characterization of proteinase-3 and myeloperoxidase, the two main target molecules of ANCA, and the development and standardization of ELISAs with these antigens are described. Six laboratories successfully isolated purified proteinase-3 preparations that could be used. Three of these preparations, together with one myeloperoxidase preparation, were subsequently used for ANCA testing by ELISA. The ELISA technique was standardized in two rounds of testing in the 14 participating laboratories; the coefficient of variation of these new assays decreased from approximately 50% in the first round to approximately 20% in the second. We conclude that purified proteinase-3 and myeloperoxidase can be used in standardized ELISAs for ANCA detection. Whether such procedures offer advantages over the IIF test will be determined in a prospective clinical study.

Relevance: 90.00%

Abstract:

Articular cartilage possesses complex mechanical properties that give healthy joints the ability to bear repeated loads and maintain smooth articulating surfaces over an entire lifetime. In this study, we utilized a fiber-reinforced composite scaffold designed to mimic the anisotropic, nonlinear, and viscoelastic biomechanical characteristics of native cartilage as the basis for developing functional tissue-engineered constructs. Three-dimensionally woven poly(ε-caprolactone) (PCL) scaffolds were encapsulated in a fibrin hydrogel, seeded with human adipose-derived stem cells, and cultured for 28 days under chondrogenic conditions. Biomechanical testing showed that the PCL-based constructs exhibited baseline compressive and shear properties similar to those of native cartilage and maintained these properties throughout the culture period, while supporting the synthesis of a collagen-rich extracellular matrix. Further, the constructs displayed an equilibrium coefficient of friction similar to that of native articular cartilage (μ_eq ≈ 0.1-0.3) over the prescribed culture period. Our findings show that three-dimensionally woven PCL-fibrin composite scaffolds can be produced with cartilage-like mechanical properties, and that these engineered properties can be maintained in culture while the seeded stem cells regenerate a new, functional tissue construct.
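
An equilibrium friction coefficient is typically obtained by letting the tangential force plateau under a constant normal load; a minimal sketch with simulated force traces (not the study's protocol details):

```python
# Equilibrium friction coefficient mu_eq = tangential/normal force,
# averaged after the response has plateaued. Traces are hypothetical.
import numpy as np

t = np.linspace(0, 600, 601)                    # time (s)
F_normal = np.full_like(t, 10.0)                # N, constant load
F_tangential = 1.0 + 1.5 * (1 - np.exp(-t/120)) # N, rises to equilibrium

eq = t > 500                                    # equilibrium window
mu_eq = (F_tangential[eq] / F_normal[eq]).mean()
print(f"mu_eq = {mu_eq:.2f}")                   # cartilage-like: ~0.1-0.3
```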

Relevance: 90.00%

Abstract:

While the Stokes-Einstein (SE) equation predicts that the diffusion coefficient of a solute will be inversely proportional to the viscosity of the solvent, this relation is commonly known to fail for solutes that are the same size as or smaller than the solvent molecules. Multiple researchers have reported that for small solutes the diffusion coefficient is inversely proportional to the viscosity raised to a fractional power, so that such solutes actually diffuse faster than SE predicts. In other solvent systems, attractive solute-solvent interactions, such as hydrogen bonding, are known to retard the diffusion of a solute; some researchers have interpreted this slower diffusion as the effective diffusion of a larger complex of the solute and solvent molecules. We have developed and used a novel micropipette technique that can form and hold a single microdroplet of water while it dissolves, in a diffusion-controlled environment, into the solvent. This method has been used to examine the diffusion of water in both n-alkanes and n-alcohols. It was found that the polar solute water, diffusing in a solvent with which it cannot hydrogen bond, closely resembles small nonpolar solutes such as xenon and krypton diffusing in n-alkanes, with diffusion coefficients ranging from 12.5 × 10⁻⁵ cm²/s for water in n-pentane to 1.15 × 10⁻⁵ cm²/s for water in n-hexadecane. Diffusion coefficients were inversely proportional to viscosity raised to a fractional power, and were faster than SE predicts. For water diffusing in n-alcohols, with which it can hydrogen bond, diffusion coefficients ranged from 1.75 × 10⁻⁵ cm²/s in methanol to 0.364 × 10⁻⁵ cm²/s in n-octanol, and diffusion was slower than in an alkane of corresponding viscosity. We find no evidence for solute-solvent complex diffusion; rather, it is possible that the small solute water is retarded by relatively longer residence times (compared with non-hydrogen-bonding solvents) as it moves through the liquid.
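
The comparison described above rests on the Stokes-Einstein relation D = kB·T/(6πηr) and a fractional power-law fit D ∝ η^(-p). A minimal sketch: the endpoint diffusivities echo the values reported above, but the intermediate points and viscosities are invented for illustration.

```python
# Stokes-Einstein prediction versus a fractional power-law fit.
import numpy as np

kB, T = 1.380649e-23, 298.15  # J/K, K
r = 1.4e-10                   # hydrodynamic radius (m), assumed for water

def stokes_einstein(eta):
    """D = kB*T / (6*pi*eta*r), stick boundary conditions."""
    return kB * T / (6 * np.pi * eta * r)

eta    = np.array([2.24e-4, 4.0e-4, 9.2e-4, 3.03e-3])   # Pa*s, pentane..hexadecane-like
D_meas = np.array([12.5e-9, 7.8e-9, 3.9e-9, 1.15e-9])   # m^2/s (12.5..1.15 x10^-5 cm^2/s)

# Fractional exponent p from a log-log fit: log D = c - p*log(eta)
slope, intercept = np.polyfit(np.log(eta), np.log(D_meas), 1)
print(f"fitted exponent p = {-slope:.2f} (Stokes-Einstein predicts p = 1)")
print(f"SE prediction at eta = {eta[0]} Pa*s: D = {stokes_einstein(eta[0]):.2e} m^2/s")
```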

Relevance: 90.00%

Abstract:

As many as 20-70% of patients undergoing breast-conserving surgery require repeat surgery because a close or positive surgical margin is diagnosed post-operatively [1]; there are currently no widely accepted tools for intra-operative margin assessment, which is a significant unmet clinical need. Our group has developed a first-generation optical visible spectral imaging platform to image the molecular composition of breast tumor margins and has tested it clinically in 48 patients in a previously published study [2]. The goal of this paper is to report the performance metrics of the system and compare them to clinical criteria for intra-operative tumor margin assessment. The system was found to have an average signal-to-noise ratio (SNR) >100 and <15% error in the extraction of optical properties, indicating that there is sufficient SNR to leverage the differences in optical properties between negative and close/positive margins. The probe had a sensing depth of 0.5-2.2 mm over the wavelength range of 450-600 nm, which is consistent with the pathologic criterion of 0-2 mm for clear margins. There was <1% cross-talk between adjacent channels of the multi-channel probe, showing that multiple sites can be measured simultaneously with negligible interference between adjacent sites. Lastly, the system and measurement procedure were found to be reproducible under repeated measures, with a low coefficient of variation (<0.11). The only aspect of the system not yet optimized for intra-operative use is the imaging time; the manuscript discusses how the speed of the system can be improved to work within the time constraints of an intra-operative setting.
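
The SNR and reproducibility figures quoted are straightforward to compute from repeated spectra; a minimal sketch with simulated repeat measurements standing in for the instrument data:

```python
# Per-wavelength SNR and coefficient of variation from repeated
# spectral scans. Spectra below are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(0)
true_spectrum = 1000 + 200 * np.sin(np.linspace(0, np.pi, 50))
repeats = true_spectrum + rng.normal(0, 5, size=(10, 50))  # 10 repeated scans

snr = repeats.mean(axis=0) / repeats.std(axis=0, ddof=1)
cv = repeats.std(axis=0, ddof=1) / repeats.mean(axis=0)
print(f"mean SNR = {snr.mean():.0f}, mean CV = {cv.mean():.3f}")
```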

Relevance: 90.00%

Abstract:

Thermal-optical analysis is a conventional method for classifying carbonaceous aerosols as organic carbon (OC) and elemental carbon (EC). This article examines the effects of three different temperature protocols on the measured EC. For analyses of parallel punches from the same ambient sample, the protocol with the highest peak helium-mode temperature (870°C) gives the smallest amount of EC, while the protocol with the lowest peak helium-mode temperature (550°C) gives the largest; these differences are observed whether sample transmission or reflectance is used to define the OC/EC split. An important issue is the effect of the peak helium-mode temperature on the relative rates at which different types of carbon with different optical properties evolve from the filter. Analyses of solvent-extracted samples demonstrate that high temperatures (870°C) lead to premature EC evolution in the helium mode. For samples collected in Pittsburgh, this biases the measured EC low, because the attenuation coefficient of pyrolyzed carbon is consistently higher than that of EC. While this problem can be avoided by lowering the peak helium-mode temperature, analyses of wood-smoke-dominated ambient samples and levoglucosan-spiked filters indicate that too low a peak helium-mode temperature (550°C) allows non-light-absorbing carbon to slip into the oxidizing mode of the analysis; if this carbon evolves after the OC/EC split, it biases the EC measurement high. Given the complexity of ambient aerosols, there is unlikely to be a single peak helium-mode temperature at which both of these biases can be avoided.
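
In transmission-based thermal-optical analysis, the OC/EC split is placed where the laser transmission recovers to its initial value, and carbon evolving after that point is counted as EC. A minimal sketch of that split logic with illustrative signals, not real thermograms:

```python
# Transmission-defined OC/EC split: EC is the carbon evolving after
# the laser transmission recovers to its initial value.
import numpy as np

t = np.linspace(0, 10, 201)                              # analysis time (min)
laser = np.where(t < 4, 1 - 0.15*t, 0.4 + 0.2*(t - 4))   # transmission trace
carbon_rate = np.full_like(t, 0.5)                       # carbon evolution (ug/min)

i0 = laser[0]
dip = np.argmin(laser)                                   # end of pyrolytic darkening
split = dip + np.argmax(laser[dip:] >= i0)               # first recovery to i0
oc = np.trapz(carbon_rate[:split], t[:split])
ec = np.trapz(carbon_rate[split:], t[split:])
print(f"split at t = {t[split]:.1f} min, OC = {oc:.1f} ug, EC = {ec:.1f} ug")
```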

Relevance: 90.00%

Abstract:

A sample of 210 published data sets was assembled that (a) plotted amount remembered versus time, (b) had 5 or more points, and (c) were smooth enough to fit at least one of the functions tested with a correlation coefficient of .90 or greater. Each was fit to 105 different 2-parameter functions. The best fits were to the logarithmic function, the power function, the exponential in the square root of time, and the hyperbola in the square root of time. It is difficult to distinguish among these four functions with the available data, but the same set of four functions fit most data sets, with autobiographical memory being the exception. Theoretical motivations for the best-fitting functions are offered, and the methodological problems of evaluating functions and the advantages of searching existing data for regularities before formulating theories are considered.
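
As a sketch of the fitting procedure, the four best functions can be fit by nonlinear least squares and compared by the correlation between data and fit; the retention data below are hypothetical:

```python
# Fit the four best retention functions and compare the correlation
# between observed and fitted values. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 4, 8, 16, 32, 64], float)              # delay
y = np.array([0.82, 0.74, 0.66, 0.58, 0.51, 0.44, 0.38])   # proportion recalled

models = {
    "logarithmic":    lambda t, a, b: a - b * np.log(t),
    "power":          lambda t, a, b: a * t**(-b),
    "exp-sqrt":       lambda t, a, b: a * np.exp(-b * np.sqrt(t)),
    "hyperbola-sqrt": lambda t, a, b: 1.0 / (a + b * np.sqrt(t)),
}

for name, f in models.items():
    params, _ = curve_fit(f, t, y, p0=(1.0, 0.1), maxfev=10000)
    r = np.corrcoef(y, f(t, *params))[0, 1]
    print(f"{name:15s} r = {r:.3f}")
```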

Relevance: 90.00%

Abstract:

BACKGROUND: Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. METHODS AND FINDINGS: Studies were identified by searching the electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as percent coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which fewer than half presented data on bias and misclassification against the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl, while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared with the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and 7-68% for downward classification, resulting in over-treatment. Fewer than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained. CONCLUSIONS: A wide range of bias and percent misclassification around treatment thresholds was reported for the CD4 enumeration technologies included in this review, with few studies reporting assay precision. The lack of standardised methodology for test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.
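
The accuracy measures named here, Bland-Altman bias with limits of agreement plus misclassification around a treatment threshold, can be computed as in this minimal sketch; the paired counts are hypothetical:

```python
# Bland-Altman bias / limits of agreement against a reference method,
# and percent misclassification around the 350 cells/ul threshold.
import numpy as np

reference = np.array([180, 250, 320, 340, 360, 410, 480, 520], float)
test      = np.array([195, 240, 300, 365, 330, 400, 505, 490], float)

diff = test - reference
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

threshold = 350
up   = np.mean((reference < threshold) & (test >= threshold)) * 100  # under-treatment
down = np.mean((reference >= threshold) & (test < threshold)) * 100  # over-treatment

print(f"bias = {bias:+.1f} cells/ul, LoA = ({loa[0]:.1f}, {loa[1]:.1f})")
print(f"misclassified upward {up:.0f}%, downward {down:.0f}%")
```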

Relevance: 90.00%

Abstract:

Insulin-like signaling regulates developmental arrest, stress resistance and lifespan in the nematode Caenorhabditis elegans. However, the genome encodes 40 insulin-like peptides, and the regulation and function of the individual peptides are largely uncharacterized. We used the nCounter platform to measure mRNA expression of all 40 insulin-like peptides as well as the insulin-like receptor daf-2, its transcriptional effector daf-16, and the daf-16 target gene sod-3. We validated the platform using 53 RNA samples previously characterized by high-density oligonucleotide microarray analysis; for this set of genes and the standard nCounter protocol, sensitivity and precision were comparable between the two platforms. We optimized the conditions of the nCounter assay by varying the mass of total RNA used for hybridization, thereby increasing sensitivity up to 50-fold and reducing the median coefficient of variation by as much as 4-fold. We used deletion mutants to demonstrate the specificity of the assay, and we used the optimized conditions to assay insulin-like gene expression throughout the C. elegans life cycle. We detected expression of nearly all insulin-like genes and found that they are expressed in a variety of distinct patterns, suggesting complexity of regulation and specificity of function. We identified insulin-like genes that are specifically expressed during developmental arrest, larval development, adulthood and embryogenesis. These results demonstrate that the nCounter platform provides a powerful approach to analyzing insulin-like gene expression dynamics, and they suggest hypotheses about the functions of individual insulin-like genes.