265 results for Size differentiation
Abstract:
Adverse health effects caused by worker exposure to ultrafine particles have been detected in recent years. The scientific community focuses on the assessment of ultrafine aerosols in different microenvironments in order to determine the related worker exposure/dose levels. To this end, particle size distribution measurements have to be taken along with total particle number concentrations. The latter are obtainable through hand-held monitors. A portable particle size distribution analyzer (Nanoscan SMPS 3910, TSI Inc.) was recently commercialized, but so far no metrological assessment has been performed to characterize its performance with respect to well-established laboratory-based instruments such as the scanning mobility particle sizer (SMPS) spectrometer. The present paper compares the aerosol monitoring capability of the Nanoscan SMPS to the laboratory SMPS in order to evaluate whether the Nanoscan SMPS is suitable for field experiments designed to characterize particle exposure in different microenvironments. Tests were performed both in a Marple calm air chamber, where fresh diesel particulate matter and atomized dioctyl phthalate particles were monitored, and in microenvironments, where outdoor, urban, indoor aged, and indoor fresh aerosols were measured. Results show that the Nanoscan SMPS is able to properly measure the particle size distribution for each type of aerosol investigated, but it overestimates the total particle number concentration in the case of fresh aerosols. In particular, the test performed in the Marple chamber showed total concentrations up to twice those measured by the laboratory SMPS—likely because of the inability of the Nanoscan SMPS unipolar charger to properly charge aerosols made up of aggregated particles. Based on these findings, when field test exposure studies are conducted, the Nanoscan SMPS should be used in tandem
Abstract:
Electronic cigarette-generated mainstream aerosols were characterized in terms of particle number concentrations and size distributions through a Condensation Particle Counter and a Fast Mobility Particle Sizer spectrometer, respectively. A thermodilution system was also used to properly sample and dilute the mainstream aerosol. Different types of electronic cigarettes, liquid flavors, liquid nicotine contents, as well as different puffing times were tested. Conventional tobacco cigarettes were also investigated. The total particle number concentration peak (for a 2-s puff), averaged across the different electronic cigarette types and liquids, was measured to be 4.39 ± 0.42 × 10⁹ part. cm⁻³, comparable to that of the conventional cigarette (3.14 ± 0.61 × 10⁹ part. cm⁻³). Puffing times and nicotine contents were found to influence the particle concentration, whereas no significant differences were recognized in terms of flavors and types of cigarettes used. Particle number distribution modes of the electronic cigarette-generated aerosol were in the 120–165 nm range, similar to that of the conventional cigarette.
Abstract:
Flos Chrysanthemum is a generic name for a particular group of edible plants, which also have medicinal properties. There are, in fact, twenty to thirty different cultivars, which are commonly used in beverages and for medicinal purposes. In this work, four Flos Chrysanthemum cultivars, Hangju, Taiju, Gongju, and Boju, were collected, and chromatographic fingerprints were used to distinguish and assess these cultivars for quality control purposes. Chromatographic fingerprints contain chemical information but also often have baseline drifts and peak shifts, which complicate data processing; adaptive iteratively reweighted penalized least squares and correlation optimized warping were applied to correct the fingerprint peaks. The adjusted data were submitted to unsupervised and supervised pattern recognition methods. Principal component analysis was used to qualitatively differentiate the Flos Chrysanthemum cultivars. Partial least squares, continuum power regression, and K-nearest neighbors were used to predict the unknown samples. Finally, the elliptic joint confidence region method was used to evaluate the prediction ability of these models. The partial least squares and continuum power regression methods were shown to best represent the experimental results.
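The unsupervised step of the pipeline above can be sketched in a few lines. This is a generic principal component analysis via SVD on hypothetical fingerprint data, not the paper's actual chromatograms or preprocessing chain; the two toy "cultivars" and their peak patterns are invented for illustration:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto their first principal components.

    X: (n_samples, n_features) matrix of chromatographic fingerprints,
    one row per sample. Returns the scores in PC space.
    """
    Xc = X - X.mean(axis=0)            # centre each retention-time channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores on the leading components

# Toy fingerprints: two hypothetical "cultivars" with different peak patterns.
rng = np.random.default_rng(0)
base_a = np.array([1.0, 0.2, 0.0, 0.8])
base_b = np.array([0.1, 1.0, 0.9, 0.0])
X = np.vstack([base_a + 0.05 * rng.standard_normal(4) for _ in range(5)]
              + [base_b + 0.05 * rng.standard_normal(4) for _ in range(5)])
scores = pca_scores(X)
# Samples of the same cultivar cluster together along PC1.
```

Because the between-cultivar difference dominates the within-cultivar noise, the first component separates the two groups, which is the qualitative differentiation the abstract describes.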
Abstract:
In fisheries managed using individual transferable quotas (ITQs) it is generally assumed that quota markets are well-functioning, allowing quota to flow on either a temporary or permanent basis to those able to make best use of it. However, despite an increasing number of fisheries being managed under ITQs, empirical assessments of the quota markets that have actually evolved in these fisheries remain scarce. The Queensland Coral Reef Fin-Fish Fishery (CRFFF) on the Great Barrier Reef has been managed under a system of ITQs since 2004. Data on individual quota holdings and trades for the period 2004-2012 were used to assess the CRFFF quota market and its evolution through time. Network analysis was applied to assess market structure and the nature of lease-trading relationships. An assessment of market participants’ abilities to balance their quota accounts, i.e., gap analysis, provided insights into market functionality and how this may have changed in the period observed. Trends in ownership and trade were determined, and market participants were identified as belonging to one of seven generalized types. The emergence of groups such as investors and lease-dependent fishers is clear. In 2011-2012, 41% of coral trout quota was owned by participants that did not fish it, and 64% of total coral trout landings were made by fishers that owned only 10% of the quota. Quota brokers emerged whose influence on the market varied with the bioeconomic conditions of the fishery. Throughout the study period some quota was found to remain inactive, implying potential market inefficiencies. Contribution to this inactivity appeared asymmetrical, with most residing in the hands of smaller quota holders. The importance of transaction costs in the operation of the quota market and the inequalities that may result are discussed in light of these findings.
Abstract:
To gain insight into the mechanisms by which the Myb transcription factor controls normal hematopoiesis and particularly, how it contributes to leukemogenesis, we mapped the genome-wide occupancy of Myb by chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) in ERMYB myeloid progenitor cells. By integrating the genome occupancy data with whole genome expression profiling data, we identified a Myb-regulated transcriptional program. Gene signatures for leukemia stem cells, normal hematopoietic stem/progenitor cells and myeloid development were overrepresented in 2368 Myb regulated genes. Of these, Myb bound directly near or within 793 genes. Myb directly activates some genes known critical in maintaining hematopoietic stem cells, such as Gfi1 and Cited2. Importantly, we also show that, despite being usually considered as a transactivator, Myb also functions to repress approximately half of its direct targets, including several key regulators of myeloid differentiation, such as Sfpi1 (also known as Pu.1), Runx1, Junb and Cebpb. Furthermore, our results demonstrate that interaction with p300, an established coactivator for Myb, is unexpectedly required for Myb-mediated transcriptional repression. We propose that the repression of the above mentioned key pro-differentiation factors may contribute essentially to Myb's ability to suppress differentiation and promote self-renewal, thus maintaining progenitor cells in an undifferentiated state and promoting leukemic transformation. © 2011 The Author(s).
Abstract:
Multiple sclerosis is a common disease of the central nervous system in which the interplay between inflammatory and neurodegenerative processes typically results in intermittent neurological disturbance followed by progressive accumulation of disability. Epidemiological studies have shown that genetic factors are primarily responsible for the substantially increased frequency of the disease seen in the relatives of affected individuals, and systematic attempts to identify linkage in multiplex families have confirmed that variation within the major histocompatibility complex (MHC) exerts the greatest individual effect on risk. Modestly powered genome-wide association studies (GWAS) have enabled more than 20 additional risk loci to be identified and have shown that multiple variants exerting modest individual effects have a key role in disease susceptibility. Most of the genetic architecture underlying susceptibility to the disease remains to be defined and is anticipated to require the analysis of sample sizes that are beyond the numbers currently available to individual research groups. In a collaborative GWAS involving 9,772 cases of European descent collected by 23 research groups working in 15 different countries, we have replicated almost all of the previously suggested associations and identified at least a further 29 novel susceptibility loci. Within the MHC we have refined the identity of the HLA-DRB1 risk alleles and confirmed that variation in the HLA-A gene underlies the independent protective effect attributable to the class I region. Immunologically relevant genes are significantly overrepresented among those mapping close to the identified loci and particularly implicate T-helper-cell differentiation in the pathogenesis of multiple sclerosis. © 2011 Macmillan Publishers Limited. All rights reserved.
Abstract:
Background: Body cell mass (BCM) may be estimated in clinical practice to assess functional nutritional status, e.g., in patients with anorexia nervosa. Interpretation of the data, especially in younger patients who are still growing, requires appropriate adjustment for size. Previous investigations of this general issue have addressed chemical rather than functional components of body composition and have not considered patients at the extremes of nutritional status, in whom the ability to make longitudinal comparisons is of particular importance. Objective: Our objective was to determine the power by which height should be raised to adjust BCM for height in women of differing nutritional status. Design: BCM was estimated by K-40 counting in 58 healthy women, 33 healthy female adolescents, and 75 female adolescents with anorexia nervosa. The relation between BCM and height was explored in each group by using log-log regression analysis. Results: The powers by which height should be raised to adjust BCM were 1.73, 1.73, and 2.07 in the women, healthy female adolescents, and anorexic female adolescents, respectively. A simplified version of the index, BCM/height², was appropriate for all 3 categories and was negligibly correlated with height. Conclusions: In normal-weight women, the relation between height and BCM is consistent with that reported previously between height and fat-free mass. Although the consistency of the relation between BCM and fat-free mass decreases with increasing weight loss, the relation between height and BCM is not significantly different between normal-weight and underweight women. The index BCM/height² is easy to calculate and applicable to both healthy and underweight women. This information may be helpful in interpreting body-composition data in clinical practice.
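The log-log regression in the Design section amounts to fitting log(BCM) = log(a) + p·log(height) and reading off the slope p. A minimal sketch on synthetic data (the study's measurements are not reproduced here; the generating power of 2 and the noise level are assumptions for the demonstration):

```python
import numpy as np

def height_power(bcm_kg, height_m):
    """Estimate p in BCM ∝ height^p by log-log regression:
    log(BCM) = log(a) + p * log(height); the slope is p."""
    p, log_a = np.polyfit(np.log(height_m), np.log(bcm_kg), 1)
    return p

# Synthetic check: data generated with p = 2 should recover p ≈ 2,
# in which case BCM/height**2 is approximately height-free.
rng = np.random.default_rng(1)
height = rng.uniform(1.5, 1.8, 200)
bcm = 8.0 * height**2 * np.exp(0.02 * rng.standard_normal(200))
p_hat = height_power(bcm, height)
```

The recovered slope near 2 is why an index of the form BCM/height² ends up negligibly correlated with height in such data.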
Abstract:
Atheromatous plaque rupture is the cause of the majority of strokes and heart attacks in the developed world. The role of calcium deposits and their contribution to plaque vulnerability are controversial. Some studies have suggested that calcified plaque tends to be more stable whereas others have suggested the opposite. This study uses a finite element model to evaluate the effect of calcium deposits on the stress within the fibrous cap by varying their location and size. Plaque fibrous cap, lipid pool and calcification were modeled as hyperelastic, isotropic, (nearly) incompressible materials with different properties for large deformation analysis by assigning time-dependent pressure loading on the lumen wall. The stress and strain contours were illustrated for each condition for comparison. Von Mises stress increases only up to 1.5% when varying the location of calcification in the lipid pool distant to the fibrous cap. Calcification in the fibrous cap leads to a 43% increase of von Mises stress when compared with that in the lipid pool. A 100% increase in calcification area leads to a 15% stress increase in the fibrous cap. Calcification in the lipid pool does not increase fibrous cap stress when it is distant to the fibrous cap, whilst large areas of calcification close to or in the fibrous cap may lead to a high stress concentration within the fibrous cap, which may cause plaque rupture. This study highlights the application of a computational model to the simulation of clinical problems, and it may provide insights into the mechanism of plaque rupture.
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual's previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
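The core object here, the size-transition matrix, can be sketched as follows. The paper moment-matches skew-normal distributions derived from a tagging model with individual variability; this simplified sketch instead uses a plain normal increment around the von Bertalanffy mean and the crudest treatment of negative growth (folding that probability mass back into the starting class), so it illustrates the structure of an STM rather than the authors' estimator. All parameter values are invented:

```python
import numpy as np
from scipy.stats import norm

def size_transition_matrix(bins, k, l_inf, dt, sigma):
    """Build an STM whose entry [i, j] is P(grow from size class i to
    class j) over time step dt, using the von Bertalanffy mean increment
        E[dL] = (l_inf - L) * (1 - exp(-k * dt))
    with normally distributed variation (sd sigma). Probability of
    negative growth is folded back into the starting class.
    """
    mids = 0.5 * (bins[:-1] + bins[1:])      # size-class midpoints
    n = len(mids)
    stm = np.zeros((n, n))
    for i, L in enumerate(mids):
        mu = L + (l_inf - L) * (1.0 - np.exp(-k * dt))   # expected length
        probs = np.diff(norm.cdf(bins, loc=mu, scale=sigma))
        probs /= probs.sum()                 # renormalise onto the bins
        probs[:i] = 0.0                      # forbid negative growth...
        probs[i] += 1.0 - probs.sum()        # ...fold that mass into class i
        stm[i] = probs
    return stm

bins = np.linspace(10.0, 60.0, 11)           # ten 5 mm size classes
stm = size_transition_matrix(bins, k=0.3, l_inf=55.0, dt=1.0, sigma=2.0)
```

Each row is a valid probability distribution and the lower triangle is zero, which is exactly the "no negative growth" property the paper's method guarantees by construction.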
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing the mean values may become statistically inappropriate and even invalid when substantial proportions of the response values are below the detection limits or censored because strong distributional assumptions have to be made on the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need of imputing the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is a part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that by the traditional t-test, illustrating the merit of our method.
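The key point of the quantile approach is that a censored value still carries usable information: as long as the detection limit sits below the quantile being tested, we only need to know whether each observation exceeds the comparison threshold, so nothing has to be imputed. A Monte Carlo power sketch of that idea using a one-sided sign test on the median (the paper's actual quantile methodology and the monitoring data are not reproduced; the lognormal model, shift, and detection limit below are assumptions):

```python
import numpy as np
from scipy.stats import binomtest

def sign_test_power(true_median_shift, threshold, n, detection_limit,
                    n_sim=2000, alpha=0.05, seed=2):
    """Monte Carlo power of a one-sided sign test that the median exceeds
    `threshold`. Values below `detection_limit` are censored, but as long
    as detection_limit <= threshold the test only needs to know whether
    each value lies above the threshold, so no imputation is required.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.lognormal(mean=np.log(threshold) + true_median_shift,
                          sigma=0.5, size=n)
        x = np.where(x < detection_limit, detection_limit, x)  # censor
        above = int((x > threshold).sum())
        p = binomtest(above, n, 0.5, alternative='greater').pvalue
        rejections += p < alpha
    return rejections / n_sim

power = sign_test_power(true_median_shift=0.4, threshold=1.0,
                        n=40, detection_limit=0.3)
```

Varying `n` in such a simulation gives the sample size needed for a target power, without any distributional assumption on the censored observations.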
Abstract:
We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often have excessive variation, known as overdispersion, as compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, which is referred to as beta-multinomial, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, the confidence intervals have been substantially underestimated in previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods was also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
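A simplified sketch of the likelihood construction: if the catchability at each sampling is an independent Beta(a, b) draw, each catch is beta-binomial given the animals remaining, and the population size N can be profiled over a grid. This is an illustration of the random-catchability idea, not the paper's exact beta-multinomial formulation; the catch numbers and beta parameters below are invented:

```python
import numpy as np
from scipy.special import betaln, gammaln

def removal_loglik(N, catches, a, b):
    """Log-likelihood of successive removal catches when catchability at
    each sampling is an independent Beta(a, b) draw, so each catch is
    beta-binomial given the animals remaining.
    """
    ll, remaining = 0.0, N
    for c in catches:
        if c > remaining:
            return -np.inf                 # impossible catch sequence
        # beta-binomial log pmf: C(remaining, c) * B(c+a, remaining-c+b) / B(a, b)
        ll += (gammaln(remaining + 1) - gammaln(c + 1)
               - gammaln(remaining - c + 1)
               + betaln(c + a, remaining - c + b) - betaln(a, b))
        remaining -= c
    return ll

catches = [120, 70, 45]                    # three removal passes (toy data)
Ns = np.arange(sum(catches), 1001)
ll = [removal_loglik(N, catches, a=2.0, b=4.0) for N in Ns]
N_hat = Ns[int(np.argmax(ll))]             # profile-likelihood estimate of N
```

The extra beta variance is what widens the likelihood surface relative to a constant-catchability (Moran-type) model and hence yields the more honest confidence intervals the abstract describes.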
Abstract:
Natural mortality of marine invertebrates is often very high in the early life history stages and decreases in later stages. The possible size-dependent mortality of juvenile banana prawns, Penaeus merguiensis (2-15 mm carapace length), in the Gulf of Carpentaria was investigated. The analysis was based on data collected at 2-weekly intervals by beam trawls at four sites over a period of six years (between September 1986 and March 1992). It was assumed that mortality was a parametric function of size, rather than a constant. Another complication in estimating mortality for juvenile banana prawns is that a significant proportion of the population emigrates from the study area each year. This effect was accounted for by incorporating the size-frequency pattern of the emigrants in the analysis. Both the extra parameter in the model required to describe the size dependence of mortality and that used to account for emigration were found to be significantly different from zero, and the instantaneous mortality rate declined from 0.89 week⁻¹ for 2 mm prawns to 0.02 week⁻¹ for 15 mm prawns.
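To see the shape of that decline, one can pass a power law M(L) = α·L^(−β) through the two reported endpoint rates. The paper's actual parametric form is not stated in the abstract, so this is purely an illustration of a size-dependent mortality curve consistent with the two quoted values:

```python
import numpy as np

# Power law M(L) = alpha * L**(-beta) through the two reported rates:
# 0.89 week^-1 at 2 mm and 0.02 week^-1 at 15 mm carapace length.
L1, M1 = 2.0, 0.89
L2, M2 = 15.0, 0.02
beta = np.log(M1 / M2) / np.log(L2 / L1)   # ≈ 1.88
alpha = M1 * L1**beta

def mortality(L):
    """Illustrative instantaneous weekly mortality rate at length L (mm)."""
    return alpha * L**(-beta)
```

By construction the curve reproduces both endpoints and decreases monotonically in between, mirroring the high early-stage mortality the abstract describes.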
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg, and the number of fish species in trawls ranged from 60 to 138, and invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species that were recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weights) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
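The simulation logic, sorting a random fraction of a catch and counting the species detected, can be sketched on a toy catch. The skewed species-abundance pattern below is invented (the study's real catches had 71-445 kg and up to about 200 species), so the numbers differ from the paper's; only the qualitative behaviour carries over:

```python
import numpy as np

def species_found(catch_species, subsample_fraction, rng):
    """Fraction of a catch's species detected when only part of it is sorted.

    catch_species: array of species labels, one entry per individual.
    A random subsample of the stated fraction is 'sorted' and the
    distinct species in it counted.
    """
    n = len(catch_species)
    k = int(round(subsample_fraction * n))
    picked = rng.choice(catch_species, size=k, replace=False)
    return len(set(picked)) / len(set(catch_species))

# Toy catch: a few abundant species and many rare ones, mimicking the
# skewed composition of trawl catches.
rng = np.random.default_rng(3)
abundances = [500] * 5 + [50] * 20 + [1] * 75    # 100 species, most rare
catch = np.repeat(np.arange(len(abundances)), abundances)
small = np.mean([species_found(catch, 0.10, rng) for _ in range(50)])
large = np.mean([species_found(catch, 0.50, rng) for _ in range(50)])
```

Because rare species dominate the species list, small subsamples detect every abundant species but miss most singletons, which is why species counts climb so slowly with subsample size in the study.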
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or total gain for a fixed length of time because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain, and the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
Abstract:
Multi-objective optimization is an active field of research with broad applicability in aeronautics. This report details a variant of the original NSGA-II software aimed at improving the performance of this widely used genetic algorithm in finding the optimal Pareto front of a multi-objective optimization problem, for use in UAV and aircraft design and optimisation. The original NSGA-II works on a population of predetermined constant size, and its computational cost to evaluate one generation is O(mn²), where m is the number of objective functions and n the population size. The basic idea behind this work is to reduce the computational cost of the NSGA-II algorithm by making it work on a population of variable size, in order to obtain better convergence towards the Pareto front in less time. In this work several test functions are evaluated with both the original NSGA-II and VPNSGA-II algorithms; each test is timed in order to measure the computational cost of each trial, and the results are compared.
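The O(mn²) per-generation cost quoted above comes from NSGA-II's fast non-dominated sorting, which compares every pair of individuals on all m objectives. A minimal self-contained sketch of that step (minimisation convention; the five toy objective vectors are invented):

```python
def dominates(p, q):
    """p dominates q (minimisation): no worse in every objective,
    strictly better in at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def non_dominated_sort(points):
    """Rank objective vectors into Pareto fronts; the pairwise comparison
    loop is the O(m n^2) step that dominates NSGA-II's generation cost.
    Returns a list of fronts, each a list of indices; front 0 is the
    non-dominated set.
    """
    n = len(points)
    S = [[] for _ in range(n)]     # indices each point dominates
    counts = [0] * n               # how many points dominate each point
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                S[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:               # peel off successive fronts
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

points = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = non_dominated_sort(points)
```

Since the double loop scales with n², shrinking or varying the population size n, as the VPNSGA-II variant proposes, directly reduces the cost of every generation.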