932 results for Most Productive Scale Size


Relevance:

40.00%

Publisher:

Abstract:

The best-accepted method for design of autogenous and semi-autogenous (AG/SAG) mills is to carry out pilot scale test work using a 1.8 m diameter by 0.6 m long pilot scale test mill. The load in such a mill typically contains 250,000-450,000 particles larger than 6 mm, allowing correct representation of more than 90% of the charge in Discrete Element Method (DEM) simulations. Most AG/SAG mills use discharge grate slots which are 15 mm or more in width. The mass in each size fraction usually decreases rapidly below grate size. This scale of DEM model is now within the possible range of standard workstations running an efficient DEM code. This paper describes various ways of extracting collision data from the DEM model and translating it into breakage estimates. Account is taken of the different breakage mechanisms (impact and abrasion) and of the specific impact histories of the particles in order to assess the breakage rates for various size fractions in the mills. At some future time, the integration of smoothed particle hydrodynamics with DEM will allow for the inclusion of slurry within the pilot mill simulation. (C) 2004 Elsevier Ltd. All rights reserved.
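The abstract does not specify how collision data become breakage estimates; a common choice in the comminution literature is the JKMRC impact breakage relation t10 = A(1 - exp(-b·Ecs)). The sketch below assumes that model, with purely illustrative ore parameters A and b:

```python
import math

def t10_breakage(ecs, A=50.0, b=1.2):
    """JKMRC-style impact breakage model: t10 = A * (1 - exp(-b * Ecs)).

    t10 is the percent of broken product finer than one tenth of the
    parent particle size; ecs is the specific comminution energy (kWh/t).
    A and b are ore-specific parameters (illustrative values here).
    """
    return A * (1.0 - math.exp(-b * ecs))

def breakage_from_collisions(collision_energies_kwh, particle_mass_t, A=50.0, b=1.2):
    """Accumulate the specific energy a particle class receives over its
    DEM collision history and convert the total into a t10 estimate."""
    total_ecs = sum(collision_energies_kwh) / particle_mass_t
    return t10_breakage(total_ecs, A, b)
```

Under this form t10 rises toward the asymptote A as accumulated specific energy grows, so a few high-energy impacts dominate the estimate while many low-energy abrasion contacts contribute little.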


The patterns of rock comminution within tumbling mills, as well as the nature of the forces involved, are of significant practical importance. Discrete element modelling (DEM) has been used to analyse the pattern of specific energy applied to rock, in terms of spatial distribution within a pilot AG/SAG mill. We also analysed in some detail the nature of the forces that may result in rock comminution. In order to examine the distribution of energy applied within the mill, the DEM models were compared with measured particle mass losses in small-scale AG and SAG mill experiments. The intensity of contact stresses was estimated using the Hertz theory of elastic contacts. The results indicate that in the case of the AG mill, the highest intensity stresses and strains are likely to occur deep within the charge, and close to the base. This effect is probably more pronounced for large AG mills. In the SAG mill case, the impacts of the steel balls on the surface of the charge are likely to be the most potent. In both cases, the spatial pattern of medium-to-high energy collisions is affected by the rotational speed of the mill. Based on an assumed damage threshold for rock, in terms of specific energy introduced per single collision, the spatial pattern of productive collisions within each charge was estimated and compared with rates of mass loss. We also investigated the nature of the comminution process within the AG vs. SAG mill, in order to explain the observed differences in energy utilisation efficiency between the two types of milling. All experiments were performed using a laboratory scale mill of 1.19 m diameter and 0.31 m length, equipped with 14 square section lifters of height 40 mm. (C) 2006 Elsevier Ltd. All rights reserved.
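The Hertz elastic-contact estimate mentioned above has a closed form for two spheres; a minimal sketch (the material constants are illustrative, not the values used in the study):

```python
import math

def hertz_sphere_contact(F, R1, R2, E1, E2, nu1=0.3, nu2=0.3):
    """Hertz theory for two elastic spheres in normal contact.

    F: normal force [N]; R1, R2: sphere radii [m];
    E1, E2: Young's moduli [Pa]; nu1, nu2: Poisson's ratios.
    Returns (contact_radius_m, max_pressure_Pa).
    """
    # effective modulus and effective radius of the contact pair
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    R = 1.0 / (1.0 / R1 + 1.0 / R2)
    a = (3.0 * F * R / (4.0 * E_star)) ** (1.0 / 3.0)  # contact radius
    p0 = 3.0 * F / (2.0 * math.pi * a**2)              # peak pressure
    return a, p0
```

Because the peak pressure scales as F^(1/3)·E*^(2/3), a stiff steel ball striking the charge surface concentrates markedly higher stress than a rock-on-rock contact of the same force, which is consistent with the AG vs. SAG comparison above.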


The spatial heterogeneity in the risk of Ross River virus (family Togaviridae, genus Alphavirus, RRV) disease, the most common mosquito-borne disease in Australia, was examined in Redland Shire in southern Queensland, Australia. Disease cases, complaints from residents of intense mosquito biting exposure, and human population data were mapped using a geographic information system. Surface maps of RRV disease age-sex standardized morbidity ratios and mosquito biting complaint morbidity ratios were created. To determine whether there was significant spatial variation in disease and complaint patterns, a spatial scan analysis method was used to test whether the number of cases and complaints was distributed according to the underlying population at risk. Several noncontiguous areas in proximity to productive saline water habitats of Aedes vigilax (Skuse), a recognized vector of RRV, had higher than expected numbers of RRV disease cases and complaints. Disease rates in human populations in areas which had high numbers of adult Ae. vigilax in carbon dioxide- and octenol-baited light traps were up to 2.9 times those in areas that rarely had high numbers of mosquitoes. It was estimated that targeted control of adult Ae. vigilax in these high-risk areas could potentially reduce the RRV disease incidence by an average of 13.6%. Spatial correlation was found between RRV disease risk and resident complaints of mosquito biting. Based on historical patterns of RRV transmission throughout Redland Shire and estimated future human population growth in areas with higher than average RRV disease incidence, it was estimated that RRV incidence rates will increase by 8% between 2001 and 2021. The use of arbitrary administrative areas that ranged in size from 4.6 to 318.3 km² has the potential to mask any small-scale heterogeneity in disease patterns.
With the availability of georeferenced data sets and high-resolution imagery, it is becoming more feasible to undertake spatial analyses at relatively small scales.
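The age-sex standardized morbidity ratio behind the surface maps is simply observed over expected counts, with the expected count built from reference rates applied to the area's own demography. A sketch with hypothetical strata and rates:

```python
def smr(observed, population_by_stratum, reference_rates):
    """Age-sex standardized morbidity ratio for one area.

    observed: total cases recorded in the area.
    population_by_stratum: {stratum: person-years at risk in the area}.
    reference_rates: {stratum: cases per person-year in the reference
    population, e.g. the whole shire}.
    SMR > 1 means more cases than the area's demography predicts.
    """
    expected = sum(population_by_stratum[s] * reference_rates[s]
                   for s in population_by_stratum)
    return observed / expected
```

A spatial scan statistic then asks whether areas with high SMRs cluster more than chance placement of the same cases over the population at risk would allow.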


Objective: To devise more-effective physical activity interventions, the mediating mechanisms yielding behavioral change need to be identified. The Baron-Kenny method is most commonly used, but has low statistical power and may not identify mechanisms of behavioral change in small-to-medium size studies. More powerful statistical tests are available. Study Design and Setting: Inactive adults (N = 52) were randomized to either a print or a print-plus-telephone intervention. Walking and exercise-related social support were assessed at baseline, after the intervention, and 4 weeks later. The Baron-Kenny and three alternative methods of mediational analysis (Freedman-Schatzkin; MacKinnon et al.; bootstrap method) were used to examine the effects of social support on initial behavior change and maintenance. Results: A significant mediational effect of social support on initial behavior change was indicated by the MacKinnon et al., bootstrap, and, marginally, Freedman-Schatzkin methods, but not by the Baron-Kenny method. No significant mediational effect of social support on maintenance of walking was found. Conclusions: Methodologically rigorous intervention studies to identify mediators of change in physical activity are costly and labor intensive, and may not be feasible with large samples. The use of statistically powerful tests of mediational effects in small-scale studies can inform the development of more effective interventions. (C) 2006 Elsevier Inc. All rights reserved.
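Of the methods compared, the bootstrap resamples the data to build a confidence interval for the indirect effect a·b (the product of the treatment-to-mediator and mediator-to-outcome slopes). A simplified sketch using plain least-squares slopes; the actual analyses would include the regression adjustments of the named methods:

```python
import random

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=42):
    """Percentile bootstrap CI for the indirect (mediated) effect a*b:
    a = slope of mediator on treatment, b = slope of outcome on mediator.
    The effect is judged significant if the 95% CI excludes zero."""
    rng = random.Random(seed)

    def slope(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        sxy = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
        sxx = sum((ui - mu) ** 2 for ui in u)
        return sxy / sxx

    idx = list(range(len(x)))
    boots = []
    for _ in range(n_boot):
        s = [rng.choice(idx) for _ in idx]  # resample cases with replacement
        xs, ms, ys = [x[i] for i in s], [m[i] for i in s], [y[i] for i in s]
        boots.append(slope(xs, ms) * slope(ms, ys))
    boots.sort()
    return boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]
```

Unlike the Baron-Kenny causal-steps approach, this tests the product term directly, which is where the extra power in small samples comes from.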


A study of the hydrodynamics and mass transfer characteristics of a liquid-liquid extraction process in a 450 mm diameter, 4.30 m high Rotating Disc Contactor (R.D.C.) has been undertaken. The literature relating to this type of extractor and the relevant phenomena, such as droplet break-up and coalescence, drop mass transfer and axial mixing, has been reviewed. Experiments were performed using the system Clairsol-350-acetone-water, and the effects of drop size, drop size distribution and dispersed phase hold-up on the performance of the R.D.C. were established. The results obtained for the two-phase system Clairsol-350-water have been compared with published correlations: since most of these correlations are based on data obtained from laboratory scale R.D.C.'s, a wide divergence was found. The hydrodynamic data from this study have therefore been correlated to predict the drop size and the dispersed phase hold-up, and agreement has been obtained with the experimental data to within ±8% for the drop size and ±9% for the dispersed phase hold-up. The correlations obtained were modified to include terms involving column dimensions, and the data have been correlated with the results obtained from this study together with published data; agreement was generally within ±17% for drop size and within ±14% for the dispersed phase hold-up. The experimental drop size distributions obtained were in excellent agreement with the upper limit log-normal distributions, which should therefore be used in preference to other distribution functions. In the calculation of the overall experimental mass transfer coefficient, the mean driving force was determined from the concentration profile along the column using Simpson's Rule, and a novel method was developed to calculate the overall theoretical mass transfer coefficient Kcal, involving the drop size distribution diagram to determine the volume percentage of stagnant, circulating and oscillating drops in the sample population.
Individual mass transfer coefficients were determined for the corresponding droplet state using different single drop mass transfer models. Kcal was then calculated as the fractional sum of these individual coefficients and their proportions in the drop sample population. Very good agreement was found between the experimental and theoretical overall mass transfer coefficients. Drop sizes under mass transfer conditions were strongly dependent upon the direction of mass transfer. Drop sizes in the absence of mass transfer were generally larger than those with solute transfer from the continuous to the dispersed phase, but smaller than those with solute transfer in the opposite direction at corresponding phase flowrates and rotor speeds. Under similar operating conditions hold-up was also affected by mass transfer; it was higher when solute transferred from the continuous to the dispersed phase and lower when the direction was reversed, compared with non-mass transfer operation.
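The "fractional sum" used for the theoretical overall coefficient is a volume-fraction-weighted average of the single-drop coefficients for the three droplet states. A sketch with illustrative placeholder numbers, not the study's values:

```python
def overall_coefficient(fractions, coefficients):
    """Overall mass transfer coefficient as the fractional sum of
    single-drop coefficients for stagnant, circulating and oscillating
    drops. fractions: volume fractions of each state from the drop size
    distribution (must sum to 1); coefficients: the per-state
    single-drop mass transfer coefficients."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(f * coefficients[state] for state, f in fractions.items())
```

The state fractions come from the drop size distribution diagram: small drops behave as stagnant spheres, intermediate ones circulate internally, and the largest oscillate, each with its own single-drop model.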


The size frequency distributions of discrete β-amyloid (Aβ) deposits were studied in single sections of the temporal lobe from patients with Alzheimer's disease. The size distributions were unimodal and positively skewed. In 18/25 (72%) tissues examined, a log normal distribution was a good fit to the data. This suggests that the abundances of deposit sizes are distributed randomly on a log scale about a mean value. Three hypotheses were proposed to account for the data: (1) sectioning in a single plane, (2) growth and disappearance of Aβ deposits, and (3) the origin of Aβ deposits from clusters of neuronal cell bodies. Size distributions obtained by serial reconstruction through the tissue were similar to those observed in single sections, which would not support the first hypothesis. The log normal distribution of Aβ deposit size suggests a model in which the rate of growth of a deposit is proportional to its volume. However, mean deposit size and the ratio of large to small deposits were not positively correlated with patient age or disease duration. The frequency distribution of Aβ deposits which were closely associated with 0, 1, 2, 3, or more neuronal cell bodies deviated significantly from a log normal distribution, which would not support the neuronal origin hypothesis. On the basis of the present data, growth and resolution of Aβ deposits would appear to be the most likely explanation for the log normal size distributions.


The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions that can be derived from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured light scans for reference and evaluation. In addition, all images are taken under seven different lighting conditions. As a benchmark and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set. To do this we have extended the evaluation protocol from the Middlebury evaluation, necessitated by the more complex geometry of some of our scenes. The data set and accompanying evaluation framework are made freely available online. Based on this evaluation, we are able to observe several characteristics of state-of-the-art MVS, e.g. that there is a tradeoff between the quality of the reconstructed 3D points (accuracy) and how much of an object's surface is captured (completeness). Also, several issues that we hypothesized would challenge MVS, such as specularities and changing lighting conditions, did not pose serious problems. Our study finds that the two most pressing issues for MVS are lack of texture and meshing (forming 3D points into closed triangulated surfaces).
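The accuracy/completeness tradeoff the evaluation measures can be stated concretely: accuracy compares the reconstruction to the reference scan, completeness compares the scan back to the reconstruction. A brute-force sketch; real protocols such as the one above add distance thresholds and observability masks:

```python
import math

def accuracy_completeness(reconstructed, reference):
    """MVS-style evaluation sketch. Accuracy: mean distance from each
    reconstructed point to its nearest reference (scan) point.
    Completeness: mean distance from each reference point to its
    nearest reconstructed point. Brute-force nearest neighbours."""
    def mean_nn(src, dst):
        total = 0.0
        for p in src:
            total += min(math.dist(p, q) for q in dst)
        return total / len(src)
    return mean_nn(reconstructed, reference), mean_nn(reference, reconstructed)
```

A sparse but precise reconstruction scores well on accuracy and poorly on completeness; a dense but noisy one does the opposite, which is exactly the tradeoff reported.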


The Analytic Hierarchy Process (AHP) is one of the most popular methods used in Multi-Attribute Decision Making. It provides ratio-scale measurements of the priorities of elements on the various levels of a hierarchy. These priorities are obtained through pairwise comparisons of elements on one level with reference to each element on the immediately higher level. The Eigenvector Method (EM) and some distance-minimizing methods such as the Least Squares Method (LSM), the Logarithmic Least Squares Method (LLSM), the Weighted Least Squares Method (WLSM) and the Chi Squares Method (X2M) are among the tools for computing the priorities of the alternatives. This paper studies a method for generating all the solutions of the LSM problems for 3 × 3 matrices. We observe non-uniqueness and rank reversals by presenting numerical results.
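The Eigenvector Method can be sketched directly: the priority vector is the principal right eigenvector of the positive reciprocal comparison matrix, which plain power iteration recovers. For a perfectly consistent matrix (a_ij = w_i/w_j) it reproduces the underlying weights exactly:

```python
def ahp_priorities(matrix, iters=100):
    """Eigenvector Method: power iteration on the pairwise comparison
    matrix, normalizing the iterate to sum to 1 at each step, converges
    to the principal eigenvector, i.e. the AHP priority vector."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w
```

The distance-minimizing alternatives (LSM, LLSM, WLSM, X2M) instead fit w to the comparisons by minimizing a residual criterion, and, as the paper shows for LSM on 3 × 3 matrices, the minimizer need not be unique.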


Anthropogenic habitat alterations and water-management practices have imposed an artificial spatial scale onto the once contiguous freshwater marshes of the Florida Everglades. To gain insight into how these changes may affect biotic communities, we examined whether the abundance and community structure of large fishes (SL > 8 cm) in Everglades marshes varied more at regional or intraregional scales, and whether this variation was related to hydroperiod, water depth, floating mat volume, and vegetation density. From October 1997 to October 2002, we used an airboat electrofisher to sample large fishes at sites within three regions of the Everglades. Each of these regions is subject to unique water-management schedules. Dry-down events (water depth < 10 cm) occurred at several sites during spring in 1999, 2000, 2001, and 2002. The 2001 dry-down event was the most severe and widespread. Abundance of several fishes decreased significantly through time, and the number of days post-dry-down covaried significantly with abundance for several species. Processes operating at the regional scale appear to play important roles in regulating large fishes. The most pronounced patterns in abundance and community structure occurred at the regional scale, and the effect size for region was greater than the effect size for sites nested within region for abundance of all species combined, all predators combined, and each of the seven most abundant species. Non-metric multi-dimensional scaling revealed distinct groupings of sites corresponding to the three regions. We also found significant variation in community structure through time that correlated with the number of days post-dry-down. Our results suggest that hydroperiod and water management at the regional scale influence large fish communities of Everglades marshes.


Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the value of n grows dramatically in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
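The reduced-rank tensor factorization in question expresses the joint pmf of several categorical variables as a mixture over latent classes of independent per-variable marginals (the PARAFAC form). A tiny sketch with a two-class, two-variable toy example:

```python
import math

def latent_class_pmf(weights, profiles):
    """PARAFAC / latent-class factorization of a categorical joint pmf:
    P(y) = sum_h w_h * prod_j psi[h][j][y_j], where w_h are class
    weights and psi[h][j] is the marginal pmf of variable j in class h.
    Returns a function evaluating the joint pmf at a tuple y."""
    def pmf(y):
        return sum(w * math.prod(psi_j[yj] for psi_j, yj in zip(profiles[h], y))
                   for h, w in enumerate(weights))
    return pmf
```

Each latent class contributes a rank-one tensor (an outer product of its marginals), so the number of classes bounds the nonnegative rank of the probability tensor, which is the quantity the chapter relates to log-linear model support.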

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
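A toy version of a subset-based approximate kernel: random-walk Metropolis for the mean of a Gaussian model, with the log-likelihood evaluated on a fresh random subsample at each step and rescaled by n/m. Scoring both current and proposed states on the same subsample keeps the noise in the acceptance ratio in check. This is only an illustration of the kind of approximation the chapter analyzes, not its calibrated framework:

```python
import math
import random

def subsampled_metropolis(data, n_steps=2000, m=100, step=0.2, seed=1):
    """Random-walk Metropolis for the mean theta of a N(theta, 1) model
    with a flat prior, using a size-m random subsample of the data,
    rescaled by n/m, to approximate the log-likelihood each iteration."""
    rng = random.Random(seed)
    n = len(data)
    scale = n / m
    theta, samples = 0.0, []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        batch = rng.sample(data, m)
        # score both states on the same subsample so most noise cancels
        ll_cur = -0.5 * scale * sum((x - theta) ** 2 for x in batch)
        ll_prop = -0.5 * scale * sum((x - prop) ** 2 for x in batch)
        if math.log(rng.random()) < ll_prop - ll_cur:
            theta = prop
        samples.append(theta)
    return samples
```

Each iteration costs O(m) rather than O(n); the price is an approximate transition kernel, and quantifying how much such kernel error can be tolerated for a given computational budget is precisely the question Chapter 6 addresses.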

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.


Sex change, or sequential hermaphroditism, occurs in the plant and animal kingdoms and often determines a predominance of the first sex. Our aim was to explore changes in sex ratios within the range of the species studied: Patella vulgata and Patella depressa. The broad-scale survey of sex with size of limpets covered a range of latitudes from Zambujeira do Mar (southern Portugal) to the English Channel. Indirect evidence was found for the occurrence of protandry in P. vulgata populations from the south of England, with females predominating in larger size-classes; cumulative frequency distributions of males and females were different; sex ratios were biased towards males and smallest sizes of males were smaller than the smallest sizes of females. In contrast in Portugal females were found in most size-classes of P. vulgata. In P. depressa populations from the south coast of England and Portugal females were interspersed across most size-classes; size distributions of males and females and size at first maturity of males and females did not differ. P. depressa did, however, show some indications of the possibility of slight protandry occurring in Portugal. The test of sex ratio variation with latitude indicated that P. vulgata sex ratios might be involved in determining the species range limit, particularly at the equatorward limit since the likelihood of being male decreased from the south coast of England to southern Portugal. Thus at the southern range limit, sperm could be in short supply due to scarcity of males contributing to an Allee effect.
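The reported male bias in sex ratios can be checked with a simple normal-approximation test of the 1:1 expectation; the counts in the example below are hypothetical, not the survey's data:

```python
import math

def sex_ratio_z(n_males, n_females):
    """Normal-approximation z statistic for departure from a 1:1 sex
    ratio: the number of standard errors by which the observed male
    proportion differs from 0.5 (|z| > 1.96 is significant at 5%)."""
    n = n_males + n_females
    p = n_males / n
    return (p - 0.5) / math.sqrt(0.25 / n)
```

Applied per size-class, this kind of test is what distinguishes the male-biased smaller classes from the female-dominated larger classes expected under protandry.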



Green energy and green technology are among the most quoted terms in the context of modern science and technology. Technology that is close to nature is a necessity for a modern world haunted by global warming and climatic alterations. Proper utilization of solar energy is one of the goals of the Green Energy Movement. The present thesis deals with work carried out in the field of nanotechnology and its possible use in various applications (employing natural dyes) like solar cells. Unlike artificial dyes, natural dyes are readily available, easy to prepare, low in cost, non-toxic, environmentally friendly and fully biodegradable. Looking to the 21st century, the nano/micro sciences will be a chief contributor to scientific and technological developments. As nanotechnology progresses and complex nanosystems are fabricated, a growing impetus is being given to the development of multi-functional and size-dependent materials. The control of the morphology, from the nano to the micrometer scales, associated with the incorporation of several functionalities, can yield entirely new smart hybrid materials. These are a special class of materials which provide a new route to improving the environmental stability of the material, with interesting optical properties, opening a land of opportunities for applications in the field of photonics. Zinc oxide (ZnO) is one such multipurpose material that has been explored for applications in sensing, environmental monitoring, bio-medical systems and communications technology. Understanding the growth mechanism and tailoring the morphology is essential for the use of ZnO crystals as nano/micro electromechanical systems and also as building blocks of other nanosystems.


Thesis (Ph.D.)--University of Washington, 2016-08