889 results for Most Productive Scale Size
Abstract:
The size frequency distributions of discrete β-amyloid (Aβ) deposits were studied in single sections of the temporal lobe from patients with Alzheimer's disease. The size distributions were unimodal and positively skewed. In 18/25 (72%) tissues examined, a log normal distribution was a good fit to the data. This suggests that the abundances of deposit sizes are distributed randomly on a log scale about a mean value. Three hypotheses were proposed to account for the data: (1) sectioning in a single plane, (2) growth and disappearance of Aβ deposits, and (3) the origin of Aβ deposits from clusters of neuronal cell bodies. Size distributions obtained by serial reconstruction through the tissue were similar to those observed in single sections, which would not support the first hypothesis. The log normal distribution of Aβ deposit size suggests a model in which the rate of growth of a deposit is proportional to its volume. However, mean deposit size and the ratio of large to small deposits were not positively correlated with patient age or disease duration. The frequency distribution of Aβ deposits which were closely associated with 0, 1, 2, 3, or more neuronal cell bodies deviated significantly from a log normal distribution, which would not support the neuronal origin hypothesis. On the basis of the present data, growth and resolution of Aβ deposits would appear to be the most likely explanation for the log normal size distributions.
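To make the goodness-of-fit step concrete, here is a minimal Python sketch of testing a log normal fit to deposit sizes, using synthetic data in place of the study's measurements (the array name and parameters are illustrative assumptions, not from the paper):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for measured deposit sizes (e.g., diameters in microns).
rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=3.0, sigma=0.5, size=200)

# Log-normal fit: estimate Gaussian parameters on the log scale, then run a
# Kolmogorov-Smirnov test on the log-transformed sizes. (With parameters
# estimated from the same data the KS p-value is approximate; a Lilliefors
# correction would be stricter.)
log_sizes = np.log(sizes)
mu, sd = log_sizes.mean(), log_sizes.std(ddof=1)
ks_stat, p_value = stats.kstest(log_sizes, "norm", args=(mu, sd))
print(f"log-mean={mu:.3f}, log-sd={sd:.3f}, KS p={p_value:.3f}")
# A non-significant result is consistent with deposit abundances being
# distributed randomly on a log scale about a mean value.
```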
Abstract:
The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions that can be derived from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured light scans for reference and evaluation. In addition, all images are taken under seven different lighting conditions. As a benchmark, and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set. To do this we have extended the evaluation protocol from the Middlebury evaluation, necessitated by the more complex geometry of some of our scenes. The data set and accompanying evaluation framework are made freely available online. Based on this evaluation, we are able to observe several characteristics of state-of-the-art MVS, e.g. that there is a tradeoff between the quality of the reconstructed 3D points (accuracy) and how much of an object's surface is captured (completeness). Also, several issues that we hypothesized would challenge MVS, such as specularities and changing lighting conditions, did not pose serious problems. Our study finds that the two most pressing issues for MVS are lack of texture and meshing (forming 3D points into closed triangulated surfaces).
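The accuracy/completeness tradeoff the authors report can be summarized with nearest-neighbour distances between a reconstruction and the reference scan. The sketch below is a simplified stand-in for the paper's extended Middlebury-style protocol; the function name, threshold, and array shapes are assumptions for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(reconstructed, reference, tau=2.0):
    """Both inputs are (N, 3) point arrays. Accuracy: median distance from
    reconstructed points to the reference (how correct the points are).
    Completeness: fraction of reference points lying within tau of some
    reconstructed point (how much of the surface is captured)."""
    ref_tree = cKDTree(reference)
    rec_tree = cKDTree(reconstructed)
    accuracy = np.median(ref_tree.query(reconstructed)[0])
    completeness = np.mean(rec_tree.query(reference)[0] < tau)
    return accuracy, completeness
```

Tightening tau (or trimming uncertain points) typically improves one measure at the expense of the other, which is the tradeoff observed in the evaluation.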
Abstract:
The Analytic Hierarchy Process (AHP) is one of the most popular methods used in Multi-Attribute Decision Making. It provides ratio-scale measurements of the priorities of elements on the various levels of a hierarchy. These priorities are obtained through pairwise comparisons of elements on one level with reference to each element on the immediate higher level. The Eigenvector Method (EM) and some distance-minimizing methods such as the Least Squares Method (LSM), Logarithmic Least Squares Method (LLSM), Weighted Least Squares Method (WLSM) and Chi Squares Method (X2M) are among the tools for computing the priorities of the alternatives. This paper studies a method for generating all the solutions of the LSM problems for 3 × 3 matrices. We observe non-uniqueness and rank reversals by presenting numerical results.
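For readers unfamiliar with the priority-derivation step, a minimal sketch of the Eigenvector Method on a 3 × 3 pairwise comparison matrix follows (the example matrix is invented and exactly consistent, so EM recovers the ratios exactly; the LSM machinery studied in the paper is not reproduced here):

```python
import numpy as np

def em_priorities(A):
    """Eigenvector Method: the priority vector is the principal right
    eigenvector of the positive reciprocal comparison matrix A,
    normalised to sum to one."""
    eigvals, eigvecs = np.linalg.eig(A)
    v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return v / v.sum()

# Entry a_ij estimates priority_i / priority_j.
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(em_priorities(A))  # -> [4/7, 2/7, 1/7] for this consistent matrix
```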
Abstract:
Anthropogenic habitat alterations and water-management practices have imposed an artificial spatial scale onto the once contiguous freshwater marshes of the Florida Everglades. To gain insight into how these changes may affect biotic communities, we examined whether variation in the abundance and community structure of large fishes (SL > 8 cm) in Everglades marshes varied more at regional or intraregional scales, and whether this variation was related to hydroperiod, water depth, floating mat volume, and vegetation density. From October 1997 to October 2002, we used an airboat electrofisher to sample large fishes at sites within three regions of the Everglades. Each of these regions is subject to a unique water-management schedule. Dry-down events (water depth < 10 cm) occurred at several sites during spring in 1999, 2000, 2001, and 2002. The 2001 dry-down event was the most severe and widespread. Abundance of several fishes decreased significantly through time, and the number of days post-dry-down covaried significantly with abundance for several species. Processes operating at the regional scale appear to play important roles in regulating large fishes. The most pronounced patterns in abundance and community structure occurred at the regional scale, and the effect size for region was greater than the effect size for sites nested within region for abundance of all species combined, all predators combined, and each of the seven most abundant species. Non-metric multi-dimensional scaling revealed distinct groupings of sites corresponding to the three regions. We also found significant variation in community structure through time that correlated with the number of days post-dry-down. Our results suggest that hydroperiod and water management at the regional scale influence large fish communities of Everglades marshes.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
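The latent structure representation referred to above can be written down directly. The sketch below builds the PARAFAC (latent class) form of the probability mass function for multivariate categorical data, with invented dimensions and Dirichlet-sampled parameters; the collapsed Tucker decomposition proposed in the chapter is a generalization not shown here:

```python
import numpy as np

rng = np.random.default_rng(1)

# PARAFAC / latent class pmf for p categorical variables with d levels each:
# P(x_1, ..., x_p) = sum_h nu_h * prod_j lambda[j, h, x_j].
k, p, d = 3, 4, 2                              # classes, variables, levels
nu = rng.dirichlet(np.ones(k))                 # latent class weights
lam = rng.dirichlet(np.ones(d), size=(p, k))   # lam[j, h] on the d-simplex

def cell_probability(x):
    """Probability of one cell x = (x_1, ..., x_p) of the contingency table."""
    return sum(nu[h] * np.prod([lam[j, h, x[j]] for j in range(p)])
               for h in range(k))

print(cell_probability((0, 1, 0, 1)))
```

The number of classes k bounds the nonnegative rank of the probability tensor, which is the quantity the chapter relates to the support of a log-linear model.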
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
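As a rough illustration of approximating a posterior by a Gaussian, here is a Laplace-style sketch for a toy one-parameter Poisson log-linear model (mode plus inverse curvature). Note this is only a familiar baseline: the chapter's KL-optimal Gaussian approximation is a different construction with its own guarantees, and the model, prior, and data below are invented:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_post(theta, y, n, prior_sd=10.0):
    # y_i ~ Poisson(exp(theta)), theta ~ N(0, prior_sd^2).
    return n * np.exp(theta) - y.sum() * theta + theta**2 / (2 * prior_sd**2)

y = np.array([3, 5, 4, 6, 2])
res = minimize(lambda t: neg_log_post(t[0], y, len(y)), x0=[0.0])
mode = res.x[0]
curvature = len(y) * np.exp(mode) + 1 / 10.0**2   # -d^2 log posterior at mode
print(f"Gaussian approximation: N({mode:.3f}, {1 / curvature:.4f})")
```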
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
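The central object in this framework, the waiting times between exceedances of a high threshold, is easy to extract from a series. A minimal sketch with a synthetic heavy-tailed series (names and threshold choice are illustrative):

```python
import numpy as np

def exceedance_waiting_times(x, u):
    """Gaps (in index units) between successive exceedances of threshold u.
    The premise above is that both the strength of tail dependence and its
    temporal structure are encoded in these gaps."""
    hits = np.flatnonzero(np.asarray(x) > u)
    return np.diff(hits)

rng = np.random.default_rng(2)
series = rng.standard_t(df=3, size=10_000)   # heavy-tailed toy series
u = np.quantile(series, 0.99)                # a high threshold
waits = exceedance_waiting_times(series, u)
print(waits.mean(), np.median(waits))
```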
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
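One of the approximations mentioned, random subsets of data, can be sketched as a Metropolis-Hastings chain whose log-likelihood is estimated from a subsample and rescaled by n/m. This toy normal-mean example (flat prior, invented tuning constants) shows the kind of perturbed transition kernel the framework analyzes:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.0, 1.0, size=100_000)
n, m = len(data), 1_000                      # full data size vs. subsample

def approx_log_lik(theta):
    # Rescaled subsample log-likelihood (up to an additive constant).
    sub = rng.choice(data, size=m, replace=False)
    return (n / m) * np.sum(-0.5 * (sub - theta) ** 2)

theta, cur, draws = 0.0, approx_log_lik(0.0), []
for _ in range(2_000):
    prop = theta + rng.normal(0, 0.01)       # random-walk proposal
    new = approx_log_lik(prop)
    if np.log(rng.uniform()) < new - cur:    # flat prior: likelihood ratio only
        theta, cur = prop, new
    draws.append(theta)
print(np.mean(draws[500:]))                  # should sit near the true mean 1.0
```

The subsampling noise means this chain does not target the exact posterior; quantifying the resulting estimation error against the computational savings is precisely the question the framework addresses.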
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
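For concreteness, a compact sketch of the truncated normal (Albert and Chib) data augmentation sampler for probit regression follows, set up in the rare-event regime the abstract describes (synthetic data, flat prior on the coefficients; not the authors' advertising data):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

n, p = 1_000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.binomial(1, 0.02, size=n)            # rare events: ~2% successes

XtX_inv = np.linalg.inv(X.T @ X)             # flat prior on beta
beta = np.zeros(p)
for _ in range(1_000):
    mu = X @ beta
    # Latent z_i ~ N(mu_i, 1), truncated to (0, inf) if y_i = 1, else (-inf, 0).
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Conditional draw of beta given z is Gaussian.
    beta = rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)
print(beta)
```

With so few successes, successive beta draws are highly autocorrelated, which is the slow-mixing behaviour (vanishing spectral gap) quantified in the chapter.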
Abstract:
Sex change, or sequential hermaphroditism, occurs in the plant and animal kingdoms and often determines a predominance of the first sex. Our aim was to explore changes in sex ratios within the range of the species studied: Patella vulgata and Patella depressa. The broad-scale survey of sex with size of limpets covered a range of latitudes from Zambujeira do Mar (southern Portugal) to the English Channel. Indirect evidence was found for the occurrence of protandry in P. vulgata populations from the south of England, with females predominating in larger size-classes; cumulative frequency distributions of males and females were different; sex ratios were biased towards males; and the smallest sizes of males were smaller than the smallest sizes of females. In contrast, in Portugal females were found in most size-classes of P. vulgata. In P. depressa populations from the south coast of England and Portugal, females were interspersed across most size-classes; size distributions of males and females and size at first maturity of males and females did not differ. P. depressa did, however, show some indications of the possibility of slight protandry occurring in Portugal. The test of sex ratio variation with latitude indicated that P. vulgata sex ratios might be involved in determining the species range limit, particularly at the equatorward limit, since the likelihood of being male decreased from the south coast of England to southern Portugal. Thus, at the southern range limit, sperm could be in short supply due to scarcity of males, contributing to an Allee effect.
Abstract:
Green energy and green technology are among the most quoted terms in modern science and technology. Technology that is close to nature is a necessity for a modern world confronted by global warming and climatic alterations. Proper utilization of solar energy is one of the goals of the Green Energy Movement. The present thesis deals with work carried out in the field of nanotechnology and its possible use in various applications (employing natural dyes) such as solar cells. Unlike artificial dyes, natural dyes are readily available, easy to prepare, low in cost, non-toxic, environmentally friendly and fully biodegradable. Looking to the 21st century, the nano/micro sciences will be a chief contributor to scientific and technological developments. As nanotechnology progresses and complex nanosystems are fabricated, a growing impetus is being given to the development of multi-functional and size-dependent materials. Control of morphology, from the nano to the micrometer scale, combined with the incorporation of several functionalities, can yield entirely new smart hybrid materials. These are a special class of materials which provide a new route to improving the environmental stability of a material with interesting optical properties, opening a land of opportunities for applications in the field of photonics. Zinc oxide (ZnO) is one such multipurpose material that has been explored for applications in sensing, environmental monitoring, bio-medical systems and communications technology. Understanding the growth mechanism and tailoring the morphology of ZnO crystals is essential for their use as nano/micro electromechanical systems and as building blocks of other nanosystems.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This study was undertaken in the Napoleon Gulf of Lake Victoria, Uganda, from July to December 2009. It was conducted at four landing sites, Bukaya (0.41103N, 33.19133E), Bugungu (0.40216N, 33.2028E), Busana (0.39062N, 33.25228E) and Kikondo (0.3995N, 33.21848E), all in Buikwe district (formerly part of Mukono district). The main aim was to determine the effect of both hook size and bait type on the catch rate (mean weight) and size composition of the Nile perch (Lates niloticus Linné) fishery in the Napoleon Gulf, Lake Victoria. The hook sizes investigated were 7, 8, 9, 10, 11 and 12, those dominantly used in harvesting Nile perch in the Napoleon Gulf. Length, weight and bait type data were collected on site from each boat at its fishing spot, since most fishermen in the Napoleon Gulf sell their fish immediately after the catch is landed. A total of 873 Nile perch samples were collected during the study. Statistical tests, descriptive statistics, regression and correlation were carried out using the Statistical Package for the Social Sciences (SPSS) in addition to Microsoft Excel. The bait types in the Gulf ranged from 5-10 cm total length (TL) haplochromines, 24.5-27 cm TL Mormyrus kannume and 9-24 cm TL Clarias species. Bait type had a significant effect on the size composition of the fish harvested, measured as total length (ANCOVA F=8.231; P<0.05), although it had no influence on the mean weight of fish captured (ANCOVA F=2.898; P>0.05). Hook size had a significant effect on both the size (TL) composition (ANCOVA F=3.847; P<0.05) and the mean weight (ANCOVA F=4.599; P<0.005) of the Nile perch captured. Hook sizes seven (7) and eight (8) were the ones that harvested Nile perch above the slot size of 50 cm total length. In general, hook size appeared to be the main driver in the harvesting of Nile perch, though bait type also contributed. There is a need for management to put in place a regulation on the minimum hook size used in harvesting Nile perch, monitored by fisheries management as a regulatory measure. In addition, aquaculture should be encouraged to farm bait fish at a larger scale in the region in order to avoid depleting wild stocks already in danger of extinction. Through this kind of venture, both biodiversity conservation and environmental sustainability will be served in the Lake Victoria basin.
Abstract:
Background: Many acute stroke trials have given neutral results. Sub-optimal statistical analyses may be failing to detect efficacy. Methods which take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials. Methods: Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome – Rankin Scale and Barthel Index – were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes obtained from each method were compared using Friedman two-way ANOVA. Results: Fifty-five comparisons (54 173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P < 0.0001). The ordering of the methods showed that the ordinal method of Whitehead and the comparison of means produced significantly lower sample sizes than the other methods. The ordinal data method on average reduced sample size by 28% (inter-quartile range 14–53%) compared with the comparison of proportions; however, a 22% increase in sample size was seen with the ordinal method for trials assessing thrombolysis. The comparison of medians method of Payne gave the largest sample sizes. Conclusions: Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome
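For reference, Whitehead's ordinal sample size formula mentioned in the Methods can be sketched as follows, under the proportional-odds assumption and 1:1 allocation; the outcome distribution and odds ratio below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

def whitehead_ordinal_n(p_control, odds_ratio, alpha=0.05, power=0.9):
    """Total sample size for an ordinal outcome (Whitehead):
    N = 6 (z_{1-a/2} + z_{1-b})^2 / [ (log OR)^2 (1 - sum(pbar_i^3)) ],
    with pbar_i the mean category proportions across both arms."""
    p_control = np.asarray(p_control, dtype=float)
    cum_c = np.cumsum(p_control)
    # Shift control cumulative probabilities by the odds ratio.
    cum_t = odds_ratio * cum_c / (1 - cum_c + odds_ratio * cum_c)
    p_treated = np.diff(np.concatenate([[0.0], cum_t]))
    pbar = (p_control + p_treated) / 2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 6 * z**2 / (np.log(odds_ratio) ** 2 * (1 - np.sum(pbar**3)))

# Example: a 7-category functional outcome and a treatment odds ratio of 0.7.
mrs = [0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10]
print(round(whitehead_ordinal_n(mrs, 0.7)))
```

Dichotomizing the same outcome and applying a comparison of proportions generally demands more patients, which is the saving the trial comparisons above quantify.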
Abstract:
Poster presented at: 21st World Hydrogen Energy Conference 2016, Zaragoza, Spain, 13-16 June 2016.
Abstract:
Fishing trials with monofilament gill nets and longlines using small hooks were carried out at the same fishing grounds in the Cyclades (Aegean Sea) over 1 year. Four sizes of MUSTAD brand, round bent, flatted sea hooks (Quality 2316 DT, numbers 15, 13, 12 and 11) and four mesh sizes of 22, 24, 26 and 28 mm nominal bar length monofilament gill nets were used. Significant differences in the catch size frequency distributions of the two gears were found for four out of five of the most important species caught by both gears (Diplodus annularis, Diplodus vulgaris, Pagellus erythrinus, Scorpaena porcus and Serranus cabrilla), with longlines catching larger fish and a wider size range than gill nets. Whereas longline catch size frequency distributions for most species were generally highly overlapped across hook sizes, suggesting little or no difference in size selectivity, gill net catch size frequency distributions clearly showed size selection, with larger mesh sizes catching larger fish. A variety of models were fitted to the gill net data, with the lognormal providing the best fit in most cases. A maximum likelihood method was also used to estimate the parameters of the logistic model for the longline data. Because of the highly overlapped longline catch size frequency distributions, parameters could only be estimated for two species. This study shows that the two static gears have different impacts in terms of size selection. This information will be useful for the more effective management of these small-scale, multi-species and multi-gear fisheries.
Abstract:
The cranial base, composed of the midline and lateral basicranium, is a structurally important region of the skull associated with several key traits, which has been extensively studied in anthropology and primatology. In particular, most studies have focused on the association between midline cranial base flexion and relative brain size, or encephalization. However, variation in lateral basicranial morphology has been studied less thoroughly. Platyrrhines are a group of primates that experienced a major evolutionary radiation accompanied by extensive morphological diversification in Central and South America over a large temporal scale. Previous studies have also suggested that they underwent several evolutionarily independent processes of encephalization. Given these characteristics, platyrrhines present an excellent opportunity to study, on a large phylogenetic scale, the morphological correlates of primate diversification in brain size. In this study we explore the pattern of variation in basicranial morphology and its relationship with phylogenetic branching and with encephalization in platyrrhines. We quantify variation in the 3D shape of the midline and lateral basicranium and endocranial volumes in a large sample of platyrrhine species, employing high-resolution CT-scans and geometric morphometric techniques. We investigate the relationship between basicranial shape and encephalization using phylogenetic regression methods and calculate a measure of phylogenetic signal in the datasets. The results showed that phylogenetic structure is the most important dimension for understanding platyrrhine cranial base diversification; only Aotus species do not show concordance with our molecular phylogeny. Encephalization was only correlated with midline basicranial flexion, and species that exhibit convergence in their relative brain size do not display convergence in lateral basicranial shape. The evolution of basicranial variation in primates is probably more complex than previously believed, and understanding it will require further studies exploring the complex interactions between encephalization, brain shape, cranial base morphology, and ecological dimensions acting along the species divergence process.
Abstract:
Prey size is an important factor in food consumption. In studies of feeding ecology, prey items are usually measured individually using calipers or ocular micrometers. Among amphibians and reptiles, there are species that feed on large numbers of small prey items (e.g. ants, termites). This high intake makes it difficult to estimate the prey sizes consumed by these animals. We addressed this problem by developing and evaluating a procedure for subsampling the stomach contents of such predators in order to estimate prey size. Specifically, we developed a protocol based on a bootstrap procedure to obtain a subsample with a precision error of at most 5%, at a confidence level of at least 95%. This guideline should reduce sampling effort and facilitate future studies on the feeding habits of amphibians and reptiles, while providing precise estimates of prey size.
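A minimal version of such a bootstrap rule is sketched below: it checks whether a subsample of size m estimates the full-stomach mean prey size within 5% relative error in at least 95% of resamples. All names, the toy prey-size distribution, and the search grid are assumptions for illustration, not the authors' exact protocol:

```python
import numpy as np

rng = np.random.default_rng(5)

def subsample_ok(prey_sizes, m, max_rel_err=0.05, conf=0.95, n_boot=2_000):
    """True if subsamples of m items hit the full-sample mean to within
    max_rel_err in at least a fraction conf of bootstrap draws."""
    prey_sizes = np.asarray(prey_sizes, dtype=float)
    full_mean = prey_sizes.mean()
    draws = rng.choice(prey_sizes, size=(n_boot, m), replace=True)
    rel_err = np.abs(draws.mean(axis=1) - full_mean) / full_mean
    return np.mean(rel_err <= max_rel_err) >= conf

# Example: smallest subsample meeting the 5% / 95% criterion for one stomach.
stomach = rng.gamma(shape=4.0, scale=1.5, size=600)   # toy prey lengths (mm)
m = next(m for m in range(10, 601, 10) if subsample_ok(stomach, m))
print(m)
```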