959 results for anisotropes finite-size scaling


Abstract:

We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual's previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
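
The core mechanical step, once the moment-matched skew-normal parameters for the conditional future-length distribution are in hand, is to integrate that distribution over the size-class boundaries. A minimal sketch of that step is below; the skew-normal parameters, size classes, and the zero-and-renormalise treatment of negative-growth entries are illustrative placeholders, not the paper's actual estimates or its elimination mechanism.

```python
# Sketch: filling a size-transition matrix (STM) from a conditional
# distribution of future length given current size class. The skew-normal
# parameters are placeholders; in the paper they come from moment-matching
# the von Bertalanffy-based conditional moments (not reproduced here).
import numpy as np
from scipy.stats import skewnorm

edges = np.arange(10.0, 61.0, 5.0)        # size-class boundaries (mm), illustrative
mids = 0.5 * (edges[:-1] + edges[1:])     # class midpoints
n = len(mids)
stm = np.zeros((n, n))

for i, L in enumerate(mids):
    # Placeholder skew-normal for future length given current length L:
    # location a little above L, modest scale, positive shape (right skew).
    loc, scale, shape = L + 2.0, 1.5, 3.0
    cdf = skewnorm.cdf(edges, shape, loc=loc, scale=scale)
    probs = np.diff(cdf)                  # probability of landing in each class
    probs[:i] = 0.0                       # crude removal of negative-growth entries
    stm[i] = probs / probs.sum()          # each row sums to 1

print(np.round(stm, 3))
```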

Abstract:

Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate and even invalid when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions have to be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without needing to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project that is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
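
One concrete way to see why censored values need no imputation in a quantile framework: a test about the p-th quantile can be carried out as a binomial test on the number of observations exceeding the guideline, and observations reported only as "below detection limit" never affect that count as long as the detection limit sits below the guideline. The sketch below runs a small Monte Carlo power calculation along those lines; the distribution, guideline, detection limit, and sample size are invented for illustration and are not the procedure or values used in the paper.

```python
# Sketch: Monte Carlo power for a quantile-based test with left-censored data.
# Testing H0: 80th percentile <= guideline is equivalent to a binomial test on
# the exceedance count, so values reported only as "< detection limit" need no
# imputation provided the detection limit is below the guideline.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(1)
guideline, det_limit = 2.0, 0.5       # hypothetical guideline and detection limit
p, alpha, n, n_sim = 0.80, 0.05, 60, 2000

power_count = 0
for _ in range(n_sim):
    x = rng.lognormal(mean=0.4, sigma=0.6, size=n)   # "true" concentrations
    x = np.where(x < det_limit, det_limit / 2, x)     # censoring does not affect the count
    exceed = int(np.sum(x > guideline))
    # Under H0 the exceedance count is Binomial(n, 1 - p).
    if binomtest(exceed, n, 1 - p, alternative="greater").pvalue < alpha:
        power_count += 1

print(f"estimated power at n={n}: {power_count / n_sim:.2f}")
```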

Abstract:

An expression is derived for the probability that the determinant of an n × n matrix over a finite field vanishes; from this it is deduced that, for a fixed field, this probability tends to 1 as n tends to infinity.

Abstract:

Existence of a periodic progressive wave solution to the nonlinear boundary value problem for Rayleigh surface waves of finite amplitude is demonstrated using an extension of the method of strained coordinates. The solution, obtained as a second-order perturbation of the linearized monochromatic Rayleigh wave solution, contains harmonics of all orders of the fundamental frequency. It is shown that the higher harmonic content of the wave increases with amplitude, but the slope of the waveform remains finite so long as the amplitude is less than a critical value.
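
For readers unfamiliar with the method of strained coordinates, its generic structure (shown here schematically; this is not the paper's specific Rayleigh-wave solution) is to expand both the wave field and the frequency in the small amplitude parameter and to choose the strain terms so that secular, unbounded terms cancel at each order:

```latex
% Generic strained-coordinates (Lindstedt-Poincare) expansion, schematic only.
u(\theta;\varepsilon) = \varepsilon\,u_1(\theta) + \varepsilon^2 u_2(\theta) + O(\varepsilon^3),
\qquad \theta = kx - \omega t,
\qquad \omega = \omega_0\left(1 + \varepsilon\,\omega_1 + \varepsilon^2\omega_2 + \cdots\right).
```

In expansions of this kind, the quadratic interaction of the first-order wave with itself is what feeds the higher harmonics that appear from second order onward.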

Abstract:

A continuum method of analysis is presented in this paper for the problem of a smooth rigid pin in a finite composite plate subjected to uniaxial loading. The pin could be of interference, push or clearance fit. The plate is idealized to an orthotropic sheet. As the load on the plate is progressively increased, the contact along the pin-hole interface is partial above certain load levels in all three types of fit. In misfit pins (interference or clearance), such situations result in mixed boundary value problems with moving boundaries and in all of them the arc of contact and the stress and displacement fields vary nonlinearly with the applied load. In infinite domains similar problems were analysed earlier by ‘inverse formulation’ and, now, the same approach is selected for finite plates. Finite outer domains introduce analytical complexities in the satisfaction of boundary conditions. These problems are circumvented by adopting a method in which the successive integrals of boundary error functions are equated to zero. Numerical results are presented which bring out the effects of the rectangular geometry and the orthotropic property of the plate. The present solutions are the first step towards the development of special finite elements for fastener joints.

Abstract:

We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often have excessive variation, known as overdispersion, as compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, which is referred to as beta-multinomial, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, the confidence intervals have been substantially underestimated in previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods was also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
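
Read one way, the model says: if the catchability of each sampling occasion is an independent Beta(a, b) draw, then each successive catch, conditional on the animals still present, is beta-binomial. The sketch below writes a negative log-likelihood with that sequential structure and maximises it numerically; the catch numbers are invented, and this factorisation is our reading of the model rather than the exact beta-multinomial likelihood derived in the paper.

```python
# Sketch: negative log-likelihood for a removal experiment in which each
# sampling occasion has an independent Beta(a, b) catchability, so each catch
# (given the animals still present) is beta-binomial. Catches are invented.
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

catches = np.array([142, 81, 52, 30, 19])        # hypothetical removal catches

def negloglik(params):
    log_extra, log_a, log_b = params
    extra, a, b = np.exp(log_extra), np.exp(log_a), np.exp(log_b)
    N = catches.sum() + extra                     # population size exceeds total removed
    ll, remaining = 0.0, N
    for c in catches:
        # Rounding the remaining count keeps the sketch simple.
        ll += betabinom.logpmf(c, int(round(remaining)), a, b)
        remaining -= c
    return -ll

fit = minimize(negloglik, x0=np.log([50.0, 2.0, 8.0]), method="Nelder-Mead")
extra_hat, a_hat, b_hat = np.exp(fit.x)
print("estimated population size:", catches.sum() + extra_hat)
print("mean catchability a/(a+b):", a_hat / (a_hat + b_hat))
```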

Abstract:

Natural mortality of marine invertebrates is often very high in the early life-history stages and decreases in later stages. The possible size-dependent mortality of juvenile banana prawns, Penaeus merguiensis (2-15 mm carapace length), in the Gulf of Carpentaria was investigated. The analysis was based on data collected at two-weekly intervals by beam trawls at four sites over a period of six years (between September 1986 and March 1992). It was assumed that mortality was a parametric function of size, rather than a constant. Another complication in estimating mortality for juvenile banana prawns is that a significant proportion of the population emigrates from the study area each year. This effect was accounted for by incorporating the size-frequency pattern of the emigrants in the analysis. Both the extra parameter required to describe the size dependence of mortality and the parameter used to account for emigration were found to be significantly different from zero, and the instantaneous mortality rate declined from 0.89 week^-1 for 2 mm prawns to 0.02 week^-1 for 15 mm prawns.
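
The abstract does not state the functional form used for the size dependence, but purely as an illustration of what "mortality as a parametric function of size" can look like, a power law M(L) = a L^b passing through the two quoted endpoints would have

```latex
% Illustrative only: a power law M(L) = a L^{b} fitted to the two rates quoted above.
b = \frac{\ln(0.02/0.89)}{\ln(15/2)} \approx -1.9,
\qquad
a = \frac{0.89}{2^{\,b}} \approx 3.3,
\qquad\text{so}\quad
M(2\,\mathrm{mm}) \approx 0.89\ \mathrm{week}^{-1},
\quad
M(15\,\mathrm{mm}) \approx 0.02\ \mathrm{week}^{-1}.
```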

Abstract:

We propose an iterative estimating equations procedure for analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iterative reweighted least squares. Finite sample performance of the procedure is studied by simulations, and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
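
In the linear special case mentioned above, the iteration amounts to alternating a generalised-least-squares step for the coefficients with an update of the working covariance from residuals. A toy version with simulated balanced longitudinal data and an unstructured working covariance (our choices for the sketch; the paper's procedure is more general) follows.

```python
# Sketch: iterative estimating equations in the linear special case, i.e.
# iteratively reweighted least squares with an unstructured working covariance
# re-estimated from residuals. Data and covariance structure are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_times, p = 200, 4, 2
X = rng.normal(size=(n_subj, n_times, p))
true_beta = np.array([1.0, -0.5])
subj_effect = rng.normal(scale=1.0, size=(n_subj, 1))   # induces within-subject correlation
y = X @ true_beta + subj_effect + rng.normal(scale=1.0, size=(n_subj, n_times))

beta = np.zeros(p)
V = np.eye(n_times)                                     # working covariance, start at identity
for it in range(20):
    Vinv = np.linalg.inv(V)
    XtVX = sum(Xi.T @ Vinv @ Xi for Xi in X)
    XtVy = sum(Xi.T @ Vinv @ yi for Xi, yi in zip(X, y))
    beta_new = np.linalg.solve(XtVX, XtVy)              # generalised least squares step
    resid = y - np.einsum("itp,p->it", X, beta_new)
    V = resid.T @ resid / n_subj                        # update covariance from residuals
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print("estimated beta:", np.round(beta, 3))
```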

Abstract:

Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg, and the number of fish species in trawls ranged from 60 to 138, and invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species that were recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weights) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
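
The simulation idea can be reproduced in a few lines: partition a catch into roughly 10 kg subsamples, sort an increasing number of them, and record the fraction of species seen so far. The synthetic catch below (number of species, abundance pattern, number of subsamples) is an assumption for illustration, not the trawl data, so the printed percentages will differ from the paper's.

```python
# Sketch: simulating how the fraction of species detected grows as more
# ~10 kg subsamples of a trawl catch are sorted. The synthetic catch uses a
# skewed abundance pattern; it is not the actual trawl data.
import numpy as np

rng = np.random.default_rng(42)
n_species = 120
abundances = np.sort(rng.lognormal(mean=3.0, sigma=1.5, size=n_species))[::-1].astype(int) + 1
individuals = np.repeat(np.arange(n_species), abundances)   # one entry per individual, labelled by species
rng.shuffle(individuals)

n_subsamples = 30                                           # ~10 kg each for a ~300 kg catch
subsamples = np.array_split(individuals, n_subsamples)

seen = set()
for k, sub in enumerate(subsamples, start=1):
    seen.update(sub.tolist())
    if k in (3, 15, 30):                                    # ~10%, 50%, 100% of the catch
        print(f"sorted {k / n_subsamples:4.0%} of catch -> "
              f"{len(seen) / n_species:4.0%} of species recorded")
```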

Abstract:

This dissertation is a theoretical study of finite-state based grammars used in natural language processing. The study is concerned with certain varieties of finite-state intersection grammars (FSIG) whose parsers define regular relations between surface strings and annotated surface strings. The study focuses on the following three aspects of FSIGs. (i) Computational complexity of grammars under limiting parameters: the computational complexity in practical natural language processing is approached through performance-motivated parameters on structural complexity. Each parameter splits some grammars in the Chomsky hierarchy into an infinite set of subset approximations. When the approximations are regular, they seem to fall into the logarithmic-time hierarchy and the dot-depth hierarchy of star-free regular languages. This theoretical result is important and possibly relevant to grammar induction. (ii) Linguistically applicable structural representations: related to the linguistically applicable representations of syntactic entities, the study contains new bracketing schemes that cope with dependency links, left- and right-branching, crossing dependencies and spurious ambiguity. New grammar representations that resemble the Chomsky-Schützenberger representation of context-free languages are presented in the study, and they include, in particular, representations for mildly context-sensitive non-projective dependency grammars whose performance-motivated approximations are parseable in linear time. (iii) Compilation and simplification of linguistic constraints: efficient compilation methods for certain regular operations such as generalized restriction are presented. These include an elegant algorithm that has already been adopted as the approach in a proprietary finite-state tool. In addition to the compilation methods, an approach to on-the-fly simplification of finite-state representations of parse forests is sketched. These findings are tightly coupled with each other under the theme of locality. I argue that the findings help us to develop better, linguistically oriented formalisms for finite-state parsing and more efficient parsers for natural language processing. Keywords: syntactic parsing, finite-state automata, dependency grammar, first-order logic, linguistic performance, star-free regular approximations, mildly context-sensitive grammars

Abstract:

My dissertation is a corpus-based study of non-finite constructions in Old English (OE). It revisits the question of Latin influence on OE syntax, offering a new evaluation of syntactic interference between Latin and OE and, more generally, of the contact situation in the OE period, drawing on methods used in studying grammaticalization and language contact. I address three non-finite constructions: the absolute participial construction, the accusative-and-infinitive construction, and the nominative-and-infinitive construction, exemplified respectively in present-day English as:
- She looked like a pixie sometimes, her eyes darting here and there, forever watchful (BNC CCM 98);
- My first acquaintance with her was when I heard her sing (BNC CFY 2215);
- Charles the Bald was said to resemble his grandfather physically (BNC HPT 175).
This study compares data from translated texts against the background of original OE writings, establishing dependencies and differences between the two. Although the contrastive analysis of source and target texts is one of the major methods employed in the study, translation and translation strategies as such are only my secondary foci. The emphasis is rather on what source/target comparison can tell us about OE non-finite syntax and the typological differences between Latin and OE in this domain, and on whether contact-induced change can originate in translation. In terms of theoretical framework, I have adopted a functional-typological approach, which rests on the principles of iconicity and event integration and which, to the best of my knowledge, has not been applied systematically to OE non-finite constructions. Therefore, one more aim of the dissertation is to test this framework and to see how OE fits into the cross-linguistic picture of non-finites. My research corpus consists of two samples: 1) written OE closely dependent on Latin originals, based on editions of two gloss texts, five translations, and the Latin originals of these texts, representing four text types: hymns, religious regulations, homily/life narrative, and biblical narrative (180,622 words); and 2) written OE as independent from Latin as possible, based on a selection from the York-Toronto-Helsinki Parsed Corpus of Old English Prose (YCOE) and representing five text types: laws, charters, correspondence, chronicle narrative, and homily/life narrative (274,757 words).

Abstract:

Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain, and the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to a phase III study.
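
A schematic way to see why maximising the expected gain per phase II trial differs from maximising the rate of gain (this is our shorthand, not the paper's exact objective): if a phase II trial takes time t2 and a positive result triggers a phase III study of length t3 with probability pi, then

```latex
% Schematic only; not the objective function used in the paper.
\text{rate of gain} \;=\; \frac{E[\text{gain per phase II trial}]}{t_2 + \pi\,t_3},
```

so a design that forwards many ineffective treatments inflates pi in the denominator and can lower the rate of gain even while the expected gain per trial goes up.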

Abstract:

Multi-objective optimization is an active field of research with broad applicability in aeronautics. This report details a variant of the original NSGA-II software aimed at improving the performance of this widely used genetic algorithm in finding the optimal Pareto front of a multi-objective optimization problem, for use in UAV and aircraft design and optimization. The original NSGA-II works on a population of predetermined, constant size, and its computational cost to evaluate one generation is O(mn^2), where m is the number of objective functions and n is the population size. The basic idea motivating this work is to reduce the computational cost of the NSGA-II algorithm by making it work on a population of variable size, in order to obtain better convergence towards the Pareto front in less time. In this work, several test functions are run with both the original NSGA-II and VPNSGA-II algorithms; each test is timed in order to measure the computational cost of each trial, and the results are compared.
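
For reference, the O(mn^2) cost quoted above is the cost of the fast non-dominated sorting step, which compares every pair of individuals on all m objectives. A compact standalone version of that standard step (minimisation assumed; this is plain NSGA-II, not the variable-population modification) is sketched below.

```python
# Sketch: fast non-dominated sorting, the O(m * n^2) step of NSGA-II
# (minimisation of all objectives assumed). Illustrative, standalone version.
import numpy as np

def fast_non_dominated_sort(F):
    """F: (n, m) array of objective values. Returns a list of fronts (index lists)."""
    n = F.shape[0]
    dominated_by = [[] for _ in range(n)]   # indices that solution i dominates
    dom_count = np.zeros(n, dtype=int)      # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].append(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                      # drop the trailing empty front

# Tiny usage example with two objectives:
F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.5], [4.0, 1.0], [3.5, 4.5]])
print(fast_non_dominated_sort(F))           # [[0, 1, 3], [2], [4]]
```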