965 results for programming models


Relevance: 20.00%

Abstract:

The contemporary methodology for growth models of organisms is based on continuous trajectories, which prevents us from modelling stepwise growth in crustacean populations. Growth models for fish normally assume a continuous function, but a different type of model is needed for crustaceans, which must moult in order to grow. Crustacean growth is therefore a discontinuous process, driven by the periodic shedding of the exoskeleton at moulting, and this stepwise growth makes estimation more complex. Stochastic approaches can be used to model discontinuous growth, commonly known as "jumps" (Figure 1). However, a stochastic growth model must produce only positive jumps. In view of this, we introduce a subordinator, a special case of a Lévy process. A subordinator is a non-decreasing Lévy process, which assists in modelling crustacean growth and gives a better understanding of individual variability and stochasticity in moulting periods and increments. We develop methods for parameter estimation and illustrate them with a dataset from laboratory experiments. The motivating dataset is from the ornate rock lobster, Panulirus ornatus, found between Australia and Papua New Guinea. Because of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Since all hard parts are shed at each moult, exact age determination of a lobster is challenging. However, the growth parameters of the moult process can be estimated from tank data through: (i) inter-moult periods, and (ii) moult increments. We derive a joint density made up of two functions, one for moult increments and the other for the time intervals between moults.
We claim these functions are conditionally independent given the pre-moult length and the inter-moult periods; by the Markov property, the moult increments and inter-moult periods are conditionally independent, so the parameters in each function can be estimated separately. We then integrate the two functions by a Monte Carlo method, obtaining a population mean for crustacean growth (e.g. the red curve in Figure 1).
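The subordinator idea can be sketched in a few lines: a compound Poisson process with exponential waiting times (inter-moult periods) and exponential jump sizes (moult increments) is a simple non-decreasing Lévy process. The parameter values and distributional choices below are illustrative assumptions, not the paper's fitted model.

```python
import random

def simulate_moult_growth(l0, rate, mean_inc, t_end, seed=0):
    """Simulate stepwise crustacean growth as a compound Poisson
    subordinator: exponential waiting times between moults and
    exponentially distributed (hence strictly positive) moult increments.
    Returns a list of (time, length) points along the growth path."""
    rng = random.Random(seed)
    t, length = 0.0, l0
    path = [(t, length)]
    while True:
        t += rng.expovariate(rate)                 # inter-moult period
        if t > t_end:
            break
        length += rng.expovariate(1.0 / mean_inc)  # positive jump (moult increment)
        path.append((t, length))
    return path

# Illustrative run: initial length 50 mm, mean inter-moult period 2 time units.
path = simulate_moult_growth(l0=50.0, rate=0.5, mean_inc=4.0, t_end=100.0)
```

Averaging many such paths is the Monte Carlo step that yields a population mean growth curve.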

Relevance: 20.00%

Abstract:

Consider a general regression model with an arbitrary and unknown link function and a stochastic selection variable that determines whether the outcome variable is observable or missing. The paper proposes U-statistics that are based on kernel functions as estimators for the directions of the parameter vectors in the link function and the selection equation, and shows that these estimators are consistent and asymptotically normal.

Relevance: 20.00%

Abstract:

QUT (Queensland University of Technology) is a leading university based in Brisbane, Queensland, Australia: a selectively research-intensive university with 2,500 higher degree research students and an overall student population of 45,000. The transition from print to online resources is largely complete, and the library now provides access to 450,000 print books, 1,000 print journals, 600,000 ebooks, 120,000 ejournals and 100,000 online videos. The ebook collection is now used three times as much as the print book collection. This paper focuses on QUT Library's ebook strategy and the challenges of building and managing a rapidly growing collection of ebooks using a range of publishers, platforms, and business and financial models. The paper provides an account of QUT Library's experiences in using Patron Driven Acquisition (PDA) via eBook Library (EBL); the strategic procurement of publisher and subject collections under lease and outright-purchase models; the more recent transition to Evidence Based Selection (EBS) options provided by some publishers; and its piloting of etextbook models. The paper provides an in-depth analysis of each of these business models at QUT, focusing on access versus collection development, usage, cost per use, and value for money.
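The central metric in such business-model comparisons is cost per use: total spend on a collection divided by its recorded uses. A minimal sketch, with entirely hypothetical figures (not QUT's actual data):

```python
def cost_per_use(total_cost, uses):
    """Cost per use: total spend on a collection divided by recorded uses."""
    return total_cost / uses if uses else float("inf")

# Hypothetical figures for illustration only -- not actual QUT data.
collections = {
    "PDA (EBL)":         {"cost": 120000.0, "uses": 80000},
    "Publisher lease":   {"cost": 60000.0,  "uses": 15000},
    "Outright purchase": {"cost": 200000.0, "uses": 25000},
}
for name, c in collections.items():
    print(f"{name}: ${cost_per_use(c['cost'], c['uses']):.2f} per use")
```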

Relevance: 20.00%

Abstract:

The method of generalized estimating equations (GEEs) has been criticized recently for a failure to protect against misspecification of working correlation models, which in some cases leads to a loss of efficiency or infeasibility of solutions. However, the feasibility and efficiency of GEE methods can be enhanced considerably by using flexible families of working correlation models. We propose two ways of constructing unbiased estimating equations from general correlation models for irregularly timed repeated measures to supplement and enhance GEE. The supplementary estimating equations are obtained by differentiation of the Cholesky decomposition of the working correlation, or as score equations for decoupled Gaussian pseudolikelihood. The estimating equations are solved with computational effort equivalent to that required for a first-order GEE. Full details and analytic expressions are developed for a generalized Markovian model that was evaluated through simulation. Large-sample "sandwich" standard errors for working correlation parameter estimates are derived and shown to have good performance. The proposed estimating functions are further illustrated in an analysis of repeated measures of pulmonary function in children.
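For context, the baseline the paper supplements looks like this: a first-order GEE iteration for a Gaussian outcome with identity link and an exchangeable working correlation. This sketch assumes equal cluster sizes and a fixed correlation parameter; the paper's Cholesky-differentiation and pseudolikelihood score equations for estimating the correlation parameters are not reproduced here.

```python
import numpy as np

def gee_gaussian(X_clusters, y_clusters, alpha=0.3, n_iter=25):
    """First-order GEE for a Gaussian outcome, identity link, and an
    exchangeable working correlation with off-diagonal entry alpha.
    Iterates beta <- beta + (sum X'R^-1 X)^-1 sum X'R^-1 (y - X beta)."""
    p = X_clusters[0].shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        A = np.zeros((p, p))
        b = np.zeros(p)
        for X, y in zip(X_clusters, y_clusters):
            m = len(y)
            R = np.full((m, m), alpha) + (1 - alpha) * np.eye(m)
            Rinv = np.linalg.inv(R)
            A += X.T @ Rinv @ X
            b += X.T @ Rinv @ (y - X @ beta)
        beta = beta + np.linalg.solve(A, b)
    return beta

# Simulated clusters of 4 repeated measures; true coefficients (1, 2).
rng = np.random.default_rng(0)
Xs = [np.column_stack([np.ones(4), rng.normal(size=4)]) for _ in range(50)]
ys = [X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=4) for X in Xs]
beta = gee_gaussian(Xs, ys)
```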

Relevance: 20.00%

Abstract:

Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors such as the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the outcomes of the standardised fishing effort or the relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
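The simplest of the compared approaches, a linear model on log catch per unit effort, can be sketched as follows. Year effects adjusted for a vessel covariate become the relative abundance index. The data here are synthetic and the single covariate is a stand-in for the fleet characteristics the paper models.

```python
import numpy as np

# Synthetic catch/effort data: 5 seasons, one standardised vessel covariate.
rng = np.random.default_rng(1)
n = 300
year = rng.integers(0, 5, size=n)
vessel_power = rng.normal(size=n)
true_year_effect = np.array([0.0, 0.1, -0.2, 0.3, 0.05])
log_cpue = (true_year_effect[year] + 0.5 * vessel_power
            + rng.normal(scale=0.05, size=n))

# Design matrix: intercept, year dummies (year 0 baseline), vessel covariate.
X = np.column_stack([np.ones(n)]
                    + [(year == k).astype(float) for k in range(1, 5)]
                    + [vessel_power])
coef, *_ = np.linalg.lstsq(X, log_cpue, rcond=None)

# Relative abundance index: exponentiated year effects, year 0 scaled to 1.
index = np.exp(np.concatenate([[0.0], coef[1:5]]))
```

The mixed-model, GEE, and GLM variants differ in how they treat the error variance and within-vessel correlation, which, as the abstract notes, mainly affects standard errors rather than the point estimates here.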

Relevance: 20.00%

Abstract:

Two algorithms that improve upon the sequent-peak procedure for reservoir capacity calculation are presented. The first incorporates storage-dependent losses (like evaporation losses) exactly as the standard linear programming formulation does. The second extends the first so as to enable designing with less than maximum reliability even when allowable shortfall in any failure year is also specified. Together, the algorithms provide a more accurate, flexible and yet fast method of calculating the storage capacity requirement in preliminary screening and optimization models.
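For reference, the standard sequent-peak procedure that the two algorithms improve upon can be sketched in a few lines. The required storage is the largest cumulative deficit of demand over inflow; the paper's refinements (storage-dependent losses and designing for sub-maximal reliability) are not implemented in this basic version.

```python
def sequent_peak(inflows, demands):
    """Standard sequent-peak procedure: required storage capacity is the
    largest cumulative deficit K_t = max(0, K_{t-1} + D_t - Q_t).
    The record is cycled twice so a critical period spanning the end of
    the record is not missed."""
    K, capacity = 0.0, 0.0
    for q, d in zip(inflows * 2, demands * 2):  # two passes over the record
        K = max(0.0, K + d - q)
        capacity = max(capacity, K)
    return capacity

# A constant demand of 3 against a variable inflow sequence.
cap = sequent_peak([4.0, 2.0, 1.0, 5.0, 3.0], [3.0] * 5)
```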

Relevance: 20.00%

Abstract:

A new method of specifying the syntax of programming languages, known as hierarchical language specifications (HLS), is proposed. Efficient parallel algorithms for parsing languages generated by HLS are presented. These algorithms run on an exclusive-read exclusive-write parallel random-access machine. They require O(n) processors and O(log² n) time, where n is the length of the string to be parsed. The most important feature of these algorithms is that they do not use a stack.
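The HLS algorithms themselves are not reproduced here, but the flavour of stack-free parsing can be illustrated with the classic prefix-sum computation of bracket nesting depth, a standard building block of parallel parsers: a prefix sum is computable in O(log n) time on n EREW PRAM processors, with no stack.

```python
from itertools import accumulate

def nesting_depths(s):
    """Stack-free computation of bracket nesting depth via a prefix sum:
    each '(' contributes +1 and each ')' contributes -1.  Returns the
    depth after each character; the string is balanced iff every depth
    is >= 0 and the final depth is 0.  (Sequential here; the same prefix
    sum parallelises to O(log n) time on a PRAM.)"""
    return list(accumulate(1 if c == "(" else -1 for c in s))

depths = nesting_depths("(()())")
```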

Relevance: 20.00%

Abstract:

The relations for the inner-layer potential difference (E) in the presence of adsorbed organic molecules are derived for three hierarchical models, in terms of molecular constants such as permanent dipole moments, polarizabilities, etc. It is shown how the experimentally observed patterns of the E vs. $\theta$ plots (linear in all ranges of $\sigma^M$, non-linear in one or both regions of $\sigma^M$, etc.) can be understood in a semi-quantitative manner from the simplest model in our hierarchy, viz. the two-state site-parity version. Two-state multi-site and three-state (site-parity) models are also analysed and the slope $(\partial E/\partial \theta)_{\sigma^M}$ tabulated for these as well. The results for the Esin–Markov effect are derived for all the models and compared with the earlier result of Parsons. A comparison with the GSL phenomenological equation is presented and its molecular basis, as well as its limitations, is analysed. In particular, the two-state multi-site and three-state (site-parity) models yield E–$\sigma^M$ relations that are more general than the "unified" GSL equation. The possibility of viewing the compact layer as a "composite medium" with an "effective dielectric constant" and obtaining novel phenomenological descriptions is also indicated.

Relevance: 20.00%

Abstract:

The pervasive use of the World Wide Web by the general population has created a cultural shift throughout the world. It has enabled more people to share more information about more events and issues than was possible before its general use. As a consequence, it has transformed traditional news media's approach to almost every aspect of journalism, with many organisations restructuring their philosophy and practice to include a variety of participatory spaces/forums where people are free to engage in deliberative dialogue about matters of public importance. This paper draws from an international collective case study that showcases various approaches to participatory online news journalism during the period 1997–2011 (Adams, 2013). The research finds differences in the ways in which public service, commercial, and independent news media give voice to the public, and ultimately in their approach to journalism's role as the Fourth Estate, one of the key institutions of democracy. The work is framed by the notion that journalism in democratic societies has a crucial role in ensuring citizens are informed and engaged with public affairs. An examination of four media models, OhmyNews International, News Corp Australia (formerly News Limited), the Guardian and the British Broadcasting Corporation (BBC), showcases the various approaches to participatory online news journalism and how each provides different avenues for citizen engagement. Semi-structured in-depth interviews with some of the key senior journalists and editors provide specific information on comparisons between the distinctive practices in each of their employer organisations.

Relevance: 20.00%

Abstract:

James (1991, Biometrics 47, 1519-1530) constructed unbiased estimating functions for estimating the two parameters of the von Bertalanffy growth curve from tag-recapture data. This paper provides unbiased estimating functions for a class of growth models that incorporate stochastic components and explanatory variables. A simulation study using seasonal growth models indicates that the proposed method works well, while the least-squares methods commonly used in the literature may produce substantially biased estimates. The proposed model and method are also applied to real data from tagged rock lobsters to assess a possible seasonal effect on growth.
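The underlying increment model is the von Bertalanffy relation for tag-recapture data: an animal of length L₁ recaptured after time Δt grows by ΔL = (L∞ − L₁)(1 − e^(−KΔt)). The sketch below implements that increment and the classic Fabens least-squares fit, which is the kind of least-squares method the abstract notes can be biased; the paper's unbiased estimating functions are not reproduced. The grid-search fit and simulated data are illustrative assumptions.

```python
import math
import random

def vb_increment(l1, dt, linf, k):
    """Von Bertalanffy growth increment for a tagged animal of length l1
    recaptured after elapsed time dt: (Linf - l1) * (1 - exp(-k * dt))."""
    return (linf - l1) * (1.0 - math.exp(-k * dt))

def fabens_fit(data, linf_grid, k_grid):
    """Classic Fabens least-squares fit of (Linf, K) over a parameter grid.
    data is a list of (pre-moult length, time at liberty, observed increment)."""
    best = None
    for linf in linf_grid:
        for k in k_grid:
            sse = sum((dl - vb_increment(l1, dt, linf, k)) ** 2
                      for l1, dt, dl in data)
            if best is None or sse < best[0]:
                best = (sse, linf, k)
    return best[1], best[2]

# Simulated tag-recapture data with true Linf = 100, K = 0.3.
rng = random.Random(0)
data = [(l1, dt, vb_increment(l1, dt, 100.0, 0.3) + rng.gauss(0, 0.2))
        for l1, dt in ((rng.uniform(40, 90), rng.uniform(0.5, 2.0))
                       for _ in range(200))]
linf_hat, k_hat = fabens_fit(data,
                             [95.0 + i for i in range(11)],
                             [0.2 + 0.02 * i for i in range(11)])
```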

Relevance: 20.00%

Abstract:

We consider models for the rheology of dense, slowly deforming granular materials based on classical and Cosserat plasticity, and their viscoplastic extensions that account for small but finite particle inertia. We determine the scale for the viscosity by expanding the stress in a dimensionless parameter that is a measure of the particle inertia. We write the constitutive relations for classical and Cosserat plasticity in stress-explicit form. The viscoplastic extensions are made by adding a rate-dependent viscous stress to the plasticity stress. We apply the models to plane Couette flow, and show that the classical plasticity and viscoplasticity models have features that depart from experimental observations; the prediction of the Cosserat viscoplasticity model is qualitatively similar to that of Cosserat plasticity, but the viscosities modulate the thickness of the shear layer.
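A standard dimensionless measure of particle inertia in dense granular rheology is the inertial number I = γ̇ d / √(P/ρ); it is shown here only as an illustration of the kind of small parameter such an expansion uses, not necessarily the paper's exact parameter.

```python
import math

def inertial_number(shear_rate, d, pressure, rho):
    """Inertial number I = shear_rate * d / sqrt(P / rho): a standard
    dimensionless measure of particle inertia in granular rheology.
    Small I corresponds to the quasi-static, plasticity-dominated regime."""
    return shear_rate * d * math.sqrt(rho / pressure)

# Slow shearing of 1 mm grains (density 2500 kg/m^3) under 1 kPa pressure.
I = inertial_number(shear_rate=1.0, d=1e-3, pressure=1e3, rho=2500.0)
```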

Relevance: 20.00%

Abstract:

Disease mapping concerns the description and analysis of geographically indexed health data with respect to demographic, environmental, behavioural, socioeconomic, genetic, and infectious risk factors (Elliott and Wartenberg 2004). Disease maps can be useful for estimating relative risk; ecological analyses, incorporating area and/or individual-level covariates; or cluster analyses (Lawson 2009). As aggregated data are often more readily available, one common method of mapping disease is to aggregate the counts of disease at some geographical areal level, and present them as choropleth maps (Devesa et al. 1999; Population Health Division 2006). Therefore, this chapter will focus exclusively on methods appropriate for areal data...
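The quantity typically displayed on such a choropleth map is the standardised incidence ratio (SIR) for each area: observed count divided by expected count. A minimal sketch, with hypothetical areal counts:

```python
def standardised_incidence_ratio(observed, expected):
    """SIR = observed count / expected count: the simple relative-risk
    estimate mapped for each area in a choropleth disease map."""
    return observed / expected

# Hypothetical (observed, expected) counts for three areas.
areas = {"A": (12, 10.0), "B": (3, 6.0), "C": (20, 15.0)}
sir = {name: standardised_incidence_ratio(o, e) for name, (o, e) in areas.items()}
```

An SIR above 1 flags an area with more cases than expected; smoothing these raw ratios is where the model-based methods of the chapter come in.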

Relevance: 20.00%

Abstract:

This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
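The mechanism the overfitting approach relies on can be shown directly: under a sparse Dirichlet prior on the mixture weights, a deliberately overfitted model leaves its superfluous components with weights near zero. This is a sketch of the prior behaviour only, not the full prior-parallel-tempered MCMC of the paper; the concentration value is an illustrative assumption.

```python
import random

def dirichlet(alphas, rng):
    """Sample from a Dirichlet distribution via normalised Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

# Deliberately overfit: 10 components with sparse prior concentration 0.05.
# Most sampled weights are tiny, so extra components empty out -- the
# behaviour the priors in the Zmix overfitting approach are chosen to induce.
rng = random.Random(42)
weights = dirichlet([0.05] * 10, rng)
big = [w for w in weights if w > 0.05]   # components with non-negligible weight
```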

Relevance: 20.00%

Abstract:

Models that implement the bio-physical components of agro-ecosystems are ideally suited for exploring sustainability issues in cropping systems. Sustainability may be represented as a number of objectives to be maximised or minimised. However, the full decision space of these objectives is usually very large and simplifications are necessary to safeguard computational feasibility. Different optimisation approaches have been proposed in the literature, usually based on mathematical programming techniques. Here, we present a search approach based on a multiobjective evaluation technique within an evolutionary algorithm (EA), linked to the APSIM cropping systems model. A simple case study addressing crop choice and sowing rules in North-East Australian cropping systems is used to illustrate the methodology. Sustainability of these systems is evaluated in terms of economic performance and resource use. Due to the limited size of this sample problem, the quality of the EA optimisation can be assessed by comparison to the full problem domain. Results demonstrate that the EA procedure, parameterised with generic parameters from the literature, converges to a useable solution set within a reasonable amount of time. Frontier ‘‘peels’’ or Pareto-optimal solutions as described by the multiobjective evaluation procedure provide useful information for discussion on trade-offs between conflicting objectives.
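The core filter behind the Pareto-optimal "peels" is non-dominated sorting: keep only the objective vectors not dominated by any other. A minimal sketch, assuming all objectives are expressed so that larger is better (e.g. gross margin and negated water use, as hypothetical stand-ins for the study's economic and resource-use objectives):

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors, assuming
    every objective is to be maximised.  This is the filter a
    multiobjective EA applies to its population each generation."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (gross margin, -water use) scores for candidate cropping rules.
scores = [(10, -5), (8, -2), (12, -9), (7, -1), (9, -6)]
front = pareto_front(scores)
```

Repeatedly removing the current front yields the successive frontier "peels" used to discuss trade-offs between conflicting objectives.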