647 results for "attack models"


Relevance: 20.00%

Abstract:

QUT (Queensland University of Technology) is a leading, selectively research-intensive university based in Brisbane, Queensland, Australia, with 2,500 higher degree research students and an overall student population of 45,000. The transition from print to online resources is largely complete, and the library now provides access to 450,000 print books, 1,000 print journals, 600,000 ebooks, 120,000 ejournals and 100,000 online videos. The ebook collection is now used three times as much as the print book collection. This paper focuses on QUT Library's ebook strategy and the challenges of building and managing a rapidly growing collection of ebooks across a range of publishers, platforms, and business and financial models. The paper provides an account of QUT Library's experiences with Patron Driven Acquisition (PDA) using eBook Library (EBL); the strategic procurement of publisher and subject collections under lease and outright purchase models; the more recent transition to Evidence Based Selection (EBS) options provided by some publishers; and its piloting of etextbook models. The paper provides an in-depth analysis of each of these business models at QUT, focusing on access versus collection development, usage, cost per use, and value for money.
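
As a hedged illustration of the cost-per-use metric the paper analyses (all figures below are invented for the example and are not QUT's), a minimal Python sketch:

    import pandas as pd

    # Hypothetical spend and recorded uses per acquisition model.
    ebooks = pd.DataFrame({
        "model": ["PDA", "EBS", "lease", "purchase"],
        "spend_aud": [120_000, 80_000, 60_000, 150_000],
        "uses": [300_000, 90_000, 50_000, 100_000],
    })
    ebooks["cost_per_use"] = ebooks["spend_aud"] / ebooks["uses"]
    print(ebooks.sort_values("cost_per_use"))  # cheapest access model first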

Relevance: 20.00%

Abstract:

The method of generalized estimating equations (GEEs) has been criticized recently for a failure to protect against misspecification of working correlation models, which in some cases leads to loss of efficiency or infeasibility of solutions. However, the feasibility and efficiency of GEE methods can be enhanced considerably by using flexible families of working correlation models. We propose two ways of constructing unbiased estimating equations from general correlation models for irregularly timed repeated measures to supplement and enhance GEE. The supplementary estimating equations are obtained by differentiation of the Cholesky decomposition of the working correlation, or as score equations for decoupled Gaussian pseudolikelihood. The estimating equations are solved with computational effort equivalent to that required for a first-order GEE. Full details and analytic expressions are developed for a generalized Markovian model that was evaluated through simulation. Large-sample "sandwich" standard errors for working correlation parameter estimates are derived and shown to have good performance. The proposed estimating functions are further illustrated in an analysis of repeated measures of pulmonary function in children.
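
For context, a minimal sketch of the standard first-order GEE fit that the proposed supplementary estimating equations build on, using Python's statsmodels; the supplementary equations themselves are not part of that library and would need to be implemented separately. The dataset is a stand-in (the MASS "epil" repeated-measures seizure counts), not the pulmonary-function data from the paper:

    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Repeated seizure counts per subject; exchangeable working correlation.
    data = sm.datasets.get_rdataset("epil", "MASS").data
    model = smf.gee("y ~ age + trt + base", groups="subject", data=data,
                    family=sm.families.Poisson(),
                    cov_struct=sm.cov_struct.Exchangeable())
    result = model.fit()
    print(result.summary())  # reports robust ("sandwich") standard errors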

Relevance: 20.00%

Abstract:

Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors such as the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the outcomes of the standardised fishing effort or the relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is misspecified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
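
As a hedged sketch of the simplest of the four compared approaches, the following fits a generalised linear model that standardises catch on year and vessel effects with log(effort) as an offset; the data are simulated and the column names invented, not drawn from the NPF:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 600
    df = pd.DataFrame({
        "year": rng.integers(1990, 1996, n),
        "vessel": rng.integers(1, 16, n),
        "effort_days": rng.uniform(1.0, 10.0, n),
    })
    df["catch_kg"] = rng.gamma(2.0, 25.0, n) * df["effort_days"]

    # Log-link Gamma GLM; exponentiated year effects give a relative
    # abundance index once vessel effects are accounted for.
    fit = smf.glm("catch_kg ~ C(year) + C(vessel)", data=df,
                  family=sm.families.Gamma(link=sm.families.links.Log()),
                  offset=np.log(df["effort_days"])).fit()
    print(np.exp(fit.params.filter(like="year")))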

Relevance: 20.00%

Abstract:

The pervasive use of the World Wide Web by the general population has created a cultural shift throughout the world. It has enabled more people to share more information about more events and issues than was possible before its general use. As a consequence, it has transformed traditional news media's approach to almost every aspect of journalism, with many organisations restructuring their philosophy and practice to include a variety of participatory spaces/forums where people are free to engage in deliberative dialogue about matters of public importance. This paper draws from an international collective case study that showcases various approaches to participatory online news journalism during the period 1997–2011 (Adams, 2013). The research finds differences in the ways in which public service, commercial, and independent news media give voice to the public, and ultimately in their approach to journalism's role as the Fourth Estate, one of the key institutions of democracy. The work is framed by the notion that journalism in democratic societies has a crucial role in ensuring citizens are informed about and engaged with public affairs. An examination of four media models, OhmyNews International, News Corp Australia (formerly News Limited), the Guardian and the British Broadcasting Corporation (BBC), showcases their various approaches to participatory online news journalism and how each provides different avenues for citizen engagement. Semi-structured in-depth interviews with key senior journalists and editors provide specific comparisons of the distinctive practices in each of their organisations.

Relevance: 20.00%

Abstract:

James (1991, Biometrics 47, 1519-1530) constructed unbiased estimating functions for estimating the two parameters in the von Bertalanffy growth curve from tag-recapture data. This paper provides unbiased estimating functions for a class of growth models that incorporate stochastic components and explanatory variables. A simulation study using seasonal growth models indicates that the proposed method works well, while the least-squares methods that are commonly used in the literature may produce substantially biased estimates. The proposed model and method are also applied to real data from tagged rock lobsters to assess the possible seasonal effect on growth.
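
For reference, the increment form of the von Bertalanffy curve used with tag-recapture data is dL = (Linf - L1)(1 - exp(-K dt)). The sketch below fits it by ordinary least squares on simulated data (parameter values invented); this is the kind of least-squares fit the paper argues can be substantially biased, shown here only as the baseline the estimating-function approach improves on:

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    L1 = rng.uniform(30.0, 80.0, 200)   # length at tagging
    dt = rng.uniform(0.2, 2.0, 200)     # years at liberty
    dL = (100.0 - L1) * (1.0 - np.exp(-0.3 * dt)) + rng.normal(0.0, 2.0, 200)

    def increment(X, Linf, K):
        L1, dt = X
        return (Linf - L1) * (1.0 - np.exp(-K * dt))

    (Linf_hat, K_hat), _ = curve_fit(increment, (L1, dt), dL, p0=(90.0, 0.2))
    print(Linf_hat, K_hat)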

Relevance: 20.00%

Abstract:

Spatial epidemiology is the description and analysis of geographically indexed health data with respect to demographic, environmental, behavioural, socioeconomic, genetic, and infectious risk factors (Elliott and Wartenberg 2004). Disease maps can be useful for estimating relative risk; for ecological analyses incorporating area- and/or individual-level covariates; or for cluster analyses (Lawson 2009). As aggregated data are often more readily available, one common method of mapping disease is to aggregate the counts of disease at some geographical areal level and present them as choropleth maps (Devesa et al. 1999; Population Health Division 2006). Therefore, this chapter will focus exclusively on methods appropriate for areal data...
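
As one concrete example of the quantities such maps display, a minimal sketch (counts and populations invented) of the standardised morbidity ratio, the simplest area-level relative-risk estimate behind a choropleth map:

    import pandas as pd

    areas = pd.DataFrame({
        "area": ["A", "B", "C"],
        "observed": [12, 30, 7],
        "population": [10_000, 25_000, 4_000],
    })
    rate = areas["observed"].sum() / areas["population"].sum()  # overall rate
    areas["expected"] = rate * areas["population"]
    areas["smr"] = areas["observed"] / areas["expected"]  # SMR > 1: elevated risk
    # areas["smr"] is the value a choropleth map would shade per area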

Relevance: 20.00%

Abstract:

This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
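
The overfitting idea itself can be illustrated outside Zmix. The hedged sketch below uses scikit-learn's variational BayesianGaussianMixture, which likewise fits a deliberately overfitted mixture with a sparse weight prior so that surplus components shrink towards zero weight; it is not the Zmix MCMC algorithm with prior parallel tempering:

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(-3.0, 1.0, 300),
                        rng.normal(4.0, 0.5, 200)]).reshape(-1, 1)

    bgm = BayesianGaussianMixture(
        n_components=10,                  # deliberately too many components
        weight_concentration_prior=1e-3,  # sparse prior shrinks surplus weights
        max_iter=500,
        random_state=0,
    ).fit(x)
    print(np.round(bgm.weights_, 3))      # only ~2 components keep real weight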

Relevance: 20.00%

Abstract:

In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series, such as returns, is skewed. It is therefore of importance to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modeling the unconditional mean and variance using models that respond nonlinearly or asymmetrically to shocks. We investigate the implications of these models for the third-moment structure of the marginal distribution, as well as the conditions under which the unconditional distribution exhibits skewness and a nonzero third-order autocovariance structure. In this respect, an asymmetric or nonlinear specification of the conditional mean is found to be of greater importance than the properties of the conditional variance. Several examples are discussed and, whenever possible, explicit analytical expressions are provided for all third-order moments and cross-moments. Finally, we introduce a new tool, the shock impact curve, for investigating the impact of shocks on the conditional mean squared error of return series.
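
A hedged numerical illustration of that finding (parameter values arbitrary): simulating a GJR-type model in which asymmetry enters only the conditional variance, with a zero conditional mean, leaves both the unconditional skewness and the third-order cross-moment E[y_t y_{t-1}^2] near zero:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000
    omega, alpha, gamma, beta = 0.05, 0.05, 0.10, 0.85
    y = np.zeros(n)
    h = np.empty(n)
    h[0] = omega / (1.0 - alpha - gamma / 2.0 - beta)  # unconditional variance
    for t in range(1, n):
        # Asymmetric (GJR-type) conditional variance, symmetric innovations.
        h[t] = omega + (alpha + gamma * (y[t-1] < 0.0)) * y[t-1]**2 + beta * h[t-1]
        y[t] = np.sqrt(h[t]) * rng.standard_normal()

    skew = np.mean((y - y.mean())**3) / y.std()**3
    third = np.mean(y[1:] * y[:-1]**2)      # sample E[y_t * y_{t-1}^2]
    print(round(skew, 3), round(third, 3))  # both approximately zero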

Relevance: 20.00%

Abstract:

Public-Private Partnerships (PPP) are established globally as an important mode of procurement, and the features of PPP, not least the transfer of risk, appeal to governments, particularly in the current economic climate. Many other advantages of PPP are claimed to outweigh its costs and to afford Value for Money (VfM) relative to traditionally financed, non-PPP projects. That said, we lack comparative whole-life empirical studies of VfM in PPP and non-PPP. Whilst we await this kind of study, the pace and trajectory of PPP seem set to continue, and so in the meantime the virtues of seeking to improve PPP appear incontrovertible. The decision about which projects, or parts of projects, to offer to the market as a PPP, and the decision concerning the allocation or sharing of risks as part of engaging the PPP consortium, are among the most fundamental decisions that determine whether PPP delivers VfM. The focus of this paper is on the latter decision concerning governments' attitudes towards risk and, more specifically, on the effect of this decision on the nature of the emergent PPP consortium, or PPP model, including its economic behavior and outcomes. This paper explores the extent to which the seemingly incompatible alternatives of risk allocation and risk sharing, represented by the orthodox/conventional PPP model and the heterodox/alliance PPP model respectively, can be reconciled, along with suggestions for new research directions to inform this reconciliation. In so doing, it takes an important step towards charting a path by which governments can harness the relative strengths of both kinds of PPP model.

Relevance: 20.00%

Abstract:

This paper studies the problem of selecting users in an online social network for targeted advertising so as to maximize the adoption of a given product. In previous work, two families of models have been considered to address this problem: direct targeting and network-based targeting. The former approach targets users with the highest propensity to adopt the product, while the latter targets users with the highest influence potential, that is, users whose adoption is most likely to be followed by subsequent adoptions by their peers. This paper proposes a hybrid approach that combines a notion of propensity and a notion of influence into a single utility function. We show that targeting a fixed number of high-utility users results in more adoptions than targeting either highly influential users or users with high propensity.
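
A hedged sketch of the hybrid idea; the convex blend below is an illustrative choice of utility function, not necessarily the one defined in the paper:

    import heapq

    def hybrid_topk(propensity, influence, k, alpha=0.5):
        # propensity, influence: dicts mapping user -> score in [0, 1].
        # Blend the two scores and return the k highest-utility users.
        utility = {u: alpha * propensity[u] + (1.0 - alpha) * influence[u]
                   for u in propensity}
        return heapq.nlargest(k, utility, key=utility.get)

    print(hybrid_topk({"a": 0.9, "b": 0.2, "c": 0.5},
                      {"a": 0.1, "b": 0.8, "c": 0.6}, k=2))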

Relevance: 20.00%

Abstract:

Changing the topology of a railway network can greatly affect its capacity. Railway networks, however, can be altered in a multitude of different ways, and as each way has significant immediate and long-term financial ramifications, it is a difficult task to decide how and where to expand the network. In response, railway capacity expansion models (RCEMs) have been developed to support capacity planning activities and to remove physical bottlenecks in the current railway system. The purpose of these models is to decide, given a fixed budget, where track duplications and track subdivisions should be made in order to increase theoretical capacity the most. These models are high level and strategic, which is why they concentrate on increases to theoretical capacity. The optimization models have been applied to a case study to demonstrate their application and their worth. The case study shows how automated approaches of this nature could be a formidable alternative to current manual planning techniques and simulation. If the exact effect of track duplications and subdivisions can be sufficiently well approximated, this approach will be widely applicable.
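
At its core, the budget-constrained choice of duplications and subdivisions is a 0/1 knapsack-style selection. The toy sketch below (costs and capacity gains invented; real RCEMs model the network in far more detail) searches all affordable subsets for the largest theoretical capacity gain:

    def plan_upgrades(candidates, budget):
        # candidates: list of (name, cost, capacity_gain) tuples.
        best_gain, best_set = 0.0, []

        def search(i, cost, gain, chosen):
            nonlocal best_gain, best_set
            if gain > best_gain:
                best_gain, best_set = gain, list(chosen)
            if i == len(candidates):
                return
            name, c, g = candidates[i]
            if cost + c <= budget:             # branch: build upgrade i
                chosen.append(name)
                search(i + 1, cost + c, gain + g, chosen)
                chosen.pop()
            search(i + 1, cost, gain, chosen)  # branch: skip upgrade i

        search(0, 0.0, 0.0, [])
        return best_gain, best_set

    print(plan_upgrades([("duplicate A-B", 40, 12.0),
                         ("subdivide B-C", 25, 7.5),
                         ("duplicate C-D", 55, 15.0)], budget=80))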

Relevance: 20.00%

Abstract:

Diffusive transport is a universal phenomenon throughout both the biological and physical sciences, and models of diffusion are routinely used to interrogate diffusion-driven processes. However, most models neglect the role of volume exclusion, which can significantly alter diffusive transport, particularly within biological systems where the diffusing particles might occupy a significant fraction of the available space. In this work we use a random walk approach to provide a means to reconcile models that incorporate crowding effects on different spatial scales. Our work demonstrates that coarse-grained models incorporating simplified descriptions of excluded volume can be used in many circumstances, but that care must be taken not to push the coarse-graining process too far.
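
A minimal sketch of the kind of lattice random walk with volume exclusion being discussed (parameters illustrative): an agent's move is simply aborted when the target site is occupied, which is what slows diffusive transport at high occupancy:

    import numpy as np

    rng = np.random.default_rng(4)
    L, n_agents, steps = 200, 100, 50_000       # 50% of sites occupied
    agents = rng.choice(L, size=n_agents, replace=False)
    occupied = np.zeros(L, dtype=bool)
    occupied[agents] = True

    for _ in range(steps):
        i = rng.integers(n_agents)              # pick an agent at random
        target = (agents[i] + rng.choice([-1, 1])) % L   # periodic boundary
        if not occupied[target]:                # exclusion: blocked moves fail
            occupied[agents[i]] = False
            occupied[target] = True
            agents[i] = target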

Relevance: 20.00%

Abstract:

Objectives: To review models of care for older adults with cancer, with a focus on the role of the oncology nurse in geriatric oncology care. International exemplars of geriatric oncology nursing care are discussed. Data sources: Published peer-reviewed literature, web-based resources, professional society materials, and the authors' experience. Conclusion: Nursing care for older patients with cancer is complex and requires integrating knowledge from multiple disciplines, blending the sciences of geriatrics, oncology, and nursing while recognizing the dimensions of quality of life. Implications for Nursing Practice: Oncology nurses can benefit from learning key skills of comprehensive geriatric screening and assessment to improve the care they provide for older adults with cancer.

Relevance: 20.00%

Abstract:

Role models incite admiration and provide inspiration, contributing to learning as students aspire to emulate their example. The attributes of physician role models for medical trainees are well documented, but they remain largely unexplored in the context of veterinary medical training. The aim of the current study was to describe the attributes that final-year veterinary students (N=213) at the University of Queensland identified when reflecting on their clinical role models. Clinical role model descriptions provided by students were analyzed using concept-mapping software (Leximancer v. 2.25). The most frequent and most highly connected concepts students used when describing their role model(s) included "clients", "vet", and "animal". Role models were described as good communicators who were skilled at managing relationships with clients, patients, and staff. They had exemplary knowledge, skills, and abilities, and they were methodical and conducted well-structured consultations. They were well respected and, in turn, demonstrated respect for clients, colleagues, staff, and students alike. They were also good teachers, able to tailor explanations to suit both clients and students. Findings from this study may serve to assist with faculty development and as a basis for further research in this area.

Relevance: 20.00%

Abstract:

The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g., Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), and classification trees with complexity-based or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see the review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise. We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
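
One hedged way to act on elicited variable-quality judgements in a classical SDM, shown only as an illustration of the general idea rather than the protocol reported here: rescale each candidate predictor by its elicited weight before an L1-penalised fit, which is equivalent to penalising low-quality variables more heavily:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 4))              # candidate covariates
    y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

    elicited = np.array([1.0, 0.8, 0.2, 0.1])  # expert quality scores in (0, 1]
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(X * elicited, y)                 # low-weight variables shrink first
    coef = model.coef_.ravel() * elicited      # coefficients on the original scale
    print(coef)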