973 results for Movement models
Models as epistemic artefacts: Toward a non-representationalist account of scientific representation
Abstract:
description and analysis of geographically indexed health data with respect to demographic, environmental, behavioural, socioeconomic, genetic, and infectious risk factors (Elliott and Wartenberg 2004). Disease maps can be useful for estimating relative risk; ecological analyses, incorporating area and/or individual-level covariates; or cluster analyses (Lawson 2009). As aggregated data are often more readily available, one common method of mapping disease is to aggregate the counts of disease at some geographical areal level, and present them as choropleth maps (Devesa et al. 1999; Population Health Division 2006). Therefore, this chapter will focus exclusively on methods appropriate for areal data...
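The relative-risk estimation mentioned above can be illustrated with a minimal sketch for areal count data: the standardised morbidity ratio (SMR) divides observed by expected counts in each area. The area labels and counts below are invented for demonstration and are not taken from the chapter.

```python
# Hypothetical areal data: observed case counts and expected counts
# (e.g. from age-standardisation) per area. All numbers are invented.
observed = {"A": 12, "B": 30, "C": 7}
expected = {"A": 10.0, "B": 25.0, "C": 14.0}

def smr(observed, expected):
    """Standardised morbidity ratio: observed over expected counts per area."""
    return {area: observed[area] / expected[area] for area in observed}

rates = smr(observed, expected)

# Areas with SMR > 1 record more cases than expected and might be
# shaded more darkly on a choropleth map.
elevated = sorted(area for area, value in rates.items() if value > 1)
```

In practice such crude ratios are unstable for sparsely populated areas, which is one motivation for model-based smoothing in disease mapping.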
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates, and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
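The overfitting strategy described above, choosing priors that push superfluous components towards zero weight, can be sketched with a symmetric Dirichlet prior whose concentration is well below one. This is a minimal illustration of the principle only, not the Zmix sampler; K_fit, alpha, and the 0.05 activity threshold are invented values.

```python
import random

def dirichlet(alpha, k, rng):
    """One draw from a symmetric Dirichlet(alpha, ..., alpha) via Gamma draws."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(g)
    return [x / total for x in g]

rng = random.Random(0)
K_fit = 10    # deliberately overfitted number of mixture components
alpha = 0.01  # concentration << 1 makes the prior favour sparse weight vectors

draws = [dirichlet(alpha, K_fit, rng) for _ in range(1000)]

# Count "active" components (non-negligible weight) in each prior draw.
active = [sum(w > 0.05 for w in draw) for draw in draws]
mean_active = sum(active) / len(active)
# Under the sparse prior the typical number of active components is far
# below K_fit, which is what lets an overfitted mixture empty its extra groups.
```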
Abstract:
This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. This study has a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed in this study are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool and why is this process problematic? The core argument is that mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account further develops the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain.
The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework to study the changes in the degree of interdisciplinarity, the tools and research practices developed to support the collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research problem asks how we might characterise these objects, what is typical of them, and what kinds of changes happen in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.
Abstract:
In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series, such as returns, is skewed. It is therefore important to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modeling the unconditional mean and variance using models that respond nonlinearly or asymmetrically to shocks. We investigate the implications of these models for the third-moment structure of the marginal distribution, as well as conditions under which the unconditional distribution exhibits skewness and a nonzero third-order autocovariance structure. In this respect, an asymmetric or nonlinear specification of the conditional mean is found to be of greater importance than the properties of the conditional variance. Several examples are discussed and, whenever possible, explicit analytical expressions are provided for all third-order moments and cross-moments. Finally, we introduce a new tool, the shock impact curve, for investigating the impact of shocks on the conditional mean squared error of return series.
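As a small numerical companion to the idea of unconditional skewness, the sample third central moment and standardised skewness of a return series can be computed directly. The return values below are invented and deliberately left-skewed; they are not data from the paper.

```python
import statistics

# Invented daily returns with one large negative outlier (left skew).
returns = [0.01, 0.01, 0.02, 0.00, -0.10]

mean = statistics.fmean(returns)
# Third central moment: E[(r - mean)^3], estimated by the sample average.
m3 = statistics.fmean([(r - mean) ** 3 for r in returns])
# Standardised skewness: third central moment over the cubed std deviation.
skewness = m3 / statistics.pstdev(returns) ** 3
```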
Abstract:
Climate matching software (CLIMEX) was used to prioritise areas to explore for biological control agents in the native range of cat's claw creeper Macfadyena unguis-cati (Bignoniaceae), and to prioritise areas to release the agents in the introduced ranges of the plant. The native distribution of cat's claw creeper was used to predict the potential range of climatically suitable habitats for cat's claw creeper in its introduced ranges. A Composite Match Index (CMI) of cat's claw creeper was determined with the 'Match Climates' function in order to match the ranges in Australia and South Africa where the plant is introduced with its native range in South and Central America. This information was used to determine which areas might yield climatically adapted agents. Locations in northern Argentina had CMI values that best matched sites with cat's claw creeper infestations in Australia and South Africa. None of the sites from which the three currently prioritised biological control agents for cat's claw creeper were collected had CMI values higher than 0.8. The analysis showed that central and eastern Argentina, south Brazil, Uruguay and parts of Bolivia and Paraguay should be prioritised for exploration for new biological control agents for cat's claw creeper to be used in Australia and South Africa.
Abstract:
Quantifying the potential spread and density of an invading organism enables decision-makers to determine the most appropriate response to incursions. We present two linked models that estimate the spread of Solenopsis invicta Buren (red imported fire ant) in Australia based on limited data gathered after its discovery in Brisbane in 2001. A stochastic cellular automaton determines spread within a location (100 km by 100 km), and this is coupled with a model that simulates human-mediated movement of S. invicta to new locations. In the absence of any control measures, the models predict that S. invicta could cover 763 000–4 066 000 km² by the year 2035 and be found at 200 separate locations around Australia by 2017–2027, depending on the rate of spread. These estimated rates of expansion (assuming no control efforts were in place) are higher than those experienced in the USA in the 1940s during the early invasion phases in that country. Active control efforts and quarantine controls in the USA (including a concerted eradication attempt in the 1960s) may have slowed spread. Further, milder winters, the presence of the polygynous social form, and increased trade and human mobility in Australia in the 2000s compared with the USA in the 1940s could contribute to faster range expansion.
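The within-location component, a stochastic cellular automaton, can be sketched in miniature: occupied cells probabilistically colonise their neighbours each time step. The grid size, neighbourhood, colonisation probability, and number of steps below are invented for illustration and are not the paper's calibrated values.

```python
import random

def step(grid, p, rng):
    """One time step: each occupied cell may colonise each of its
    four neighbours independently with probability p."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and rng.random() < p:
                        new[ni][nj] = 1
    return new

rng = random.Random(1)
n = 21
grid = [[0] * n for _ in range(n)]
grid[n // 2][n // 2] = 1  # a single founding nest at the centre

for _ in range(10):
    grid = step(grid, p=0.5, rng=rng)

occupied = sum(map(sum, grid))  # cells reached after ten steps
```

Coupling such a local model to a jump process for human-mediated movement, as the abstract describes, would add occasional long-distance colonisations between grids.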
Abstract:
The problem of identifying parameters of time-invariant linear dynamical systems with fractional derivative damping models, based on a spatially incomplete set of measured frequency response functions and experimentally determined eigensolutions, is considered. Methods based on inverse sensitivity analysis of damped eigensolutions and frequency response functions are developed. It is shown that the eigensensitivity method requires the development of derivatives of solutions of an asymmetric generalized eigenvalue problem. Both first- and second-order inverse sensitivity analyses are considered. The study demonstrates the successful performance of the identification algorithms, developed based on synthetic data, on one-, two-, and 33-degree-of-freedom vibrating systems with fractional dampers. Limited studies have also been conducted by combining finite element modeling with experimental data on accelerances measured in laboratory conditions on a system consisting of two steel beams rigidly joined together by a rubber hose. The method based on sensitivity of frequency response functions is shown to be more efficient than the eigensensitivity-based method in identifying system parameters, especially for large-scale systems.
Abstract:
Six species of line-caught coral reef fish (Plectropomus spp., Lethrinus miniatus, Lethrinus laticaudis, Lutjanus sebae, Lutjanus malabaricus and Lutjanus erythropterus) were tagged by members of the Australian National Sportsfishing Association (ANSA) in Queensland between 1986 and 2003. Of the 14,757 fish tagged, 1607 were recaptured, and we analysed these data to describe movement and determine factors likely to impact release survival. All species were classified as residents, since over 80% of recaptures for each species occurred within 1 km of the release site. Few individuals (range 0.8-5%) were recaptured more than 20 km from their release point. L. sebae had a higher recapture rate (19.9%) than the other species studied (range 2.1-11.7%). Venting swimbladder gases, regardless of whether or not fish appeared to be suffering from barotrauma, significantly enhanced (P < 0.05) the survival of L. sebae and L. malabaricus but had no significant effect (P > 0.05) on L. erythropterus. The condition of fish on release, subjectively assessed by anglers, had a significant effect on recapture rate only for L. sebae, where fish in "fair" condition had less than half the recapture rate of those assessed as in "excellent" or "good" condition. The recapture rate of L. sebae and L. laticaudis was significantly (P < 0.05) affected by depth, with recapture rate declining in depths exceeding 30 m. Overall, the results showed that depth of capture, release condition and treatment for barotrauma influenced recapture rate for some species, but these effects were not consistent across all species studied. Recommendations were made to the ANSA tagging clubs to record additional information, such as injury, hooking location and hook type, to enable a more comprehensive future assessment of the factors influencing release survival.
Abstract:
Movement of tephritid flies underpins their survival, reproduction, and ability to establish in new areas and is thus of importance when designing effective management strategies. Much of the knowledge currently available on tephritid movement throughout landscapes comes from the use of direct or indirect methods that rely on the trapping of individuals. Here, we review published experimental designs and methods from mark-release-recapture (MRR) studies, as well as other methods, that have been used to estimate movement of the four major tephritid pest genera (Bactrocera, Ceratitis, Anastrepha, and Rhagoletis). In doing so, we aim to illustrate the theoretical and practical considerations needed to study tephritid movement. MRR studies make use of traps to directly estimate the distance that tephritid species can move within a generation and to evaluate the ecological and physiological factors that influence dispersal patterns. MRR studies, however, require careful planning to ensure that the results obtained are not biased by the methods employed, including marking methods, trap properties, trap spacing, and spatial extent of the trapping array. Despite these obstacles, MRR remains a powerful tool for determining tephritid movement, with data particularly required for understudied species that affect developing countries. To ensure that future MRR studies are successful, we suggest that site selection be carefully considered and sufficient resources be allocated to achieve optimal spacing and placement of traps in line with the stated aims of each study. An alternative to MRR is to make use of indirect methods for determining movement, or more correctly, gene flow, which have become widely available with the development of molecular tools. 
Key to these methods is the trapping and sequencing of a suitable number of individuals to represent the genetic diversity of the sampled population and investigate population structuring using nuclear genomic markers or non-recombinant mitochondrial DNA markers. Microsatellites are currently the preferred marker for detecting recent population displacement and provide genetic information that may be used in assignment tests for the direct determination of contemporary movement. Neither MRR nor molecular methods, however, are able to monitor fine-scale movements of individual flies. Recent developments in the miniaturization of electronics offer the tantalising possibility to track individual movements of insects using harmonic radar. Computer vision and radio frequency identification tags may also permit the tracking of fine-scale movements by tephritid flies by automated resampling, although these methods come with the same problems as traditional traps used in MRR studies. Although all methods described in this chapter have limitations, a better understanding of tephritid movement far outweighs the drawbacks of the individual methods because of the need for this information to manage tephritid populations.
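A small sketch of the most basic MRR output, per-fly displacement between release and recapture points: the coordinates below are invented (in metres) and purely illustrative.

```python
import math

# Hypothetical release point and recapture coordinates, in metres.
release = (0.0, 0.0)
recaptures = [(30.0, 40.0), (0.0, 100.0), (60.0, 80.0)]

def displacement(a, b):
    """Straight-line distance between two points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

distances = [displacement(release, r) for r in recaptures]
mean_dispersal = sum(distances) / len(distances)
max_dispersal = max(distances)
```

Real MRR estimates must additionally correct for trap layout, since the spatial extent and spacing of the trapping array bound the distances that can be observed, one of the design biases the review discusses.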
Abstract:
Knowledge of drag force is an important design parameter in aerodynamics. Measurement of aerodynamic forces at hypersonic speed is a challenge, and ground test facilities such as shock tunnels are usually used to carry out such tests. Accelerometer-based force balances are commonly employed for measuring aerodynamic drag around bodies in hypersonic shock tunnels. In this study, we present an analysis of the effect of model material on the performance of an accelerometer balance used for measurement of drag in impulse facilities. From the experimental studies performed on models constructed out of Bakelite HYLEM and Aluminum, it is clear that the rigid body assumption does not hold good during the short testing duration available in shock tunnels. This is notwithstanding the fact that the rubber bush used for supporting the model allows unconstrained motion of the model during the short testing time available in the shock tunnel. The vibrations induced in the model on impact loading in the shock tunnel are damped out in the metallic model, resulting in a smooth acceleration signal, while the signal becomes noisy and non-linear when non-isotropic materials like Bakelite HYLEM are used. This also implies that careful analysis and proper data reduction methodologies are necessary for measuring aerodynamic drag for non-metallic models in shock tunnels. The results from the drag measurements carried out using a 60° half-angle blunt cone are given in the present analysis.
Abstract:
Public-Private Partnerships (PPP) are established globally as an important mode of procurement, and the features of PPP, not least of which is the transfer of risk, appeal to governments, particularly in the current economic climate. Many other advantages of PPP are claimed to outweigh the costs of PPP and afford Value for Money (VfM) relative to traditionally financed projects, or non-PPP. That said, we lack comparative whole-life empirical studies of VfM in PPP and non-PPP. Whilst we await this kind of study, the pace and trajectory of PPP seem set to continue, and so in the meantime the virtues of seeking to improve PPP appear incontrovertible. The decision about which projects, or parts of projects, to offer to the market as a PPP, and the decision concerning the allocation or sharing of risks as part of engaging the PPP consortium, are among the most fundamental decisions that determine whether PPP deliver VfM. The focus in this paper is on the latter decision concerning governments' attitudes towards risk and, more specifically, the effect of this decision on the nature of the emergent PPP consortium, or PPP model, including its economic behavior and outcomes. This paper explores the extent to which the seemingly incompatible alternatives of risk allocation and risk sharing, represented by the orthodox/conventional PPP model and the heterodox/alliance PPP model respectively, can be reconciled, along with suggestions for new research directions to inform this reconciliation. In so doing, an important step is taken towards charting a path by which governments can harness the relative strengths of both kinds of PPP model.
Abstract:
Absenteeism is one of the major problems of Indian industries. It necessitates the employment of more manpower than the jobs require, resulting in increased manpower costs, and lowers the efficiency of plant operation through lowered performance and higher rejects. It also causes machine idleness if extra manpower is not hired, resulting in disrupted work schedules and assignments. Several studies have investigated the causes of absenteeism and their remedy (e.g., Vaid 1967), as well as the relationship between absenteeism and turnover, with a suggested model for diagnosis and treatment (Hawk 1976). However, production foremen and supervisors face the operating task of determining how many extra operatives to hire in order to stave off the adverse effects of absenteeism on the man-machine system. This paper deals with a class of reserve manpower models based on the reject allowance model familiar in the quality control literature. The present study considers, in addition to absenteeism, machine failures and the graded nature of manpower met with in production systems, and seeks to find optimal reserve manpower through computer simulation.
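The reject-allowance style of reasoning the paper builds on can be sketched analytically for the simplest case, absenteeism alone with no machine failures or manpower grades: choose the smallest reserve r such that the probability of a shortfall, with each scheduled worker absent independently, stays below a tolerance. The crew size, absence probability, and tolerance below are invented for illustration.

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k absences among n scheduled workers."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def reserve_needed(n_required, p_absent, tolerance):
    """Smallest reserve r so that P(absences > r among n_required + r
    scheduled workers) <= tolerance, i.e. a shortfall is unlikely."""
    r = 0
    while True:
        n = n_required + r
        shortfall = sum(binom_pmf(k, n, p_absent) for k in range(r + 1, n + 1))
        if shortfall <= tolerance:
            return r
        r += 1

# Invented example: 20 operatives needed, 10% absence rate,
# at most a 5% chance of running short on any given day.
reserve = reserve_needed(n_required=20, p_absent=0.1, tolerance=0.05)
```

The paper's simulation approach extends this logic to machine failures and graded manpower, where a closed-form binomial calculation no longer suffices.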
Abstract:
This paper studies the problem of selecting users in an online social network for targeted advertising so as to maximize the adoption of a given product. In previous work, two families of models have been considered to address this problem: direct targeting and network-based targeting. The former approach targets users with the highest propensity to adopt the product, while the latter approach targets users with the highest influence potential – that is, users whose adoption is most likely to be followed by subsequent adoptions by peers. This paper proposes a hybrid approach that combines a notion of propensity and a notion of influence into a single utility function. We show that targeting a fixed number of high-utility users results in more adoptions than targeting either highly influential users or users with high propensity.
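A minimal sketch of the hybrid idea: give each user a utility combining a propensity score and an influence score, then target the top-k by utility. The linear combination with weight lam, the user names, and the scores are invented for illustration and are not the paper's exact utility function.

```python
def utility(propensity, influence, lam=0.5):
    """Blend propensity and influence into a single per-user utility."""
    return {u: lam * propensity[u] + (1 - lam) * influence[u] for u in propensity}

# Invented scores in [0, 1]: ann adopts readily, bob is influential,
# eve is moderately strong on both.
propensity = {"ann": 0.9, "bob": 0.1, "eve": 0.6}
influence = {"ann": 0.2, "bob": 0.9, "eve": 0.7}

scores = utility(propensity, influence)
# Target the two users with the highest combined utility.
targets = sorted(scores, key=scores.get, reverse=True)[:2]
```

Under this blend, a balanced user like eve can outrank both the pure-propensity and the pure-influence favourite, which is the intuition behind the hybrid targeting result.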
Abstract:
Modeling of cultivar × trial effects for multienvironment trials (METs) within a mixed model framework is now common practice in many plant breeding programs. The factor analytic (FA) model is a parsimonious form used to approximate the fully unstructured form of the genetic variance-covariance matrix in the model for MET data. In this study, we demonstrate that the FA model is generally the model of best fit across a range of data sets taken from early generation trials in a breeding program. In addition, we demonstrate the superiority of the FA model in achieving the most common aim of METs, namely the selection of superior genotypes. Selection is achieved using best linear unbiased predictions (BLUPs) of cultivar effects at each environment, considered either individually or as a weighted average across environments. In practice, empirical BLUPs (E-BLUPs) of cultivar effects must be used instead of BLUPs since variance parameters in the model must be estimated rather than assumed known. While the optimal properties of minimum mean squared error of prediction (MSEP) and maximum correlation between true and predicted effects possessed by BLUPs do not hold for E-BLUPs, a simulation study shows that E-BLUPs perform well in terms of MSEP.
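As a deliberately simplified stand-in for the MET setting, the shrinkage behaviour of a BLUP can be shown for a single balanced trial with known variance components: the predicted genotype effect is the phenotypic mean deviation shrunk towards zero by a factor that grows with the genetic variance and the number of replicates. The numbers below are invented, and real E-BLUPs would estimate the variance components rather than assume them known.

```python
def blup_effect(mean_deviation, n_reps, var_g, var_e):
    """BLUP of a genotype effect in a balanced single trial with known
    variances: shrink the genotype's mean deviation towards zero by
    var_g / (var_g + var_e / n_reps)."""
    shrinkage = var_g / (var_g + var_e / n_reps)
    return shrinkage * mean_deviation

# Invented example: a genotype 2.0 units above the trial mean, 4 replicates,
# genetic variance 1.0 and error variance 4.0.
effect = blup_effect(mean_deviation=2.0, n_reps=4, var_g=1.0, var_e=4.0)
```

The FA model plays an analogous role across environments, supplying a parsimonious genetic variance-covariance matrix from which environment-specific or weighted-average predictions of cultivar effects are formed.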