901 results for Spatial analysis (Statistics) -- Mathematical models
Abstract:
Meta-analysis is a method to obtain a weighted average of results from various studies. In addition to pooling effect sizes, meta-analysis can also be used to estimate disease frequencies, such as incidence and prevalence. In this article we present methods for the meta-analysis of prevalence. We discuss the logit and double arcsine transformations to stabilise the variance. We note the special situation of multiple category prevalence, and propose solutions to the problems that arise. We describe the implementation of these methods in the MetaXL software, and present a simulation study and the example of multiple sclerosis from the Global Burden of Disease 2010 project. We conclude that the double arcsine transformation is preferred over the logit, and that the MetaXL implementation of multiple category prevalence is an improvement in the methodology of the meta-analysis of prevalence.
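As an illustration of the transformation discussed above, the following minimal sketch pools prevalence estimates on the Freeman-Tukey double arcsine scale with inverse-variance weights. The study counts are hypothetical, the pooling is fixed-effect only, and the back-transform uses the simple sin²(t/2) approximation rather than the exact inverse used by MetaXL:

```python
import numpy as np

def double_arcsine(x, n):
    # Freeman-Tukey transform of x events observed in n subjects
    return np.arcsin(np.sqrt(x / (n + 1.0))) + np.arcsin(np.sqrt((x + 1.0) / (n + 1.0)))

def pooled_prevalence(x, n):
    # fixed-effect inverse-variance pooling on the transformed scale
    x, n = np.asarray(x, float), np.asarray(n, float)
    t = double_arcsine(x, n)
    w = n + 0.5                      # weights = 1/variance, since var(t) ~ 1/(n + 0.5)
    t_pooled = np.sum(w * t) / np.sum(w)
    # crude back-transform; MetaXL uses Miller's exact inverse instead
    return np.sin(t_pooled / 2.0) ** 2

# hypothetical studies: x cases out of n subjects
print(pooled_prevalence(x=[10, 50, 7], n=[1000, 4000, 250]))
```

The approximate variance 1/(n + 0.5) is what makes the transform attractive for variance stabilisation: it depends only on the sample size, not on the estimated prevalence itself.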
Abstract:
One of the problems to be solved in attaining the full potential of hematopoietic stem cell (HSC) applications is the limited availability of the cells. Growing HSCs in a bioreactor offers an alternative solution to this problem. It also eliminates the labour-intensive process, and the contamination risk, involved in the periodic nutrient replenishment of traditional T-flask stem cell cultivation. In spite of this, the optimization of HSC cultivation in a bioreactor has barely been explored. This manuscript discusses the development of a mathematical model to describe the dynamics of nutrient distribution and cell concentration in an ex vivo HSC cultivation in a microchannel perfusion bioreactor. The model was further used to optimize the cultivation by proposing three alternative feeding strategies to prevent the occurrence of nutrient limitation in the bioreactor. The evaluation of these strategies (a periodic step increase in the inlet oxygen concentration, a periodic step increase in the media inflow, and feedback control of the media inflow) shows that they can successfully improve the cell yield of the bioreactor. In general, the developed model is useful for the design and optimization of bioreactor operation.
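A well-mixed ODE caricature of such a perfusion model can illustrate how a feeding strategy is evaluated. This sketch uses Monod growth kinetics and a periodic step increase in media inflow; all parameter values are hypothetical, and the spatial (microchannel) dimension of the actual model is ignored:

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters (hypothetical, not from the study)
mu_max, Ks = 0.04, 0.5     # max growth rate (1/h); Monod constant (mol/m^3)
Y = 1e8                    # cells produced per mol of substrate consumed
D_base, S_in = 0.1, 2.0    # base dilution rate (1/h); inlet substrate (mol/m^3)

def feed(t):
    # feeding strategy: periodic step increase in media inflow
    return D_base * (2.0 if (t % 24) < 6 else 1.0)

def rhs(t, y):
    X, S = y                              # cell density, substrate concentration
    mu = mu_max * S / (Ks + S)            # Monod growth kinetics
    # cells adhere in the microchannel, so no washout term for X
    return [mu * X, feed(t) * (S_in - S) - mu * X / Y]

sol = solve_ivp(rhs, (0, 168), [1e6, S_in], max_step=0.5)  # one week
print(f"final cell density: {sol.y[0, -1]:.3e} cells/m^3")
```

Comparing the final cell yield across candidate feeding schedules is, in outline, how the three strategies in the study are evaluated against nutrient limitation.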
Abstract:
This study aimed to investigate the spatial clustering and dynamic dispersion of dengue incidence in Queensland, Australia. We used Moran's I statistic to assess the spatial autocorrelation of reported dengue cases. Spatial empirical Bayes smoothing estimates were used to display the spatial distribution of dengue in postal areas throughout Queensland. Local indicators of spatial association (LISA) maps and logistic regression models were used to identify spatial clusters and examine the spatio-temporal patterns of the spread of dengue. The results indicate that the spatial distribution of dengue was clustered during each of the three periods of 1993–1996, 1997–2000 and 2001–2004. The high-incidence clusters of dengue were primarily concentrated in the north of Queensland and low-incidence clusters occurred in the south-east of Queensland. The study concludes that the geographical range of notified dengue cases has significantly expanded in Queensland over recent years.
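For readers unfamiliar with the statistic, a minimal sketch of global Moran's I is shown below. The four-area contiguity weights and incidence rates are hypothetical, and a real analysis (as in this study) would also assess significance, typically by permutation:

```python
import numpy as np

def morans_i(x, W):
    # global Moran's I of values x under spatial weights matrix W
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                 # deviations from the mean
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# toy example: 4 areas in a row, rook contiguity weights (hypothetical)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rates = [12.0, 10.0, 3.0, 2.0]       # hypothetical incidence rates
print(morans_i(rates, W))            # positive => spatial clustering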
Abstract:
Gulland's [Gulland, J.A., 1965. Estimation of mortality rates. Annex to Arctic Fisheries Working Group Report (meeting in Hamburg, January 1965). ICES. C.M. 1965, Doc. No. 3 (mimeographed)] virtual population analysis (VPA) is commonly used for studying the dynamics of harvested fish populations. However, it requires solving a nonlinear equation for the instantaneous rate of fishing mortality of the fish in a population. Pope [Pope, J.G., 1972. An investigation of the accuracy of Virtual Population Analysis using cohort analysis. ICNAF Res. Bull. 9, 65-74. Also available in D.H. Cushing (ed.) (1983), Key Papers on Fish Populations, p. 291-301, IRL Press, Oxford, 405 p.] eliminated this necessity in his cohort analysis by approximating its underlying age- and time-dependent population model. His approximation has since become one of the most commonly used age- and time-dependent fish population models in fisheries science. However, some of its properties are not well understood. For example, many assert that it describes the dynamics of a fish population from which the catch is taken instantaneously in the middle of the year. This assertion has never been proven, nor has its implied instantaneous rate of fishing mortality of the fish of a particular age at a particular time been examined, nor has its implied catch equation been derived from a general catch equation. In this paper, we prove this assertion, examine its implied instantaneous rate of fishing mortality of the fish of a particular age at a particular time, derive its implied catch equation from a general catch equation, and comment on how to structure an age- and time-dependent population model to ensure its internal consistency. This work shows that Gulland's (1965) virtual population analysis and Pope's (1972) cohort analysis lie at opposite ends of a continuous spectrum defined by a general model for a seasonally occurring fishery; that Pope's (1972) approximation implies an infinitely large instantaneous rate of fishing mortality of the fish of a particular age at a particular time in a fishing season of zero length; and that its implied catch equation has an undefined instantaneous rate of fishing mortality of the fish in a population, but a well-defined cumulative instantaneous rate of fishing mortality of the fish in the population. This work also highlights the need for a more careful treatment of the times of the start and end of a fishing season in fish population models.
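Pope's approximation replaces the nonlinear VPA equation with a closed-form back-calculation, N_a = N_{a+1} e^M + C_a e^{M/2}, which treats the whole year's catch as removed instantaneously at mid-year. A minimal sketch with hypothetical catches-at-age:

```python
import numpy as np

def pope_cohort(catch, M, N_terminal):
    # back-calculate numbers-at-age with Pope's approximation:
    #   N[a] = N[a+1]*exp(M) + C[a]*exp(M/2)
    A = len(catch)
    N = np.empty(A + 1)
    N[A] = N_terminal                # survivors after the last age
    for a in range(A - 1, -1, -1):
        N[a] = N[a + 1] * np.exp(M) + catch[a] * np.exp(M / 2.0)
    # implied fishing mortality from N[a+1] = N[a]*exp(-(M + F[a]))
    F = np.log(N[:-1] / N[1:]) - M
    return N, F

# hypothetical catches-at-age (thousands of fish) and terminal abundance
N, F = pope_cohort(catch=[120.0, 90.0, 40.0], M=0.2, N_terminal=50.0)
print(N, F)
```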
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
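The overfitting idea, deliberately fitting too many components under a sparse Dirichlet prior so that superfluous weights shrink towards zero, can be illustrated with scikit-learn's variational BayesianGaussianMixture. This is only an analogue of the approach, not the Zmix MCMC sampler with prior parallel tempering:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# simulated data: 3 true Gaussian components
x = np.concatenate([rng.normal(-4, 1.0, 300),
                    rng.normal(0, 0.5, 200),
                    rng.normal(5, 1.0, 300)]).reshape(-1, 1)

# deliberately overfitted mixture: 10 components; the sparse Dirichlet
# prior pushes the weights of superfluous components towards zero
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_distribution",
    weight_concentration_prior=0.01,
    max_iter=500, random_state=1).fit(x)

occupied = np.sum(bgm.weights_ > 0.01)   # components with non-trivial weight
print("estimated number of components:", occupied)
```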
Abstract:
Non-Technical Summary
Seafood CRC Project 2009/774: Harvest strategy evaluations and co-management for the Moreton Bay Trawl Fishery
Principal Investigator: Dr Tony Courtney, Principal Fisheries Biologist, Fisheries and Aquaculture, Agri-Science Queensland, Department of Agriculture, Fisheries and Forestry, Level B1, Ecosciences Precinct, Joe Baker St, Dutton Park, Queensland 4102. Email: tony.courtney@daff.qld.gov.au
Project objectives:
1. Review the literature and data (i.e., economic, biological and logbook) relevant to the Moreton Bay trawl fishery.
2. Identify and prioritise management objectives for the Moreton Bay trawl fishery, as identified by the trawl fishers.
3. Undertake an economic analysis of the Moreton Bay trawl fishery.
4. Quantify long-term changes to fishing power for the Moreton Bay trawl fishery.
5. Assess the priority harvest strategies identified in 2 (above). Present results to, and discuss results with, the Moreton Bay Seafood Industry Association (MBSIA), fishers and Fisheries Queensland.
Note: Additional, specific objectives for 2 (above) were developed by fishers and the MBSIA after commencement of the project. These are presented in detail in section 5 (below).
The project was an initiative of the MBSIA, primarily in response to falling profitability in the Moreton Bay prawn trawl fishery. The analyses were undertaken by a consortium of DAFF, CSIRO and University of Queensland researchers. This report adopted the Australian Standard Fish Names (http://www.fishnames.com.au/).
Trends in catch and effort
The Moreton Bay otter trawl fishery is a multispecies fishery, with the majority of the catch composed of Greasyback Prawns (Metapenaeus bennettae), Brown Tiger Prawns (Penaeus esculentus), Eastern King Prawns (Melicertus plebejus), squid (Uroteuthis spp., Sepioteuthis spp.), Banana Prawns (Fenneropenaeus merguiensis), Endeavour Prawns (Metapenaeus ensis, Metapenaeus endeavouri) and Moreton Bay bugs (Thenus parindicus). Other commercially important byproduct includes blue swimmer crabs (Portunus armatus), three-spot crabs (Portunus sanguinolentus), cuttlefish (Sepia spp.) and mantis shrimp (Oratosquilla spp.). Logbook catch and effort data show that the total annual reported catch of prawns from the Moreton Bay otter trawl fishery has declined from a maximum of 901 t in 1990 to 315 t in 2008. The number of active licensed vessels participating in the fishery has also declined, from 207 in 1991 to 57 in 2010. Similarly, fishing effort has fallen from a peak of 13,312 boat-days in 1999 to 3817 boat-days in 2008, a 71% reduction. The declines in catch and effort are largely attributed to reduced profitability in the fishery due to increased operational costs and depressed prawn prices. The low prawn prices appear to be attributable to Australian aquacultured prawns and imported aquacultured vannamei prawns displacing the markets for trawl-caught prawns, especially small species such as Greasyback Prawns, which traditionally dominated landings in Moreton Bay. In recent years, the relatively high Australian dollar has resulted in reduced exports of Australian wild-caught prawns. This has increased supply on the domestic market, which has also suppressed price increases. Since 2002, Brown Tiger Prawns have dominated annual reported landings in the Moreton Bay fishery. While total catch and effort in the bay have declined to historically low levels, the annual catch and catch rates of Brown Tiger Prawns have been at record highs in recent years.
This appears to be at least partially attributable to the tiger prawn stock having recovered from excessive effort in previous decades. The total annual value of the Moreton Bay trawl fishery catch, including byproduct, is about $5 million, of which Brown Tiger Prawns account for about $2 million. Eastern King Prawns make up about 10% of the catch and are mainly caught in the bay from October to December as they migrate to offshore waters outside the bay, where they contribute to a large mono-specific trawl fishery. Some of the Eastern King Prawns harvested in Moreton Bay may be growth overfished (i.e., caught below the size required to maximise yield or value), although the optimum size-at-capture was not determined in this study. Banana Prawns typically make up about 5% of the catch, but can exceed 20%, particularly following heavy rainfall.
Economic analysis of the fishery
From the economic survey, cash profits were, on average, positive for both fleet segments in both years of the survey. However, after the opportunity cost of capital and depreciation were taken into account, the residual owner-operator income was relatively low, and substantially lower than the average share of revenue paid to employed skippers. Consequently, owner-operators were earning less than the opportunity cost of their labour, suggesting that the fleets were economically unviable in the longer term. The M2 licensed fleet were, on average, earning similar boat cash profits to the T1/M1 fleet, although after the higher capital costs were accounted for, the T1/M1 boats were earning substantially lower returns to owner-operator labour. The mean technical efficiency for the fleet as a whole was estimated to be 0.67. That is, on average, the boats were catching only 67 per cent of what was possible given their level of inputs (hours fished and hull units). Almost one-quarter of observations had efficiency scores above 0.8, suggesting that a substantial proportion of the fleet is relatively efficient, but some are also relatively inefficient. Both fleets had similar efficiency distributions, with median technical efficiency scores of 0.71 and 0.67 for the M2 and T1/M1 boats respectively. These scores are reasonably consistent with other studies of prawn trawl fleets in Australia, although higher average efficiency scores were found in the NSW prawn trawl fleet. From the inefficiency model, several factors were found to significantly influence vessel efficiency. These included the number of years of experience as skipper, the number of generations that the skipper's family had been fishing and the number of years of schooling. Skippers with more schooling were significantly more efficient than skippers with lower levels of schooling, consistent with other studies. Skippers who had been fishing longer were, in fact, less efficient than newer skippers. However, this was mitigated in the case of skippers whose family had been involved in fishing for several generations, consistent with other studies and suggesting that skill was passed down through families over successive generations. Both the linear and log-linear regression models of total fishing effort against the marginal profit per hour performed reasonably well, explaining between 70 and 84 per cent of the variation in fishing effort. As the models had different dependent variables (one logged and the other not), explained variation is not a good basis for model choice.
A better comparator is the square root of the mean square error (SMSE) expressed as a percentage of the mean total effort. On this criterion, both models performed very similarly. The linear model suggests that each additional dollar of average profit per hour in the fishery increases total effort by around 26 hours each month. From the log-linear model, each percentage increase in profit per hour increases total fishing effort by 0.13 per cent. Both models indicate that economic performance is a key driver of fishing effort in the fishery. The effect of removing the boat-replacement policy is to increase individual vessel profitability, catch and effort, but the overall increase in catch is less than that removed by the boats that must exit the fishery. That is, the smaller fleet (in terms of boat numbers) is more profitable, but the overall catch is not expected to be greater than before. This assumes, however, that active boats are removed, and that these were also taking an average level of catch. If inactive boats are removed, then the catch of the remaining group as a whole could increase by between 14 and 17 per cent, depending on the degree to which costs are reduced with the new boats. This is still substantially lower than historical levels of catch by the fleet.
Fishing power analyses
An analysis of logbook data from 1988 to 2010, and survey information on fishing gear, was performed to estimate the long-term variation in the fleet's ability to catch prawns (known as fishing power) and to derive abundance estimates of the three most commercially important prawn species (i.e., Brown Tiger, Eastern King and Greasyback Prawns). Generalised linear models were used to explain the variation in catch as a function of effort (i.e., hours fished per day), vessel and gear characteristics, onboard technologies, population abundance and environmental factors. This analysis estimated that fishing power associated with Brown Tiger and Eastern King Prawns increased over the past 20 years by 10–30%, and declined by approximately 10% for Greasyback Prawns. The density of tiger prawns was estimated to have almost tripled, from around 0.5 kg per hectare in 1988 to 1.5 kg/ha in 2010. The density of Eastern King Prawns was estimated to have fluctuated between 1 and 2 kg per hectare over this time period, without any noticeable overall trend, while Greasyback Prawn densities were estimated to have fluctuated between 2 and 6 kg per hectare, also without any distinctive trend. A model of tiger prawn catches was developed to evaluate the impact of fishing on prawn survival rates in Moreton Bay. The model was fitted to logbook data using the maximum-likelihood method to provide estimates of the natural mortality rate (0.038–0.062 per week) and catchability, which can be defined as the proportion of the fished population that is removed by one unit of effort (in this case estimated to be (2.5 ± 0.4) × 10^-4 per boat-day; a minimal numerical sketch of this kind of depletion model is given at the end of this summary). This approach provided a method for industry and scientists to jointly develop a realistic model of the dynamics of the fishery. Several aspects need to be developed further to make this model acceptable to industry. Firstly, there is considerable evidence to suggest that temperature influences prawn catchability; this ecological effect should be incorporated before developing meaningful harvest strategies. Secondly, total effort has to be allocated between the species. Such allocation of effort could be included in the model by estimating several catchability coefficients.
Nevertheless, the work presented in this report is a stepping stone towards estimating essential fishery parameters and developing representative mathematical models required to evaluate harvest strategies. Developing a method that allowed an effective discussion between industry, management and scientists took longer than anticipated. As a result, harvest strategy evaluations were preliminary and only included the most valuable species in the fishery, Brown Tiger Prawns. Additional analyses and data collection, including information on catch composition from field sampling, migration rates and recruitment, would improve the modelling.
Harvest strategy evaluations
As the harvest strategy evaluations are preliminary, the following results should not be adopted for management purposes until more thorough evaluations are performed. The effects of closing the fishery for one calendar month on the annual catch and value of Brown Tiger Prawns were investigated. Each of the 12 months (i.e., January to December) was evaluated. The results were compared against historical records to determine the magnitude of gain or loss associated with the closure. Uncertainty regarding the trawl selectivity was addressed using two selectivity curves, one with a weight at 50% selection (S50%) of 7 g, based on research data, and a second with an S50% of 14 g, put forward by industry. In both cases, it was concluded that any monthly closure after February would not be beneficial to the industry. The magnitude of the benefit of closing the fishery in either January or February was sensitive to which mesh selectivity curve was assumed, with greater benefit achieved when the smaller selectivity curve (i.e., S50% = 7 g) was assumed. Using the smaller selectivity curve (S50% = 7 g), the expected increase in catch value was 10–20%, which equates to $200,000 to $400,000 annually, while the larger selectivity curve (S50% = 14 g) suggested catch value would be improved by 5–10%, or $100,000 to $200,000. The harvest strategy evaluations showed that greater benefits, of the order of 30–60% increases in the annual tiger prawn catch value, could have been obtained by closing the fishery early in the year when annual effort levels were high (i.e., > 10,000 boat-days). In recent years, as effort levels have declined (i.e., ~4000 boat-days annually), the expected benefits from such closures are more modest. In essence, temporal closures offer greater benefit when fishing mortality rates are high. A spatial analysis of Brown Tiger Prawn catch and effort was also undertaken to obtain a better understanding of the prawn population dynamics. This indicated that, to improve the profitability of the fishery, fishers could consider closing the fishery in the period from June to October, which is already a period of low profitability. This would protect the Brown Tiger Prawn spawning stock, increase catch rates of all species in the lucrative pre-Christmas period (November–December), and provide fishers with time to do vessel maintenance, arrange markets for the next season's harvest, and, if they wish, work at other jobs. The analysis found that the instantaneous rate of total mortality (Z) for the March–June period did not vary significantly over the last two decades.
As the Brown Tiger Prawn population in Moreton Bay has clearly increased over this time period, an interesting conclusion is that the instantaneous rate of natural mortality (M) must have increased, suggesting that tiger prawn natural mortality may be density-dependent at this time of year. Mortality rates of tiger prawns for June–October were found to have decreased over the last two decades, which has probably had a positive effect on spawning stocks in the October–November spawning period.
Abiotic effects on the prawns
The influence of air temperature, rainfall, freshwater flow, the Southern Oscillation Index (SOI) and lunar phase on the catch rates of the four main prawn species was investigated. The analyses were based on over 200,000 daily logbook catch records over 23 years (i.e., 1988–2010). Freshwater flow was more influential than rainfall and the SOI, and of the various sources of flow, the Brisbane River has the greatest volume and influence on Moreton Bay prawn catches. A number of time-lags were also considered. Flow in the month preceding catch (i.e., 30 days prior, Logflow1_30) and two months prior (31–60 days prior, Logflow31_60) had strong positive effects on Banana Prawn catch rates. Average air temperature in the preceding 4–6 months (Temp121_180) also had a large positive effect on Banana Prawn catch rates. Flow in the month immediately preceding catch (Logflow1_30) had a strong positive influence on Greasyback Prawn catch rates. Air temperature in the two months preceding catch (Temp1_60) had a large positive effect on Brown Tiger Prawn catch rates. No obvious or marked effects were detected for Eastern King Prawns, although, interestingly, catch rates declined with increasing air temperature 4–6 months prior to catch. As most Eastern King Prawn catches in Moreton Bay occur in October to December, the results suggest catch rates decline with increasing winter temperatures. In most cases, prawn catch rates declined with the waxing lunar phase (high luminance/full moon), and increased with the waning moon (low luminance/new moon). The SOI explained little additional variation in prawn catch rates (<2%), although its influence was higher for Banana Prawns. Extrapolating the findings of these analyses to long-term climate change effects should be done with caution. That said, the results are consistent with likely increases in abundance in the region for the two tropical species, Banana Prawns and Brown Tiger Prawns, as coastal temperatures rise. Conversely, declines in abundance could be expected for the two temperate species, Greasyback and Eastern King Prawns.
Corporate management structures
An examination of alternative governance systems was requested by the industry at one of the early meetings, particularly systems that may give fishers greater autonomy in decision making as well as help improve the marketing of their product. Consequently, a review of alternative management systems was undertaken, with a particular focus on the potential for self-management of small fisheries (small in terms of number of participants) and corporate management. The review looks at systems that have been implemented or proposed for other small fisheries internationally, with a particular focus on self-management as well as the potential benefits and challenges of corporate management. This review also highlighted particular opportunities for the Moreton Bay prawn fishery.
Corporate management differs from other co-management and even self-management arrangements in that 'ownership' of the fishery is devolved to a company in which fishers and government are shareholders. The company manages the fishery as well as coordinating marketing, to ensure that the best prices are received and that the catch taken meets the demands of the market. Coordinated harvesting will also result in increased profits, which are returned to fishers in the form of dividends. Corporate management offers many of the potential benefits of an individual quota system without formally implementing such a system. A corporate management model offers an advantage over a self-management model in that it can coordinate both marketing and management to take advantage of this unique geographical advantage. For such a system to be successful, the fishery needs to be relatively small and self-contained, small in this sense meaning the number of operators. The Moreton Bay prawn fishery satisfies these key conditions for a successful self-management and potentially corporate management system. The fishery is small both in terms of number of participants and geography. Unlike other fisheries that have progressed down the self-management route, the key market for the product from the Moreton Bay fishery is right at its doorstep. Corporate management also presents a number of challenges. First, it will require changes in the way fishers operate. In particular, the decision on when to fish and what to catch will be taken away from the individual and decided by the collective. Problems will develop if individuals do not join the corporation but continue to fish and market their own product separately. While this may seem an attractive option to fishers who believe they can do better independently, it is likely to be just a short-term advantage with an overall long-run cost to themselves as well as the rest of the industry. There are also a number of other areas that need further consideration, particularly in relation to the allocation of shares, including who should be allocated shares (e.g. just boat owners or also some employed skippers), and how harvesting activity is to be allocated by the corporation to the fishers. These are largely issues that cannot be answered without substantial consultation with those likely to be affected, and these groups cannot give the issues serious consideration until the point at which they are likely to become a reality. Given the current structure and complexity of the fishery, it is unlikely that such a management structure will be feasible in the short term. However, the fishery is a prime candidate for such a model, and development of such a management structure should be considered as an option for the longer term.
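As a rough illustration of the depletion model described in the fishing power analyses above, the sketch below steps a population week by week under natural mortality M and fishing mortality qE, using the report's orders of magnitude for M and q; the initial population size and effort series are hypothetical:

```python
import numpy as np

def weekly_depletion(N0, effort, M=0.05, q=2.5e-4):
    # within-season depletion: N[t+1] = N[t]*exp(-(M + q*E[t])),
    # with the Baranov catch share taken in each week t
    N, catches = N0, []
    for E in effort:
        F = q * E                          # weekly fishing mortality
        Z = M + F                          # total mortality that week
        catches.append(F / Z * N * (1.0 - np.exp(-Z)))
        N *= np.exp(-Z)
    return np.array(catches), N

# hypothetical season: 20 weeks at 150 boat-days of effort per week;
# M and q are in the ranges reported above (per week; per boat-day)
catches, N_end = weekly_depletion(N0=1e6, effort=[150.0] * 20)
print(f"season catch: {catches.sum():.0f}  survivors: {N_end:.0f}")
```

Fitting such a model to logbook catches by maximum likelihood is, in outline, how estimates of M and q of the kind quoted above are obtained.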
Abstract:
A healthy, transparent cornea depends upon the regulation of fluid, nutrient and oxygen transport through the tissue to sustain cell metabolism and other processes critical to normal functioning. This research considers the corneal geometry and investigates oxygen distribution using a two-dimensional Monod kinetic model, showing that previous studies make assumptions that lead to predictions of near-anoxic levels of oxygen tension in the limbal regions of the cornea. It also compares experimental spatial and temporal data with the predictions of novel mathematical models of distributed mitotic rates during corneal epithelial wound healing.
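The kind of calculation involved can be sketched in one dimension: steady-state oxygen diffusion with Monod (Michaelis-Menten) consumption, D c'' = Q c/(Km + c), solved as a boundary value problem across the tissue. All constants below are illustrative orders of magnitude, not values from the study, and the study's two-dimensional geometry is reduced to a single transect:

```python
import numpy as np
from scipy.integrate import solve_bvp

# illustrative constants (orders of magnitude only, not from the study)
D = 1e-9              # oxygen diffusivity in tissue, m^2/s
Q = 1e-4              # maximum volumetric consumption, mol/(m^3 s)
Km = 2.2e-3           # Monod (Michaelis) constant, mol/m^3
L = 5e-4              # tissue thickness, m
c0, cL = 0.28, 0.09   # boundary O2 concentrations, mol/m^3

def rhs(x, y):
    c, dc = y
    return np.vstack([dc, (Q / D) * c / (Km + c)])   # D c'' = Q c/(Km + c)

def bc(ya, yb):
    return np.array([ya[0] - c0, yb[0] - cL])        # fixed-concentration ends

x = np.linspace(0.0, L, 50)
y0 = np.vstack([np.linspace(c0, cL, 50), np.zeros(50)])  # initial guess
sol = solve_bvp(rhs, bc, x, y0)
print("minimum O2 concentration:", sol.y[0].min())
```

Checking where this minimum falls, and whether it approaches anoxia, is the one-dimensional analogue of the limbal-region question raised above.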
Abstract:
Identifying and quantifying urban growth, and knowing its rate and trends, would help regional planners provide infrastructure in an environmentally sound way. This requires analysis of spatial and temporal data, which helps in quantifying the trends of growth on a spatial scale. Emerging technologies such as Remote Sensing and Geographic Information Systems (GIS), along with the Global Positioning System (GPS), help in this regard: remote sensing aids the collection of temporal data and GIS supports spatial analysis. This paper focuses on the analysis of urban growth patterns, in the form of either radial or linear sprawl, along the Bangalore–Mysore highway. Various GIS base layers, such as built-up areas along the highway, the road network and village boundaries, were generated using collateral data such as the Survey of India toposheets. This analysis was complemented with the computation of Shannon's entropy, which helped in identifying the prevalent sprawl zones and rate of growth, in delineating potential sprawl locations, and in distinguishing regions of dispersed and compact growth. The study reveals that the Bangalore North and South taluks contributed most to the sprawl, with a 559% increase in built-up area over a period of 28 years and a high degree of dispersion. The Mysore and Srirangapatna region showed a 128% change in built-up area and a high potential for sprawl with slightly high dispersion. The degree of sprawl was found to be directly proportional to the distance from the cities.
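Shannon's entropy as used here measures how evenly built-up area is spread across zones. A minimal sketch with hypothetical built-up areas in five buffer zones follows; after normalising by log(n), values near 1 indicate dispersed sprawl and values near 0 indicate compact growth:

```python
import numpy as np

def relative_entropy(builtup):
    # Shannon's entropy of built-up area shares across n zones,
    # normalised by log(n): 0 = fully compact, 1 = fully dispersed
    p = np.asarray(builtup, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                     # 0*log(0) is taken as 0
    return -np.sum(p * np.log(p)) / np.log(len(builtup))

# hypothetical built-up area (ha) in 5 buffer zones along a highway
print(relative_entropy([120.0, 95.0, 80.0, 60.0, 55.0]))  # near 1: dispersed
print(relative_entropy([350.0, 30.0, 15.0, 10.0, 5.0]))   # lower: compact
```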
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2–4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
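A minimal sketch of the autoregressive probit idea: the latent index pi_t carries its own lag, so shocks persist across periods. The recession indicator and predictor below are simulated stand-ins, and a real application would add further predictors and proper inference:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, x):
    # autoregressive probit: pi_t = omega + alpha*pi_{t-1} + beta*x_{t-1},
    # with P(y_t = 1) = Phi(pi_t)
    omega, alpha, beta = theta
    pi, ll = 0.0, 0.0
    for t in range(1, len(y)):
        pi = omega + alpha * pi + beta * x[t - 1]
        p = np.clip(norm.cdf(pi), 1e-10, 1 - 1e-10)   # guard the logs
        ll += y[t] * np.log(p) + (1 - y[t]) * np.log(1 - p)
    return -ll

# simulated stand-in for a persistent recession indicator driven by a
# lagged financial predictor (hypothetical data, not the thesis data)
rng = np.random.default_rng(0)
T, x, s = 300, rng.normal(size=300), np.zeros(300)
for t in range(1, T):
    s[t] = 0.8 * s[t - 1] - x[t - 1] + 0.5 * rng.normal()
y = (s > 0.5).astype(int)

fit = minimize(neg_loglik, x0=[-0.5, 0.5, -0.5], args=(y, x), method="BFGS")
print(fit.x)   # omega, alpha (persistence), beta
```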
Abstract:
Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the utilization of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification; by expressing ideas in algebraic terms, the relationship between different concepts becomes clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though helping us to reason and being used as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and idiosyncrasies of individual organisms. What such models can tell us, however, both is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that, in principle, the patterns produced in a model could also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models strive for gives us confidence that their results could apply. This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions: the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, in that higher invasion speeds at a given point in time will lead to a reduction in speed. Two simple mathematical models show that analyses of this particular measure of invasion speed need not be evidence of regulation. In the context of dispersal evolution, two models acting as proofs of principle are presented. Parent-offspring conflict emerges when there are different evolutionary optima for the adaptive behavior of parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or not, as opposed to randomizing the decision for each seed, can prevail. We also present a model of the evolution of bet-hedging strategies: evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because they have a reduced variance in fitness coupled with a reduction in mean fitness, and fitness is of a multiplicative nature across generations, and therefore sensitive to variability. This model is used for conceptual clarification. By developing a population genetics model with uncertain fitness, and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype,
we arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies: conservative vs diversifying, and within- vs between-generation bet-hedging. In addition, the model shows that these divisions are in fact false dichotomies.
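The multiplicative-fitness argument for bet-hedging can be made concrete with a short simulation: long-run growth is governed by the geometric rather than the arithmetic mean of fitness. The fitness values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000                          # generations
good = rng.random(T) < 0.5        # equal chance of good and bad years

# per-generation fitness (hypothetical): the specialist has the higher
# arithmetic mean (1.10), the bet-hedger the lower variance
w_specialist = np.where(good, 1.70, 0.50)
w_hedger     = np.where(good, 1.15, 0.85)

for name, w in [("specialist", w_specialist), ("bet-hedger", w_hedger)]:
    print(name,
          "arithmetic mean:", round(w.mean(), 3),
          "geometric mean:", round(np.exp(np.log(w).mean()), 3))
# growth across generations multiplies, so the geometric mean decides:
# the bet-hedger (~0.99) outgrows the specialist (~0.92) in the long run
```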
Abstract:
Evaporative fraction (EF) is a measure of the amount of available energy at the earth's surface that is partitioned into latent heat flux. Currently operational thermal sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on satellite platforms, provide data only at 1000 m resolution, which constrains the spatial resolution of EF estimates. A simple model (disaggregation of evaporative fraction, DEFrac), based on the observed relationship between EF and the normalized difference vegetation index (NDVI), is proposed to spatially disaggregate EF. The DEFrac model was tested with EF estimated from the triangle method using 113 clear-sky data sets from the MODIS sensors aboard the Terra and Aqua satellites. Validation was done with the Bowen ratio energy balance method using data at four micrometeorological tower sites with different land cover conditions across varied agro-climatic zones of India. The root-mean-square error (RMSE) of EF estimated at 1000 m resolution using the triangle method was 0.09 for all four sites put together. The RMSE of DEFrac-disaggregated EF was 0.09 at 250 m resolution. Two input disaggregation models were also tried, with thermal data sharpened using the thermal sharpening models DisTrad and TsHARP; the RMSE of disaggregated EF was 0.14 for both input disaggregation models at 250 m resolution. Moreover, spatial analysis of the disaggregation was performed using Landsat-7 Enhanced Thematic Mapper Plus (ETM+) data over four grids in India for contrasting seasons. It was observed that the DEFrac model performed better than the input disaggregation models under cropped conditions, while they were marginally similar under non-cropped conditions.
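In outline, DEFrac-style disaggregation regresses coarse-resolution EF on coarse-resolution NDVI and applies the fitted relation to fine-resolution NDVI. The sketch below assumes a simple linear EF-NDVI relation and synthetic fields; the published DEFrac formulation and its triangle-method inputs are more involved:

```python
import numpy as np

def defrac_sketch(ef_coarse, ndvi_coarse, ndvi_fine):
    # fit EF ~ NDVI at coarse resolution (linear form assumed here;
    # the published DEFrac relationship may differ), apply at fine scale
    a, b = np.polyfit(ndvi_coarse.ravel(), ef_coarse.ravel(), 1)
    return np.clip(a * ndvi_fine + b, 0.0, 1.0)   # EF is bounded in [0, 1]

# hypothetical 1000 m EF/NDVI fields (4x4) and a 250 m NDVI field (16x16)
rng = np.random.default_rng(3)
ndvi_1km = rng.uniform(0.2, 0.8, (4, 4))
ef_1km = 0.9 * ndvi_1km + 0.05 + rng.normal(0, 0.02, (4, 4))
ndvi_250m = rng.uniform(0.2, 0.8, (16, 16))
print(defrac_sketch(ef_1km, ndvi_1km, ndvi_250m).round(2))
```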
Abstract:
Guided by experience and the theoretical development of hydrobiology, the main aim of water quality control can be considered to be the establishment of rates of the self-purification process of water bodies that are capable of maintaining communities in a state of dynamic balance without changing the integrity of the ecosystem. Hence, general approaches in the elaboration of methods for hydrobiological control are based on the following principles: (a) the balance of matter and energy in water bodies; (b) the integrity of the ecosystem structure and of its separate components at all levels. Ecosystem analysis makes it possible to reveal the whole set of factors which determine the anthropogenic evolution of a water body. This is necessary for the study of long-term changes in water bodies. The principles of ecosystem analysis of water bodies, together with the creation of their mathematical models, are important because, in future, with the transition of water-demanding production to closed cycles of water supply, changes in water bodies will arise mainly through the influence of 'diffuse' pollution (from the atmosphere, from use in transport, etc.).
Abstract:
This CD contains summary data of bottlenose dolphins stranded in South Carolina using a Geographical Information System (GIS), and contains two published manuscripts as .pdf files. The intent of this CD is to provide data on bottlenose dolphin strandings in South Carolina to marine mammal researchers and managers. This CD is an accumulation of 14 years of stranding data collected through the collaborations of the National Ocean Service, Center for Coastal Environmental Health and Biomolecular Research (CCEHBR), the South Carolina Department of Natural Resources, and the numerous volunteers and veterinarians who comprised the South Carolina Marine Mammal Stranding Network. Spatial and temporal information can be visually represented on maps using GIS. For this CD, maps were created to show relationships of stranding densities with land use, human population density, human interaction with dolphins, geographical regions with high numbers of live strandings, and seasonal changes. Point maps were also created to show individual strandings within South Carolina. In summary, spatial analysis revealed higher densities of bottlenose dolphin strandings in Charleston and Beaufort Counties, which consist of urban land with agricultural input. This trend was positively correlated with the higher human population levels in these coastal counties as compared with other coastal counties. However, spatial analysis revealed that certain areas within a county may have low human population levels but high stranding density, suggesting that the level of effort to respond to strandings is not necessarily positively correlated with the density of strandings in South Carolina. Temporal analysis revealed a significantly higher density of bottlenose dolphin strandings in the northern portion of the State in the fall, mostly due to an increase in neonate strandings. On a finer geographic scale, seasonal stranding densities may fluctuate depending on the region of interest. Charleston Harbor had the highest density of live bottlenose dolphin strandings compared to the rest of the State. This was due in large part to the number of live dolphin entanglements in the crab pot fishery, the largest source of fishery-related mortality for bottlenose dolphins in South Carolina (Burdett and McFee 2004). Spatial density calculations also revealed that Charleston and Beaufort Counties accounted for the majority of dolphins that were involved with human activities.
Abstract:
Iteration is unavoidable in the design process and should be incorporated when planning and managing projects in order to minimize surprises and reduce schedule distortions. However, planning and managing iteration is challenging because the relationships between its causes and effects are complex. Most approaches which use mathematical models to analyze the impact of iteration on the design process focus on a relatively small number of its causes and effects. Therefore, insights derived from these analytical models may not be robust under a broader consideration of potential influencing factors. In this article, we synthesize an explanatory framework which describes the network of causes and effects of iteration identified from the literature, and introduce an analytic approach which combines a task network modeling approach with System Dynamics simulation. Our approach models the network of causes and effects of iteration alongside the process architecture which is required to analyze the impact of iteration on design process performance. We show how this allows managers to assess the impact of changes to process architecture and to management levers which influence iterative behavior, accounting for the fact that these changes can occur simultaneously and can accumulate in non-linear ways. We also discuss how the insights resulting from this analysis can be visualized for easier consumption by project participants not familiar with simulation methods.
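The flavour of such a System Dynamics treatment of iteration can be seen in a minimal rework-cycle sketch: completed work containing undiscovered errors later re-enters the backlog, so the effect of an error rate accumulates non-linearly over the project. The model structure and all parameter values below are illustrative, not the authors':

```python
def rework_cycle(weeks=100, staff=10.0, productivity=1.0,
                 error_rate=0.3, discovery_delay=4.0, scope=500.0):
    # minimal System-Dynamics-style rework cycle (a sketch, not the
    # paper's model): flawed tasks sit as undiscovered rework until
    # they surface and re-enter the backlog to be redone
    backlog, done, undiscovered = scope, 0.0, 0.0
    for _ in range(weeks):
        rate = min(staff * productivity, backlog)    # tasks finished per week
        flawed = error_rate * rate                   # errors made this week
        discovered = undiscovered / discovery_delay  # rework surfacing
        backlog += discovered - rate
        undiscovered += flawed - discovered
        done += rate - flawed                        # correct work only
    return done, backlog, undiscovered

print(rework_cycle())  # with a 30% error rate the project overruns the
                       # naive scope/throughput estimate of 50 weeks
```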
Abstract:
This paper discusses road damage caused by heavy commercial vehicles. Chapter 1 presents some important terminology and a brief historical review of road construction and vehicle-road interaction, from ancient times to the present day. The main types of vehicle-generated road damage, and the methods used by pavement engineers to analyze them, are discussed in Chapter 2. Attention is also given to the main features of the response of road surfaces to vehicle loads, and to mathematical models that have been developed to predict road response. Chapter 3 reviews the effects on road damage of vehicle features which can be studied without consideration of vehicle dynamics. These include gross vehicle weight, axle and tire configurations, tire contact conditions and static load sharing in axle group suspensions. The dynamic tire forces generated by heavy vehicles are examined in Chapter 4. The discussion includes their simulation and measurement, their principal characteristics, the effects of tire and suspension design on dynamic forces, and the potential benefits of using advanced suspensions to minimize dynamic tire forces. Chapter 5 discusses methods for estimating the effects of dynamic tire forces on road damage. The two main approaches are either to examine the statistics of the forces themselves, or to calculate the response of a pavement model to the forces and then the resulting wear using a material damage model. The issues involved in assessing vehicles for 'road friendliness' are discussed in Chapter 6. Possible assessment methods include measuring strains in an instrumented pavement traversed by the vehicle, measuring dynamic tire forces, or measuring vehicle parameters such as the 'natural frequency' and 'damping ratio'. Each of these measurements involves different assumptions and analysis methods for converting the results into some measure of road damage. Chapter 7 includes a summary of the main conclusions of the paper and recommendations for tire and suspension design, road design and construction, and vehicle regulations.