994 results for 019900 OTHER MATHEMATICAL SCIENCES
Abstract:
1. Biodiversity, water quality and ecosystem processes in streams are known to be influenced by the terrestrial landscape over a range of spatial and temporal scales. Lumped attributes (i.e. per cent land use) are often used to characterise the condition of the catchment; however, they are not spatially explicit and do not account for the disproportionate influence of land located near the stream or connected by overland flow. 2. We compared seven landscape representation metrics to determine whether accounting for the spatial proximity and hydrological effects of land use explains additional variability in indicators of stream ecosystem health. The landscape metrics included the following: a lumped metric, four inverse-distance-weighted (IDW) metrics based on distance to the stream or survey site and two modified IDW metrics that also accounted for the level of hydrologic activity (HA-IDW). Ecosystem health data were obtained from the Ecological Health Monitoring Programme in Southeast Queensland, Australia, and included measures of fish, invertebrates, physicochemistry and nutrients collected during two seasons over 4 years. Linear models were fitted to the stream indicators and landscape metrics, by season, and compared using an information-theoretic approach. 3. Although no single metric was most suitable for modelling all stream indicators, lumped metrics rarely performed as well as other metric types. Metrics based on proximity to the stream (IDW and HA-IDW) were more suitable for modelling fish indicators, while the HA-IDW metric based on proximity to the survey site generally outperformed others for invertebrates, irrespective of season. There was consistent support for metrics based on proximity to the survey site (IDW or HA-IDW) for all physicochemical indicators during the dry season, while a HA-IDW metric based on proximity to the stream was suitable for five of the six physicochemical indicators in the post-wet season.
Only one nutrient indicator was tested and results showed that catchment area had a significant effect on the relationship between land use metrics and algal stable isotope ratios in both seasons. 4. Spatially explicit methods of landscape representation can clearly improve the predictive ability of many empirical models currently used to study the relationship between landscape, habitat and stream condition. A comparison of different metrics may provide clues about causal pathways and mechanistic processes behind correlative relationships and could be used to target restoration efforts strategically.
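As a concrete illustration of the difference between a lumped metric and an inverse-distance-weighted one, the sketch below computes both on a synthetic raster catchment. It is a minimal sketch: the grid, the target land-use class and the 1/(1+d) distance-decay weighting are illustrative assumptions, not the weighting functions used in the study.

```python
import numpy as np

def lumped_metric(landuse, target=1):
    """Per cent of catchment cells in the target land-use class."""
    return 100.0 * np.mean(landuse == target)

def idw_metric(landuse, site, target=1):
    """Per cent land use, with each cell weighted by proximity to the site."""
    rows, cols = np.indices(landuse.shape)
    d = np.hypot(rows - site[0], cols - site[1])  # Euclidean cell distance
    w = 1.0 / (1.0 + d)                           # nearer cells weigh more
    return 100.0 * np.sum(w * (landuse == target)) / np.sum(w)

# Toy catchment: land-use class 1 concentrated near the survey site at (0, 0)
landuse = np.zeros((10, 10), dtype=int)
landuse[:3, :3] = 1
lumped = lumped_metric(landuse)         # 9% of cells, regardless of location
weighted = idw_metric(landuse, (0, 0))  # much higher: class 1 is near the site
print(lumped, weighted)
```

The lumped value is identical wherever the nine class-1 cells sit; the IDW value rises when they sit near the survey site, which is exactly the spatial signal the lumped metric discards.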
Abstract:
Fractional differential equations have been increasingly used as a powerful tool to model the non-locality and spatial heterogeneity inherent in many real-world problems. However, a constant challenge faced by researchers in this area is the high computational expense of obtaining numerical solutions of these fractional models, owing to the non-local nature of fractional derivatives. In this paper, we introduce a finite volume scheme with a preconditioned Lanczos method as an attractive and high-efficiency approach for solving two-dimensional space-fractional reaction–diffusion equations. The computational heart of this approach is the efficient computation of a matrix-function-vector product f(A)b, where A is the matrix representation of the Laplacian obtained from the finite volume method and is non-symmetric. A key aspect of our proposed approach is that the popular Lanczos method for symmetric matrices is applied to this non-symmetric problem after a suitable transformation. Furthermore, the convergence of the Lanczos method is greatly improved by incorporating a preconditioner. Our approach is showcased by solving the fractional Fisher equation, including a validation of the solution and an analysis of the behaviour of the model.
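The core kernel described above, approximating a matrix-function-vector product f(A)b from a Krylov subspace, can be sketched as follows for a symmetric A (the paper's transformation of the non-symmetric finite volume matrix and its preconditioner are omitted here). The choice f = expm is an illustrative stand-in for the matrix function arising in the scheme.

```python
import numpy as np
from scipy.linalg import expm  # f = matrix exponential as a stand-in

def lanczos_fAb(A, b, m, f=expm):
    """Approximate f(A)b for symmetric A via an m-step Lanczos recurrence.

    Builds an orthonormal basis V of the Krylov space K_m(A, b) and the
    tridiagonal T = V^T A V, then uses f(A)b ~ ||b|| * V * f(T) * e1.
    """
    n = b.size
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(b) * (V @ (f(T) @ e1))

# Small dense symmetric test problem: compare against the direct computation
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = (M + M.T) / 10
b = rng.standard_normal(30)
exact = expm(A) @ b
approx = lanczos_fAb(A, b, m=20)
print(np.linalg.norm(exact - approx))  # small residual
```

The payoff is that only matrix-vector products with A are needed, and f is applied to the small m-by-m tridiagonal T rather than to the large matrix, which is what makes the approach attractive for the dense matrices produced by fractional operators.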
Abstract:
Background Adolescent Idiopathic Scoliosis is the most common type of spinal deformity, and its aetiology remains unclear. Studies suggest that gravitational forces in the standing position play an important role in scoliosis progression; therefore, anthropometric data are required to develop biomechanical models of the deformity. Few studies have analysed the trunk by vertebral level, and none have performed such investigations of the scoliotic trunk. The aim of this study was to determine the centroid, thickness, volume and estimated mass for sections of the trunk in Adolescent Idiopathic Scoliosis patients. Methods Existing low-dose Computed Tomography scans were used to estimate vertebral level-by-level torso masses for 20 female Adolescent Idiopathic Scoliosis patients. ImageJ processing software was used to analyse the Computed Tomography images and enable estimation of the segmental torso mass corresponding to each vertebral level. Findings The patients' mean age was 15.0 (SD 2.7) years, with a mean major Cobb angle of 52° (SD 5.9) and mean patient weight of 58.2 (SD 11.6) kg. The torso segment mass corresponding to each vertebral level increased by 150% from 0.6 kg at T1 to 1.5 kg at L5. Similarly, the segmental thickness corresponding to each vertebral level from T1-L5 increased inferiorly, from a mean 18.5 (SD 2.2) mm at T1 to 32.8 (SD 3.4) mm at L5. The mean total trunk mass, as a percentage of total body mass, was 27.8 (SD 0.5)%, which was close to values reported in previous literature. Interpretation This study provides new anthropometric reference data on segmental (vertebral level-by-level) torso mass in Adolescent Idiopathic Scoliosis patients, useful for biomechanical models of scoliosis progression and treatment.
Abstract:
We introduce a function which measures the number of distinct ways in which a number can be expressed as the sum of Fibonacci numbers. Using a binary table and other devices, we explore the values that this function can take and reveal some interesting patterns. The article shows how standard spreadsheet functionalities make it possible to reveal quite striking patterns in data, and is intended to be used in the classroom.
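The counting function can be explored in a few lines of code as well as in a spreadsheet. The sketch below counts the distinct representations of n by either using or skipping the largest available Fibonacci number; the convention assumed here (Fibonacci numbers 1, 2, 3, 5, 8, ..., each usable at most once) is one common choice, and the article's convention may differ.

```python
def fibs_up_to(n):
    """Fibonacci numbers 1, 2, 3, 5, 8, ... not exceeding n."""
    fs = [1, 2]
    while fs[-1] + fs[-2] <= n:
        fs.append(fs[-1] + fs[-2])
    return fs

def count_representations(n, fs=None):
    """Number of ways to write n as a sum of distinct Fibonacci numbers."""
    if fs is None:
        fs = fibs_up_to(n)
    if n == 0:
        return 1
    if not fs or n < 0:
        return 0
    # Either use the largest remaining Fibonacci number or skip it
    return (count_representations(n - fs[-1], fs[:-1]) +
            count_representations(n, fs[:-1]))

print([count_representations(n) for n in range(1, 13)])
# e.g. 11 = 3+8 = 1+2+8 = 1+2+3+5, so the count for 11 is 3
```

Every positive integer has at least one representation (Zeckendorf's theorem guarantees it), and the irregular rise and fall of the counts is the kind of pattern the article explores.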
Abstract:
We discuss algorithms for combining sequential prediction strategies, a task which can be viewed as a natural generalisation of the concept of universal coding. We describe a graphical language based on Hidden Markov Models for defining prediction strategies, and we provide both existing and new models as examples. The models include efficient, parameterless models for switching between the input strategies over time, including a model for the case where switches tend to occur in clusters, and finally a new model for the scenario where the prediction strategies have a known relationship, and where jumps are typically between strongly related ones. This last model is relevant for coding time series data where parameter drift is expected. As theoretical contributions we introduce an interpolation construction that is useful in the development and analysis of new algorithms, and we establish a new sophisticated lemma for analysing the individual sequence regret of parameterised models.
Abstract:
Follow-the-Leader (FTL) is an intuitive sequential prediction strategy that guarantees constant regret in the stochastic setting, but has poor performance for worst-case data. Other hedging strategies have better worst-case guarantees but may perform much worse than FTL if the data are not maximally adversarial. We introduce the FlipFlop algorithm, which is the first method that provably combines the best of both worlds. As a stepping stone for our analysis, we develop AdaHedge, which is a new way of dynamically tuning the learning rate in Hedge without using the doubling trick. AdaHedge refines a method by Cesa-Bianchi, Mansour, and Stoltz (2007), yielding improved worst-case guarantees. By interleaving AdaHedge and FTL, FlipFlop achieves regret within a constant factor of the FTL regret, without sacrificing AdaHedge’s worst-case guarantees. AdaHedge and FlipFlop do not need to know the range of the losses in advance; moreover, unlike earlier methods, both have the intuitive property that the issued weights are invariant under rescaling and translation of the losses. The losses are also allowed to be negative, in which case they may be interpreted as gains.
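For readers unfamiliar with Hedge, the sketch below shows the basic exponential-weights update that AdaHedge refines. AdaHedge's contribution is to tune the learning rate eta online from the cumulative mixability gap; here eta is held fixed purely for clarity, so this is a sketch of the baseline, not of AdaHedge itself.

```python
import numpy as np

def hedge_weights(losses, eta):
    """Expert weights after a matrix of per-round losses (rounds x experts).

    Each expert's weight is proportional to exp(-eta * cumulative loss), so
    experts with small cumulative loss dominate the mixture.
    """
    cum = np.cumsum(losses, axis=0)[-1]   # cumulative loss of each expert
    w = np.exp(-eta * cum)
    return w / w.sum()

# Two experts over 10 rounds: expert 0 is consistently better
losses = np.array([[0.1, 0.9]] * 10)
w = hedge_weights(losses, eta=1.0)
print(w)   # nearly all weight on expert 0
```

Note that with a fixed eta these weights are not invariant under rescaling or translation of the losses; achieving that invariance is precisely one of the properties the abstract claims for AdaHedge and FlipFlop.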
Abstract:
In a standard overlapping generations growth model, with a fixed amount of land and endogenous fertility, the competitive economy converges to a steady state with a zero population growth rate and positive consumption per capita. The Malthusian hypothesis is interpreted as a positive statement about the relationship between population growth and consumption per capita, when production exhibits diminishing returns to labor and there is a fixed amount of land essential for production. Even when individuals care only about the number of their children and not about their children's welfare, the equilibrium is such that they eventually would choose to have only one child for each adult. Hence, if Malthus's "positive check" on population is the result of the response of optimizing agents to competitively determined prices, Malthus's pessimistic conjecture is not necessarily true, even though his other assumptions hold.
Abstract:
Objective: To examine the effects of personal and community characteristics, specifically race and rurality, on lengths of state psychiatric hospital and community stays using maximum likelihood survival analysis, with special emphasis on change over a ten-year period. Data Sources: We used the administrative data of the Virginia Department of Mental Health, Mental Retardation, and Substance Abuse Services (DMHMRSAS) from 1982-1991 and the Area Resources File (ARF). Given these two sources, we constructed a history file for each individual who entered the state psychiatric system over the ten-year period. Histories included demographic, treatment, and community characteristics. Study Design: We used a longitudinal, population-based design with maximum likelihood estimation of survival models. We presented a random effects model with unobserved heterogeneity that was independent of observed covariates. The key dependent variables were length of inpatient stay and subsequent length of community stay. Explanatory variables measured personal, diagnostic, and community characteristics, as well as controls for calendar time. Data Collection: This study used secondary, administrative, and health planning data. Principal Findings: African-American clients leave the community more quickly than whites. After controlling for other characteristics, however, race does not affect hospital length of stay. Rurality does not affect length of community stays once other personal and community characteristics are controlled for. However, people from rural areas have longer hospital stays even after controlling for personal and community characteristics. The effects of time are significantly smaller than expected. Diagnostic composition effects and a decrease in the rate of first inpatient admissions explain part of this reduced impact of time.
We also find strong evidence for the existence of unobserved heterogeneity in both types of stays and adjust for this in our final models. Conclusions: Our results show that information on client characteristics available from inpatient stay records is useful in predicting not only the length of inpatient stay but also the length of the subsequent community stay. This information can be used to target increased discharge planning for those at risk of more rapid readmission to inpatient care. Correlation across observed and unobserved factors affecting length of stay has significant effects on the measurement of relationships between individual factors and lengths of stay. Thus, it is important to control for both observed and unobserved factors in estimation.
Abstract:
A new test of hypothesis for classifying stationary time series based on the bias-adjusted estimators of the fitted autoregressive model is proposed. It is shown theoretically that the proposed test has desirable properties. Simulation results show that when time series are short, the size and power estimates of the proposed test are reasonably good, and thus this test is reliable in discriminating between short-length time series. As the length of the time series increases, the performance of the proposed test improves, but the benefit of bias-adjustment reduces. The proposed hypothesis test is applied to two real data sets: the annual real GDP per capita of six European countries, and quarterly real GDP per capita of five European countries. The application results demonstrate that the proposed test displays reasonably good performance in classifying relatively short time series.
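The bias adjustment underlying the test can be illustrated on the simplest case, an AR(1) model. The sketch below uses Kendall's first-order bias approximation for the least-squares estimator, bias(phi-hat) ~ -(1 + 3*phi)/n; the paper's test statistic is built from bias-adjusted estimates of this kind, though its exact construction differs.

```python
import numpy as np

def ar1_lse(x):
    """Least-squares estimate of the AR(1) coefficient (mean-corrected)."""
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])

def ar1_bias_adjusted(x):
    """Kendall-style first-order adjustment: add back (1 + 3*phi)/n."""
    phi = ar1_lse(x)
    return phi + (1 + 3 * phi) / len(x)

# Monte Carlo check on short series (n = 50), where the bias matters most
rng = np.random.default_rng(1)
phi_true, n = 0.6, 50
raw, adj = [], []
for _ in range(2000):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi_true * x[t - 1] + rng.standard_normal()
    raw.append(ar1_lse(x))
    adj.append(ar1_bias_adjusted(x))
print(np.mean(raw), np.mean(adj))  # adjusted mean is closer to 0.6
```

The raw estimator underestimates phi noticeably at n = 50, and the gap shrinks as n grows, mirroring the abstract's observation that the benefit of bias adjustment fades for longer series.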
Abstract:
Time series classification has been extensively explored in many fields of study. Most methods are based on the historical or current information extracted from data. However, if interest is in a specific future time period, methods that directly relate to forecasts of time series are much more appropriate. An approach to time series classification is proposed based on a polarization measure of forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, and a stationarity correction is considered when necessary. Kernel estimators are then employed to approximate forecast densities, and discrepancies of forecast densities of pairs of time series are estimated by a polarization measure, which evaluates the extent to which two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct the supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
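One simple overlap-based discrepancy in the spirit of a polarization measure is the overlap coefficient, the integral of min(f, g) for two densities f and g. The sketch below estimates it from kernel density estimates of two samples; it is an illustrative stand-in, not the paper's exact measure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_overlap(a, b, grid_size=512):
    """Overlap coefficient int min(f, g) dx of two kernel density estimates,
    approximated by a Riemann sum on a shared grid."""
    fa, fb = gaussian_kde(a), gaussian_kde(b)
    x = np.linspace(min(a.min(), b.min()) - 1,
                    max(a.max(), b.max()) + 1, grid_size)
    return np.minimum(fa(x), fb(x)).sum() * (x[1] - x[0])

# Stand-ins for forecast replicates of two pairs of series
rng = np.random.default_rng(0)
near = density_overlap(rng.normal(0.0, 1, 500), rng.normal(0.2, 1, 500))
far = density_overlap(rng.normal(0.0, 1, 500), rng.normal(5.0, 1, 500))
print(near, far)  # similar forecasts overlap far more than distant ones
```

The coefficient lies between 0 (disjoint forecast densities) and 1 (identical ones), so thresholding or clustering on such pairwise values gives a natural route to forecast-based classification.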
Abstract:
The purpose of the book is to use Delphi as a vehicle to introduce some fundamental algorithms and to illustrate several mathematical and problem-solving techniques. This book is therefore intended to be more of a reference for problem-solving, with the solutions expressed in Delphi. It introduces a somewhat eclectic collection of material, much of which will not be found in a typical book on Pascal or Delphi. Many of the topics have been used by the author in various subjects at Bond University, Australia, over a period of about ten years, from 1993 to 2003. Much of the work was connected with a data structures subject (a second programming course) conducted variously in MODULA-2, Oberon and Delphi; however, there is considerable other, more recent material, e.g., a chapter on Sudoku.
Abstract:
In this paper we introduce a new technique to obtain the slow-motion dynamics in nonequilibrium and singularly perturbed problems characterized by multiple scales. Our method is based on a straightforward asymptotic reduction of the order of the governing differential equation and leads to amplitude equations that describe the slowly varying envelope of a uniformly valid asymptotic expansion. Because of its relation to the Renormalization Group, this may constitute a simpler, and in certain cases more general, approach to deriving asymptotic expansions than mainstream methods such as the method of Multiple Scales or Matched Asymptotic Expansions. We illustrate our method with a number of singularly perturbed problems for ordinary and partial differential equations and recover certain results from the literature as special cases.
Abstract:
With nine examples, we seek to illustrate the utility of the Renormalization Group approach as a unification of other asymptotic and perturbation methods.
Abstract:
In this article we obtain closed-form solutions for the combined inflation and axial shear of an elastic tube in respect of the compressible isotropic elastic material introduced by Levinson and Burgess. Several other boundary-value problems are also examined, including the bending of a rectangular block and the straightening of a cylindrical sector, both coupled with stretching and shearing, and an axially varying twist deformation. Some of the solutions appear in closed form; others are expressible in terms of elliptic functions.
Abstract:
Until recently, the low-abundance (LA) range of the serum proteome was an unexplored reservoir of diagnostic information. Today it is increasingly appreciated that a diagnostic goldmine of LA biomarkers resides in the blood stream in complexed association with more abundant higher molecular weight carrier proteins such as albumin and immunoglobulins. As we now look to the possibility of harvesting these LA biomarkers more efficiently through engineered nano-scale particles, mathematical approaches are needed in order to reveal the mechanisms by which blood carrier proteins act as molecular 'mops' for LA diagnostic cargo, and the functional relationships between bound LA biomarker concentrations and other variables of interest such as biomarker intravasation and clearance rates and protein half-lives in the bloodstream. Here we show, by simple mathematical modeling, how the relative abundance of large carrier proteins and their longer half-lives in the bloodstream work together to amplify the total blood concentration of these tiny biomarkers. The analysis further suggests that alterations in the production of biomarkers lead to gradual rather than immediate changes in biomarker levels in the blood circulation. The model analysis also points to the characteristics of artificial nano-particles that would render them more efficient harvesters of tumor biomarkers in the circulation, opening up possibilities for the early detection of curable disease, rather than simply better detection of advanced disease.
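The amplification mechanism described above can be reproduced with a toy two-pool linear model: biomarker enters the blood at a constant rate, is cleared quickly while free, and is cleared slowly once bound to an abundant long-lived carrier. All parameter values below are illustrative assumptions, not values from the paper.

```python
# Toy two-pool model (illustrative parameters, not from the paper):
#   d(free)/dt  = p - (k_free + k_on) * free
#   d(bound)/dt = k_on * free - k_bound * bound
p = 1.0        # biomarker intravasation (production) rate
k_free = 10.0  # fast clearance of the free biomarker
k_on = 5.0     # effective rate of binding to the abundant carrier
k_bound = 0.1  # slow clearance of the carrier-bound biomarker

def simulate(t_end=200.0, dt=0.01):
    """Forward-Euler integration to (near) steady state."""
    free = bound = 0.0
    for _ in range(int(t_end / dt)):
        free += dt * (p - (k_free + k_on) * free)
        bound += dt * (k_on * free - k_bound * bound)
    return free, bound

free_ss, bound_ss = simulate()
print(free_ss, bound_ss)  # the slowly cleared bound pool dwarfs the free pool

# Steady state by hand: free = p / (k_free + k_on) ~ 0.067,
# bound = k_on * free / k_bound ~ 3.33 -- roughly a 50-fold amplification
```

Because the bound pool turns over on the carrier's slow timescale (1/k_bound), a step change in the production rate p works its way into the circulating level only gradually, matching the model's prediction of gradual rather than immediate changes in blood biomarker levels.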