923 results for indices
Abstract:
1 Species-accumulation curves for woody plants were calculated in three tropical forests, based on fully mapped 50-ha plots in wet, old-growth forest in Peninsular Malaysia, in moist, old-growth forest in central Panama, and in dry, previously logged forest in southern India. A total of 610 000 stems were identified to species and mapped to < 1 m accuracy. Mean species number and stem number were calculated in quadrats as small as 5 m x 5 m and as large as 1000 m x 500 m, for a variety of stem sizes above 10 mm in diameter. Species-area curves were generated by plotting species number as a function of quadrat size; species-individual curves were generated from the same data, but using stem number as the independent variable rather than area. 2 Species-area curves had different forms for stems of different diameters, but species-individual curves were nearly independent of diameter class. With < 10^4 stems, species-individual curves were concave downward on log-log plots, with curves from different forests diverging, but beyond about 10^4 stems, the log-log curves became nearly linear, with all three sites having a similar slope. This indicates an asymptotic difference in richness between forests: the Malaysian site had 2.7 times as many species as Panama, which in turn was 3.3 times as rich as India. 3 Other details of the species-accumulation relationship were remarkably similar between the three sites. Rectangular quadrats had 5-27% more species than square quadrats of the same area, with longer and narrower quadrats increasingly diverse. Random samples of stems drawn from the entire 50 ha had 10-30% more species than square quadrats with the same number of stems. At both Pasoh and BCI, but not Mudumalai, species richness was slightly higher among intermediate-sized stems (50-100 mm in diameter) than in either smaller or larger sizes. These patterns reflect aggregated distributions of individual species, plus weak density-dependent forces that tend to smooth the species abundance distribution and 'loosen' aggregations as stems grow. 4 The results provide support for the view that within each tree community, many species have their abundance and distribution guided more by random drift than deterministic interactions. The drift model predicts that the species-accumulation curve will have a declining slope on a log-log plot, reaching a slope of 0.1 in about 50 ha. No other model of community structure can make such a precise prediction. 5 The results demonstrate that diversity studies based on different stem diameters can be compared by sampling identical numbers of stems. Moreover, they indicate that stem counts < 1000 in tropical forests will underestimate the percentage difference in species richness between two diverse sites. Fortunately, standard diversity indices (Fisher's α, Shannon-Wiener) captured diversity differences in small stem samples more effectively than raw species richness, but both were sample size dependent. Two nonparametric richness estimators (Chao, jackknife) performed poorly, greatly underestimating true species richness.
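The species-individual curves in point 2 are, in effect, rarefaction curves. A minimal sketch of computing one by repeated random subsampling, on a purely hypothetical toy community (species counts and sample sizes are illustrative, not the plot data):

```python
# Sketch: species-individual accumulation curve by random subsampling.
# 'species_ids' is a hypothetical array giving the species of each mapped stem.
import numpy as np

rng = np.random.default_rng(0)
species_ids = rng.integers(0, 300, size=50_000)  # toy community, 300 species

def species_individual_curve(ids, sample_sizes, reps=20, rng=rng):
    """Mean species count among n randomly drawn stems, for each n."""
    out = []
    for n in sample_sizes:
        counts = [len(np.unique(rng.choice(ids, size=n, replace=False)))
                  for _ in range(reps)]
        out.append(np.mean(counts))
    return np.array(out)

sizes = np.array([100, 1_000, 10_000])
richness = species_individual_curve(species_ids, sizes)
# On a log-log plot the slope of this curve declines with n, as the abstract describes.
print(dict(zip(sizes.tolist(), richness.round(1))))
```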
Abstract:
Studies on the melt rheological properties of blends of low density polyethylene (LDPE) with selected grades of linear low density polyethylene (LLDPE), which differ widely in their melt flow indices, are reported. Data obtained in a capillary rheometer are presented to describe the effects of blend composition and shear rate on the flow behavior index, melt viscosity, and melt elasticity. In general, blending LLDPE I, which has a low melt flow index (2 g/10 min), with LDPE results in a decrease of its melt viscosity, processing temperature, and tendency toward extrudate distortion, depending on the blending ratio. A blending ratio of around 20-30% LLDPE I seems optimum from the point of view of a desirable improvement in processability. On the other hand, blending LLDPE II, which has a high melt flow index (10 g/10 min), with LDPE offers a distinct advantage in increasing the pseudoplasticity of LDPE/LLDPE II blends.
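For readers unfamiliar with the flow behavior index: it is the exponent n in the power-law model tau = K * gamma_dot^n fitted to capillary-rheometer data, with n < 1 indicating pseudoplasticity. A minimal sketch with invented data points:

```python
# Sketch: extracting the flow behavior index n from capillary-rheometer data
# by fitting the power-law model tau = K * gamma_dot**n (data hypothetical).
import numpy as np

gamma_dot = np.array([10., 30., 100., 300., 1000.])   # shear rate, 1/s
tau = 2.0e4 * gamma_dot ** 0.45                       # shear stress, Pa (toy data)

n, logK = np.polyfit(np.log(gamma_dot), np.log(tau), 1)  # log tau = log K + n log gamma
eta = tau / gamma_dot                                  # apparent melt viscosity, Pa.s
print(f"flow behavior index n = {n:.2f} (n < 1 => pseudoplastic); "
      f"eta at 100 1/s = {eta[2]:.0f} Pa.s")
```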
Abstract:
This paper presents a new approach for assessing power system voltage stability based on an artificial feed-forward neural network (FFNN). The approach uses real and reactive power, as well as voltage vectors for generators and load buses, to train the neural net (NN). The input properties of the NN are generated from offline training data with various simulated loading conditions using a conventional voltage stability algorithm based on the L-index. The performance of the trained NN is investigated on two systems under various voltage stability assessment conditions. The main advantage is that the proposed approach is fast, robust, accurate and can be used online for predicting the L-indices of all the power system buses simultaneously. The method can also be used effectively to determine local and global stability margins for further improvement measures.
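A minimal sketch of the offline-training / online-prediction split described above, with synthetic stand-ins for the L-index training data (bus counts, network shape and all values are hypothetical):

```python
# Sketch: train a feed-forward NN to map bus operating points to L-indices.
# Random data stands in for the offline L-index training set described above.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_buses, n_samples = 14, 500
X = rng.uniform(size=(n_samples, 3 * n_buses))        # P, Q, |V| per bus (hypothetical)
y = rng.uniform(0.0, 1.0, size=(n_samples, n_buses))  # L-index targets per bus

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
net.fit(X, y)                # offline training phase
L_pred = net.predict(X[:1])  # fast online prediction for one operating snapshot
print(L_pred.shape)          # one L-index per bus, all predicted simultaneously
```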
Abstract:
In this study, the nature of basin-scale hydroclimatic association for the Indian subcontinent is investigated. It is found that, owing to the geographical location of the Indian subcontinent, large-scale circulation information from the Indian Ocean is equally important in addition to the El Nino-Southern Oscillation (ENSO). The hydroclimatic association of the variation of monsoon inflow into the Hirakud reservoir in India is investigated using ENSO and the EQUatorial INdian Ocean Oscillation (EQUINOO, the atmospheric part of the Indian Ocean Dipole mode) as the large-scale circulation information from the tropical Pacific Ocean and Indian Ocean regions, respectively. The individual associations of the ENSO and EQUINOO indices with inflow into the Hirakud reservoir are also assessed and found to be weak. However, the association of inflows into the Hirakud reservoir with a composite index (CI) of ENSO and EQUINOO is quite strong. Thus, large-scale circulation information from the Indian Ocean is indeed important apart from ENSO. The potential of the combined information of ENSO and EQUINOO for predicting inflows during the monsoon is also investigated, with promising results. The results of this study will be helpful to water resources managers, since an early prediction of the monsoon inflow becomes available.
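The abstract does not spell out how the composite index is formed; assuming, for illustration only, an equally weighted sum of standardized indices, the comparison it describes looks like this:

```python
# Sketch: correlating inflow with individual indices and a composite index.
# The paper's exact CI definition isn't given here; an equally weighted sum of
# standardized indices is assumed purely for illustration. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
enso = rng.normal(size=50)       # hypothetical seasonal index values
equinoo = rng.normal(size=50)
inflow = 0.5 * enso + 0.5 * equinoo + rng.normal(scale=0.7, size=50)

def zscore(x):
    return (x - x.mean()) / x.std()

ci = zscore(enso) + zscore(equinoo)   # assumed composite index
for name, idx in [("ENSO", enso), ("EQUINOO", equinoo), ("CI", ci)]:
    r = np.corrcoef(idx, inflow)[0, 1]
    print(f"{name}: r = {r:.2f}")     # the composite correlates strongest
```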
Abstract:
Despite great advances in very large scale integrated-circuit design and manufacturing, the performance of even the best available high-speed, high-resolution analog-to-digital converter (ADC) is known to deteriorate while acquiring fast-rising, high-frequency, and nonrepetitive waveforms. Waveform digitizers (ADCs) used in high-voltage impulse recordings and measurements are invariably subjected to such waveforms. Errors resulting from lowered ADC performance can be unacceptably high, especially when higher accuracies have to be achieved (e.g., when part of a reference measuring system). Static and dynamic nonlinearities (estimated independently) are vital indices for evaluating the performance and suitability of ADCs to be used in such environments. Typically, the estimation of static nonlinearity takes 10-12 h or more (for a 12-b ADC), while dynamic characterization requires the acquisition of millions of samples at high input frequencies. ADCs with even higher resolution and faster sampling speeds will soon become available, so there is a need to reduce the testing time for evaluating these parameters. This paper proposes a novel and time-efficient method for the simultaneous estimation of static and dynamic nonlinearity from a single test. This is achieved by conceiving a test signal composed of a high-frequency sinusoid (which addresses the dynamic assessment) modulated by a low-frequency ramp (relevant to the static part). Details of implementation and results on two digitizers are presented and compared with nonlinearities determined by the existing standardized approaches. The good agreement in results and the achievable time savings indicate the method's suitability.
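One plausible reading of the proposed test signal (an assumption, not the authors' published construction) is a small high-frequency sinusoid riding on a slow full-scale ramp, so the ramp sweeps every code level while the sinusoid supplies dynamic excitation:

```python
# Sketch: one plausible composite ADC test stimulus (all parameters hypothetical).
import numpy as np

fs = 100e6                               # sampling rate, Hz (assumed)
t = np.arange(0, 10e-3, 1 / fs)          # 10 ms record
ramp = np.linspace(-0.5, 0.5, t.size)    # low-frequency ramp spanning full scale
sine = 0.05 * np.sin(2 * np.pi * 1e6 * t)  # 1 MHz sinusoid for dynamic excitation
stimulus = ramp + sine                   # ramp sweeps the static transfer curve
                                         # while the sinusoid exercises dynamics
print(stimulus.min(), stimulus.max())
```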
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Forming this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One data set, from the Burdekin River, consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other, from the Tully River, covers the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully dataset, however, incorporating the additional predictive variables, namely the discounted flow and flow phases (rising or recessing), substantially improved the model fit, and thus the certainty with which the load is estimated.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps (see the sketch after this abstract):
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors which results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
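A minimal sketch of steps (i), (iii) and (iv), with a plain log-log rating curve standing in for the paper's richer concentration model (first flush, hydrograph phase, discounted flow); all data are simulated:

```python
# Sketch of steps (i), (iii), (iv): predict concentration from flow with a
# rating curve, then sum flow x concentration over regular intervals.
import numpy as np

rng = np.random.default_rng(3)
dt = 600.0                                   # step (i): 10-minute intervals, s
q = np.exp(rng.normal(2.0, 0.5, size=1000))  # flow at every interval, m^3/s
sampled = rng.choice(q.size, size=50, replace=False)  # sparse concentration visits
c_obs = 5.0 * q[sampled] ** 0.8 * np.exp(rng.normal(0, 0.2, 50))  # mg/L (toy)

# Step (iii): fit log C = a + b log Q and predict C at every interval.
b, a = np.polyfit(np.log(q[sampled]), np.log(c_obs), 1)
c_hat = np.exp(a) * q ** b

# Step (iv): load = sum of Q * C_hat * dt; (m^3/s)(mg/L)(s) = g, as 1 m^3 = 1000 L.
load_g = np.sum(q * c_hat * dt)
print(f"estimated load: {load_g / 1000:.0f} kg")
```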
Abstract:
The Bernoulli/exponential target process is considered. Such processes have been found useful in modelling the search for active compounds in pharmaceutical research. An inequality is presented which improves a result of Gittins (1989), thus providing a better approximation to the Gittins indices which define the optimal search policy.
Abstract:
The effects of fish density distribution and effort distribution on the overall catchability coefficient are examined. Emphasis is also placed on how aggregation and effort distribution interact to affect the overall catch rate [catch per unit effort (cpue)]. In particular, it is proposed to evaluate three indices, the catchability index, the knowledge parameter, and the aggregation index, to describe the effectiveness of targeting and the effects on overall catchability in the stock area. Analytical expressions are provided so that these indices can easily be calculated. The average of the cpue calculated from small units where fishing is random is a better index for measuring stock abundance. The overall cpue, the ratio of lumped catch to lumped effort, together with the average cpue, can be used to assess the effectiveness of targeting. The proposed methods are applied to commercial catch and effort data from the Australian northern prawn fishery. The indices are obtained assuming a power law for the effort distribution as an approximation of targeting during the fishing operation. Targeting increased catchability in some areas by 10%, which may have important implications for management advice.
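A toy simulation of the key contrast: under power-law targeting, the overall (lumped) cpue is an effort-weighted mean of density and overstates abundance, while the average of small-unit cpue tracks catchability times mean density. All parameters are hypothetical:

```python
# Sketch: overall cpue vs the mean of small-unit cpue under targeted effort.
# Effort is allocated as a power of local density, approximating targeting.
import numpy as np

rng = np.random.default_rng(4)
density = rng.lognormal(0.0, 1.0, size=100)  # fish density in 100 small units
q = 0.01                                     # catchability within a unit
gamma = 1.5                                  # power-law targeting exponent (assumed)
effort = density ** gamma
effort *= 1000 / effort.sum()                # total effort fixed at 1000 units

catch = q * density * effort
overall_cpue = catch.sum() / effort.sum()    # lumped ratio, inflated by targeting
average_cpue = np.mean(catch / effort)       # unit-level mean, equals q * mean density
print(overall_cpue, average_cpue, q * density.mean())
```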
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors such as the biological characteristics of the animals, some aspects of the fleet dynamics, and changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions drawn from simpler statistical models, and the random-effects models yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from the different models differed, suggesting that the methods differ in statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and to gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when the impact of vessel characteristics was offset at values assumed from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
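As a point of reference, the simplest member of the model family compared above is an ordinary linear model of log(cpue) on year and vessel covariates, whose fitted year effects form the relative abundance index; the GEE and mixed-model variants differ mainly in their variance and correlation assumptions. A sketch with invented data and names:

```python
# Sketch: cpue standardization via a linear model with a vessel covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "year": rng.integers(1990, 1995, n).astype(str),
    "hull_len": rng.uniform(15, 25, n),  # hypothetical vessel covariate
})
df["log_cpue"] = (0.1 * df["hull_len"] + df["year"].astype(int) * 0.05
                  + rng.normal(0, 0.3, n))

fit = smf.ols("log_cpue ~ C(year) + hull_len", data=df).fit()
print(fit.params.filter(like="year"))  # year effects = standardized index
```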
Abstract:
In the coastal region of central Queensland female red-spot king prawns, P. longistylus, and the western or blue-leg king prawns, P. latisulcatus, had high mean ovary weights and high proportions of advanced ovary development during the winter months of July and August of 1985 and 1986. On the basis of insemination, both species began copulating at the size of 26-27 mm CL, but P. longistylus matured and spawned at a smaller size than P. latisulcatus. Abundance of P. longistylus was generally three to four times greater than that of P. latisulcatus but the latter was subject to greater variation in abundance. Low mean ovary weight and low proportions of females with advanced ovaries were associated with the maximum mean bottom sea-water temperature (28.5 °C) for both species. Population fecundity indices indicated that peaks in yolk or egg production (a) displayed a similar pattern for both species, (b) varied in timing from year to year for both species and (c) were strongly influenced by abundance. Generally, sample estimates of abundance and commercial catch rates (CPUE) showed similar trends. Differences between the two may have been due to changes in targeted commercial effort in this multi-species fishery.
Abstract:
Metapenaeus endeavouri and M. ensis from coastal trawl fishing grounds off central Queensland, Australia, have marked seasonal reproductive cycles. Female M. endeavouri grew to a larger size than female M. ensis and occurred over a wider range of sites and depths. Although M. ensis was geographically restricted in distribution to only the shallowest sites, it was highly abundant. Mating activity in these open thelycum species, indicated by the presence or absence of a spermatophore, was relatively low and highly seasonal compared with closed thelycum shrimps. Seasonal variation in spermatophore insemination can be used as an independent technique to study spawning periodicity in open thelycum shrimps. Data strongly suggest an inshore movement of M. endeavouri to mature and spawn. This differs from most concepts of Penaeus species life cycles, but is consistent with the estuarine significance in the life cycle of Metapenaeus species. Monthly population fecundity indices suggest summer spawning for both species, which contrasts with the winter spawning of other shrimps from the same multispecies fishery.
Abstract:
This paper investigates the stock-recruitment and equilibrium yield dynamics for the two species of tiger prawns (Penaeus esculentus and Penaeus semisulcatus) in Australia's most productive prawn fishery, the Northern Prawn Fishery. Commercial trawl logbooks for 1970-93 and research surveys are used to develop population models for these prawns. A population model that incorporates continuous recruitment is developed. Annual spawning stock and recruitment indices are then estimated from the population model. Spawning stock indices represent the abundance of female prawns that are likely to spawn; recruitment indices represent the abundance of all prawns less than a certain size. The relationships between spawning stock and subsequent recruitment (SRR), between recruitment and subsequent spawning stock (RSR), and between recruitment and commercial catch were estimated through maximum-likelihood models that incorporated autoregressive terms. Yield as a function of fishing effort was estimated by constraining the SRR and RSR to equilibrium. The resulting production model was then used to determine the maximum sustainable yield (MSY) and its corresponding fishing effort (f_MSY). Long-term yield estimates for the two tiger prawn species range between 3700 and 5300 t. Fishing effort at present is close to the level that should produce MSY for both species of tiger prawns. However, current landings, recruitment and spawning stock are below the equilibrium values predicted by the models. This may be because of uncertainty in the spawning stock-recruitment relationships, a change in carrying capacity, biased estimates of fishing effort, unreliable catch statistics, or simplistic assumptions about stock structure. Although our predictions of tiger prawn yields are uncertain, management will soon have to consider new measures to counteract the effects of future increases in fishing effort.
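A minimal sketch of deriving equilibrium yield and f_MSY from a stock-recruitment relationship by iterating to equilibrium over a grid of efforts. A Ricker SRR with invented parameters is assumed; the paper's continuous-recruitment model is more elaborate:

```python
# Sketch: equilibrium yield vs effort from an assumed Ricker stock-recruitment
# relationship; parameters are hypothetical, for illustration only.
import numpy as np

alpha, beta, q = 8.0, 0.002, 0.001   # Ricker params and catchability (invented)

def equilibrium_yield(f, n_years=200):
    s = 500.0                        # initial spawning stock index
    for _ in range(n_years):         # iterate to (approximate) equilibrium
        r = alpha * s * np.exp(-beta * s)   # recruitment from spawners (Ricker)
        catch = r * (1 - np.exp(-q * f))    # harvest as a function of effort f
        s = r - catch                       # survivors spawn next season
    return catch

efforts = np.linspace(100, 3000, 60)
yields = [equilibrium_yield(f) for f in efforts]
i = int(np.argmax(yields))
print(f"f_MSY ~ {efforts[i]:.0f}, MSY ~ {yields[i]:.1f}")
```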
Abstract:
Suppose two treatments with binary responses are available for patients with some disease and that each patient will receive one of the two treatments. In this paper we consider the interests of patients both within and outside a trial using a Bayesian bandit approach and conclude that equal allocation is not appropriate for either group of patients. It is suggested that Gittins indices should be used (using an approach called dynamic discounting by choosing the discount rate based on the number of future patients in the trial) if the disease is rare, and the least failures rule if the disease is common. Some analytical and simulation results are provided.
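A toy simulation of the 'least failures' rule named above against equal allocation, scored by failures among the trial's own patients (success probabilities and trial size are invented; the Gittins-index policy itself is not implemented here):

```python
# Sketch: 'least failures' rule vs equal allocation in a two-armed Bernoulli trial.
import numpy as np

rng = np.random.default_rng(6)
p = np.array([0.6, 0.4])   # true success probabilities (unknown to the rules)

def run_trial(rule, n=100, reps=2000):
    total_failures = 0.0
    for _ in range(reps):
        fail = np.zeros(2)
        for t in range(n):
            if rule == "equal":
                arm = t % 2                 # strict alternation
            else:
                arm = int(np.argmin(fail))  # treat with the arm that has failed
                                            # least so far (ties break to arm 0)
            fail[arm] += rng.random() > p[arm]
        total_failures += fail.sum()
    return total_failures / reps

print("equal:", run_trial("equal"), " least-failures:", run_trial("lf"))
```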