929 results for Remediation time estimation


Relevance: 30.00%

Abstract:

We have studied growth and estimated recruitment of massive coral colonies at three sites, Kaledupa, Hoga and Sampela, separated by about 1.5 km in the Wakatobi Marine National Park, S.E. Sulawesi, Indonesia. There was significantly higher species richness (P<0.05), coral cover (P<0.05) and rugosity (P<0.01) at Kaledupa than at Sampela. A model for coral reef growth has been developed based on a rational polynomial function, in which dx/dt is an index of coral growth with time; W is the variable (for example, coral weight, coral length or coral area), raised to powers up to n in the numerator and m in the denominator; and a1…an and b1…bm are constants. The values of n and m give the degree of the polynomial and can relate to the morphology of the coral. The model was used to simulate typical coral growth curves, and tested using published data obtained by weighing coral colonies underwater in reefs on the south-west coast of Curaçao [Neth. J. Sea Res. 10 (1976) 285]. The model proved an accurate fit to the data, and parameters were obtained for a number of coral species. Surface area data were obtained on over 1200 massive corals at three different sites in the Wakatobi Marine National Park, S.E. Sulawesi, Indonesia. The year of an individual's recruitment was calculated from knowledge of the growth rate, modified by application of the rational polynomial model. The estimated pattern of recruitment was variable, with few massive corals settling and growing before 1950 at the heavily used site, Sampela, relative to the reef site with little or no human use, Kaledupa, and the intermediate site, Hoga. There was a significantly greater sedimentation rate at Sampela than at either Kaledupa (P<0.0001) or Hoga (P<0.0005).
The relative mean abundance of fish families present at the reef crests at the three sites, determined using digital video photography, did not correlate with sedimentation rates, underwater visibility or the lack of large non-branching coral colonies. Radial growth rates of three genera of non-branching corals were significantly lower at Sampela than at Kaledupa or Hoga, and there was a high correlation (r=0.89) between radial growth rates and underwater visibility. Porites spp. was the most abundant coral across all sites and depths, followed by Favites spp. (P<0.04) and Favia spp. (P<0.03). Colony ages of Porites corals were significantly lower at the 5 m reef flat on the Sampela reef than at the same depth on both other reefs (P<0.005). At Sampela, only 2.8% of corals on the 5 m reef crest are of a size to have survived from before 1950. The scleractinian coral community of Sampela is severely impacted by sediment deposition, which can smother corals and which decreases light penetration, resulting in reduced growth and calcification rates. The net loss of material from Sampela, if left unchecked, could result in the loss of this protective barrier, to the detriment of the sublittoral sand flats and hence the Sampela village.
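
As a numerical illustration of the growth model described above, the sketch below integrates a minimal rational-polynomial rate of the form dW/dt = a1·W/(1 + b1·W) (the n = m = 1 case). The parameter values and initial weight are invented for illustration, not values fitted in the study.

```python
# Minimal sketch of the rational-polynomial growth model with n = m = 1,
# i.e. dW/dt = a1*W / (1 + b1*W); a1, b1 and w0 are hypothetical values.
def simulate_growth(w0, a1, b1, dt=0.1, years=30.0):
    """Euler-integrate colony weight W forward in time."""
    w, t = w0, 0.0
    trajectory = [(t, w)]
    while t < years:
        dwdt = a1 * w / (1.0 + b1 * w)   # rational polynomial growth rate
        w += dwdt * dt
        t += dt
        trajectory.append((t, w))
    return trajectory

traj = simulate_growth(w0=1.0, a1=0.5, b1=0.05)
```

For large W the rate saturates towards a1/b1, giving the near-linear late growth typical of massive colonies; higher-degree numerators and denominators reshape this curve for other morphologies.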

Relevance: 30.00%

Abstract:

Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings presents a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcing rather than a few highly replicated ensembles, as is more common in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, which itself contains some forcing from anthropogenic and natural influences, have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. An interaction between these two anthropogenic effects was also found to exist in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model.
For the global mean, this shows that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model was suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
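
The factorial estimation idea can be sketched in a few lines: simulate every on/off combination of three hypothetical anthropogenic forcings, then recover the main effects and one pairwise interaction with a single least-squares fit. All effect sizes below are invented for illustration; this is not the study's GCM output.

```python
import itertools
import numpy as np

# Hypothetical effect sizes, including a nonadditive GHG x aerosol interaction.
rng = np.random.default_rng(0)
true = {"ghg": 1.2, "aer": -0.6, "o3": 0.3, "ghg_x_aer": 0.25}

rows, y = [], []
for ghg, aer, o3 in itertools.product([0, 1], repeat=3):   # 8 "integrations"
    mean = (true["ghg"] * ghg + true["aer"] * aer + true["o3"] * o3
            + true["ghg_x_aer"] * ghg * aer)
    for _ in range(3):                                      # small ensembles
        rows.append([1.0, ghg, aer, o3, ghg * aer])         # design row
        y.append(mean + rng.normal(scale=0.01))             # internal noise

X = np.array(rows)
beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
# beta[1:4] estimate the main effects; beta[4] estimates the interaction
```

Because every combination is run, the interaction column is not collinear with the main-effect columns, which is exactly what the full-factorial design buys over a few replicated ensembles.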

Relevance: 30.00%

Abstract:

This paper presents a parallelised Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MV) are generated from the first-pass LHMEA and are used as predictors for second-pass HEXBS motion estimation, which only searches a small number of Macroblocks (MBs). We introduced the hashtable into video processing and completed a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of the TPA on clusters of workstations for real-time video compression. We discuss how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing for improved performance is discussed. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms.

Relevance: 30.00%

Abstract:

This paper presents a novel two-pass algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for block-based motion compensation. On the basis of research from previous algorithms, especially an on-the-edge motion estimation algorithm called hexagonal search (HEXBS), we propose the LHMEA and the Two-Pass Algorithm (TPA). We introduce the hashtable into video compression. In this paper we employ the LHMEA for the first-pass search in all the Macroblocks (MB) in the picture. Motion Vectors (MV) are then generated from the first pass and are used as predictors for second-pass HEXBS motion estimation, which only searches a small number of MBs. The evaluation of the algorithm considers three important metrics: time, compression rate and PSNR. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms. Experimental results show that the proposed algorithm can offer the same compression rate as the Full Search. The LHMEA with the TPA offers a significant improvement over HEXBS and shows a direction for improving other fast motion estimation algorithms, for example Diamond Search.
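
A minimal sketch of the second-pass idea, assuming a plain SAD block-matching cost: hexagonal search refines a motion vector starting from a predictor, as the TPA does with first-pass LHMEA vectors. The search patterns and block size below are textbook HEXBS choices, not the paper's exact configuration.

```python
import numpy as np

# Large hexagon pattern (including the centre) and the small refinement cross.
LARGE_HEX = [(0, 0), (-2, 0), (2, 0), (-1, 2), (1, 2), (-1, -2), (1, -2)]
SMALL_CROSS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def sad(cur, ref, bx, by, dx, dy, n=8):
    """Sum of absolute differences between a block and its displaced match."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + n > w or y + n > h:
        return float("inf")                   # displacement leaves the frame
    return float(np.abs(cur[by:by + n, bx:bx + n] - ref[y:y + n, x:x + n]).sum())

def hexbs(cur, ref, bx, by, pred=(0, 0), n=8):
    """Hexagonal search seeded at a predictor motion vector."""
    mv = pred
    while True:                               # large-hexagon stage
        best = min(LARGE_HEX, key=lambda o: sad(cur, ref, bx, by,
                                                mv[0] + o[0], mv[1] + o[1], n))
        if best == (0, 0):                    # centre is best: stop moving
            break
        mv = (mv[0] + best[0], mv[1] + best[1])
    best = min(SMALL_CROSS, key=lambda o: sad(cur, ref, bx, by,
                                              mv[0] + o[0], mv[1] + o[1], n))
    return (mv[0] + best[0], mv[1] + best[1])
```

With a good predictor, the large-hexagon loop terminates after very few moves, which is why seeding HEXBS with first-pass vectors cuts the number of macroblocks examined.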

Relevance: 30.00%

Abstract:

This paper presents a parallelised Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MV) are generated from the first-pass LHMEA and are used as predictors for second-pass HEXBS motion estimation, which only searches a small number of Macroblocks (MBs). We introduced the hashtable into video processing and completed a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of the TPA on clusters of workstations for real-time video compression. We discuss how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing for improved performance is discussed. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms.

Relevance: 30.00%

Abstract:

This paper presents an improved Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MV) are generated from the first-pass LHMEA and are used as predictors for second-pass HEXBS motion estimation, which only searches a small number of Macroblocks (MBs). The hashtable structure of the LHMEA is improved compared to the original TPA and LHMEA. The evaluation of the algorithm considers three important metrics: processing time, compression rate and PSNR. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms.

Relevance: 30.00%

Abstract:

This paper presents a parallel Linear Hashtable Motion Estimation Algorithm (LHMEA). Most parallel video compression algorithms focus on the Group of Pictures (GOP). Based on the LHMEA we proposed earlier [1][2], we developed a parallel motion estimation algorithm that focuses on parallelism within a frame. We divide each reference frame into equally sized regions, which are processed in parallel to increase the encoding speed significantly. The theoretical and practical speed-ups of the parallel LHMEA according to the number of PCs in the cluster are compared and discussed. Motion Vectors (MV) are generated from the first-pass LHMEA and used as predictors for second-pass Hexagonal Search (HEXBS) motion estimation, which only searches a small number of Macroblocks (MBs). We evaluated a distributed parallel implementation of the LHMEA of the TPA for real-time video compression.
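
The region-splitting idea can be sketched as follows, with threads standing in for cluster nodes and a placeholder per-region computation standing in for the LHMEA first pass:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def process_region(region):
    """Placeholder per-region first-pass work over 8x8 macroblocks.

    Returns one (y, x, statistic) tuple per macroblock; a real first pass
    would emit candidate motion vectors instead.
    """
    h, w = region.shape
    return [(y, x, float(region[y:y + 8, x:x + 8].mean()))
            for y in range(0, h - 7, 8) for x in range(0, w - 7, 8)]

def parallel_first_pass(frame, n_regions=4):
    """Split the frame into equally sized strips and process them concurrently."""
    strips = np.array_split(frame, n_regions, axis=0)
    with ThreadPoolExecutor(max_workers=n_regions) as pool:
        return list(pool.map(process_region, strips))

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
out = parallel_first_pass(frame)
```

On a cluster, each strip would go to a different node; the toy version keeps the same data decomposition but runs it in one process. Blocks straddling region boundaries are a real-world complication this sketch ignores.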

Relevance: 30.00%

Abstract:

Channel estimation is a key issue in MIMO systems. In recent years, many papers on subspace (SS)-based blind channel estimation have been published. In this paper, combining the SS method with a space-time coding scheme, we propose a novel blind channel estimation method for MIMO systems. Simulation results demonstrate the effectiveness of this method.
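
A minimal sketch of the subspace principle behind SS-based blind estimation, assuming an invented 4×2 channel and ignoring the paper's space-time coding: the noise subspace of the received covariance is (nearly) orthogonal to the channel columns, which is what lets the channel be identified without training symbols.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 2))                    # unknown MIMO channel (toy)
S = rng.choice([-1.0, 1.0], size=(2, 5000))    # BPSK symbol streams
X = H @ S + 0.1 * rng.normal(size=(4, 5000))   # received samples

R = X @ X.T / X.shape[1]                       # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
noise_sub = eigvecs[:, :2]                     # 4 rx - 2 tx = 2 noise vectors

# The noise subspace should be (nearly) orthogonal to the channel columns.
residual = float(np.linalg.norm(noise_sub.T @ H))
```

In a full SS method this orthogonality is turned into a minimisation that recovers H up to an ambiguity matrix, which the space-time coding structure then resolves.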

Relevance: 30.00%

Abstract:

Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of a jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate fixed. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
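
For reference, the PW target function itself is just an equally weighted Gaussian kernel placed on every training point; a minimal sketch, with an arbitrary width and toy data:

```python
import numpy as np

def parzen_window(train, width):
    """Return a callable Parzen-window density with a common kernel width."""
    train = np.asarray(train, dtype=float)
    norm = width * np.sqrt(2.0 * np.pi)        # Gaussian normalising constant
    def density(x):
        z = (np.asarray(x) - train[:, None]) / width   # one kernel per point
        return np.exp(-0.5 * z * z).sum(axis=0) / (len(train) * norm)
    return density

pw = parzen_window([-1.0, 0.0, 0.5, 2.0], width=0.4)
xs = np.linspace(-6.0, 8.0, 2801)
mass = float(pw(xs).sum() * (xs[1] - xs[0]))   # numerical check: about 1
```

The sparse estimator described above keeps only a few of these kernels, with individually tuned widths, while staying close to this full-sample target.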

Relevance: 30.00%

Abstract:

This paper presents a novel two-pass algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for block-based motion compensation. On the basis of research from previous algorithms, especially an on-the-edge motion estimation algorithm called hexagonal search (HEXBS), we propose the LHMEA and the Two-Pass Algorithm (TPA). We introduce the hashtable into video compression. In this paper we employ the LHMEA for the first-pass search in all the Macroblocks (MB) in the picture. Motion Vectors (MV) are then generated from the first pass and are used as predictors for second-pass HEXBS motion estimation, which only searches a small number of MBs. The evaluation of the algorithm considers three important metrics: time, compression rate and PSNR. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms. Experimental results show that the proposed algorithm can offer the same compression rate as the Full Search. The LHMEA with the TPA offers a significant improvement over HEXBS and shows a direction for improving other fast motion estimation algorithms, for example Diamond Search.

Relevance: 30.00%

Abstract:

A new parameter-estimation algorithm, which minimises the cross-validated prediction error for linear-in-the-parameter models, is proposed, based on stacked regression and an evolutionary algorithm. It is initially shown that cross-validation is very important for prediction in linear-in-the-parameter models using a criterion called the mean dispersion error (MDE). Stacked regression, which can be regarded as a sophisticated type of cross-validation, is then introduced based on an evolutionary algorithm, to produce a new parameter-estimation algorithm, which preserves the parsimony of a concise model structure that is determined using the forward orthogonal least-squares (OLS) algorithm. The PRESS prediction errors are used for cross-validation, and the sunspot and Canadian lynx time series are used to demonstrate the new algorithms.
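
The PRESS errors used above have a closed form for linear-in-the-parameters models, so the leave-one-out residuals need no model refitting; a sketch of the hat-matrix identity e_i/(1 − h_ii), on invented toy data:

```python
import numpy as np

def press(X, y):
    """PRESS via the closed-form LOO residuals e_i / (1 - h_ii)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta                                # ordinary residuals
    hat_diag = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)
    loo = resid / (1.0 - hat_diag)                      # leave-one-out residuals
    return float((loo ** 2).sum())

rng = np.random.default_rng(2)
x = rng.normal(size=20)
X = np.column_stack([np.ones(20), x])                   # linear-in-the-parameters
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=20)
p = press(X, y)
```

This O(n) shortcut is what makes PRESS cheap enough to use as a fitness score inside an evolutionary search.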

Relevance: 30.00%

Abstract:

A new structure of Radial Basis Function (RBF) neural network, called the Dual-Orthogonal RBF Network (DRBF), is introduced for nonlinear time series prediction. The hidden nodes of a conventional RBF network compute the Euclidean distance between the network input vector and the centres, and the node responses are radially symmetrical. But in time series prediction, where the system input vectors are lagged system outputs, which are usually highly correlated, the Euclidean distance measure may not be appropriate. The DRBF network modifies the distance metric by introducing a classification function which is based on the estimation data set. Training the DRBF network consists of two stages: learning the classification-related basis functions and the important input nodes, followed by selecting the regressors and learning the weights of the hidden nodes. In both cases, a forward Orthogonal Least Squares (OLS) selection procedure is applied, initially to select the important input nodes and then to select the important centres. Simulation results of single-step and multi-step ahead predictions over a test data set are included to demonstrate the effectiveness of the new approach.
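
A toy sketch of the conventional RBF mechanics described above (Euclidean distances to centres, Gaussian node responses, lagged inputs), with the forward-OLS centre selection replaced by naive subsampling for brevity:

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian node responses to the Euclidean distances from each centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

series = np.sin(0.3 * np.arange(60))          # toy time series
lag = 2
X = np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])
y = series[lag:]                              # one-step-ahead targets

centres = X[::3]                              # crude stand-in for OLS selection
Phi = rbf_design(X, centres, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # hidden-layer weights
pred = Phi @ w
rms = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The lagged inputs here are indeed highly correlated (consecutive sine samples), which is exactly the situation where the DRBF's modified distance metric is argued to help over the plain Euclidean version sketched here.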

Relevance: 30.00%

Abstract:

In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting this data to transform it into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed, and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modeling, empirical data modeling, knowledge discovery, data mining, and data fusion.

Relevance: 30.00%

Abstract:

Determination of varicella zoster virus (VZV) immunity in healthcare workers without a history of chickenpox is important for identifying those in need of vOka vaccination. Post immunisation, healthcare workers in the UK who work with high-risk patients are tested for seroconversion. To assess the performance of the time-resolved fluorescence immunoassay (TRFIA) for the detection of antibody in vaccinated as well as unvaccinated individuals, a cut-off was first calculated. VZV-IgG specific avidity and titres six weeks after the first dose of vaccine were used to identify subjects with pre-existing immunity among a cohort of 110 healthcare workers. Those with high avidity (≥60%) were considered to have previous immunity to VZV and those with low or equivocal avidity (<60%) were considered naive. The former had antibody levels ≥400 mIU/mL and the latter had levels <400 mIU/mL. Comparison of the baseline values of the naive and immune groups allowed the estimation of a TRFIA cut-off value of >130 mIU/mL which best discriminated between the two groups, and this was confirmed by ROC analysis. Using this value, the sensitivity and specificity of the TRFIA cut-off were 90% (95% CI 79-96) and 78% (95% CI 61-90), respectively, in this population. A subset of samples tested by the gold standard Fluorescence Antibody to Membrane Antigen (FAMA) test showed 84% (54/64) agreement with TRFIA.
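
The cut-off evaluation can be sketched as a simple threshold test over the two avidity-defined groups. The titre values below are invented toy data, not study measurements; they are chosen only so the toy sensitivity and specificity come out near the reported 90%/78%.

```python
def sens_spec(immune, naive, cutoff):
    """Sensitivity/specificity of 'titre > cutoff' as a test for immunity."""
    tp = sum(1 for v in immune if v > cutoff)    # immune correctly above cut-off
    tn = sum(1 for v in naive if v <= cutoff)    # naive correctly at/below it
    return tp / len(immune), tn / len(naive)

# Hypothetical TRFIA titres (mIU/mL) for avidity-defined groups.
immune = [450, 520, 125, 610, 390, 980, 240, 135, 700, 410]
naive = [40, 90, 150, 60, 110, 30, 75, 200, 55, 120]
sens, spec = sens_spec(immune, naive, cutoff=130)
```

An ROC analysis, as used in the study, simply repeats this calculation over a sweep of candidate cut-offs and picks the best trade-off.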

Relevance: 30.00%

Abstract:

Little has been reported on the performance of near-far resistant CDMA detectors in the presence of system parameter estimation errors (SPEEs). Starting with the general mathematical model of matched filters, the paper examines the effects of three classes of SPEEs, i.e., time-delay, carrier phase, and carrier frequency errors, on the performance (BER) of an emerging type of near-far resistant coherent DS/SSMA detector, i.e., the linear decorrelating detector. For comparison, the corresponding results for the conventional detector are also presented. It is shown that the linear decorrelating detector can still maintain a considerable performance advantage over the conventional detector even when some SPEEs exist.
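
A minimal sketch of the two detectors under perfect parameter estimates (the paper's subject, robustness to SPEEs, is omitted here), using invented length-4 signatures with cross-correlation 0.5 and a severe near-far power imbalance:

```python
import numpy as np

# Invented unit-energy signature sequences with cross-correlation rho = 0.5.
s1 = np.array([1.0, 1.0, 1.0, 1.0]) / 2.0
s2 = np.array([1.0, 1.0, 1.0, -1.0]) / 2.0
S = np.column_stack([s1, s2])
R = S.T @ S                        # signature cross-correlation matrix

bits = np.array([1.0, -1.0])
amps = np.diag([0.1, 10.0])        # user 1 is 40 dB weaker: near-far scenario
r = S @ amps @ bits                # noiseless synchronous received signal

y = S.T @ r                        # conventional matched-filter outputs
d = np.linalg.solve(R, y)          # linear decorrelating detector: R^{-1} y
```

In the noiseless case the decorrelator output is exactly the amplitude-scaled bit vector, so sign decisions are correct for both users, while the weak user's matched-filter output is dominated by the strong interferer: the near-far resistance the paper takes as its starting point.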