918 results for GENERATION MEANS ANALYSIS
Abstract:
Light Detection And Ranging (LIDAR) data for terrain and land surveying has contributed to many environmental, engineering and civil applications. However, the analysis of Digital Surface Models (DSMs) from complex LIDAR data is still challenging. Commonly, the first task in investigating LIDAR point clouds is to separate ground and object points as a preparatory step for further object classification. In this paper, the authors present a novel unsupervised segmentation algorithm, skewness balancing, to separate object and ground points efficiently from high-resolution LIDAR point clouds by exploiting statistical moments. The results presented in this paper demonstrate its robustness and its potential for commercial applications.
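The skewness-balancing idea can be sketched compactly: treat the height distribution of the point cloud as ground contaminated by objects, and peel off the highest returns until the remaining heights are no longer positively skewed. The sketch below is a minimal stdlib illustration of that loop under those assumptions, not the authors' implementation; the function names and the termination guard are my own.

```python
import statistics

def skewness(values):
    """Sample skewness: the third standardized moment."""
    values = list(values)
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0.0:
        return 0.0
    return statistics.fmean(((v - mean) / sd) ** 3 for v in values)

def skewness_balancing(heights):
    """Peel off the highest returns (assumed objects) until the
    remaining height distribution is no longer positively skewed;
    what remains is labelled ground."""
    pts = sorted(heights)
    objects = []
    while len(pts) > 3 and skewness(pts) > 0.0:
        objects.append(pts.pop())  # remove the current highest point
    return pts, objects  # (ground candidates, object candidates)
```

On a toy cloud of low, left-skewed ground heights plus a few tall returns, the tall returns are peeled off and the ground points remain.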
Abstract:
The potential of the τ-ω model for retrieving the volumetric moisture content of bare and vegetated soil from dual-polarisation passive microwave data acquired at single and multiple angles is tested. Measurement error and several additional sources of uncertainty affect the theoretical retrieval accuracy. These include uncertainty in the soil temperature, in the vegetation structure and consequently its microwave single-scattering albedo, and in the soil microwave emissivity arising from its roughness. To test the effects of these uncertainties for simple homogeneous scenes, we attempt to retrieve soil moisture from a number of simulated microwave brightness temperature datasets generated using the τ-ω model. The uncertainties for each influence are estimated and applied to curves generated for typical scenarios, and an inverse model is used to retrieve the soil moisture content, vegetation optical depth and soil temperature. The effect of each influence on the theoretical soil moisture retrieval limit is explored, the likelihood of each sensor configuration meeting user requirements is assessed, and the most effective means of improving moisture retrieval is indicated.
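For reference, the forward (emission) side of the τ-ω model that such retrievals invert is commonly written in a zeroth-order form: attenuated soil emission, direct canopy emission, and canopy emission reflected by the soil. The sketch below is that textbook form under the assumption that a rough-soil reflectivity is supplied directly; parameter names are illustrative, not the paper's configuration.

```python
import math

def tau_omega_tb(t_soil, t_veg, r_soil, tau, omega, theta_deg):
    """Zeroth-order τ-ω brightness temperature for one polarisation.
    t_soil, t_veg : soil / canopy physical temperatures (K)
    r_soil        : rough-soil reflectivity (0..1)
    tau           : vegetation optical depth at nadir
    omega         : single-scattering albedo
    theta_deg     : incidence angle (degrees)"""
    gamma = math.exp(-tau / math.cos(math.radians(theta_deg)))  # canopy transmissivity
    e_soil = 1.0 - r_soil                                       # soil emissivity
    return (t_soil * e_soil * gamma                             # attenuated soil emission
            + t_veg * (1.0 - omega) * (1.0 - gamma)             # direct canopy emission
            + t_veg * (1.0 - omega) * (1.0 - gamma) * r_soil * gamma)  # canopy emission reflected by soil
```

The two limits behave as expected: with τ = 0 the brightness temperature reduces to soil emission alone, and for a dense canopy it approaches T_veg(1 − ω).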
Abstract:
We separate and quantify the sources of uncertainty in projections of regional (~2,500 km) precipitation changes for the twenty-first century using the CMIP3 multi-model ensemble, allowing a direct comparison with a similar analysis for regional temperature changes. For decadal means of seasonal mean precipitation, internal variability is the dominant uncertainty for predictions of the first decade everywhere, and for many regions until the third decade ahead. Model uncertainty is generally the dominant source of uncertainty for longer lead times. Scenario uncertainty is found to be small or negligible for all regions and lead times, apart from close to the poles at the end of the century. For the global mean, model uncertainty dominates at all lead times. The signal-to-noise ratio (S/N) of the precipitation projections is highest at the poles but less than 1 almost everywhere else, and is far lower than for temperature projections. In particular, the tropics have the highest S/N for temperature, but the lowest for precipitation. We also estimate a ‘potential S/N’ by assuming that model uncertainty could be reduced to zero, and show that, for regional precipitation, the gains in S/N are fairly modest, especially for predictions of the next few decades. This finding suggests that adaptation decisions will need to be made in the context of high uncertainty concerning regional changes in precipitation. The potential to narrow uncertainty in regional temperature projections is far greater. These conclusions on S/N are for the current generation of models; the real signal may be larger or smaller than the CMIP3 multi-model mean. Also note that the S/N for extreme precipitation, which is more relevant for many climate impacts, may be larger than for the seasonal mean precipitation considered here.
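The arithmetic behind these S/N statements treats the three uncertainty sources as independent and additive in variance, so the 'potential S/N' simply drops the model-variance term from the denominator. A minimal illustration of that decomposition, with made-up numbers rather than the paper's values:

```python
import math

def signal_to_noise(signal, var_internal, var_model, var_scenario):
    """S/N with the three uncertainty sources assumed independent,
    so their variances add in the noise term."""
    return abs(signal) / math.sqrt(var_internal + var_model + var_scenario)

def potential_sn(signal, var_internal, var_model, var_scenario):
    """'Potential' S/N if model uncertainty were reduced to zero:
    the model-variance term is dropped from the denominator."""
    return abs(signal) / math.sqrt(var_internal + var_scenario)
```

With, say, a 0.3 signal against variances 0.04 (internal), 0.05 (model) and 0.01 (scenario), eliminating model uncertainty raises S/N from about 0.95 to about 1.34; when internal variability dominates, the gain is correspondingly modest.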
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert’s curve, as it possesses good locality-preserving properties. However, little comparison exists between Hilbert’s curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert’s curve, Sammon’s mapping and Principal Component Analysis have been used to generate a 1D space with locality-preserving properties. This work provides empirical evidence to support the use of Hilbert’s curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2D network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms Hilbert’s curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
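Hilbert's curve maps grid coordinates to a single index so that points close on the curve are close on the grid. The standard 2D coordinate-to-index mapping is sketched below; the landmark vectors in the paper are generally higher-dimensional, so 2D is shown only for brevity, and quantising latencies onto the grid is assumed done beforehand.

```python
def xy2d(n, x, y):
    """Map grid point (x, y), 0 <= x, y < n with n a power of two,
    to its distance d along the Hilbert curve filling the n x n grid."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the recursion stays self-similar
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

The locality-preserving property is easy to check: consecutive curve indices always sit at unit grid distance from each other.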
Abstract:
One of the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been largely explored in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested. Two approaches are based on a static partitioning of the data set and a third solution incorporates a dynamic load balancing policy.
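As a baseline for the variants discussed, plain Lloyd's k-Means performs a full O(n·k) distance scan on every iteration; the KD-Tree "filtering" approaches save work by pruning whole subtrees of points whose nearest centroid cannot change. The stdlib sketch below shows only that baseline (names and defaults are my own), with the scan the tree-based methods avoid marked:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-Means over a list of coordinate tuples.
    This is the baseline that KD-Tree 'filtering' variants speed up
    by pruning subtrees instead of scanning every point against
    every centroid."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # O(k) distance scan per point -- the cost tree pruning avoids
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[j] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids
```

On two well-separated blobs the loop converges to the blob means regardless of which points are drawn as the initial centroids.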
Abstract:
The identification and characterization of differential gene expression from tissues subjected to stress has gained much attention in plant research. The recognition of elements involved in the response to a particular stress enhances the possibility of promoting crop improvement through direct genetic modification. However, the performance of some of the 'first generation' of transgenic plants with the incorporation of a single gene has not always been as expected. These results have stimulated the development of new transgenic constructions introducing more than one gene and capable of modifying complex pathways. Several techniques are available to conduct the analysis of gene regulation, with such information providing the basis for novel constructs specifically designed to modify metabolism. This review deals with techniques that allow the identification and characterization of differentially-expressed genes and the use of molecular pathway information to produce transgenic plants.
Abstract:
An optimized protocol has been developed for the efficient and rapid genetic modification of sugar beet (Beta vulgaris L.). A polyethylene glycol-mediated DNA transformation technique could be applied to protoplast populations enriched specifically for a single totipotent cell type derived from stomatal guard cells, to achieve high transformation frequencies. Bialaphos resistance, conferred by the pat gene, produced a highly efficient selection system. The majority of plants were obtained within 8 to 9 weeks and were appropriate for plant breeding purposes. All were resistant to glufosinate-ammonium-based herbicides. Detailed genomic characterization has verified transgene integration, and progeny analysis showed Mendelian inheritance.
Abstract:
The wild common bean (Phaseolus vulgaris) is widely but discontinuously distributed from northern Mexico to northern Argentina on both sides of the Isthmus of Panama. Little is known about how the species has reached its current disjunct distribution. In this research, chloroplast DNA polymorphisms in seven non-coding regions were used to study the history of migration of wild P. vulgaris between Mesoamerica and South America. A penalized likelihood analysis was applied to previously published Leguminosae ITS data to estimate divergence times between P. vulgaris and its sister taxa from Mesoamerica, and divergence times of populations within P. vulgaris. Fourteen chloroplast haplotypes were identified by PCR-RFLP and their geographical associations were studied by means of a Nested Clade Analysis and Mantel Tests. The results suggest that the haplotypes are not randomly distributed but occupy discrete parts of the geographic range of the species. The current distribution of haplotypes may be explained by isolation by distance and by at least two migration events between Mesoamerica and South America: one from Mesoamerica to South America and another from northern South America to Mesoamerica. Age estimates place the divergence of P. vulgaris from its sister taxa from Mesoamerica at or before 1.3 Ma, and divergence of populations from Ecuador-northern Peru at or before 0.6 Ma. As these ages are taken as minimum divergence times, the influence of past events, such as the closure of the Isthmus of Panama and the final uplift of the Andes, on the migration history and population structure of this species cannot be disregarded.
Abstract:
As the ideal method of assessing the nutritive value of a feedstuff, namely offering it to the appropriate class of animal and recording the production response obtained, is neither practical nor cost-effective, a range of feed evaluation techniques has been developed. Each of these balances some degree of compromise with the practical situation against data generation. However, due to the impact of animal-feed interactions over and above that of feed composition, the target animal remains the ultimate arbiter of nutritional value. In this review, current in vitro feed evaluation techniques are examined according to the degree of animal-feed interaction. Chemical analysis provides absolute values and therefore differs from the majority of in vitro methods, which simply rank feeds. However, with no host animal involvement, estimates of nutritional value are inferred by statistical association. In addition, given the costs involved, the practical value of many of the analyses conducted should be reviewed. The in sacco technique has made a substantial contribution both to understanding rumen microbial degradative processes and to the rapid evaluation of feeds, especially in developing countries. However, the numerous shortfalls of the technique, common to many in vitro methods, the desire to eliminate the use of surgically modified animals for routine feed evaluation, and parallel improvements in in vitro techniques will see this technique increasingly replaced. The majority of in vitro systems use substrate disappearance to assess degradation; however, this provides no information regarding the quantity of derived end-products available to the host animal. As measurement of volatile fatty acids or microbial biomass production greatly increases analytical costs, fermentation gas release, a simple and non-destructive measurement, has been used as an alternative.
However, as gas release alone is of little use, gas-based systems, in which both degradation and fermentation gas release are measured simultaneously, are attracting considerable interest. Alternative microbial inocula are being considered, as is the potential of using multi-enzyme systems to examine degradation dynamics. It is concluded that while chemical analysis will continue to form an indispensable part of feed evaluation, enhanced use will be made of increasingly complex in vitro systems. It is vital, however, that the function and limitations of each methodology are fully understood and that the temptation to over-interpret the data is avoided, so as to draw the appropriate conclusions. With careful selection and correct application, in vitro systems offer powerful research tools with which to evaluate feedstuffs. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of it having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
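The conditional bias at issue is easy to exhibit numerically: when two equal-mean normal populations are sampled and the one with the larger observed mean is selected, the selected observation systematically overestimates its true mean (for two independent standard normals, E[max] = 1/√π ≈ 0.56). The Monte Carlo sketch below uses illustrative parameters and my own function name, not the estimators of Shen or Stallard and Todd:

```python
import random
import statistics

def selection_bias(mu1, mu2, sigma, n_trials=200_000, seed=1):
    """Monte Carlo estimate of the conditional bias: the average of
    (selected observed mean - its true mean), selecting whichever of
    two normal observations is larger."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        x1 = rng.gauss(mu1, sigma)
        x2 = rng.gauss(mu2, sigma)
        if x1 >= x2:
            errors.append(x1 - mu1)  # naive estimate minus true mean
        else:
            errors.append(x2 - mu2)
    return statistics.fmean(errors)
```

With equal true means the simulated bias sits near σ/√π, illustrating why the naive estimate cannot be conditionally unbiased.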
Abstract:
Nested clade phylogeographic analysis (NCPA) is a popular method for reconstructing the demographic history of spatially distributed populations from genetic data. Although some parts of the analysis are automated, there is no unique and widely followed algorithm for doing this in its entirety, beginning with the data, and ending with the inferences drawn from the data. This article describes a method that automates NCPA, thereby providing a framework for replicating analyses in an objective way. To do so, a number of decisions need to be made so that the automated implementation is representative of previous analyses. We review how the NCPA procedure has evolved since its inception and conclude that there is scope for some variability in the manual application of NCPA. We apply the automated software to three published datasets previously analyzed manually and replicate many details of the manual analyses, suggesting that the current algorithm is representative of how a typical user will perform NCPA. We simulate a large number of replicate datasets for geographically distributed, but entirely random-mating, populations. These are then analyzed using the automated NCPA algorithm. Results indicate that NCPA tends to give a high frequency of false positives. In our simulations we observe that 14% of the clades give a conclusive inference that a demographic event has occurred, and that 75% of the datasets have at least one clade that gives such an inference. This is mainly due to the generation of multiple statistics per clade, of which only one is required to be significant to apply the inference key. We survey the inferences that have been made in recent publications and show that the most commonly inferred processes (restricted gene flow with isolation by distance and contiguous range expansion) are those that are commonly inferred in our simulations. 
However, published datasets typically yield a richer set of inferences with NCPA than obtained in our random-mating simulations, and further testing of NCPA with models of structured populations is necessary to examine its accuracy.
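The multiplicity mechanism behind the inflated false-positive rate is the familiar family-wise one: if each clade yields m statistics tested at level α and any single significant result is enough to trigger the inference key, the per-clade error rate grows as 1 − (1 − α)^m. A small illustration, under the simplifying assumption of independent tests (which real NCPA statistics are not):

```python
def familywise_rate(alpha, m):
    """Probability that at least one of m independent tests at level
    alpha is significant: 1 - (1 - alpha)^m. This is the mechanism by
    which requiring only one significant statistic per clade inflates
    the false-positive rate."""
    return 1.0 - (1.0 - alpha) ** m
```

For example, four independent statistics per clade at α = 0.05 already push the per-clade rate above 18%, in the same direction as the elevated rates observed in the simulations.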
Abstract:
Inferring the spatial expansion dynamics of invading species from molecular data is notoriously difficult due to the complexity of the processes involved. For these demographic scenarios, genetic data obtained from highly variable markers may be profitably combined with specific sampling schemes and information from other sources using a Bayesian approach. The geographic range of the introduced toad Bufo marinus is still expanding in eastern and northern Australia, in each case from isolates established around 1960. A large amount of demographic and historical information is available on both expansion areas. In each area, samples were collected along a transect representing populations of different ages and genotyped at 10 microsatellite loci. Five demographic models of expansion, differing in the dispersal pattern for migrants and founders and in the number of founders, were considered. Because the demographic history is complex, we used an approximate Bayesian method, based on a rejection-regression algorithm, to formally test the relative likelihoods of the five models of expansion and to infer demographic parameters. A stepwise migration-foundation model with founder events was statistically better supported than the other four models in both expansion areas. Posterior distributions supported different dynamics of expansion in the studied areas. Populations in the eastern expansion area have a lower stable effective population size and have been founded by a smaller number of individuals than those in the northern expansion area. Once demographically stabilized, populations exchange a substantial number of effective migrants per generation in both expansion areas, and such exchanges are larger in northern than in eastern Australia. The effective number of migrants appears to be considerably lower than that of founders in both expansion areas. We found our inferences to be relatively robust to various assumptions on marker, demographic, and historical features.
The method presented here is the only robust, model-based method available so far, which allows inferring complex population dynamics over a short time scale. It also provides the basis for investigating the interplay between population dynamics, drift, and selection in invasive species.
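The rejection step underlying such a rejection-regression algorithm can be sketched generically: draw parameters from the prior, simulate summary statistics, and keep the draws whose statistics land within a tolerance of the observed ones. The local regression adjustment applied afterwards is omitted here; the single-statistic version below, with assumed names and defaults, is only a toy illustration of the first stage.

```python
import random
import statistics

def abc_rejection(observed_stat, simulate, prior_sample,
                  n_sims=20_000, tolerance=0.05, seed=2):
    """Basic ABC rejection: keep prior draws whose simulated summary
    statistic falls within `tolerance` of the observed statistic.
    The accepted draws approximate the posterior sample."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_sims):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) < tolerance:
            accepted.append(theta)
    return accepted
```

Toy usage: with a uniform prior on a normal mean and the sample mean of 50 draws as the summary statistic, the accepted draws concentrate around the observed value.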
Abstract:
Human D-2Long (D-2L) and D-2Short (D-2S) dopamine receptor isoforms were modified at their N-terminus by the addition of a human immunodeficiency virus (HIV) or a FLAG epitope tag. The receptors were then expressed in Spodoptera frugiperda 9 (Sf9) cells using the baculovirus system, and their oligomerization was investigated by means of co-immunoprecipitation and time-resolved fluorescence resonance energy transfer (FRET). [H-3] Spiperone labelled D-2 receptors in membranes prepared from Sf9 cells expressing epitope-tagged D-2L or D-2S receptors, with a pK(d) value of approximately 10. Co-immunoprecipitation using antibodies specific for the tags showed constitutive homo-oligomerization of D-2L and D-2S receptors in Sf9 cells. When the FLAG-tagged D-2S and HIV-tagged D-2L receptors were co-expressed, co-immunoprecipitation showed that the two isoforms can also form hetero-oligomers in Sf9 cells. Time-resolved FRET with europium and XL665-labelled antibodies was applied to whole Sf9 cells and to membranes from Sf9 cells expressing epitope-tagged D-2 receptors. In both cases, constitutive homo-oligomers were revealed for D-2L and D-2S isoforms. Time-resolved FRET also revealed constitutive homo-oligomers in HEK293 cells expressing FLAG-tagged D-2S receptors. The D-2 receptor ligands dopamine, R-(-) propylnorapomorphine, and raclopride did not affect oligomerization of D-2L and D-2S in Sf9 and HEK293 cells. Human D-2 dopamine receptors can therefore form constitutive oligomers in Sf9 cells and in HEK293 cells that can be detected by different approaches, and D-2 oligomerization in these cells is not regulated by ligands.
Abstract:
Phenolic compounds in wastewaters are difficult to treat using the conventional biological techniques such as activated sludge processes because of their bio-toxic and recalcitrant properties and the high volumes released from various chemical, pharmaceutical and other industries. In the current work, a modified heterogeneous advanced Fenton process (AFP) is presented as a novel methodology for the treatment of phenolic wastewater. The modified AFP, which is a combination of hydrodynamic cavitation generated using a liquid whistle reactor and the AFP, is a promising technology for wastewaters containing high organic content. The presence of hydrodynamic cavitation in the treatment scheme intensifies the Fenton process by generation of additional free radicals. Also, the turbulence produced during the hydrodynamic cavitation process increases the mass transfer rates as well as providing better contact between the pseudo-catalyst surfaces and the reactants. A multivariate design of experiments has been used to ascertain the influence of hydrogen peroxide dosage and iron catalyst loadings on the oxidation performance of the modified AFP. Higher TOC removal rates were achieved with increased concentrations of hydrogen peroxide. In contrast, the effect of catalyst loadings was less important on the TOC removal rate under conditions used in this work although there is an optimum value of this parameter. The concentration of iron species in the reaction solution was measured at 105 min and its relationship with the catalyst loadings and hydrogen peroxide level is presented.
Abstract:
In designing modern office buildings, building spaces are frequently zoned by introducing internal partitioning, which may have a significant influence on the room air environment. This internal partitioning was studied by means of model tests, numerical simulation and, as the final stage, statistical analysis. In this paper, the results produced from the statistical analysis are summarized and presented.