997 results for Scale insects


Relevance:

20.00%

Publisher:

Abstract:

Rapid screening tests and an appreciation of the simple genetic control of Alternaria brown spot (ABS) susceptibility have existed for many years, and yet the application of this knowledge to commercial-scale breeding programs has been limited. Detached leaf assays were first demonstrated more than 40 years ago, and reliable data suggesting that a single gene determines susceptibility have been emerging for at least 20 years. However, it is only recently that the requirement for genetic resistance in new hybrids has become a priority, following increased disease prevalence in Australian mandarin production areas previously considered too dry for the pathogen. Almost all of the high-fruit-quality parents developed so far by the Queensland-based breeding program are susceptible to ABS, necessitating the screening of their progeny to avoid commercialisation of susceptible hybrids. This is done effectively and efficiently by spraying 3–6-month-old hybrid seedlings with a spore suspension derived from a toxin-producing field isolate of Alternaria alternata, then incubating these seedlings in a cool room at 25°C and high humidity for 5 days. Susceptible seedlings show clear disease symptoms and are discarded. Analysis of observed and expected segregation ratios loosely supports the hypothesis of a single dominant gene for susceptibility, but does not rule out alternative genetic models. After implementing routine screening for ABS resistance for three seasons, we now have more than 20,000 hybrids growing in field progeny blocks that have been screened for resistance to ABS.
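
As a hedged illustration of the segregation analysis described above, the sketch below tests hypothetical seedling counts against the 1:1 ratio expected when a heterozygous susceptible parent (Aa) is crossed with a resistant one (aa); the counts and the choice of ratio are assumptions for illustration, not data from the study.

```python
# Illustrative only: chi-square goodness-of-fit test of observed seedling
# counts against the 1:1 segregation expected under a single dominant
# susceptibility gene (Aa x aa cross). Counts are made up.
from scipy.stats import chisquare

observed = [412, 388]                      # assumed [susceptible, resistant]
ratio = [1, 1]                             # expected segregation ratio
total = sum(observed)
expected = [total * r / sum(ratio) for r in ratio]

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
# A non-significant p-value is consistent with, but does not prove, the
# single-dominant-gene model -- matching the "loose support" noted above.
```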

Relevance:

20.00%

Publisher:

Abstract:

A rare opportunity to test hypotheses about potential fishery benefits of large-scale closures was initiated in July 2004, when an additional 28.4% of the 348 000 km² Great Barrier Reef (GBR) region of Queensland, Australia was closed to all fishing. Advice to the Australian and Queensland governments that supported this initiative predicted these additional closures would generate minimal (10%) initial reductions in both catch and landed value within the GBR area, with recovery of catches becoming apparent after three years. To test these predictions, commercial fisheries data from the GBR area and from the two adjacent (non-GBR) areas of Queensland were compared for the periods immediately before and after the closures were implemented. The observed means for total annual catch and value within the GBR declined from pre-closure (2000–2003) levels of 12 780 Mg and Australian $160 million to initial post-closure (2005–2008) levels of 8143 Mg and $102 million; decreases of 35% and 36%, respectively. Because the reference areas in the non-GBR had minimal changes in catch and value, the beyond-BACI (before, after, control, impact) analyses estimated initial net reductions within the GBR of 35% for both total catch and value. There was no evidence of recovery in total catch levels, or any comparative improvement in catch rates, within the GBR nine years after implementation. These results are not consistent with the advice to governments that the closures would have minimal initial impacts and rapidly generate benefits to fisheries in the GBR through increased juvenile recruitment and adult spillover. Instead, the absence to date of any evidence of recovery in catches supports an alternative hypothesis: where fisheries management is already effective, closing areas to all fishing will reduce overall catches by roughly the percentage of the fished area that is closed.
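
For readers checking the arithmetic, the headline declines follow directly from the reported means; the sketch below also shows the BACI-style net-effect calculation in schematic form (the control-area change is a placeholder, since the abstract states only that the reference areas changed minimally).

```python
# Reported GBR means (from the abstract).
catch_before, catch_after = 12780.0, 8143.0   # Mg per year
value_before, value_after = 160.0, 102.0      # million AUD per year

def pct_change(before, after):
    return 100.0 * (after - before) / before

# Both print about -36% from these rounded means.
print(f"GBR catch change: {pct_change(catch_before, catch_after):.0f}%")
print(f"GBR value change: {pct_change(value_before, value_after):.0f}%")

# BACI-style net effect: impact-area change minus control-area change.
# The control change here is an assumed placeholder; the abstract reports
# only "minimal changes" in the non-GBR reference areas.
control_change = -1.0   # percent, assumed
net_effect = pct_change(catch_before, catch_after) - control_change
print(f"net catch effect: {net_effect:.0f}%")
```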

Relevance:

20.00%

Publisher:

Abstract:

Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines, including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction, and the study of the evolution of metabolism.

In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The gapless approach offers a compromise, in terms of complexity and feasibility, between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae.

In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it for real-world instances. We also describe computational techniques for resolving ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software intended for the reconstruction of models for ¹³C metabolic flux analysis.

In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method generates results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
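
To make the notion of a "gapless" network concrete, the following toy sketch (an assumption-laden illustration, not the thesis' algorithm) checks which reactions of a small invented network can ever fire when metabolites are expanded iteratively from a set of seed nutrients; reactions that never fire indicate the connectivity gaps the reconstruction method is designed to avoid.

```python
# Toy network: each reaction maps a set of substrates to a set of products.
# Names and stoichiometry-free structure are invented for illustration.
reactions = {
    "R1": ({"glucose"}, {"g6p"}),
    "R2": ({"g6p"}, {"pyruvate"}),
    "R3": ({"pyruvate", "nh3"}, {"alanine"}),
    "R4": ({"missing_cofactor"}, {"orphan_product"}),  # a gap: unreachable
}
seeds = {"glucose", "nh3"}

def expand(seeds, reactions):
    """Iteratively fire every reaction whose substrates are all producible."""
    producible = set(seeds)
    fired = set()
    changed = True
    while changed:
        changed = False
        for name, (subs, prods) in reactions.items():
            if name not in fired and subs <= producible:
                producible |= prods
                fired.add(name)
                changed = True
    return producible, fired

producible, fired = expand(seeds, reactions)
print("unreachable reactions (gaps):", set(reactions) - fired)  # {'R4'}
```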

Relevance:

20.00%

Publisher:

Abstract:

Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions encompass a considerable proportion of the genetic variation between human individuals, and in a number of cases they have been closely linked with various inheritable diseases. Single-nucleotide polymorphisms (SNPs) are another large part of the genetic variation between individuals; they are typically abundant, and measuring them is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNV. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry; technically, the simulator also models multiple crossovers by using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity, and were applied to the HapMap Phase II and Phase III data sets, yielding a number of candidate novel inversions and deletions and correctly detecting known rearrangements.
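
The EM estimation of haplotype frequencies mentioned above is, at its core, the classic two-locus haplotype EM; the sketch below implements that textbook version for a two-SNP window (the genotype data and iteration count are made up, and this is not the thesis code — extending the haplotype set with a deletion allele follows the same pattern).

```python
# Textbook two-SNP haplotype-frequency EM (illustrative; genotypes made up).
# Genotypes are unphased 0/1/2 counts of the '1' allele at each SNP; only
# double heterozygotes (1, 1) are phase-ambiguous.
genotypes = [(0, 0), (1, 1), (2, 2), (1, 1), (1, 0), (2, 1)]

haps = [(0, 0), (0, 1), (1, 0), (1, 1)]
freq = {h: 0.25 for h in haps}                       # uniform start

for _ in range(50):                                  # EM iterations
    counts = {h: 0.0 for h in haps}
    for g1, g2 in genotypes:
        # Enumerate ordered haplotype pairs consistent with the genotype.
        pairs = [(a, b) for a in haps for b in haps
                 if a[0] + b[0] == g1 and a[1] + b[1] == g2]
        norm = sum(freq[a] * freq[b] for a, b in pairs)
        for a, b in pairs:                           # E-step: split weight
            w = freq[a] * freq[b] / norm
            counts[a] += w
            counts[b] += w
    total = sum(counts.values())
    freq = {h: c / total for h, c in counts.items()} # M-step: re-normalise

print({h: round(f, 3) for h, f in freq.items()})
```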

Relevance:

20.00%

Publisher:

Abstract:

The transport of live fish is a crucial step in establishing fish culture in captivity, and it is especially challenging for species that have not commonly been cultured before; transport and handling methods therefore need to be optimized and tailored. This study describes the use of tuna tubes for the small-scale transport of medium-sized pelagic fish from the family Scombridae. Tuna tubes are an array of vertical tubes that hold the fish while fresh seawater is pumped up the tubes and through the fish's mouth and gills, providing oxygen and removing wastes. In this study, 19 fish were captured using rod and line, and 42% of them were transported alive in the custom-designed tuna tubes to an on-shore holding tank: five mackerel tuna (Euthynnus affinis) and three leaping bonito (Cybiosarda elegans). Of these, just three (15.8% of all captured fish) acclimatized to the tank conditions. Based on these results, we discuss an improved design of the tuna tubes that has the potential to increase survival rates and enable a simple, low-cost method of transporting live pelagic fish.

Relevance:

20.00%

Publisher:

Abstract:

Natural history collections are an invaluable resource housing a wealth of knowledge, with a long tradition of contributing to a wide range of fields such as taxonomy, quarantine, conservation and climate change. It is recognized, however [Smith and Blagoderov 2012], that such physical collections are often heavily underutilized as a result of the practical issues of accessibility. The digitization of these collections is a step towards removing these access issues, but other hurdles must be addressed before we truly unlock the potential of this knowledge.

Relevance:

20.00%

Publisher:

Abstract:

Worldwide population growth and economic agglomeration are driving increasing urban density within larger metropolitan conurbations. Population growth and housing diversity and affordability issues in Queensland have seen an increasing demand for more diverse and higher-density development. Under Queensland's flexible planning regulatory provisions, a level of 'medium' to 'high' density is being achieved by a focus on fine-grained urban design, low-scale development, lot diversity, and the delivery of single-dwelling products. For Queensland (and Australia) this has been an unprecedented innovation in urban and dwelling design: dwellings are being delivered on lots with zero regulatory minimum sizes, providing for a range of new products including 'apartments on the ground'. This paper reviews recent and nascent demonstrations of Economic Development Queensland's (EDQ's) fine-grained urbanism principles, identifiable with historical 'vernacular suburbanism'. The paper introduces and defines the concept of a 'natural density' linking human-scale built form with walkability, and challenges the notion that (sub)urban development outside major city centres needs to be of a higher scale to achieve density and diversity aspirations. 'Natural density' provides a means of meeting the increasing demand for more diverse and higher-density development.

Relevance:

20.00%

Publisher:

Abstract:

A simple method for preparing bulk quantities of tRNA from chick embryo has been developed. In this method, chick embryos were homogenized in a buffer of pH 4.5, followed by deproteinization with phenol. The aqueous layer was allowed to separate under gravity. The resulting aqueous layer, after two more phenol treatments, was passed directly through a DEAE-cellulose column and the tRNA eluted therefrom with 1 M NaCl. The tRNA prepared by this method was as active as that prepared at neutral pH.

Relevance:

20.00%

Publisher:

Abstract:

During the past ten years, large-scale transcript analysis using microarrays has become a powerful tool for identifying and predicting functions for new genes. It allows simultaneous monitoring of the expression of thousands of genes and is routinely used in laboratories worldwide. Microarray analysis will, together with other functional genomics tools, take us closer to understanding the functions of all genes in the genomes of living organisms.

Flower development is a genetically regulated process which has mostly been studied in the traditional model species Arabidopsis thaliana, Antirrhinum majus and Petunia hybrida. The molecular mechanisms behind flower development in these species are partly applicable to other plant systems, but not all biological phenomena can be approached with just a few model systems. In order to understand and apply the knowledge to ecologically and economically important plants, other species also need to be studied.

Sequencing of 17 000 ESTs from nine different cDNA libraries of the ornamental plant Gerbera hybrida made it possible to construct a cDNA microarray with 9000 probes, representing all of the different ESTs in the database. Of the gerbera ESTs, 20% were unique to gerbera, while 373 were specific to the Asteraceae family of flowering plants. Gerbera has composite inflorescences with three types of flowers that differ from each other morphologically. The marginal ray flowers are large, often pigmented and female, while the central disc flowers are smaller, more radially symmetrical perfect flowers. Intermediate trans flowers are similar to ray flowers but smaller in size. This feature, together with the molecular tools developed for gerbera, makes it a unique system in comparison to the common model plants, whose inflorescences carry only a single kind of flower.

In the first part of this thesis, conditions for gerbera microarray analysis were optimised, including experimental design, sample preparation and hybridization, as well as data analysis and verification; in addition, flower- and flower organ-specific genes were identified. After the reliability and reproducibility of the method were confirmed, the microarrays were used to investigate transcriptional differences between ray and disc flowers. This study revealed novel information about morphological development as well as the transcriptional regulation of the early stages of development in the various gerbera flower types. The most interesting finding was the differential expression of MADS-box genes, suggesting the existence of flower type-specific regulatory complexes in the specification of the different flower types.

The gerbera microarray was further used to profile changes in expression during petal development. Gerbera ray flower petals are large, which makes them an ideal model for studying organogenesis. Six stages were compared and analysed in detail. Expression profiles of genes related to cell structure and growth implied that during stage 2 cells divide, a process marked by the expression of histones, cyclins and tubulins. Stage 4 was found to be a transition stage between cell division and expansion, and by stage 6 cells had stopped dividing and instead underwent expansion. Interestingly, the highest number of upregulated genes was detected at the last analysed stage, stage 9, when cells no longer grew.

The gerbera microarray is a fully functioning tool for large-scale studies of flower development, and correlation with real-time RT-PCR results shows that it is also highly sensitive and reliable. The gene expression data presented here will be a source for expression mining and marker gene discovery in future studies performed in the Gerbera Laboratory, and the publicly available data will also serve the plant research community worldwide.
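
As a rough illustration of the kind of per-gene differential-expression testing such ray-versus-disc comparisons involve, the sketch below runs gene-wise t-tests with Benjamini-Hochberg FDR control on simulated data (the matrix sizes, effect sizes and cutoff are all invented; the thesis' actual analysis pipeline is not specified in this abstract).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy expression matrix: 1000 genes x 8 arrays per flower type.
ray = rng.normal(8.0, 1.0, size=(1000, 8))
disc = rng.normal(8.0, 1.0, size=(1000, 8))
disc[:25] += 3.0                 # spike in 25 "flower-type-specific" genes

t, p = stats.ttest_ind(ray, disc, axis=1)   # one t-test per gene

# Benjamini-Hochberg FDR control at q = 0.05: reject the k smallest
# p-values, where k is the largest rank with p(k) <= q * k / m.
order = np.argsort(p)
m = len(p)
thresholds = 0.05 * (np.arange(1, m + 1) / m)
passed = p[order] <= thresholds
k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
significant = order[:k]
print(f"{len(significant)} genes called differentially expressed")
```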

Relevance:

20.00%

Publisher:

Abstract:

The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly CO₂. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs; the Kyoto Protocol is an offspring of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates.

The aims of this thesis were i) to estimate the changes in vegetation and soil carbon stocks in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design for soil C stock changes.

The applied methodology builds on measured forest inventory data (or modelled stand data), and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a⁻¹, and the mean soil C sink was 0.7 Tg C a⁻¹. Soil is slowly accumulating C as a consequence of the increased growing stock, and because soil C stocks are unsaturated in relation to the current detritus input, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently opposite in sign (e.g. vegetation was a sink while soil was a source).

The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy due to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation; otherwise, the gains in sampling efficiency from stratification remain smaller.

The highly variable and frequently opposite annual changes in ecosystem C pools underline the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and both properties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data, owing to the non-linear decomposition process that is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications. The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
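
The role of the dynamic soil C model can be illustrated with the simplest possible decomposition scheme, a single pool with first-order decay, dC/dt = I - kC (a deliberately reduced sketch with invented numbers; the inventory method itself uses a multi-pool litter decomposition model driven by modelled litter inputs).

```python
# Minimal one-pool soil carbon model: dC/dt = input - k * C.
# All numbers are illustrative, not inventory values.
k = 0.05            # decomposition rate, 1/year (assumed)
carbon = 50.0       # initial soil C stock, arbitrary units (assumed)

def step(carbon, litter_input, k, dt=1.0):
    """Advance the stock one time step with first-order decay (Euler)."""
    return carbon + dt * (litter_input - k * carbon)

for year in range(1, 101):
    litter = 3.0 if year < 50 else 3.6   # assumed step increase in detritus input
    carbon = step(carbon, litter, k)

# The stock relaxes toward the new equilibrium input / k = 72, so the soil
# remains a slow sink while its C stock is still below that level -- the
# qualitative behaviour described in the abstract.
print(f"soil C after 100 years: {carbon:.1f}")
```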

Relevance:

20.00%

Publisher:

Abstract:

In the spectral stochastic finite element method for analyzing an uncertain system, the uncertainty is represented by a set of random variables, and a quantity of interest such as the system response is considered as a function of these random variables. Consequently, the underlying Galerkin projection yields a block system of deterministic equations where the blocks are sparse but coupled. The solution of this algebraic system of equations rapidly becomes challenging when the size of the physical system and/or the level of uncertainty is increased. This paper addresses this challenge by presenting a preconditioned conjugate gradient method for such block systems, where the preconditioning step is based on the dual-primal finite element tearing and interconnecting (FETI-DP) method equipped with a Krylov subspace reuse technique for accelerating the iterative solution of systems with multiple and repeated right-hand sides. Preliminary performance results on a Linux cluster suggest that the proposed solution method is numerically scalable and demonstrate its potential for making the uncertainty quantification of realistic systems tractable.
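
For reference, a minimal generic preconditioned conjugate gradient loop is sketched below in plain NumPy, with a simple Jacobi preconditioner as a stand-in (the paper's preconditioner is the far more elaborate FETI-DP method with Krylov subspace reuse; the test matrix here is invented).

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for SPD A; M_inv applies M^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(1)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.normal(size=50)
x = pcg(A, b, lambda r: r / np.diag(A))
print("residual:", np.linalg.norm(A @ x - b))
```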

Relevance:

20.00%

Publisher:

Abstract:

The Capercaillie (Tetrao urogallus L.) is often used as a focal species for landscape-ecological studies: the minimum size for its lekking area is 300 ha, and the annual home range of an individual may cover 30–80 km². In Finland, Capercaillie populations have decreased by approximately 40–85%, with the declines likely to have started in the 1940s. Although the declines have partly stabilized from the 1990s onwards, it is obvious that the negative population trend was at least partly caused by changes in human land use.

The aim of this thesis was to study the connections between human land use and Capercaillie populations in Finland at several spatial and temporal scales. First, the effect of forest age structure on Capercaillie population trends was studied in 18 forestry board districts in Finland during 1965–1988. Second, the abundances of Capercaillie and Moose (Alces alces L.) were compared in terms of several land-use variables on a scale of 50 × 50 km grids and in five regions of Finland. Third, the effects of forest cover and fine-grain forest fragmentation on Capercaillie lekking-area persistence were studied in three study locations in Finland, at 1000 and 3000 m spatial scales surrounding the leks. The analyses of lekking areas were performed with two definitions of forest: > 60 and > 152 m³ ha⁻¹ of timber volume.

The results show that patterns and processes at large spatial scales strongly influence Capercaillie in Finland. In particular, in southwestern and eastern Finland, high forest cover and low human impact were found to be beneficial for this species. Forest cover (> 60 m³ ha⁻¹ of timber) surrounding the lekking sites positively affected lekking-area persistence only at the larger landscape scale (3000 m radius). The effects of older forest classes were hard to assess due to the scarcity of older forests in several study areas. Young and middle-aged forest classes were common in the vicinity of areas with high Capercaillie abundances, especially in northern Finland. The increase in the amount of younger forest classes did not provide a good explanation for the Capercaillie population decline in 1965–1988. In addition, there was no significant connection between mature forests (> 152 m³ ha⁻¹ of timber) and lekking-area persistence in Finland. It seems that in present-day Finnish landscapes, the area covered with old forest is either too scarce to efficiently explain the abundance of Capercaillie and the persistence of lekking areas, or the effect of forest age is only important at smaller spatial scales than the ones studied in this thesis.

In conclusion, larger spatial scales should be considered in future Capercaillie management. According to the proposed multi-level planning, the first priority should be to secure large, regional-scale forest cover, and the second priority should be to maintain a fine-grained, heterogeneous structure within the separate forest patches. Management units covering hundreds of hectares, or even tens or hundreds of square kilometers, should be adopted, which requires regional-level land-use planning and co-operation between forest owners.

Relevance:

20.00%

Publisher:

Abstract:

The problem of unsupervised anomaly detection arises in a wide variety of practical applications. While one-class support vector machines have demonstrated their effectiveness as an anomaly detection technique, their ability to model large datasets is limited by the memory and time complexity of their training. To address this issue for supervised learning of kernel machines, there has been growing interest in random projection methods as an alternative to the computationally expensive problems of kernel matrix construction and support vector optimisation. In this paper we leverage the theory of nonlinear random projections and propose the Randomised One-class SVM (R1SVM), an efficient and scalable anomaly detection technique that can be trained on large-scale datasets. Our empirical analysis on several real-life and synthetic datasets shows that our randomised 1SVM algorithm achieves accuracy comparable to or better than deep autoencoders and traditional kernelised approaches for anomaly detection, while being approximately 100 times faster in training and testing.
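
The core idea — replace the implicit kernel with an explicit nonlinear random projection and then train a cheap linear one-class SVM — can be sketched with scikit-learn's random Fourier features (an illustrative approximation in the spirit of R1SVM, not the authors' implementation; the dataset, gamma, n_components and nu values are invented).

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(2000, 10))          # training data: normal class
test = np.vstack([rng.normal(0, 1, size=(50, 10)),  # 50 normal points
                  rng.normal(5, 1, size=(50, 10))]) # 50 obvious anomalies

# Nonlinear random projection approximating an RBF kernel feature map.
rp = RBFSampler(gamma=0.1, n_components=200, random_state=0)
Z_train = rp.fit_transform(normal)

# A linear one-class SVM in the randomised feature space stands in for
# the memory- and time-hungry kernelised 1SVM.
clf = OneClassSVM(kernel="linear", nu=0.05).fit(Z_train)
pred = clf.predict(rp.transform(test))              # +1 normal, -1 anomaly
print("flagged anomalies in second half:", np.sum(pred[50:] == -1))
```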