995 results for ARTIFICIAL MULTIPLE TETRAPLOID


Relevance:

90.00%

Abstract:

Background: Some triploid and tetraploid clones have been identified in the gynogenetic gibel carp, Carassius auratus gibelio Bloch, by karyotypic and cytologic analyses over many years. Further, 5-20% males and karyotypic diversity have been found among their natural and artificial populations. However, the DNA contents and their relation to ploidy level and chromosome number have not been ascertained, and whether normal meiosis occurs in spermatogenesis needs to be determined in the different clones. Methods: The sampled blood cells or sperm were mixed with blood cells from chicken or individual gibel carp and fixed in 70% pre-cooled ethanol overnight at 4°C. The mixed cell pellets were washed 2-3 times in 1× phosphate buffered saline and then resuspended in a solution containing 0.5% pepsin and 0.1 M HCl. DNA was stained with propidium iodide solution (40 μg/mL) containing 4 kU/mL RNase. The measurements of DNA content were performed with Phoenix Flow Systems. Results: Triploid clones A, E, F, and P had almost equal DNA contents, but triploid clone D had a greater DNA content than the other four triploid clones. The DNA content of clone M (7.01 ± 0.15 pg/nucleus) was almost equal to the DNA content of clone D (5.38 ± 0.06 pg/nucleus) plus the DNA content of common carp sperm (1.64 ± 0.02 pg/nucleus). The DNA contents of sperm from clones A, P, and D were half those of their blood cells, suggesting that normal meiosis occurs in spermatogenesis. Conclusions: Flow cytometry is a powerful method to analyze genetic heterogeneity and ploidy level among different gynogenetic clones of polyploid gibel carp. Through this study, four questions have been answered. (a) The DNA content correlation among the five triploid clones and one multiple tetraploid clone was revealed in the gibel carp, and the contents increased not only with ploidy level but also with chromosome number. (b) The mean DNA content of the six extra chromosomes of clone D was 0.052 pg, higher than that of each chromosome in clones A, E, F, and P (about 0.032 pg/chromosome). This means that the six extra chromosomes are larger chromosomes. (c) Normal meiosis occurred during spermatogenesis of the gibel carp, because the DNA contents of sperm from clones A, P, and D were almost half of those in their blood cells. (d) Multiple tetraploid clone M (7.01 ± 0.15 pg/nucleus) contained the complete genome of clone D (5.38 ± 0.06 pg/nucleus) and the genome of common carp sperm (1.64 ± 0.02 pg/nucleus). Cytometry Part A 56A:46-52, 2003. © 2003 Wiley-Liss, Inc.
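As a minimal sketch, the additive relation reported in the abstract can be checked directly from the quoted values; the idea of expressing DNA content in "carp-haploid equivalents" below is an illustrative assumption, not an analysis from the paper.

```python
# Minimal sketch (values copied from the abstract, not the paper's analysis code):
# check the additive DNA-content relation and express contents relative to the
# common carp sperm value used here as a rough haploid reference.

CARP_SPERM_HAPLOID = 1.64   # pg/nucleus, common carp sperm
CLONE_D = 5.38              # pg/nucleus, triploid clone D
CLONE_M = 7.01              # pg/nucleus, multiple tetraploid clone M

# Clone M should contain the complete clone D genome plus one carp sperm genome.
expected_m = CLONE_D + CARP_SPERM_HAPLOID
print(f"expected clone M content: {expected_m:.2f} pg, measured: {CLONE_M:.2f} pg")

# Indicative only: gibel carp and common carp genome sizes differ.
for name, content in [("clone D", CLONE_D), ("clone M", CLONE_M)]:
    print(f"{name}: ~{content / CARP_SPERM_HAPLOID:.1f} carp-haploid equivalents")
```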

Relevance:

90.00%

Abstract:

This paper describes in detail a real-time multiple face detection system for video streams. The system combines the good performance of a window-shift approach with additional cues that become available in video streams through temporal coherence. This combined solution outperforms the basic face detector, achieving a 98% success rate over around 27,000 images, while additionally providing eye detection and relating successive detections over time by means of detection threads.

Relevance:

40.00%

Abstract:

Background: Nicotiana benthamiana is an allotetraploid plant, which can be challenging for de novo transcriptome assembly due to homeologous and duplicated gene copies. Transcripts generated from such genes can be distinct yet highly similar in sequence, with markedly differing expression levels. This can lead to unassembled, partially assembled or mis-assembled contigs. Because of the different properties of de novo assemblers, no single assembler with any one parameter setting can reassemble all possible transcripts from a transcriptome. Results: In an effort to maximise the diversity and completeness of de novo assembled transcripts, we used four de novo transcriptome assemblers, TransAbyss, Trinity, SOAPdenovo-Trans, and Oases, over a range of k-mer sizes and different input RNA-seq read counts. We complemented the parameter space biologically by using RNA from 10 plant tissues. We then combined the output of all assemblies into a large super-set of sequences. Using a method from the EvidentialGene pipeline, the combined assembly was reduced from 9.9 million de novo assembled transcripts to about 235,000, of which about 50,000 were classified as primary. Metrics such as average bit-scores, feature response curves, and the ability to distinguish paralogous or homeologous transcripts indicated that the EvidentialGene-processed assembly was of high quality. Of 35 RNA silencing gene transcripts, 34 were assembled to full length, whereas in a previous assembly using only one assembler, 9 of these were partially assembled. Conclusions: To achieve a high-quality transcriptome, it is advantageous to implement and combine the output from as many different de novo assemblers as possible. We have, in essence, taken the 'best' output from each assembler while minimising sequence redundancy. We have also shown that simultaneous assessment of a variety of metrics, not just contig length, is necessary to gauge the quality of assemblies.
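The pooling step can be pictured with a short sketch. The file names below are assumptions, and the duplicate removal is a greatly simplified stand-in for the EvidentialGene redundancy-reduction step (which also classifies transcripts at the protein level); it only illustrates the idea of merging multi-assembler output into one non-redundant super-set.

```python
# Minimal sketch under assumed file names: pool contigs from several de novo assemblers
# and drop exact-duplicate sequences. Not the EvidentialGene pipeline itself.

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            else:
                seq.append(line)
    if header is not None:
        yield header, "".join(seq)

# Assumed output files from the four assemblers used in the study.
assemblies = ["transabyss.fa", "trinity.fa", "soapdenovo_trans.fa", "oases.fa"]

pooled = []
for path in assemblies:
    pooled.extend(read_fasta(path))

pooled.sort(key=lambda rec: len(rec[1]), reverse=True)   # longest contigs first

kept, seen = [], set()
for header, seq in pooled:
    if seq in seen:                                       # exact duplicate, skip
        continue
    seen.add(seq)
    kept.append((header, seq))

with open("combined_nonredundant.fa", "w") as out:
    for header, seq in kept:
        out.write(f">{header}\n{seq}\n")
```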

Relevance:

40.00%

Abstract:

For active contour modeling (ACM), we propose a novel self-organizing map (SOM)-based approach, called the batch-SOM (BSOM), that attempts to integrate the advantages of SOM- and snake-based ACMs in order to extract the desired contours from images. We employ feature points, in the form of an edge map (as obtained from a standard edge-detection operation), to guide the contour (as in SOM-based ACMs), along with the gradient and intensity variations in a local region, to ensure that the contour does not "leak" across the object boundary in the case of faulty feature points (weak or broken edges). In contrast with snake-based ACMs, however, we do not use an explicit energy functional (based on gradient or intensity) to control the contour movement. We extend the BSOM to handle extraction of the contours of multiple objects by splitting a single contour into as many subcontours as there are objects in the image. The BSOM and its extended version are tested on synthetic binary and gray-level images with both single and multiple objects. We also demonstrate the efficacy of the BSOM on images of objects having both convex and nonconvex boundaries. The results demonstrate the superiority of the BSOM over the other approaches. Finally, we analyze the limitations of the BSOM.
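For intuition, the core batch-SOM idea of snapping contour nodes onto edge-map feature points can be sketched as below. This is not the authors' implementation: the neighbourhood kernel, parameter values, and the omission of the local gradient/intensity "leak" check are simplifying assumptions.

```python
# Minimal sketch: one batch-SOM-style update of a closed contour toward edge-map points.
import numpy as np

def bsom_contour_step(nodes, edge_points, sigma=2.0):
    """Assign every edge point to its nearest contour node (best matching unit), then
    move each node to the neighbourhood-weighted mean of the points won near it along
    the circular contour topology."""
    n = len(nodes)
    # Distance from every edge point to every contour node: shape (n_points, n).
    d = np.linalg.norm(edge_points[:, None, :] - nodes[None, :, :], axis=2)
    bmu = d.argmin(axis=1)                                   # winning node per edge point

    # Circular topological distance between node index and each point's winner.
    idx = np.arange(n)
    diff = np.abs(idx[:, None] - bmu[None, :])
    topo = np.minimum(diff, n - diff)
    h = np.exp(-(topo ** 2) / (2 * sigma ** 2))              # neighbourhood weights (n, n_points)

    weights = h.sum(axis=1, keepdims=True)
    return (h @ edge_points) / np.maximum(weights, 1e-12)

# Toy usage: a circular initial contour attracted to scattered "edge pixels".
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
contour = np.c_[50 + 40 * np.cos(theta), 50 + 40 * np.sin(theta)]
edges = np.random.default_rng(0).uniform(20, 80, size=(400, 2))  # stand-in for an edge map
for _ in range(10):
    contour = bsom_contour_step(contour, edges, sigma=2.0)
```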

Relevance:

40.00%

Abstract:

The prediction of the time and efficiency of remediation of contaminated soils using soil vapor extraction remains a difficult challenge for the scientific community and consultants. This work reports the development of multiple linear regression and artificial neural network models to predict the remediation time and efficiency of soil vapor extractions performed in soils contaminated separately with benzene, toluene, ethylbenzene, xylene, trichloroethylene, and perchloroethylene. The results demonstrated that the artificial neural network approach performs better than the multiple linear regression models. The artificial neural network model allowed an accurate prediction of remediation time and efficiency based only on soil and pollutant characteristics, thereby allowing a simple and quick preliminary evaluation of the viability of the process.
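The comparison being described can be set up in a few lines of scikit-learn. The feature names, network size, and synthetic data below are illustrative assumptions, not the study's dataset or model architecture.

```python
# Minimal sketch: multiple linear regression vs. a small artificial neural network for
# predicting remediation time from soil/pollutant characteristics (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Assumed columns: water content, organic matter, pollutant vapour pressure, initial concentration.
X = rng.uniform(size=(200, 4))
y = 5 + 20 * X[:, 2] * (1 - X[:, 0]) + 10 * X[:, 3] + rng.normal(0, 1, 200)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
                    ).fit(X_tr, y_tr)

print("MLR R^2:", r2_score(y_te, mlr.predict(X_te)))
print("ANN R^2:", r2_score(y_te, ann.predict(X_te)))
```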

Relevance:

40.00%

Abstract:

Eight premature infants ventilated for hyaline membrane disease and enrolled in the OSIRIS surfactant trial were studied. Lung mechanics, gas exchange [PaCO2, arterial/alveolar PO2 ratio (a/A ratio)], and ventilator settings were determined 20 minutes before and 20 minutes after the end of Exosurf instillation, and subsequently at 12-24 hour intervals. Respiratory system compliance (Crs) and resistance (Rrs) were measured by means of the single-breath occlusion method. After surfactant instillation there were no significant immediate changes in PaCO2 (36 vs. 37 mmHg), a/A ratio (0.23 vs. 0.20), Crs (0.32 vs. 0.31 mL/cmH2O/kg), or Rrs (0.11 vs. 0.16 cmH2O/mL/s) (pooled data of 18 measurement pairs). During the clinical course, mean a/A ratio improved significantly at each time point, from 0.17 (time 0) to 0.29 (12-13 hours), 0.39 (24-36 hours), and 0.60 (48-61 hours), even though mean airway pressure was reduced substantially. Mean Crs increased significantly from 0.28 mL/cmH2O/kg (time 0) to 0.38 (12-13 hours), 0.37 (24-36 hours), and 0.52 (48-61 hours), whereas mean Rrs increased from 0.10 cmH2O/mL/s (time 0) to 0.11 (12-13 hours), to 0.13 (24-36 hours), and to (48-61 hours), with no overall significance. A highly significant correlation was found between Crs and the a/A ratio (r = 0.698, P < 0.001). We conclude that Exosurf does not induce immediate changes in oxygenation, unlike the instillation of (modified) natural surfactant preparations. However, after 12 and 24 hours of treatment, oxygenation and Crs improve significantly. (ABSTRACT TRUNCATED AT 250 WORDS)
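For readers unfamiliar with the quantities reported, the sketch below applies the standard formulas behind them (these formulas are not spelled out in the abstract, and the numeric values are illustrative): the a/A ratio from the simplified alveolar gas equation, and Crs/Rrs from an end-inspiratory occlusion with a passive expiratory time constant.

```python
# Minimal sketch with illustrative values; standard textbook formulas, not the study's data.

def a_A_ratio(pao2, paco2, fio2, pb=760.0, ph2o=47.0, rq=0.8):
    """a/A ratio = PaO2 / PAO2, with PAO2 from the simplified alveolar gas equation."""
    PAO2 = fio2 * (pb - ph2o) - paco2 / rq
    return pao2 / PAO2

def single_breath_occlusion(vt_ml, p_occlusion, peep, tau_s):
    """Crs = VT / (Pocclusion - PEEP); Rrs = tau / Crs, where tau is the expiratory time
    constant from the passive flow-volume curve. Divide Crs by body weight to obtain the
    per-kg values quoted in the abstract."""
    crs = vt_ml / (p_occlusion - peep)   # mL/cmH2O
    rrs = tau_s / crs                    # cmH2O/mL/s
    return crs, rrs

# Example: PaO2 60 mmHg, PaCO2 37 mmHg on 60% oxygen.
print(round(a_A_ratio(60, 37, 0.60), 2))
```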

Relevance:

30.00%

Abstract:

In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features; however, this requires a large amount of training data to capture the wide variation in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. The result is an approach that is scalable to crowd volumes not seen in the training data and can be trained on a very small data set. Because a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and can be used in a multi-camera environment. A unique localised approach to ground truth annotation, which reduces the required training data, is also presented, since a localised approach to crowd counting has different training requirements from a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, as well as superior performance when test conditions are unseen in the training set or a minimal training set is used.
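The "sum of per-blob group sizes" idea can be sketched as follows. The feature set, the linear regressor, and the synthetic training data are assumptions for illustration; the paper's local features and model are not reproduced here.

```python
# Minimal sketch: estimate crowd size as the sum of per-blob group-size predictions
# made from simple local features of each foreground segment.
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LinearRegression

def blob_features(foreground_mask):
    """One feature row (area, bounding-box height, bounding-box width) per foreground blob."""
    labels, n_blobs = ndimage.label(foreground_mask)
    feats = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        region = labels[sl] == i
        feats.append([region.sum(),
                      sl[0].stop - sl[0].start,
                      sl[1].stop - sl[1].start])
    return np.array(feats, dtype=float)

# Training: frames in which each blob is annotated with its group size
# (synthetic stand-ins below; real masks would come from foreground segmentation).
rng = np.random.default_rng(0)
train_feats = rng.uniform(10, 500, size=(60, 3))
train_counts = train_feats[:, 0] / 120 + rng.normal(0, 0.2, 60)
model = LinearRegression().fit(train_feats, train_counts)

def estimate_crowd(foreground_mask):
    feats = blob_features(foreground_mask)
    if feats.size == 0:
        return 0.0
    return float(np.maximum(model.predict(feats), 0).sum())  # total = sum of group sizes
```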

Relevance:

30.00%

Abstract:

In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some free resources may be available from private clouds while other, fee-paying resources come from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, Quality-of-Service issues such as execution time and running costs must be considered. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
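A generic random-key encoding works roughly as sketched below; this is the classic random-key GA decoding idea, not necessarily the exact encoding or operators used in the paper.

```python
# Minimal sketch of a generic random-key decoding for joint allocation and scheduling:
# each task owns one real key in [0, n_resources); the integer part selects the resource,
# and sorting tasks by the fractional part gives the scheduling priority order.
import numpy as np

def decode(chromosome, n_resources):
    """Map a random-key chromosome to (resource assignment, task priority order)."""
    keys = np.asarray(chromosome)
    assignment = np.floor(keys).astype(int) % n_resources   # which resource runs each task
    priority = np.argsort(keys - np.floor(keys))            # smaller fractional part = earlier
    return assignment, priority

def random_chromosome(n_tasks, n_resources, rng):
    return rng.uniform(0, n_resources, size=n_tasks)

def crossover(parent_a, parent_b, rng, bias=0.7):
    """Parameterised uniform crossover, typical of random-key GAs."""
    take_a = rng.random(len(parent_a)) < bias
    return np.where(take_a, parent_a, parent_b)

rng = np.random.default_rng(0)
chrom = random_chromosome(n_tasks=6, n_resources=3, rng=rng)
print(decode(chrom, n_resources=3))
```

A fitness function would then evaluate each decoded schedule against the Quality-of-Service criteria (execution time and running cost) and drive selection.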

Relevance:

30.00%

Abstract:

Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion's dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically, using a Rademacher complexity bound on the generalization error, and empirically in a set of experiments.
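To make the setting concrete, the sketch below trains a kernel machine on a convex combination of base kernels; the combination weights are fixed by hand here, whereas multiple kernel learning methods (including the unified criterion discussed in the paper) learn them jointly with the classifier. The data and weights are illustrative assumptions.

```python
# Minimal sketch: combine base kernels with fixed convex weights and train an SVM on the
# precomputed combined kernel (MKL would learn the weights instead of fixing them).
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)            # synthetic labels

base_kernels = [linear_kernel(X), polynomial_kernel(X, degree=2), rbf_kernel(X, gamma=0.5)]
weights = np.array([0.2, 0.3, 0.5])                       # fixed here; MKL optimizes these
K = sum(w * Km for w, Km in zip(weights, base_kernels))   # convex combination of kernels

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```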