939 results for Sperm DNA Extraction


Relevance:

20.00%

Publisher:

Abstract:

To further investigate the use of DNA repair-enhancing agents for skin cancer prevention, we treated Cdk4R24C/R24C/NrasQ61K mice topically with the T4 endonuclease V DNA repair enzyme (known as Dimericine) immediately prior to neonatal ultraviolet radiation (UVR) exposure, which strongly exacerbates melanoma development in this mouse model. Dimericine has been shown to reduce the incidence of basal cell and squamous cell carcinomas. Unexpectedly, we saw no difference in the penetrance or age of onset of melanoma after neonatal UVR between Dimericine-treated and control animals, although the drug reduced DNA damage and cellular proliferation in the skin. Interestingly, epidermal melanocytes removed cyclobutane pyrimidine dimers (CPDs) more efficiently than the surrounding keratinocytes. Our study indicates that neonatal UVR-initiated melanomas may be driven by mechanisms other than a large CPD load and/or its inefficient repair alone. This further suggests that UVR may enhance the transformation of keratinocytes and melanocytes through different mechanisms.

Relevance:

20.00%

Publisher:

Abstract:

With the increasing resolution of remote sensing images, road networks appear as continuous, homogeneous regions of a certain width rather than as the traditional thin lines. Road network extraction from large-scale images therefore amounts to reliable road surface detection rather than road line extraction. In this paper, a novel automatic road network detection approach based on the combination of homogram segmentation and mathematical morphology is proposed, comprising three main steps: (i) the image is classified by homogram segmentation to roughly identify the road network regions; (ii) morphological opening and closing are employed to fill tiny holes and filter out small road branches; and (iii) the extracted road surface is thinned, pruned by a proposed method, and finally simplified with the Douglas-Peucker algorithm. Results on QuickBird images and aerial photos demonstrate the correctness and efficiency of the proposed process.
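Step (iii) ends with Douglas-Peucker simplification of the thinned road centerlines. As a rough illustration of that final step (the coordinates below are made up, and this plain recursive sketch is not the authors' implementation):

```python
import math

def douglas_peucker(points, epsilon):
    """Recursively drop points that lie within epsilon of the chord
    joining the current segment's endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12
    # Find the interior point farthest from the chord.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Keep the farthest point and simplify both halves.
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A made-up thinned centerline with small jitter and one real corner.
centerline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
simplified = douglas_peucker(centerline, 1.0)
```

The tolerance epsilon trades geometric fidelity against the number of vertices kept; jitter smaller than epsilon is flattened while genuine corners survive.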

Relevance:

20.00%

Publisher:

Abstract:

Cell proliferation is a critical and frequently studied feature of molecular biology in cancer research, and various assays using different strategies are available to measure it. Metabolic assays such as AlamarBlue, WST-1, and MTT, originally developed to determine cell toxicity, are widely used to assess cell numbers. Proliferative activity can also be determined by quantifying DNA content with fluorophores such as CyQuant and PicoGreen. Among data published in high-ranking cancer journals, 945 publications applied these assays over the past 14 years to examine the proliferative behaviour of diverse cell types, and mainly metabolic assays were used to quantify changes in cell growth. Yet these assays may not accurately reflect cellular proliferation rates, because metabolic activity and cell number do not always correlate. To test this hypothesis, we compared the metabolic activity of different cell types, human cancer cells and primary cells, over a period of 4 days using AlamarBlue, and determined their DNA content with the fluorometric assays CyQuant and PicoGreen. Our results show discrepancies, with the metabolic assay over-estimating cell proliferation relative to the DNA-binding fluorophores.
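The confound the authors describe is easy to see with a toy calculation. In the hypothetical numbers below (invented for illustration, not taken from the study), a culture quadruples its cell number while per-cell metabolic activity also drifts upward, so a metabolic readout over-states growth relative to a DNA-content readout:

```python
# All numbers are hypothetical, chosen only to illustrate the confound.
cells_day0, cells_day4 = 10_000, 40_000        # DNA-binding dyes track cell count
activity_day0, activity_day4 = 1.0, 1.5        # assumed per-cell metabolic drift

dna_fold = cells_day4 / cells_day0
metabolic_fold = (cells_day4 * activity_day4) / (cells_day0 * activity_day0)
# The metabolic signal implies 6-fold growth where cells only quadrupled.
```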

Relevance:

20.00%

Publisher:

Abstract:

Accurate road lane information is crucial for advanced vehicle navigation and safety applications. With the increasing availability of very high resolution (VHR) imagery of astonishing quality from digital airborne sources, automatically extracting road details from aerial images would greatly facilitate data acquisition and significantly reduce the cost of data collection and updates. In this paper, we propose an effective approach to detect road lanes from aerial images using a sequence of image analysis procedures. The algorithm starts by constructing the digital surface model (DSM) and true orthophotos from the stereo images. Next, a maximum likelihood clustering algorithm is used to separate roads from other ground objects. After the road surface is detected, the road traffic and lane lines are further extracted using texture enhancement and morphological operations. Finally, the generated road network is evaluated against datasets provided by the Queensland Department of Main Roads to test the performance of the proposed approach. The experimental results demonstrate the effectiveness of our approach.
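The maximum likelihood classification step can be pictured as fitting a simple intensity distribution per class and assigning each pixel to the more likely class. The sketch below uses invented training intensities and a 1-D Gaussian model as a stand-in for whatever feature space the paper actually uses:

```python
import math

def fit_gaussian(samples):
    """Maximum-likelihood mean and variance of a 1-D Gaussian."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

def log_likelihood(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Invented training intensities: road surfaces bright and homogeneous,
# background darker and more varied.
road_mean, road_var = fit_gaussian([200, 205, 198, 210, 202])
other_mean, other_var = fit_gaussian([80, 95, 60, 120, 100])

def classify(pixel):
    """Assign a pixel intensity to the class with the higher likelihood."""
    road_ll = log_likelihood(pixel, road_mean, road_var)
    other_ll = log_likelihood(pixel, other_mean, other_var)
    return "road" if road_ll > other_ll else "other"
```

Because the log-likelihood penalises distance from the class mean scaled by the class variance, the tightly clustered road class wins only near its mean, which is what makes the separation of homogeneous road surfaces from varied background work.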

Relevance:

20.00%

Publisher:

Abstract:

This thesis critically analyses sperm donation practices from a child-centred perspective. It examines the effects, both personal and social, of disrupting the unity of biological and social relatedness in families affected by donor conception. It examines how this disruption is facilitated by a process of mediation, which is detailed using a model provided by Sunderland (2002). This model identifies mediating movements (alienation, translation, re-contextualisation and absorption) which help to explain the powerful and dominating material, social and political processes at work in biotechnology, in this case reproductive technology. The understanding of such movements and the mediation of meanings is inspired by the complementary work of Silverstone (1999) and Sunderland. This model allows for a more critical appreciation of the movement of meaning from previously inalienable aspects of life to alienable products through biotechnology (Sunderland, 2002). Once this mediation in donor conception is subjected to critical examination here, it is then approached from different angles of investigation. The thesis posits that two conflicting notions of the self are being applied to fertility-frustrated adults and the offspring of reproductive interventions. Adults using reproductive interventions receive support to maximise their genetic continuity, but in so doing they create and dismiss the corresponding genetic discontinuity produced for the offspring. The offspring's kinship and identity are then framed through an experimental postmodernist notion, presenting them as social rather than innate constructs. The adults using the reproductive intervention, on the other hand, have their identity and kinship continuity framed and supported as normative, innate, and based on genetic connection.
This use of shifting frameworks is presented as unjust and harmful, creating double standards and a corrosion of kinship values, connection and intelligibility between generations; indeed, it is put forward as adult-centric. The analysis of other forms of human kinship dislocation provided by this thesis explores an under-utilised resource which is used to counter the commonly held opinion that any disruption of social and genetic relatedness for donor offspring is insignificant. The experiences of adoption and the stolen generations are used to inform understanding of the personal and social effects of such kinship disruption and potential reunion for donor offspring. These examples, along with laws governing international human rights, further strengthen the appeal here for normative principles and protections based on collective knowledge and standards to be applied to children of reproductive technology. The thesis presents the argument that the framing and regulation of reproductive technology is excessively influenced by industry providers and users. The interests of these parties collide with and corrode any accurate assessments and protections afforded to the children of reproductive technology. The thesis seeks to counter such encroachments and concludes by presenting these protections, frameworks, and human experiences as resources which can help to address the problems created for the offspring of such reproductive interventions, thereby illustrating why these reproductive interventions should be discontinued.

Relevance:

20.00%

Publisher:

Abstract:

Recent studies have shown that human papillomavirus (HPV) DNA can be found in circulating blood, including peripheral blood mononuclear cells (PBMCs), sera, plasma, and arterial cord blood. In light of these findings, DNA extracted from the PBMCs of healthy blood donors was examined to determine how common HPV DNA is in the blood of healthy individuals. Blood samples were collected from 180 healthy male blood donors (18-76 years old) through the Australian Red Cross Blood Services. Genomic DNA was extracted and the specimens were tested for HPV DNA by PCR using a broad-range primer pair. Positive samples were HPV-typed by cloning and sequencing. HPV DNA was found in 8.3% (15/180) of the blood donors. A wide variety of HPV types were isolated from the PBMCs, belonging to the cutaneous beta and gamma papillomavirus genera and the mucosal alpha papillomaviruses. High-risk HPV types linked to cancer development were detected in 1.7% (3/180) of the PBMCs. Blood was also collected from a healthy HPV-positive 44-year-old male on four different occasions to determine which blood cell fractions harbor HPV. PBMCs treated with trypsin were negative for HPV, while non-trypsinized PBMCs were HPV-positive, suggesting that the HPV in blood is attached to the outside of blood cells via a protein-containing moiety. HPV was also isolated from B cells, dendritic cells, NK cells, and neutrophils. In conclusion, HPV present in PBMCs could represent a reservoir of virus and a potential new route of transmission.

Relevance:

20.00%

Publisher:

Abstract:

This paper describes technologies we have developed to perform autonomous large-scale off-world excavation. A scale dragline excavator of size similar to that required for lunar excavation was made capable of autonomous control. Systems have been put in place to allow remote operation of the machine from anywhere in the world. Algorithms have been developed for complete autonomous digging and dumping of material taking into account machine and terrain constraints and regolith variability. Experimental results are presented showing the ability to autonomously excavate and move large amounts of regolith and accurately place it at a specified location.

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
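The thesis fits the generalized Gaussian shape parameter with a least-squares formulation. As a rough illustration of the same modelling step, the sketch below uses the simpler moment-ratio (Mallat-style) estimator instead, inverting E|X|/sqrt(E[X^2]) by bisection; it is an assumption-laden stand-in, not the thesis's algorithm:

```python
import math
import random

def ggd_ratio(beta):
    """Theoretical E|X| / sqrt(E[X^2]) for a zero-mean generalized
    Gaussian with shape parameter beta (beta = 2 is the Gaussian)."""
    return math.gamma(2 / beta) / math.sqrt(math.gamma(1 / beta) * math.gamma(3 / beta))

def estimate_shape(samples, lo=0.3, hi=5.0):
    """Invert the moment ratio by bisection; ggd_ratio increases with beta."""
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    target = m1 / math.sqrt(m2)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in "wavelet coefficients": plain Gaussian noise, so the estimate
# should land near beta = 2 (Laplacian data would land near beta = 1).
random.seed(0)
beta_hat = estimate_shape([random.gauss(0.0, 1.0) for _ in range(20000)])
```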

Relevance:

20.00%

Publisher:

Abstract:

Artificial neural network (ANN) learning methods provide a robust, non-linear approach to approximating the target function for many classification, regression, and clustering problems, and ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge: the poor comprehensibility of the learned ANN, and its inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explaining the decision process of ANNs in the form of symbolic rules (predicate rules with variables); and (2) providing explanatory capability by mapping the general conceptual knowledge learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology, GYAN, is developed and evaluated for the task of extracting knowledge from trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules and subsequently allows user interaction by interfacing with a knowledge-based reasoner. The performance of GYAN is demonstrated on a number of real-world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived that describes the overall behaviour of the ANN with high accuracy and fidelity; and (2) a concise explanation is given (in terms of the rules, facts, and predicates activated in a reasoning episode) as to why a particular instance is classified into a certain category.
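GYAN itself derives restricted first-order rules from trained networks; a much simpler propositional analogue conveys the basic idea of decompositional rule extraction. In this sketch the "trained" unit's weights are invented, and the rule is read off by enumerating the boolean input space:

```python
from itertools import product

# Invented "trained" unit (not one of GYAN's networks): with these
# weights and bias the neuron fires exactly when at least two of the
# three boolean inputs are on.
weights = [0.6, 0.6, 0.6]
bias = -1.0

def neuron(inputs):
    return sum(w * x for w, x in zip(weights, inputs)) + bias > 0

# Decompositional extraction in miniature: enumerate the input space and
# keep the activating combinations as antecedents of a symbolic rule.
antecedents = [combo for combo in product([0, 1], repeat=3) if neuron(combo)]
rule = " OR ".join(
    "(" + " AND ".join(f"x{i}={v}" for i, v in enumerate(combo)) + ")"
    for combo in antecedents
)
```

Enumerating inputs is only feasible for tiny boolean units; methods like GYAN instead generalise the activating conditions into compact (here, first-order) rules, which is what makes the learned behaviour comprehensible.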