16 results for Truncation
at Queensland University of Technology - ePrints Archive
Abstract:
Chlamydia pneumoniae is a common human and animal pathogen associated with a wide range of upper and lower respiratory tract infections. In more recent years there has been increasing evidence to suggest a link between C. pneumoniae and chronic diseases in humans, including atherosclerosis, stroke and Alzheimer’s disease. C. pneumoniae human strains show little genetic variation, indicating that the human-derived strain originated from a common ancestor in the recent past. Despite extensive information on the genetics and morphology of the human strain, strains from many other hosts (including marsupials, amphibians, reptiles and equines) remain virtually unexplored. The koala (Phascolarctos cinereus) is a native Australian marsupial under threat due to habitat loss, predation and disease. Koalas are very susceptible to chlamydial infections, most commonly affecting the conjunctiva, urogenital tract and/or respiratory tract. To address this gap in the literature, the present study (i) provides a detailed description of the morphologic and genomic architecture of the C. pneumoniae koala (and human) strain, and shows that the koala strain is microscopically, developmentally and genetically distinct from the C. pneumoniae human strain, and (ii) examines the genetic relationship of geographically diverse C. pneumoniae isolates from human, marsupial, amphibian, reptilian and equine hosts, and identifies two distinct lineages that have arisen from animal-to-human cross-species transmissions. Chapter One of this thesis explores the scientific problem and aims of this study, while Chapter Two provides a detailed literature review of the background in this field of work. Chapter Three, the first results chapter, describes the morphology and developmental stages of C. pneumoniae koala isolate LPCoLN, as revealed by fluorescence and transmission electron microscopy. The profile of this isolate, when cultured in HEp-2 human epithelial cells, was quite different from that of the human AR39 isolate. Koala LPCoLN inclusions were larger, the elementary bodies did not have the characteristic pear-shaped appearance, and the developmental cycle was completed within a shorter period of time (as confirmed by quantitative real-time PCR). These in vitro findings might reflect biological differences between koala LPCoLN and human AR39 in vivo. Chapter Four describes the complete genome sequence of the koala respiratory pathogen, C. pneumoniae LPCoLN. This is the first animal isolate of C. pneumoniae to be fully sequenced. The genome sequence provides new insights into the genomic ‘plasticity’ (organisation), evolution and biology of koala LPCoLN, relative to four complete C. pneumoniae human genomes (AR39, CWL029, J138 and TW183). Koala LPCoLN contains a plasmid that is not shared with any of the human isolates, there is evidence of gene loss in nucleotide salvage pathways, and there are 10 hot spot genomic regions of variation that were not previously identified in the C. pneumoniae human genomes. Partial-length sequence from a second, independent wild koala isolate (EBB) at several gene loci confirmed that the koala LPCoLN isolate was representative of a koala C. pneumoniae strain. The combined sequence data provide evidence that the C. pneumoniae animal (koala LPCoLN) genome is ancestral to the C. pneumoniae human genomes and that human infections may have originated from zoonotic infections. Chapter Five examines key genome components of the five C. pneumoniae genomes in more detail.
This analysis reveals genomic features that are shared by, and/or contribute to, the broad ecological adaptability and evolution of C. pneumoniae. It resulted in the identification of 65 gene sequences for further analysis of intraspecific variation, and revealed some interesting differences, including fragmentation, truncation and gene decay (loss of redundant ancestral traits). This study provides valuable insights into the metabolic diversity, adaptation and evolution of C. pneumoniae. Chapter Six utilises a subset of 23 target genes identified from the previous genomic comparisons and makes a significant contribution to our understanding of genetic variability among C. pneumoniae isolates from human (11) and animal (6 amphibian, 5 reptilian, 1 equine and 7 marsupial) hosts. It shows that the animal isolates are genetically diverse, unlike the human isolates, which are virtually clonal. This study provides further convincing evidence that C. pneumoniae originated in animals and recently (in the last few hundred thousand years) crossed host species to infect humans. On the basis of these results, it is proposed that two animal-to-human cross-species transmission events have occurred: one evident in the nearly clonal human genotype circulating in the world today, and the other in a more animal-like genotype apparent in Indigenous Australians. Taken together, these data indicate that the C. pneumoniae koala LPCoLN isolate has morphologic and genomic characteristics that are distinct from those of the human isolates. These differences may affect the survival and activity of the C. pneumoniae koala pathogen in its natural host, in vivo. By exploiting the genetic diversity of C. pneumoniae, this study identified new genetic markers for distinguishing human and animal isolates. However, not all C. pneumoniae isolates were genetically diverse; in fact, several isolates were highly conserved, if not identical in sequence (i.e. those from Australian marsupials), emphasising that at some stage in the evolution of this pathogen there has been adaptation to a particular host, providing some stability in the genome. The outcomes of this study, obtained through experimental and bioinformatic approaches, have significantly enhanced our knowledge of the biology of this pathogen and will advance opportunities for the investigation of novel vaccine targets, antimicrobial therapy, or blocking of pathogenic pathways.
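The kind of truncation and gene-decay screening described above can be illustrated with a small sketch that flags orthologues whose coding sequence is markedly shorter in one genome than in the others. The locus names, lengths and threshold below are invented for illustration; the study's comparison of 65 genes across the koala and human C. pneumoniae genomes was far more detailed.

```python
# Toy sketch: flag candidate gene truncation/decay by comparing coding-sequence
# lengths of orthologues across genomes. All loci and lengths are hypothetical.
cds_lengths = {               # orthologue -> {genome: CDS length in bp}
    "locus_0001": {"LPCoLN": 1248, "AR39": 1248, "CWL029": 1248},
    "locus_0002": {"LPCoLN": 906,  "AR39": 555,  "CWL029": 561},
    "locus_0003": {"LPCoLN": 300,  "AR39": 1422, "CWL029": 1422},
}

TRUNCATION_RATIO = 0.8        # flag copies shorter than 80% of the longest orthologue

for locus, lengths in cds_lengths.items():
    longest = max(lengths.values())
    truncated = [g for g, n in lengths.items() if n < TRUNCATION_RATIO * longest]
    if truncated:
        print(f"{locus}: possible truncation or decay in {', '.join(truncated)}")
```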
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square error criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project them onto the lattice's outermost shell, while maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
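As a brief illustration of the generalized Gaussian modelling of wavelet subband coefficients mentioned above, the sketch below fits the shape and scale parameters of a zero-mean generalized Gaussian to a coefficient sample. It uses the standard moment-ratio estimator rather than the thesis's least-squares formulation, and a synthetic Laplacian sample stands in for a real fingerprint subband.

```python
# Minimal sketch: fit a generalized Gaussian distribution (GGD) to wavelet
# subband coefficients via moment matching. This is a common textbook
# estimator, not the least-squares method developed in the thesis.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def _moment_ratio(nu):
    # For a GGD with shape nu: (E|x|)^2 / E[x^2] = Gamma(2/nu)^2 / (Gamma(1/nu) * Gamma(3/nu))
    return np.exp(2 * gammaln(2 / nu) - gammaln(1 / nu) - gammaln(3 / nu))

def fit_ggd(coeffs):
    """Estimate (shape nu, scale alpha) of a zero-mean generalized Gaussian."""
    x = np.asarray(coeffs, dtype=float)
    ratio = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    nu = brentq(lambda v: _moment_ratio(v) - ratio, 0.1, 5.0)   # invert the moment ratio
    alpha = np.sqrt(np.mean(x ** 2) * np.exp(gammaln(1 / nu) - gammaln(3 / nu)))
    return nu, alpha

rng = np.random.default_rng(0)
subband = rng.laplace(scale=2.0, size=50_000)   # stand-in for one wavelet subband
nu, alpha = fit_ggd(subband)
print(f"estimated shape = {nu:.2f} (Laplacian is 1.0), scale = {alpha:.2f}")
```

The fitted shape and scale pair is what a noise-shaping bit allocation or lattice quantizer design would then consume for each subband.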
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular, flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules, each based on a particular set of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions being proposed where risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required.
The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate approach to analysis, provided boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
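The paired comparison weighting of objectives described above can be sketched with a small example. The objectives, judgments and the geometric-mean weighting convention below are assumed for illustration; the thesis's exact rationalisation, comparison and scoring scheme may differ.

```python
# Minimal sketch: derive objective weights from a reciprocal paired-comparison
# matrix using row geometric means. Objectives and judgments are illustrative.
import numpy as np

objectives = ["service level", "whole-of-life cost", "safety", "environment"]

# comparison[i, j] > 1 means objective i is judged more important than j;
# the matrix is reciprocal: comparison[j, i] = 1 / comparison[i, j].
comparison = np.array([
    [1.0, 3.0, 1/2, 4.0],
    [1/3, 1.0, 1/4, 2.0],
    [2.0, 4.0, 1.0, 5.0],
    [1/4, 1/2, 1/5, 1.0],
])

row_gm = comparison.prod(axis=1) ** (1.0 / comparison.shape[1])  # row geometric means
weights = row_gm / row_gm.sum()                                  # normalise to sum to 1

for name, w in sorted(zip(objectives, weights), key=lambda item: -item[1]):
    print(f"{name:20s} {w:.3f}")
```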
Abstract:
In their correspondence, He and colleagues question our conclusion of little or no uplift preceding Emeishan volcanism that we reported in our letter [1]. Debate concerns the nature of the contact between the Maokou limestone and Emeishan volcanics, the depositional environment and volumetric significance of mafic hydromagmatic deposits (MHDs), and evidence for symmetrical domal thinning. MHDs in the Daqiao section are separated from the Maokou limestone by 100 m of subaerial basaltic lavas, but elsewhere MHDs — previously interpreted as basal conglomerates [2,3] — directly overlie the Maokou [2,3]. MHDs thus feature strongly in basal sections of the Emeishan lava succession, as also recently shown [4] elsewhere in the Emeishan. An irregular surface at the top of the Maokou limestone has been interpreted as an erosional unconformity [2,3], but clastic deposits presented as evidence of this erosion [2,3] are MHDs produced by explosive magma–water interaction [1]. A clear demonstration that this irregular top surface is an erosional truncation of limestone reef facies (slope/rim, flat, lagoonal) is currently lacking, but is critical because reefs and carbonate platforms show considerable natural relief of tens of metres. The persistent hot, wet climate since the Oligocene has produced well-developed weathering profiles on exposed Palaeozoic marine sedimentary sequences [5], but weathering and karst relief of the uppermost Maokou limestone underlying the flood basalts have not been properly documented, nor shown to be of middle Permian age and immediately preceding emplacement of the large igneous province.
Abstract:
Biochemical reactions underlying genetic regulation are often modelled as a continuous-time, discrete-state Markov process, and the evolution of the associated probability density is described by the so-called chemical master equation (CME). However, the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. Recently a finite state projection (FSP) method that truncates the state space was suggested and shown to be effective on an example model of the Pap-pili epigenetic switch. However, in this example both the model and the final time at which the solution was computed were relatively small. Presented here is a Krylov FSP algorithm based on a combination of state-space truncation and inexact matrix-vector product routines. This allows larger-scale models to be studied and solutions for larger final times to be computed in a realistic execution time. Additionally, the new method computes the solution at intermediate times at virtually no extra cost, since it is derived from Krylov-type methods for computing matrix exponentials. For the purpose of comparison, the new algorithm is applied to the model of the Pap-pili epigenetic switch, where the original FSP was first demonstrated. The method is also applied to a more sophisticated model of regulated transcription. Numerical results indicate that the new approach is significantly faster and extendable to larger biological models.
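A minimal sketch of the idea is shown below for a toy birth-death model rather than the Pap-pili switch: the CME generator is truncated to a finite state space (FSP), and the action of the matrix exponential is evaluated with SciPy's Krylov-style expm_multiply routine, which also returns the solution at intermediate times. The rates and truncation level are assumed for illustration, and the SciPy routine is a stand-in for the paper's inexact Krylov FSP algorithm.

```python
# Sketch: FSP-truncated chemical master equation dp/dt = A p for a birth-death
# process, solved by applying the matrix exponential to p0 at several times.
import numpy as np
from scipy.sparse import lil_matrix, csc_matrix
from scipy.sparse.linalg import expm_multiply

k, gamma = 10.0, 1.0    # birth and per-molecule death rates (assumed values)
N = 200                 # FSP truncation: keep states 0..N-1

A = lil_matrix((N, N))
for n in range(N):
    out_rate = gamma * n            # degradation n -> n-1 always stays inside
    if n + 1 < N:
        A[n + 1, n] = k             # birth n -> n+1 kept in the projection
    out_rate += k                   # birth outflow; dropped at the boundary (FSP leak)
    if n > 0:
        A[n - 1, n] = gamma * n
    A[n, n] = -out_rate
A = csc_matrix(A)

p0 = np.zeros(N)
p0[0] = 1.0                         # start with zero molecules

times = np.linspace(0.0, 5.0, 6)
P = expm_multiply(A, p0, start=times[0], stop=times[-1], num=len(times), endpoint=True)

for t, p in zip(times, P):
    # 1 - sum(p) bounds the FSP truncation error at time t
    print(f"t = {t:.1f}  mean copies = {p @ np.arange(N):6.2f}  error bound = {1.0 - p.sum():.2e}")
```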
Abstract:
Hepatitis C virus (HCV) core (C) protein is thought to bind to viral RNA before it undergoes oligomerization leading to RNA encapsidation. Details of these events are so far unknown. The 5ʹ-terminal C protein coding sequence, which includes an adenine (A)-rich tract, is part of an internal ribosome entry site (IRES). This nucleotide sequence, but not the corresponding protein sequence, is needed for proper initiation of translation of viral RNA by an IRES-dependent mechanism. In this study, we examined the importance of this sequence for the ability of the C protein to bind to viral RNA. Serially truncated C proteins with deletions of 10 up to 45 N-terminal amino acids were expressed in Escherichia coli, purified and tested for binding to viral RNA by a gel shift assay. The results showed that truncation of the C protein from its N-terminus by more than 10 amino acids almost completely abolished its expression in E. coli. Expression could be restored by adding a tag to the N-terminus of the protein. The tagged proteins truncated by 15 or more amino acids showed anomalous migration in SDS-PAGE. Truncation by more than 20 amino acids resulted in a complete loss of the ability of the tagged C protein to bind to viral RNA. These results provide clues to the early events in the C protein–RNA interactions leading to C protein oligomerization, RNA encapsidation and virion assembly.
Abstract:
An advanced rule-based Transit Signal Priority (TSP) control method is presented in this paper. An on-line transit travel time prediction model is the key component of the proposed method, enabling selection of the most appropriate TSP plan for the prevailing traffic and transit conditions. The new method also adopts a priority plan re-development feature that enables modifying, or even switching, an already implemented priority plan to accommodate changes in traffic conditions. The proposed method utilizes the conventional green extension and red truncation strategies as well as two new strategies: green truncation and queue clearance. The new method is evaluated in microsimulation against a typical active TSP strategy and a base-case scenario with no TSP control. The evaluation results indicate that the proposed method can produce significant benefits in reducing bus delay and improving service regularity, with negligible adverse impacts on non-transit street traffic.
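The flavour of the rule-based plan selection can be sketched as below. The plan names follow the strategies listed above, but the thresholds, the signal-state representation and the predicted arrival time are illustrative assumptions only; the paper's method couples such rules to an on-line travel time prediction model and can re-develop a plan after it has been issued.

```python
# Simplified sketch of rule-based TSP plan selection from a predicted bus
# arrival time and the current signal state. All thresholds are assumed.
from dataclasses import dataclass

@dataclass
class SignalState:
    phase: str              # "green" or "red" for the bus approach
    time_remaining: float   # seconds left in the current phase

MAX_GREEN_EXTENSION = 10.0  # s, assumed policy limit
MAX_RED_TRUNCATION = 15.0   # s, assumed policy limit

def select_tsp_plan(predicted_arrival: float, signal: SignalState) -> str:
    """Pick a priority plan given the predicted bus arrival time (s from now)."""
    if signal.phase == "green":
        if predicted_arrival <= signal.time_remaining:
            return "no action (bus clears on the current green)"
        if predicted_arrival - signal.time_remaining <= MAX_GREEN_EXTENSION:
            return "green extension"
        return "queue clearance ahead of the next green"
    # bus approach is currently red
    if signal.time_remaining - predicted_arrival <= MAX_RED_TRUNCATION:
        return "red truncation (early green)"
    return "no action (re-evaluate as the prediction updates)"

print(select_tsp_plan(12.0, SignalState("green", 8.0)))   # -> green extension
print(select_tsp_plan(20.0, SignalState("red", 30.0)))    # -> red truncation (early green)
```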
Abstract:
The steady problem of free surface flow due to a submerged line source is revisited for the case in which the fluid depth is finite and there is a stagnation point on the free surface directly above the source. Both the strength of the source and the fluid speed in the far field are measured by a dimensionless parameter, the Froude number. By applying techniques in exponential asymptotics, it is shown that there is a train of periodic waves on the surface of the fluid with an amplitude which is exponentially small in the limit that the Froude number vanishes. This study clarifies that periodic waves do form for flows due to a source, contrary to a suggestion by Chapman & Vanden-Broeck (2006, J. Fluid Mech., 567, 299--326). The exponentially small nature of the waves means they appear beyond all orders of the original power series expansion; this result explains why attempts at describing these flows using a finite number of terms in an algebraic power series incorrectly predict a flat free surface in the far field.
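To make "exponentially small in the Froude number" concrete, a schematic form of such a wave amplitude is shown below; the constants are generic placeholders rather than the values derived in the paper.

```latex
% Schematic low-Froude-number scaling of the far-field wave amplitude
% (c > 0 and the prefactor depend on the flow geometry; generic form only).
A(F) \sim \mathcal{C}\, F^{\gamma} \exp\!\left(-\frac{c}{F^{2}}\right),
\qquad F \to 0 .
```

Because A(F)/F^n tends to zero as F tends to zero for every power n, such a term lies beyond all orders of an algebraic expansion in F, which is why a truncated power series predicts a flat free surface in the far field.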
Abstract:
Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills and they have an agile ability to apply this knowledge to interacting with and having an impact on the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed using NVivo9 and manually. Grounded theory allowed categories and themes to emerge from the data. Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, during the final stage of analysis the data were viewed through lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and may be bounded. Themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. These were: information environment: the total information environment is perceived and understood; information structures: content, index structures, and retrieval algorithms are understood; information vocabularies: fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools. The fourth threshold concept was concept fusion, the integration of the other three threshold concepts and further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (dancing property), and profound ontological shift (identity as searcher). 
In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core that also integrates the traits and praxes elicited from the study, attributes which are likewise long recognized in LIS research as present in professional searchers. The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.
Abstract:
Tobacco plants were transformed with a chimeric transgene comprising sequences encoding β-glucuronidase (GUS) and the satellite RNA (satRNA) of cereal yellow dwarf luteovirus. When transgenic plants were infected with potato leafroll luteovirus (PLRV), which replicated the transgene-derived satRNA to a high level, the satellite sequence of the GUS:Sat transgene became densely methylated. Within the satellite region, all 86 cytosines in the upper strand and 73 of the 75 cytosines in the lower strand were either partially or fully methylated. In contrast, very low levels of DNA methylation were detected in the satellite sequence of the transgene in uninfected plants and in the flanking nonsatellite sequences in both infected and uninfected plants. Substantial amounts of truncated GUS:Sat RNA accumulated in the satRNA-replicating plants, and most of the molecules terminated at nucleotides within the first 60 bp of the satellite sequence. Whereas this RNA truncation was associated with high levels of satRNA replication, it appeared to be independent of the levels of DNA methylation in the satellite sequence, suggesting that it is not caused by methylation. All the sequenced GUS:Sat DNA molecules were hypermethylated in plants with replicating satRNA despite the phloem restriction of the helper PLRV. Also, small, sense and antisense ∼22 nt RNAs, derived from the satRNA, were associated with the replicating satellite. These results suggest that the sequence-specific DNA methylation spread into cells in which no satRNA replication occurred and that this was mediated by the spread of unamplified satRNA and/or its associated 22 nt RNA molecules.
Abstract:
The orphan nuclear receptor liver receptor homologue-1 (LRH-1) has roles in development, cholesterol and bile acid homeostasis, and steroidogenesis. It also enhances proliferation and cell cycle progression of cancer cells. In breast cancer, LRH-1 expression is associated with invasive breast cancer, positively correlates with ERα status and aromatase activity, and promotes oestrogen-dependent cell proliferation. However, the mechanism of action of LRH-1 in breast cancer epithelial cells is still not clear. By silencing or over-expressing LRH-1 in ER-positive MCF-7 and ER-negative MDA-MB-231 breast cancer cells, we have demonstrated that LRH-1 promotes motility and cell invasiveness. Similar effects were observed in the non-tumourigenic mammary epithelial cell line MCF-10A. Remodelling of the actin cytoskeleton and E-cadherin cleavage were observed with LRH-1 over-expression, contributing to increased migratory and invasive properties. Additionally, in LRH-1 over-expressing cells, truncation of the 120 kDa E-cadherin to the inactive 97 kDa form was observed. These post-translational modifications of E-cadherin may be associated with LRH-1-dependent changes in matrix metalloproteinase 9 expression. These findings suggest a new role for LRH-1 in promoting migration and invasion in breast cancer, independent of oestrogen sensitivity. Therefore, LRH-1 may represent a new target for breast cancer therapeutics.
Abstract:
Alignment-free methods, in which shared properties of sub-sequences (e.g. identity or match length) are extracted and used to compute a distance matrix, have recently been explored for phylogenetic inference. However, the scalability and robustness of these methods to key evolutionary processes remain to be investigated. Here, using simulated sequence sets of various sizes in both nucleotides and amino acids, we systematically assess the accuracy of phylogenetic inference using an alignment-free approach, based on D2 statistics, under different evolutionary scenarios. We find that compared to a multiple sequence alignment approach, D2 methods are more robust against among-site rate heterogeneity, compositional biases, genetic rearrangements and insertions/deletions, but are more sensitive to recent sequence divergence and sequence truncation. Across diverse empirical datasets, the alignment-free methods perform well for sequences sharing low divergence, at greater computation speed. Our findings provide strong evidence for the scalability and the potential use of alignment-free methods in large-scale phylogenomics.
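The basic D2 comparison underlying such alignment-free methods can be sketched in a few lines: count k-mers in each sequence, take the inner product of the count vectors, and turn it into a dissimilarity. The toy sequences and the cosine-style normalisation below are illustrative; the study evaluates the D2 family of statistics (with their own normalisations) on much larger nucleotide and amino acid datasets.

```python
# Minimal sketch of a D2-style alignment-free distance between sequences.
from collections import Counter
from itertools import combinations
import math

def kmer_counts(seq, k=6):
    """Count overlapping k-mers in a nucleotide or amino-acid sequence."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2_distance(seq_a, seq_b, k=6):
    """1 minus a normalised D2 score: 0 for identical k-mer profiles."""
    a, b = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    d2 = sum(a[w] * b[w] for w in a.keys() & b.keys())          # raw D2 statistic
    norm = (math.sqrt(sum(c * c for c in a.values())) *
            math.sqrt(sum(c * c for c in b.values())))
    return (1.0 - d2 / norm) if norm else 1.0

seqs = {   # toy sequences; real inputs would be whole genes, genomes or proteomes
    "taxonA": "ATGGCGTACGTTAGCGGATCCGTACGATCGGATTACA",
    "taxonB": "ATGGCGTACGTTAGCGGATGCGTACGATCGGATTACA",
    "taxonC": "TTTTACGGCAATCGGGCTAGCTAGGCTAACCCGGTTA",
}

for (name_a, sa), (name_b, sb) in combinations(seqs.items(), 2):
    print(f"d({name_a}, {name_b}) = {d2_distance(sa, sb, k=4):.3f}")
```

The resulting pairwise distance matrix would then be passed to a standard distance-based tree-building method (e.g. neighbour joining) for phylogenetic inference.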