357 results for Truncation
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square error criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.
Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
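To make the coefficient-modelling step concrete, the sketch below fits a generalized Gaussian to a set of subband coefficients by the standard moment-matching route (solving the moment-ratio function for the shape parameter). It is offered only as an illustration under stated assumptions: the data are synthetic Laplacian-like samples rather than real fingerprint subbands, and the estimator shown is the common moment-ratio method, not the least-squares formulation developed in the thesis.

import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_moment_ratio(beta):
    # For a zero-mean generalized Gaussian with shape beta,
    # E[x^2] / (E|x|)^2 = Gamma(1/beta) * Gamma(3/beta) / Gamma(2/beta)^2.
    return gamma(1.0 / beta) * gamma(3.0 / beta) / gamma(2.0 / beta) ** 2

def fit_ggd(coeffs):
    """Estimate the generalized Gaussian shape and scale of zero-mean subband coefficients."""
    c = np.asarray(coeffs, dtype=float)
    m1 = np.mean(np.abs(c))      # first absolute moment
    m2 = np.mean(c ** 2)         # second moment
    target = m2 / m1 ** 2
    beta = brentq(lambda b: ggd_moment_ratio(b) - target, 0.1, 10.0)
    alpha = m1 * gamma(1.0 / beta) / gamma(2.0 / beta)   # scale parameter
    return beta, alpha

# Synthetic stand-in for a wavelet subband: heavy-tailed, Laplacian-like coefficients.
rng = np.random.default_rng(0)
subband = rng.laplace(scale=2.0, size=50_000)
beta, alpha = fit_ggd(subband)
print(f"estimated shape beta = {beta:.3f}, scale alpha = {alpha:.3f}")  # beta near 1 for Laplacian data

Once a shape and scale are estimated per subband, they can drive bit-allocation and quantizer design of the kind described above.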
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular, flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules, each based on a particular set of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, by using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle; the approach used is based on analytical principles but incorporates randomness in variables where required.
The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate form of analysis, provided boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
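As a concrete illustration of the objective-weighting step, the sketch below derives normalised weights from a pairwise comparison matrix using the geometric-mean method, a common way of processing paired comparisons. The goal names and judgment values are hypothetical, and the thesis's exact rationalisation and weighting procedure may differ.

import numpy as np

# Hypothetical pairwise comparison matrix for four outcome goals:
# entry [i, j] > 1 means goal i is judged more important than goal j.
goals = ["service level", "cost", "safety", "environment"]
A = np.array([
    [1.0,  3.0, 0.5,  2.0],
    [1/3,  1.0, 0.25, 1.0],
    [2.0,  4.0, 1.0,  3.0],
    [0.5,  1.0, 1/3,  1.0],
])

# The geometric mean of each row, normalised to sum to one, gives the weight vector.
geo_means = A.prod(axis=1) ** (1.0 / A.shape[1])
weights = geo_means / geo_means.sum()

for goal, w in zip(goals, weights):
    print(f"{goal:13s} weight = {w:.3f}")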
Abstract:
In their correspondence, He and colleagues question our conclusion of little or no uplift preceding Emeishan volcanism that we reported in our letter [1]. Debate concerns the nature of the contact between the Maokou limestone and Emeishan volcanics, the depositional environment and volumetric significance of mafic hydromagmatic deposits (MHDs), and evidence for symmetrical domal thinning. MHDs in the Daqiao section are separated from the Maokou limestone by 100 m of subaerial basaltic lavas, but elsewhere MHDs, previously interpreted as basal conglomerates [2,3], directly overlie the Maokou [2,3]. MHDs thus feature strongly in basal sections of the Emeishan lava succession, as also recently shown [4] elsewhere in the Emeishan. An irregular surface at the top of the Maokou limestone has been interpreted as an erosional unconformity [2,3], but clastic deposits presented as evidence of this erosion [2,3] are MHDs produced by explosive magma–water interaction [1]. A clear demonstration that this irregular top surface is an erosional truncation of limestone reef facies (slope/rim, flat, lagoonal) is currently lacking, but is critical because reefs and carbonate platforms show considerable natural relief of tens of metres. The persistent hot, wet climate since the Oligocene has produced well-developed weathering profiles on exposed Palaeozoic marine sedimentary sequences [5], but weathering and karst relief of the uppermost Maokou limestone underlying the flood basalts have not been properly documented, nor shown to be of middle Permian age and immediately preceding emplacement of the large igneous province.
Abstract:
Biochemical reactions underlying genetic regulation are often modelled as a continuous-time, discrete-state Markov process, and the evolution of the associated probability density is described by the so-called chemical master equation (CME). However, the CME is typically difficult to solve, since the state-space involved can be very large or even countably infinite. Recently, a finite state projection (FSP) method that truncates the state-space was suggested and shown to be effective on a model of the Pap-pili epigenetic switch. However, in this example both the model and the final time at which the solution was computed were relatively small. Presented here is a Krylov FSP algorithm based on a combination of state-space truncation and inexact matrix-vector product routines. This allows larger-scale models to be studied and solutions for larger final times to be computed in a realistic execution time. Additionally, the new method computes the solution at intermediate times at virtually no extra cost, since it is derived from Krylov-type methods for computing matrix exponentials. For the purpose of comparison, the new algorithm is applied to the model of the Pap-pili epigenetic switch on which the original FSP was first demonstrated. The method is also applied to a more sophisticated model of regulated transcription. Numerical results indicate that the new approach is significantly faster and extendable to larger biological models.
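The combination of state-space truncation with a Krylov-type evaluation of the matrix exponential can be sketched on a toy birth-death model of gene expression (not the Pap-pili model itself). SciPy's expm_multiply stands in for the Krylov propagator here, and the rate constants and truncation size are illustrative assumptions.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Toy chemical master equation: a birth-death process (production rate k,
# first-order degradation rate g), truncated to states n = 0..N-1 in the
# spirit of the finite state projection.
k, g, N = 10.0, 1.0, 200
n = np.arange(N)

# Generator acting on probability vectors, dp/dt = A p.
gain_from_above = g * n[1:]            # death: n+1 -> n at rate g*(n+1)
gain_from_below = np.full(N - 1, k)    # birth: n-1 -> n at rate k
outflow = -(k + g * n)                 # total outflow, including the birth that
                                       # leaves the truncated set from state N-1
A = diags([gain_from_above, outflow, gain_from_below],
          offsets=[1, 0, -1], format="csc")

p0 = np.zeros(N)
p0[0] = 1.0                            # start with zero molecules

# Distributions at several intermediate times from a single sweep.
times = np.linspace(0.0, 5.0, 6)
P = expm_multiply(A, p0, start=times[0], stop=times[-1],
                  num=len(times), endpoint=True)
for t, p in zip(times, P):
    # 1 - p.sum() is the probability that has escaped the truncated set,
    # which bounds the truncation error in the FSP sense.
    print(f"t = {t:.1f}  mean copy number = {np.dot(n, p):6.2f}  retained mass = {p.sum():.6f}")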
Abstract:
Hepatitis C virus (HCV) core (C) protein is thought to bind to viral RNA before it undergoes oligomerization leading to RNA encapsidation. Details of these events are so far unknown. The 5ʹ-terminal C protein coding sequence, which includes an adenine (A)-rich tract, is part of an internal ribosome entry site (IRES). This nucleotide sequence, but not the corresponding protein sequence, is needed for proper initiation of translation of viral RNA by an IRES-dependent mechanism. In this study, we examined the importance of this sequence for the ability of the C protein to bind to viral RNA. Serially truncated C proteins with deletions of 10 up to 45 N-terminal amino acids were expressed in Escherichia coli, purified and tested for binding to viral RNA by a gel shift assay. The results showed that truncation of the C protein from its N-terminus by more than 10 amino acids almost completely abolished its expression in E. coli. The latter could be restored by adding a tag to the N-terminus of the protein. The tagged proteins truncated by 15 or more amino acids showed anomalous migration in SDS-PAGE. Truncation by more than 20 amino acids resulted in a complete loss of the ability of the tagged C protein to bind to viral RNA. These results provide clues to the early events in C protein-RNA interactions leading to C protein oligomerization, RNA encapsidation and virion assembly.
Abstract:
An advanced rule-based Transit Signal Priority (TSP) control method is presented in this paper. An on-line transit travel time prediction model is the key component of the proposed method, enabling the selection of the most appropriate TSP plan for the prevailing traffic and transit conditions. The new method also adopts a priority plan re-development feature that enables modifying or even switching the already implemented priority plan to accommodate changes in traffic conditions. The proposed method utilizes the conventional green extension and red truncation strategies as well as two new strategies, green truncation and queue clearance. The new method is evaluated in microsimulation against a typical active TSP strategy and a base case scenario with no TSP control. The evaluation results indicate that the proposed method can produce significant benefits in reducing bus delay and improving service regularity, with negligible adverse impacts on non-transit street traffic.
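The plan-selection logic can be illustrated with a simplified rule set: given a predicted bus arrival time and the position in the signal cycle, choose between green extension and red truncation. The thresholds, timings and rule structure below are illustrative assumptions rather than the paper's calibrated method, which also draws on the on-line travel time prediction model and the plan re-development feature.

def select_tsp_plan(predicted_arrival_s, time_remaining_in_phase_s, green_active,
                    max_extension_s=10.0, max_truncation_s=10.0):
    """Pick a priority plan for one approaching bus (simplified rule set)."""
    if green_active:
        overrun = predicted_arrival_s - time_remaining_in_phase_s
        if overrun <= 0:
            return "no action (bus clears on the current green)"
        if overrun <= max_extension_s:
            return f"green extension of {overrun:.0f} s"
        return "no action (required extension exceeds the limit)"
    # Red phase: end it early if the bus would otherwise wait at the stop line.
    early = time_remaining_in_phase_s - predicted_arrival_s
    if 0.0 < early <= max_truncation_s:
        return f"red truncation of {early:.0f} s"
    return "no action"

# Bus predicted to reach the intersection 6 s after the current green would end.
print(select_tsp_plan(predicted_arrival_s=18.0, time_remaining_in_phase_s=12.0, green_active=True))
# Bus predicted to arrive 8 s before the red would normally end.
print(select_tsp_plan(predicted_arrival_s=22.0, time_remaining_in_phase_s=30.0, green_active=False))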
Abstract:
The steady problem of free surface flow due to a submerged line source is revisited for the case in which the fluid depth is finite and there is a stagnation point on the free surface directly above the source. Both the strength of the source and the fluid speed in the far field are measured by a dimensionless parameter, the Froude number. By applying techniques in exponential asymptotics, it is shown that there is a train of periodic waves on the surface of the fluid with an amplitude which is exponentially small in the limit that the Froude number vanishes. This study clarifies that periodic waves do form for flows due to a source, contrary to a suggestion by Chapman & Vanden-Broeck (2006, J. Fluid Mech., 567, 299-326). The exponentially small nature of the waves means they appear beyond all orders of the original power series expansion; this result explains why attempts at describing these flows using a finite number of terms in an algebraic power series incorrectly predict a flat free surface in the far field.
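To make the phrase "beyond all orders" concrete, the free-surface elevation can be written schematically as an algebraic expansion in the Froude number F plus a wave term whose amplitude decays faster than any power of F. The constants A, c, k and phi below are flow-dependent, and this generic low-Froude form illustrates the structure of the result rather than the paper's precise expressions:

  \eta(x) \sim \sum_{n=0}^{N} a_n(x)\, F^{2n} \;+\; A\, e^{-c/F^{2}} \cos(kx + \phi), \qquad F \to 0, \quad c > 0.

Because e^{-c/F^{2}} is smaller than F^{2n} for every n as F tends to zero, truncating the series at any finite N reproduces only the waveless terms, which is why purely algebraic expansions predict a flat free surface downstream.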
Abstract:
Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills, and by an agile ability to apply this knowledge to interacting with, and having an impact on, the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed both manually and in NVivo9. Grounded theory allowed categories and themes to emerge from the data. Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, the data were viewed during the final stage of analysis through the lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and potentially bounded. Themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. The first three were: information environment (the total information environment is perceived and understood); information structures (content, index structures, and retrieval algorithms are understood); and information vocabularies (fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools). The fourth threshold concept was concept fusion, the integration of the other three threshold concepts, further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (the dancing property), and a profound ontological shift (identity as searcher).
In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core; it also integrates the traits and praxes elicited from the study, attributes long recognized in LIS research as present in professional searchers. The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.
Abstract:
Tobacco plants were transformed with a chimeric transgene comprising sequences encoding β-glucuronidase (GUS) and the satellite RNA (satRNA) of cereal yellow dwarf luteovirus. When transgenic plants were infected with potato leafroll luteovirus (PLRV), which replicated the transgene-derived satRNA to a high level, the satellite sequence of the GUS:Sat transgene became densely methylated. Within the satellite region, all 86 cytosines in the upper strand and 73 of the 75 cytosines in the lower strand were either partially or fully methylated. In contrast, very low levels of DNA methylation were detected in the satellite sequence of the transgene in uninfected plants and in the flanking nonsatellite sequences in both infected and uninfected plants. Substantial amounts of truncated GUS:Sat RNA accumulated in the satRNA-replicating plants, and most of the molecules terminated at nucleotides within the first 60 bp of the satellite sequence. Whereas this RNA truncation was associated with high levels of satRNA replication, it appeared to be independent of the levels of DNA methylation in the satellite sequence, suggesting that it is not caused by methylation. All the sequenced GUS:Sat DNA molecules were hypermethylated in plants with replicating satRNA, despite the phloem restriction of the helper PLRV. In addition, small sense and antisense ∼22 nt RNAs derived from the satRNA were associated with the replicating satellite. These results suggest that sequence-specific DNA methylation spread into cells in which no satRNA replication occurred, and that this was mediated by the spread of unamplified satRNA and/or its associated 22 nt RNA molecules.
Abstract:
The orphan nuclear receptor liver receptor homologue-1 (LRH-1) has roles in development, cholesterol and bile acid homeostasis, and steroidogenesis. It also enhances proliferation and cell cycle progression of cancer cells. In breast cancer, LRH-1 expression is associated with invasive disease, positively correlates with ERα status and aromatase activity, and promotes oestrogen-dependent cell proliferation. However, the mechanism of action of LRH-1 in breast cancer epithelial cells is still not clear. By silencing or over-expressing LRH-1 in ER-positive MCF-7 and ER-negative MDA-MB-231 breast cancer cells, we have demonstrated that LRH-1 promotes motility and cell invasiveness. Similar effects were observed in the non-tumourigenic mammary epithelial cell line MCF-10A. Remodelling of the actin cytoskeleton and E-cadherin cleavage were observed with LRH-1 over-expression, contributing to increased migratory and invasive properties. Additionally, in LRH-1 over-expressing cells, truncation of the 120 kDa E-cadherin to the inactive 97 kDa form was observed. These post-translational modifications of E-cadherin may be associated with LRH-1-dependent changes in matrix metalloproteinase 9 expression. These findings suggest a new role for LRH-1 in promoting migration and invasion in breast cancer, independent of oestrogen sensitivity. LRH-1 may therefore represent a new target for breast cancer therapeutics.
Abstract:
Alignment-free methods, in which shared properties of sub-sequences (e.g. identity or match length) are extracted and used to compute a distance matrix, have recently been explored for phylogenetic inference. However, the scalability and robustness of these methods to key evolutionary processes remain to be investigated. Here, using simulated sequence sets of various sizes in both nucleotides and amino acids, we systematically assess the accuracy of phylogenetic inference using an alignment-free approach, based on D2 statistics, under different evolutionary scenarios. We find that compared to a multiple sequence alignment approach, D2 methods are more robust against among-site rate heterogeneity, compositional biases, genetic rearrangements and insertions/deletions, but are more sensitive to recent sequence divergence and sequence truncation. Across diverse empirical datasets, the alignment-free methods perform well for sequences sharing low divergence, at greater computation speed. Our findings provide strong evidence for the scalability and the potential use of alignment-free methods in large-scale phylogenomics.
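The core of the D2 family of statistics is a shared k-mer count, which can be sketched in a few lines. The k-mer length, toy sequences and simple self-normalised dissimilarity below are illustrative choices, not the exact D2-based measures evaluated in the study.

from collections import Counter
from math import sqrt

def kmer_counts(seq, k):
    """Count occurrences of each overlapping k-mer in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k=3):
    """Raw D2 statistic: sum over shared k-mers of the product of their counts."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

def d2_dissimilarity(seq_a, seq_b, k=3):
    """Turn D2 into a crude dissimilarity by normalising against self-comparisons."""
    num = d2(seq_a, seq_b, k)
    return 1.0 - num / sqrt(d2(seq_a, seq_a, k) * d2(seq_b, seq_b, k))

# Toy nucleotide sequences; in practice the pairwise dissimilarities feed a standard
# distance-based tree-building method such as neighbour joining.
s1 = "ATGCGTACGTTAGCGTACGATCGTACGT"
s2 = "ATGCGTACGTTAGCGTACGATCGAACGT"
s3 = "TTTTGCAGGCCATATATGGCCGCAATTT"
print(f"d(s1, s2) = {d2_dissimilarity(s1, s2):.3f}")  # near-identical sequences: small
print(f"d(s1, s3) = {d2_dissimilarity(s1, s3):.3f}")  # compositionally different: larger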
Abstract:
Improved sequencing technologies offer unprecedented opportunities for investigating the role of rare genetic variation in common disease. However, there are considerable challenges with respect to study design, data analysis and replication. Using pooled next-generation sequencing of 507 genes implicated in the repair of DNA in 1,150 samples, an analytical strategy focused on protein-truncating variants (PTVs) and a large-scale sequencing case-control replication experiment in 13,642 individuals, here we show that rare PTVs in the p53-inducible protein phosphatase PPM1D are associated with predisposition to breast cancer and ovarian cancer. PPM1D PTV mutations were present in 25 out of 7,781 cases versus 1 out of 5,861 controls (P = 1.12 × 10^-5), including 18 mutations in 6,912 individuals with breast cancer (P = 2.42 × 10^-4) and 12 mutations in 1,121 individuals with ovarian cancer (P = 3.10 × 10^-9). Notably, all of the identified PPM1D PTVs were mosaic in lymphocyte DNA and clustered within a 370-base-pair region in the final exon of the gene, carboxy-terminal to the phosphatase catalytic domain. Functional studies demonstrate that the mutations result in enhanced suppression of p53 in response to ionizing radiation exposure, suggesting that the mutant alleles encode hyperactive PPM1D isoforms. Thus, although the mutations cause premature protein truncation, they do not result in the simple loss-of-function effect typically associated with this class of variant, but instead probably have a gain-of-function effect. Our results have implications for the detection and management of breast and ovarian cancer risk. More generally, these data provide new insights into the role of rare and of mosaic genetic variants in common conditions, and the use of sequencing in their identification.
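The headline comparison of carrier frequencies can be set up directly from the counts quoted above. The sketch below tabulates carriers against non-carriers and applies a Fisher's exact test; the choice of test is an assumption made for illustration, since the paper's exact statistical procedure is not stated in this abstract.

from scipy.stats import fisher_exact

# PPM1D protein-truncating variant carriers, as reported in the abstract.
cases_with, cases_total = 25, 7781
controls_with, controls_total = 1, 5861

table = [
    [cases_with, cases_total - cases_with],
    [controls_with, controls_total - controls_with],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, Fisher exact P = {p_value:.2e}")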
Abstract:
We report the results of two studies of aspects of the consistency of truncated nonlinear integral equation based theories of freezing: (i) We show that the self-consistent solutions to these nonlinear equations are unfortunately sensitive to the level of truncation. For the hard sphere system, if the Wertheim–Thiele representation of the pair direct correlation function is used, the inclusion of part but not all of the triplet direct correlation function contribution, as has been common, worsens the predictions considerably. We also show that the convergence of the solutions found, with respect to number of reciprocal lattice vectors kept in the Fourier expansion of the crystal singlet density, is slow. These conclusions imply great sensitivity to the quality of the pair direct correlation function employed in the theory. (ii) We show the direct correlation function based and the pair correlation function based theories of freezing can be cast into a form which requires solution of isomorphous nonlinear integral equations. However, in the pair correlation function theory the usual neglect of the influence of inhomogeneity of the density distribution on the pair correlation function is shown to be inconsistent to the lowest order in the change of density on freezing, and to lead to erroneous predictions.
Abstract:
Micropolar fluid flow over a semi-infinite flat plate has been described using parabolic co-ordinates and the method of series truncation in order to study the flow from low to large Reynolds numbers. These co-ordinates permit study of the flow regime at the leading edge. Numerical results have been presented for different Reynolds numbers. Results show a reduction in skin friction.