930 results for Post-processing
Abstract:
The performance of a transonic fan operating within nonuniform inlet flow remains a key concern for the design and operability of a turbofan engine. This paper applies computational methods to improve the understanding of the interaction between a transonic fan and an inlet total pressure distortion. The test case studied is the NASA rotor 67 stage operating with a total pressure distortion covering a 120-deg sector of the inlet flow field. Full-annulus, unsteady, three-dimensional CFD has been used to simulate the test rig installation and the full fan assembly operating with inlet distortion. Novel post-processing methods have been applied to extract the fan performance and features of the interaction between the fan and the nonuniform inflow. The results of the unsteady computations agree well with the measurement data. The local operating condition of the fan at different positions around the annulus has been tracked and analyzed, and it is shown to be highly dependent on the swirl and mass flow redistribution that the rotor induces ahead of itself in response to the incoming distortion. These upstream flow effects lead to a variation in work input that determines the distortion pattern seen downstream of the fan stage. The unsteady computations also reveal more complex flow features downstream of the fan stage, which arise from the three-dimensionality and unsteadiness of the flow. © 2012 American Society of Mechanical Engineers.
Abstract:
This work addresses the problem of deriving F0 from distant-talking speech signals acquired by a microphone network. The proposed method exploits the redundancy across the channels by jointly processing the different signals. To this end, a multi-microphone periodicity function is derived from the magnitude spectra of all the channels. This function allows F0 to be estimated reliably, even under reverberant conditions, without the need for any post-processing or smoothing technique. Experiments conducted on real data show that the proposed frequency-domain algorithm is more suitable than other time-domain-based ones.
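As a hedged sketch of the idea (not the paper's exact formulation), one way to build a multi-microphone periodicity function is harmonic summation over each channel's magnitude spectrum, accumulated jointly across channels before picking the best F0 candidate; the frame length, candidate grid and number of harmonics below are illustrative choices:

```python
import numpy as np

def multichannel_f0(frames, fs, f0_min=60.0, f0_max=400.0, n_harm=5):
    """Estimate F0 from simultaneous frames of several microphones.

    frames: array of shape (n_channels, frame_len), one time frame per channel.
    Returns the F0 candidate (Hz) maximizing a periodicity score summed
    over channels (harmonic summation on the magnitude spectra).
    """
    n_ch, n = frames.shape
    nfft = 4 * n  # zero-padding for a finer frequency grid
    spec = np.abs(np.fft.rfft(frames * np.hanning(n), nfft))
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    candidates = np.arange(f0_min, f0_max, 1.0)
    scores = np.zeros_like(candidates)
    for i, f0 in enumerate(candidates):
        # accumulate spectral magnitude at the first n_harm harmonics,
        # jointly over all channels (this is the multi-microphone step)
        bins = np.searchsorted(freqs, f0 * np.arange(1, n_harm + 1))
        bins = bins[bins < spec.shape[1]]
        scores[i] = spec[:, bins].sum()
    return candidates[np.argmax(scores)]
```

Because the score pools all channels before the argmax, a microphone in a poor acoustic position is outvoted by the others rather than producing its own erroneous F0 track, which is consistent with the robustness to reverberation the abstract claims.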
Abstract:
The commercial far-range (>10 m) spatial data collection methods for acquiring an infrastructure's geometric data are not completely automated because of the necessary manual pre- and/or post-processing work. The required amount of human intervention and, in some cases, the high equipment costs associated with these methods impede their adoption by the majority of infrastructure mapping activities. This paper presents an automated stereo-vision-based method, as an alternative and inexpensive solution, for producing a sparse Euclidean 3D point cloud of an infrastructure scene from two video streams captured by a pair of calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames, and the 3D coordinates of the matched feature points are then calculated via triangulation. The SURF features detected in two successive video frames are also automatically matched, with the RANSAC algorithm used to discard mismatches. The quaternion motion estimation method is then used along with bundle adjustment optimization to register successive point clouds. The method was tested on a database of infrastructure stereo video streams. The validity and statistical significance of the results were evaluated by comparing the spatial distances of randomly selected feature points with their corresponding tape measurements.
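As a hedged illustration of the per-frame-pair stage described above (SURF detection, matching, RANSAC outlier rejection, triangulation), here is a minimal OpenCV sketch; the function and variable names are illustrative, SURF requires the opencv-contrib build, and the calibrated projection matrices are assumed given:

```python
import cv2
import numpy as np

# Assumed inputs: left/right frames and 3x4 projection matrices P1, P2
# from a prior stereo calibration; names are illustrative.
def sparse_cloud(img_l, img_r, P1, P2):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # opencv-contrib
    kp1, des1 = surf.detectAndCompute(img_l, None)
    kp2, des2 = surf.detectAndCompute(img_r, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T  # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T
    # RANSAC on the fundamental matrix discards mismatches, as in the paper
    F, inl = cv2.findFundamentalMat(pts1.T, pts2.T, cv2.FM_RANSAC, 1.0, 0.99)
    inl = inl.ravel().astype(bool)
    X = cv2.triangulatePoints(P1, P2, pts1[:, inl], pts2[:, inl])  # 4xN homog.
    return (X[:3] / X[3]).T  # Nx3 Euclidean points
```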
Abstract:
The commercial far-range (>10 m) infrastructure spatial data collection methods are not completely automated. They require a significant amount of manual post-processing work and, in some cases, the equipment costs are high. This paper presents a method that is the first step of a stereo videogrammetric framework and holds the promise of addressing these issues. Under this method, video streams are initially collected from a calibrated set of two video cameras. For each pair of simultaneous video frames, visual feature points are detected and their spatial coordinates are computed. The result, in the form of a sparse 3D point cloud, is the basis for the next steps in the framework (i.e., camera motion estimation and dense 3D reconstruction). A set of data collected from an ongoing infrastructure project is used to show the merits of the method. A comparison with existing tools is also presented to indicate how the proposed method differs in level of automation and accuracy of results.
Abstract:
Understanding and controlling the hierarchical self-assembly of carbon nanotubes (CNTs) is vital for designing materials such as transparent conductors, chemical sensors, high-performance composites, and microelectronic interconnects. In particular, many applications require high-density CNT assemblies that cannot currently be made directly by low-density CNT growth, and therefore require post-processing by methods such as elastocapillary densification. We characterize the hierarchical structure of pristine and densified vertically aligned multi-wall CNT forests by combining small-angle and ultra-small-angle x-ray scattering (USAXS) techniques. This enables the nondestructive measurement of both the individual CNT diameter and the CNT bundle diameter within CNT forests, which are otherwise quantified only by delicate and often destructive microscopy techniques. Our measurements show that multi-wall CNT forests grown by chemical vapor deposition consist of isolated and bundled CNTs, with an average bundle diameter of 16 nm. After capillary densification of the CNT forest, USAXS reveals bundles with a diameter of 4 μm, in addition to the small bundles observed in the as-grown forests. Combining these characterization methods with new CNT processing methods could enable the engineering of macro-scale CNT assemblies that exhibit significantly improved bulk properties. © 2011 American Institute of Physics.
Abstract:
This paper proposes an ultra-low-power CMOS random number generator (RNG) based on an oscillator-sampling architecture. The noisy oscillator consists of a dual-drain MOS transistor, a noise generator and a voltage-controlled oscillator. The dual-drain MOS transistor adds extra noise to the drain current or the output voltage, so that the jitter of the oscillator is much larger than that of a normal oscillator. The frequency division ratio between the high-frequency sampling oscillator and the noisy oscillator is small. The RNG has been fabricated in a 0.35 μm CMOS process. It produces good-quality bit streams without any post-processing. The bit rate of this RNG can be as high as 100 kbps, with a typical ultra-low power dissipation of 0.91 μW. This novel circuit is a promising unit for low-power system and communication applications. (c) 2007 Elsevier Ltd. All rights reserved.
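As a hedged, software-only illustration of the oscillator-sampling principle (a toy model, not the fabricated circuit): a fast square wave is sampled at the jittered edges of a slow oscillator, and once the jitter accumulated per slow period spans several fast periods, each sampled bit becomes unpredictable. All parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def oscillator_sampling_bits(n_bits, f_fast=10e6, f_slow=100e3, jitter=0.2):
    """Toy model of an oscillator-sampling RNG (illustrative only).

    A fast square wave at f_fast is sampled on each rising edge of a slow
    oscillator whose period is perturbed by Gaussian jitter (std = jitter *
    nominal period). Sufficient jitter makes the sampled bit random.
    """
    t = 0.0
    T_slow = 1.0 / f_slow
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        t += T_slow * (1.0 + jitter * rng.standard_normal())  # jittered edge
        bits[i] = int(t * f_fast) & 1  # state of the fast square wave here
    return bits

bits = oscillator_sampling_bits(10000)
print("bias:", bits.mean())  # close to 0.5 when the jitter is large enough
```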
Abstract:
For the purpose of manufacturing cigarette filter tows and filter rods, the melt-spinning, adhesion and adsorption properties of poly(lactic acid) (PLA) were studied. Rheological measurements were performed to examine the effects of various processing conditions, including residual moisture, on the melt flowability and spinnability. The melt spinning and post-processing steps were followed by determining the molecular weight and the thermal and mechanical properties of the fibers. The results obtained were useful for establishing the specification of PLA resins for filter tow and filter rod manufacturing and for choosing proper melt-spinning and post-processing technologies.
Abstract:
Based on the existing navigation equipment of a 6000-m autonomous underwater vehicle (AUV) and its long-baseline acoustic positioning system, two EKF-based navigation data fusion algorithms are proposed that fuse the ranging acoustic beacon data with measurements from the low-cost navigation sensors carried on the vehicle: a turbine log, a pressure sensor and a TCM2 electronic compass. The algorithms estimate the vehicle's position together with the water-current parameters, solving the position estimation problem for deep-water vehicles in complex environments. Monte Carlo simulations and post-processing of lake trial data show that the designed position estimation algorithms converge quickly, achieve high accuracy and require little computation time, meeting the navigation needs of deep-water vehicles.
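A minimal sketch of the kind of EKF fusion described, under assumed simplifications: a 2D state of position plus a constant current velocity, a water-relative velocity input from the log and compass, and a range measurement to a single beacon at a known position. The names and the measurement model are illustrative, not the paper's exact formulation:

```python
import numpy as np

def ekf_step(x, P, dt, v_log, z_range, beacon, Q, r_var):
    """One EKF predict/update cycle for AUV position + current estimation.

    State x = [px, py, cx, cy]: position and (slowly varying) current velocity.
    v_log:   water-relative velocity from the turbine log + compass heading.
    z_range: measured distance to an acoustic beacon at known position `beacon`.
    """
    F = np.eye(4)
    # predict: the vehicle moves with log velocity plus the estimated current
    x = x + dt * np.array([v_log[0] + x[2], v_log[1] + x[3], 0.0, 0.0])
    F[0, 2] = F[1, 3] = dt
    P = F @ P @ F.T + Q

    # update with the range measurement (nonlinear, so linearize)
    d = x[:2] - beacon
    pred = np.linalg.norm(d)
    H = np.array([[d[0] / pred, d[1] / pred, 0.0, 0.0]])
    S = H @ P @ H.T + r_var
    K = P @ H.T / S
    x = x + (K * (z_range - pred)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Including the current components in the state is what lets the filter separate dead-reckoning drift (from the water-relative log) from the absolute position information carried by the beacon ranges.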
Abstract:
Superfine mineral materials mainly result from the pulverization of natural mineral resources; they are a class of new materials that can replace traditional materials and currently enjoy the widest application and highest consumption in the market. As a result, superfine mineral materials have very broad and promising market potential. Superfine pulverization technology is the only route to the in-depth processing of most traditional materials, and it is also one of the major means by which mineral materials realize their applications. China is rich in natural resources such as heavy calcite, kaolin and wollastonite, which find a very wide market of application in, for example, the paper-making, rubber, plastics, paint, coating, pharmaceutical, environment-friendly recycled paper and fine chemical industries. However, because these resources are generally processed at a low level, the economic benefits and economies of scale of their processing have yet to be fully realized, and large gaps in product indices and in superfine processing equipment and technologies still exist between China and the advanced Western countries. Based on resource assessment and market potential analysis, an in-depth study was carried out in this dissertation of superfine pulverization technology and superfine pulverized mineral materials from the viewpoints of mineralogical features, determination of processing technologies, analytical methods and applications, utilizing a variety of modern analytical methods from mineralogy, superfine pulverization technology, macromolecular chemistry, materials science and physical chemistry, together with computer technology. The focus was placed on an innovative study of the in-depth processing technology and processing apparatus for kaolin and heavy calcite, as well as the application of the superfine products. The main contents and major achievements of this study are as follows: 1. The superfine pulverization of mineral materials should be integrated with the study of their crystal structures and chemical composition, and special attention should be paid to the post-processing technologies for these materials, rather than to particle-size indices alone, according to their fields of application. Both technical and economic feasibility must be taken into account in the study of superfine pulverization technologies, since these two kinds of feasibility are the premise for the industrialized application of superfine pulverized mineral materials. Based on this principle, a preliminary chemical treatment method, a technology of synchronized superfine pulverization and grading, and a processing technology and apparatus integrating modification and depolymerization were employed in this study, achieving superfine products with narrow particle size distribution, good dispersibility, good application performance, low consumption and high effectiveness. Heavy calcite and kaolin are the two superfine mineral materials with the highest industrial consumption. Heavy calcite is mainly applied in the paper-making, coating and plastics industries; the hard kaolin of northern China is mainly used in macromolecular materials and chemical industries, while the soft kaolin of southern China is mainly used for paper making.
On the other hand, superfine pulverized heavy calcite and kaolin can both be used as functional additives to cement, the material with the largest consumption in the world. A variety of analytical methods and instruments, such as transmission and scanning electron microscopy, X-ray diffraction analysis, infrared analysis and laser particle size analysis, were applied to elucidate the properties of superfine mineral materials and the mechanisms of their functions as used in plastics and high-performance cement. The detection of superfine mineral materials is closely related to their post-processing and application. Traditional detection and analytical methods for superfine mineral materials include optical microscopy, infrared spectral analysis and a series of microbeam techniques such as transmission and scanning electron microscopy and X-ray diffraction analysis. In addition to these traditional methods, ultra-weak luminescent photon detection technology of high precision, high sensitivity and high signal-to-noise ratio was utilized by the author for the first time in the study of superfine mineral materials, in an attempt to explore a completely new method and means for the characterization of superfine materials; the experimental results are highly encouraging. The innovations of this study are represented in the following aspects: 1. A preliminary chemical treatment method, a technology of synchronized superfine pulverization and grading, and a processing technology and apparatus integrating modification and depolymerization were utilized in an innovative way, and superfine products with narrow particle size distribution, good dispersibility, good application performance, low consumption and high effectiveness were achieved in the industrialized production process. Moreover, a new modification technology, together with directions for producing the related chemicals, was invented, and the modification technology was awarded a patent. 2. The high-precision, high-sensitivity, high-signal-to-noise-ratio ultra-weak luminescent photon detection technology was utilized for the first time in this study to explore superfine mineral materials; the experimental results are comparable with those acquired by scanning electron microscopy and demonstrate unique advantages. Further study may well lead to a completely new method and means for the characterization of superfine materials. 3. During the heating of kaolinite and its decomposition into metakaolinite, the diffraction peaks disappear gradually: the reflection of the basal plane (001) disappears first, followed by the slow disappearance of the (hkl) diffraction peaks. This was first discovered in the author's experiments and has never before been reported by other scholars. 4. This study made, and discusses in detail, the first discovery that superfine mineral materials can serve as dispersants in plastics, and the first discovery of their comprehensive functions as activators, water-reducing agents and aggregates in high-performance cement.
This study was jointly supported by two key grants from Guangdong Province for Scientific and Technological Research in the 10th Five-Year Plan period (1,200,000 yuan for "Preparation technology, apparatus and post-processing research on sub-micron superfine pulverization by mechanical methods", and 300,000 yuan for "Methods and instruments of biological photon technology for the characterization of nanometer materials"), and by two grants from the Guangdong Province 100 Projects for Scientific and Technological Innovation (700,000 yuan for "Pilot experimentation of superfine, modified heavy calcite for the paper-making, rubber and plastics industries", and 400,000 yuan for "Study of superfine, modified wollastonite of large length-to-diameter ratio").
Abstract:
This dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For signal representation on natural bases, I present the fundamentals and algorithms of independent component analysis (ICA) and its original applications to the separation of natural earthquake signals and of survey seismic signals. For signal representation on deterministic bases, the dissertation proposes regularized least-squares inversion methods for seismic data reconstruction, with sparseness constraints and preconditioned conjugate gradient (PCG) solvers, and their applications to seismic deconvolution, Radon transformation and related problems. The core content is a de-aliased reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, and the latter reconstructs complete data from sparse or irregular data. Their common goal is to provide pre-processing and post-processing methods for seismic pre-stack depth migration. ICA can separate original signals from their mixtures, or extract the basic structure from the analyzed data. I survey the fundamentals, algorithms and applications of ICA. By comparison with the Karhunen-Loève (KL) transformation, I propose the concept of an independent component transformation (ICT). On the basis of the negentropy measure of independence, I implemented FastICA and improved it via the covariance matrix. By analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, a first in the geophysical community, and implement the separation of noise from seismic signals. Synthetic and real data examples show the applicability of ICA to seismic signal processing, with promising initial results. ICA is also applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, leading to a more reasonable interpretation of subsurface discontinuities. The results show the promise of ICA for geophysical signal processing. Using the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship among PCA, ICA and the wavelet transform is established, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution, and it is validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are essential procedures. I first review seismic imaging methods in order to demonstrate the critical effect of regularization. A review of seismic interpolation algorithms shows that de-aliased reconstruction of unevenly sampled data remains a challenge. The fundamentals of seismic reconstruction are then discussed, and sparseness-constrained least-squares inversion with a preconditioned conjugate gradient solver is studied and implemented.
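As a rough illustration of the separation step, here is a generic negentropy-based FastICA on whitened data (the textbook symmetric algorithm with a tanh nonlinearity; the dissertation's covariance-matrix improvement is not reproduced):

```python
import numpy as np

def fastica(X, n_iter=200, tol=1e-6):
    """Generic FastICA with a tanh nonlinearity (negentropy-based).

    X: (n_signals, n_samples) mixed observations, e.g. noisy seismic traces.
    Returns the unmixing matrix W and the estimated sources W @ X_white.
    """
    X = X - X.mean(axis=1, keepdims=True)
    # whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    Xw = (E / np.sqrt(d)).T @ X          # whitened data
    n = X.shape[0]
    W = np.linalg.qr(np.random.randn(n, n))[0]
    for _ in range(n_iter):
        g = np.tanh(W @ Xw)
        g_prime = 1.0 - g ** 2
        W_new = (g @ Xw.T) / Xw.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        # symmetric decorrelation keeps the rows orthonormal
        U, _, Vt = np.linalg.svd(W_new)
        W_new = U @ Vt
        done = np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1.0)) < tol
        W = W_new
        if done:
            break
    return W, W @ Xw
```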
Choosing a constraint term based on the Cauchy distribution, I programmed the PCG algorithm and implemented sparse seismic deconvolution and high-resolution Radon transformation with PCG, in preparation for seismic data reconstruction. For seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well, but existing methods cannot combine the two. Here, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that is both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem with an adaptive DFT-weighted norm regularization term. The inverse problem is solved by the preconditioned conjugate gradient method, which makes the solution stable and rapidly convergent. Under the assumption that seismic data consist of a finite number of linear events, it follows from the sampling theorem that aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three applications are discussed: interpolation of evenly gapped traces, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Numerical examples on both synthetic and real data show that the proposed method is valid, efficient and practical. The research is valuable for seismic data regularization and cross-well seismics. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data evenly sampled and consistent with the velocity dataset. The methods of this dissertation are used to interpolate and extrapolate shot gathers instead of simply inserting zero traces, so the migration aperture is enlarged and the migration result improved. The results demonstrate the method's effectiveness and practicability.
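The following sketch conveys the flavor of the reconstruction described: minimum-norm least squares with a frequency-domain weighted regularizer, solved by conjugate gradients on the normal equations. The adaptive DFT-derived weights and the preconditioning are simplified to a fixed spectral weight `w_freq`, so this is an assumption-laden toy, not the dissertation's algorithm:

```python
import numpy as np

def reconstruct(y, mask, w_freq, eps=1e-3, n_iter=100):
    """Reconstruct a regularly gridded trace x from samples y observed where
    mask == 1, by solving  min ||S x - y||^2 + eps ||W F x||^2  with conjugate
    gradients on the normal equations. w_freq penalizes frequencies expected
    to be absent (a stand-in for adaptive DFT-derived weights)."""
    def A(x):  # normal operator: S^T S x + eps * F^H W^2 F x
        return mask * x + eps * np.real(np.fft.ifft(w_freq ** 2 * np.fft.fft(x)))

    x = np.zeros(mask.size)
    r = mask * y - A(x)      # right-hand side is S^T y
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Downweighting the known signal band while heavily penalizing the rest is what suppresses aliased energy in the gaps, mirroring the low-frequency-predicted weighting described above.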
Abstract:
BACKGROUND: In the current climate of high-throughput computational biology, the inference of a protein's function from related measurements, such as protein-protein interaction relations, has become a canonical task. Most existing technologies pursue this task as a classification problem, on a term-by-term basis, for each term in a database such as the Gene Ontology (GO) database, a popular rigorous vocabulary for biological functions. However, ontology structures are essentially hierarchies, with certain top-to-bottom annotation rules which protein function predictions should in principle follow. Currently, the most common approach to imposing these hierarchical constraints on network-based classifiers is the application of transitive closure to their predictions. RESULTS: We propose a probabilistic framework to integrate information in relational data, in the form of a protein-protein interaction network, and a hierarchically structured database of terms, in the form of the GO database, for the purpose of protein function prediction. At the heart of our framework is a factorization of local neighborhood information in the protein-protein interaction network across successive ancestral terms in the GO hierarchy. We introduce a classifier within this framework, with a computationally efficient implementation, that produces GO-term predictions that naturally obey a hierarchical 'true-path' consistency from root to leaves, without the need for further post-processing. CONCLUSION: A cross-validation study, using data from the yeast Saccharomyces cerevisiae, shows our method offers substantial improvements over both standard 'guilt-by-association' (i.e., nearest-neighbor) and more refined Markov random field methods, whether in their original form or when post-processed to artificially impose 'true-path' consistency. Further analysis of the results indicates that these improvements are associated with increased predictive capabilities (i.e., increased positive predictive value), and that this increase holds uniformly across GO-term depth. Additional in silico validation on a collection of new annotations recently added to GO confirms the advantages suggested by the cross-validation study. Taken as a whole, our results show that a hierarchical approach to network-based protein function prediction, one that exploits the ontological structure of protein annotation databases in a principled manner, can offer substantial advantages over the successive application of 'flat' network-based methods.
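To make the 'true-path' rule concrete, here is a hedged sketch of the post-hoc enforcement that the proposed classifier renders unnecessary: capping each term's score by its ancestors' scores so that no prediction exceeds that of a parent term (the term names are invented for illustration):

```python
def true_path_consistent(scores, parents):
    """Make raw per-term scores obey the GO true-path rule.

    scores:  dict term -> raw probability from any flat classifier.
    parents: dict term -> list of parent terms (empty list at the root).
    Returns scores in which no term exceeds any of its ancestors.
    """
    consistent = {}

    def resolve(term):
        if term not in consistent:
            bound = min((resolve(p) for p in parents[term]), default=1.0)
            consistent[term] = min(scores[term], bound)
        return consistent[term]

    for t in scores:
        resolve(t)
    return consistent

# toy example: a child annotation cannot exceed its parent's
raw = {"root": 0.9, "binding": 0.4, "dna_binding": 0.7}
tree = {"root": [], "binding": ["root"], "dna_binding": ["binding"]}
print(true_path_consistent(raw, tree))  # dna_binding capped at 0.4
```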
Abstract:
Standard structure-from-motion algorithms recover the 3D structure of points. If a surface representation is desired, for example a piecewise planar one, then a two-step procedure typically follows: in the first step the plane membership of points is determined manually, and in a subsequent step planes are fitted to the sets of points thus determined and their parameters are recovered. This paper presents an approach for automatically segmenting planar structures from a sequence of images while simultaneously estimating their parameters. In the proposed approach the plane membership of points is determined automatically, and the planar structure parameters are recovered directly within the algorithm rather than indirectly in a post-processing stage. Simulated and real experimental results show the efficacy of this approach.
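For contrast with the paper's simultaneous formulation, a sketch of the conventional sequential baseline: greedy RANSAC that determines plane membership and fits parameters one plane at a time (the thresholds and structure are illustrative):

```python
import numpy as np

def ransac_plane(pts, thresh=0.02, n_iter=500, rng=np.random.default_rng(0)):
    """Fit one plane (unit normal n, offset d with n.x + d = 0) to 3D points."""
    best_inliers, best_plane = None, None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(pts @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

def segment_planes(pts, min_support=50):
    """Greedily peel off planes until too few supporting points remain."""
    planes = []
    while len(pts) >= min_support:
        plane, inl = ransac_plane(pts)
        if inl.sum() < min_support:
            break
        planes.append(plane)
        pts = pts[~inl]  # remaining points compete for the next plane
    return planes
```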
Abstract:
In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols, and analyses their performance on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through instruction set extension of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. The FPGA implementation of recent hash function designs from the SHA-3 competition is discussed, and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces; this requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
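The thesis summary does not name its post-processing scheme; as a hedged illustration of the kind of step meant, here is the classic von Neumann corrector, which removes bias from a raw TRNG bit stream at the cost of throughput:

```python
def von_neumann(bits):
    """Debias a raw TRNG bit stream (illustrative post-processing only).

    Non-overlapping pairs are mapped 01 -> 0 and 10 -> 1; 00 and 11 are
    discarded. For independent bits with any fixed bias the output is
    unbiased, at the cost of at least a 4x rate reduction.
    """
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

raw = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
print(von_neumann(raw))  # -> [0, 1] from the pairs (0,1) and (1,0)
```

Hardware implementations often pair such a corrector with an online health test that flags a stuck or strongly biased noise source, matching the failure detection mentioned above.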
Abstract:
We conducted a pilot study on 10 patients undergoing general surgery to test the feasibility of diffuse reflectance spectroscopy in the visible wavelength range as a noninvasive monitoring tool for blood loss during surgery. Ratios of raw diffuse reflectance at wavelength pairs were tested as a first pass for estimating hemoglobin concentration. Such ratios can be calculated easily and rapidly with limited post-processing, so the instrument can be considered a near real-time monitoring device. We found the best hemoglobin correlations when ratios at isosbestic points of oxy- and deoxyhemoglobin were used, specifically 529/500 nm. Baseline subtraction improved the correlations, specifically at the 520/509 nm pair. These results demonstrate proof of concept for the ability of this noninvasive device to monitor hemoglobin concentration changes due to surgical blood loss. The 529/500 nm ratio also appears to account for variations in probe pressure, as determined from measurements on two volunteers.
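A minimal sketch of the ratio computation described (the array names and the optional baseline-subtraction step are illustrative):

```python
import numpy as np

def reflectance_ratio(wavelengths, spectrum, pair=(529.0, 500.0), baseline=None):
    """Ratio of diffuse reflectance at an isosbestic wavelength pair.

    wavelengths: 1D array (nm); spectrum: reflectance at those wavelengths.
    baseline: optional spectrum to subtract first (the study reports the
    best pair shifting to 520/509 nm in that case).
    """
    if baseline is not None:
        spectrum = spectrum - baseline
    i = np.argmin(np.abs(wavelengths - pair[0]))
    j = np.argmin(np.abs(wavelengths - pair[1]))
    return spectrum[i] / spectrum[j]
```

Evaluating at isosbestic points, where oxy- and deoxyhemoglobin absorb equally, is what makes the ratio track total hemoglobin rather than oxygen saturation.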
Abstract:
PURPOSE: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion-weighted imaging. THEORY: Images with reduced artifacts are reconstructed with an iterative projection onto convex sets (POCS) procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. METHODS: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved diffusion-weighted imaging data corresponding to different k-space trajectories and matrix condition numbers. RESULTS: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. CONCLUSION: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods.
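As a schematic of a POCS-style loop of the kind described, alternating projection onto a coil-sensitivity-consistency set and a data-consistency set (Cartesian k-space and simple operators are assumed for brevity; POCSMUSE's actual operators and constraints differ):

```python
import numpy as np

def pocs_recon(kspace, mask, sens, n_iter=50):
    """POCS-style reconstruction with a coil-sensitivity constraint.

    kspace: (n_coils, ny, nx) acquired data, zeros where not sampled.
    mask:   (ny, nx) boolean sampling mask.
    sens:   (n_coils, ny, nx) coil sensitivity profiles.
    Returns a single combined image estimate.
    """
    img = np.zeros(kspace.shape[1:], dtype=complex)
    for _ in range(n_iter):
        # project onto the sensitivity-consistency set: coil images are
        # forced to be sens * (one shared image)
        coil_imgs = sens * img
        k = np.fft.fft2(coil_imgs)
        # project onto the data-consistency set: restore measured samples
        k = np.where(mask, kspace, k)
        coil_imgs = np.fft.ifft2(k)
        # combine back to a single image (least-squares coil combination)
        img = (np.conj(sens) * coil_imgs).sum(axis=0) / \
              ((np.abs(sens) ** 2).sum(axis=0) + 1e-8)
    return img
```

Because each step is a projection onto a convex set, the iteration moves the estimate toward their intersection, which is what makes the framework tolerant of different pulse sequences and k-space trajectories once the operators are swapped accordingly.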