974 results for: Multiple attenuation. Deconvolution. Seismic processing
Abstract:
The maturation of 5S RNA in Escherichia coli is poorly understood. Although it is known that large precursors of 5S RNA accumulate in mutant cells lacking the endoribonuclease RNase E, almost nothing is known about how the mature 5' and 3' termini of these molecules are generated. We have examined 5S RNA maturation in wild-type and single- or multiple-exoribonuclease-deficient cells by Northern blot and primer-extension analysis. Our results indicate that no mature 5S RNA is made in RNase T-deficient strains. Rather, 5S RNA precursors containing predominantly two extra nucleotides at the 3' end accumulate. Apparently, these 5S RNAs are functional, inasmuch as the mutant cells are viable, growing only slightly slower than wild type. Purified RNase T can remove the extra 3' residues, showing that it is directly involved in the trimming reaction. In contrast, mutations affecting other 3' exoribonucleases have no effect on 5S RNA maturation. Approximately 90% of the 5S RNAs in both wild-type and RNase T− cells contain mature 5' termini, indicating that 5' processing is independent of RNase T action. These data identify the enzyme responsible for generating the mature 3' terminus of 5S RNA molecules and also demonstrate that a completely processed 5S RNA molecule is not essential for cell survival.
Abstract:
This letter presents signal processing techniques to detect a passive thermal threshold detector based on a chipless time-domain ultrawideband (UWB) radio frequency identification (RFID) tag. The tag is composed of a UWB antenna connected to a transmission line, in turn loaded with a biomorphic thermal switch. The working principle consists of detecting the impedance change of the thermal switch, which occurs when the temperature exceeds a threshold. A UWB radar is used as the reader. The difference between the current time sample and a reference signal obtained by averaging previous samples is used to determine the switch transition and to mitigate the interference derived from clutter reflections. A gain compensation function is applied to equalize the attenuation due to propagation loss. An improved method based on the continuous wavelet transform with a Morlet wavelet is used to overcome detection problems associated with a low signal-to-noise ratio at the receiver. The average delay profile is used to detect the tag delay. Experimental measurements up to 5 m are obtained.
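A minimal sketch of the detection chain described above (subtraction of a reference obtained by averaging previous sweeps, followed by gain compensation for propagation loss). The function name, the power-law loss model and all parameters are illustrative assumptions, not taken from the letter:

```python
import numpy as np

def detect_switch_transition(traces, n_ref=8, path_loss_exp=2.0, dt=1.0):
    """traces: 2D array (n_sweeps, n_samples) of received UWB sweeps."""
    traces = np.asarray(traces, dtype=float)
    reference = traces[:n_ref].mean(axis=0)   # clutter estimate from past sweeps
    diff = traces[n_ref:] - reference         # residual: tag response + noise
    # Gain compensation: amplify late samples to offset propagation loss
    t = dt * (np.arange(traces.shape[1]) + 1)
    gain = t ** path_loss_exp
    return diff * gain

# Toy usage: static clutter, with a "switch transition" at sample 40
rng = np.random.default_rng(0)
clutter = rng.normal(size=64)
sweeps = np.tile(clutter, (12, 1)) + rng.normal(scale=0.01, size=(12, 64))
sweeps[8:, 40] += 1.0                         # tag impedance change appears
out = detect_switch_transition(sweeps)
peak_sample = int(np.abs(out).mean(axis=0).argmax())
```

Averaging several previous sweeps, rather than using a single one, keeps the clutter estimate from being dominated by the noise of any one measurement.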
Abstract:
The Gaia-ESO Survey is a large public spectroscopic survey that aims to derive radial velocities and fundamental parameters of about 10⁵ Milky Way stars in the field and in clusters. Observations are carried out with the multi-object optical spectrograph FLAMES, using simultaneously the medium-resolution (R ~ 20 000) GIRAFFE spectrograph and the high-resolution (R ~ 47 000) UVES spectrograph. In this paper we describe the methods and the software used for the data reduction, the derivation of the radial velocities, and the quality control of the FLAMES-UVES spectra. Data reduction has been performed using a workflow specifically developed for this project. This workflow runs the ESO public pipeline, optimizing the data reduction for the Gaia-ESO Survey; it automatically performs sky subtraction, barycentric correction and normalisation, and calculates radial velocities and a first guess of the rotational velocities. The quality control is performed using the output parameters from the ESO pipeline, by a visual inspection of the spectra, and by the analysis of the signal-to-noise ratio of the spectra. Using the observations of the first 18 months, specifically targets observed multiple times at different epochs, stars observed with both GIRAFFE and UVES, and observations of radial velocity standards, we estimated the precision and the accuracy of the radial velocities. The statistical error on the radial velocities is σ ~ 0.4 km s⁻¹ and is mainly due to uncertainties in the zero point of the wavelength calibration. However, we found a systematic bias with respect to the GIRAFFE spectra (~0.9 km s⁻¹) and to the radial velocities of the standard stars (~0.5 km s⁻¹) retrieved from the literature. This bias will be corrected in future data releases, when a common zero point for all the set-ups and instruments used for the survey is established.
Abstract:
This letter presents a method to model propagation channels for estimation, in which the sampling scheme can be arbitrary. Additionally, the method yields accurate models, with a size that converges to the channel duration, measured in Nyquist periods. It can be viewed as an improvement on the usual discretization based on regular sampling at the Nyquist rate. The method is introduced in the context of multiple delay estimation using the MUSIC estimator, and is assessed through a numerical example.
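As a point of reference for the multiple-delay setting mentioned above, the following sketch estimates a path delay with MUSIC from regularly sampled frequency-domain channel snapshots. It is the conventional Nyquist-style baseline the letter improves upon, not the arbitrary-sampling model itself; the sizes, SNR and delay grid are assumptions:

```python
import numpy as np

K, P, snapshots = 32, 1, 200          # freq. samples, paths, noisy snapshots
df, tau_true = 1.0, 0.37              # frequency step and true delay
f = df * np.arange(K)

rng = np.random.default_rng(1)
a_true = np.exp(-2j * np.pi * f * tau_true)       # steering vector of the path
H = a_true[None, :] + 0.05 * (
    rng.normal(size=(snapshots, K)) + 1j * rng.normal(size=(snapshots, K)))

R = (H.T @ H.conj()) / snapshots                  # sample covariance (K x K)
_, eigvecs = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = eigvecs[:, : K - P]                          # noise subspace

taus = np.linspace(0, 0.5, 501)
A = np.exp(-2j * np.pi * np.outer(f, taus))       # candidate steering vectors
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
tau_hat = taus[pseudo.argmax()]                   # peak of the pseudospectrum
```

The pseudospectrum peaks where a candidate steering vector is orthogonal to the noise subspace, i.e. at the true delay; the delay grid only needs to cover one unambiguous interval of length 1/df.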
Abstract:
The flanks of an oil-bearing structure were investigated to determine the most likely reservoir geometry in an area where the seismic path forks, in preparation for a field equity redetermination. Two alternative hypotheses were evaluated: a "high fork model", in which the reservoir top follows the higher of the two paths, and a "low fork model", in which it follows the lower path. I took four approaches to evaluate the hypotheses: 1) depth conversion with multiple velocity models to evaluate the fidelity of the picked horizon on models that did not contain a fork; 2) hand interpretation around the areas of high uncertainty to eliminate their influence; 3) assessment of the effect of path choice on the plausibility of the environment of deposition; and 4) subsurface geometry modeling with synthetics to compare calculated 1D seismic responses with current data. The investigation established that neither fork interpretation follows a continuous seismic reflector, but the two are otherwise equally plausible. Interval modeling revealed several structural scenarios, supporting both the high and the low fork, that fit the seismic data. To augment the low-fork argument, a scenario with an additional sand interval off-structure is recommended for its simplicity and plausibility.
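Approach 4 rests on 1D synthetic seismograms. A minimal sketch of the standard convolutional model (reflection coefficients from acoustic-impedance contrasts, convolved with a Ricker wavelet) is below; the layer impedances, sample interval and 25 Hz wavelet are illustrative assumptions, not field parameters:

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Zero-phase Ricker wavelet of peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-((np.pi * f * t) ** 2))

dt = 0.002                                    # 2 ms sample interval
impedance = np.concatenate([                  # three-layer impedance log
    np.full(100, 6.0e6), np.full(50, 4.5e6), np.full(100, 7.0e6)])

# Reflection coefficients at the layer boundaries
rc = np.zeros_like(impedance)
rc[1:] = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

synthetic = np.convolve(rc, ricker(25.0, dt), mode="same")
strongest = int(np.abs(synthetic).argmax())   # sample of the strongest event
```

Comparing such a calculated 1D response against the recorded trace is what lets alternative structural scenarios be ranked against the seismic data.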
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Background: Many clinical trials of DC-based immunotherapy involve administration of monocyte-derived DCs (Mo-DC) on multiple occasions. We aimed to determine the optimal cell-processing procedures and timing (leukapheresis, RBC depletion and cryopreservation) for generation of Mo-DC for clinical purposes. Methods: Leukapheresis was undertaken using a COBE Spectra. Two instrument settings were compared: the standard semi-automated software (Version 4.7) (n = 10) and the fully automated software (Version 6.0) (n = 40). Density gradient centrifugation using Ficoll, Percoll, a combination of these methods, or neither for RBC depletion was compared. Outcomes (including cell yield and purity) were compared for cryopreserved unmanipulated monocytes and cryopreserved Mo-DC. Results: Software Version 6.0 provided significantly better enrichment for monocytes (P
Abstract:
A specialised reconfigurable architecture targeted at wireless base-band processing is presented. It caters for multiple wireless standards, consumes less power than a processor-based solution, and can be scaled to run in parallel for processing multiple channels. Test resources and testing strategies are embedded in the architecture. The architecture is functionally partitioned according to the common operations found in wireless standards, such as CRC error detection, convolution and interleaving. These modules are linked via Virtual Wire hardware modules and route-through switch matrices, so data can be processed in any order through this interconnect structure. Virtual Wire ensures the same flexibility as normal interconnects, but reduces both the area occupied and the number of switches needed. The testing algorithm exhaustively scans all possible paths within the interconnection network and searches for faults in the processing modules. It starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each of the switches. The local switch matrix is tested in the same way. Next, the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. This paper compares various base-band processing solutions, describes the proposed platform and its implementation, outlines the test resources and algorithm, and concludes with the mapping of Bluetooth and GSM base-band processing onto the platform.
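Of the common operations listed above, CRC is the simplest to sketch in software. The bit-serial loop below, using the CCITT polynomial 0x1021 that is common in wireless standards, mirrors what a dedicated hardware module would compute one bit per clock cycle; the polynomial and initial value are illustrative, not taken from the paper:

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF, poly: int = 0x1021) -> int:
    """Bit-serial CRC-16 (CCITT polynomial), MSB first, no reflection."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):                    # one bit per "clock cycle"
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

checksum = crc16_ccitt(b"123456789")          # standard check input
# A receiver recomputes the CRC to detect corrupted frames:
corrupted = crc16_ccitt(b"123456788") != checksum
```

In the partitioned architecture, this inner loop is exactly the kind of fixed, standard-independent kernel that justifies a dedicated module rather than processor cycles.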
Abstract:
Animal color pattern phenotypes evolve rapidly. What influences their evolution? Because color patterns are used in communication, selection for signal efficacy, relative to the intended receiver's visual system, may explain and predict the direction of evolution. We investigated this in bowerbirds, whose color patterns consist of plumage, bower structure, and ornaments and whose visual displays are presented under predictable visual conditions. We used data on avian vision, environmental conditions, color pattern properties, and an estimate of the bowerbird phylogeny to test hypotheses about evolutionary effects of visual processing. Different components of the color pattern evolve differently. Plumage sexual dimorphism increased and then decreased, while overall (plumage plus bower) visual contrast increased. The use of bowers allows relative crypsis of the bird but increased efficacy of the signal as a whole. Ornaments do not elaborate existing plumage features but instead are innovations (new color schemes) that increase signal efficacy. Isolation between species could be facilitated by plumage but not ornaments, because we observed character displacement only in plumage. Bowerbird color pattern evolution is at least partially predictable from the function of the visual system and from knowledge of different functions of different components of the color patterns. This provides clues to how more constrained visual signaling systems may evolve.
Abstract:
In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both the text and visual feature spaces, the similarity between a point and a local partition's center in the individual space is used as the indexing key, where similarity values from different features are distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values to the center of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on the indexing keys. To remove the curse of dimensionality affecting high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform and effective bit stream (BS) representations based on the local partition's center. Such BS representations are small in size and fast to compare, since only bit operations are involved. By comparing the common bits of two BSs, most irrelevant images can be immediately filtered. To effectively integrate multiple features, we also investigated the following evidence combination techniques: Certainty Factor, Dempster-Shafer theory, compound probability, and linear combination. Our extensive experiments showed that a single one-dimensional index on multiple features greatly improves on multiple indices over those features. Our LBS method outperforms sequential scan on high-dimensional spaces by an order of magnitude, and Certainty Factor and Dempster-Shafer theory perform best in combining multiple similarities from corresponding multiple features.
Abstract:
Motivation: While processing of MHC class II antigens for presentation to helper T-cells is essential for normal immune response, it is also implicated in the pathogenesis of autoimmune disorders and hypersensitivity reactions. Sequence-based computational techniques for predicting HLA-DQ binding peptides have encountered limited success, with few prediction techniques developed using three-dimensional models. Methods: We describe a structure-based prediction model for modeling peptide-DQ3.2 beta complexes. We have developed a rapid and accurate protocol for docking candidate peptides into the DQ3.2 beta receptor and a scoring function to discriminate binders from the background. The scoring function was rigorously trained, tested and validated using experimentally verified DQ3.2 beta binding and non-binding peptides obtained from biochemical and functional studies. Results: Our model predicts DQ3.2 beta binding peptides with high accuracy [area under the receiver operating characteristic (ROC) curve A(ROC) > 0.90], compared with experimental data. We investigated the binding patterns of DQ3.2 beta peptides and illustrate that several registers exist within a candidate binding peptide. Further analysis reveals that peptides with multiple registers occur predominantly for high-affinity binders.
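The reported A(ROC) > 0.90 has a direct probabilistic reading: the area under the ROC curve equals the probability that a randomly chosen binder outscores a randomly chosen non-binder (the Wilcoxon/Mann-Whitney statistic). A minimal computation, with made-up scores standing in for the paper's scoring function:

```python
def roc_auc(binder_scores, nonbinder_scores):
    """AUC as the fraction of binder/non-binder pairs ranked correctly."""
    wins = 0.0
    for b in binder_scores:
        for n in nonbinder_scores:
            if b > n:
                wins += 1.0
            elif b == n:
                wins += 0.5          # ties count as half a correct ranking
    return wins / (len(binder_scores) * len(nonbinder_scores))

# Toy example: four verified binders vs. four non-binders (scores invented)
auc = roc_auc([0.9, 0.8, 0.75, 0.6], [0.7, 0.5, 0.4, 0.3])
```

This rank-based form makes clear why AUC is insensitive to any monotone rescaling of the scoring function, which matters when scores from docking protocols are not calibrated probabilities.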
Abstract:
In many advanced applications, data are described by multiple high-dimensional features. Moreover, different queries may weight these features differently; some may not even specify all the features. In this paper, we propose our solution to support efficient query processing in these applications. We devise a novel representation that compactly captures f features into two components: The first component is a 2D vector that reflects a distance range (minimum and maximum values) of the f features with respect to a reference point (the center of the space) in a metric space, and the second component is a bit signature, with two bits per dimension, obtained by analyzing each feature's descending energy histogram. This representation enables two levels of filtering: The first component prunes away points that do not share similar distance ranges, while the bit signature filters away points based on the dimensions of the relevant features. Moreover, the representation facilitates the use of a single index structure to further speed up processing. We employ the classical B+-tree for this purpose. We also propose a KNN search algorithm that exploits the access orders of critical dimensions of highly selective features and partial distances to prune the search space more effectively. Our extensive experiments on both real-life and synthetic data sets show that the proposed solution offers significant performance advantages over sequential scan and retrieval methods using single and multiple VA-files.
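The two-level filter can be sketched as below, under loud simplifications: a 1-bit-per-dimension signature stands in for the paper's 2-bit energy-histogram signature, the range tolerance is arbitrary, and all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
f, dim, n = 3, 8, 500
feats = rng.random(size=(n, f, dim))                 # f feature vectors/point
ref = np.full(dim, 0.5)                              # reference point (center)

# Component 1: [min, max] of each point's f feature distances to the center
d = np.linalg.norm(feats - ref, axis=2)              # shape (n, f)
ranges = np.stack([d.min(axis=1), d.max(axis=1)], 1)
# Component 2 (simplified): one bit per feature from its mean energy
sigs = (feats.mean(axis=2) > 0.5).astype(np.uint8)

query = rng.random(size=(f, dim))
qd = np.linalg.norm(query - ref, axis=1)
qrange = np.array([qd.min(), qd.max()])
qsig = (query.mean(axis=1) > 0.5).astype(np.uint8)

eps = 0.2
# Level 1: keep only points with a similar distance range
level1 = (np.abs(ranges[:, 0] - qrange[0]) <= eps) & (
          np.abs(ranges[:, 1] - qrange[1]) <= eps)
# Level 2: XOR the signatures, allowing at most one differing bit
level2 = level1 & ((sigs ^ qsig).sum(axis=1) <= 1)
n_l1, n_l2 = int(level1.sum()), int(level2.sum())
```

Only the survivors of level 2 would need exact distance computations, which is where the speed-up over sequential scan comes from.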
Abstract:
This paper describes the design of a Multiple-Input Multiple-Output (MIMO) testbed for assessing various MIMO transmission schemes in rich-scattering indoor environments. In this design, a Field Programmable Gate Array (FPGA) board is used for fast processing of Intermediate Frequency signals. At the present stage, the testbed performance is assessed with a channel emulator introduced between the transmitter and receiver modules. Here, results are presented for the case of a 2x2 Alamouti scheme for space-time coding/decoding at the transmitter and receiver. Various programming details of the FPGA board, along with the obtained simulation results, are reported.
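The 2x2 Alamouti scheme itself can be sketched numerically: two symbols are sent over two symbol periods as [s1, s2] then [−s2*, s1*], and linear combining at each receive antenna recovers both symbols with full diversity. The channel and symbols below are synthetic, noise is omitted for clarity, and the testbed's IF/FPGA processing is not modelled:

```python
import numpy as np

rng = np.random.default_rng(4)
s1, s2 = (1 + 1j) / np.sqrt(2), (-1 + 1j) / np.sqrt(2)   # QPSK symbols
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)

# Two symbol periods: antennas transmit [s1, s2], then [-conj(s2), conj(s1)]
x1 = np.array([s1, s2])
x2 = np.array([-np.conj(s2), np.conj(s1)])
r1 = H @ x1                        # received at both antennas, period 1
r2 = H @ x2                        # period 2 (noise omitted)

# Alamouti combining per receive antenna, summed across antennas
s1_hat = np.sum(np.conj(H[:, 0]) * r1 + H[:, 1] * np.conj(r2))
s2_hat = np.sum(np.conj(H[:, 1]) * r1 - H[:, 0] * np.conj(r2))
norm = np.sum(np.abs(H) ** 2)      # total channel energy
s1_hat, s2_hat = s1_hat / norm, s2_hat / norm
```

The cross terms cancel exactly in the combining step, which is why the decoder reduces to a handful of conjugate-multiply-accumulate operations, well suited to an FPGA implementation.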