976 results for least common subgraph algorithm


Relevance:

30.00%

Publisher:

Abstract:

We address the problem of separating a speech signal into its excitation and vocal-tract filter components, which falls within the framework of blind deconvolution. Typically, the excitation in the case of voiced speech is assumed to be sparse and the vocal-tract filter stable. We develop an alternating ℓp-ℓ2 projections algorithm (ALPA) to perform deconvolution taking these constraints into account. The algorithm is iterative and alternates between two solution spaces. The initialization is based on the standard linear prediction decomposition of a speech signal into an autoregressive filter and a prediction residue. In every iteration, a sparse excitation is estimated by optimizing an ℓp-norm-based cost, and the vocal-tract filter is derived as the solution to a standard least-squares minimization problem. We validate the algorithm on voiced segments of natural speech signals and show applications to epoch estimation. We also present comparisons with state-of-the-art techniques and show that ALPA gives a sparser, impulse-like excitation, in which the impulses directly denote the epochs, or instants of significant excitation.
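
The alternating structure described above can be illustrated with a small sketch. The following Python fragment is a minimal, hypothetical illustration of ALPA-style iterations: the ℓp step is approximated here by a simple reweighted shrinkage of the prediction residual, and the ℓ2 step refits an autoregressive (all-pole) filter by least squares. The variable names and the specific update rules are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def alpa_sketch(s, order=10, n_iter=10, p=0.5, lam=0.05, eps=1e-8):
        """Toy alternating l_p / l_2 deconvolution of a speech frame `s`
        into an all-pole filter and a sparse excitation (illustrative only)."""
        N = len(s)
        # lagged-signal matrix: S[n, k] = s[n - k - 1]
        S = np.zeros((N, order))
        for k in range(order):
            S[k + 1:, k] = s[:N - k - 1]

        # initialisation: standard linear prediction (least squares)
        a, *_ = np.linalg.lstsq(S, s, rcond=None)
        e = s - S @ a                          # prediction residual as first excitation

        for _ in range(n_iter):
            # l_p step: reweighted shrinkage of the residual promotes sparsity
            r = s - S @ a
            w = (np.abs(r) + eps) ** (2 - p)   # IRLS-style weights
            e = r * w / (w + lam)              # small residual samples are shrunk to ~0
            # l_2 step: refit the AR coefficients given the sparse excitation
            a, *_ = np.linalg.lstsq(S, s - e, rcond=None)
        return a, e

    # usage on a synthetic frame
    rng = np.random.default_rng(0)
    s = rng.standard_normal(400)
    a, e = alpa_sketch(s)
    print(a.shape, np.count_nonzero(np.abs(e) > 1e-3))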

Relevance:

30.00%

Publisher:

Abstract:

We develop a new dictionary learning algorithm called ℓ1-K-SVD by minimizing the ℓ1 distortion on the data term. The proposed formulation corresponds to maximum a posteriori estimation assuming a Laplacian prior on the coefficient matrix and additive noise, and is, in general, robust to non-Gaussian noise. The ℓ1 distortion is minimized by employing the iteratively reweighted least-squares (IRLS) algorithm. The dictionary atoms and the corresponding sparse coefficients are estimated simultaneously in the dictionary update step. Experimental results show that ℓ1-K-SVD yields noise robustness, faster convergence, and a higher atom recovery rate than the method of optimal directions, K-SVD, and the robust dictionary learning algorithm (RDL), in Gaussian as well as non-Gaussian noise. For a fixed sparsity, number of dictionary atoms, and data dimension, ℓ1-K-SVD outperforms K-SVD and RDL on small training sets. We also consider the generalized ℓp, 0 < p < 1, data metric to tackle heavy-tailed/impulsive noise. In an image denoising application, ℓ1-K-SVD was found to yield a higher peak signal-to-noise ratio (PSNR) than K-SVD for Laplacian noise. The structural similarity index increases by 0.1 for low input PSNR, which is significant and demonstrates the efficacy of the proposed method. (C) 2015 Elsevier B.V. All rights reserved.
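
As a minimal illustration of the IRLS idea used for the ℓ1 data term, the sketch below solves a generic robust regression problem, min_x ||y - Ax||_1, by iteratively reweighted least squares. It is an illustration of the reweighting principle only, not the ℓ1-K-SVD dictionary update itself.

    import numpy as np

    def irls_l1(A, y, n_iter=50, eps=1e-8):
        """Minimise ||y - A x||_1 by iteratively reweighted least squares.
        Each iteration solves a weighted l_2 problem whose weights
        1/|residual| make large errors count linearly rather than quadratically."""
        x, *_ = np.linalg.lstsq(A, y, rcond=None)      # l_2 initialisation
        for _ in range(n_iter):
            r = y - A @ x
            w = 1.0 / np.sqrt(np.abs(r) + eps)         # square root of IRLS weights
            Aw, yw = A * w[:, None], y * w             # reweight the rows
            x, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
        return x

    # usage: robust fit in the presence of impulsive (Laplacian-like) noise
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 5))
    x_true = rng.standard_normal(5)
    y = A @ x_true + rng.laplace(scale=0.5, size=200)
    print(np.round(irls_l1(A, y) - x_true, 2))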

Relevance:

30.00%

Publisher:

Abstract:

In this study, an analysis of extending the linear modulation range of a zero common-mode voltage (CMV) operated n-level inverter by allowing reduced-CMV switching is presented. A new hybrid seven-level inverter topology with a single DC supply is also presented, and inverter operation with zero and reduced CMV is analysed. Each phase of the inverter is realised by cascading two three-level flying-capacitor inverters with a half-bridge module in between. The proposed topology is operated with zero CMV for modulation indices below 86%, and with a CMV magnitude of Vdc/18 to extend the modulation range up to 96%. Experimental results are presented for zero-CMV operation and for reduced-CMV operation that extends the linear modulation range. A capacitor voltage balancing algorithm is designed that utilises the pole-voltage redundancies of the inverter and acts at every sampling instant to correct the capacitor voltages irrespective of load power factor and modulation index. The balancing algorithm is tested for different modulation indices and various transient conditions to validate the proposed topology.
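
The redundancy-based balancing idea can be sketched in a few lines. The hypothetical Python fragment below illustrates only the general principle of choosing, at each sampling instant, the redundant switching state whose flying-capacitor current drives the capacitor voltage back toward its reference; the state table, sign conventions, and single-capacitor view are placeholders, not the paper's actual seven-level topology or algorithm.

    def choose_redundant_state(v_cap, v_ref, i_load, states):
        """Pick the redundant switching state that discharges the flying
        capacitor when it is over-charged and charges it when under-charged.

        `states` maps a state name to the sign of the capacitor current it
        produces for positive load current (+1 charges, -1 discharges, 0 none).
        This is an illustrative placeholder, not the paper's state table.
        """
        error = v_cap - v_ref                       # > 0 means over-charged
        best, best_score = None, None
        for name, cap_current_sign in states.items():
            # capacitor current direction also depends on the load-current sign
            i_cap = cap_current_sign * (1 if i_load >= 0 else -1)
            score = -error * i_cap                  # favour states that reduce |error|
            if best_score is None or score > best_score:
                best, best_score = name, score
        return best

    # usage with two hypothetical redundant states producing the same pole voltage
    states = {"S1": +1, "S2": -1}
    print(choose_redundant_state(v_cap=105.0, v_ref=100.0, i_load=3.2, states=states))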

Relevance:

30.00%

Publisher:

Abstract:

Common salvinia (Salvinia minima Baker) is an exotic floating fern that has been present in the U.S. since at least 1928 (Small 1931). Its pest status in Florida is less clear, perhaps owing to the presence of the specialized herbivore Cyrtobagous salviniae (Coleoptera: Curculionidae). Our objective was to sample populations of adult C. salviniae in south Florida in order to assess temporal abundance and estimate density on common salvinia. (PDF has 4 pages.)

Relevance:

30.00%

Publisher:

Abstract:

We investigated the use of otolith morphology to indicate the stock structure of an exploited serranid coral reef fish, Plectropomus leopardus, on the Great Barrier Reef (GBR), Australia. Otoliths were measured by traditional one- and two-dimensional measures (otolith length, width, area, perimeter, circularity, and rectangularity), as well as by Fourier analysis to capture the finer details of otolith shape. Variables were compared among four regions of the GBR separated by hundreds of kilometers, as well as among three reefs within each region, hundreds of meters to tens of kilometers apart. The temporal stability of otolith structure was examined by comparing two cohorts of fully recruited four-year-old P. leopardus collected two years before and two years after a significant disturbance in the southern GBR caused by a large tropical cyclone in March 1997. Results indicated the presence of at least two stocks of P. leopardus, although the structure of each stock varied depending on the cohort considered. The results highlight the importance of incorporating data from several years in studies that use otolith morphology, so as to distinguish temporary and possibly misleading signals from those that indicate persistent spatial structure in stocks. We conclude that otolith morphology can be used as an initial step to direct further research on groups of P. leopardus that have lived at least part of their lives in different environments.
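
For concreteness, the simple two-dimensional shape indices named above can be computed directly from basic otolith measurements. The sketch below uses common textbook definitions of circularity and rectangularity, assumed here for illustration rather than taken from the paper, and the measurement values are made up.

    def shape_indices(area, perimeter, length, width):
        """Basic otolith shape descriptors (common definitions, assumed here).
        circularity    = perimeter**2 / area   (equals 4*pi for a perfect circle)
        rectangularity = area / (length * width)  (equals 1 for a perfect rectangle)"""
        return {
            "circularity": perimeter ** 2 / area,
            "rectangularity": area / (length * width),
        }

    # usage with made-up measurements in mm and mm^2
    print(shape_indices(area=45.0, perimeter=26.0, length=10.5, width=6.0))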

Relevance:

30.00%

Publisher:

Abstract:

A 45-day laboratory feeding trial was conducted with fry of common carp Cyprinus carpio L. (0.45±0.03 g) in aquaria in a static indoor fish-rearing system. The fry were fed a pelleted diet containing 33% crude protein with fishmeal as the major protein source. The fry in five treatments (A, B, C, D, and E), each with two replicates, were fed a daily ration of 5% divided into feeding frequencies of 2, 3, 4, 5, and 6 times a day, respectively, in order to observe growth performance. Each replicate contained 15 fry with a total initial weight of 6.87±0.31 g. At the end of the feeding trial, a significantly higher (p<0.05) growth response was observed in treatment C, with a feeding frequency of 4 times a day. The highest and lowest percent growth, 334.30% and 218.91%, were observed in fish fed 4 times a day (treatment C) and 2 times a day (treatment A), respectively. The food conversion ratio (FCR) was significantly higher (p<0.05) at 1.78, while the lowest value of 1.22 was obtained in fish fed 4 times daily. The protein efficiency ratio (PER) ranged from 1.68 in treatment A (2 feedings per day) to 2.48 in treatment C (4 feedings per day). Other growth parameters, viz. specific growth rate (SGR) and apparent protein digestibility, were also higher in treatment C than in the other treatments. The results of the present study demonstrate that the growth performance of C. carpio was best at a feeding frequency of 4 times a day using a 33% dietary protein diet with fishmeal as the major protein source.
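
The growth and feed-utilisation indices reported above follow standard aquaculture definitions, illustrated below with deliberately hypothetical numbers: the formulas are the conventional ones, but the inputs are not data from the trial.

    import math

    def growth_metrics(w_initial, w_final, feed_given, protein_fraction, days):
        """Standard aquaculture performance indices (illustrative inputs only)."""
        gain = w_final - w_initial
        return {
            "percent_growth": 100.0 * gain / w_initial,
            "FCR": feed_given / gain,                       # feed given / weight gain
            "PER": gain / (feed_given * protein_fraction),  # weight gain / protein fed
            "SGR": 100.0 * (math.log(w_final) - math.log(w_initial)) / days,  # % per day
        }

    # usage with hypothetical tank totals (grams) over a 45-day trial
    print(growth_metrics(w_initial=6.9, w_final=29.8, feed_given=28.0,
                         protein_fraction=0.33, days=45))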

Relevance:

30.00%

Publisher:

Abstract:

The F4 generation of human growth hormone (hGH) gene-transgenic red common carp and non-transgenic controls were fed for 8 weeks on purified diets with 20%, 30%, or 40% protein. Analysis of whole-body amino acids showed that the proportions of lysine, leucine, phenylalanine, valine, and alanine, as percentages of body protein, increased significantly with increasing dietary protein level in at least one strain of fish, while those of arginine, glutamic acid, and tyrosine decreased. Proportions of the other amino acids were unaffected by the diets. The proportions of lysine and arginine were significantly higher, while those of leucine and alanine were lower, in the transgenics than in the controls in at least one diet group. Proportions of the other amino acids were unaffected by strain. The results suggest that the whole-body amino acid profile of transgenic carp, when expressed as proportions of body protein, was in general similar to that of the non-transgenic controls. (C) 2000 Elsevier Science B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

An alternating combination of a genetic algorithm and a neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT calculation as a black box and models the error through external statistical information. As a demonstration, the AGANN method has been applied to correcting the lattice energies from DFT calculations for 72 metal halides and hydrides. With the AGANN correction, the mean absolute relative error of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% on the test set. For comparison, a neural network alone reduces the mean error to 2.56%, and the conventional combined genetic algorithm and neural network approach reduces it to 2.15%. Multiple linear regression has almost no corrective effect here.
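
The error-correction idea (though not the full alternating GA+NN scheme) can be sketched as fitting a small regressor to the DFT-versus-experiment discrepancy and subtracting the predicted error. In the Python sketch below the descriptors, energies, and network settings are placeholders invented for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # placeholder data: per-compound descriptors, DFT and experimental lattice energies
    rng = np.random.default_rng(0)
    X = rng.standard_normal((72, 6))                  # 6 hypothetical descriptors
    E_exp = 700 + 80 * rng.random(72)                 # kJ/mol, made up
    E_dft = E_exp * (1 + 0.05 * np.tanh(X[:, 0]))     # DFT with a systematic error

    # train a neural network to predict the DFT error from the descriptors
    train, test = np.arange(50), np.arange(50, 72)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(X[train], (E_dft - E_exp)[train])

    # corrected energies = DFT value minus the predicted systematic error
    E_corr = E_dft[test] - net.predict(X[test])
    mare = lambda e: np.mean(np.abs((e - E_exp[test]) / E_exp[test])) * 100
    print(f"mean |relative error|: DFT {mare(E_dft[test]):.2f}% -> corrected {mare(E_corr):.2f}%")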

Relevance:

30.00%

Publisher:

Abstract:

The cross-well seismic technique is a geophysical method that observes the seismic wavefield of a geologic body by placing both sources and receivers in wells. Because the raypaths avoid the absorption of high-frequency signal components caused by the low-velocity weathering layers, data of extremely high resolution can be acquired, yielding very fine images of the formations, structure, and reservoirs between wells. Integrated analysis of the high-frequency S-wave and P-wave data, together with other data, is used to resolve small faults and small structures and to address thin beds, reservoir connectivity, fluid distribution, steam injection, and fracturing. The method links high-resolution surface seismic, well logging, and reservoir engineering. In this thesis, based on the exploration and production situation in the oilfield and the theory of geophysical exploration, cross-well seismic technology in general and its key issues in particular are studied. An integrated workflow of field acquisition, data processing, and interpretation was developed, and the method is applied to oilfield development and to optimizing development schemes. The contents and results of the thesis are as follows.

An overview is given of the status and development of the cross-well seismic method, of the outstanding problems in cross-well seismic technology, and of the gap between Chinese and international practice. Foreign-made field acquisition systems for cross-well seismic are analysed and compared, and the pros and cons of the systems manufactured by the two foreign companies are pointed out; this is valuable for importing cross-well acquisition systems into China. From analyses of the acquisition geometry and field data, a common wavefield time-depth curve equation was derived, three types of tube waves were identified for the first time, and the mechanism of their generation was studied. Based on wavefield separation theory for cross-well seismic, different wave types exhibit different attribute characteristics in different gather domains; several methods (for instance, F-K filtering and median filtering) were applied to eliminate or suppress cross-well noise and to separate the upgoing and downgoing waves, with satisfactory results.

In the area of wavefield numerical simulation, the shortcomings of conventional ray tracing are analysed and a minimum-traveltime ray tracing method based on Fermat's principle is proposed. The method is fast, and no rays fall into "dead ends" or "blind spots" after numerous iterations, making it well suited to complex velocity models. Traveltime interpolation is taken into account for the first time, and a dynamic shortest-path ray tracing method is developed for the first arrivals of arbitrarily complex media (transmission, diffraction, refraction, etc.); it removes the limitation of travelling only from one node to another, increases the accuracy of the minimum traveltime and of the ray path, and derives the solution and corresponding boundary conditions for a fourth-order acoustic wave equation. Cross-well seismic synthetics are then computed for given sources and receivers over multiple geological bodies, so that real cross-well wavefields can be recognized scientifically and field geometry design can be guided.

A velocity tomographic inversion based on the least-squares conjugate-gradient method was developed, the objective function of the older high-frequency ray-tracing approach was modified, and a thin-bed-oriented finite-frequency velocity tomography method is put forward. Theoretical models and results demonstrate that the method is simple, effective, and important for tomographic imaging of complex geological bodies. Based on the characteristics of cross-well seismic data, a processing flow was built, optimized, and applied in production, yielding good velocity tomography and cross-well reflection imaging sections. Because cross-well seismic data are acquired in the depth domain, how to interpret depth-domain data and extract attributes is a new subject; research was therefore carried out on depth-domain synthetics and trace integration and on log-constrained wave-impedance inversion of cross-well data, and an interpretation flow was established. Its application yielded good geological results in velocity tomography and reflection depth imaging and resolved a number of difficult problems in oilfield development; this powerful new method supports development-scheme optimization and enhanced oil recovery. Finally, building on conventional reservoir geological modelling from logging data, a new method of constraining the accuracy of the reservoir model with high-resolution cross-well seismic data is discussed; it was applied to the Fan 124 project with good results, indicating a bright future for cross-well seismic technology.
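
The minimum-traveltime, shortest-path ray tracing mentioned above can be illustrated with a small grid-graph example: treating grid nodes as graph vertices and slowness-weighted links as edges, Dijkstra's algorithm returns first-arrival traveltimes that honour Fermat's principle. The grid, velocity model, and 8-connected stencil below are illustrative assumptions, not the thesis implementation.

    import heapq
    import numpy as np

    def first_arrival_times(velocity, src, h=1.0):
        """Dijkstra shortest-path first-arrival traveltimes on a 2-D grid.
        Edge traveltime = segment length * average slowness of its two nodes."""
        nz, nx = velocity.shape
        slowness = 1.0 / velocity
        t = np.full((nz, nx), np.inf)
        t[src] = 0.0
        heap = [(0.0, src)]
        # 8-connected neighbourhood approximates straight-ray segments
        offsets = [(dz, dx) for dz in (-1, 0, 1) for dx in (-1, 0, 1) if (dz, dx) != (0, 0)]
        while heap:
            t_cur, (iz, ix) = heapq.heappop(heap)
            if t_cur > t[iz, ix]:
                continue
            for dz, dx in offsets:
                jz, jx = iz + dz, ix + dx
                if 0 <= jz < nz and 0 <= jx < nx:
                    dist = h * np.hypot(dz, dx)
                    t_new = t_cur + dist * 0.5 * (slowness[iz, ix] + slowness[jz, jx])
                    if t_new < t[jz, jx]:
                        t[jz, jx] = t_new
                        heapq.heappush(heap, (t_new, (jz, jx)))
        return t

    # usage: source in one "well" (left edge), receivers down another (right edge)
    v = np.full((60, 40), 2500.0)          # background velocity, m/s
    v[30:35, :] = 3500.0                   # a faster thin layer
    times = first_arrival_times(v, src=(10, 0), h=5.0)
    print(times[:, -1].round(4))           # first arrivals down the receiver well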

Relevance:

30.00%

Publisher:

Abstract:

The seismic survey is the most effective geophysical prospecting method in the exploration and development of oil and gas. As the structure and lithology of geological targets become increasingly complex, seismic sections must offer high resolution if the targets are to be described accurately, and a high signal-to-noise (S/N) ratio is a precondition for high resolution. Stacking, an important seismic data processing step, is an effective means of suppressing noise in the records; broadening the stacking surface further enhances genuine reflection signals while suppressing unwanted coherent and random ambient noise. The common reflection surface (CRS) stack is a macro-model-independent seismic imaging method. Based on the similarity of common reflection point (CRP) traces within a coherent zone, the CRS stack effectively improves the S/N ratio by including more CMP traces in the stack, and it is regarded as an important method of seismic data processing. Performing the CRS stack depends on three wavefield attributes; however, the CRS equation becomes invalid at large offsets. In this thesis, a method based on a depth-domain velocity model is put forward: ray tracing is used to determine the traveltimes of CRPs on a common reflection surface, and the CRS equation is regressed from them by least squares. The coherent seismic data are then stacked along these traveltimes to obtain the zero-offset section. At the end of the CRS processing flow, a method that uses the dip angle is applied to further enhance the S/N ratio. Applications to synthetic examples and field seismic records show excellent performance of the algorithm in both accuracy and efficiency.
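
The regression-then-stack step can be sketched as a linear least-squares fit of a hyperbolic traveltime surface in midpoint displacement and half-offset, followed by summation of trace amplitudes along the fitted surface. The second-order form, the synthetic traveltimes, and the random traces below are generic illustrations, not the thesis' exact CRS operator or data.

    import numpy as np

    def fit_crs_surface(dxm, h, t):
        """Least-squares fit of a hyperbolic traveltime surface
        t^2 ~ c0 + c1*dxm + c2*dxm^2 + c3*h^2 (generic CRS-like form)."""
        G = np.column_stack([np.ones_like(dxm), dxm, dxm**2, h**2])
        c, *_ = np.linalg.lstsq(G, t**2, rcond=None)
        return c

    def crs_stack_sample(traces, dt, dxm, h, c):
        """Sum amplitudes of `traces` (n_traces x n_samples) along the fitted surface."""
        t_pred = np.sqrt(np.maximum(c[0] + c[1]*dxm + c[2]*dxm**2 + c[3]*h**2, 0.0))
        idx = np.clip(np.round(t_pred / dt).astype(int), 0, traces.shape[1] - 1)
        return traces[np.arange(len(idx)), idx].sum()

    # usage on traveltimes modelled (e.g. by ray tracing) around t0 = 1.0 s
    rng = np.random.default_rng(2)
    dxm = rng.uniform(-200, 200, 300)          # midpoint displacement, m
    h = rng.uniform(0, 500, 300)               # half-offset, m
    t = np.sqrt((1.0 + 2e-4 * dxm)**2 + 1e-7 * dxm**2 + 4e-7 * h**2)
    c = fit_crs_surface(dxm, h, t)
    traces = rng.standard_normal((300, 500))   # placeholder amplitudes, dt = 4 ms
    print(np.round(c, 6), crs_stack_sample(traces, 0.004, dxm, h, c))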

Relevance:

30.00%

Publisher:

Abstract:

XP provides efficient and flexible support for pretty printing in Common Lisp. Its single greatest advantage is that it allows the full benefits of pretty printing to be obtained when printing data structures, as well as when printing program code. XP is efficient, because it is based on a linear-time algorithm that uses only a small, fixed amount of storage. XP is flexible, because users can control the exact form of the output via a set of special format directives. XP can operate on arbitrary data structures, because facilities are provided for specifying pretty printing methods for any type of object. XP also modifies the way abbreviation based on length, nesting depth, and circularity is supported, so that these abbreviations automatically apply to user-defined functions that perform output (e.g., print functions for structures). In addition, a new abbreviation mechanism is introduced that can be used to limit the total number of lines printed.

Relevance:

30.00%

Publisher:

Abstract:

XP provides efficient and flexible support for pretty printing in Common Lisp. Its single greatest advantage is that it allows the full benefits of pretty printing to be obtained when printing data structures, as well as when printing program code. XP is efficient, because it is based on a linear time algorithm that uses a small fixed amount of storage. XP is flexible, because users can control the exact form of the output via a set of special format directives. XP can operate on arbitrary data structures, because facilities are provided for specifying pretty printing methods for any type of object.

Relevance:

30.00%

Publisher:

Abstract:

Liu, Yonghuai. Automatic 3D free form shape matching using the graduated assignment algorithm. Pattern Recognition, vol. 38, no. 10, pp. 1615-1631, 2005.

Relevance:

30.00%

Publisher:

Abstract:

The very common GNB3 c.825C>T polymorphism (rs5443) is present in approximately half of all human chromosomes. Significantly, the presence of the GNB3 825T allele has been strongly associated with a predisposition to essential hypertension. Paradoxically, the 825T allele, which lies in exon 10, introduces a pathogenic alternative RNA splice site into the middle of exon 9. To attempt to correct this pathogenic aberrant splicing, we bioinformatically designed a GNB3-specific antisense morpholino using a Gene Tools® algorithm. It was hoped that this morpholino would behave in vitro as a potential "splice blocker" and/or "exon skipper", binding to and inhibiting or reducing the aberrant splicing of the GNB3 825T allele. On transfecting a human lymphoblast cell line homozygous for the 825T allele with this antisense morpholino, we encouragingly observed both a significant reduction (from ~58% to ~5%) in the production of the aberrant, smaller GNB3 transcript and a subsequent increase in the normal GNB3 transcript (from ~42% to ~95%). Our results demonstrate the potential use of a GNB3-specific antisense morpholino as a pharmacogenetic therapy for essential hypertension.

Relevance:

30.00%

Publisher:

Abstract:

The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halldorsson is studied on very large graphs. We compare the performance of this algorithm with the performance of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs. We test our implementations on a variety of different graphs. Our conclusions indicate that on randomly generated graphs minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.
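
For reference, the Ramsey subgraph-exclusion recursion of Boppana and Halldorsson can be sketched as follows: pick a pivot vertex, recurse on its neighbourhood and on its non-neighbourhood, and return the larger clique and the larger independent set found. The adjacency-set representation and the tiny test graph below are illustrative; the deterministic pivot choice stands in for the randomized version studied in the paper.

    def ramsey(graph, vertices=None):
        """Return (clique, independent_set) found by the Ramsey recursion.
        `graph` maps each vertex to the set of its neighbours."""
        if vertices is None:
            vertices = set(graph)
        if not vertices:
            return set(), set()
        v = next(iter(vertices))
        rest = vertices - {v}
        c1, i1 = ramsey(graph, rest & graph[v])        # recurse on neighbours of v
        c2, i2 = ramsey(graph, rest - graph[v])        # recurse on non-neighbours of v
        clique = max(c1 | {v}, c2, key=len)            # v extends a clique of its neighbours
        indep = max(i1, i2 | {v}, key=len)             # v extends an ind. set of non-neighbours
        return clique, indep

    # usage on a small graph: a triangle {0, 1, 2} plus a pendant vertex 3
    g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(ramsey(g))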