893 results for Feature Extraction Algorithms


Relevance:

20.00%

Publisher:

Abstract:

This communication describes an improved one-step solid-phase extraction method for the recovery of morphine (M), morphine-3-glucuronide (M3G), and morphine-6-glucuronide (M6G) from human plasma with reduced coextraction of endogenous plasma constituents compared to the authors' previously reported method. The magnitude of the endogenous plasma peak that eluted in the chromatogram immediately before the retention time of M3G was reduced significantly (by approximately 80%, p < 0.01), while high extraction efficiencies were maintained for the compounds of interest, viz. M, M6G, and M3G (93.8 ± 2.5%, 91.7 ± 1.7%, and 93.1 ± 2.2%, respectively). Furthermore, when the improved solid-phase extraction method was used, the extraction cartridge-derived late-eluting peak (retention time 90 to 100 minutes) reported for our previous method was no longer present in the plasma extracts. The combined effect of reducing the recovery of the endogenous plasma components that chromatographed just before the retention time of M3G and removing the late-eluting, extraction cartridge-derived peak has decreased the chromatographic run time to 20 minutes, thereby increasing sample throughput by up to 100%.

Relevance:

20.00%

Publisher:

Abstract:

Algorithms for the explicit integration of structural dynamics problems with multiple time steps (subcycling) are investigated. Only one such algorithm, due to Smolinski and Sleith, has proved to be stable in a classical sense. A simplified version of this algorithm that retains its stability is presented; however, as with the original version, it can be shown to sacrifice accuracy to achieve stability. Another algorithm in use is shown to be only statistically stable, in that a probability of stability can be assigned if appropriate time-step limits are observed. This probability improves rapidly with the number of degrees of freedom in a finite element model. The stability problems are shown to be a property of the central difference method itself, which is modified to give the subcycling algorithm. A related problem is shown to arise when a constraint equation in time is introduced into a time-continuous space-time finite element model. (C) 1998 Elsevier Science S.A.
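
The conditional stability discussed above is a property of the central difference scheme that the subcycling algorithms inherit. The sketch below is a generic single-degree-of-freedom illustration of that time-step limit (dt <= 2/omega), not the subcycling algorithm itself; all numerical values are assumptions chosen for the example.

```python
# Minimal sketch (not the authors' subcycling scheme): explicit central
# difference integration of an undamped single-DOF oscillator,
#   m * u'' + k * u = 0,
# illustrating the conditional stability limit dt <= 2/omega that the
# subcycling algorithms inherit from the underlying method.
import math

def central_difference(m, k, u0, v0, dt, n_steps):
    """Return the displacement history using the central difference scheme."""
    omega = math.sqrt(k / m)
    if dt > 2.0 / omega:
        print(f"warning: dt={dt:.4g} exceeds stability limit {2.0 / omega:.4g}")
    a0 = -k * u0 / m                               # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt ** 2 * a0     # fictitious step u_{-1}
    u = u0
    history = [u0]
    for _ in range(n_steps):
        a = -k * u / m
        u_next = 2.0 * u - u_prev + dt ** 2 * a    # central difference update
        u_prev, u = u, u_next
        history.append(u)
    return history

# A step below 2/omega stays bounded; a slightly larger step grows without bound.
print(max(abs(x) for x in central_difference(1.0, 100.0, 1.0, 0.0, 0.19, 200)))
print(max(abs(x) for x in central_difference(1.0, 100.0, 1.0, 0.0, 0.21, 200)))
```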

Relevance:

20.00%

Publisher:

Abstract:

A new method of poly-beta-hydroxybutyrate (PHB) extraction from recombinant E. coli is proposed, using homogenization and centrifugation coupled with sodium hypochlorite treatment. The size of PHB granules and cell debris in homogenates was characterised as a function of the number of homogenization passes. Simulation was used to develop the PHB and cell debris fractionation system, enabling numerical examination of the effects of repeated homogenization and centrifuge feed-rate variation. The simulation provided a good prediction of experimental performance. Sodium hypochlorite treatment was necessary to optimise PHB fractionation. A PHB recovery of 80% at a purity of 96.5% was obtained with the final optimised process. Protein and DNA contained in the resultant product were negligible. The developed process holds promise for significantly reducing the recovery cost associated with PHB manufacture.

Relevance:

20.00%

Publisher:

Abstract:

A sensitive and reproducible solid-phase extraction (SPE) method for the quantification of oxycodone in human plasma was developed. Varian Certify SPE cartridges containing both C-8 and benzoic acid functional groups were the most suitable for the extraction of oxycodone and codeine (internal standard), with consistently high (≥ 80%) and reproducible recoveries. The elution mobile phase consisted of 1.2 ml of butyl chloride-isopropanol (80:20, v/v) containing 2% ammonia. The quantification limit for oxycodone was 5.3 pmol on-column. Within-day and inter-day coefficients of variation were 1.2% and 6.8%, respectively, for 284 nM oxycodone, and 9.5% and 6.2%, respectively, for 28.4 nM oxycodone, using 0.5-ml plasma aliquots. (C) 1998 Elsevier Science B.V. All rights reserved.
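
For readers unfamiliar with the precision figures quoted above, the short sketch below shows how a within-day coefficient of variation is conventionally computed from replicate assay results; the replicate values are invented for illustration and are not data from the study.

```python
# Illustrative only: how within-day precision figures like those quoted above
# are typically computed from replicate assay results.
# The concentrations below are made-up numbers, not data from the paper.
import statistics

def coefficient_of_variation(replicates):
    """Percent CV = 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

within_day = [283.1, 286.9, 281.5, 287.4, 284.6]   # hypothetical replicate results, nM
print(f"within-day CV = {coefficient_of_variation(within_day):.1f}%")
```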

Relevance:

20.00%

Publisher:

Abstract:

Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x_1, ..., x_m for the equation s = gcd(s_1, ..., s_m) = x_1 s_1 + ... + x_m s_m, where s_1, ..., s_m are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
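
The lattice basis reduction step is beyond a short example, but the relation being solved can be illustrated with an ordinary iterative extended gcd over several integers. The sketch below is a generic Euclidean construction, not the authors' basis-reduction algorithm, and its multipliers are generally far from optimal in size.

```python
# Generic multi-argument extended gcd: returns g and multipliers x_1..x_m with
# sum(x_i * s_i) == g. This is the plain Euclidean construction, NOT the
# lattice-basis-reduction method of the paper; its multipliers are usually
# far larger than the optimal ones.
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def multi_ext_gcd(values):
    g, coeffs = values[0], [1]
    for s in values[1:]:
        g, x, y = ext_gcd(g, s)
        coeffs = [c * x for c in coeffs] + [y]   # fold the new multiplier in
    return g, coeffs

s = [1547, 560, 1769, 2184]        # arbitrary example inputs
g, x = multi_ext_gcd(s)
assert sum(xi * si for xi, si in zip(x, s)) == g
print(g, x)
```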

Relevance:

20.00%

Publisher:

Abstract:

A simple method for the measurement of pindolol enantiomers by HPLC is presented. Alkalinised serum or urine is extracted with ethyl acetate, and the residue remaining after evaporation of the organic layer is then derivatised with (S)-(-)-alpha-methylbenzyl isocyanate. The diastereoisomers of derivatised pindolol and metoprolol (internal standard) are separated by high-performance liquid chromatography (HPLC) using a C-18 silica column and detected by fluorescence (excitation λ 215 nm, emission λ 320 nm). The assay displays reproducible linearity for pindolol enantiomers, with a correlation coefficient r² ≥ 0.998 over the concentration ranges 8-100 ng ml⁻¹ for plasma and 0.1-2.5 µg ml⁻¹ for urine. The coefficients of variation for accuracy and precision of the quality control samples for both plasma and urine are consistently
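
As an aside on the linearity claim, the sketch below shows how a calibration line and its r² are typically computed from peak-area ratios; the data points are hypothetical and not taken from the assay described above.

```python
# Hypothetical calibration example: fit peak-area ratio vs concentration and
# report r^2, the linearity figure quoted for assays like the one above.
# The data points are invented for illustration.
import numpy as np

conc = np.array([8, 20, 40, 60, 80, 100], dtype=float)       # ng/ml
ratio = np.array([0.082, 0.199, 0.405, 0.597, 0.810, 0.996])  # analyte/IS peak-area ratio

slope, intercept = np.polyfit(conc, ratio, 1)  # least-squares straight line
pred = slope * conc + intercept
ss_res = np.sum((ratio - pred) ** 2)
ss_tot = np.sum((ratio - ratio.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"slope={slope:.4f}, intercept={intercept:.4f}, r^2={r_squared:.4f}")
```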

Relevance:

20.00%

Publisher:

Abstract:

In previous parts of this study we developed procedures for the high-efficiency chemical extraction of soluble and insoluble protein from intact Escherichia coli cells. Although high yields were obtained, extraction of recombinant protein directly from cytoplasmic inclusion bodies led to low product purity due to coextraction of soluble contaminants. In this work, a two-stage procedure for the selective extraction of recombinant protein at high efficiency and high purity is reported. In the first stage, inclusion-body stability is promoted by the addition of 15 mM 2-hydroxyethyl disulfide (2-HEDS), also known as oxidized β-mercaptoethanol, to the permeabilization buffer (6 M urea + 3 mM ethylenediaminetetra-acetate [EDTA]). 2-HEDS is an oxidizing agent believed to promote disulfide bond formation, rendering the inclusion body resistant to solubilization in 6 M urea. Contaminating proteins are separated from the inclusion-body fraction by centrifugation. In the second stage, disulfide bonds are readily eliminated by including a reducing agent (20 mM dithiothreitol [DTT]) in the permeabilization buffer. Extraction using this selective two-stage process yielded an 81% (w/w) recovery of the recombinant protein Long-R3-IGF-I from inclusion bodies located in the cytoplasm of intact E. coli, at a purity of 46% (w/w). This was comparable to that achieved by conventional extraction (mechanical disruption followed by centrifugation and solubilization). A pilot-scale procedure was also demonstrated using a stirred reactor and diafiltration. This is the first reported study that achieves both high extraction efficiency and selectivity by the chemical treatment of cytoplasmic inclusion bodies in intact bacterial cells. (C) 1999 John Wiley & Sons, Inc.

Relevance:

20.00%

Publisher:

Abstract:

An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized and then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
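
The four-level procedure itself is not reproduced here, but as a rough illustration of the kind of operations involved, the sketch below performs a crude skull-stripping pass (intensity threshold, largest connected component, morphological closing); the threshold choice and the random stand-in volume are assumptions for the example.

```python
# Generic illustration (not the authors' four-stage procedure): a crude
# brain-masking pass on a 3D MR volume using intensity thresholding and
# retention of the largest connected component, followed by morphological
# closing. The threshold logic and stand-in data are assumptions.
import numpy as np
from scipy import ndimage

def rough_brain_mask(volume):
    """Return a boolean brain mask from a 3D intensity volume."""
    threshold = np.percentile(volume, 75)          # crude foreground cut-off
    foreground = volume > threshold
    labels, n = ndimage.label(foreground)          # connected components
    if n == 0:
        return foreground
    sizes = ndimage.sum(foreground, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)     # keep the biggest blob
    return ndimage.binary_closing(largest, iterations=3)

volume = np.random.rand(64, 64, 64)                # stand-in for real MR data
mask = rough_brain_mask(volume)
print("brain voxels:", int(mask.sum()))
```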

Relevance:

20.00%

Publisher:

Abstract:

We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
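
As a concrete illustration of the class of heuristics being evaluated, the sketch below implements one common greedy reserve-selection rule (most unrepresented features per unit area); it is not one of the specific algorithms compared in the study, and the sites, areas and land types are invented.

```python
# Hedged sketch of one common heuristic (greedy selection by unrepresented
# features per unit area), not the specific algorithms compared in the study.
# Efficiency is reported as the total area of sites needed to represent every
# feature (land type) at least once.
sites = {                      # site -> (area, set of land types present)
    "A": (10, {"heath", "forest"}),
    "B": (4,  {"wetland"}),
    "C": (7,  {"forest", "grassland"}),
    "D": (3,  {"heath", "wetland", "grassland"}),
}

def greedy_reserve(sites):
    unrepresented = set().union(*(feats for _, feats in sites.values()))
    chosen, total_area = [], 0
    while unrepresented:
        # pick the site covering the most missing features per unit area
        name = max(sites, key=lambda s: len(sites[s][1] & unrepresented) / sites[s][0])
        area, feats = sites[name]
        if not feats & unrepresented:
            break                      # remaining sites add nothing new
        chosen.append(name)
        total_area += area
        unrepresented -= feats
    return chosen, total_area

print(greedy_reserve(sites))   # -> (['D', 'C'], 10)
```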

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid-retaining structures, which involves three discrete design variables: slab thickness, reinforcement diameter, and reinforcement spacing. The GA, being a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange amongst a population of artificial chromosomes. As a first step, a penalty-based strategy is used to transform the constrained design problem into an unconstrained one suitable for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained with extremely rapid convergence, after exploration of only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
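
A minimal GA with a penalty-transformed objective, in the spirit of the approach described above, is sketched below. The three discrete variables match those named in the abstract, but the cost function, constraint, candidate values and GA settings are all invented for illustration.

```python
# Minimal GA sketch with a penalty-transformed objective on a toy problem:
# choose discrete slab thickness, bar diameter and bar spacing to minimise a
# notional cost subject to a notional capacity constraint. All numbers are
# illustrative, not design values from the paper.
import random

THICKNESS = [200, 250, 300, 350, 400]     # mm, candidate slab thicknesses
DIAMETER  = [12, 16, 20, 25]              # mm, candidate bar diameters
SPACING   = [100, 150, 200, 250, 300]     # mm, candidate bar spacings

def cost(t, d, s):
    return t * 1.0 + d * d / s * 50.0     # toy "material cost"

def constraint_violation(t, d, s):
    capacity = t * d * d / s              # toy "capacity" measure
    return max(0.0, 600.0 - capacity)     # required capacity of 600 (made up)

def fitness(ind):
    t, d, s = THICKNESS[ind[0]], DIAMETER[ind[1]], SPACING[ind[2]]
    return cost(t, d, s) + 10.0 * constraint_violation(t, d, s)   # penalty term

def evolve(pop_size=30, generations=60, p_mut=0.2):
    ranges = [len(THICKNESS), len(DIAMETER), len(SPACING)]
    pop = [[random.randrange(r) for r in ranges] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < p_mut:                # random-reset mutation
                gene = random.randrange(3)
                child[gene] = random.randrange(ranges[gene])
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return THICKNESS[best[0]], DIAMETER[best[1]], SPACING[best[2]], fitness(best)

print(evolve())
```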

Relevance:

20.00%

Publisher:

Abstract:

In this study we demonstrate a new in-fermenter chemical extraction procedure that degrades the cell wall of Escherichia coli and releases inclusion bodies (IBs) into the fermentation medium. We then show that cross-flow microfiltration can be used to remove 91% of soluble contaminants from the released IBs. The extraction protocol, based on a combination of Triton X-100, EDTA, and intracellular T7 lysozyme, effectively released most of the intracellular soluble content without solubilising the IBs. Cross-flow microfiltration using a 0.2 µm ceramic membrane successfully recovered the granulocyte macrophage-colony stimulating factor (GM-CSF) IBs with removal of 91% of the soluble contaminants and virtually no loss of IBs to the permeate. The filtration efficiency, in terms of both flux and transmission, was significantly enhanced by in-fermenter Benzonase® digestion of nucleic acids following chemical extraction. Both the extraction and filtration methods exerted their efficacy directly on a crude fermentation broth, eliminating the need for cell recovery and resuspension in buffer. The processes demonstrated here can all be performed using just a fermenter and a single cross-flow filtration unit, demonstrating a high level of process intensification. Furthermore, there is considerable scope to also use the microfiltration system to subsequently solubilise the IBs, to separate the denatured protein from cell debris, and to refold the protein using diafiltration. In this way refolded protein can potentially be obtained, in a relatively pure state, using only two unit operations. (C) 2004 Wiley Periodicals, Inc.
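
For orientation on the diafiltration step mentioned at the end, the sketch below applies the standard constant-volume diafiltration relation (fraction of a freely permeating solute remaining after N diavolumes ≈ exp(-N·S)); this is a textbook estimate, not an analysis from the paper.

```python
# Standard constant-volume diafiltration relation (a textbook estimate, not an
# analysis from the paper): the fraction of a freely permeating solute remaining
# after N diavolumes is exp(-N * S), where S is the solute sieving coefficient.
import math

def diavolumes_for_removal(removal_fraction, sieving=1.0):
    """Diavolumes needed to wash out the given fraction of a soluble species."""
    return -math.log(1.0 - removal_fraction) / sieving

# About 2.4 diavolumes would be needed for 91% removal if S = 1.
print(f"{diavolumes_for_removal(0.91):.2f}")
```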

Relevance:

20.00%

Publisher:

Abstract:

Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome, as revealed by the RIKEN FANTOM2 project, to identify novel human disease-related candidate genes. We define a new term, patholog, to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than just focus on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70-85% identity) to known human disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts, we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only, and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardiovascular (4%), or other (14%) disorders. Conclusions: Large-scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised, we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.
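
To make the similarity filter concrete, the sketch below computes percent identity between two pre-aligned sequences and flags the 70-85% window used above as the patholog candidate range; the sequences are invented and this is not the pipeline used in the study.

```python
# Toy illustration of the similarity filter described above: percent identity
# between two pre-aligned sequences (gaps as '-'). The sequences are invented;
# this is not the pipeline used in the study.
def percent_identity(aln_a, aln_b):
    """Percent of aligned (non-gap) columns where the residues match."""
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs) if pairs else 0.0

mouse = "ATGCGTACGTTAGCCATGAT"   # hypothetical aligned mouse sequence
human = "ATGCGAACGATAGTCATGCT"   # hypothetical aligned human sequence
pid = percent_identity(mouse, human)
print(f"{pid:.1f}% identity", "-> candidate patholog" if 70 <= pid <= 85 else "")
```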