934 results for Direct modified method
Abstract:
The objective of this work was to genotype the single nucleotide polymorphism (SNP) A2959G (AF159246) of the bovine CAST gene by the PCR-RFLP technique, and to report its use for the first time. For this, 147 Bos indicus and Bos taurus x Bos indicus animals were genotyped. The accuracy of the method was confirmed through direct sequencing of the PCR products of nine individuals. It was confirmed that Bos indicus had the lowest frequency of the allele favorable to meat tenderness (A). PCR-RFLP genotyping of this bovine CAST SNP proved robust and inexpensive, which will greatly facilitate its analysis by laboratories with only basic infrastructure.
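The abstract reports allele frequencies but not the genotype counts behind them. The minimal sketch below, with hypothetical counts for the 147 genotyped animals, shows how the frequency of the favorable A allele would be estimated from PCR-RFLP genotype calls.

```python
# Hypothetical genotype counts for the 147 genotyped animals (AA, AG, GG);
# the real counts are not given in the abstract.
genotype_counts = {"AA": 10, "AG": 47, "GG": 90}

total_animals = sum(genotype_counts.values())   # 147
total_alleles = 2 * total_animals

# Each AA animal carries two A alleles, each AG animal one.
a_alleles = 2 * genotype_counts["AA"] + genotype_counts["AG"]
freq_A = a_alleles / total_alleles
freq_G = 1 - freq_A

print(f"n = {total_animals}, freq(A) = {freq_A:.3f}, freq(G) = {freq_G:.3f}")
```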
Abstract:
Purpose: More than five hundred million direct dental restorations are placed each year worldwide. In about 55% of the cases, resin composites or compomers are used, and amalgam in about 45%. The longevity of posterior resin restorations is well documented. However, data on resin composites placed without enamel/dentin conditioning and on resin composites placed with self-etching adhesive systems are missing. Material and Methods: The SCOPUS database was searched for clinical trials on posterior resin composites without restricting the search by year of publication. The inclusion criteria were: (1) prospective clinical trial with at least 2 years of observation; (2) minimum number of restorations at last recall = 20; (3) report of the dropout rate; (4) report of the operative technique and materials used; (5) utilization of Ryge or modified Ryge evaluation criteria. For amalgam, only studies that directly compared composite resin restorations with amalgam were included. For the statistical analysis, a linear mixed model with random effects was used to account for the heterogeneity between the studies. P-values below 0.05 were considered significant. Results: Of the 373 clinical trials, 59 studies met the inclusion criteria. In 70% of the studies, Class II and Class I restorations had been placed. The overall success rate of composite resin restorations was about 90% after 10 years, which was not different from that of amalgam. Restorations with compomers had a significantly lower longevity. The main reasons for replacement were bulk fractures and caries adjacent to restorations. Both of these incidents were infrequent in most studies and accounted for only about 6% of all replaced restorations after 10 years. Restorations with macrofilled composites and compomers suffered significantly more loss of anatomical form than restorations with other types of material. Restorations placed without enamel acid etching and a dentin bonding agent showed significantly more marginal staining and detectable margins than restorations placed using the enamel-etch or etch-and-rinse technique; restorations with self-etching systems fell between the other groups. Restorations with compomers suffered significantly more chippings (repairable fractures) than restorations with other materials, which did not differ statistically from each other. Restorations placed with a rubber dam showed significantly fewer material fractures that required replacement, and this also had a significant effect on the overall longevity. Conclusion: Restorations with hybrid and microfilled composites placed with the enamel-etching technique and a rubber dam showed the best overall performance; the longevity of these restorations was similar to that of amalgam restorations. Compomer restorations, restorations placed with macrofilled composites, and resin restorations placed with no-etching or self-etching adhesives demonstrated significant shortcomings and shorter longevity.
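The analysis described above is a linear mixed model with random study effects. The following is a minimal sketch of that kind of fit, assuming a hypothetical long-format table (a file named longevity.csv with columns study, material, failure_rate) rather than the authors' actual dataset.

```python
# Sketch of a linear mixed model with a random intercept per study,
# in the spirit of the meta-analysis described above (hypothetical data layout).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: study id, material class, annual failure rate of restorations.
df = pd.read_csv("longevity.csv")

# A random intercept for 'study' accounts for between-study heterogeneity;
# 'material' is the fixed effect of interest (e.g. hybrid, microfilled, compomer, amalgam).
model = smf.mixedlm("failure_rate ~ material", data=df, groups=df["study"])
result = model.fit()

print(result.summary())   # fixed-effect estimates and p-values (significance at 0.05)
```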
Abstract:
The objective of this work was to estimate genetic parameters, genotypic and phenotypic correlations, and direct and indirect genetic gains among and within rubber tree (Hevea brasiliensis) progenies. The experiment was set up in the municipality of Jaú, SP, Brazil. A randomized complete block design was used, with 22 treatments (progenies), 6 replicates, and 10 plants per plot at a spacing of 3x3 m. Three-year-old progenies were assessed for girth, rubber yield, and bark thickness through direct and indirect gains and genotypic correlations. The number of latex vessel rings showed the best correlations, correlating positively and significantly with girth and bark thickness. Selection gains among progenies were greater than those within progenies for all the variables analyzed. Total gains were high, especially for girth increase and rubber yield, at 93.38 and 105.95%, respectively. Selection of young progenies can maximize the expected genetic gains, shortening the rubber tree selection cycle.
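Direct and indirect (correlated) gains of the kind estimated above follow standard quantitative-genetics expressions. The sketch below shows those formulas with hypothetical parameter values; none of the numbers are the authors' estimates.

```python
# Expected direct and correlated (indirect) responses to selection, using the
# standard expressions R_x = i * h_x^2 * sigma_Px and
# CR_y = i * h_x * h_y * r_g * sigma_Py. All numbers below are hypothetical.
import math

i = 1.40            # selection intensity (e.g. roughly 20% of progenies selected)
h2_girth = 0.45     # narrow-sense heritability of girth (assumed)
h2_yield = 0.30     # heritability of rubber yield (assumed)
r_g = 0.60          # genotypic correlation between girth and yield (assumed)
sd_girth = 4.0      # phenotypic standard deviation of girth, cm (assumed)
sd_yield = 12.0     # phenotypic standard deviation of yield, g per tapping (assumed)

# Direct gain from selecting on girth.
direct_gain_girth = i * h2_girth * sd_girth

# Correlated (indirect) gain in yield when selection is applied to girth.
indirect_gain_yield = i * math.sqrt(h2_girth) * math.sqrt(h2_yield) * r_g * sd_yield

print(f"direct gain in girth:     {direct_gain_girth:.2f} cm")
print(f"correlated gain in yield: {indirect_gain_yield:.2f} g/tapping")
```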
Abstract:
Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. METHODS: This study was conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with (131)I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with (177)Lu-peptides; and case 3, hepatocellular carcinoma treated with (90)Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD) and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D(VD), calculated assuming uniform density, was corrected for density, giving D(VDd). The average 3D-RD absorbed dose values, D(3DRD), were compared with D(VD) and D(VDd) using the relative difference Δ(VD/3DRD). At the voxel level, density-binned Δ(VD/3DRD) and Δ(VDd/3DRD) were plotted against ρ and fitted with a linear regression. RESULTS: The D(VD) calculations showed good agreement with D(3DRD). Δ(VD/3DRD) was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ(VD/3DRD) range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ(VD/3DRD) and density ρ: case 1 (Δ = -0.56ρ + 0.62, R(2) = 0.93), case 2 (Δ = -0.91ρ + 0.96, R(2) = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R(2) = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ(VDd/3DRD) < 1.1%), although to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ(VDd/3DRD) range decreased for the 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; case 3, -1.5% to 2%). The linear relationship no longer held for cases 2 and 3; for case 1 it persisted (Δ = 0.41ρ - 0.38, R(2) = 0.88), although with a less pronounced slope. CONCLUSION: This study shows a small influence of TDH in the abdominal region for 3 representative clinical cases. A simple density-correction method was proposed and improved the agreement of the absorbed dose calculations when using our voxel S value implementation.
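The abstract does not spell out the density correction itself. A common simple choice rescales each voxel dose by the ratio of the reference (water) density to the local voxel density, since absorbed dose is energy per unit mass; the sketch below uses that assumed form, which may differ from the authors' exact rescaling, and reproduces the kind of voxel-level comparison described above (relative difference binned by density and fitted with a straight line). The input arrays are hypothetical files standing in for the clinical dose and density maps.

```python
# Sketch of the voxel-level comparison described above: relative difference of a
# dose-kernel (voxel S value) dose map against a Monte Carlo reference, a simple
# density correction, and a linear fit of the binned difference against density.
# The correction D_VDd = D_VD * (rho_water / rho) is an assumption, not
# necessarily the exact rescaling used in the study.
import numpy as np

# Hypothetical inputs: flattened voxel arrays of equal length.
d_vd = np.load("dose_voxeldose.npy")    # DK dose, uniform (water) density assumed
d_3drd = np.load("dose_3drd.npy")       # direct Monte Carlo reference dose
rho = np.load("density_map.npy")        # voxel mass density, g/cm^3

rho_water = 1.0
d_vdd = d_vd * rho_water / rho          # assumed density correction

def rel_diff_percent(test, ref):
    """Relative difference Delta(test/ref) in percent."""
    return 100.0 * (test - ref) / ref

delta_vd = rel_diff_percent(d_vd, d_3drd)
delta_vdd = rel_diff_percent(d_vdd, d_3drd)

# Bin the voxel-level differences by density and fit a straight line, as in the study.
bins = np.linspace(rho.min(), rho.max(), 20)
idx = np.digitize(rho, bins) - 1
centers_used, binned = [], []
for k in range(len(bins) - 1):
    in_bin = idx == k
    if in_bin.any():
        centers_used.append(0.5 * (bins[k] + bins[k + 1]))
        binned.append(delta_vd[in_bin].mean())

slope, intercept = np.polyfit(centers_used, binned, 1)
print(f"before correction: Delta(VD/3DRD) ~ {slope:.2f} * rho + {intercept:.2f}")
print(f"after  correction: mean Delta(VDd/3DRD) = {delta_vdd.mean():.2f} %")
```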
Abstract:
Knowledge of the pathological diagnosis before deciding the best strategy for treating parasellar lesions is of prime importance, due to the relatively high morbidity and side effects of open direct approaches to this region, which is known to be rich in important vasculo-nervous structures. When imaging is not evocative enough to ascertain an accurate pathological diagnosis, a percutaneous biopsy through the transjugal-transoval route (of Hartel) may be performed to guide the therapeutic decision. The chapter is based on the authors' experience in 50 patients who underwent the procedure over the past ten years. There was no mortality and only little (mostly transient) morbidity. The pathological diagnostic accuracy of the method proved good, with a sensitivity of 0.83 and a specificity of 1. In the chapter, the authors first recall the surgical anatomy background from personal laboratory dissections. They then describe the technical procedure, as well as the tissue harvesting method. Finally, they define the indications together with the decision-making process. Due to the constrained trajectory of the biopsy needle inserted through the foramen ovale, the accessible lesions are only those located in the Meckel trigeminal cave, the posterior sector of the cavernous sinus compartment, and the upper part of the petroclival region. The authors advise performing this percutaneous biopsy when imaging does not provide sufficient evidence of the pathological nature of the lesion for the therapeutic decision. The goal is to avoid unnecessary open surgery or radiosurgery, as well as inappropriate chemo- or radiotherapy.
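The reported sensitivity of 0.83 and specificity of 1 follow from the usual definitions. The short sketch below shows the computation with hypothetical counts chosen only to illustrate the formulas; the chapter's actual contingency table is not reproduced here.

```python
# Diagnostic accuracy of the percutaneous biopsy against the final diagnosis.
# Counts below are hypothetical placeholders, not the authors' actual table.
true_positive = 25    # biopsy identified the lesion type correctly
false_negative = 5    # biopsy missed or was non-diagnostic for a real lesion
true_negative = 20    # biopsy correctly ruled out the suspected pathology
false_positive = 0    # a specificity of 1 implies no false positives

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)

print(f"sensitivity = {sensitivity:.2f}")   # 0.83 with these counts
print(f"specificity = {specificity:.2f}")   # 1.00 with these counts
```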
Abstract:
In distributed energy production, permanent magnet synchronous generators (PMSG) are often connected to the grid via frequency converters, such as voltage source line converters. The price of the converter may constitute a large part of the cost of a generating set. Some of the permanent magnet synchronous generators with converters and traditional separately excited synchronous generators could be replaced by direct-on-line (DOL) non-controlled PMSGs. Small directly network-connected generators are likely to have large markets in the area of distributed electric energy generation. Typical prime movers could be windmills, watermills and internal combustion engines. DOL PMSGs could also be applied in island networks, such as ships and oil platforms. Various back-up power generating systems could also be implemented with DOL PMSGs. The benefits would be a lower price of the generating set and the robustness and ease of use of the system. The performance of DOL PMSGs is analyzed. The electricity distribution companies have regulations that constrain the design of generators to be connected to the grid. The general guidelines and recommendations are applied in the analysis. By analyzing the results produced by the simulation model of the permanent magnet machine, guidelines for efficient damper winding parameters for DOL PMSGs are presented. The simulation model is used to simulate grid connections and load transients. The damper winding parameters are calculated by the finite element method (FEM) and determined from experimental measurements. Three-dimensional finite element analysis (3D FEA) is carried out. The results from the simulation model and 3D FEA are compared with practical measurements from two prototype axial flux permanent magnet generators equipped with damper windings. The dimensioning of the damper winding parameters is case specific. The damper winding should be dimensioned based on the moment of inertia of the generating set. It is shown that the damper winding has optimal values for reaching synchronous operation in the shortest period of time after a transient. With optimal dimensioning, interference on the grid is minimized.
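The argument above, that the damper winding must be dimensioned against the moment of inertia so the machine pulls into synchronism quickly after a transient, can be illustrated with a much-simplified lumped-parameter model: a single-mass swing equation with a magnet (synchronizing) torque and a slip-dependent damper torque. This is only a toy sketch; the linear damper model and every numerical value below are assumptions, far removed from the FEM-based damper parameter identification used in the thesis.

```python
# Toy swing-equation model of a direct-on-line PMSG pulling into synchronism:
#   J * domega/dt = T_mech - T_max*sin(delta) - k_d*(omega - omega_s)
#   d(delta)/dt   = omega - omega_s
# All parameter values are illustrative assumptions, not measured machine data.
import numpy as np
from scipy.integrate import solve_ivp

J = 2.0                          # moment of inertia of the set, kg*m^2 (assumed)
omega_s = 2 * np.pi * 50 / 2     # synchronous mechanical speed, 4-pole 50 Hz machine
T_mech = 40.0                    # prime-mover torque, N*m (assumed)
T_max = 120.0                    # peak synchronizing torque from the magnets, N*m (assumed)
k_d = 6.0                        # lumped damper-winding torque coefficient, N*m*s/rad (assumed)

def swing(t, y):
    delta, omega = y
    t_elec = T_max * np.sin(delta) + k_d * (omega - omega_s)
    return [omega - omega_s, (T_mech - t_elec) / J]

# Start slightly below synchronous speed, as after a grid-connection transient.
sol = solve_ivp(swing, (0.0, 5.0), [0.0, 0.95 * omega_s], max_step=1e-3)

slip = (omega_s - sol.y[1]) / omega_s
outside = np.nonzero(np.abs(slip) >= 1e-3)[0]
if outside.size and outside[-1] + 1 < sol.t.size:
    print(f"slip stays below 0.1% from t = {sol.t[outside[-1] + 1]:.2f} s onward")
else:
    print("machine did not settle to synchronous speed within the simulated window")
```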
Abstract:
The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulations are based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric (in the mathematical sense that all parameter values are considered, including the degenerate cases, for which the system is solvable) algebraic systems of n equations in at least n+1 variables. Adopting the developed solution method in solving the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on the combination of a two-precision-point formulation and the optimisation (with mathematical programming techniques or with optimisation methods based on probability and statistics) of substructures using criteria calculated from the system-level response of multidegree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
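The contrast drawn above between exact synthesis (n equations in n unknowns) and approximate synthesis (error minimisation under inequality constraints) can be illustrated with a generic numerical sketch. The toy residual below is only a stand-in for the dyadic synthesis equations, which are not reproduced here; the two solver calls simply show the two formulations side by side.

```python
# Generic illustration of the two kinematic-synthesis formulations discussed above:
#  - exact synthesis: solve n (nonlinear) equations in n structural parameters;
#  - approximate synthesis: minimise the residual of an overdetermined system
#    under inequality (bound) constraints on the parameters.
import numpy as np
from scipy.optimize import fsolve, least_squares

def residual(x, thetas, targets):
    """Structural error at each precision point for a toy two-parameter model."""
    a, b = x
    return a * np.cos(np.asarray(thetas) + b) - np.asarray(targets)

# Exact formulation: two precision points, two unknowns -> a square system for fsolve.
thetas_exact = [0.0, np.pi / 3]
targets_exact = [1.0, 0.4]
x_exact = fsolve(lambda x: residual(x, thetas_exact, targets_exact), [1.0, 0.1])
print("exact synthesis parameters (a, b):", x_exact)

# Approximate formulation: more precision points than unknowns -> minimise the
# structural error in a least-squares sense, with bounds as inequality constraints.
thetas_approx = [0.0, np.pi / 6, np.pi / 3, np.pi / 2]
targets_approx = [1.0, 0.8, 0.45, 0.05]   # deliberately inconsistent (overdetermined)
fit = least_squares(residual, x0=[1.0, 0.1],
                    args=(thetas_approx, targets_approx),
                    bounds=([0.1, -np.pi], [5.0, np.pi]))
print("approximate synthesis parameters (a, b):", fit.x)
print("residual structural error at each point:", fit.fun)
```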
Abstract:
The purpose of this thesis is to analyse activity-based costing (ABC) and possible modified versions of it in the engineering design context. Design engineers need cost information at their decision-making level, and the cost information should also have a strong future orientation. These demands are high because traditional management accounting has concentrated on the direct actual costs of the products. However, cost accounting has progressed since ABC was introduced in the late 1980s and adopted widely by companies in the 1990s. ABC has been a success, but it has also drawn criticism. In some cases the ambitious ABC systems have become too complex to build, use and update. This study can be called an action-oriented case study with some normative features. In this thesis, theoretical concepts are assessed and allowed to unfold gradually through interaction with data from three cases. The theoretical starting points are ABC and the theory of the engineering design process (chapter 2). Concepts and research results from these theoretical approaches are summarized in two hypotheses (chapter 2.3). The hypotheses are analysed with two cases (chapter 3). After the two case analyses, the ABC part is extended to cover other modern cost accounting methods as well, e.g. process costing and feature costing (chapter 4.1). The ideas from this second theoretical part are operationalized with the third case (chapter 4.2). The knowledge from the theory and the three cases is summarized in the created framework (chapter 4.3). With the created framework it is possible to analyse ABC and its modifications in the engineering design context. The framework collects the factors that guide the choice of the costing method to be used in engineering design. It also illuminates the contents of various ABC-related costing methods. However, the framework needs to be tested further. On the basis of the three cases it can be said that ABC should be used cautiously when formulating cost information for engineering design. It is suitable when manufacturing can be considered simple, or when the design engineers are not cost conscious, and in the beginning of the design process when doing adaptive or variant design. If the design engineers need cost information for embodiment or detailed design, or if manufacturing can be considered complex, or when design engineers are cost conscious, ABC always has to be evaluated critically.
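As a reminder of the mechanics that the thesis evaluates, the sketch below shows a bare-bones activity-based costing calculation: overhead is traced to activities, activity driver rates are formed, and a design variant is costed by its driver consumption. The pools, drivers and figures are invented for illustration and are not taken from the cases.

```python
# Minimal activity-based costing (ABC) sketch: overhead is assigned to activities,
# each activity gets a cost-driver rate, and a design variant is costed by the
# amount of each driver it consumes. All figures below are invented.
activity_cost_pools = {          # annual overhead traced to each activity (EUR)
    "machining setups": 120_000,
    "purchase orders": 60_000,
    "quality inspections": 45_000,
}
annual_driver_volumes = {        # total driver volume per activity
    "machining setups": 400,     # number of setups
    "purchase orders": 1_500,    # number of orders
    "quality inspections": 900,  # number of inspections
}

driver_rates = {a: activity_cost_pools[a] / annual_driver_volumes[a]
                for a in activity_cost_pools}

# Driver consumption of one design variant, as a design engineer might estimate it.
design_variant = {"machining setups": 3, "purchase orders": 8, "quality inspections": 5}

overhead_cost = sum(driver_rates[a] * n for a, n in design_variant.items())
direct_cost = 250.0              # direct material and labour, estimated separately

print("activity driver rates:", {a: round(r, 2) for a, r in driver_rates.items()})
print(f"estimated unit cost of the variant: {direct_cost + overhead_cost:.2f} EUR")
```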
Abstract:
Background: Coxiella burnetii is a highly clonal microorganism that is difficult to culture, requiring BSL3 conditions for its propagation. This leads to a scarce availability of isolates worldwide. On the other hand, published methods of characterization have delineated up to 8 different genomic groups and 36 genotypes. However, all these methodologies, with the exception of one that exhibited limited discriminatory power (3 genotypes), rely on performing between 10 and 20 PCR amplifications or on sequencing long fragments of DNA, which makes their direct application to clinical samples impracticable and leads to scarce accessibility of data on the circulation of C. burnetii genotypes. Results: To assess the variability of this organism in Spain, we have developed a novel method that consists of a multiplex (8 targets) PCR and hybridization with specific probes that reproduce the previous classification of this organism into 8 genomic groups, and up to 16 genotypes. It allows a direct characterization from clinical and environmental samples in a single run, which will help in the study of the different genotypes circulating in wild and domestic cycles, as well as in sporadic human cases and outbreaks. The method has been validated with reference isolates. A high variability of C. burnetii has been found in Spain among the 90 samples tested, with 10 different genotypes detected; the adaA-negative genotypes were associated with acute Q fever cases presenting as fever of intermediate duration with liver involvement, and with chronic cases. Genotypes infecting humans are also found in sheep, goats, rats, wild boar and ticks, and the only genotype found in cattle has never been found among our clinical samples. Conclusions: This newly developed methodology has made it possible to demonstrate that C. burnetii is highly variable in Spain. With the data presented here, cattle do not seem to participate in the transmission of C. burnetii to humans in the samples studied, while sheep, goats, wild boar, rats and ticks share genotypes with the human population.
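The method above classifies C. burnetii from the hybridization pattern of a single 8-target multiplex PCR. The sketch below shows how such a pattern could be turned into a genomic-group/genotype call with a simple lookup; the probe names, signature patterns and genotype labels are entirely hypothetical placeholders, since the real panel and signature table are defined in the study itself.

```python
# Sketch of genotype calling from an 8-probe hybridization pattern.
# Probe names and signature patterns are hypothetical placeholders; the actual
# panel and genotype table are defined in the study itself.
PROBES = ["p1", "p2", "p3", "p4", "p5", "p6", "p7", "adaA"]

# Hypothetical signature table: tuple of 0/1 hybridization results -> genotype label.
SIGNATURES = {
    (1, 0, 1, 0, 1, 0, 0, 1): "genomic group I / genotype 1 (adaA positive)",
    (1, 0, 1, 0, 1, 0, 0, 0): "genomic group I / genotype 2 (adaA negative)",
    (0, 1, 0, 1, 0, 1, 1, 0): "genomic group IV / genotype 9 (adaA negative)",
}

def call_genotype(hybridization: dict) -> str:
    """Map a probe -> 0/1 hybridization result to a genotype, if the pattern is known."""
    pattern = tuple(hybridization[p] for p in PROBES)
    return SIGNATURES.get(pattern, "unknown pattern - not in the signature table")

sample = {"p1": 1, "p2": 0, "p3": 1, "p4": 0, "p5": 1, "p6": 0, "p7": 0, "adaA": 0}
print(call_genotype(sample))   # -> "genomic group I / genotype 2 (adaA negative)" here
```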
Abstract:
OBJECTIVES: This is the first meta-analysis on the efficacy of composite resin restorations in anterior teeth. The objective of the present meta-analysis was to verify whether specific material classes, tooth conditioning methods and operational procedures influence the results for Class III and Class IV restorations. MATERIAL AND METHODS: The SCOPUS and PubMed databases were searched for clinical trials on anterior resin composites without restricting the search by year of publication. The inclusion criteria were: (1) prospective clinical trial with at least 2 years of observation; (2) minimum number of restorations at last recall = 20; (3) report on the drop-out rate; (4) report of the operative technique and materials used in the trial, and (5) utilization of Ryge or modified Ryge evaluation criteria. For the statistical analysis, a linear mixed model with random effects was used to account for the heterogeneity between the studies. p-values smaller than 0.05 were considered significant. RESULTS: Of the 84 clinical trials, 21 studies met the inclusion criteria, 14 of them for Class III restorations, 6 for Class IV restorations and 1 for closure of diastemata; the latter was included in the Class IV group. Twelve of the 21 studies started before 1991 and 18 before 2001. The estimated median overall success rate (without replacement) after 10 years was 95% for Class III composite resin restorations and 90% for Class IV restorations. The main reason for the replacement of Class IV restorations was bulk fracture, which occurred significantly more frequently with microfilled composites than with hybrid and macrofilled composites. Caries adjacent to restorations was infrequent in most studies and accounted for only about 2.5% of all replaced restorations after 10 years, irrespective of the cavity class. Class III restorations with glass ionomer derivatives suffered significantly more loss of anatomical form than did fillings with other types of material. When the enamel was acid-etched and no bonding agent was applied, significantly more restorations showed marginal staining and detectable margins compared to enamel etching with an enamel bonding agent or the total-etch technique; fillings with self-etching systems fell between these two groups for both outcome variables. Bevelling of the enamel was associated with significantly reduced deterioration of the anatomical form compared to no bevelling, but not with less marginal staining or fewer detectable margins. The type of isolation (absolute/relative) had a statistically significant influence on marginal caries, which, however, might be a random finding.
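Ten-year success rates like those reported above are often summarised as a mean annual failure rate, assuming a constant yearly failure probability. The short sketch below performs that conversion for the 95% (Class III) and 90% (Class IV) figures; the conversion itself is a common convention, not a result reported in the abstract.

```python
# Convert a cumulative success rate after t years into a mean annual failure rate,
# assuming a constant yearly failure probability: S(t) = (1 - m)**t  =>  m = 1 - S**(1/t).
# This is a common summary convention, not a figure reported in the abstract itself.
def mean_annual_failure_rate(success_rate: float, years: float) -> float:
    return 1.0 - success_rate ** (1.0 / years)

for label, success in [("Class III", 0.95), ("Class IV", 0.90)]:
    m = mean_annual_failure_rate(success, 10)
    print(f"{label}: 10-year success {success:.0%} -> mean annual failure rate {m:.2%}")
```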
Abstract:
The purpose of this work was to collect dependability data on the flue gas line of two Finnish pulp mills, from their commissioning up to the present day. Dependability data consist of reliability data and maintenance data. The collected data make it possible to describe the dependability of the plant accurately with the following indicators: the number of unplanned failures and their repair times, equipment downtime, failure probability, and corrective maintenance costs relative to the total corrective maintenance costs of the flue gas line. The method used to collect the dependability data is presented. The method used to identify the critical equipment of the flue gas line is a combination of a questionnaire survey and a modified failure mode, effects and criticality analysis. The criteria for selecting equipment for the final criticality analysis were decided on the basis of the dependability data and the questionnaire survey. The purpose of identifying the critical equipment is to find the equipment in the flue gas line whose unexpected failure causes the most serious consequences for the reliability, production, safety, emissions and costs of the flue gas line. With this knowledge, the limited maintenance resources can be allocated correctly. As a result of the criticality analysis, the three most critical pieces of equipment in the flue gas line, common to both pulp mills, are the flue gas fans, the drag conveyors and the chain conveyors. The dependability data show that equipment reliability is mill-specific, but in principle the same main trends can be seen in the figures presenting the probability of unplanned failures. The costs, expressed as the ratio of the unplanned maintenance costs of a piece of equipment to the total costs of the flue gas line, follow quite closely the reliability curve calculated as the ratio of equipment downtime to operating hours. The collection of dependability data combined with the identification of critical equipment makes it possible to target and schedule preventive maintenance correctly over the lifetime of the equipment so that the reliability and cost-efficiency requirements are met.
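The dependability indicators listed above (number of unplanned failures, repair times, downtime, downtime share and the cost share of corrective maintenance) can be computed directly from a maintenance event log. The sketch below does so for a hypothetical log of one device, since the mills' actual data are not reproduced here.

```python
# Sketch of the dependability indicators described above, computed from a
# hypothetical maintenance event log of one flue gas line device.
operating_hours = 8000.0                      # observation period, h (assumed)
repair_hours = [6.0, 14.0, 3.5, 9.0]          # unplanned repairs during the period (assumed)
repair_costs = [4200.0, 11500.0, 1800.0, 6300.0]   # corrective maintenance costs, EUR (assumed)
line_total_corrective_cost = 180_000.0        # whole flue gas line, same period (assumed)

n_failures = len(repair_hours)
downtime = sum(repair_hours)
mttr = downtime / n_failures                  # mean time to repair
mtbf = (operating_hours - downtime) / n_failures
availability = mtbf / (mtbf + mttr)
downtime_share = downtime / operating_hours   # downtime relative to operating hours
cost_share = sum(repair_costs) / line_total_corrective_cost

print(f"unplanned failures: {n_failures}, MTTR: {mttr:.1f} h, MTBF: {mtbf:.0f} h")
print(f"availability: {availability:.3f}, downtime share: {downtime_share:.4f}")
print(f"share of the line's corrective maintenance costs: {cost_share:.1%}")
```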
Abstract:
Background: TILLING (Targeting Induced Local Lesions IN Genomes) is a reverse genetic method that combines chemical mutagenesis with high-throughput genome-wide screening for point mutation detection in genes of interest. However, this mutation discovery approach faces a particular problem, namely how to obtain a mutant population with a sufficiently high mutation density. Furthermore, plant mutagenesis protocols require two successive generations (M1, M2) for mutation fixation before genotype analysis can begin. Results: Here, we describe a new TILLING approach for rice based on ethyl methanesulfonate (EMS) mutagenesis of mature seed-derived calli and direct screening of in vitro regenerated plants. A high mutation density was obtained (i.e. one mutation in every 451 kb) when plants were screened for two senescence-related genes. Screening was carried out in 2400 individuals from a mutant population of 6912. Seven sense-change mutations out of 15 point mutations were identified. Conclusions: This new strategy represents a significant advantage in terms of time savings (i.e. more than eight months), greenhouse space and work during the generation of mutant plant populations. Furthermore, this effective chemical mutagenesis protocol ensures high mutagenesis rates, thereby saving on waste removal costs and on the total amount of mutagen needed thanks to the reduced mutagenesis volume.
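The reported density of one mutation per 451 kb follows from the total amount of screened sequence divided by the number of mutations found. The sketch below shows that arithmetic; the per-individual screened length is a hypothetical input, chosen here only so that the calculation reproduces the reported density, since the amplicon sizes are not given in the abstract.

```python
# Mutation density from a TILLING screen: total screened bases / mutations found.
# The screened length per individual is a hypothetical placeholder, chosen so the
# arithmetic reproduces the reported one-per-451 kb density; only the numbers of
# individuals and mutations come from the abstract.
individuals_screened = 2400
point_mutations_found = 15
screened_bp_per_individual = 2_820    # combined amplicon length for the two genes (assumed)

total_screened_bp = individuals_screened * screened_bp_per_individual
density_kb_per_mutation = total_screened_bp / point_mutations_found / 1000

print(f"total screened sequence: {total_screened_bp / 1e6:.2f} Mb")
print(f"mutation density: one mutation per {density_kb_per_mutation:.0f} kb")
```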
Abstract:
PURPOSE: Postmortem computed tomography angiography (PMCTA) was introduced into forensic investigations a few years ago. It provides reliable images that can be consulted at any time. Conventional autopsy remains the reference standard for defining the cause of death, but provides only a limited possibility of a second examination. This study compares the two procedures and discusses findings that can be detected exclusively with each method. MATERIALS AND METHODS: This retrospective study compared radiological reports from PMCTA with reports from conventional autopsy for 50 forensic autopsy cases. Reported findings from autopsy and PMCTA were extracted and compared to each other. PMCTA was performed using a modified heart-lung machine and the oily contrast agent Angiofil® (Fumedica AG, Muri, Switzerland). RESULTS: PMCTA and conventional autopsy would have drawn similar conclusions regarding the causes of death. Nearly 60% of all findings were visualized with both techniques. PMCTA demonstrated a higher sensitivity for identifying skeletal and vascular lesions. However, vascular occlusions due to postmortem blood clots could be falsely interpreted as vascular lesions. In contrast, conventional autopsy does not detect all bone fractures or the exact source of bleeding. Conventional autopsy provides important information about organ morphology and remains the only way to diagnose a vital vascular occlusion with certainty. CONCLUSION: Overall, PMCTA and conventional autopsy provide comparable findings. However, each technique has advantages and disadvantages for detecting specific findings. These differences must be understood in order to interpret findings correctly and to clearly define the indications for PMCTA.
Abstract:
Identification of CD8+ cytotoxic T lymphocyte (CTL) epitopes has traditionally relied upon testing of overlapping peptide libraries for their reactivity with T cells in vitro. Here, we pursued deep ligand sequencing (DLS) as an alternative method of directly identifying those ligands that are epitopes presented to CTLs by the class I human leukocyte antigens (HLA) of infected cells. Soluble class I HLA-A*11:01 (sHLA) was gathered from HIV-1 NL4-3-infected human CD4+ SUP-T1 cells. HLA-A*11:01 harvested from infected cells was immunoaffinity purified and acid boiled to release heavy and light chains from peptide ligands that were then recovered by size-exclusion filtration. The ligands were first fractionated by high-pH high-pressure liquid chromatography and then subjected to separation by nano-liquid chromatography (nano-LC)–mass spectrometry (MS) at low pH. Approximately 10 million ions were selected for sequencing by tandem mass spectrometry (MS/MS). HLA-A*11:01 ligand sequences were determined with PEAKS software and confirmed by comparison to spectra generated from synthetic peptides. DLS identified 42 viral ligands presented by HLA-A*11:01, and 37 of these were previously undetected. These data demonstrate that (i) HIV-1 Gag and Nef are extensively sampled, (ii) ligand length variants are prevalent, particularly within Gag and Nef hot spots where ligand sequences overlap, (iii) noncanonical ligands are T cell reactive, and (iv) HIV-1 ligands are derived from de novo synthesis rather than endocytic sampling. Next-generation immunotherapies must factor these nascent HIV-1 ligand length variants and the finding that CTL-reactive epitopes may be absent during infection of CD4+ T cells into strategies designed to enhance T cell immunity.
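One observation above is that ligand length variants cluster in hot spots where sequences overlap. A simple way to surface such variants from a list of eluted peptide sequences is to group peptides that are nested within a longer peptide; the sketch below does this with hypothetical example sequences, not the study's actual HLA-A*11:01 ligand set.

```python
# Group peptide ligands into length-variant families: a peptide belongs to the
# family of any longer peptide that contains it as a substring (nested set).
# The example sequences are hypothetical, not the study's HLA-A*11:01 ligands.
from collections import defaultdict

ligands = [
    "AIFQSSMTK",      # 9-mer
    "AIFQSSMTKIL",    # 11-mer sharing the same N-terminal core
    "IFQSSMTK",       # 8-mer nested in both of the above
    "QVPLRPMTYK",     # unrelated 10-mer
]

families = defaultdict(list)
for pep in sorted(ligands, key=len, reverse=True):
    # Attach to the first existing family whose representative contains this peptide.
    parent = next((rep for rep in families if pep in rep), None)
    families[parent if parent else pep].append(pep)

for representative, members in families.items():
    print(f"{representative}: {sorted(members, key=len)}")
```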
Abstract:
Ultra-trace amounts of Cu(II) were separated and preconcentrated by solid phase extraction on octadecyl-bonded silica membrane disks modified with a new Schiff's base, bis(2-hydroxyacetophenone)-2,2-dimethyl-1,3-propanediimine (SBTD), followed by elution and inductively coupled plasma atomic emission spectrometric detection. The method was applied to the separation and detection of copper(II) in environmental and biological samples. The extraction efficiency and the influence of the sample matrix, flow rate, pH, and the type and minimum amount of stripping acid were investigated. The concentration factor and detection limit of the proposed method are 500 and 12.5 pg mL⁻¹, respectively.
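The two figures quoted above, a concentration factor of 500 and a detection limit of 12.5 pg mL⁻¹, correspond to standard definitions: the sample-to-eluate volume ratio and the 3-sigma criterion, respectively. The sketch below shows both calculations with hypothetical volumes and blank readings, chosen only so the results land near the reported values.

```python
# Standard figures of merit for a solid phase extraction / ICP-AES method:
# concentration factor = sample volume / final eluate volume, and detection
# limit by the 3-sigma criterion. Volumes, blank readings and the calibration
# slope are hypothetical, chosen to land near the reported values.
import statistics

sample_volume_ml = 500.0      # loaded through the modified membrane disk (assumed)
eluate_volume_ml = 1.0        # stripping acid volume (assumed) -> factor of 500

concentration_factor = sample_volume_ml / eluate_volume_ml

blank_signals = [0.00102, 0.00095, 0.00110, 0.00098, 0.00105,
                 0.00101, 0.00097, 0.00108, 0.00100, 0.00104]   # replicate blanks (assumed)
calibration_slope = 1.15e-5   # signal units per (pg/mL) of Cu(II), hypothetical

sigma_blank = statistics.stdev(blank_signals)
detection_limit = 3 * sigma_blank / calibration_slope            # in pg/mL

print(f"concentration factor: {concentration_factor:.0f}")
print(f"detection limit (3*sigma/slope): {detection_limit:.1f} pg/mL")
```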