982 results for "Analytical methods validate"
Abstract:
To check the effectiveness of campaigns preventing drug abuse, or to gauge the local effects of efforts against drug trafficking, it is beneficial to know the consumed amounts of substances at high spatial and temporal resolution. The analysis of drugs of abuse in wastewater (WW) has the potential to provide this information. In this study, the reliability of WW drug consumption estimates is assessed and a novel method is presented to calculate the total uncertainty in observed WW cocaine (COC) and benzoylecgonine (BE) loads. Specifically, uncertainties resulting from discharge measurements, chemical analysis and the applied sampling scheme were addressed and three approaches are presented. These consist of (i) a generic model-based procedure to investigate the influence of the sampling scheme on the uncertainty of observed or expected drug loads, (ii) a comparative analysis of two analytical methods (high performance liquid chromatography-tandem mass spectrometry and gas chromatography-mass spectrometry), including an extended cross-validation by influent profiling over several days, and (iii) monitoring of COC and BE concentrations in WW of the largest Swiss sewage treatment plants. In addition, the COC and BE loads observed in the sewage treatment plant of the city of Berne were used to back-calculate the COC consumption. The estimated mean daily consumed amount was 107 ± 21 g of pure COC, corresponding to 321 g of street-grade COC.
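The back-calculation in the final sentence follows the standard wastewater-epidemiology scheme: scale the daily BE load by the COC/BE molar-mass ratio and divide by the fraction of a dose excreted as BE. A minimal sketch, where the excretion fraction is an illustrative literature-style value, not necessarily the parameter used in the study:

```python
# Back-calculating cocaine consumption from wastewater benzoylecgonine loads.
# Molar masses are standard; EXCRETION_BE is an assumed, illustrative value.

MW_COC = 303.4       # g/mol, cocaine
MW_BE = 289.3        # g/mol, benzoylecgonine
EXCRETION_BE = 0.29  # assumed fraction of a COC dose excreted as BE

def coc_consumed(be_load_g_per_day: float) -> float:
    """Estimate pure COC consumed (g/day) from a daily BE load (g/day)."""
    return be_load_g_per_day * (MW_COC / MW_BE) / EXCRETION_BE
```

With these assumed parameters, a BE load of about 29 g/day maps to roughly 105 g/day of pure COC, i.e. the order of magnitude reported for Berne.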
Abstract:
Concern over possible adverse effects of endocrine-disrupting compounds on fish has driven the development of appropriate testing methods. In vitro screening assays may provide initial information on the endocrine activities of a test compound and thereby direct and optimize subsequent testing. Induction of vitellogenin (VTG) is used as a biomarker of exposure of fish to estrogen-active substances. Since VTG induction can be measured not only in vivo but also in fish hepatocytes in vitro, the VTG induction response in isolated fish liver cells has been suggested as an in vitro screen for identifying estrogen-active substances. The main advantages of the hepatocyte VTG assay are considered to be its ability to detect effects of estrogenic metabolites, since hepatocytes in vitro remain metabolically competent, and its ability to detect both estrogenic and anti-estrogenic effects. In this article, we critically review the current knowledge on the VTG response of cultured fish hepatocytes to (anti)estrogenic substances. In particular, we discuss the sensitivity, specificity, and variability of the VTG hepatocyte assay. In addition, we review the available data on culture factors influencing basal and induced VTG production, the response to natural and synthetic estrogens as well as to xenoestrogens, the detection of indirect estrogens, and the sources of assay variability. VTG induction in cultured fish hepatocytes is clearly influenced by culture conditions (medium composition, temperature, etc.) and culture system (hepatocyte monolayers, aggregates, liver slices, etc.). The currently available database on estrogen-mediated VTG induction in cultured teleost hepatocytes is too small to support conclusive statements on whether systematic differences in the VTG response exist between in vitro culture systems, VTG analytical methods or fish species.
The VTG hepatocyte assay sensitively detects natural and synthetic estrogens, whereas the response to xenoestrogens appears to be more variable. The detection of weak estrogens can be critical because their estrogenic response may be overshadowed by cytotoxicity at the required concentrations. Moreover, the VTG hepatocyte assay is able to detect antiestrogens as well as indirect estrogens, i.e., substances which require metabolic activation to induce an estrogenic response. Nevertheless, more chemicals need to be analysed to corroborate this statement. It will be necessary to establish standardized protocols to minimize assay variability, and to develop a set of pass-fail criteria as well as cut-offs for designating positive and negative responses.
Abstract:
Light microscopical (LM) and electron microscopical (EM) techniques have had a major influence on the development and direction of cell biology, and particularly on the investigation of complex host-parasite relationships. Earlier, microscopy was rather descriptive, but new technical and scientific advances have changed the situation. Microscopy has now become analytical, quantitative and three-dimensional, with greater emphasis on the analysis of live cells with fluorescent markers. The new or improved techniques that have become available include immunocytochemistry using immunogold labeling techniques or fluorescent probes, cryopreservation and cryosectioning, in situ hybridization, fluorescent reporters for subcellular localization, micro-analytical methods for elemental distribution, confocal laser scanning microscopy, scanning tunneling microscopy and live imaging. Taken together, these tools are providing both researchers and students with a novel and multidimensional view of the intricate biological processes during parasite development in the host.
Abstract:
Students are now involved in a vastly different textual landscape than many English scholars, one that relies on the “reading” and interpretation of multiple channels of simultaneous information. As a response to these new kinds of literate practices, my dissertation adds to the growing body of research on multimodal literacies, narratology in new media, and rhetoric through an examination of the place of video games in English teaching and research. I describe in this dissertation a hybridized theoretical basis for incorporating video games in English classrooms. This framework for textual analysis includes elements from narrative theory in literary study, rhetorical theory, and literacy theory, and when combined to account for the multiple modalities and complexities of gaming, can provide new insights about those theories and practices across all kinds of media, whether in written texts, films, or video games. In creating this framework, I hope to encourage students to view texts from a meta-level perspective, encompassing textual construction, use, and interpretation. In order to foster meta-level learning in an English course, I use specific theoretical frameworks from the fields of literary studies, narratology, film theory, aural theory, reader-response criticism, game studies, and multiliteracies theory to analyze a particular video game: World of Goo. These theoretical frameworks inform pedagogical practices used in the classroom for textual analysis of multiple media. Examining a video game from these perspectives, I use analytical methods from each, including close reading, explication, textual analysis, and individual elements of multiliteracies theory and pedagogy. In undertaking an in-depth analysis of World of Goo, I demonstrate the possibilities for classroom instruction with a complex blend of theories and pedagogies in English courses. 
This blend of theories and practices is meant to foster literacy learning across media, helping students develop metaknowledge of their own literate practices in multiple modes. Finally, I outline a design for a multiliteracies course that would allow English scholars to use video games along with other texts to interrogate texts as systems of information. In doing so, students can hopefully view and transform systems in their own lives as audiences, citizens, and workers.
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components and more complex shapes such as window seals. The die is fed by a screw extruder when polymers are used. The extruder melts, mixes and pressurizes the material through the rotation of either a single or double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet. The extruded section is then cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties such as temperature uniformity and residence time are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometry and of polymer material properties, the design of complex dies by analytical methods is difficult. For complex dies, iterative methods must be used to optimize the design, and an automated iterative method is desired. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in a commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust region method were employed for automated optimization of die geometries. For the trust region, a discrete derivative and a BFGS Hessian approximation were used.
To deal with the noise in the function, the trust region method was modified to automatically adjust the discrete derivative step size and the trust region based on changes in noise and function contour. Generally, uniformity of velocity at the exit of the extrusion die can be improved by increasing resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially from the pressure limit is applied. This penalty can be applied in two different ways: the first only to designs which exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
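The two penalty variants described above can be sketched as follows; the penalty steepness `k` and the exact functional form are illustrative assumptions, not the work's actual formulation:

```python
import math

def penalized_objective(velocity_var: float, pressure: float, p_limit: float,
                        k: float = 5.0, two_sided: bool = False) -> float:
    """Add an exponential pressure penalty to a velocity-uniformity objective.

    velocity_var : outlet velocity variation to be minimized
    pressure     : pressure drop of the candidate die design
    p_limit      : pressure capability of the extruder
    k, two_sided : assumed penalty steepness / which variant to apply
    """
    if two_sided:
        # variant 2: penalty varies exponentially with the signed distance
        # from the limit, so it acts on designs above AND below p_limit
        penalty = math.exp(k * (pressure - p_limit) / p_limit)
    else:
        # variant 1: only designs exceeding the limit are penalized
        over = max(0.0, pressure - p_limit)
        penalty = math.exp(k * over / p_limit) - 1.0
    return velocity_var + penalty
```

Under variant 1 a feasible design pays no penalty at all, which can leave the optimizer hugging the constraint; variant 2 keeps a smooth gradient through the limit at the cost of slightly biasing feasible designs.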
Abstract:
The estimation of the average travel distance in a low-level picker-to-part order picking system can, in most cases, be done by analytical methods. Often a uniform distribution of the access frequency over all bin locations in the storage system is assumed. This only applies if the bin location assignment is done randomly. If the access frequency of the articles is considered in the bin location assignment to reduce the average total travel distance of the picker, the access frequency over the bin locations of one aisle can be approximated by an exponential density function or a similar density function. All known calculation methods assume that the average number of orderlines per order is greater than the number of aisles of the storage system. In the case of small orders, this assumption is often invalid. This paper presents a new approach for calculating the average total travel distance that accounts for an average number of orderlines per order lower than the total number of aisles in the storage system, and for an access frequency over the bin locations of an aisle that can be approximated by any density function.
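As a rough illustration of the setting (not the paper's analytical method), the sketch below estimates mean travel in a single aisle by Monte Carlo when bin access frequency follows a truncated exponential density; the return-routing policy, aisle length and rate parameter are all assumptions for illustration:

```python
import math
import random

def avg_aisle_travel(n_lines: int, rate: float = 3.0,
                     n_sim: int = 20000, seed: int = 1) -> float:
    """Monte Carlo estimate of mean travel in one aisle of unit length under
    return routing: the picker walks to the farthest of n_lines picks and
    back. Pick positions follow an exponential density truncated to [0, 1],
    mimicking frequency-based assignment (fast movers near the aisle front).
    """
    rng = random.Random(seed)
    trunc = 1.0 - math.exp(-rate)
    total = 0.0
    for _ in range(n_sim):
        # inverse-CDF sampling from the truncated exponential on [0, 1]
        picks = [-math.log(1.0 - rng.random() * trunc) / rate
                 for _ in range(n_lines)]
        total += 2.0 * max(picks)  # walk to the farthest pick and return
    return total / n_sim
```

The estimate grows with the number of orderlines per aisle, which is exactly why the small-order case (few lines spread over many aisles) needs separate treatment.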
Abstract:
Standard protocols are given for assessing metabolic stability in rainbow trout using the liver S9 fraction. These protocols describe the isolation of S9 fractions from trout livers, evaluation of metabolic stability using a substrate depletion approach, and expression of the result as in vivo intrinsic clearance. Additional guidance is provided on the care and handling of test animals, the design and interpretation of preliminary studies, and the development of analytical methods. Although initially developed to predict the impact of metabolism on chemical accumulation by fish, these procedures can be used to support a broad range of scientific and risk assessment activities, including the evaluation of emerging chemical contaminants and improved interpretation of toxicity testing results. The protocols were designed for rainbow trout, a cold-water species, but can be adapted to other species (e.g., carp, a warm-water species) as long as species-specific considerations, such as fish maintenance and incubation mixture temperature, are modified accordingly.
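The substrate depletion approach can be illustrated with a minimal first-order fit: log-transform the concentration time course, regress against time, and divide the depletion rate constant by the protein concentration. This is a generic sketch of the in vitro step only; the protocols' full scaling to in vivo intrinsic clearance involves additional physiological factors not shown here:

```python
import math

def in_vitro_clint(times_h, concs, s9_protein_mg_per_ml):
    """Estimate in vitro intrinsic clearance (mL/h/mg S9 protein) from a
    substrate-depletion time course, assuming first-order loss:
    fit ln(C) vs t by ordinary least squares; CLint = k / [protein].
    """
    n = len(times_h)
    lnc = [math.log(c) for c in concs]
    t_mean = sum(times_h) / n
    y_mean = sum(lnc) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, lnc))
             / sum((t - t_mean) ** 2 for t in times_h))
    k = -slope  # first-order depletion rate constant (1/h)
    return k / s9_protein_mg_per_ml
```

A compound showing no measurable depletion over the incubation (slope near zero) would simply be reported as metabolically stable under these conditions.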
Abstract:
The identification of plausible causes for water body status deterioration will be much easier if it can build on available, reliable, extensive and comprehensive biogeochemical monitoring data (preferably aggregated in a database). A plausible identification of such causes is a prerequisite for well-informed decisions on which mitigation or remediation measures to take. In this chapter, first a rationale for an extended monitoring programme is provided; it is then compared to the one required by the Water Framework Directive (WFD). This proposal includes a list of relevant parameters that are needed for an integrated, a priori status assessment. Second, a few sophisticated statistical tools are described that subsequently allow for the estimation of the magnitude of impairment as well as the likely relative importance of different stressors in a multiply stressed environment. The advantages and restrictions of these rather complicated analytical methods are discussed. Finally, the use of Decision Support Systems (DSS) is advocated with regard to the specific WFD implementation requirements.
Abstract:
Oxygenated polycyclic aromatic hydrocarbons (oxy-PAHs) and nitrogen heterocyclic polycyclic aromatic compounds (N-PACs) are toxic, highly leachable and often abundant at sites that are also contaminated with PAHs. However, due to a lack of regulations and standardized methods for their analysis, they are seldom included in monitoring and risk-assessment programs. This intercomparison study constitutes an important step in the harmonization of the analytical methods currently used, and may also be considered a first step towards the certification of reference materials for these compounds. The results showed that the participants were able to determine oxy-PAHs with accuracy similar to that for PAHs, with average determined mass fractions agreeing well with the known levels in a spiked soil and acceptable inter- and intra-laboratory precision for all soils analyzed. For the N-PACs, the results were less satisfactory and need to be improved by using analytical methods more specifically optimized for these compounds.
Abstract:
For the early diagnosis and therapy of alcohol-related disorders, alcohol biomarkers are highly valuable. Concerning specificity, indirect markers can be influenced by non-ethanol-related factors, whereas direct markers are formed only after ethanol consumption. The sensitivity of the direct markers depends on the cutoffs of the analytical methods and the material analyzed, and plays an important role in their utilization in different fields of application. Until recently, the biomarker phosphatidylethanol has been used to differentiate between social drinking and alcohol abuse. After method optimization, the detection limit could be lowered, and phosphatidylethanol became sensitive enough to detect even the consumption of low amounts of alcohol. This perspective reviews the most common alcohol biomarkers and summarizes new developments for monitoring alcohol consumption habits.
Abstract:
In a network of competing species, a competitive intransitivity occurs when the ranking of competitive abilities does not follow a linear hierarchy (A > B > C but C > A). A variety of mathematical models suggests that intransitive networks can prevent or slow down competitive exclusion and maintain biodiversity by enhancing species coexistence. However, it has been difficult to assess empirically the relative importance of intransitive competition because a large number of pairwise species competition experiments are needed to construct a competition matrix that is used to parameterize existing models. Here we introduce a statistical framework for evaluating the contribution of intransitivity to community structure using species abundance matrices that are commonly generated from replicated sampling of species assemblages. We provide metrics and analytical methods for using abundance matrices to estimate species competition and patch transition matrices by using reverse-engineering and a colonization-competition model. These matrices provide complementary metrics to estimate the degree of intransitivity in the competition network of the sampled communities. Benchmark tests reveal that the proposed methods could successfully detect intransitive competition networks, even in the absence of direct measures of pairwise competitive strength. To illustrate the approach, we analyzed patterns of abundance and biomass of five species of necrophagous Diptera and eight species of their hymenopteran parasitoids that co-occur in beech forests in Germany. We found evidence for a strong competitive hierarchy within communities of flies and parasitoids. However, for parasitoids, there was a tendency towards increasing intransitivity in higher weight classes, which represented larger resource patches. These tests provide novel methods for empirically estimating the degree of intransitivity in competitive networks from observational datasets. 
They can be applied to experimental measures of pairwise species interactions, as well as to spatio-temporal samples of assemblages in homogeneous environments or along environmental gradients.
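The core concept can be made concrete with a minimal sketch: given a pairwise dominance matrix, count the triads whose outcomes form a cycle (A > B > C but C > A) rather than a hierarchy. This illustrates intransitive triads only, not the reverse-engineering of competition matrices from abundance data that the paper introduces:

```python
import itertools

def intransitive_triads(dominance):
    """Count intransitive triads in a pairwise competition matrix.

    dominance[i][j] is True if species i competitively excludes species j
    (a complete tournament: exactly one of d[i][j], d[j][i] is True).
    A triad (i, j, k) is intransitive when its three pairwise outcomes
    form a cycle in either rotational direction.
    """
    n = len(dominance)
    count = 0
    for i, j, k in itertools.combinations(range(n), 3):
        outcomes = (dominance[i][j], dominance[j][k], dominance[k][i])
        # i->j->k->i gives all True; the reverse cycle gives all False
        if all(outcomes) or not any(outcomes):
            count += 1
    return count
```

A strictly hierarchical community yields zero such triads, so the count (often normalized by the number of triads) serves as a simple intransitivity index.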
Abstract:
A new series of cationic dinuclear arene ruthenium complexes bridged by three thiophenolato ligands, [(η6-arene)2Ru2(μ2-SR)3]+ with arene = indane, R = met: 1 (met = 4-methylphenyl); R = mco: 4 (mco = 4-methylcoumarin-7-yl); arene = biphenyl, R = met: 2; R = mco: 5; arene = 1,2,3,4-tetrahydronaphthalene, R = met: 3; R = mco: 6, have been prepared from the reaction of the neutral precursor [(η6-arene)Ru(μ2-Cl)Cl]2 and the corresponding thiophenol RSH. All cationic complexes have been isolated as chloride salts and fully characterized by spectroscopic and analytical methods. The molecular structure of 1, solved by X-ray structure analysis of a single crystal of the chloride salt, shows the two ruthenium atoms adopting a pseudo-octahedral geometry without a metal–metal bond, in accordance with the noble gas rule. All complexes are stable in H2O at 37 °C, but only 1 remains soluble in a 100 mM aqueous NaCl solution, while significant fractions (30–60 %) of 2–6 precipitate as chloride salts under these conditions. The 4-methylphenylthiolato complexes (R = met) are highly cytotoxic towards human ovarian cancer cells, the IC50 values being in the sub-micromolar range, while the 4-methylcoumarin-7-yl thiolato complexes (R = mco) are only slightly cytotoxic. Complexes 1 and 3 show the highest in vitro anticancer activity, with IC50 values below 0.06 μM for the A2780 cell line. The results demonstrate that the arene ligand is an important parameter that should be more systematically evaluated when designing new half-sandwich organometallic complexes.
Abstract:
The reactions of 4,4′-bipyridine with selected trinuclear triangular copper(II) complexes, [Cu3(μ3-OH)(μ-pz)3(RCOO)2Lx], [pz = pyrazolate anion, R = CH3(CH2)n (2 ≤ n ≤ 5); L = H2O, MeOH, EtOH] yielded a series of 1D coordination polymers (CPs) based on the repetition of [Cu3(μ3-OH)(μ-pz)3] secondary building units joined by bipyridine. The CPs were characterized by conventional analytical methods (elemental analyses, ESI-MS, IR spectra) and single-crystal XRD determinations. The products comprise an unprecedented 1D CP in which bipyridine bridges hexanuclear copper cluster moieties, two 1D CPs presenting structural analogies, and two one-dimensional tapes with almost exactly superimposable structures. In one case, the crystal packing reveals the presence of small, unconnected pores, accounting for ca. 6% of the free cell volume.
Abstract:
The long-lived radionuclide 129I (T1/2 = 15.7 My) occurs in nature at very low concentrations. Since the mid-20th century, the environmental levels of 129I have changed dramatically as a consequence of the civil and military use of nuclear fission. Its investigation in environmental materials is of interest for environmental surveillance, retrospective dosimetry and its use as a natural and man-made tracer of environmental processes. We compare two analytical methods which are presently capable of determining 129I in environmental materials, namely radiochemical neutron activation analysis (RNAA) and accelerator mass spectrometry (AMS). Emphasis is laid on quality control and on the detection capabilities for the analysis of 129I in environmental materials. Some applications are discussed.
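Why decay counting is impractical here, and why RNAA or AMS is needed, follows directly from the half-life given in the abstract: the specific activity of 129I is tiny. A quick back-of-the-envelope calculation:

```python
import math

AVOGADRO = 6.02214076e23   # atoms/mol
SECONDS_PER_YEAR = 3.156e7
HALF_LIFE_Y = 15.7e6       # 129I half-life from the abstract (15.7 My)
MOLAR_MASS = 129.0         # g/mol, approximate

# decay constant and specific activity of pure 129I
lam = math.log(2) / (HALF_LIFE_Y * SECONDS_PER_YEAR)  # 1/s
specific_activity = lam * AVOGADRO / MOLAR_MASS       # Bq per gram of 129I
```

This works out to only a few MBq per gram of pure 129I; at the trace concentrations found in environmental samples, the resulting decay rates are far below what direct counting can detect, which is why atom-counting (AMS) or activation (RNAA) techniques are used instead.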
Abstract:
Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable. Results. The model included two equations: the use/non-use equation explained the probability of making a doctor visit in the past twelve months, and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01): existence of a chronic disease was associated with 0.63 more visits, disability status with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting doctors in the past twelve months was 85% and the average number of visits was 3.45.
The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regressions, proved a useful approach to demand estimation for primary care in urban settings.
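The two-part structure of the model can be sketched as follows. The probit link is standard; `xb_use` and `mu_count` stand in for the fitted linear predictors of the two equations and are hypothetical placeholders, not the study's estimates:

```python
import math

def probit_prob(xb: float) -> float:
    """Probit link: P(any visit) = Phi(x'beta), the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(xb / math.sqrt(2.0)))

def expected_visits(xb_use: float, mu_count: float) -> float:
    """Two-part demand model: overall expected annual visits equal the
    probability of any visit (probit equation) times the conditional mean
    number of visits among users (the negative binomial equation in the
    study; only its conditional mean mu_count appears here)."""
    return probit_prob(xb_use) * mu_count
```

Splitting the decision to seek care from the intensity of use is what lets different covariates (e.g., insurance vs. health status) act on each margin separately.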