36 results for Amount of substance
at Instituto Politécnico do Porto, Portugal
Abstract:
Scientific evidence has shown an association between exposure to organochlorine compounds (OCC) and human health hazards. OCC detection in human adipose samples should therefore be considered a public health priority. This study evaluated the efficacy of various solid-phase extraction (SPE) and cleanup methods for OCC determination in human adipose tissue. Octadecylsilyl endcapped (C18-E), benzenesulfonic acid modified silica cation exchanger (SA), poly(styrene-divinylbenzene) (EN), and EN/RP18 SPE sorbents were evaluated. The relative sample cleanup provided by these SPE columns was evaluated using gas chromatography with electron capture detection (GC-ECD). The C18-E columns with strong homogenization provided the most effective cleanup, removing the greatest amount of interfering substances while ensuring good analyte recoveries, higher than 70%. Recoveries >70% with standard deviations (SD) <15% were obtained for all compounds under the selected conditions. Method detection limits were in the 0.003–0.009 mg/kg range. Positive samples were confirmed by gas chromatography coupled with tandem mass spectrometry (GC-MS/MS). The OCC most frequently found in real samples were HCB, o,p′-DDT and methoxychlor, detected in 80% and 95% of the samples analyzed, respectively. Copyright © 2012 John Wiley & Sons, Ltd.
Abstract:
An increasing amount of research is being carried out in the area where technology and humans meet. The success or failure of a technology, and the question of whether it helps humans fulfill their goals or hinders them, is in most cases not a technical one. User Perception and Influencing Factors of Technology in Everyday Life addresses issues of human and technology interaction. The research in this work is interdisciplinary, ranging from technical subjects such as computer science, engineering, and information systems to non-technical accounts of technology and human interaction from the point of view of sociology or philosophy. This book is suited to academics, researchers, and professionals alike, as it presents a set of theories that allow us to understand the interaction of technology and humans and to put it to practical use.
Abstract:
Currently, power systems (PS) already accommodate a substantial penetration of distributed generation (DG) and operate in competitive environments. In the future, as a result of liberalisation and political regulation, PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and provide market agents with the means to ensure a flexible and secure operation. This cannot be done with the traditional PS operational tools used today, such as the rather restricted Supervisory Control and Data Acquisition (SCADA) information systems [1]. The trend towards using local generation in the active operation of the power system requires new solutions for the data management system. The relevant standards have been developed separately over the last few years, so there is a need to unify them in order to obtain a common and interoperable solution. For distribution operation, the CIM models described in IEC 61968/70 are especially relevant. In Europe, dispersed and renewable energy resources (D&RER) are mostly operated without remote control mechanisms and feed the maximum amount of available power into the grid. To improve network operation performance, the idea of virtual power plants (VPP) will become a reality. In the future, the power generation of D&RER will be scheduled with high accuracy. In order to realize decentralized VPP energy management, communication facilities with standardized interfaces and protocols are needed. IEC 61850 is suitable to serve as a general standard for all communication tasks in power systems [2]. The paper deals with international activities and experiences in the implementation of a new data management and communication concept in the distribution system. The difficulties in coordinating the communication and data management standards, which were developed in parallel and are inconsistent, are addressed first.
The upcoming unification work, taking into account the growing role of D&RER in the PS, is shown. It is possible to overcome the lag in current practical experience by using new tools for creating and maintaining CIM data and for simulating the IEC 61850 protocol, a prototype of which is presented in the paper. The origin and required accuracy of the data depend on their use (e.g. operation or planning), so some remarks concerning the definition of the digital interface incorporated in the merging unit concept, from the power utility point of view, are also presented. Finally, some required future work is identified.
Abstract:
Introduction: Paper and thin layer chromatography methods are frequently used in classic nuclear medicine for the determination of radiochemical purity (RCP) of radiopharmaceutical preparations. An aliquot of the radiopharmaceutical to be tested is spotted at the origin of a chromatographic strip (stationary phase), which in turn is placed in a chromatographic chamber in order to separate and quantify the radiochemical species present in the radiopharmaceutical preparation. There are several methods for the RCP measurement, based on the use of equipment such as dose calibrators, well scintillation counters, radiochromatographic scanners and gamma cameras. The purpose of this study was to compare these quantification methods for the determination of RCP. Material and Methods: 99mTc-Tetrofosmin and 99mTc-HDP were the radiopharmaceuticals chosen to serve as the basis for this study. For the determination of RCP of 99mTc-tetrofosmin we used ITLC-SG (2.5 x 10 cm) and 2-butanone (99mTc-tetrofosmin Rf = 0.55, 99mTcO4- Rf = 1.0, other labeled impurities 99mTc-RH Rf = 0.0). For the determination of RCP of 99mTc-HDP, Whatman 31ET and acetone were used (99mTc-HDP Rf = 0.0, 99mTcO4- Rf = 1.0, other labeled impurities Rf = 0.0). After the development of the solvent front, the strips were allowed to dry and then imaged on the gamma camera (256x256 matrix; zoom 2; LEHR parallel-hole collimator; 5-minute image) and on the radiochromatogram scanner. The strips were then cut at Rf 0.8 in the case of 99mTc-tetrofosmin and at Rf 0.5 in the case of 99mTc-HDP. The resulting pieces were crushed in an assay tube (to minimize the effect of counting geometry) and counted in the dose calibrator and in the well scintillation counter (for 1 minute). The RCP was calculated using the formula: % 99mTc-Complex = [(99mTc-Complex) / (Total amount of 99mTc-labeled species)] x 100. Statistical analysis was done using the test of hypotheses for the difference between means in independent samples.
Results: The gamma camera based method demonstrated higher operator-dependency (especially concerning the drawing of the ROIs), and the measurements obtained using the dose calibrator are very sensitive to the amount of activity spotted on the chromatographic strip, so the use of a minimum activity of 3.7 MBq is essential to minimize quantification errors. The radiochromatographic scanner and the well scintillation counter showed concordant results and demonstrated the highest level of precision. Conclusions: Methods based on radiochromatographic scanners and well scintillation counters proved to be the most accurate and least operator-dependent.
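As an illustration, the RCP formula quoted in the abstract reduces to a simple ratio of counts. A minimal sketch in Python (the function name and the assumption that the two inputs are background-corrected counts from the two strip pieces are illustrative, not taken from the study):

```python
def radiochemical_purity(complex_counts, impurity_counts):
    """%RCP = (99mTc-complex counts / total 99mTc-labeled species counts) x 100.

    Both arguments are assumed to be background-corrected counts from the two
    strip pieces (e.g. from the well scintillation counter).
    """
    total = complex_counts + impurity_counts
    if total <= 0:
        raise ValueError("no activity measured")
    return 100.0 * complex_counts / total


# Example: 9500 counts in the complex piece, 500 in the impurity piece
purity = radiochemical_purity(9500, 500)  # 95.0%
```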
Abstract:
The introduction of electricity markets and the integration of Distributed Generation (DG) have been driving structural change in power systems. Recently, the smart grid concept has been introduced to guarantee a more efficient operation of the power system, using the advantages of this new paradigm. Basically, a smart grid is a structure that integrates different players, with constant communication between them, in order to improve power system operation and management. One of the players of great importance in this context is the Virtual Power Player (VPP). In the transportation sector, the Electric Vehicle (EV) is arising as an alternative to conventional vehicles propelled by fossil fuels. The power system can benefit from a massive introduction of EVs, taking advantage of the EVs' ability to connect to the electric network to charge, and of the future expectation of the EVs' ability to discharge to the network using the Vehicle-to-Grid (V2G) capacity. This thesis proposes alternative strategies to control these two EV modes with the objective of enhancing the management of the power system. Moreover, the power system must ensure the trips of the EVs that will be connected to the electric network. The EV user specifies a certain amount of energy that will be necessary to charge, in order to cover the distance to travel. The introduction of EVs in the power system turns Energy Resource Management (ERM) in a smart grid environment into a complex problem that can take several minutes or hours to reach the optimal solution. Adequate optimization techniques are required to accommodate this kind of complexity while solving the ERM problem in a reasonable execution time. This thesis presents a tool that solves the ERM problem considering the intensive use of EVs in the smart grid context.
The objective is to obtain the minimum ERM cost considering: the operation cost of DG, the cost of the energy acquired from external suppliers, EV users' payments, and remuneration and penalty costs. This tool is directed at VPPs that manage specific network areas in which a high penetration level of EVs is expected. The ERM is solved using two methodologies: the adaptation of a deterministic technique proposed in a previous work, and the adaptation of the Simulated Annealing (SA) technique. With the purpose of improving the SA performance for this case, three heuristics are additionally proposed, taking advantage of the particularities and specificities of an ERM with these characteristics. A set of case studies is presented in this thesis, considering a 32-bus distribution network and up to 3000 EVs. The first case study solves the scheduling without considering EVs, to be used as a reference case for comparison with the proposed approaches. The second case study evaluates the complexity of the ERM with the integration of EVs. The third case study evaluates the performance of scheduling with different control modes for EVs. These control modes, combined with the proposed SA approach and with the developed heuristics, aim at improving the quality of the ERM while drastically reducing its execution time. The proposed control modes are: uncoordinated charging, smart charging, and V2G capability. The fourth and final case study presents the ERM approach applied to consecutive days.
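The thesis's SA-based scheduler is not reproduced here, but the generic simulated annealing loop it adapts can be sketched as follows. The toy cost function, neighbour move, and cooling parameters below are assumptions for illustration only; the real ERM objective aggregates DG operation costs, external supplier costs, and EV payments, remuneration and penalties:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=100.0, alpha=0.95, iters=2000, seed=42):
    """Generic SA loop: accept worse neighbours with probability exp(-delta/t)."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x, rng)
        fy = cost(y)
        # Always accept improvements; accept degradations with decreasing probability.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling schedule
    return best, fbest

# Toy stand-in for the ERM objective: a single charging set-point, optimum at 3.0
best, cost_at_best = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbour=lambda x, rng: x + rng.uniform(-1.0, 1.0),
    x0=10.0,
)
```

The thesis's three heuristics would plug into the `neighbour` function, biasing moves using knowledge of the EV charging structure instead of the blind uniform step used here.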
Abstract:
Molecularly imprinted polymers (MIP) were used as potentiometric sensors for the selective recognition and determination of chlormequat (CMQ). They were produced by radical polymerization of 4-vinyl pyridine (4-VP) or methacrylic acid (MAA) monomers in the presence of a cross-linker, with CMQ used as template. Similar non-imprinted (NI) polymers (NIP) were produced by removing the template from the reaction media. The effect of the type and amount of MIP or NIP on the potentiometric behavior of the sensors was investigated. The main analytical features were evaluated in steady and flow modes of operation. The MIP/4-VP sensor exhibited the best performance, presenting a fast, near-Nernstian response for CMQ over the concentration range 6.2×10-6 – 1.0×10-2 mol L-1, with a detection limit of 4.1×10-6 mol L-1. The sensor was independent of the pH of the test solutions in the range 5 – 10. Potentiometric selectivity coefficients of the proposed sensors were evaluated against several inorganic and organic cations; the results indicated good selectivity for CMQ. The sensor was applied to the potentiometric determination of CMQ in commercial phytopharmaceuticals and spiked water samples. Recoveries ranged from 96 to 108.5%.
Abstract:
Studies were undertaken to determine the adsorption behavior of α-cypermethrin [(R)-α-cyano-3-phenoxybenzyl (1S)-cis-3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropanecarboxylate and (S)-α-cyano-3-phenoxybenzyl (1R)-cis-3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropanecarboxylate] in solution on granules of cork and granular activated carbon (GAC). The adsorption studies were carried out using a batch equilibrium technique. A gas chromatograph with an electron capture detector (GC-ECD) was used to analyze α-cypermethrin after solid-phase extraction with C18 disks. Physical properties of cork, including real density, pore volume, surface area and pore diameter, were evaluated by mercury porosimetry. Characterization of the cork particles showed variations, indicating the highly heterogeneous structure of the material. The average surface area of the cork particles was lower than that of GAC. Adsorption kinetics studies allowed the determination of the equilibrium time: 24 hours for both cork (1–2 mm and 3–4 mm) and GAC. For the α-cypermethrin concentration range studied, GAC proved to be the better sorbent. However, the equilibrium adsorption parameters obtained through the Langmuir and Freundlich models showed that granulated cork 1–2 mm has the highest maximum adsorbed amount of α-cypermethrin (qm) (303 μg/g), followed by GAC (186 μg/g) and cork 3–4 mm (136 μg/g). The standard deviation (SD) values demonstrate that the Freundlich model better describes the α-cypermethrin adsorption on GAC, while α-cypermethrin adsorption on cork (1–2 mm and 3–4 mm) is better described by the Langmuir model. In view of the adsorption results obtained in this study, granulated cork appears to be a better and cheaper alternative to GAC for removing α-cypermethrin from water.
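The two isotherm models compared above have standard closed forms, sketched below. The qm value in the example is the 303 μg/g figure quoted for cork 1–2 mm; the Langmuir affinity constant b and the Freundlich Kf and n are illustrative assumptions, since the abstract does not report them:

```python
def langmuir(c, qm, b):
    """Langmuir isotherm q = qm*b*c / (1 + b*c): q saturates at the
    monolayer capacity qm as the equilibrium concentration c grows."""
    return qm * b * c / (1.0 + b * c)

def freundlich(c, kf, n):
    """Freundlich isotherm q = Kf * c**(1/n): empirical, no saturation plateau."""
    return kf * c ** (1.0 / n)

# Cork 1-2 mm: qm = 303 ug/g (from the abstract); b = 0.05 L/ug is assumed.
q_mid = langmuir(100.0, 303.0, 0.05)   # partway up the isotherm
q_sat = langmuir(1e9, 303.0, 0.05)     # essentially at the 303 ug/g plateau
```

Fitting these forms to batch-equilibrium data and comparing residual standard deviations is what lets the study say Freundlich fits GAC better while Langmuir fits cork better.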
Abstract:
Current models are not simple enough to allow a quick estimation of the remediation time. This work reports the development of an easy and relatively rapid procedure for forecasting the remediation time of vapour extraction. Sandy soils contaminated with cyclohexane and prepared with different water contents were studied. The remediation times estimated through a simple mathematical fitting of the experimental results were compared with those of real soils. The main objectives were: (i) to predict, through a simple mathematical fitting, the remediation time of soils with water contents different from those used in the experiments; (ii) to analyse the influence of soil water content on: (ii1) the remediation time; (ii2) the remediation efficiency; and (ii3) the distribution of contaminants among the different phases present in the soil matrix after the remediation process. For sandy soils with negligible contents of clay and natural organic matter, artificially contaminated with cyclohexane before vapour extraction, it was concluded that: (i) if the soil water content belonged to the range considered in the experiments with the prepared soils, then the remediation time of real soils with similar characteristics could be successfully predicted, with relative differences not higher than 10%, through a simple mathematical fitting of the experimental results; (ii) increasing the soil water content from 0% to 6% had the following consequences: (ii1) it increased the remediation time (from 1.8 to 4.9 h, respectively); (ii2) it decreased the remediation efficiency (from 99% to 97%, respectively); and (ii3) it decreased the amount of contaminant adsorbed onto the soil and in the non-aqueous liquid phase, thus increasing the amount of contaminant in the aqueous and gaseous phases.
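The abstract quotes remediation times of 1.8 h at 0% water content and 4.9 h at 6%. A piecewise-linear interpolation between measured points is one simple fitting of the kind described; the linear form is an assumption for illustration, as the abstract does not specify the fitted function:

```python
def interp_remediation_time(water_pct, measured):
    """Piecewise-linear interpolation of remediation time (h) vs. water content (%).

    `measured` is a list of (water_pct, time_h) pairs from the prepared-soil runs;
    prediction is only valid inside the calibrated water-content range.
    """
    pts = sorted(measured)
    for (w0, t0), (w1, t1) in zip(pts, pts[1:]):
        if w0 <= water_pct <= w1:
            return t0 + (t1 - t0) * (water_pct - w0) / (w1 - w0)
    raise ValueError("water content outside the calibrated range")

# Endpoints quoted in the abstract: 0% -> 1.8 h, 6% -> 4.9 h
t_3pct = interp_remediation_time(3.0, [(0.0, 1.8), (6.0, 4.9)])
```

Restricting predictions to the calibrated range mirrors conclusion (i): extrapolation outside the experimental water contents is not claimed to stay within the 10% error bound.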
Abstract:
A new flow-injection analytical procedure is proposed for the determination of the total amount of polyphenols in wines; the method is based on the formation of a colored complex between 4-aminoantipyrine and phenols in the presence of an oxidizing reagent. The oxidizing agents hexacyanoferrate(III), peroxodisulfate, and tetroxoiodate(VII) were tested. Batch trials were first performed to select appropriate oxidizing agents, pH, and concentration ratios of the reagents, on the basis of their effect on the stability of the colored complex. The conditions selected as a result of these trials were implemented in a flow-injection analytical system, in which the influence of injection volume, flow rate, and reaction-coil length was evaluated. Under the optimum conditions the total amount of polyphenols, expressed as gallic acid, could be determined within a concentration range of 36 to 544 mg L–1, with a sensitivity of 344 L mol–1 cm–1 and an RSD <1.1%. The reproducibility of the analytical readings corresponded to standard deviations <2%. Interference from sugars, tartaric acid, ascorbic acid, methanol, ammonium sulfate, and potassium chloride was negligible. The proposed system was applied to the determination of total polyphenols in red wines and enabled the analysis of approximately 55 samples h–1. Results were generally precise and accurate; the RSD was <3.9% and the relative errors, against the Folin–Ciocalteu method, were <5.1%.
Abstract:
The objectives of this work were: (1) to identify an isotherm model relating the contaminant contents in the gas phase with those in the solid and non-aqueous liquid phases; (2) to develop a methodology for estimating the contaminant distribution among the different phases of the soil; and (3) to evaluate the influence of soil water content on the contaminant distribution in soil. For sandy soils with negligible contents of clay and natural organic matter, contaminated with benzene, toluene, ethylbenzene, xylene, trichloroethylene (TCE), and perchloroethylene (PCE), it was concluded that: (1) Freundlich's model proved adequate to relate the contaminant contents in the gas phase with those in the solid and non-aqueous liquid phases; (2) the distribution of the contaminants among the different phases present in the soil could be estimated with differences lower than 10% in 83% of the cases; and (3) an increase in soil water content led to a decrease in the amount of contaminant in the solid and non-aqueous liquid phases, increasing the amount in the other phases.
Abstract:
Copper zinc tin sulfide (CZTS) is a promising Earth-abundant thin-film solar cell material; it has an appropriate band gap of ~1.45 eV and a high absorption coefficient. The most efficient CZTS cells tend to be slightly Zn-rich and Cu-poor. However, growing Zn-rich CZTS films can sometimes result in phase decomposition of CZTS into ZnS and Cu2SnS3, which is generally deleterious to solar cell performance. Cubic ZnS is difficult to detect by XRD because its diffraction pattern is similar to that of CZTS. We hypothesize that synchrotron-based extended X-ray absorption fine structure (EXAFS), which is sensitive to the local chemical environment, may be able to determine the quantity of the ZnS phase in CZTS films by detecting differences in the second-nearest-neighbor shell of the Zn atoms. Films of varying stoichiometries, from Zn-rich to Cu-rich (Zn-poor), were examined using the EXAFS technique. Differences in the spectra as a function of the Cu/Zn ratio were detected. Linear combination analysis suggests an increasing ZnS signal as the CZTS films become more Zn-rich. We demonstrate that the sensitive EXAFS technique could be used to quantify the amount of ZnS present and provide a guide to the crystal growth of highly phase-pure films.
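Linear combination analysis of the kind mentioned above amounts to a least-squares fit of the measured spectrum against reference spectra. A two-component sketch in pure Python, solving the normal equations directly (the three-point "spectra" are synthetic placeholders, not real EXAFS data):

```python
def linear_combination_fit(spectrum, ref_a, ref_b):
    """Least-squares weights (wa, wb) such that spectrum ~= wa*ref_a + wb*ref_b.

    Solves the 2x2 normal equations directly; all spectra are equal-length lists.
    """
    saa = sum(a * a for a in ref_a)
    sbb = sum(b * b for b in ref_b)
    sab = sum(a * b for a, b in zip(ref_a, ref_b))
    sya = sum(y * a for y, a in zip(spectrum, ref_a))
    syb = sum(y * b for y, b in zip(spectrum, ref_b))
    det = saa * sbb - sab * sab
    wa = (sya * sbb - syb * sab) / det
    wb = (syb * saa - sya * sab) / det
    return wa, wb

# Synthetic check: a "measured" spectrum built as 70% CZTS-like + 30% ZnS-like
czts_ref = [1.0, 0.0, 1.0]
zns_ref = [0.0, 1.0, 1.0]
mixed = [0.7 * a + 0.3 * b for a, b in zip(czts_ref, zns_ref)]
wa, wb = linear_combination_fit(mixed, czts_ref, zns_ref)
```

In practice the fit runs over the measured Zn K-edge EXAFS signal with phase-pure ZnS and CZTS standards as references, and the ZnS weight tracks the secondary-phase fraction.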
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, to mention just a few, are characteristics upon which that eagerness builds. But will Ethernet technologies really manage to replace traditional Fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. In the past few years, a considerable amount of work has been devoted to the timing analysis of Ethernet-based technologies. However, the majority of those works are restricted to the analysis of sub-sets of the overall computing and communication system, and thus do not address timeliness at a holistic level. To this end, we are addressing a few inter-linked research topics with the purpose of setting up a framework for the development of tools suitable for extracting temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we reason about the modelling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for the timeliness evaluation of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
Abstract:
The continuous improvement of Ethernet technologies is boosting the eagerness to extend their use to cover factory-floor distributed real-time applications. Indeed, considerable research work has been devoted to the timing analysis of Ethernet-based technologies in the past few years. However, the majority of those works are restricted to the analysis of sub-sets of the overall computing and communication system, and thus do not address timeliness in a holistic fashion. To this end, we present an approach, based on simulation, aimed at extracting temporal properties of commercial-off-the-shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is applied to a specific COTS technology, Ethernet/IP. We reason about the modeling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide useful results on timeliness. The approach is part of a wider framework related to the research project INDEPTH (INDustrial-Ethernet ProTocols under Holistic analysis).
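As a minimal illustration of the discrete-event style of simulation these abstracts discuss, a single link with FIFO queueing can be modelled in a few lines. The arrival pattern and the fixed transmission time below are assumptions for illustration; the papers' Ethernet/IP models are far richer (switching, priorities, protocol stacks):

```python
def frame_response_times(arrival_times, tx_time):
    """FIFO single-link model: a frame starts transmitting when it arrives or
    when the previous frame finishes, whichever is later.

    Returns the response time (completion minus arrival) of each frame.
    """
    link_free_at = 0.0
    responses = []
    for t in sorted(arrival_times):
        start = max(t, link_free_at)   # queueing delay if the link is busy
        link_free_at = start + tx_time
        responses.append(link_free_at - t)
    return responses

# Three frames, 0.5 ms transmission each; the second frame queues behind the first.
resp = frame_response_times([0.0, 0.1, 1.0], 0.5)  # about [0.5, 0.9, 0.5] ms
```

The statistical care the papers call for enters when such per-frame samples are aggregated: response times from a busy period are correlated, so naive confidence intervals over them are misleading.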
Abstract:
A preliminary version of this paper appeared in Proceedings of the 31st IEEE Real-Time Systems Symposium, 2010, pp. 239–248.
Abstract:
Multicore platforms have turned parallelism into a mainstream concern. Parallel programming models are being put forward to give application programmers a better way to expose opportunities for parallelism: the programmer points out potentially parallel regions within tasks, and the actual, dynamic scheduling of these regions onto processors is performed at runtime, exploiting the maximum amount of parallelism. It is in this context that this paper proposes a scheduling approach that combines the constant-bandwidth server abstraction with a priority-aware work-stealing load balancing scheme which, while ensuring isolation among tasks, enables parallel tasks to be executed on more than one processor at a given time instant.
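The work-stealing half of the proposed combination rests on the classic per-processor deque discipline, sketched below in a single-threaded toy. The paper's scheme is additionally priority-aware and coupled to constant-bandwidth servers, which this sketch deliberately omits:

```python
from collections import deque

class WorkStealingDeque:
    """Per-processor task deque: the owner pushes and pops at the bottom (LIFO,
    good cache locality), while idle processors steal from the top (FIFO,
    taking the oldest, typically largest, region of work)."""

    def __init__(self):
        self._tasks = deque()

    def push(self, task):
        # Owner: enqueue a newly spawned parallel region.
        self._tasks.append(task)

    def pop(self):
        # Owner: take the most recently spawned region, or None if empty.
        return self._tasks.pop() if self._tasks else None

    def steal(self):
        # Thief: take the oldest region from the opposite end, or None if empty.
        return self._tasks.popleft() if self._tasks else None

dq = WorkStealingDeque()
for region in ("r1", "r2", "r3"):
    dq.push(region)
owner_next = dq.pop()    # "r3": the owner works LIFO
thief_next = dq.steal()  # "r1": a thief steals the oldest region
```

A priority-aware variant, as proposed in the paper, would steal from the deque holding the highest-priority ready work rather than from a random victim.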