Abstract:
Liquid-core waveguides (LCWs), devices that confine the emitted radiation and minimize losses during transport, are an alternative for maximizing the amount of detected radiation in luminescence measurements. In this work, the performance of an LCW flow cell was critically evaluated for chemiluminescence measurements, using the oxidation of luminol by hydrogen peroxide or hypochlorite as a model. An analytical procedure for hypochlorite determination was also developed, with linear response in the range 0.2-3.8 mg/L (2.7-51 µmol/L), a detection limit estimated as 8 µg/L (0.64 µmol/L) at the 99.7% confidence level, and luminol consumption of 50 µg/determination. The coefficients of variation were 3.3% and 1.6% for 0.4 and 1.9 mg/L ClO(-), respectively, with a sampling rate of 164 determinations/h. The procedure was applied to the analysis of Dakin's solution samples, yielding results in agreement with those obtained by iodometric titration at the 95% confidence level. Copyright (c) 2008 John Wiley & Sons, Ltd.
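As a back-of-envelope illustration of how such a detection limit can be estimated from a calibration curve (the 99.7% confidence level corresponds to 3 standard deviations), the sketch below fits a line to hypothetical calibration data, not the paper's, and applies the 3σ/slope rule:

```python
# Hypothetical hypochlorite calibration data (not from the paper):
conc = [0.2, 0.8, 1.4, 2.0, 2.6, 3.2, 3.8]               # mg/L
signal = [10.5, 41.8, 73.1, 104.2, 135.9, 167.0, 198.4]  # arbitrary units

# least-squares slope and intercept of the calibration line
n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(signal) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, signal)) \
        / sum((x - mean_x) ** 2 for x in conc)
intercept = mean_y - slope * mean_x

# residual standard deviation of the fit (n - 2 degrees of freedom)
residuals = [y - (slope * x + intercept) for x, y in zip(conc, signal)]
s_res = (sum(r ** 2 for r in residuals) / (n - 2)) ** 0.5

# 3-sigma (99.7% confidence) detection limit, in mg/L
lod = 3 * s_res / slope
print(f"slope={slope:.1f}, LOD={lod * 1000:.1f} ug/L")
```

With these invented numbers the rule lands in the same order of magnitude (a few µg/L) as the figure reported above.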
Abstract:
Introduction: Internet users are increasingly searching the World Wide Web for information relating to their health. This situation makes it necessary to create specialized tools capable of supporting users in their searches. Objective: To apply and compare strategies developed to investigate the use of the Portuguese version of Medical Subject Headings (MeSH) for constructing an automated classifier of Brazilian Portuguese-language web-based content as within or outside the field of healthcare, focusing on the lay public. Methods: 3658 Brazilian web pages were used to train the classifier and 606 Brazilian web pages were used to validate it. The proposed strategies were constructed using content-based vector methods for text classification, with Naive Bayes used to classify the vector patterns whose characteristics were obtained through the proposed strategies. Results: A strategy named InDeCS was developed specifically to adapt MeSH to the problem at hand. This approach achieved the best accuracy for this pattern classification task (0.94 for sensitivity, specificity, and area under the ROC curve). Conclusions: Because of the significant results achieved by InDeCS, this tool has been successfully applied to the Brazilian healthcare search portal known as Busca Saúde. Furthermore, MeSH was shown to give important results when used to classify web-based content aimed at the lay public. This study also showed that MeSH was able to map out mutable, non-deterministic characteristics of the web. (c) 2010 Elsevier Inc. All rights reserved.
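The classification step described above (bag-of-words vectors scored with Naive Bayes) can be sketched in a few lines. The toy corpus below is invented for illustration, not InDeCS or MeSH data:

```python
# Minimal multinomial Naive Bayes with add-one (Laplace) smoothing
# over bag-of-words features; toy Portuguese corpus, illustrative only.
import math
from collections import Counter

train = [
    ("sintomas tratamento diabetes insulina", "health"),
    ("vacina gripe febre medico", "health"),
    ("futebol campeonato gol time", "other"),
    ("receita bolo chocolate forno", "other"),
]

# per-class document counts (priors) and word counts (likelihoods)
class_docs = Counter(label for _, label in train)
word_counts = {c: Counter() for c in class_docs}
for text, label in train:
    word_counts[label].update(text.split())
vocab = {w for c in word_counts for w in word_counts[c]}

def classify(text):
    scores = {}
    for c in class_docs:
        total = sum(word_counts[c].values())
        # log prior + sum of smoothed log likelihoods
        score = math.log(class_docs[c] / len(train))
        for w in text.split():
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(classify("medico tratamento febre"))  # -> health
```

The real system adds the InDeCS vocabulary-adaptation step on top of this; the scoring machinery is the standard one shown here.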
Abstract:
Glyoxalated soy flour adhesives for wood particleboard, with a much smaller added proportion of glyoxalated lignin or tannin and without any formaldehyde or formaldehyde-based resin, are shown to yield results satisfying the relevant standard specifications for interior wood boards. Adhesive resin formulations in which the total content of natural material is either 70% or 80% of the total resin solids gave good results. The resins comprising 70% natural material by weight can be used in a much lower proportion on wood chips and afford pressing times fast enough to be significant under industrial panel pressing conditions. The best formulation of all those tried was based on glyoxalated precooked soy flour (SG), to which a condensed tannin (T) in water solution and a polymeric isocyanate (pMDI) were added, with SG/T/pMDI proportions of 54/16/30 by weight. (C) 2008 Wiley Periodicals, Inc.
Abstract:
Cheese whey (CW) and deproteinised cheese whey (DCW) were investigated for their suitability as novel substrates for the production of kefir-like beverages. Lactose consumption, ethanol production, and the formation of organic acids and volatile compounds were determined during CW and DCW fermentation by kefir grains and compared with values obtained during the production of traditional milk kefir. The results showed that kefir grains were able to utilise lactose from CW and DCW and produce amounts of ethanol (7.8-8.3 g/l), lactic acid (5.0 g/l) and acetic acid (0.7 g/l) similar to those obtained during milk fermentation. In addition, the higher alcohols (2-methyl-1-butanol, 3-methyl-1-butanol, 1-hexanol, 2-methyl-1-propanol, and 1-propanol), ester (ethyl acetate) and aldehyde (acetaldehyde) in cheese whey-based kefir and milk kefir beverages were also produced in similar amounts. Cheese whey and deproteinised cheese whey may therefore serve as substrates for the production of kefir-like beverages similar to milk kefir. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
The brown rot fungus Wolfiporia cocos and the selective white rot fungus Perenniporia medulla-panis produce peptides and phenolate-derivative compounds as low molecular weight Fe(3+)-reductants. Phenolates were the major compounds with Fe(3+)-reducing activity in both fungi and displayed Fe(3+)-reducing activity at pH 2.0 and 4.5 in the absence and presence of oxalic acid. The chemical structures of these compounds were identified. Together with Fe(3+) and H(2)O(2) (a mediated Fenton reaction), they produced oxygen radicals that extensively oxidized lignocellulosic polysaccharides and lignin in vitro under conditions similar to those found in vivo. These results indicate that, in addition to the extensively studied Gloeophyllum trabeum, a model brown rot fungus, other brown rot fungi as well as selective white rot fungi possess the means to promote Fenton chemistry to degrade cellulose and hemicellulose and to modify lignin. Moreover, new information is provided, particularly regarding how lignin is attacked and either repolymerized or solubilized depending on the type of fungal attack, suggesting a new pathway for selective white rot degradation of wood. The importance of Fenton reactions mediated by phenolates, operating separately or synergistically with carbohydrate-degrading enzymes in brown rot fungi and lignin-modifying enzymes in white rot fungi, is discussed. This research improves our understanding of natural carbon-cycling processes in the environment, which may enable the exploration of novel methods for bioconversion of lignocellulose in the production of biofuels or polymers, in addition to the development of new and better ways to protect wood from degradation by microorganisms.
Abstract:
Support for interoperability and interchangeability of software components in a fieldbus automation system relies on the definition of open architectures, most of which involve proprietary technologies. Concurrently, standard, open, non-proprietary technologies such as XML, SOAP and Web Services have greatly evolved and been diffused in the computing area. This article presents a FOUNDATION fieldbus (TM) device description technology named Open-EDD, based on XML and related technologies (XSLT, DOM using the Xerces implementation, OO, XML Schema), proposing an open and non-proprietary alternative to the EDD (Electronic Device Description). This initial proposal includes defining Open-EDDML as the programming language of the technology in the FOUNDATION fieldbus (TM) protocol, implementing a compiler and a parser, and finally integrating and testing the new technology with field devices and a commercial fieldbus configurator. This study attests that the new technology is feasible and can be applied to other configurators or HMI applications used in fieldbus automation systems. (c) 2008 Elsevier B.V. All rights reserved.
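To give a flavour of the XML/DOM approach the article builds on, the sketch below parses a hypothetical device-description fragment with a DOM API. The element and attribute names are invented for illustration; the actual Open-EDDML schema is not given in the abstract:

```python
# Hypothetical device-description fragment parsed with the stdlib DOM
# (the real Open-EDDML uses its own schema and the Xerces parser).
from xml.dom import minidom

doc = minidom.parseString(
    "<device id='FT-101'>"
    "<parameter name='range' unit='kPa'>0-400</parameter>"
    "</device>"
)
param = doc.getElementsByTagName("parameter")[0]
print(param.getAttribute("name"), param.firstChild.data)  # range 0-400
```

A configurator would walk such a tree to build its parameter views, which is the interoperability point the article argues for.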
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the solution simpler. In turn, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. Combining the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems on large-scale networks. Simulation results have shown that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN has shown sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
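The core idea of a node-depth encoding is that a tree is stored as its depth-first node sequence paired with node depths, so any subtree is a contiguous slice that can be located and transferred without re-checking radiality constraints. A minimal sketch (toy node names, not the paper's data structure in full):

```python
# Node-depth encoding sketch: each entry is (node, depth) in DFS order.
tree = [("root", 0), ("a", 1), ("b", 2), ("c", 2), ("d", 1)]

def subtree(nde, i):
    """Return slice bounds [i, j) of the subtree rooted at position i:
    it ends at the first later node whose depth is not greater."""
    d = nde[i][1]
    j = i + 1
    while j < len(nde) and nde[j][1] > d:
        j += 1
    return i, j

lo, hi = subtree(tree, 1)               # subtree rooted at "a"
print([n for n, _ in tree[lo:hi]])      # ['a', 'b', 'c']
```

This contiguity is what lets the EA's reconfiguration operators move whole feeder branches in one slice operation instead of solving constraint equations.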
Abstract:
This paper presents a controller design method for fuzzy dynamic systems based on piecewise Lyapunov functions, with constraints on the closed-loop pole location. The main idea is to use switched controllers to place the poles of the system so as to obtain a satisfactory transient response. It is shown that the global fuzzy system satisfies the design requirements and that the control law can be obtained by solving a set of linear matrix inequalities, which can be efficiently solved with commercially available software. An example is given to illustrate the application of the proposed method. Copyright (C) 2009 John Wiley & Sons, Ltd.
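The LMI synthesis itself needs a semidefinite-programming solver, but the design goal, closed-loop poles confined to a region that guarantees a good transient, can be checked directly. A minimal sketch with an illustrative 2x2 system and an assumed state-feedback gain (not the paper's example):

```python
# Check closed-loop pole locations for x' = (A - B K) x, 2x2 case,
# using the quadratic formula on the characteristic polynomial.
import cmath

A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
K = [1.0, 1.0]   # illustrative state-feedback gain

# closed-loop matrix Acl = A - B*K (outer product B K)
Acl = [[A[0][0] - B[0] * K[0], A[0][1] - B[0] * K[1]],
       [A[1][0] - B[1] * K[0], A[1][1] - B[1] * K[1]]]

# eigenvalues of a 2x2 matrix from trace and determinant
tr = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
poles = [(tr + disc) / 2, (tr - disc) / 2]
print([p.real for p in poles])  # both negative -> stable
```

An LMI-based design would produce K (and a piecewise Lyapunov certificate) automatically; this snippet only verifies the resulting pole placement.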
Abstract:
This paper proposes an optimal-sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on the theorem of nonlinear perturbation. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the generators' automatic voltage regulators (AVRs) are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, in a scheme named the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
A secure communication system based on error-feedback synchronization of the electronic model of the particle-in-a-box system is proposed. This circuit allows a robust and simple electronic emulation of the mechanical behavior of a particle colliding inside a box, exhibiting rich chaotic behavior. The nonlinearity required to emulate the box walls is implemented simply when compared with other analog electronic chaotic circuits. Master/slave synchronization of two circuits exhibiting rich chaotic behavior demonstrates the potential of this system for secure communication. In this system, a binary data stream modulates the bifurcation parameter of the particle-in-a-box electronic circuit in the transmitter. In the receiver circuit, this parameter is estimated using Pecora-Carroll synchronization and error-feedback synchronization. The performance of the demodulation process is verified through the eye pattern technique applied to the recovered bit stream. During demodulation, error-feedback synchronization performed better than Pecora-Carroll synchronization. The application of the particle-in-a-box electronic circuit in a secure communication system is thus demonstrated.
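The error-feedback idea can be illustrated with any chaotic system: the slave corrects its state with a gain applied to the transmitted master/slave error before iterating. The sketch below uses a logistic map as a stand-in for the particle-in-a-box circuit (a deliberate simplification; the paper's system is an analog circuit, and the gain value here is illustrative):

```python
# Error-feedback synchronization of two chaotic logistic maps:
# the slave nudges its state toward the master each step.
def logistic(x, r=3.9):
    return r * x * (1 - x)   # chaotic for r = 3.9

master, slave = 0.3, 0.8     # mismatched initial conditions
gain = 0.7                   # illustrative feedback gain
for _ in range(200):
    err = master - slave              # transmitted error signal
    master = logistic(master)
    slave = logistic(slave + gain * err)  # error-feedback correction

print(abs(master - slave))   # error shrinks to numerical noise
```

Despite both trajectories being chaotic, the feedback contracts the error on average, which is what makes parameter (and hence bit) estimation at the receiver possible.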
Abstract:
This paper presents a new approach to the transmission loss allocation problem in a deregulated system. This approach belongs to the family of incremental methods and treats all the constraints of the network, i.e., control, state and functional constraints. The approach is based on the perturbation of optimum theorem. From a given optimal operating point obtained by the optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other methods on a well-known transmission network, the IEEE 14-bus system. Another test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses and compared with the technique currently used by the Brazilian Control Center. (c) 2007 Elsevier Ltd. All rights reserved.
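The incremental idea, perturb injections around an operating point and read off loss sensitivities, can be sketched numerically. The quadratic loss function below is a toy stand-in for the network loss model, not an actual power flow:

```python
# Incremental loss sensitivities by finite perturbation around an
# operating point (toy 2-generator loss surface, illustrative only).
def losses(p1, p2):
    return 0.01 * p1 ** 2 + 0.02 * p2 ** 2 + 0.005 * p1 * p2

p1, p2, eps = 100.0, 80.0, 1e-4      # operating point and perturbation
s1 = (losses(p1 + eps, p2) - losses(p1, p2)) / eps   # dLoss/dP1
s2 = (losses(p1, p2 + eps) - losses(p1, p2)) / eps   # dLoss/dP2

# allocate total losses in proportion to the incremental sensitivities
total = losses(p1, p2)
share1 = s1 / (s1 + s2) * total
print(round(s1, 3), round(s2, 3), round(share1, 1))
```

In the paper the sensitivities come from the optimality conditions of the constrained OPF rather than a bare finite difference, so the resulting coefficients also respect the network's control, state and functional constraints.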
Abstract:
This work proposes a completely new approach to the design of resonant structures for wavelength-filtering applications. The structure consists of a subwavelength metal-insulator-metal (MIM) waveguide with tilted coupled structures arranged transversely at the midpoint between the input and output ports. The cavity-like response of this device shows that the concept can be particularly attractive for optical filter design in telecom applications. The extra degree of freedom provided by tilting the cavity has proved not only very effective in improving the quality factor of these structures, but also an elegant way of extending the range of applications to tuning multiple wavelengths, if necessary.
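For a rough sense of the cavity-like response, a Fabry-Perot-style resonance condition, 2·n_eff·L = m·λ, gives the filtered wavelengths. The effective index and cavity length below are assumed round numbers for illustration, not the paper's geometry:

```python
# Fabry-Perot-style resonance estimate for a cavity-like filter.
n_eff = 2.5       # assumed effective index of the MIM mode
L = 0.62e-6       # assumed effective cavity length, metres
lams = {m: 2 * n_eff * L / m for m in (1, 2, 3)}
for m, lam in lams.items():
    print(f"m={m}: lambda = {lam * 1e9:.0f} nm")
```

With these assumed values the m = 2 resonance falls at 1550 nm, the telecom C-band, which is why such cavities are attractive for telecom filtering; tilting the cavity changes the effective length and hence shifts these lines.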
Abstract:
The crossflow filtration process differs from conventional filtration in that the circulation flow runs tangentially to the filtration surface. The conventional mathematical models used to represent the process have limitations in identifying and generalizing the system behaviour. In this paper, a system based on artificial neural networks is developed to overcome the problems usually found in conventional mathematical models. More specifically, the developed system uses an artificial neural network that simulates the behaviour of the crossflow filtration process in a robust way. Imprecision and uncertainty associated with measurements made on the system are automatically incorporated in the neural approach. Simulation results are presented to justify the validity of the proposed approach. (C) 2007 Elsevier B.V. All rights reserved.
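A toy version of the neural-network idea: a one-hidden-layer network trained by stochastic gradient descent to mimic a flux-decay-like curve. Here y = exp(-t) stands in for real crossflow filtration data; the architecture and learning rate are illustrative choices, not the paper's:

```python
# Tiny 1-hidden-layer tanh network fitted to y = exp(-t) by SGD.
import math, random

random.seed(0)
ts = [i / 10 for i in range(21)]          # "time" inputs in [0, 2]
ys = [math.exp(-t) for t in ts]           # stand-in process response

H = 5                                      # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(t):
    h = [math.tanh(w1[j] * t + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(t)[0] - y) ** 2 for t, y in zip(ts, ys)) / len(ts)

lr = 0.05
before = mse()
for _ in range(2000):
    for t, y in zip(ts, ys):
        out, h = forward(t)
        err = out - y
        for j in range(H):
            # backprop: grad wrt hidden pre-activation, then weights
            grad_h = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            w1[j] -= lr * grad_h * t
        b2 -= lr * err
after = mse()
print(round(before, 4), round(after, 4))
```

The point made in the abstract, that measurement noise is absorbed by the model rather than breaking an explicit equation, follows because the network is fitted to the data as a whole instead of assuming a fixed filtration law.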
Abstract:
This work proposes a method based on both preprocessing and data mining with the objective of identifying harmonic current sources in residential consumers. The methodology can also be applied to identify linear and nonlinear loads. It should be emphasized that the entire database was obtained through laboratory tests, i.e., real data were acquired from residential loads. The residential system created in the laboratory was fed by a configurable power source, and the loads and power quality analyzers were placed at its output (all measurements were stored in a microcomputer). The data were then submitted to preprocessing based on attribute selection techniques, in order to reduce the complexity of identifying the loads. A new database was generated keeping only the selected attributes, and Artificial Neural Networks were trained to identify the loads. To validate the proposed methodology, the loads were fed both under ideal conditions (without harmonics) and with harmonic voltages within pre-established limits. These limits are in accordance with IEEE Std. 519-1992 and PRODIST (energy distribution procedures employed by Brazilian utilities). The results validate the proposed methodology and furnish a method that can serve as an alternative to conventional approaches.
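The attribute-selection preprocessing step can be sketched as ranking measured attributes by how strongly they correlate with the load class and keeping only the top ones. The attribute names and measurements below are synthetic placeholders, not the laboratory data:

```python
# Rank attributes by absolute correlation with the (binary) load class.
import statistics

# synthetic samples: (harmonic_3, harmonic_5, thd, rms), class 0/1
samples = [
    ([0.1, 0.2, 0.15, 220.0], 0),
    ([0.9, 0.8, 0.85, 219.5], 1),
    ([0.2, 0.1, 0.12, 220.3], 0),
    ([0.8, 0.9, 0.80, 219.8], 1),
]
names = ["harmonic_3", "harmonic_5", "thd", "rms"]
labels = [y for _, y in samples]

def corr(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

ranked = sorted(
    names,
    key=lambda n: abs(corr([s[names.index(n)] for s, _ in samples], labels)),
    reverse=True,
)
print(ranked)   # most class-correlated attributes first
```

The reduced database (top-ranked attributes only) is what the neural networks are then trained on, which is the complexity reduction the abstract describes.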
Abstract:
Recently, semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF and pressure drop in micro-scale channels have been proposed. Most of the models were developed based on elongated bubble and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models the liquid film thickness plays an important role, which emphasizes that accurate measurement of the liquid film thickness is a key point in validating them. Several techniques have been successfully applied to measure liquid film thickness during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there seems to be no conclusive result describing a successful technique capable of measuring the dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods for measuring dynamic liquid film thickness in micro-scale channels are identified. (C) 2009 Elsevier Inc. All rights reserved.