879 results for Socially Responsible Investing
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in the usage of P2P network resources are of central importance. One effective approach for addressing this issue is the deployment of locality algorithms, which allow the system to optimize its peer selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms, based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
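As an illustration of the last two characteristics (distance metric and peer selection algorithm), the following minimal Python sketch ranks candidate peers by measured round-trip time; the function, peer names, and values are hypothetical and stand in for any concrete metric covered by the survey.

    def select_peers(candidates, rtt_ms, k=3):
        """Rank candidate peers by a distance metric (here, measured RTT)."""
        return sorted(candidates, key=lambda peer: rtt_ms[peer])[:k]

    # Hypothetical measurements: lower RTT means a "closer" peer.
    rtt = {"peerA": 12.0, "peerB": 85.0, "peerC": 40.0, "peerD": 7.5}
    print(select_peers(list(rtt), rtt, k=2))  # ['peerD', 'peerA']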
Abstract:
In this paper, a comparative analysis of the long-term electric power forecasting methodologies used in some South American countries is presented. The purpose of this study is to compare these methodologies, observe whether they share similarities, and examine the behavior of their results when applied to the Brazilian electric market. The power forecasts were performed for the four main consumption classes (residential, industrial, commercial, and rural), which are responsible for approximately 90% of the national consumption. The tool used in this analysis was the SAS software package. The outcome of this study allowed the identification of various methodological similarities, mainly related to the econometric variables used by these methods. This fact strongly conditioned the comparative results obtained.
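The abstract does not specify the functional form of these methodologies; as a sketch of the kind of econometric regression they typically share, the snippet below fits the consumption of one class against hypothetical macroeconomic drivers by ordinary least squares (all data invented):

    import numpy as np

    # Hypothetical yearly series: residential consumption (GWh) regressed
    # on a GDP index and an electricity tariff index.
    gdp    = np.array([100, 104, 109, 113, 118, 124], dtype=float)
    tariff = np.array([100,  98, 101, 105, 103, 102], dtype=float)
    cons   = np.array([ 80,  84,  90,  92,  97, 104], dtype=float)

    X = np.column_stack([np.ones_like(gdp), gdp, tariff])
    beta, *_ = np.linalg.lstsq(X, cons, rcond=None)

    # Long-term forecast for a projected scenario of the explanatory variables.
    x_future = np.array([1.0, 130.0, 104.0])
    print(x_future @ beta)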
Abstract:
A broader characterization of industrial wastewaters, especially with respect to hazardous compounds and their potential toxicity, is often necessary in order to determine the best practical treatment (or pretreatment) technology available to reduce the discharge of harmful pollutants to the environment or to publicly owned treatment works. Using a toxicity-directed approach, this paper sets the basis for a rational treatability study of polyester resin manufacturing wastewater. Relevant physical and chemical characteristics were determined. Respirometry was used for toxicity reduction evaluation after physical and chemical effluent fractionation. Of all the procedures investigated, only air stripping was significantly effective in reducing wastewater toxicity: stripping at pH 7 reduced toxicity by 18.2%, while at pH 11 a reduction of 62.5% was observed. The results indicated that the toxicants responsible for the most significant fraction of the effluent's instantaneous toxic effect on unadapted activated sludge were organic compounds poorly volatilized, or not volatilized at all, under acidic conditions. These results provide useful directions for conducting treatability studies grounded on actual effluent properties rather than on empirical assumptions or on the scarce specific data available for this kind of industrial wastewater. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
This paper reports a research project that evaluated the product development methodologies used in Brazilian small and medium-sized metal-mechanic enterprises (SMEs) in a specific region of Sao Paulo. The data were collected with a questionnaire, developed and applied through interviews conducted by the researchers in 32 companies. The main focus of this paper can be condensed into the synthesis question "Is only the company responsible for the development?", which was analyzed thoroughly. The results of this analysis were evaluated directly (through the respective percentages of answers) and statistically (through the search for an index that indicates whether two questions are related). The results point to a degree of maturity in SMEs that allows product development to be conducted in cooperation networks. (C) 2007 Elsevier Ltd. All rights reserved.
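The abstract leaves the index unnamed; one standard choice for checking whether answers to two questions are related is a chi-square test of independence on their contingency table, sketched here with hypothetical counts:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: answers to question A (yes/no); columns: answers to question B.
    table = np.array([[14, 6],
                      [ 4, 8]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.3f}")  # a small p suggests the questions are related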
Abstract:
How does knowledge management (KM) by a government agency responsible for environmental impact assessment (EIA) potentially contribute to better environmental assessment and management practice? Staff members at government agencies in charge of the EIA process are knowledge workers who perform judgement-oriented tasks highly reliant on individual expertise, but also grounded in the agency's knowledge accumulated over the years. Part of an agency's knowledge can be codified and stored in an organizational memory, but it is subject to decay or loss if not properly managed. The EIA agency operating in Western Australia was used as a case study. Its KM initiatives were reviewed, knowledge repositories were identified, and staff were surveyed to gauge the utilisation and effectiveness of such repositories in enabling them to perform EIA tasks. Key elements of KM are the preparation of substantive guidance and spatial information management. It was found that the treatment of cumulative impacts on the environment is very limited and that information derived from project follow-up is not properly captured and stored, and is thus not used to create new knowledge and to improve practice and effectiveness. Other opportunities for improving organizational learning include the use of after-action reviews. The lessons about knowledge management in EIA practice gained from the Western Australian experience should be of value to agencies worldwide seeking to understand where best to direct their resources for their own knowledge repositories and environmental management practice. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Reconciliation can be divided into stages, each stage representing the performance of a mining operation: long-term estimation, short-term estimation, planning, mining, and mineral processing. The gold industry includes another stage, the budget, in which the company informs the financial market of its annual production forecast. The division of reconciliation into stages increases the reliability of the annual budget reported by mining companies, while also allowing the critical steps responsible for the overall estimation error to be detected and corrected through the optimization of sampling protocols and equipment. This paper develops and validates a new reconciliation model for the gold industry, based on correct sampling practices and the subdivision of reconciliation into stages, aiming for better grade estimates and more efficient control of the mining industry's processes, from resource estimation to final production.
Abstract:
The common practice of reconciliation is based on the definition of the mine call factor (MCF) and its application to resource or grade control estimates. The MCF expresses the difference, as a ratio or percentage, between the predicted grade and the grade reported by the plant; applying it allows future estimates to be corrected. This practice is named reactive reconciliation. However, the use of generic factors applied across differing time scales and material types often disguises the causes of the error responsible for the discrepancy. The root causes of any given variance can only be identified by analyzing the information behind that variance and then changing methodologies and processes. This practice is named prognostication, or proactive reconciliation: an iterative process resulting in constant recalibration of the inputs and the calculations. Prognostication allows personnel to adjust processes so that results align within acceptable tolerance ranges, rather than merely correcting model estimates. This study analyses the reconciliation practices performed at a gold mine in Brazil and suggests a new sampling protocol based on prognostication concepts.
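As a numeric illustration of reactive reconciliation (the direction of the ratio varies between operations; here the MCF is taken as plant grade over predicted grade, with invented values):

    # Hypothetical grades, g/t of gold.
    predicted_grade = 2.10   # from the grade-control model
    plant_grade     = 1.89   # reconciled from the processing plant

    mcf = plant_grade / predicted_grade          # 0.90: the model overcalls by ~10%
    corrected_estimate = predicted_grade * mcf   # reactive correction of a future estimate
    print(mcf, corrected_estimate)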
Abstract:
Nanomaterials have triggered excitement in both fundamental science and technological applications in several fields. However, the same high interface area that is responsible for their unique properties causes unconventional instability, often leading to local collapse during application. Thermodynamically, this can be attributed to an increased contribution of the interface to the free energy, activating phenomena such as sintering and grain growth. The lack of reliable interface energy data has restricted the development of conceptual models that would allow the control of nanoparticle stability on a thermodynamic basis. Here we introduce a novel and accessible methodology to measure the interface energy of nanoparticles, exploiting the heat released during sintering to establish a quantitative relation between the solid-solid and solid-vapor interface energies. We applied this method to MgO and ZnO nanoparticles and determined that the ratio between the solid-solid and solid-vapor interface energies is 1.1 for MgO and 0.7 for ZnO. We then discuss how this ratio is responsible for a thermodynamically metastable state that may prevent the collapse of nanoparticles and, therefore, may be used as a tool to design long-term stable nanoparticles.
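The abstract does not detail the calculation; one plausible reading of such a calorimetric method is an energy balance in which the heat released during sintering equals the energy of the solid-vapor area consumed minus that of the grain-boundary area created. The sketch below illustrates only this assumed balance; every number is invented and none comes from the paper.

    # Hypothetical measurements, for illustration only.
    Q     = 30.0    # J/g, heat released during sintering (calorimetry)
    dA_sv = 120.0   # m^2/g, solid-vapor area consumed (BET before/after)
    dA_ss = 50.0    # m^2/g, grain-boundary area created (from grain size)

    # Assumed balance: Q = gamma_sv*dA_sv - gamma_ss*dA_ss.
    gamma_sv = 1.1  # J/m^2, assumed solid-vapor interface energy
    gamma_ss = (gamma_sv * dA_sv - Q) / dA_ss
    print(gamma_ss / gamma_sv)  # the ratio discussed in the abstract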
Abstract:
The electrochemical behaviour of a near-beta Ti-13Nb-13Zr alloy for application as implants was investigated in various solutions. The electrolytes used were 0.9 wt% NaCl solution, Hanks' solution, and a culture medium known as minimum essential medium (MEM), composed of salts, vitamins, and amino acids, all at 37 degrees C. The electrochemical behaviour was investigated by the following techniques: open circuit potential measurements as a function of time, electrochemical impedance spectroscopy (EIS), and the determination of polarisation curves. The results showed that the Ti alloy was passive in all electrolytes. The EIS results were analysed using an equivalent electrical circuit representing a duplex oxide layer, composed of an inner barrier layer, mainly responsible for the alloy's corrosion resistance, and an outer porous layer that has been associated with osteointegration ability. The properties of both layers depended on the electrolyte used. The results suggested that the thickest porous layer is formed in the MEM solution, whereas the impedance of the barrier layer formed in this solution was the lowest among the electrolytes used. The polarisation curves showed a current increase at potentials around 1300 mV versus the saturated calomel electrode (SCE), and this increase was also dependent on the electrolyte. The highest increase in current density was likewise associated with the MEM solution, suggesting that it is the most aggressive of the three tested electrolytes towards the Ti alloy.
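As a sketch of the duplex-layer equivalent circuit the abstract describes (solution resistance in series with one parallel R-C pair for the porous outer layer and one for the inner barrier layer), with illustrative component values rather than fitted ones:

    import numpy as np

    def z_parallel_rc(r, c, w):
        """Impedance of a resistor and capacitor in parallel."""
        return r / (1.0 + 1j * w * r * c)

    w = 2 * np.pi * np.logspace(-2, 5, 8)   # angular frequencies, 10 mHz to 100 kHz
    Rs = 30.0                                # solution resistance, ohm
    Rp, Cp = 5e3, 2e-5                       # outer porous layer
    Rb, Cb = 1e6, 1e-5                       # inner barrier layer
    Z = Rs + z_parallel_rc(Rp, Cp, w) + z_parallel_rc(Rb, Cb, w)
    print(abs(Z[0]), abs(Z[-1]))             # barrier layer dominates |Z| at low frequency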
Abstract:
Penicillium chrysogenum is widely used as an industrial antibiotic producer, in particular in the synthesis of β-lactam antibiotics such as penicillins and cephalosporins. In industrial processes, oxalic acid formation leads to reduced product yields. Moreover, precipitation of calcium oxalate complicates product recovery. We observed oxalate production in glucose-limited chemostat cultures of P. chrysogenum grown with or without the addition of adipic acid, the side-chain of the cephalosporin precursor adipoyl-6-aminopenicillanic acid (ad-6-APA). Oxalate accounted for up to 5% of the consumed carbon source. In filamentous fungi, oxaloacetate hydrolase (OAH; EC 3.7.1.1) is generally responsible for oxalate production. The P. chrysogenum genome harbours four orthologs of the A. niger oahA gene. Chemostat-based transcriptome analyses revealed a significant correlation between extracellular oxalate titers and the expression levels of the genes Pc18g05100 and Pc22g24830. To assess their possible involvement in oxalate production, both genes were cloned in Saccharomyces cerevisiae, a yeast that does not produce oxalate. Only the expression of Pc22g24830 led to the production of oxalic acid in S. cerevisiae. Subsequent deletion of Pc22g24830 in P. chrysogenum led to the complete elimination of oxalate production, whilst improving yields of the cephalosporin precursor ad-6-APA. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
The photodegradation of the herbicide clomazone in the presence of S₂O₈²⁻ or of humic substances of different origins was investigated. A value of (9.4 ± 0.4) × 10⁸ M⁻¹ s⁻¹ was measured for the bimolecular rate constant of the reaction of sulfate radicals with clomazone in flash-photolysis experiments. Steady-state photolysis of peroxydisulfate, which generates sulfate radicals, in the presence of clomazone was shown to be an efficient photodegradation method for the herbicide. This is a relevant result for in situ chemical oxidation procedures involving peroxydisulfate as the oxidant. The main reaction products are 2-chlorobenzyl alcohol and 2-chlorobenzaldehyde. The degradation kinetics of clomazone was also studied under steady-state conditions induced by photolysis of Aldrich humic acid or of a vermicompost extract (VCE). The results indicate that singlet oxygen is the main species responsible for clomazone degradation. The quantum yield of O₂(a¹Δg) generation (λ = 400 nm) for the VCE in D₂O, Φ(Δ) = (1.3 ± 0.1) × 10⁻³, was determined by measuring the O₂(a¹Δg) phosphorescence at 1270 nm. The overall quenching constant of O₂(a¹Δg) by clomazone was found to be (5.7 ± 0.3) × 10⁷ M⁻¹ s⁻¹ in D₂O. The bimolecular rate constant for the reaction of clomazone with singlet oxygen was k_r = (5.4 ± 0.1) × 10⁷ M⁻¹ s⁻¹, which means that the quenching process is mainly reactive.
Abstract:
Controlling the surface properties of nanoparticles using ionic dopants prone to surface segregation has emerged as an interesting tool for obtaining highly selective and sensitive sensors. In this work, the surface segregation of Cd cations on SnO2 nanopowders prepared by the Pechini method was studied by infrared spectroscopy, X-ray diffraction, and specific surface area analysis. We observed that the surface chemistry modifications caused by the segregation of Cd, together with the large specific surface area, were responsible for the rapid and regular electrical response of 5 mol% Cd-doped SnO2 films to 100 ppm propane and NO diluted in dry air at a relatively low temperature (100 degrees C). (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
The increasing adoption of information systems in healthcare has led to a scenario where patient information security is increasingly regarded as a critical issue. Allowing patient information to be put in jeopardy may lead to irreparable physical, moral, and social damage to the patient, potentially shaking the credibility of the healthcare institution. Medical images play a crucial role in this context, given their importance in diagnosis, treatment, and research. It is therefore vital to take measures to prevent tampering and to determine image provenance, which demands security mechanisms that assure information integrity and authenticity. A number of works in this field follow two major approaches: the use of metadata and the use of watermarking. However, both approaches still have limitations that must be properly addressed. This paper presents a new method that uses cryptographic means to improve the trustworthiness of medical images, providing a stronger link between the image and the information on its integrity and authenticity without compromising image quality for the end user. The use of Digital Imaging and Communications in Medicine (DICOM) structures is a further advantage for ease of development and deployment.
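The abstract does not disclose the exact scheme; a minimal sketch of one cryptographic way to bind integrity and authenticity information to an image is a keyed hash (HMAC) over the pixel bytes, stored alongside the image (for DICOM, e.g. in a dedicated field). The key name and storage choice below are assumptions, not the paper's protocol.

    import hashlib, hmac

    SECRET_KEY = b"institution-managed-key"   # hypothetical, institution-managed key

    def seal(pixel_bytes: bytes) -> str:
        """Compute an authenticity/integrity tag over the raw pixel data."""
        return hmac.new(SECRET_KEY, pixel_bytes, hashlib.sha256).hexdigest()

    def verify(pixel_bytes: bytes, tag: str) -> bool:
        return hmac.compare_digest(seal(pixel_bytes), tag)

    tag = seal(b"...raw pixel data...")
    print(verify(b"...raw pixel data...", tag))   # True: image is intact
    print(verify(b"...tampered pixels...", tag))  # False: tampering detected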
Abstract:
Transmission and switching in digital telecommunication networks require the distribution of precise time signals among the nodes. Commercial systems usually adopt a master-slave (MS) clock distribution strategy, building slave nodes with phase-locked loop (PLL) circuits. PLLs are responsible for synchronizing their local oscillations with signals from master nodes, providing reliable clocks in all nodes. The dynamics of a PLL is described by an ordinary nonlinear differential equation whose order is one plus the order of its internal linear low-pass filter. Second-order loops are commonly used because their synchronous state is asymptotically stable and the lock-in range and design parameters are expressed by a linear equivalent system [Gardner FM. Phaselock techniques. New York: John Wiley & Sons; 1979]. In spite of being simple and robust, second-order PLLs frequently present double-frequency terms in the phase detector (PD) output, and it is very difficult to design a first-order filter that cuts off these components [Piqueira JRC, Monteiro LHA. Considering second-harmonic terms in the operation of the phase detector for second-order phase-locked loop. IEEE Trans Circuits Syst 2003;50(6):805-9; Piqueira JRC, Monteiro LHA. All-pole phase-locked loops: calculating lock-in range by using Evan's root-locus. Int J Control 2006;79(7):822-9]. Consequently, higher-order filters are used, resulting in nonlinear loops of order greater than 2. Due to their high order and nonlinear terms, such systems can, for some parameter combinations, present undesirable behaviors resulting from bifurcations, such as error oscillation and chaos, which decrease synchronization ranges. In this work, we consider a second-order Sallen-Key loop filter [van Valkenburg ME. Analog filter design. New York: Holt, Rinehart & Winston; 1982], implying a third-order PLL. The resulting lock-in range of the third-order PLL is determined by two bifurcation conditions: a saddle-node and a Hopf bifurcation. (C) 2008 Elsevier B.V. All rights reserved.
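For orientation, a minimal sketch of the classical baseband model behind the "order one plus filter order" rule: with a first-order lag loop filter, the phase error φ of the resulting second-order loop obeys τ·φ'' + φ' + K·sin φ = Δω after an input frequency step Δω. This is not the third-order Sallen-Key loop analyzed in the paper, and all parameters are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    K, tau, dw = 4.0, 0.5, 1.5   # loop gain, filter time constant, frequency step

    def rhs(t, y):
        phi, dphi = y
        # tau*phi'' + phi' + K*sin(phi) = dw
        return [dphi, (dw - dphi - K * np.sin(phi)) / tau]

    sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0], max_step=0.01)
    print(sol.y[0, -1], np.arcsin(dw / K))  # phase error settles near arcsin(dw/K): locked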
Abstract:
Due to the broadband characteristic of chaotic signals, many of the methods proposed for synchronizing chaotic systems do not perform satisfactorily when applied to bandlimited communication channels. Here, the effects of the bandwidth limitations imposed by the channel on the synchronous solution of a discrete-time chaotic master-slave network are investigated. The discrete-time system considered in this study is the Hénon map. It is analytically shown that synchronism can be achieved in such a network by introducing a digital filter in the feedback loop responsible for generating the chaotic signal that is sent to the slave node. Numerical simulations relating the filter parameters, such as its order and cut-off frequency, to the maximum Lyapunov exponent of the master node, which determines whether the transmitted signal is chaotic, are also presented. These results can be useful for practical communication schemes based on chaos.
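A minimal sketch of one plausible master-slave arrangement consistent with the abstract: the master's feedback signal passes through a digital (FIR) filter, and the same filtered signal drives an identical slave. The filter coefficients, initial states, and coupling are assumptions, not the paper's exact network; whether the transmitted signal remains chaotic depends on the filter parameters, via the master's maximum Lyapunov exponent.

    a, b = 1.4, 0.3              # standard Henon parameters
    h0, h1 = 0.7, 0.3            # illustrative 2-tap FIR filter coefficients

    def henon_step(s, y):
        """One Henon-map update driven by the (filtered) signal s."""
        return 1.0 - a * s * s + y, b * s

    x_hist = [0.1, 0.1]          # master's recent outputs (filter memory)
    ym, ys = 0.0, 0.25           # master and slave states, deliberately different
    for _ in range(200):
        s = h0 * x_hist[-1] + h1 * x_hist[-2]   # filtered signal: fed back and transmitted
        xm, ym = henon_step(s, ym)              # master node
        xs, ys = henon_step(s, ys)              # slave node, driven by the same s
        x_hist.append(xm)
    print(abs(xm - xs))          # -> 0.0: the slave synchronizes with the master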