998 results for "Scenario method"
Abstract:
Bread is consumed worldwide, thus contributing to the regular ingestion of certain inorganic species such as chloride. Chloride, in association with sodium intake, influences blood pressure and may increase the incidence of stomach ulcers. Its routine control should therefore be established by means of quick, low-cost procedures. This work reports a double-channel flow injection analysis (FIA) system with a new chloride sensor for the analysis of bread. All solutions are prepared in water, and the necessary ionic strength adjustments are made on-line. The body of the indicating electrode is made from a silver needle of 0.8 mm i.d. with an external layer of silver chloride. These devices were constructed with different lengths; electrodes of 1.0 to 3.0 cm presented the best analytical performance. The calibration curves under optimum conditions displayed Nernstian behaviour, with average slopes of 56 mV decade⁻¹ and sampling rates of 60 samples h⁻¹. The method was applied to several kinds of bread, namely pão de trigo, pão integral, pão de centeio, pão de mistura, broa de milho, pão sem sal, pão meio sal, pão-de-leite, and pão de água. The accuracy and precision of the potentiometric method were ascertained by comparison with a continuous segmented-flow spectrophotometric method. Both methods were validated against ion-chromatography procedures.
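As a minimal illustration of the Nernstian calibration mentioned above, the following Python sketch converts a measured electrode potential into a chloride concentration; the intercept E0 and the sample potential are hypothetical values, and only the 56 mV decade⁻¹ slope comes from the abstract:

```python
# Nernstian response of an anion-selective electrode: E = E0 - S*log10(c),
# so the potential decreases as the chloride concentration rises.
E0 = 100.0   # intercept in mV -- hypothetical, fitted from standards
S = 56.0     # average slope in mV per decade, as reported

def chloride_concentration(E_mV: float) -> float:
    """Return the chloride concentration (mol/L) for a measured potential."""
    return 10 ** ((E0 - E_mV) / S)

print(f"{chloride_concentration(268.0):.2e} mol/L")  # -> 1.00e-03 mol/L
```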
Abstract:
Paracetamol is among the most widely consumed pharmaceuticals worldwide. Although its occurrence in the environment is well documented, data on the presence of its metabolites and transformation products are very scarce. The present work describes the development of an analytical method for the simultaneous determination of paracetamol, its principal metabolite (paracetamol-glucuronide), and its main transformation product (p-aminophenol), based on solid phase extraction (SPE) and high performance liquid chromatography coupled to diode array detection (HPLC-DAD). Different SPE sorbents were compared, and the use of two Oasis WAX cartridges in tandem proved to be the most adequate approach for sample clean-up and pre-concentration. Under optimized conditions, limits of detection in the range 40–67 ng/L were obtained, as well as mean recoveries between 60 and 110% with relative standard deviations (RSD) below 6%. Finally, the developed SPE-HPLC/DAD method was successfully applied to the analysis of the selected compounds in samples from seven rivers located in the north of Portugal, proving suitable for routine analysis. All the compounds were detected; notably, this was the first time that paracetamol-glucuronide was found in river water, at concentrations up to 3.57 μg/L.
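The recovery and RSD figures used to validate the SPE step boil down to simple statistics over replicate spiked samples. A minimal Python sketch with hypothetical replicate recoveries:

```python
import statistics

# Hypothetical replicate recoveries (%) for a spiked river-water sample;
# the abstract reports mean recoveries of 60-110% with RSD below 6%.
recoveries = [92.1, 95.4, 90.8, 93.7, 94.2]

mean_rec = statistics.mean(recoveries)
rsd = 100 * statistics.stdev(recoveries) / mean_rec  # relative std. deviation

print(f"mean recovery = {mean_rec:.1f}%, RSD = {rsd:.1f}%")
```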
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Electricity markets are complex environments, involving a large number of different entities with specific characteristics and objectives, making their decisions and interacting in a dynamic scene. Game theory has been widely used to support decisions in competitive environments; its application to electricity markets can therefore prove to be a high-potential tool. This paper proposes a new scenario analysis algorithm, which includes the application of game theory, to evaluate and preview different scenarios and to provide players with the ability to react strategically, exhibiting the behavior that best fits their objectives. The model includes forecasts of competitor players' actions, used to build models of their behavior and thereby define the most probable scenarios. Once the scenarios are defined, game theory is applied to support the choice of the action to be performed. Our use of game theory is intended to support one specific agent, not to achieve equilibrium in the market. MASCEM (Multi-Agent System for Competitive Electricity Markets) is a multi-agent electricity market simulator that models market players and simulates their operation in the market. The scenario analysis algorithm has been tested within MASCEM, and our experimental findings from a case study based on real data from the Iberian Electricity Market are presented and discussed.
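The core of the scenario-analysis idea described above can be sketched as choosing the action with the best expected payoff over the forecast scenarios. The strategies, scenarios, probabilities, and payoffs below are all hypothetical; the actual MASCEM algorithm is considerably richer:

```python
# Pick the own action with the highest expected payoff, given forecast
# scenarios of competitors' behaviour and their estimated probabilities.

scenarios = {          # scenario -> estimated probability
    "high_demand": 0.5,
    "avg_demand": 0.3,
    "low_demand": 0.2,
}

payoff = {             # (own bid strategy, scenario) -> expected profit
    ("bid_high", "high_demand"): 120, ("bid_high", "avg_demand"): 40,
    ("bid_high", "low_demand"): -10,
    ("bid_low", "high_demand"): 80,  ("bid_low", "avg_demand"): 60,
    ("bid_low", "low_demand"): 30,
}

actions = {a for a, _ in payoff}
best = max(actions, key=lambda a: sum(p * payoff[(a, s)]
                                      for s, p in scenarios.items()))
print("chosen strategy:", best)   # -> bid_high (expected payoff 70 vs 64)
```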
Abstract:
Component joining is typically performed by welding, fastening, or adhesive bonding. For bonded aerospace applications, adhesives must withstand high temperatures (200°C or above, depending on the application), which implies their mechanical characterization under identical conditions. The extended finite element method (XFEM) is an enhancement of the finite element method (FEM) that can be used for the strength prediction of bonded structures. This work proposes and validates damage laws for a thin layer of an epoxy adhesive at room temperature (RT), 100, 150, and 200°C using the XFEM. The fracture toughness (GIc) and maximum load in pure tensile loading were defined by testing double-cantilever beam (DCB) and bulk tensile specimens, respectively, which permitted building the damage laws for each temperature. The bulk test results revealed that the maximum load decreased gradually with the temperature. On the other hand, the value of GIc of the adhesive, extracted from the DCB data, was shown to be relatively insensitive to temperature up to the glass transition temperature (Tg), while above Tg (at 200°C) a great reduction took place. The output of the DCB numerical simulations for the various temperatures showed good agreement with the experimental results, which validated the obtained data for strength prediction of bonded joints in tension. Based on these results, the XFEM proved to be a viable alternative for the accurate strength prediction of bonded structures.
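A common way to build such a damage law from GIc and the measured strength is a triangular traction-separation relation. The sketch below uses hypothetical values, not the paper's measured ones; the initial stiffness k0 is likewise an assumption:

```python
import numpy as np

# Triangular (bilinear) traction-separation damage law: linear-elastic up
# to the peak traction, then linear softening; the area under the curve
# equals the fracture toughness GIc.
t_n0 = 25.0          # peak traction (MPa), hypothetical bulk-test value
G_Ic = 0.5           # fracture toughness (N/mm), hypothetical DCB value
k0 = 1e4             # initial stiffness of the adhesive layer (MPa/mm)

d0 = t_n0 / k0            # opening at damage onset
df = 2.0 * G_Ic / t_n0    # final opening: 0.5 * t_n0 * df == G_Ic

def traction(d: float) -> float:
    """Traction (MPa) for a given opening displacement d (mm)."""
    if d <= d0:
        return k0 * d                       # elastic branch
    if d < df:
        return t_n0 * (df - d) / (df - d0)  # softening branch
    return 0.0                              # complete failure

for d in np.linspace(0.0, df, 5):
    print(f"d = {d:.4f} mm -> t = {traction(d):.2f} MPa")
```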
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
This paper focuses on the analysis of a demand response model in a smart grid context, considering a contingency scenario. A fuzzy clustering technique is applied to the developed demand response model, and an analysis is performed for the contingency scenario. The model's considerations and architecture are described. The developed demand response model aims to support consumers' decisions regarding their consumption needs and possible economic benefits.
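The abstract names fuzzy clustering without fixing an algorithm; a minimal fuzzy c-means sketch over hypothetical consumer features (e.g. average load and reducible load) could look as follows:

```python
import numpy as np

# Fuzzy c-means: alternate between computing weighted cluster centres and
# updating the membership matrix U (memberships sum to 1 per consumer).
rng = np.random.default_rng(0)
X = rng.random((100, 2))        # 100 consumers, 2 hypothetical features
c, m, n_iter = 3, 2.0, 50       # clusters, fuzzifier, iterations

U = rng.random((c, X.shape[0]))
U /= U.sum(axis=0)

for _ in range(n_iter):
    Um = U ** m
    centers = Um @ X / Um.sum(axis=1, keepdims=True)
    dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
    U = dist ** (-2.0 / (m - 1))    # standard membership update
    U /= U.sum(axis=0)

print("cluster centres:\n", centers)
```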
Abstract:
Global warming and the associated climate changes have been the subject of intensive research due to their major impact on the social, economic, and health aspects of human life. Surface temperature time series characterise Earth as a slow-dynamics spatiotemporal system, evidencing long-memory behaviour typical of fractional-order systems. Such phenomena are difficult to model and analyse, demanding alternative approaches. This paper studies the complex correlations between global temperature time series using the multidimensional scaling (MDS) approach. MDS provides a graphical representation of the pattern of climatic similarities between regions around the globe. The similarities are quantified through two mathematical indices that correlate the monthly average temperatures observed in meteorological stations over a given period of time. Furthermore, time dynamics is analysed by performing the MDS analysis over slices sampling the time series. MDS generates maps describing the stations' loci such that stations perceived to be similar to each other are placed on the map forming clusters. We show that MDS provides an intuitive and useful visual representation of the complex relationships present among temperature time series, which are not perceived on traditional geographic maps. Moreover, MDS avoids sensitivity to the irregular distribution density of the meteorological stations.
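A minimal sketch of the MDS pipeline described above, using synthetic series as stand-ins for station records and 1 − correlation as one simple dissimilarity index (the paper uses two indices over real monthly averages):

```python
import numpy as np
from sklearn.manifold import MDS

# Correlate station temperature series, convert correlation into a
# dissimilarity, and embed the stations in a 2-D map.
rng = np.random.default_rng(1)
n_stations, n_months = 20, 240
series = np.cumsum(rng.standard_normal((n_stations, n_months)), axis=1)

corr = np.corrcoef(series)          # station-by-station correlation
dissim = 1.0 - corr                 # symmetric, zero diagonal

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
print(coords[:5])                   # 2-D map positions of first stations
```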
Abstract:
Project work for obtaining the degree of Master in Civil Engineering
Abstract:
OBJECTIVE: To propose a method of redistributing ill-defined causes of death (IDCD) based on the investigation of such causes.
METHODS: In 2010, an evaluation was performed of the results of investigating the causes of death classified as IDCD, in accordance with chapter 18 of the International Classification of Diseases (ICD-10), by the Mortality Information System. The redistribution coefficients were calculated according to the proportional distribution of ill-defined causes reclassified after investigation into any chapter of the ICD-10 except chapter 18, and were used to redistribute the remaining, non-investigated ill-defined causes by sex and age. The IDCD redistribution coefficient was compared with two usual redistribution methods: a) the total redistribution coefficient, based on the proportional distribution of all the defined causes originally notified, and b) the non-external redistribution coefficient, similar to the previous one but excluding external causes.
RESULTS: Of the 97,314 deaths from ill-defined causes reported in 2010, 30.3% were investigated, and 65.5% of those were reclassified as defined causes after the investigation. Endocrine diseases, mental disorders, and maternal causes had a higher representation among the reclassified ill-defined causes, in contrast to infectious diseases, neoplasms, and genitourinary diseases, which had higher proportions among the defined causes reported. External causes represented 9.3% of the reclassified ill-defined causes. Correcting mortality rates by the total redistribution coefficient and the non-external redistribution coefficient increased the magnitude of the rates by a relatively similar factor for most causes, whereas the IDCD redistribution coefficient corrected the different causes of death with differentiated weights.
CONCLUSIONS: The proportional distribution of causes among the ill-defined causes reclassified after investigation was not similar to the original distribution of defined causes. Therefore, redistributing the remaining ill-defined causes based on the investigation allows for more appropriate estimates of the mortality risk due to specific causes.
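The redistribution-coefficient arithmetic reduces to proportional shares. A minimal Python sketch with purely illustrative counts, not the study's data:

```python
# Redistribute remaining ill-defined causes of death (IDCD) using
# coefficients derived from investigated-and-reclassified deaths.

reclassified = {    # ICD-10 chapter -> deaths reclassified after investigation
    "circulatory": 600, "neoplasms": 250, "external": 150,
}
total_reclassified = sum(reclassified.values())

# Redistribution coefficient: proportional share of each chapter among
# the ill-defined causes reclassified after investigation.
coef = {ch: n / total_reclassified for ch, n in reclassified.items()}

remaining_idcd = 2000    # ill-defined deaths not investigated
redistributed = {ch: c * remaining_idcd for ch, c in coef.items()}
print(redistributed)
```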
Abstract:
A simple procedure to measure the cohesive laws of bonded joints under mode I loading using the double cantilever beam test is proposed. The method only requires recording the applied load–displacement data and measuring the crack opening displacement at the crack tip during the experimental test. The strain energy release rate is obtained by a procedure involving the Timoshenko beam theory, the specimen's compliance, and the crack equivalent concept. Following the proposed approach, the influence of the fracture process zone is taken into account, which is fundamental for an accurate estimation of the failure process details. The cohesive law is obtained by differentiation of the strain energy release rate as a function of the crack opening displacement. The model was validated numerically considering three representative cohesive laws, using finite element analysis including cohesive zone modeling; the good agreement between the input and resulting laws for all the cases considered validates the model. An experimental confirmation was also performed by comparing numerical and experimental load–displacement curves, with the numerical curves obtained by adjusting typical cohesive laws to those measured experimentally following the proposed approach. Once again, good agreement was obtained, demonstrating the good performance of the proposed methodology.
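The key step, differentiating the strain energy release rate with respect to the crack opening displacement to recover the cohesive law, can be sketched numerically. The G(w) curve below is synthetic, standing in for data obtained via the compliance/equivalent-crack procedure:

```python
import numpy as np

# Cohesive law as the derivative of the strain energy release rate G with
# respect to the crack tip opening displacement w: sigma(w) = dG/dw.
w = np.linspace(0.0, 0.05, 200)            # crack opening (mm)
G_Ic, wf = 0.5, 0.05                       # toughness (N/mm), final opening
G = G_Ic * (1.0 - (1.0 - w / wf) ** 2)     # synthetic measured G(w), N/mm

sigma = np.gradient(G, w)                  # cohesive traction (MPa)
print(f"peak traction ~ {sigma.max():.1f} MPa at w = {w[sigma.argmax()]:.4f} mm")
```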
Abstract:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems derived from the original one, has shown to be effective, particularly when used with direct search methods. An alternative for solving such problems is the filters method. The filters method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from that of barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, while the filters method treats optimization problems as bi-objective problems that minimize the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filters method with derivative-free algorithms, the authors have developed works in which other direct search methods were used, combining their potential with the filters method. More recently, a new variant of these methods was presented, in which some alternative aggregations of the constraints for the construction of filters were proposed. This paper presents a variant of the filters method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked rather than treated completely independently.
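The basic filter mechanism, accepting a trial point only if no stored (f, h) pair dominates it, can be sketched as follows; this shows the generic rule, not the paper's safeguarded variant:

```python
# A filter stores (f, h) pairs: objective value f and aggregated
# constraint violation h. A trial point is accepted only if no stored
# entry dominates it; entries it dominates are pruned.

def dominates(a, b):
    """Entry a = (f, h) dominates b if it is no worse in both measures."""
    return a[0] <= b[0] and a[1] <= b[1]

def accept(point, filt):
    """Return the updated filter and whether `point` was accepted."""
    if any(dominates(entry, point) for entry in filt):
        return filt, False
    filt = [e for e in filt if not dominates(point, e)]
    return filt + [point], True

filt = [(10.0, 0.5), (12.0, 0.1)]
filt, ok = accept((11.0, 0.2), filt)   # not dominated -> accepted
print(ok, filt)
```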
Abstract:
Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative for solving constrained nonlinear optimization problems is the filters method. The filters method, introduced by Fletcher and Leyffer in 2002, has been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective one, attempting to minimize the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filters method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method for general constrained optimization, based on simplex methods, that combines the features of the simplex method and the filters method. This work presents a new variant of these methods, which combines the filters method with other direct search methods, and proposes some alternatives for aggregating the constraint violation functions.
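Some natural alternatives for aggregating the constraint violation functions into a single measure h(x) are the l1, l2, and max norms of the violations; a minimal sketch (the paper's specific choices may differ):

```python
import numpy as np

# For constraints g_i(x) <= 0, only positive parts count as violations.
def violations(g_values):
    return np.maximum(0.0, np.asarray(g_values))

def h_l1(g_values):   # sum of violations
    return violations(g_values).sum()

def h_l2(g_values):   # Euclidean norm of violations
    return np.linalg.norm(violations(g_values))

def h_max(g_values):  # worst single violation
    return violations(g_values).max()

g = [-0.5, 0.2, 1.3]  # first constraint satisfied, two violated
print(h_l1(g), h_l2(g), h_max(g))
```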
Abstract:
Adhesive bonding for joining multi-component structures is gaining momentum over welding, riveting, and fastening. The availability of accurate damage models is vital for the design of bonded structures, to minimize design costs and time to market. Cohesive zone models (CZMs) have been used for fracture prediction in structures. The eXtended Finite Element Method (XFEM) is a recent improvement of the Finite Element Method (FEM) that relies on traction-separation laws similar to those of CZMs, but allows the growth of discontinuities within bulk solids along an arbitrary path by enriching degrees of freedom. This work proposes and validates a damage law to model crack propagation in a thin layer of a structural epoxy adhesive using the XFEM. The fracture toughness in pure mode I (GIc) and the tensile cohesive strength (σn0) were defined by double-cantilever beam (DCB) and bulk tensile tests, respectively, which permitted building the damage law. The XFEM simulations of the DCB tests accurately matched the experimental load-displacement (P–δ) curves, which validated the analysis procedure.
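Besides the traction-separation shape, XFEM-style implementations typically track a scalar damage variable between initiation (traction reaches σn0) and failure (dissipated energy reaches GIc). A minimal sketch of the standard linear-softening damage variable, with hypothetical values:

```python
# Linear-softening damage variable D, from 0 (intact) to 1 (fully failed),
# for an opening delta between initiation (delta_0) and failure (delta_f).
sigma_n0 = 23.0      # tensile cohesive strength (MPa), hypothetical
G_Ic = 0.4           # mode I fracture toughness (N/mm), hypothetical

def damage(delta, delta_0, delta_f):
    """Standard linear-softening damage variable."""
    if delta <= delta_0:
        return 0.0
    if delta >= delta_f:
        return 1.0
    return (delta_f / delta) * (delta - delta_0) / (delta_f - delta_0)

delta_0 = sigma_n0 / 1e4          # initiation opening, stiffness 1e4 MPa/mm
delta_f = 2 * G_Ic / sigma_n0     # failure opening from GIc
print(f"D = {damage(0.01, delta_0, delta_f):.3f}")
```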