972 results for relative static method
Abstract:
This paper studies static-priority preemptive scheduling on a multiprocessor using partitioned scheduling. We propose a new scheduling algorithm and prove that if the proposed algorithm is used and less than 50% of the capacity is requested, then all deadlines are met. It is known that for every static-priority multiprocessor scheduling algorithm there is a task set that misses a deadline, although the requested capacity is arbitrarily close to 50%.
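For illustration only (the abstract does not reproduce the proposed algorithm), a minimal sketch of what a utilization-based admission test with first-fit partitioning could look like under a 50% capacity bound; the task model and the placement of the bound are assumptions:

```python
# Illustrative only: first-fit partitioning with a 50%-capacity admission test.
# This is NOT the algorithm proposed in the paper; the abstract does not give it.

def partition_first_fit(utilizations, num_processors, capacity_bound=0.5):
    """Try to place each task (given by its utilization) on some processor
    so that no processor exceeds the capacity bound."""
    load = [0.0] * num_processors
    assignment = []
    for u in utilizations:
        for p in range(num_processors):
            if load[p] + u <= capacity_bound:
                load[p] += u
                assignment.append(p)
                break
        else:
            return None  # no processor can accommodate this task
    return assignment

# Example: four tasks on two processors, total utilization 0.9 < 0.5 * 2
print(partition_first_fit([0.3, 0.2, 0.25, 0.15], num_processors=2))
```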
Abstract:
OBJECTIVE To analyze the evolution of catastrophic health expenditure and the inequalities in such expenses, according to the socioeconomic characteristics of Brazilian families.
METHODS Data from the National Household Budget 2002-2003 (48,470 households) and 2008-2009 (55,970 households) were analyzed. Catastrophic health expenditure was defined as excess expenditure, considering different methods of calculation: 10.0% and 20.0% of total consumption and 40.0% of the family's capacity to pay. The National Economic Indicator and schooling were considered as socioeconomic characteristics. The inequality measures used were the relative difference between rates, the rate ratio, and the concentration index.
RESULTS The catastrophic health expenditure varied between 0.7% and 21.0%, depending on the calculation method. The lowest prevalences were noted in relation to the capacity to pay, while the highest were noted in relation to total consumption. The prevalence of catastrophic health expenditure increased by 25.0% from 2002-2003 to 2008-2009 when the cut-off point of 20.0% of total consumption was considered, and by 100% when 40.0% or more of the capacity to pay was applied as the cut-off point. Socioeconomic inequalities in catastrophic health expenditure in Brazil between 2002-2003 and 2008-2009 increased significantly, becoming 5.20 times higher among the poorest and 4.17 times higher among the least educated.
CONCLUSIONS There was an increase in catastrophic health expenditure among Brazilian families, principally among the poorest and those headed by the least-educated individuals, contributing to an increase in social inequality.
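A worked numerical example of the thresholds mentioned above, with entirely hypothetical household figures:

```python
# Hypothetical household, illustrating the thresholds used in the abstract.
health_spending = 250.0      # monthly health expenditure
total_consumption = 2000.0   # total household consumption
capacity_to_pay = 500.0      # consumption net of subsistence spending (assumed)

catastrophic_10 = health_spending > 0.10 * total_consumption   # True  (250 > 200)
catastrophic_20 = health_spending > 0.20 * total_consumption   # False (250 < 400)
catastrophic_40 = health_spending > 0.40 * capacity_to_pay     # True  (250 > 200)
print(catastrophic_10, catastrophic_20, catastrophic_40)
```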
Abstract:
OBJECTIVE To propose a method of redistributing ill-defined causes of death (IDCD) based on the investigation of such causes.
METHODS In 2010, an evaluation of the results of investigating the causes of death classified as IDCD, in accordance with chapter 18 of the International Classification of Diseases (ICD-10), by the Mortality Information System was performed. The redistribution coefficients were calculated according to the proportional distribution of ill-defined causes reclassified after investigation into any chapter of the ICD-10 except chapter 18, and were used to redistribute, by sex and age, the ill-defined causes that were not investigated and remained. The IDCD redistribution coefficient was compared with two usual methods of redistribution: a) the total redistribution coefficient, based on the proportional distribution of all the defined causes originally notified, and b) the non-external redistribution coefficient, similar to the previous one but excluding external causes.
RESULTS Of the 97,314 deaths from ill-defined causes reported in 2010, 30.3% were investigated, and 65.5% of those were reclassified as defined causes after the investigation. Endocrine diseases, mental disorders, and maternal causes had a higher representation among the reclassified ill-defined causes, contrary to infectious diseases, neoplasms, and genitourinary diseases, which had higher proportions among the defined causes reported. External causes represented 9.3% of the reclassified ill-defined causes. The correction of mortality rates by the total redistribution coefficient and the non-external redistribution coefficient increased the magnitude of the rates by a relatively similar factor for most causes, contrary to the IDCD redistribution coefficient, which corrected the different causes of death with differentiated weights.
CONCLUSIONS The proportional distribution of causes among the ill-defined causes reclassified after investigation was not similar to the original distribution of defined causes. Therefore, the redistribution of the remaining ill-defined causes based on the investigation allows for more appropriate estimates of the mortality risk due to specific causes.
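A minimal sketch of the proportional redistribution described above, with made-up counts; the actual coefficients are computed from the investigated and reclassified deaths, stratified by sex and age:

```python
# Hypothetical counts of investigated ill-defined deaths reclassified into defined
# ICD-10 chapters (excluding chapter 18); used to derive redistribution coefficients.
reclassified = {"infectious": 120, "neoplasms": 80, "circulatory": 300, "external": 50}
total_reclassified = sum(reclassified.values())
coefficients = {cause: n / total_reclassified for cause, n in reclassified.items()}

# Remaining (non-investigated) ill-defined deaths redistributed with those weights.
remaining_ill_defined = 1000
redistributed = {cause: remaining_ill_defined * c for cause, c in coefficients.items()}
print(redistributed)
```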
Abstract:
A simple procedure to measure the cohesive laws of bonded joints under mode I loading using the double cantilever beam test is proposed. The method only requires recording the applied load–displacement data and measuring the crack opening displacement at the crack tip in the course of the test. The strain energy release rate is obtained by a procedure involving the Timoshenko beam theory, the specimen's compliance and the equivalent crack concept. Following the proposed approach, the influence of the fracture process zone is taken into account, which is fundamental for an accurate estimation of the failure process details. The cohesive law is obtained by differentiation of the strain energy release rate as a function of the crack opening displacement. The model was validated numerically considering three representative cohesive laws. Numerical simulations using finite element analysis including cohesive zone modeling were performed. The good agreement between the input and resulting laws for all the cases considered validates the model. An experimental confirmation was also performed by comparing the numerical and experimental load–displacement curves. The numerical load–displacement curves were obtained by adjusting typical cohesive laws to the ones measured experimentally following the proposed approach, and using finite element analysis including cohesive zone modeling. Once again, good agreement was obtained in the comparisons, demonstrating the good performance of the proposed methodology.
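In compact form, the procedure amounts to the Irwin-Kies compliance relation evaluated at an equivalent crack length, followed by differentiation with respect to the crack opening displacement; here P is the applied load, B the specimen width, C(a) the compliance and w the crack-tip opening (symbols assumed, since the abstract does not fix the notation):

```latex
% Strain energy release rate from the specimen compliance C(a), evaluated at the
% equivalent crack length a_e obtained from Timoshenko beam theory:
G_I = \frac{P^{2}}{2B}\left.\frac{\mathrm{d}C}{\mathrm{d}a}\right|_{a=a_e}
% Cohesive law recovered by differentiating G_I with respect to the crack
% opening displacement w at the crack tip:
\sigma(w) = \frac{\mathrm{d}G_I}{\mathrm{d}w}
```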
Abstract:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems derived from the original problem, has proven to be effective, particularly when used with direct search methods. An alternative for solving such problems is the filters method. The filters method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, while the filters method treats the optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filters method with derivative-free algorithms, the authors developed works in which other direct search methods were used, combining their potential with the filters method. More recently, a new variant of these methods was presented, in which some alternative ways of aggregating the constraints for the construction of the filters were proposed. This paper presents a variant of the filters method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked and not treated completely independently.
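A minimal sketch of the acceptance rule implied by the bi-objective view: a trial point is accepted only if no filter entry dominates its pair (objective value f, aggregated constraint violation h). This is the generic mechanism only, not the safeguarded variant proposed in the paper:

```python
# Generic filter acceptance test for the pair (f, h), where h >= 0 aggregates
# the constraint violations. Not the safeguarded variant described in the paper.

def dominates(entry, candidate):
    """(f1, h1) dominates (f2, h2) if it is no worse in both components."""
    f1, h1 = entry
    f2, h2 = candidate
    return f1 <= f2 and h1 <= h2

def accepted_by_filter(filter_entries, candidate):
    return not any(dominates(entry, candidate) for entry in filter_entries)

filter_entries = [(10.0, 0.5), (12.0, 0.1)]
print(accepted_by_filter(filter_entries, (11.0, 0.3)))  # True: not dominated
print(accepted_by_filter(filter_entries, (13.0, 0.6)))  # False: dominated by (10.0, 0.5)
```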
Abstract:
Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative used to solve constrained nonlinear optimization problems is the filters method. The filters method, introduced by Fletcher and Leyffer in 2002, has been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective one that attempts to minimize the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filters method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization, which combines the features of the simplex method and the filters method. This work presents a new variant of these methods that combines the filters method with other direct search methods, and some alternatives for aggregating the constraint violation functions are proposed.
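A few common ways of aggregating the constraint violations into the single measure h(x) used by the filter, for constraints g_i(x) <= 0; these are illustrative choices, not necessarily the alternatives proposed in the paper:

```python
# Common ways of aggregating the violations of constraints g_i(x) <= 0 into a
# single measure h(x); the specific alternatives proposed in the paper may differ.

def violations(g_values):
    return [max(0.0, g) for g in g_values]

def h_max(g_values):      # worst violation
    return max(violations(g_values), default=0.0)

def h_sum(g_values):      # l1-type aggregation
    return sum(violations(g_values))

def h_sum_sq(g_values):   # squared (l2-type) aggregation
    return sum(v * v for v in violations(g_values))

g = [-0.5, 0.2, 1.0]      # two violated constraints
print(h_max(g), h_sum(g), h_sum_sq(g))  # 1.0 1.2 1.04
```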
Abstract:
Joining of components with structural adhesives is currently one of the most widespread techniques for advanced structures (e.g., aerospace or aeronautical). Adhesive bonding does not involve drilling operations and it distributes the load over a larger area than mechanical joints. However, peak stresses tend to develop near the overlap edges because of differential straining of the adherends and load asymmetry. As a result, premature failures can be expected, especially for brittle adhesives. Moreover, bonded joints are very sensitive to the surface treatment of the material, service temperature, humidity and ageing. To overcome these limitations, the combination of adhesive bonding with spot-welding is a choice to be considered, adding a few advantages such as superior static strength and stiffness, higher peeling and fatigue strength, and easier fabrication, as fixtures during the adhesive curing are not needed. The experimental and numerical study presented here evaluates hybrid spot-welded/bonded single-lap joints in comparison with the purely spot-welded and bonded equivalents. A parametric study on the overlap length (LO) yielded different strength advantages, up to 58% compared with spot-welded joints and 24% over bonded joints. The Finite Element Method (FEM) and Cohesive Zone Models (CZM) for damage growth were also tested in Abaqus® to evaluate this technique for strength prediction, showing accurate estimations for all kinds of joints.
Abstract:
Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations, adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, where discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as pre-processing or post-processing stages. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, being competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features. (C) 2013 Elsevier B.V. All rights reserved.
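A minimal sketch of a filter-type incremental discretization in the spirit described above: quantization levels are added to a feature only while a relevance criterion improves. The criterion and the stopping rule below are placeholders; the paper's actual procedures are not reproduced here:

```python
import numpy as np

# Illustrative filter-type incremental discretization: keep doubling the number of
# bins for a feature while a (placeholder) relevance criterion keeps improving.

def relevance(discretized, y):
    """Placeholder criterion: correlation between the bin index and the target."""
    return abs(np.corrcoef(discretized, y)[0, 1])

def incremental_discretize(x, y, max_bits=6, tol=1e-3):
    best_bits, best_score = 1, -np.inf
    for bits in range(1, max_bits + 1):
        edges = np.quantile(x, np.linspace(0, 1, 2 ** bits + 1)[1:-1])
        d = np.digitize(x, edges)
        score = relevance(d, y)
        if score <= best_score + tol:
            break
        best_bits, best_score = bits, score
    return best_bits, best_score

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (x > 0).astype(float)
print(incremental_discretize(x, y))
```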
Abstract:
Adhesive bonding for the joining of multi-component structures is gaining momentum over welding, riveting and fastening. The availability of accurate damage models is vital for the design of bonded structures, to minimize design costs and time to market. Cohesive Zone Models (CZMs) have been used for fracture prediction in structures. The eXtended Finite Element Method (XFEM) is a recent improvement of the Finite Element Method (FEM) that relies on traction-separation laws similar to those of CZMs, but it allows the growth of discontinuities within bulk solids along an arbitrary path, by enriching degrees of freedom. This work proposes and validates a damage law to model crack propagation in a thin layer of a structural epoxy adhesive using the XFEM. The fracture toughness in pure mode I (GIc) and the tensile cohesive strength (σn0) were defined by Double-Cantilever Beam (DCB) and bulk tensile tests, respectively, which permitted the damage law to be built. The XFEM simulations of the DCB tests accurately matched the experimental load-displacement (P–δ) curves, which validated the analysis procedure.
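If a triangular (linear-softening) traction-separation law is assumed, the two measured properties fix the law completely; the sketch below shows that common construction with hypothetical values, and is not a restatement of the exact law shape used in the paper:

```python
# Triangular (linear-softening) traction-separation law built from GIc and the
# tensile cohesive strength sigma_n0; illustrative values, not the paper's data.
GIc = 0.4e3         # J/m^2 (hypothetical)
sigma_n0 = 23.0e6   # Pa    (hypothetical)

# For a triangular law the area under the curve equals GIc, so the final
# separation follows directly from GIc = 0.5 * sigma_n0 * delta_f.
delta_f = 2.0 * GIc / sigma_n0
print(f"separation at complete failure: {delta_f * 1e6:.1f} micrometres")
```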
Abstract:
This paper presents the Pseudo phase plane (PPP) method for detecting the existence of a nanofilm on the nitroazobenzene-modified glassy carbon electrode (NAB-GC) system. These modified electrode systems and the nitroazobenzene nanofilm were prepared by the electrochemical reduction of the diazonium salt of NAB at glassy carbon electrodes (GCE) in non-aqueous media. The IR spectra of the bare glassy carbon electrodes (GCE), the NAB-GC electrode system and the organic NAB film were recorded. The IR data of the bare GC, NAB-GC and NAB film were categorized into series consisting of FILM1, GC-NAB1, GC1; FILM2, GC-NAB2, GC2; FILM3, GC-NAB3, GC3; and FILM4, GC-NAB4, GC4, respectively. The PPP approach was applied to each group of the data of the unmodified and modified electrode systems with the nanofilm. The results provided by the PPP method show the existence of the NAB film on the modified GC electrode.
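The pseudo phase plane is essentially a delay reconstruction, plotting a signal against a delayed copy of itself; a minimal sketch follows, with a synthetic signal standing in for the IR spectra and an arbitrary delay (the delay choice and the series grouping used in the paper are not reproduced):

```python
import numpy as np

# Pseudo phase plane (delay) reconstruction of a signal x: plot x(t) against
# x(t + tau). A synthetic signal stands in for the IR spectra of the abstract.

def pseudo_phase_plane(x, tau):
    return x[:-tau], x[tau:]

t = np.linspace(0, 10, 1000)
x = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
x1, x2 = pseudo_phase_plane(x, tau=25)
# x1 vs x2 would then be plotted to compare bare and film-modified electrodes.
print(x1.shape, x2.shape)
```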
Abstract:
Master's degree in Physiotherapy
Abstract:
In this work, the shear modulus and strength of the acrylic adhesive 3M® DP 8005 were evaluated by two different methods: the Thick Adherend Shear Test (TAST) and the Notched Plate Shear Method (Arcan). However, TAST standards advise the use of a special extensometer attached to the specimen, which requires a very experienced technician. In the present study, the adhesive shear displacement for the TAST was measured using an optical technique, and also with a conventional inductive extensometer of 25 mm used for tensile tests. This allowed for an assessment of the suitability of using a conventional extensometer to measure this parameter. Since the results obtained by the two techniques are identical, it can be concluded that using a conventional extensometer is a valid option to obtain the shear modulus for the particular adhesive used. In the Arcan tests, the adhesive shear displacement was only measured using the optical technique. This work also aimed to compare the shear modulus and strength obtained by the TAST and Arcan test methods.
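A worked example (hypothetical numbers) of how the shear modulus follows from the measured quantities in a TAST-type specimen: shear stress from load and bond area, shear strain from the adhesive shear displacement and the bondline thickness:

```python
# Hypothetical TAST-type evaluation: shear modulus from load, bond area,
# measured adhesive shear displacement and bondline thickness (all values assumed).
P = 4000.0                 # N, applied load in the elastic range
area = 25.0e-3 * 5.0e-3    # m^2, overlap length x width
delta_s = 0.02e-3          # m, adhesive shear displacement
t_adhesive = 0.5e-3        # m, bondline thickness

tau = P / area                  # shear stress
gamma = delta_s / t_adhesive    # shear strain
G = tau / gamma
print(f"G = {G / 1e6:.0f} MPa")
```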
Abstract:
Dissertation presented to obtain the degree of Doctor in Chemical Engineering, speciality of Chemical Reaction Engineering, from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Radial basis functions are being used in different scientific areas to reproduce the geometrical modeling of an object or structure, as well as to predict its behavior. Due to their characteristics, these functions are well suited for meshfree modeling of physical quantities, which, for instance, can be associated with the data sets of 3D laser scanning point clouds. In the present work the geometry of a structure is modeled using multiquadric radial basis functions, and its configuration is further optimized in order to obtain better performance concerning its static and dynamic behavior. For this purpose the authors consider the particle swarm optimization technique. A set of case studies is presented to illustrate the adequacy of the meshfree model used, as well as its link to the particle swarm optimization technique. © 2014 IEEE.
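A minimal sketch of multiquadric RBF interpolation of scattered points; the shape parameter, the sample geometry and the coupling to particle swarm optimization are assumptions and are not taken from the paper:

```python
import numpy as np

# Multiquadric RBF interpolation of scattered data: phi(r) = sqrt(r^2 + c^2).
# Illustrative only; the shape parameter c and the PSO coupling are assumptions.

def multiquadric_fit(centers, values, c=1.0):
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = np.sqrt(r**2 + c**2)
    return np.linalg.solve(A, values)

def multiquadric_eval(points, centers, weights, c=1.0):
    r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return np.sqrt(r**2 + c**2) @ weights

rng = np.random.default_rng(1)
centers = rng.uniform(size=(20, 2))            # e.g. points from a scanned surface
values = np.sin(centers[:, 0]) + centers[:, 1]
w = multiquadric_fit(centers, values)
print(multiquadric_eval(np.array([[0.5, 0.5]]), centers, w))
```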
Abstract:
It is important to understand and forecast the typical, or a particular, household's daily consumption in order to design and size suitable renewable energy systems and energy storage. In this research on Short Term Load Forecasting (STLF), Artificial Neural Networks (ANNs) were used and, despite the unpredictability of consumption, the possibility of forecasting the electricity consumption of a household with confidence was demonstrated. ANNs are recognized as a potential methodology for modeling hourly and daily energy consumption and load forecasting. Input variables such as apartment area, number of occupants, electrical appliance consumption and Boolean inputs such as the hourly meter system were considered. Furthermore, the investigation carried out aims to define an ANN architecture and a training algorithm in order to achieve a robust model to be used for forecasting energy consumption in a typical household. It was observed that a feed-forward ANN and the Levenberg-Marquardt algorithm provided a good performance. This research used a database with consumption records logged in 93 real households in Lisbon, Portugal, between February 2000 and July 2001, including both weekdays and weekends. The results show that the ANN approach provides a reliable model for forecasting household electric energy consumption and load profile. © 2014 The Author.
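A minimal sketch of a feed-forward network for hourly load forecasting on synthetic data; note that scikit-learn does not provide the Levenberg-Marquardt algorithm used in the paper, so the quasi-Newton "lbfgs" solver is used as a stand-in, and all features, names and values below are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Feed-forward ANN for short-term load forecasting on synthetic data.
# Levenberg-Marquardt (as in the paper) is not available in scikit-learn,
# so 'lbfgs' is used as a stand-in; features and values are assumptions.
rng = np.random.default_rng(0)
hours = rng.integers(0, 24, size=500)
area = rng.uniform(50, 150, size=500)          # apartment area, m^2
occupants = rng.integers(1, 6, size=500)
load = (0.2 * occupants + 0.01 * area
        + 0.5 * np.sin(2 * np.pi * hours / 24)
        + 0.05 * rng.normal(size=500))         # synthetic hourly consumption, kWh

X = np.column_stack([hours, area, occupants])
model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(X, load)
print(model.predict([[18, 90, 3]]))            # forecast for a hypothetical household
```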