933 results for participatory evaluation methodology
Abstract:
The study presents the process of constructing the research methodology "Training in SUS Humanization: evaluation of the effects of training processes for institutional supporters on health production in the territories of Rio Grande do Sul, Santa Catarina and São Paulo." The aim was to develop an evaluative practice appropriate to the training processes: a participatory methodology that, instead of evaluating something from the outside, evaluated together with the supporters who attended the training-intervention. The constitution of the Research Interest Group was therefore a prominent tool. Trained supporters joined the research team in order to expand the participatory possibilities of a large and dispersed group, producing interferences in the conduct of the investigative process that are described and analyzed in the study. At the same time, their experiences reshaped the understandings they had held until then about the training-intervention experiences and their effects on daily life, almost four years later. Thus, the methodological approach was intrinsically linked to the construction of a differentiated, and necessarily collective, plane of subjectivity, which shifted the supporters involved from the position of mere data suppliers to a position of laterality in relation to the other actors. The experimentation afforded by the participatory strategies allowed researchers and supporters to interfere in and compose the evaluation scenario, with remarkable performances throughout the investigative process. The survey was configured as a bet on a given methodological architecture that, in seeking to overcome the evaluator-evaluated logic, produced information for (retro)feeding the intervention that triggered it. In its formative dimension, it also revisited the working processes analyzed by the supporters, recovering the indissoluble way in which health activities intertwine intervening, training and evaluating.
Abstract:
OBJECTIVE: To compare the strength generated by the rotator muscles of the shoulder joint between the right and left upper limbs in healthy individuals. METHODS: Muscle strength of the upper limbs was evaluated from isometric contractions in the horizontal direction (rotation) using an isometric dynamometer equipped with transducers, signal conditioning, a data acquisition board and a computer. Study participants were 22 healthy male military subjects, aged 18 to 19 years, with body mass between 57.7 and 93.0 kg (71.8 ± 9.45 kg) and height between 1.67 and 1.90 m (1.75 ± 0.06 m), without clinical diseases or any type of orthopedic injury of the musculoskeletal system. RESULTS: The mean internal rotation strength in the right upper limb (RUL) was higher than in the left upper limb (LUL) (p = 0.723). The mean external rotation strength in the RUL was lower than in the LUL (p = 0.788). No statistically significant difference was observed when comparing the values of all isometric strength tests. CONCLUSION: For the sample and methodology used to assess muscle strength, there was no statistical difference between the strength generated by the rotator cuff muscles of the right and left upper limbs. Experimental Study.
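The left/right comparison described above is, in essence, a paired test on the two limbs of each subject. A minimal sketch of that pattern, assuming a paired t-test (the abstract does not name the specific test) and using invented force values:

# Hypothetical illustration of the left/right strength comparison described above.
# The force values below are invented; only the analysis pattern is shown.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects = 22

# Simulated peak isometric internal-rotation force (N) for each limb of each subject
rul_internal = rng.normal(loc=180.0, scale=25.0, size=n_subjects)                 # right upper limb
lul_internal = rul_internal + rng.normal(loc=0.0, scale=10.0, size=n_subjects)    # left upper limb

# Paired t-test: each subject contributes one measurement per limb
t_stat, p_value = stats.ttest_rel(rul_internal, lul_internal)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no evidence of a side-to-side difference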
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Real Options Analysis (ROA) has become a complementary tool for engineering economics. It has become popular due to the limitations of conventional engineering valuation methods, specifically their assumptions about uncertainty. Industry is seeking to quantify the value of engineering investments under uncertainty. One problem with conventional tools is that they may assume cash flows are certain, thereby minimizing the possibility of uncertainty in future values. Real options analysis provides a solution to this problem, but has been used sparingly by practitioners. This paper seeks to provide a new model, referred to as the Beta Distribution Real Options Pricing Model (BDROP), which addresses these limitations and can be easily used by practitioners. The positive attributes of this new model include unconstrained market assumptions, robust representation of the underlying asset's uncertainty, and an uncomplicated methodology. This research demonstrates the use of the model to evaluate the use of automation for inventory control.
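The abstract does not give the BDROP formulas, so the following is only a generic, hypothetical illustration of the idea it describes: the uncertain value of an engineering investment (here, an inventory-control automation project) is represented with a scaled beta distribution, and the option to invest is valued by Monte Carlo as a discounted expected payoff. All parameters are invented and this is not the authors' model.

# Generic real-options illustration (NOT the BDROP model itself): the uncertain
# project value is modelled with a scaled beta distribution and the option to
# invest is valued by Monte Carlo as a discounted expected payoff.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs
investment_cost = 1_000_000.0                     # cost of deploying the automation (strike)
value_low, value_high = 600_000.0, 1_800_000.0    # assumed plausible range of project value
alpha, beta = 2.0, 3.0                            # beta-distribution shape parameters (assumed)
discount_rate = 0.08                              # annual discount rate
horizon_years = 1.0                               # time until the investment decision

# Sample the terminal project value from a beta distribution scaled to the assumed range
samples = value_low + (value_high - value_low) * rng.beta(alpha, beta, size=100_000)

# Option payoff: invest only if the realized project value exceeds the cost
payoffs = np.maximum(samples - investment_cost, 0.0)
option_value = np.exp(-discount_rate * horizon_years) * payoffs.mean()
print(f"Estimated option value: ${option_value:,.0f}")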
Abstract:
The electromagnetic interference between electronic systems, or between their components, influences the overall performance. It is thus important to model these interferences in order to optimize the position of the components of an electronic system. In this paper, a methodology to construct an equivalent model of magnetic field sources is proposed. It is based on the multipole expansion, and it represents the radiated emission of generic structures in a spherical reference frame. Experimental results for different kinds of sources are presented, illustrating our method.
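For context, a standard form of the spherical multipole expansion on which such equivalent-source models are built (the exact parameterization used by the authors is not given here) expresses the magnetic scalar potential outside the source region as

\psi(r,\theta,\varphi) = \sum_{n=1}^{\infty} \sum_{m=-n}^{n} \frac{A_{nm}}{r^{\,n+1}} \, Y_{nm}(\theta,\varphi), \qquad \mathbf{B} = -\mu_0 \nabla \psi,

so that a small set of coefficients A_{nm}, identified from near-field measurements and truncated at low order, provides a compact equivalent model of the radiated emission.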
Abstract:
This paper discusses the influence of fat type on the structure of ice cream during its production, by means of rheo-optical analysis. Fat plays an important part in the formation of ice cream structure: it is responsible for air stabilization, flavor release, texture and melting properties. The objective of this study was to use a rheological method to predict fat network formation in ice cream made with three types of fat (hydrogenated, low-trans and palm fat). The three formulations were produced using the same methodology and ratio of ingredients. Rheo-optical measurements were taken before and after the ageing process, and the maximum compression force, overrun and melting profile were determined in the finished product. The rheological analysis showed a better response to the ageing process for the hydrogenated fat, followed by the low-trans fat. The formulation with palm fat showed the greatest differences among the three; the rheological tests suggested a weaker destabilization of the fat globule membrane by the emulsifier. The overrun, texture measurements and meltdown profile distinguished the structure formed by the hydrogenated fat from that formed by the other fats.
Abstract:
Duarte MAH, Alves de Aguiar K, Zeferino MA, Vivan RR, Ordinola-Zapata R, Tanomaru-Filho M, Weckwerth PH, Kuga MC. Evaluation of the propylene glycol association on some physical and chemical properties of mineral trioxide aggregate. International Endodontic Journal, 45, 565-570, 2012. Abstract Aim: To evaluate the influence of propylene glycol (PG) on the flowability, setting time, pH and calcium ion release of mineral trioxide aggregate (MTA). Methodology: Mineral trioxide aggregate was mixed with different proportions of PG, as follows: group 1, MTA + 100% distilled water (DW); group 2, MTA + 80% DW and 20% PG; group 3, MTA + 50% DW and 50% PG; group 4, MTA + 20% DW and 80% PG; group 5, MTA + 100% PG. ANSI/ADA Specification No. 57 was followed for evaluating flowability, and setting time was measured using ASTM C266-08. For the pH and calcium release analyses, 50 acrylic teeth with root-end cavities were filled with the materials (n = 10) and individually immersed in flasks containing 10 mL of deionized water. After 3 h, 24 h, 72 h and 168 h, the teeth were placed in new flasks and the water in which each specimen had been immersed had its pH determined with a pH meter and its calcium release measured with an atomic absorption spectrophotometer fitted with a calcium-specific hollow cathode lamp. Data were analysed using a one-way ANOVA test for the global comparison and Tukey's test for individual comparisons. Results: The highest flowability value was observed with MTA + 20% DW and 80% PG and the lowest values were found with MTA + 100% DW; both were significantly different from the other groups (P < 0.05). The presence of PG did not affect the pH and calcium release. The MTA + 100% PG mixture favoured the highest (P < 0.05) pH and calcium release after 3 h. Increasing the PG proportion interfered (P < 0.05) with the setting time; when used at a volume of 100%, setting did not occur. Conclusion: The addition of PG to MTA-Angelus increased its setting time, improved flowability and increased the pH and calcium ion release during the initial post-mixing periods. The ratio of 80% DW to 20% PG is recommended.
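The statistical treatment named above (one-way ANOVA for the global comparison, Tukey's test for the individual comparisons) follows a standard pattern; a minimal sketch with invented pH readings, assuming scipy and statsmodels are available:

# Illustration of the one-way ANOVA + Tukey analysis named in the abstract.
# The pH readings below are invented; only the analysis pattern is shown.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {
    "MTA_100DW": rng.normal(9.0, 0.3, 10),
    "MTA_80DW_20PG": rng.normal(9.1, 0.3, 10),
    "MTA_50DW_50PG": rng.normal(9.2, 0.3, 10),
    "MTA_20DW_80PG": rng.normal(9.3, 0.3, 10),
    "MTA_100PG": rng.normal(9.8, 0.3, 10),
}

# Global comparison: one-way ANOVA
f_stat, p_global = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_global:.4f}")

# Individual comparisons: Tukey's HSD
data = pd.DataFrame(
    [(name, value) for name, values in groups.items() for value in values],
    columns=["group", "pH"],
)
print(pairwise_tukeyhsd(data["pH"], data["group"], alpha=0.05))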
Abstract:
During the last three decades, several predictive models have been developed to estimate the somatic production of macroinvertebrates. Although these models have been evaluated for their ability to assess the production of macrobenthos in different marine ecosystems, they have not been applied specifically to sandy beach macrofauna and may not be directly applicable to this transitional environment. Hence, in this study, a broad literature review of sandy beach macrofauna production was conducted and estimates obtained with cohort-based and size-based methods were collected. The performance of nine models in estimating the production of individual populations from the sandy beach environment, evaluated for all taxonomic groups combined and for individual groups separately, was assessed by comparing the production predicted by the models to the estimates obtained from the literature (observed production). Most of the models overestimated population production compared to the observed estimates, whether for all populations combined or for more specific taxonomic groups. However, two models developed by Cusson and Bourget provided the best fits to measured production, and thus represent the best alternatives to the cohort-based and size-based methods in this habitat. The consistent performance of one of these models, developed for the macrobenthos of sandy substrate habitats (C&B-SS), shows that the performance of a model does not depend on whether it was developed for a specific taxonomic group. Moreover, since some widely used models (e.g., the Robertson model) show very different responses when applied to the macrofauna of different marine environments (e.g., sandy beaches and estuaries), prior evaluation of these models is essential.
Abstract:
In this work, the reduction reaction of the herbicide paraquat was used to obtain analytical signals with the electrochemical techniques of differential pulse voltammetry, square wave voltammetry and multiple square wave voltammetry. Analytes were prepared with laboratory purified water and with natural water samples (from the Mogi-Guaçu River, SP). The electrochemical techniques were applied to 1.0 mol L^-1 Na2SO4 solutions, at pH 5.5, containing different concentrations of paraquat in the range of 1 to 10 µmol L^-1, using a gold ultramicroelectrode. Five replicate experiments were conducted and, in each, the mean peak currents obtained at -0.70 V vs. Ag/AgCl yielded excellent linear relationships with pesticide concentration. The slope values of the calibration plots (method sensitivity) were 4.06 x 10^-3, 1.07 x 10^-2 and 2.95 x 10^-2 A mol^-1 L for purified water by differential pulse voltammetry, square wave voltammetry and multiple square wave voltammetry, respectively. For river water samples, the slope values were 2.60 x 10^-3, 1.06 x 10^-2 and 3.35 x 10^-2 A mol^-1 L, respectively, showing a small interference from the natural matrix components in paraquat determinations. The detection limits for paraquat were calculated by two distinct methodologies, i.e., as proposed by IUPAC and by a statistical method. The values obtained with multiple square wave voltammetry were 0.002 and 0.12 µmol L^-1, respectively, for pure water electrolytes. When the detection limit obtained from the IUPAC recommendation is inserted into the calibration curve equation, the resulting analytical signal (oxidation current) is smaller than the one experimentally observed for the blank solution under the same experimental conditions. This is inconsistent with the definition of a detection limit, and thus the IUPAC methodology requires further discussion. The same conclusion can be drawn from the analysis of the detection limits obtained with the other techniques studied.
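The two detection-limit definitions being compared can be made concrete. A minimal sketch with invented calibration data, computing an IUPAC-style limit (3 x standard deviation of blank replicates / slope) and one common regression-based statistical limit (3.3 x residual standard deviation of the fit / slope); the paper's exact statistical procedure is not specified in the abstract:

# Hypothetical calibration data (concentration in umol/L, peak current in A).
# Illustrates the two detection-limit definitions discussed above.
import numpy as np

conc = np.array([1, 2, 4, 6, 8, 10], dtype=float)                  # umol/L (invented)
current = 2.95e-8 * conc + np.array([1, -2, 1, 3, -1, 0]) * 1e-9   # A (invented)

# Least-squares calibration line: i = slope * c + intercept
slope, intercept = np.polyfit(conc, current, 1)

# IUPAC-style limit: 3 x standard deviation of blank replicates / slope
blank_replicates = np.array([0.8, 1.1, 0.9, 1.2, 1.0]) * 1e-9      # A (invented)
lod_iupac = 3 * blank_replicates.std(ddof=1) / slope

# One common "statistical" alternative: 3.3 x residual standard deviation of the fit / slope
residuals = current - (slope * conc + intercept)
s_res = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))
lod_stat = 3.3 * s_res / slope

print(f"LOD (IUPAC-style):       {lod_iupac:.3f} umol/L")
print(f"LOD (calibration-based): {lod_stat:.3f} umol/L")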
Abstract:
INTRODUCTION: The accurate evaluation of the error of measurement (EM) is extremely important both in growth studies and in clinical research, since the changes involved are usually quantitatively small. In any study it is important to evaluate the EM in order to validate the results and, consequently, the conclusions. Because of its extreme simplicity, the Dahlberg formula is widely used worldwide, mainly in cephalometric studies. OBJECTIVES: (I) To elucidate the formula proposed by Dahlberg in 1940, evaluating it by comparison with linear regression analysis; (II) to propose a simple methodology to analyze the results, which provides statistical elements to assist researchers in obtaining a consistent evaluation of the EM. METHODS: We applied linear regression analysis, hypothesis tests on its parameters, and a formula involving the standard deviation of the error of measurement and the measured values. RESULTS AND CONCLUSION: We introduce an error coefficient, which is a proportion related to the scale of the observed values. This provides new parameters that facilitate the evaluation of the impact of random errors on the final results of a study.
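For reference, the Dahlberg (1940) formula discussed above estimates the standard deviation of the error of measurement from n double determinations as

D = \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}},

where d_i is the difference between the first and second measurements of case i. The error coefficient introduced by the authors expresses this error relative to the scale of the observed values; its exact definition is not given in the abstract.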
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs and installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, and universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed, simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. We present a detailed analysis of this NoC topology and its routing algorithms, and we propose Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow, based on modified publicly available tools, that can be used in combination to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this thesis helped to develop;
• a comprehensive simulation-based comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay station repeaters and makes it possible to reduce the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane routers. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This thesis was written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
Abstract:
The field of "computer security" is often considered something in between art and science. This is partly due to the lack of widely agreed and standardized methodologies for evaluating the degree of security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that can be applied to different threat scenarios with the same degree of effectiveness. Security testing methodologies are the first step towards standardized security evaluation processes and towards understanding how security threats evolve over time. This dissertation analyzes some of the most widely used methodologies, identifying differences and commonalities that are useful for comparing them and assessing their quality. The dissertation then proposes a new, enhanced methodology built by keeping the best of every analyzed methodology. The designed methodology is tested on different systems with very effective results, which is the main evidence that it could really be applied in practical cases. Much of the dissertation discusses and shows how the presented testing methodology can be applied to such different systems, and even used to evade security measures by inverting its goals and scopes. Real cases are often hard to find in methodology documents; in contrast, this dissertation presents real and practical cases, offering technical details about how to apply the methodology. Electronic voting systems are the first field test considered, with Pvote and Scantegrity as the two tested electronic voting systems. The usability and effectiveness of the designed methodology for electronic voting systems is demonstrated through the analysis of these field cases. Furthermore, reputation and antivirus engines have also been analyzed, with similar results. The dissertation concludes by presenting some general guidelines for building a coordination-based approach to electronic voting systems that improves security without decreasing system modularity.
Abstract:
This thesis presents the SEELF (Sustainable EEL Fishery) Index, a methodology for the evaluation of the European eel (Anguilla anguilla) in the implementation of an effective Eel Management Plan (EMP), as defined by EU Regulation No. 1100/2007. SEELF uses internal and external indices, age and blood parameters, and selects suitable specimens for restocking; it is also a reliable tool for eel stock management. The SEELF Index was developed in two versions: SEELF A, to be used in field operations (catch & release, eel status monitoring), and SEELF B, to be used for quality control (food production) and research (eel status monitoring). Health status was also evaluated by biomarker analysis (ChE), and the data were compared with the age of the eels. Age determination was performed by otolith reading and fish scale reading, and a calibration between the two methods was possible. The study area was the Comacchio lagoon, a brackish coastal lagoon in Italy, well known as an example of a suitable environment for eel fishery, where the capability to use local natural resources has long been a key factor for successful fishery management. The Comacchio lagoon is proposed as an area where an effective EMP can be implemented, in agreement with the main features (management of basins, reduction of mortality due to predators, etc.) highlighted for the designation of a European Restocking Area (ERA). The ERA is a new concept, proposed as a pillar of a new strategy for eel management and conservation. Furthermore, the features of ERAs can be useful in the framework of the European Scale Eel Management Plan (ESEMP), proposed as a European-scale implementation of the EMP, providing more effective conservation measures for eel management.
Abstract:
The evaluation of the structural performance of existing concrete buildings, built according to standards and with materials quite different from those available today, requires procedures and methods able to compensate for the lack of data about mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, including tests on drilled cores; on the other hand, it is accepted that non-destructive testing (NDT) cannot be used as the only means of obtaining structural information, but it can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT results. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e., rebound index, ultrasonic pulse velocity and compressive strength (SonReb method). To this end, a relevant number of DT and NDT tests was performed on several school buildings located in Cesena (Italy), and the above relationships were assessed on site by correlating the NDT results to the strength of cores drilled in adjacent locations. Concrete compressive strength assessed by means of NDT methods and evaluated with correlation formulas has the advantage of being much simpler to implement and use in future applications than other methods, even if its accuracy is strictly limited to the analysis of concretes having the same characteristics as those used for calibration. This limitation warranted a search for a different way of evaluating the non-destructive parameters obtained on site. To this aim, a methodology for the neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for the specific analysis were chosen, taking into account the developments presented in the literature in this field. The networks were trained and tested in order to obtain a more reliable strength identification methodology.
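A functional form commonly used for SonReb correlation formulas in the literature (not necessarily the specific formulas assessed in this study) is a multiplicative power law in the rebound index R and the ultrasonic pulse velocity V, fitted by regression on log-transformed data. A minimal sketch with invented core data:

# Fit a SonReb-type power law  fc = a * R^b * V^c  to (invented) core data.
# R: rebound index (-), V: ultrasonic pulse velocity (m/s), fc: core strength (MPa).
import numpy as np

R = np.array([28, 32, 35, 38, 41, 44], dtype=float)                  # invented
V = np.array([3600, 3800, 3950, 4100, 4250, 4400], dtype=float)      # invented
fc = np.array([14.0, 18.5, 22.0, 26.5, 31.0, 36.0])                  # invented

# Linearize: ln(fc) = ln(a) + b*ln(R) + c*ln(V), then solve by least squares
X = np.column_stack([np.ones_like(R), np.log(R), np.log(V)])
coeffs, *_ = np.linalg.lstsq(X, np.log(fc), rcond=None)
a, b, c = np.exp(coeffs[0]), coeffs[1], coeffs[2]
print(f"fc ~ {a:.3e} * R^{b:.2f} * V^{c:.2f}")

# Predict strength from new NDT readings (valid only for concretes similar to the calibration set)
print("Predicted fc:", a * 36.0**b * 4000.0**c, "MPa")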
Abstract:
In this study, some important aspects of the relationship between honey bees (Apis mellifera L.) and pesticides were investigated. In the first part of the research, the effects of the exposure of honey bees to dusts contaminated with neonicotinoids and fipronil were analyzed. Considerable amounts of these pesticides, employed for maize seed dressing treatments, may be dispersed during sowing operations, thus representing a route of intoxication for honey bees. In particular, a specific route of exposure to these pesticide formulations, indirect contact, was taken into account. To this aim, we conducted different experiments, in the laboratory, in semi-field and in open field conditions, in order to assess the effects on mortality, foraging behaviour, colony development and orientation capacity. The actual dispersal of contaminated dusts was previously assessed in specific field trials. In the second part, the impact of various pesticides (chemical and biological) on biochemical and physiological changes in honey bees was evaluated. Different routes and durations of exposure to the tested products were also employed. Three experiments were performed, combining Bt spores and deltamethrin, Bt spores and fipronil, and difenoconazole and deltamethrin. Several important enzymes (GST, ALP, SOD, CAT, G6PDH, GAPDH) were selected in order to test the pesticide-induced variations in their activity. These enzymes are involved in different pathways of detoxification, oxidative stress defence and energy metabolism. The results showed a significant effect of the neonicotinoid- and fipronil-contaminated dusts on mortality, both in laboratory and in semi-field trials. However, no effects were evidenced on the orientation capacity of honey bees. The analysis of the different biochemical indicators highlighted some interesting physiological variations that can be linked to pesticide exposure. We therefore draw attention to the possibility of using such a methodology as a novel toxicity endpoint in environmental risk assessment.