820 results for Performance Based Assessment
Abstract:
The aim of this work is to harmonize uniform structures for the significant environmental aspects of UPM's paper and pulp mills and for their environmental risk management systems, giving the company's units consistent objectives and analysis methods. The harmonization process is part of the development of the company-wide environmental management system, and the group's EMS process in turn converges with the development of the group's integrated management system. In addition, the case study in this work examined the integration potential of the risk management systems: integration would better capture the synergy benefits of a large company and the interaction between its actors, and would improve the adaptability and usability of the risk management system. Three examples are discussed, and on their basis a proposal is made for harmonized significant environmental aspects and for the parameters of the risk management systems. The research problem is approached through interviews, the literature, a study the company commissioned from PWC, and the author's own conclusions. The work also presents how the effectiveness of the environmental management system can be verified relative to environmental performance variables. The foundation for the goal of continuous improvement is organizational learning, between individual employees, teams and units alike; it provides the impetus for exploiting intangible assets, such as environmental know-how, in the best possible way. The most important results of the work are the proposals for harmonized significant environmental aspects and for the defined components of the environmental risk management system: definitions and scales for risk probability, consequences and risk classes. In the final part of the work, a case study lays the foundation for integrating the two different risk management systems of the wastewater treatment plant at the Rauma mill.
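The risk-management components proposed in this abstract (scales for probability, consequences and risk classes) can be illustrated with a classic risk-matrix mapping. The 5-point scales, the product rule and the class thresholds below are hypothetical, chosen only to show the shape of such a component; the work's actual definitions are not reproduced here.

```python
# Illustrative sketch only: the 1-5 scales and class thresholds below are
# hypothetical stand-ins for the harmonized definitions the work proposes.

def risk_class(probability: int, consequence: int) -> str:
    """Map a probability score (1-5) and a consequence score (1-5)
    to a risk class via their product."""
    if not (1 <= probability <= 5 and 1 <= consequence <= 5):
        raise ValueError("scores must be in 1..5")
    score = probability * consequence
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_class(4, 5))  # high
print(risk_class(2, 2))  # low
```

In a harmonized system, the value of such a mapping is that every unit scores probability and consequence on the same scales, so risk classes become comparable across mills.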
Abstract:
This paper presents a study of correlations between the performance of trainee translators, according to their teacher’s assessment, and the quality of their self-evaluation, according to their answers to metacognitive questionnaires. Two case-studies of two consecutive editions of a course in general translation from German into Spanish are dealt with. The course involved the use of post-translation metacognitive questionnaires designed to help trainees to evaluate their translating. A selection of the questionnaires (from the strongest and the weakest performances by students for each course edition) is considered. The study focuses on one item in these questionnaires that has to do with identifying translation problems and justifying their solutions. An interpretive analysis of the trainees’ answers for this questionnaire item reveals that the best-performing students were more strategically and translationally aware in self-evaluating their own translating. Our conclusions are based on considering six parameters from the analysis of the trainees’ answers, which are tentatively regarded as indicative of the quality of their self-evaluation.
Abstract:
Homology modeling is the most commonly used technique to build a three-dimensional model for a protein sequence. It relies heavily on the quality of the sequence alignment between the protein to be modeled and related proteins of known three-dimensional structure. Alignment quality can be assessed according to the physico-chemical properties of the three-dimensional models it produces. In this work, we introduce fifteen predictors designed to evaluate the properties of the models obtained for various alignments. They consist of an energy value obtained from different force fields (CHARMM, ProsaII or ANOLEA) computed on residues selected around misaligned regions. These predictors were evaluated on ten challenging test cases. For each target, all possible ungapped alignments are generated and their corresponding models are computed and evaluated. The best predictor, retrieving the structural alignment for 9 out of 10 test cases, is based on the ANOLEA atomistic mean force potential and takes into account residues around misaligned secondary structure elements. The performance of the other predictors is significantly lower. This work shows that substantial improvement in local alignments can be obtained by careful assessment of the local structure of the resulting models.
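The selection loop described here (enumerate every ungapped alignment, score each, keep the best) can be sketched in a few lines. In the study, each alignment's 3-D model is scored with a force field (CHARMM, ProsaII or ANOLEA); the toy per-residue mismatch "energy" below is only a stand-in for that scoring step.

```python
# Sketch of the exhaustive ungapped-alignment search: every offset of the
# query along the template is an alignment. The real predictors score the
# resulting 3-D model with a force field; the toy mismatch count used here
# as the "energy" (lower is better) is purely illustrative.

def best_ungapped_alignment(query: str, template: str):
    best = None
    for offset in range(len(template) - len(query) + 1):
        window = template[offset:offset + len(query)]
        energy = sum(q != t for q, t in zip(query, window))  # toy energy
        if best is None or energy < best[1]:
            best = (offset, energy)
    return best  # (offset of best alignment, its energy)

print(best_ungapped_alignment("ACD", "GGACDEF"))  # (2, 0)
```

Because the alignment space is enumerated exhaustively, the quality of the result depends entirely on how well the scoring function ranks the candidate models, which is exactly what the fifteen predictors are designed to do.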
Abstract:
One aim of this study is to determine the impact of water velocity on the uptake of indicator polychlorinated biphenyls (iPCBs) by silicone rubber (SR) and low-density polyethylene (LDPE) passive samplers. A second aim is to assess the efficiency of performance reference compounds (PRCs) in correcting for the impact of water velocity. SR and LDPE samplers were spiked with 11 or 12 PRCs and exposed for 6 weeks to four different velocities (in the range of 1.6 to 37.7 cm s−1) in river-like flow conditions using a channel system supplied with river water. A relationship between velocity and uptake was found for each iPCB, which makes it possible to determine the expected changes in uptake due to velocity variations. For both samplers, velocity increases from 2 to 10 cm s−1, 30 cm s−1 (interpolated data) and 100 cm s−1 (extrapolated data) led to increases in uptake that did not exceed factors of 2, 3 and 4.5, respectively. Results also showed that the influence of velocity decreased with increasing octanol-water partition coefficient (log Kow) of the iPCBs when SR was used, whereas the opposite effect was observed for LDPE. Time-weighted average (TWA) concentrations of iPCBs in water were calculated from iPCB uptake and PRC release. These calculations were performed using either a single PRC or all the PRCs. The efficiency of PRCs in correcting the impact of velocity was assessed by comparing the TWA concentrations obtained at the four tested velocities. For SR, good agreement was found among the four TWA concentrations with both methods (average RSD < 10%). For LDPE as well, PRCs offered a good correction of the impact of water velocity (average RSD of about 10 to 20%). These results contribute to the process of acceptance of passive sampling in routine regulatory monitoring programs.
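The PRC-based calculation mentioned here follows, in its simplest form, a standard first-order exchange model: the measured fraction of a PRC retained after exposure gives the in-situ sampling rate, which in turn converts the accumulated analyte mass into a TWA water concentration. The sketch below uses that textbook model with made-up parameter values; it is not the paper's exact computation.

```python
import math

# Hedged sketch of the common first-order passive-sampling model (not the
# paper's exact method). A PRC with sampler-water partition coefficient Ksw
# and sampler volume Vs dissipates as f = exp(-Rs*t/(Ksw*Vs)); inverting
# the measured retained fraction f yields the in-situ sampling rate Rs.

def sampling_rate(f_retained, Ksw, Vs, t):
    """Sampling rate Rs (L/day) from the PRC fraction retained after t days."""
    return -math.log(f_retained) * Ksw * Vs / t

def twa_concentration(N, Rs, Ksw, Vs, t):
    """TWA water concentration from accumulated analyte mass N."""
    return N / (Ksw * Vs * (1.0 - math.exp(-Rs * t / (Ksw * Vs))))

# Hypothetical numbers: half the PRC lost over a 6-week (42-day) exposure.
Rs = sampling_rate(f_retained=0.5, Ksw=1e4, Vs=1e-3, t=42.0)
Cw = twa_concentration(N=50.0, Rs=Rs, Ksw=1e4, Vs=1e-3, t=42.0)
print(round(Rs, 3), round(Cw, 1))  # 0.165 10.0
```

Because Rs is estimated in situ, velocity effects on the boundary layer are absorbed into Rs, which is precisely why PRCs can correct for flow differences between deployment sites.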
Abstract:
Vertebral fracture assessments (VFAs) using dual-energy X-ray absorptiometry increase vertebral fracture detection in clinical practice and are highly reproducible. Measures of reproducibility are dependent on the frequency and distribution of the event. The aim of this study was to compare 2 reproducibility measures, reliability and agreement, in VFA readings in both a population-based and a clinical cohort. We measured agreement and reliability by uniform kappa and Cohen's kappa for vertebral reading and fracture identification: 360 VFAs from a population-based cohort and 85 from a clinical cohort. In the population-based cohort, 12% of vertebrae were unreadable. Vertebral fracture prevalence ranged from 3% to 4%. Inter-reader and intrareader reliability with Cohen's kappa was fair to good (0.35-0.71 and 0.36-0.74, respectively), with good inter-reader and intrareader agreement by uniform kappa (0.74-0.98 and 0.76-0.99, respectively). In the clinical cohort, 15% of vertebrae were unreadable, and vertebral fracture prevalence ranged from 7.6% to 8.1%. Inter-reader reliability was moderate to good (0.43-0.71), and the agreement was good (0.68-0.91). In clinical situations, the levels of reproducibility measured by the 2 kappa statistics are concordant, so that either could be used to measure agreement and reliability. However, if events are rare, as in a population-based cohort, we recommend evaluating reproducibility using the uniform kappa, as Cohen's kappa may be less accurate.
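The two chance-corrected statistics compared in this abstract differ only in how chance agreement is estimated: Cohen's kappa derives it from each reader's marginal rates, while a uniform kappa replaces it with 1/k for k categories. The minimal sketch below (a common formulation, stated here as an assumption rather than the paper's exact definition) shows why Cohen's kappa collapses when one category, fracture, is rare.

```python
from collections import Counter

# Cohen's kappa vs. a "uniform" kappa (chance agreement fixed at 1/k).
# The uniform form is one common formulation, assumed here for illustration.

def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    m1, m2 = Counter(r1), Counter(r2)
    pe = sum(m1[c] * m2[c] for c in set(r1) | set(r2)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

def uniform_kappa(r1, r2, k=2):
    po = sum(a == b for a, b in zip(r1, r2)) / len(r1)
    pe = 1 / k                                            # uniform chance
    return (po - pe) / (1 - pe)

# Rare-event example: readers agree on 9 of 10 vertebrae, fractures rare.
r1 = ["normal"] * 9 + ["fracture"]
r2 = ["normal"] * 10
print(round(cohens_kappa(r1, r2), 2), round(uniform_kappa(r1, r2), 2))  # 0.0 0.8
```

With 90% raw agreement, Cohen's kappa is 0 because the marginals make chance agreement equally high, while the uniform kappa remains 0.8; this is the rare-event distortion that motivates the abstract's recommendation.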
Abstract:
Performance and load testing of applications is nowadays a very important part of the production process, and web applications are tested more and more. The need for performance and load testing is clear: correctly executed tests, and the corrective actions that follow them, guarantee that the environment under test works both now and in the future. Manually testing large numbers of users is, however, very difficult, and testing a fragmented environment, such as a service-based web application environment, is a challenge. The topic of this work is to evaluate tools and methods for testing heavy industrial web applications. The goal is to find testing methods that can reliably simulate large numbers of users, and also to evaluate the effect of different connections and protocols on the performance of a web application.
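The core mechanism of the user-simulation approach described here can be sketched with a thread pool standing in for concurrent virtual users. The request function below is a deliberate stub (a short sleep instead of a real HTTP round trip); a real load test would call the system under test and would typically use a dedicated tool rather than this minimal harness.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

# Minimal load-simulation sketch: n_users concurrent workers issue
# n_requests "requests" and latencies are aggregated into percentiles.
# fake_request is a stub; replace it with a real HTTP call in practice.

def fake_request(i):
    start = time.perf_counter()
    time.sleep(0.001)           # stand-in for a real request round trip
    return time.perf_counter() - start

def run_load(n_users, n_requests):
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        latencies = list(pool.map(fake_request, range(n_requests)))
    return {
        "median_s": statistics.median(latencies),
        "p95_s": statistics.quantiles(latencies, n=20)[-1],  # ~95th percentile
        "count": len(latencies),
    }

report = run_load(n_users=10, n_requests=50)
print(report["count"])  # 50
```

Reporting percentiles rather than averages matters in load testing: the tail latency under many concurrent users is usually what degrades first.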
Abstract:
INTRODUCTION: To describe patients admitted to a geriatric institution providing short-term hospitalizations in the context of ambulatory care in the canton of Geneva (Switzerland), and to measure the performance of this structure in terms of quality of care and costs.
METHOD: Data related to the clinical, functioning and participation profiles of the first 100 patients were collected. Data related to effects (readmissions, deaths, satisfaction, complications), services and resources were also documented over an 8-month period to measure various quality and cost indicators. Observed values were systematically compared to expected values, adjusted for case mix. RESULTS: Explicit admission criteria were proposed to focus on suitable patients, excluding situations in which other structures were considered more appropriate. The specificity of this intermediate structure was to ensure continuity of care and to organize, from the outset, outpatient services at home upon discharge. The low rate of potentially avoidable readmissions, the high patient satisfaction scores, the absence of premature deaths and the low number of iatrogenic complications suggest that the medical and nursing care delivered was of good quality. The cost was significantly lower than that of hospital stays after adjusting for case mix. CONCLUSION: The pilot experience showed that a short-stay hospitalization unit is feasible under acceptable safety conditions. The attending physician's knowledge of the patients ensures continuity of care, avoids loss of information at transitions, and allows the system to focus on essential issues without proposing inappropriate services or irrelevant examinations.
Abstract:
In this paper we show how a nonlinear preprocessing of highly noisy speech signals based on morphological filters improves the performance of robust algorithms for pitch tracking (RAPT). This result is obtained with a very simple morphological filter; more sophisticated ones could improve the results even further. Mathematical morphology is widely used in image processing and has a wide range of applications. Almost all of its formulations, derived in the two-dimensional framework, are easily reformulated for the one-dimensional context.
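One-dimensional mathematical morphology of the kind referred to here reduces to sliding-window minima and maxima. The sketch below implements erosion and dilation with a flat structuring element and combines them into an opening, which suppresses narrow positive spikes (impulsive noise) while preserving the broader envelope a pitch tracker relies on. It is a minimal illustration, not the paper's particular filter.

```python
# Flat-structuring-element morphology on a 1-D sequence. The opening
# (erosion followed by dilation) removes features narrower than the
# element while leaving wider structures intact.

def erode(x, k):
    half = k // 2
    return [min(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def dilate(x, k):
    half = k // 2
    return [max(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def opening(x, k):
    return dilate(erode(x, k), k)

signal = [0, 0, 9, 0, 0, 3, 3, 3, 3, 0]   # one-sample spike + a wide pulse
print(opening(signal, 3))  # [0, 0, 0, 0, 0, 3, 3, 3, 3, 0]
```

Note the asymmetry of the operation: the isolated spike of height 9 is removed entirely, while the four-sample pulse survives unchanged, which is exactly the nonlinearity that distinguishes morphological filtering from linear smoothing.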
Abstract:
Many strategies for treating diseases require the delivery of drugs into the cell cytoplasm following internalization within endosomal vesicles. Thus, compounds triggered by low pH to disrupt membranes and release endosomal contents into the cytosol are of particular interest. Cationic nanovesicles have attracted considerable interest as effective carriers to improve the delivery of biologically active molecules into and through the skin. In this study, lipid-based nanovesicles containing three different cationic lysine-based surfactants were designed for topical administration. We used representative skin cell lines and in vitro assays to assess whether the cationic compounds modulate the toxic responses of these nanocarriers. The nanovesicles were characterized in both water and cell culture medium. In general, significant agglomeration occurred after 24 h incubation under cell culture conditions. We found different cytotoxic responses among the formulations, which depended on the surfactant, cell line (3T3, HaCaT, and THP-1) and endpoint assayed (MTT, NRU, and LDH). Moreover, no potential phototoxicity was detected in fibroblast or keratinocyte cells, whereas only a slight inflammatory response was induced, as detected by IL-1α and IL-8 production in the HaCaT and THP-1 cell lines, respectively. A key finding of our research was that the cationic charge position and the alkyl chain length of the surfactants determine the resulting toxicity of the nanovesicles. The charge on the α-amino group of lysine increased the depletion of cell metabolic activity, as determined by the MTT assay, while higher hydrophobicity tended to enhance the toxic responses of the nanovesicles. The insights provided here using different cell lines and assays offer a comprehensive toxicological evaluation of this group of new nanomaterials.
Abstract:
A new family of distortion risk measures, GlueVaR, is proposed in Belles-Sampera et al. (2013) to provide a risk assessment lying between those provided by common quantile-based risk measures. GlueVaR risk measures may be expressed as a combination of these standard risk measures. We show here that this relationship may be used to obtain approximations of GlueVaR measures for general skewed distribution functions using the Cornish-Fisher expansion. A subfamily of GlueVaR measures satisfies the tail-subadditivity property. An example of risk measurement based on real insurance claim data is presented, in which the implications of tail-subadditivity in the aggregation of risks are illustrated.
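The "combination of standard risk measures" mentioned above can be written as a weighted sum of VaR and TVaR at two confidence levels, GlueVaR = w1·TVaR_β + w2·TVaR_α + w3·VaR_α with w3 = 1 − w1 − w2. The sketch below uses simple empirical estimators on a toy loss sample; the weights and confidence levels are illustrative, not taken from the paper.

```python
import math

# Empirical VaR/TVaR and their GlueVaR combination. The specific weights
# (w1, w2) and levels (alpha, beta) below are hypothetical.

def var(losses, level):
    """Empirical Value-at-Risk: the level-quantile of the loss sample."""
    xs = sorted(losses)
    return xs[min(len(xs) - 1, math.ceil(level * len(xs)) - 1)]

def tvar(losses, level):
    """Empirical Tail Value-at-Risk: mean loss at or above VaR."""
    q = var(losses, level)
    tail = [x for x in losses if x >= q]
    return sum(tail) / len(tail)

def gluevar(losses, alpha, beta, w1, w2):
    w3 = 1.0 - w1 - w2
    return (w1 * tvar(losses, beta) + w2 * tvar(losses, alpha)
            + w3 * var(losses, alpha))

losses = list(range(1, 101))                     # toy loss sample: 1..100
print(gluevar(losses, alpha=0.95, beta=0.99, w1=0.3, w2=0.3))
```

Because the result interpolates between VaR_α and TVaR_β, tuning the weights moves the measure continuously between a pure quantile view and a tail-expectation view of risk.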
Abstract:
OBJECTIVE: To review and update the conceptual framework, indicator content and research priorities of the Organisation for Economic Co-operation and Development's (OECD) Health Care Quality Indicators (HCQI) project, after a decade of collaborative work. DESIGN: A structured assessment was carried out using a modified Delphi approach, followed by a consensus meeting, to assess the suite of HCQI for international comparisons, agree on revisions to the original framework and set priorities for research and development. SETTING: International group of countries participating in OECD projects. PARTICIPANTS: Members of the OECD HCQI expert group. RESULTS: A reference matrix, based on a revised performance framework, was used to map and assess all seventy HCQI routinely calculated by the OECD expert group. A total of 21 indicators were agreed to be excluded, due to the following concerns: (i) relevance, (ii) international comparability, particularly where heterogeneous coding practices might induce bias, (iii) feasibility, when the number of countries able to report was limited and the added value did not justify sustained effort, and (iv) actionability, for indicators that were unlikely to improve on the basis of targeted policy interventions. CONCLUSIONS: The revised OECD framework for HCQI represents a new milestone in a long-standing international collaboration among a group of countries committed to building common ground for performance measurement. The expert group believes that the continuation of this work is paramount to provide decision makers with a validated toolbox to act directly on quality improvement strategies.
Abstract:
Performance standards for positron emission tomography (PET) were developed to make it possible to compare systems from different generations and manufacturers. This resulted in the NEMA methodology in North America and the IEC methodology in Europe. In practice, NEMA NU 2-2001 is the method of choice today. These standardized methods allow assessment of the physical performance of new commercial dedicated PET/CT tomographs. The point spread in image formation is one of the factors that blur the image; the phenomenon is often called the partial volume effect. Several methods for correcting for partial volume are under research, but no real agreement exists on how to solve it. The influence of the effect varies in different clinical settings, and it is likely that new methods are needed to solve this problem. Most clinical PET work is done in the field of oncology, where whole-body PET combined with CT is the standard investigation today. Despite the progress in PET imaging techniques, visualization, and especially quantification, of small lesions remains a challenge. In addition to partial volume, movement of the object is a significant source of error. The main causes of movement are respiratory and cardiac motion. Most new commercial scanners are capable of respiratory gating in addition to cardiac gating, and this technique has been used in patients with cancer of the thoracic region and in patients studied for radiation therapy planning. For routine cardiac applications such as assessment of viability and perfusion, only cardiac gating has been used. However, new targets such as plaque imaging or molecular imaging of new therapies require better control of the cardiac motion that is also caused by respiratory motion. To overcome these problems in cardiac work, a dual gating approach has been proposed.
In this study we investigated the physical performance of a new whole-body PET/CT scanner using the NEMA standard, compared methods for partial volume correction in PET studies of the brain, and developed and tested a new robust method for dual cardiac-respiratory gated PET with phantom, animal and human data. Results from the performance measurements showed the feasibility of the new scanner design in 2D and 3D whole-body studies. Partial volume was corrected, but no single best method emerged among those tested, as the correction also depends on the radiotracer and its distribution; new methods need to be developed for proper correction. The dual gating algorithm developed is shown to handle dual-gated data, preserving quantification and clearly eliminating the majority of contraction and respiration movement.
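The basic bookkeeping behind dual cardiac-respiratory gating can be sketched as a two-dimensional binning of events: each event gets a cardiac gate from its phase within the current R-R interval and a respiratory gate from its phase within the breathing cycle. This is a generic illustration of the concept, not the robust algorithm developed in the study; gate counts and phase extraction are assumptions.

```python
# Generic dual-gating sketch: accumulate events on a
# (cardiac gate x respiratory gate) grid. Phases are assumed to be
# pre-computed in [0, 1) from ECG and respiration signals.

def dual_gate(events, n_cardiac=8, n_resp=4):
    """events: iterable of (time, cardiac_phase, resp_phase) tuples."""
    grid = [[0] * n_resp for _ in range(n_cardiac)]
    for _t, c_phase, r_phase in events:
        ci = min(int(c_phase * n_cardiac), n_cardiac - 1)
        ri = min(int(r_phase * n_resp), n_resp - 1)
        grid[ci][ri] += 1
    return grid

events = [(0.0, 0.05, 0.1), (0.4, 0.05, 0.9), (0.8, 0.95, 0.1)]
grid = dual_gate(events)
print(grid[0][0], grid[0][3], grid[7][0])  # 1 1 1
```

The trade-off this makes explicit: with 8 x 4 gates each bin receives only ~1/32 of the counts, so finer motion control is paid for in per-gate statistics, which is why quantification-preserving gating algorithms are non-trivial.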
High-Performance-Tensile-Strength Alpha-Grass Reinforced Starch-Based Fully Biodegradable Composites
Abstract:
Though there has been a great deal of work on natural fibers as reinforcement in starch-based composites, there is still more to be done. In general, cellulose fibers have lower strength than glass fibers; however, their specific strength is not far from that of fiberglass. In this work, alpha-fibers were obtained from alpha-grass through a mild cooking process. The fibers were used to reinforce a starch-based biopolymer. Composites including 5 to 35% (w/w) alpha-grass fibers in their formulation were prepared, tested, and subsequently compared with wood- and fiberglass-reinforced polypropylene (PP). The term “high-performance” refers to the tensile strength of the studied composites and is mainly due to a good interphase, a good dispersion of the fibers inside the matrix, and a good aspect ratio. The tensile strength of the composites showed a linear evolution for fiber contents up to 35% (w/w). The strain at break of the composites decreased with fiber content, showing the stiffening effect of the reinforcement. The prepared composites showed high mechanical properties, even approaching those of glass-fiber-reinforced composites.
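The linear evolution of tensile strength with fiber content noted above is commonly described by a modified rule of mixtures; the sketch below uses that generic model with hypothetical strength values and coupling factor, purely to illustrate the linear relation, not the paper's fitted parameters.

```python
# Modified rule of mixtures (generic form, hypothetical numbers): the
# coupling factor fc lumps fiber orientation and interface efficiency
# into the fiber contribution.

def composite_strength(sigma_fiber, sigma_matrix, v_fiber, fc=0.2):
    """Tensile strength estimate, linear in the fiber volume fraction."""
    return fc * sigma_fiber * v_fiber + sigma_matrix * (1.0 - v_fiber)

for vf in (0.05, 0.20, 0.35):
    print(vf, composite_strength(sigma_fiber=500.0, sigma_matrix=30.0,
                                 v_fiber=vf))
```

In this form, a good interphase and good dispersion show up as a larger coupling factor, which is consistent with the abstract's attribution of "high performance" to those factors.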
Abstract:
This thesis concentrates on developing a practical local approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the nontrivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (α = 0.5) is the most accurate when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit load failure criterion.
Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and of the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been implemented in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
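The family of generalized mid-point algorithms discussed in this abstract can be sketched in a generic form (a standard formulation for such return-mapping schemes, not necessarily the thesis's exact seven-parameter expression): the plastic flow direction is evaluated at an intermediate state controlled by a parameter α ∈ [0, 1].

```latex
% Generic generalized mid-point return mapping; \alpha = 0 recovers Euler
% forward, \alpha = 1 Euler backward, and \alpha = 0.5 is the "true
% mid-point" rule found most accurate in the study.
\begin{aligned}
\boldsymbol{\sigma}_{n+\alpha} &= (1-\alpha)\,\boldsymbol{\sigma}_{n}
  + \alpha\,\boldsymbol{\sigma}_{n+1},\\
\Delta\boldsymbol{\varepsilon}^{p} &= \Delta\lambda\,
  \left.\frac{\partial \Phi}{\partial \boldsymbol{\sigma}}
  \right|_{\boldsymbol{\sigma}_{n+\alpha}},\\
\Phi\!\left(\boldsymbol{\sigma}_{n+1},\, f_{n+1}\right) &= 0,
\end{aligned}
```

where Φ denotes the Gurson-Tvergaard yield function and f the void volume fraction; enforcing the yield condition at the end of the step while evaluating the flow direction at the α-weighted state is what distinguishes the members of the family.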
Abstract:
A study was made to evaluate the effect of a castor oil-based detergent on strawberry crops treated with different classes of pesticides, namely deltamethrin, folpet, tebuconazole, abamectin and mancozeb, in a controlled environment. Experimental crops of greenhouse strawberries were cultivated in five different ways, with control groups, using pesticides and the castor oil-based detergent. The results showed that group 2, which was treated with the castor oil-based detergent, presented the lowest amount of pesticide residues and the highest quality of fruit produced.