943 results for TECUP - Test-bed implementation of the Ecup framework
Abstract:
This study aims to compare a psychological evaluation test to classical psychoanalysis in infertile women. Two hundred women were submitted to the Psychological Evaluation Test (PET). The sum of the scores for the responses ranged from 15 to 60 points, with scores ≥30 points being defined as 'psycho-emotional maladjustment' (cut-off point: median + 25%). For comparison, the patients were simultaneously submitted to a psychological examination by a psychologist who was unaware of the PET results. Of the 200 patients, 66 (33%) presented a test with ≥30 points ('psycho-emotional maladjustment') and 134 (67%) a test with <30 points (normal). Upon psychological examination, 105 (52.5%) presented an abnormal evaluation and 95 (47.5%) a normal evaluation. For the PET, statistical analysis showed 82% efficiency, 62% sensitivity, 98% positive predictive value, 99% specificity, 70% negative predictive value, a likelihood ratio of 62 for a positive test result, and a likelihood ratio of 0.38 for a negative test result. The PET proved to be a useful clinical instrument, helping to select patients with psychological needs induced by infertility.
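As a quick sanity check, the likelihood ratios reported above follow directly from the stated sensitivity and specificity. A minimal Python sketch (the function is generic; the input values are the ones quoted in the abstract):

```python
# Likelihood ratios of a binary diagnostic test from sensitivity and
# specificity: LR+ = sens / (1 - spec), LR- = (1 - sens) / spec.

def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Values reported for the PET:
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.62, specificity=0.99)
print(f"LR+ = {lr_pos:.0f}, LR- = {lr_neg:.2f}")  # LR+ = 62, LR- = 0.38
```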
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Includes bibliography
Abstract:
Valency is an inherent property of nominalizations representing higher-order entities, and as such it should be included in their underlying representation. On the basis of this assumption, I postulate that cases of non-overt arguments, which are very common in Brazilian Portuguese and in many other languages of the world, should be considered a special type of valency realization. This paper aims to give empirical support to this postulate by showing that non-overt arguments are both semantically and pragmatically motivated. The semantic and pragmatic motivations for non-overt arguments may be accounted for by the dynamic implementation of the FDG model. I argue that the way valency is realized by means of non-overt arguments suggests a strong parallelism between nominalizations and other types of non-finite embedded constructions – like infinitival and participial ones. By providing empirical evidence for this parallelism I arrive at the conclusion that there are at least three kinds of non-finite embedded constructions, rather than only two, as suggested by Dik (1997).
Abstract:
Pressure ulcers still represent a severe health problem, particularly in Intensive Care Units (ICUs). This study assesses the implementation of a protocol to prevent pressure ulcers in ICU inpatients. This prospective, descriptive and exploratory study verifies the incidence of pressure ulcers following the implementation of a prevention protocol. Data were collected from April 17 to July 15, 2009. The incidence observed in this study (23.1%) was below that reported in a similar study developed in the same institution (41.02%) before the implementation of the protocols to assess risk and prevent pressure ulcers. Prevention protocols are essential tools that have an impact on controlling the incidence of pressure ulcers when used consistently.
Abstract:
Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) with respect to a goal and to provide results such as "the goal is derivable from the KB (of the theory)". In order to achieve this we need to analyse different logics and choose the one that best meets our needs. In logic we usually try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory). However, when we deal with logic programming we need an efficient algorithm to find such implications. In this work we use a logic rather similar to human reasoning. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion from premises that are not definitely true and belong to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). These kinds of applications are useful in the legal domain, especially if they offer an implementation of an argumentation framework that provides a formal modelling of the game. Roughly speaking, if the theory is the set of laws, a key claim is the conclusion that one party wants to prove (and the other wants to defeat), and we add dynamic assertion of rules, namely facts put forward by the parties, then we can play an argumentative challenge between two players and decide whether the conclusion is provable or not, depending on the different strategies performed by the players. Implementing a game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a meta-level containing different meta-evaluators: the first has been explained above, the second is needed to perform the game model, and the last will be used to change game-execution and tree-derivation strategies.
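To make the rule-evaluation idea concrete, here is a minimal propositional sketch in the spirit of Nute's defeasible logic. The data structures, the naive fixpoint loop and the simplified defeat condition (no defeaters; strict rules beat defeasible ones) are our illustration, not the thesis implementation:

```python
# Minimal, illustrative propositional defeasible reasoning in the spirit
# of Nute's defeasible logic. Deliberately simplified; NOT the thesis code.

from dataclasses import dataclass

def neg(lit: str) -> str:
    """Complement of a literal; '~' marks negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

@dataclass(frozen=True)
class Rule:
    name: str
    body: tuple          # antecedent literals
    head: str            # consequent literal
    strict: bool = False

def applicable(rule: Rule, derived: set) -> bool:
    return all(b in derived for b in rule.body)

def defeasibly_provable(goal, facts, rules, superior):
    """`superior` is a set of (winner_name, loser_name) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r.head in derived or not applicable(r, derived):
                continue
            # every applicable attacking rule must be beaten by r
            attackers = [a for a in rules
                         if a.head == neg(r.head) and applicable(a, derived)]
            if all((r.name, a.name) in superior or r.strict and not a.strict
                   for a in attackers):
                derived.add(r.head)
                changed = True
    return goal in derived

# Tiny legal-flavoured example: rule r2 is declared superior to r1.
rules = [Rule("r1", ("evidence",), "~guilty"),
         Rule("r2", ("confession",), "guilty")]
print(defeasibly_provable("guilty", {"evidence", "confession"},
                          rules, {("r2", "r1")}))  # True
```

In the example both rules are applicable, but r2 is superior to r1, so "guilty" is defeasibly derived while "~guilty" is blocked.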
Abstract:
Among the scientific objectives addressed by the Radio Science Experiment hosted on board the ESA mission BepiColombo is the retrieval of the rotational state of planet Mercury. The estimation of the obliquity and the libration amplitude has proven fundamental for constraining the interior composition of Mercury. This is accomplished by the Mercury Orbiter Radio science Experiment (MORE) via a close interaction among different payloads, which makes the experiment particularly challenging. The underlying idea consists in capturing images of the same landmark on the surface of the planet at different epochs in order to observe the displacement of the identified features with respect to a nominal rotation, which allows the rotational parameters to be estimated. Observations must be planned accurately in order to obtain image pairs carrying the highest information content for the subsequent estimation process. This is not a trivial task, especially in light of the several dynamical constraints involved. Another delicate issue is the pattern matching between image pairs, for which the lowest correlation errors are desired. The research activity was conducted in the frame of the MORE rotation experiment and addressed the design and implementation of an end-to-end simulator of the experiment, with the final objective of establishing an optimal science planning of the observations. The thesis illustrates the implementation of the individual modules forming the simulator, along with the simulations performed. Results obtained from a preliminary release of the optimization algorithm are presented; the software will be improved and refined in the future, also taking into account the developments of the mission.
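The pattern matching between image pairs is typically based on maximizing a correlation measure; the sketch below uses plain normalized cross-correlation over a search window. It is a generic illustration under that assumption, not code from the MORE simulator:

```python
# Illustrative landmark matching by normalized cross-correlation (NCC).
# Generic sketch; not the MORE end-to-end simulator.

import numpy as np

def ncc_match(search: np.ndarray, template: np.ndarray):
    """Return ((row, col), score) of the best template match in `search`."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    best, best_rc = -np.inf, (0, 0)
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            w = search[r:r + th, c:c + tw]
            std = w.std()
            if std == 0:
                continue                      # skip flat windows
            score = np.mean(((w - w.mean()) / std) * t)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

# Synthetic test: embed a template at a known offset and recover it.
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
tpl = img[20:28, 31:39].copy()
print(ncc_match(img, tpl))  # ((20, 31), ~1.0)
```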
Abstract:
The use of Penning traps in mass spectrometry has led to a unique leap in accuracy, and mass values of a wide variety of atoms have thereby become important input parameters for an ever-growing number of physics questions. Penning-trap mass spectrometry is based on determining the free cyclotron frequency of an ion in a homogeneous magnetic field, νc = qB/(2πm). It is measured with the time-of-flight method (TOF-ICR), reaching a relative mass uncertainty δm/m of a few 10^-9 for nuclides with lifetimes of <500 ms. This became possible through the Ramsey method, applied for the first time to Penning-trap mass spectrometry within this work: temporally separated oscillating fields are used for resonant ion excitation, improving the frequency measurement obtained by the time-of-flight method. With this technique, the masses of the nuclides 26,27Al and 38,39Ca were determined at the Penning-trap mass spectrometer ISOLTRAP at ISOLDE/CERN. All masses were embedded in the Atomic Mass Evaluation. The mass values of 26Al and 38Ca served in particular as tests of the Standard Model. In order to use mass values to test fundamental symmetries or quantum electrodynamics (QED) in extreme fields, a new Penning-trap project (PENTATRAP) for high-precision mass measurements on highly charged ions was designed. This doctoral thesis primarily pursued the development of the Penning traps. A novelty in Penning-trap experiments is the permanent monitoring of the magnetic field B and its temporal fluctuations by means of so-called "monitor traps".
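The central relation νc = qB/(2πm) is simple to evaluate numerically; a small Python sketch (the 5.9 T field and the ion values chosen are illustrative assumptions, not numbers from the thesis):

```python
# Free cyclotron frequency of an ion in a homogeneous magnetic field:
#   nu_c = q * B / (2 * pi * m)

import math

E_CHARGE = 1.602176634e-19     # C, elementary charge (exact, SI 2019)
U_MASS = 1.66053906660e-27     # kg, atomic mass constant

def cyclotron_frequency(q_e: float, mass_u: float, b_tesla: float) -> float:
    """nu_c in Hz for charge q_e (in elementary charges) and mass in u."""
    return q_e * E_CHARGE * b_tesla / (2.0 * math.pi * mass_u * U_MASS)

# Illustrative: a singly charged ion of ~39 u in a 5.9 T field.
print(f"{cyclotron_frequency(1, 38.97, 5.9) / 1e6:.3f} MHz")  # ~2.3 MHz
```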
Abstract:
This research seeks to review the level of knowledge achieved in interpreting the relationship between ethnic diversity at the workplace in the public sector and organizational performance, as well as to contribute to understanding the implications of this relationship. The study commenced by investigating the academic research in the relevant area, addressing the following research questions: (a) How are diversity management and organizational performance conceptualized? (b) What are the existing findings of research concerning diversity at the workplace in public organizations and organizational performance? (c) What factors intervene in the relationship between diversity and organizational performance? Based on the findings from the review of the academic research, this study seeks to contribute to understanding the ethnic diversity-performance relationship and its implications at the local level in the Macedonian context. In Macedonia, a multicultural society where inter-ethnic relations have for many years been one of the most sensitive political issues, affecting both the stability of the country and its progress, the reform process focused mainly on the implementation of decentralization and the inclusion of ethnic minorities in the decision-making process. With the implementation of the Ohrid Framework Agreement, the workforce at the units of local self-government in the Republic of Macedonia is becoming more balanced with respect to ethnic minorities, with more workforce participation than ever by Albanians, Turks, Roma and other minorities. As public organizations at the local level become more diverse along ethnic lines, it makes sense to pay more attention to how different ethnic groups interact with one another at work. This adds importance to the research questions addressed in the study and gives the research significance in a broader scope.
Abstract:
KIVA is a FORTRAN code developed by Los Alamos National Laboratory to simulate complete engine cycles. It is a flow solver used to compute properties of a fluid flow field, employing various numerical schemes and methods to solve the Navier-Stokes equations. This project involves improving the accuracy of one such scheme by upgrading it to a higher-order scheme. The numerical scheme to be modified is used in the critical final-stage calculation called the rezoning phase. The primary objective of this project is to implement a higher-order numerical scheme and to validate and verify that the new scheme is better than the existing one. The latest version of the KIVA family (KIVA 4) is used for implementing the higher-order scheme because it supports unstructured meshes. The code is validated using the traditional shock tube problem, and the results are verified to be more accurate than the existing schemes with reference to the analytical result. A convection test is performed to compare computational accuracy on convective transfer; the new scheme shows less numerical diffusion than the existing schemes. A four-valve pentroof engine, an example case from the KIVA package, is used as an application to ensure the stability of the scheme in a practical setting; the results are compared on the temperature profile. Despite these positive results, the implemented scheme has the downside of consuming more CPU time in the computational analysis; a detailed comparison is provided. Overall, the implementation of the higher-order scheme in the latest code KIVA 4 is verified to be successful, and it gives better results than the existing scheme, which satisfies the objective of this project.
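The kind of convection test described, comparing numerical diffusion across schemes, can be reproduced in miniature on 1-D linear advection. The sketch below contrasts first-order upwind with the second-order Lax-Wendroff scheme; it is a generic demonstration of the phenomenon, not KIVA code, and Lax-Wendroff merely stands in for "a higher-order scheme":

```python
# 1-D linear advection u_t + a*u_x = 0 with periodic boundaries:
# first-order upwind vs. second-order Lax-Wendroff, to illustrate
# numerical diffusion. Generic demo, not KIVA code.

import numpy as np

def advect(u0: np.ndarray, c: float, steps: int, scheme: str) -> np.ndarray:
    """Advance `steps` time steps at CFL number c (0 < c <= 1)."""
    u = u0.copy()
    for _ in range(steps):
        um, up = np.roll(u, 1), np.roll(u, -1)   # u[j-1], u[j+1]
        if scheme == "upwind":
            u = u - c * (u - um)
        else:  # Lax-Wendroff
            u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
    return u

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where(np.abs(x - 0.25) < 0.02, 1.0, 0.0)   # narrow square pulse
for scheme in ("upwind", "lax-wendroff"):
    u = advect(u0, c=0.5, steps=200, scheme=scheme)
    print(f"{scheme}: peak = {u.max():.2f}")
```

Running it shows the upwind pulse strongly smeared (peak well below 1.0), while the second-order result stays sharper at the price of some dispersive ripples near the discontinuities.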
Abstract:
An agency is accountable to a legislative body in the implementation of public policy. It has a responsibility to ensure that the implementation of that policy is consistent with its statutory objectives. The analysis of the effectiveness of implementation of the Vendor Drug Program proceeded in the following manner. The federal and state roles and statutes pursuant to the formulation of the Vendor Drug Program were reviewed to determine statutory intent and formal provisions. The translation of these into programmatic details was examined, focusing on the factors impacting the implementation process. Lastly, the six conditions outlined by Mazmanian and Sabatier as criteria for effective implementation were applied to the implementation of the Vendor Drug Program to determine whether it was effective, that is, consistent with statutory objectives. The implementation of the statutes clearly met four of the six conditions for effective implementation: (1) clear and consistent objectives; (2) a valid causal theory; (3) a process structured to maximize agency and target compliance with the objectives; and (4) continued support of constituency groups and sovereigns. The implementation was basically consistent with the statutory objectives, although the determination of vendor reimbursement has had, and continues to have, problems.
Abstract:
The evolution of pharmaceutical care is identified through a complete review of the literature published in the American Journal of Health-System Pharmacy, the sole comprehensive publication of institutional pharmacy practice. The evolution is categorized according to characteristics of structure (organizational structure, the role of the pharmacist), process (drug delivery systems, formulary management, acquiring drug products, methods to impact drug therapy decisions), and outcomes (cost of drug delivery, cost of drug acquisition and use, improved safety, improved health outcomes) recorded from the 1950s through the 1990s. While significant progress has been made in implementing basic drug distribution systems, levels of pharmacy involvement in direct patient care remain limited. A new practice framework suggests enhanced direct patient care involvement through increases in the efficiency and effectiveness of traditional pharmacy services. Recommendations advance internal and external organizational structure relationships that position pharmacists to fully use their unique skills and knowledge to impact drug therapy decisions and outcomes. Specific strategies facilitate expansion of the breadth and scope of each process component in order to expand the depth of integration of pharmacy and pharmaceutical care within the broad healthcare environment. Economic evaluation methods formally evaluate the impact of both operational and clinical interventions. Outcome measurements include specific recommendations and methods to increase the efficiency of drug acquisition, emphasizing pharmacists' roles that impact physician prescribing decisions. Effectiveness measures include those that improve the safety of drug distribution systems, decrease the potential for adverse drug therapy events, and demonstrate that pharmaceutical care can significantly contribute to improvement in overall health status. The implementation of the new framework is modeled on a case study at the M.D. Anderson Cancer Center. The implementation of several new drug distribution methods facilitated the redeployment of personnel from distributive functions to direct patient care activities, with significant personnel and drug cost reductions. A cost-benefit analysis illustrates that the framework process enhancements produced a benefit-to-cost ratio of 7.9. In addition, measures of effectiveness demonstrated significant levels of safety and enhanced drug therapy outcomes.
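The benefit-to-cost ratio quoted above is a plain ratio of totals; a trivial sketch (the dollar figures are hypothetical, chosen only to reproduce the reported 7.9):

```python
# Benefit-to-cost ratio: total benefits divided by total costs.
# Illustrative numbers only; the abstract reports a ratio of 7.9.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    return total_benefits / total_costs

# Hypothetical figures: $790k in personnel and drug-cost savings
# against $100k of implementation cost yields the reported 7.9.
print(benefit_cost_ratio(790_000, 100_000))  # 7.9
```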
Abstract:
Health care providers face the problem of trying to make decisions with inadequate information and also with an overload of (often contradictory) information. Physicians often choose treatment long before they know which disease is present. Indeed, uncertainty is intrinsic to the practice of medicine. Decision analysis can help physicians structure and work through a medical decision problem, and can provide reassurance that decisions are rational and consistent with the beliefs and preferences of other physicians and patients. The primary purpose of this research project is to develop the theory, methods, techniques and tools necessary for designing and implementing a system to support solving medical decision problems. A case study involving "abdominal pain" serves as a prototype for implementing the system. The research, however, focuses on a generic class of problems and aims at covering theoretical as well as practical aspects of the system developed. The main contributions of this research are: (1) bridging the gap between the statistical approach and the knowledge-based (expert) approach to medical decision making; (2) linking a collection of methods, techniques and tools together to allow for the design of a medical decision support system, based on a framework that involves the Analytic Network Process (ANP), the generalization of the Analytic Hierarchy Process (AHP) to dependence and feedback, for problems involving diagnosis and treatment; (3) enhancing the representation and manipulation of uncertainty in the ANP framework by incorporating group consensus weights; and (4) developing a computer program to assist in the implementation of the system.
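The ANP referred to in contribution (2) generalizes the AHP, whose basic computation derives priority weights from a pairwise-comparison matrix as its normalized principal eigenvector. A compact, generic sketch (the 3x3 matrix is a made-up example, not data from the dissertation):

```python
# Priority weights from a pairwise-comparison matrix via the principal
# eigenvector: the basic computation underlying AHP (and, per component,
# the ANP supermatrix). Example matrix is illustrative only.

import numpy as np

def ahp_weights(A: np.ndarray) -> np.ndarray:
    """Normalized principal right eigenvector of a reciprocal matrix."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()

def consistency_ratio(A: np.ndarray) -> float:
    """Saaty consistency ratio, here hard-coded for n = 3 (RI = 0.58)."""
    n = A.shape[0]
    lam = np.max(np.linalg.eigvals(A).real)
    return ((lam - n) / (n - 1)) / 0.58

# Hypothetical comparison of three diagnoses given a symptom:
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A).round(3))         # ~[0.648 0.230 0.122]
print(round(consistency_ratio(A), 3))  # well below the 0.1 threshold
```

In the full ANP, such local priority vectors are assembled into a supermatrix whose limit powers yield the global priorities across interdependent clusters.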
Abstract:
Objective: Since 2011, the new national final examination in human medicine has been implemented in Switzerland, with a structured clinical-practical part in the OSCE format. From the perspective of the national Working Group, the current article describes the essential steps in the development, implementation and evaluation of the Federal Licensing Examination Clinical Skills (FLE CS) as well as the applied quality assurance measures. Finally, central insights gained from the last years are presented. Methods: Based on the principles of action research, the FLE CS is in a constant state of further development. On the foundation of systematically documented experiences from previous years, in the Working Group, unresolved questions are discussed and resulting solution approaches are substantiated (planning), implemented in the examination (implementation) and subsequently evaluated (reflection). The presented results are the product of this iterative procedure. Results: The FLE CS is created by experts from all faculties and subject areas in a multistage process. The examination is administered in German and French on a decentralised basis and consists of twelve interdisciplinary stations per candidate. As important quality assurance measures, the national Review Board (content validation) and the meetings of the standardised patient trainers (standardisation) have proven worthwhile. The statistical analyses show good measurement reliability and support the construct validity of the examination. Among the central insights of the past years, it has been established that the consistent implementation of the principles of action research contributes to the successful further development of the examination. Conclusion: The centrally coordinated, collaborative-iterative process, incorporating experts from all faculties, makes a fundamental contribution to the quality of the FLE CS. The processes and insights presented here can be useful for others planning a similar undertaking. Keywords: national final examination, licensing examination, summative assessment, OSCE, action research
Abstract:
Introduction: The Texas Occupational Safety & Health Surveillance System (TOSHSS) was created to collect, analyze and interpret occupational injury and illness data in order to decrease the impact of occupational injuries within the state of Texas. This process evaluation was performed midway through the 4-year grant to assess the efficiency and effectiveness of the surveillance system's planning and implementation activities.[1] Methods: Two evaluation guidelines published by the Centers for Disease Control and Prevention (CDC) were used as the theoretical models for this process evaluation. The Framework for Program Evaluation in Public Health was used to examine the planning and design of TOSHSS using logic models. The Framework for Evaluating Public Health Surveillance Systems was used to examine the implementation of approximately 60 surveillance activities, including uses of the data obtained from the surveillance system. Results/Discussion: TOSHSS planning activities omitted the creation of a scientific advisory committee and specific activities designed to maintain contacts with stakeholders; proposed activities should be reassessed and aligned with ongoing performance measurement criteria, including the role of collaborators in helping the surveillance system achieve each proposed activity. TOSHSS implementation activities are substantially meeting expectations, receiving an overall score of 61% for all activities being performed. TOSHSS is considered a surveillance system that is simple, flexible, acceptable, fairly stable, timely and moderately useful, with good data quality and a predictive value positive (PVP) of 86%. Conclusions: Through its third year of implementation, TOSHSS has made a considerable contribution to the collection of occupational injury and illness information within the state of Texas. Implementation of the nine recommendations provided under this process evaluation is expected to increase the overall usefulness of the surveillance system and assist TDSHS in reducing occupational fatalities, injuries, and diseases within the state of Texas. [1] Disclaimer: The Texas Occupational Safety and Health Surveillance System is supported by Grant/Cooperative Agreement Number (U60 OH008473-01A1). The content of the current evaluation is solely the responsibility of the authors and does not necessarily represent the official views of the Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health.
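In the CDC surveillance-evaluation framework cited in the Methods, predictive value positive (PVP) is the proportion of reported cases that turn out to be true cases; a one-function sketch (the counts are hypothetical, chosen to match the reported 86%):

```python
# Predictive value positive (PVP) of a surveillance system:
# the share of reported cases that are confirmed true cases.
# Counts below are hypothetical; the abstract reports PVP = 86%.

def predictive_value_positive(true_cases: int, reported_cases: int) -> float:
    return true_cases / reported_cases

print(f"{predictive_value_positive(86, 100):.0%}")  # 86%
```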