990 results for Observational learning
Abstract:
OBJECTIVE To determine time standards for interventions and activities conducted by nursing professionals in Family Health Units (FHU) in Brazil, to substantiate workforce calculations. METHOD This was an observational study carried out in 27 FHU, in 12 municipalities across 10 states, in 2013. In each unit, nursing professionals were observed every 10 minutes, for eight work hours, on five consecutive days, via the work sampling technique. RESULTS A total of 32,613 observations were made, involving 47 nurses and 93 nursing technicians/assistants. Appointments were the main intervention carried out by nurses, with a mean time of 25.3 minutes, followed by record-keeping, which corresponded to 9.7% of their time. On average, nursing technicians/assistants spent 6.3% of their time keeping records, and their immunization/vaccination control interventions took a mean of 30.6 minutes. CONCLUSION The study produced standard times for the interventions carried out by the FHU nursing team, which can underpin the determination of nursing staff size and human resource policies. Furthermore, the study showed the panorama of interventions currently employed, allowing the work process to be reviewed and optimized.
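The work sampling arithmetic behind such standard times can be sketched in a few lines: the share of snapshot observations assigned to an activity, times the total observed time, divided by the number of interventions performed. This is a minimal illustration only; the function name, the snapshot encoding, and the example figures are hypothetical, not the study's data.

```python
from collections import Counter

def work_sampling_times(observations, total_minutes, counts):
    """Estimate mean time per intervention from work-sampling data.

    observations: one activity label per fixed-interval snapshot
                  (e.g. every 10 minutes)
    total_minutes: total observed working time, in minutes
    counts: activity -> number of interventions actually performed
    """
    freq = Counter(observations)
    n = len(observations)
    times = {}
    for activity, k in freq.items():
        proportion = k / n                       # share of snapshots
        activity_minutes = proportion * total_minutes
        if counts.get(activity):                 # only activities with a tally
            times[activity] = activity_minutes / counts[activity]
    return times

# Hypothetical day: 96 snapshots over 480 minutes, 8 appointments observed.
obs = ["appointment"] * 38 + ["records"] * 10 + ["other"] * 48
standard = work_sampling_times(obs, total_minutes=480, counts={"appointment": 8})
```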
Abstract:
The potential of type-2 fuzzy sets for managing high levels of uncertainty, whether in the subjective knowledge of experts or in numerical information, has in recent years drawn attention in control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership function (T2MF) and its Footprint of Uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs), with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the chromosome structure has fewer genes than in other GA methods, and chromosome initialization is more precise. The proposed approach addresses the application of the interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung Computer Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
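A common way to realize a Gaussian IT2MF is to make the mean uncertain over an interval [m1, m2], which induces an upper and a lower membership function bounding the FOU. The sketch below shows that standard construction; it is illustrative only, and the paper's exact parameterization (e.g. uncertain standard deviation instead of uncertain mean) may differ.

```python
import math

def gaussian(x, m, sigma):
    """Type-1 Gaussian membership value."""
    return math.exp(-((x - m) ** 2) / (2 * sigma ** 2))

def it2mf_gaussian(x, m1, m2, sigma):
    """Interval type-2 Gaussian MF with uncertain mean in [m1, m2].

    Returns (lower, upper): the two type-1 MFs bounding the FOU at x.
    """
    if x < m1:
        upper = gaussian(x, m1, sigma)   # left shoulder follows the left Gaussian
    elif x > m2:
        upper = gaussian(x, m2, sigma)   # right shoulder follows the right Gaussian
    else:
        upper = 1.0                      # plateau between the two means
    # The lower bound is the pointwise minimum of the two extreme Gaussians.
    lower = min(gaussian(x, m1, sigma), gaussian(x, m2, sigma))
    return lower, upper
```

A GA would then search over (m1, m2, sigma) per input dimension, scoring each chromosome by cross-validated classification accuracy.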
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learned such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds say nothing about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$ there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
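The two kinds of bound contrasted in this abstract can be written side by side; the constants $c$, $c'$ and the notation $\mathrm{err}$ are generic placeholders, not necessarily the paper's exact symbols.

```latex
% Classical minimax lower bound: for each sample size n separately,
% infimum over rules, supremum over (distribution, concept) pairs:
\inf_{g_n}\ \sup_{(X,\,C)}\ \mathbb{E}\!\left[\mathrm{err}(g_n)\right]
  \;\ge\; c\,\frac{V}{n}

% Strong version: for ANY fixed sequence of rules \{g_n\} there is one
% fixed pair (X, C) from a k-parameter class such that
\mathbb{E}\!\left[\mathrm{err}(g_n)\right]
  \;\ge\; c'\,\frac{k}{n}
  \qquad \text{for infinitely many } n .
```

The difference is the order of quantifiers: the classical bound lets the hard pair $(X, C)$ change with $n$, while the strong bound pins down one pair that defeats the whole sequence of rules infinitely often.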
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and can alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
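The statistical learning algorithm in this literature is typically a recursive least squares update of beliefs. A minimal sketch, assuming a scalar linear perceived law of motion and a decreasing 1/t gain (the function name and the simulated process are illustrative, not this paper's model):

```python
import numpy as np

def rls_update(theta, R, x, y, gain):
    """One recursive-least-squares step: update the second-moment matrix R
    and the belief vector theta given this period's regressors x and outcome y."""
    R = R + gain * (np.outer(x, x) - R)
    theta = theta + gain * np.linalg.solve(R, x) * (y - x @ theta)
    return theta, R

# Agents learn the coefficient of a perceived law of motion y = b*x
# (true b = 2.0 here); with a 1/t gain the belief converges toward 2.0.
rng = np.random.default_rng(0)
theta, R = np.zeros(1), np.eye(1)
for t in range(1, 5001):
    x = np.array([rng.normal()])
    y = 2.0 * x[0] + 0.1 * rng.normal()
    theta, R = rls_update(theta, R, x, y, gain=1.0 / t)
```

In a two-sided setup, private agents and the central bank would each run such an update on their own (possibly different) data, and their actions would feed back into the data the other side observes, which is the self-referential feature that can break convergence.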
Abstract:
Much medical research is observational. The reporting of observational studies is often of insufficient quality. Poor reporting hampers the assessment of the strengths and weaknesses of a study and the generalisability of its results. Taking into account empirical evidence and theoretical considerations, a group of methodologists, researchers, and editors developed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations to improve the quality of reporting of observational studies. The STROBE Statement consists of a checklist of 22 items, which relate to the title, abstract, introduction, methods, results and discussion sections of articles. Eighteen items are common to cohort studies, case-control studies and cross-sectional studies and four are specific to each of the three study designs. The STROBE Statement provides guidance to authors about how to improve the reporting of observational studies and facilitates critical appraisal and interpretation of studies by reviewers, journal editors and readers. This explanatory and elaboration document is intended to enhance the use, understanding, and dissemination of the STROBE Statement. The meaning and rationale for each checklist item are presented. For each item, one or several published examples and, where possible, references to relevant empirical studies and methodological literature are provided. Examples of useful flow diagrams are also included. The STROBE Statement, this document, and the associated Web site (http://www.strobe-statement.org/) should be helpful resources to improve reporting of observational research.
Abstract:
Learning ability can be substantially improved by artificial selection in animals ranging from Drosophila to rats. Thus these species have not exhausted their evolutionary potential with respect to learning ability, despite the intuitively expected and experimentally demonstrated adaptive advantages of learning. This suggests that learning is costly, but this notion has rarely been tested. Here we report correlated responses of life-history traits to selection for improved learning in Drosophila melanogaster. Replicate populations selected for improved learning had lifespans on average 15% shorter than those of the corresponding unselected control populations. They also showed a minor reduction in fecundity late in life and possibly a minor increase in dry adult mass. Selection for improved learning had no effect on egg-to-adult viability, development rate, or desiccation resistance. Because shortened longevity was the strongest correlated response to selection for improved learning, we also measured learning ability in another set of replicate populations that had been selected for extended longevity. In a classical olfactory conditioning assay, these long-lived flies showed an almost 40% reduction in learning ability early in life. This effect disappeared with age. Our results suggest a symmetrical evolutionary trade-off between learning ability and longevity in Drosophila.
Abstract:
This paper fills a gap in the existing literature on least squares learning in linear rational expectations models by studying a setup in which agents learn by fitting ARMA models to a subset of the state variables. This is a natural specification in models with private information because, in the presence of hidden state variables, agents have an incentive to condition forecasts on the infinite past record of observables. We study a particular setting in which it suffices for agents to fit a first-order ARMA process, which preserves the tractability of a finite-dimensional parameterization while permitting conditioning on the infinite past record. We describe how previous results (Marcet and Sargent [1989a, 1989b]) can be adapted to handle the convergence of estimators of an ARMA process in our self-referential environment. We also study "rates" of convergence analytically and via computer simulation.
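One standard way to fit a first-order ARMA recursively is extended (pseudo-linear) least squares, where the estimated residual stands in for the unobserved innovation; through that MA term the forecast effectively conditions on the whole past record of observables. A minimal sketch under that assumption (the paper's exact algorithm and notation may differ):

```python
import numpy as np

def els_arma11(y):
    """Extended least squares for y_t = a*y_{t-1} + e_t + b*e_{t-1}.

    The running residual `e_prev` proxies the unobserved innovation,
    turning the ARMA(1,1) fit into a recursive linear regression.
    Returns the estimated [a, b]."""
    theta = np.zeros(2)                  # beliefs [a, b]
    R = np.eye(2)                        # regressor second-moment matrix
    y_prev, e_prev = 0.0, 0.0
    for t, yt in enumerate(y, start=1):
        x = np.array([y_prev, e_prev])
        err = yt - x @ theta             # one-step-ahead innovation estimate
        gain = 1.0 / (t + 10)            # decreasing gain; offset keeps R
                                         # well-conditioned in early periods
        R = R + gain * (np.outer(x, x) - R)
        theta = theta + gain * np.linalg.solve(R, x) * err
        y_prev, e_prev = yt, err
    return theta

# Simulate an ARMA(1,1) with a = 0.5, b = 0.3 and recover the parameters.
rng = np.random.default_rng(1)
e = rng.normal(size=20000)
y = np.zeros(20000)
for t in range(1, 20000):
    y[t] = 0.5 * y[t - 1] + e[t] + 0.3 * e[t - 1]
a_hat, b_hat = els_arma11(y)
```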
Abstract:
ROTEM is a point-of-care coagulation test that makes it possible to demonstrate coagulopathy, to distinguish the contributions of the different elements of the coagulation system, and to target procoagulant products such as fresh frozen plasma (FFP), platelets, fibrinogen, purified coagulation factors, or antifibrinolytics. Three of the available ROTEM assays are EXTEM, INTEM, and HEPTEM. The first assay is stable under high doses of heparin, whereas the second is very sensitive to its presence. In the third, heparinase is added in order to reveal any residual heparin effect by comparison with INTEM. Ideally, ROTEM should be performed before the end of cardiopulmonary bypass (CPB), and therefore under maximal heparin anticoagulation, so that procoagulant products can be administered as early as possible and blood loss limited to a minimum; indeed, ordering and preparing certain procoagulant products can take more than an hour. The aim of this study is to validate the use of ROTEM in the presence of high heparin concentrations. This is a prospective observational study of 20 patients undergoing elective coronary artery bypass grafting under CPB. Method: ROTEM analysis was performed before heparin administration (T0), 10 minutes after heparin administration (T1), at the end of CPB (T2), and 10 minutes after neutralization of anticoagulation with protamine (T3). Heparinization status was assessed by anti-Xa activity at T1, T2, and T3. Results: compared with T0, the polymerization phase of the coagulation cascade and the fibrin-platelet interaction were significantly impaired at T1 for the EXTEM and HEPTEM channels. At T2, the EXTEM and INTEM analyses were comparable to the EXTEM and HEPTEM analyses at T3.
Conclusion: the high doses of heparin used induce a coagulopathy that remains stable throughout CPB and persists even after neutralization of the anticoagulation. EXTEM and HEPTEM measurements are therefore valid in the presence of high heparin concentrations and can be performed during CPB, before protamine administration.