973 results for Hierarchical stochastic learning
Abstract:
Objective To identify students' perceptions of the use of art as a pedagogical strategy for learning the patterns of knowing in nursing, and to identify the dimensions of each pattern valued in the analysis of works of art. Method Descriptive mixed-methods study. Data were collected through a questionnaire applied to 31 nursing students. Results The analysis of the students' discourse made explicit that empirical knowledge includes scientific knowledge, tradition and the nature of care. Aesthetic knowledge implies expressiveness, subjectivity and sensitivity. Self-knowledge, experience, a reflective attitude and relationships with others are the subcategories of personal knowledge, while morals and ethics underpin ethical knowledge. Conclusion It is possible to learn the patterns of knowing through art, especially the aesthetic, ethical and personal patterns. Pedagogical strategies that contribute to learning the patterns of nursing knowledge warrant further investigation.
Abstract:
OBJECTIVE To identify the association between the use of web-based electrocardiography simulation and the learning approaches, strategies and styles of nursing degree students. METHOD A descriptive and correlational design with a one-group pretest-posttest measurement was used. The study sample included 246 students enrolled in a Basic and Advanced Cardiac Life Support class of a nursing degree program. RESULTS No significant differences between genders were found in any dimension of learning styles or approaches to learning. After the introduction of web-based electrocardiography simulation, significant differences were found in some item scores for learning styles, theorist (p < 0.040) and pragmatic (p < 0.010), and for approaches to learning. CONCLUSION The use of a web-based electrocardiogram (ECG) simulation is associated with the development of active and reflective learning styles, improving motivation and a deep approach to learning in nursing students.
Abstract:
OBJECTIVE To evaluate the skills and knowledge of undergraduate health students regarding cardiopulmonary resuscitation maneuvers with the use of an automated external defibrillator. METHOD A theoretical and practical course was taught, and the theoretical classes included demonstration. The evaluation was performed at three different stages of the teaching-learning process, using two instruments to assess skills (a 30-item checklist) and knowledge (a 40-question written test). The sample comprised 84 students. RESULTS After the theoretical and practical course, an increase was observed in the number of correct answers on both the 30-item checklist and the 40-question written test. CONCLUSION After the theoretical class (including demonstration), only one of the 30 checklist items for skills reached an index of correct answers ≥ 90%. In contrast, an index of correct answers greater than 90% was achieved in 26 (86.7%) of the 30 items after practical simulation training, evidencing the importance of this training for the defibrillation procedure.
Abstract:
Objective To carry out the cross-cultural adaptation and validation of the 29-item version of the Readiness for Interprofessional Learning Scale (RIPLS) into Brazilian Portuguese. Method Five steps were adopted: three translations, synthesis, three back-translations, evaluation by experts, and a pre-test. Validation involved 327 students from 13 undergraduate programs at a public university. Parallel analyses were performed with the R software, and the factor analysis used Structural Equation Modeling. Results The factor analysis resulted in a 27-item scale with three factors: Factor 1, teamwork and collaboration, with 14 items (1-9, 12-16); Factor 2, professional identity, with eight items (10, 11, 17, 19, 21-24); and Factor 3, patient-centered health care, with five items (25-29). Cronbach's alpha for the three factors was 0.90, 0.66 and 0.75, respectively. Analysis of variance showed significant differences in factor means across professional groups. Conclusion Evidence of validity of the Portuguese version of the RIPLS was identified for its application in the national context.
Abstract:
In this paper we propose the infimum of the Arrow-Pratt index of absolute risk aversion as a measure of global risk aversion of a utility function. We then show that, for any given arbitrary pair of distributions, there exists a threshold level of global risk aversion such that all increasing concave utility functions with at least as much global risk aversion would rank the two distributions in the same way. Furthermore, this threshold level is sharp in the sense that, for any lower level of global risk aversion, we can find two utility functions in this class yielding opposite preference relations for the two distributions.
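For reference (standard definitions, not taken from the paper itself), the Arrow-Pratt index of absolute risk aversion of a twice-differentiable utility function $u$ and the global measure proposed above can be written as
\[
A_u(x) = -\frac{u''(x)}{u'(x)}, \qquad \underline{A}_u = \inf_{x} A_u(x),
\]
so the result says that whenever $\underline{A}_u$ exceeds a threshold determined by the pair of distributions, every increasing concave $u$ with at least that much global risk aversion ranks the pair in the same way.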
Abstract:
The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
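As a hedged illustration of what an achievable region looks like (a textbook example, not reproduced from the paper): for a two-class M/G/1 queue, Kleinrock's conservation law states that under every non-preemptive, non-idling scheduling policy the mean waiting times satisfy
\[
\rho_1 W_1 + \rho_2 W_2 \;=\; \frac{\rho\, W_0}{1-\rho},
\qquad W_0 = \tfrac{1}{2}\sum_{i=1}^{2}\lambda_i\,\mathbb{E}[S_i^2],
\]
where $\rho_i = \lambda_i \mathbb{E}[S_i]$ and $\rho = \rho_1 + \rho_2$. The achievable region of $(W_1, W_2)$ is then the segment of this line between the performance vectors of the two strict-priority policies, and minimising a linear holding cost over it recovers the classical $c\mu$ rule.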
Abstract:
The potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has attracted attention in control and pattern classification systems in recent years. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership functions (T2MFs) and the Footprint of Uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs), with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the chromosome has fewer genes than in other GA methods and chromosome initialization is more precise. The proposed approach addresses the application of an interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung Computer-Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
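The abstract does not give the exact membership-function parameterization; the sketch below assumes the common Gaussian IT2MF with an uncertain mean $m \in [m_1, m_2]$ and a fixed standard deviation $\sigma$, whose upper and lower curves bound the FOU. All names and parameter values are illustrative, not taken from the paper.

import numpy as np

def gaussian(x, m, sigma):
    # Primary (type-1) Gaussian membership value.
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2mf_uncertain_mean(x, m1, m2, sigma):
    # Upper and lower membership functions of a Gaussian IT2MF whose mean is
    # only known to lie in [m1, m2]; the band between them is the FOU.
    x = np.asarray(x, dtype=float)
    upper = np.where(x < m1, gaussian(x, m1, sigma),
                     np.where(x > m2, gaussian(x, m2, sigma), 1.0))
    lower = np.minimum(gaussian(x, m1, sigma), gaussian(x, m2, sigma))
    return upper, lower

# Example: membership intervals of a few crisp inputs for one feature.
xs = np.linspace(-1.0, 3.0, 5)
upper, lower = it2mf_uncertain_mean(xs, m1=0.8, m2=1.2, sigma=0.5)
print(np.column_stack([xs, lower, upper]))

In a GA-plus-cross-validation scheme of the kind described, each chromosome could encode values such as (m1, m2, sigma) per input feature and per class, with cross-validated classification accuracy serving as the fitness; the paper's exact encoding is not reproduced here.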
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not tell anything about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
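Schematically (a restatement of the two claims above, with notation introduced here for illustration), write $L(g_n)$ for the probability of error of the rule trained on $n$ examples. The classical bound says
\[
\inf_{g_n}\ \sup_{(X,\,C)}\ \mathbb{E}\,L(g_n) \;\ge\; \frac{c\,V}{n} \qquad \text{for every } n,
\]
whereas the stronger form established here says that for every sequence $\{g_n\}$ there exists a single fixed pair $(X, C)$ with
\[
\mathbb{E}\,L(g_n) \;\ge\; \frac{c'\,k}{n} \qquad \text{for infinitely many } n.
\]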
Abstract:
We construct an uncoupled randomized strategy of repeated play such that, if every player follows such a strategy, then the joint mixed strategy profiles converge, almost surely, to a Nash equilibrium of the one-shot game. The procedure requires very little in terms of players' information about the game. In fact, players' actions are based only on their own past payoffs and, in a variant of the strategy, players need not even know that their payoffs are determined through other players' actions. The procedure works for general finite games and is based on appropriate modifications of a simple stochastic learning rule introduced by Foster and Young.
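The abstract does not reproduce the rule itself; purely as a hypothetical illustration of what "uncoupled and payoff-based" can look like (each player sees only its own realized payoffs, never the opponents' actions), here is a toy regret-testing-style loop for a 2x2 game, loosely in the spirit of the Foster and Young rule cited above. It is a simplified stand-in, not the strategy constructed in the paper, and all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# Matching pennies; its unique Nash equilibrium is ((1/2, 1/2), (1/2, 1/2)).
PAYOFFS = [np.array([[1.0, -1.0], [-1.0, 1.0]]),   # row player
           np.array([[-1.0, 1.0], [1.0, -1.0]])]   # column player

def toy_uncoupled_play(blocks=200, block_len=2000, explore=0.2, tol=0.25):
    # Each player keeps a mixed strategy, occasionally experiments with a pure
    # action, and resamples a fresh random strategy when an experiment looks
    # clearly better than its regular play (a crude regret test).
    strategies = [rng.dirichlet([1.0, 1.0]) for _ in range(2)]
    for _ in range(blocks):
        regular = [[] for _ in range(2)]        # own payoffs from regular play
        trials = [[[], []] for _ in range(2)]   # own payoffs while trying action 0 or 1
        for _ in range(block_len):
            actions, flags = [], []
            for i in range(2):
                if rng.random() < explore:
                    a = int(rng.integers(2))    # experiment with a random pure action
                    flags.append(a)
                else:
                    a = int(rng.choice(2, p=strategies[i]))
                    flags.append(None)
                actions.append(a)
            for i in range(2):
                u = PAYOFFS[i][actions[0], actions[1]]
                (regular[i] if flags[i] is None else trials[i][flags[i]]).append(u)
        for i in range(2):
            base = np.mean(regular[i]) if regular[i] else 0.0
            regret = max((np.mean(t) - base) if t else 0.0 for t in trials[i])
            if regret > tol:
                strategies[i] = rng.dirichlet([1.0, 1.0])
    return strategies

print(toy_uncoupled_play())  # the two players' final mixed strategies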
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
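The abstract leaves the statistical learning algorithm unspecified; a common choice in this literature (stated here as an assumption, not as this paper's algorithm) is recursive least squares, in which an agent's belief coefficients $\phi_t$ about the relation between a forecasted variable $y_t$ and the regressors $x_t$ it observes are updated as
\[
\phi_t = \phi_{t-1} + \gamma_t\, R_t^{-1} x_t \left( y_t - x_t^{\top}\phi_{t-1} \right),
\qquad
R_t = R_{t-1} + \gamma_t \left( x_t x_t^{\top} - R_{t-1} \right),
\]
with gain sequence $\gamma_t$ (a constant gain discounts old data). Asymmetric information enters because the private sector and the central bank may condition on different $x_t$.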
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999), "Restless bandits, partial conservation laws, and indexability," forthcoming in Advances in Applied Probability, Vol. 33, No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
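For context (the classical baseline this abstract extends, not the paper's new indices): with linear holding cost rates $c_i$ and mean processing times $\mathbb{E}[S_i]$, Smith's rule assigns each class the static index
\[
\nu_i \;=\; \frac{c_i}{\mathbb{E}[S_i]},
\]
and always serves a job from a class with the largest index (the $c\mu$ rule). The dynamic indices described above instead attach an index $\nu_i(n_i)$ to each extended class $(i, n_i)$, so priorities can change with the number $n_i$ of class-$i$ jobs present; their exact form is given in the paper.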
Abstract:
Using a suitable Hull and White type formula we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and it requires a minimal computational cost.
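For reference, the "full set of parameters of the Heston model" mentioned above consists of $(v_0, \kappa, \theta, \sigma, \rho)$ in the usual stochastic-volatility dynamics (a standard statement, not reproduced from the paper):
\[
dS_t = \mu S_t\, dt + \sqrt{v_t}\, S_t\, dW_t, \qquad
dv_t = \kappa(\theta - v_t)\, dt + \sigma \sqrt{v_t}\, dB_t, \qquad
d\langle W, B\rangle_t = \rho\, dt,
\]
where $v_0$ is the initial variance, $\kappa$ the mean-reversion speed, $\theta$ the long-run variance, $\sigma$ the volatility of variance and $\rho$ the correlation between the asset and variance shocks.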