932 results for Models and Principles


Abstract:

Objective: to analyze which nursing models and nursing assessment structures have been used to implement the nursing process at public and private centers in the Gipuzkoa health area (Basque Country). Method: a retrospective study was undertaken, based on an analysis of the nursing records used at the 158 centers studied. Results: the Henderson model, Carpenito's bifocal structure, Gordon's assessment structure and the Resident Assessment Instrument Nursing Home 2.0 have been used as nursing models and assessment structures to implement the nursing process. At some centers, the selected model or assessment structure has varied over time. Conclusion: Henderson's model has been the most widely used to implement the nursing process. Furthermore, a trend is observed toward complementing or replacing Henderson's model with nursing assessment structures.

Abstract:

This paper investigates several approaches to bootstrapping a new spoken language understanding (SLU) component in a target language given a large dataset of semantically annotated utterances in some other source language. The aim is to reduce the cost associated with porting a spoken dialogue system from one language to another by minimising the amount of data required in the target language. Since word-level semantic annotations are costly, Semantic Tuple Classifiers (STCs) are used in conjunction with statistical machine translation models, both of which are trained from unaligned data to further reduce development time. The paper presents experiments in which a French SLU component in the tourist information domain is bootstrapped from English data. Results show that training STCs on automatically translated data produced the best performance for predicting the utterance's dialogue act type; however, individual slot/value pairs are best predicted by training STCs on the source language and using them to decode translated utterances. © 2010 ISCA.
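To make the two bootstrapping strategies concrete, the sketch below contrasts them under stated assumptions: a plain bag-of-words classifier (scikit-learn) stands in for the paper's Semantic Tuple Classifiers, and translate() is a placeholder for the statistical machine translation model; all data, labels and function names are illustrative, not taken from the paper.

    # Minimal sketch (not the paper's STC implementation) of the two bootstrapping
    # strategies: train on machine-translated data, or train on the source language
    # and decode translated test utterances. A bag-of-words classifier stands in for
    # a Semantic Tuple Classifier; `translate` is a placeholder for an SMT system.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def translate(utterances, direction):
        """Placeholder for a statistical MT model trained from unaligned data."""
        raise NotImplementedError("plug an MT system in here")

    # Toy annotated source-language (English) data: utterance -> dialogue act type.
    english_utts = ["i need a cheap hotel", "does it have wifi", "thank you goodbye"]
    dialogue_acts = ["inform", "request", "bye"]

    def strategy_train_on_translations(english_utts, dialogue_acts):
        # Strategy 1: translate the annotated training data into the target
        # language (French) and train the classifier on the translations.
        french_train = translate(english_utts, direction="en-fr")
        clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
        clf.fit(french_train, dialogue_acts)
        return clf  # applied directly to French test utterances

    def strategy_decode_translated_input(english_utts, dialogue_acts, french_test):
        # Strategy 2: train on the original English data, then translate each
        # French test utterance into English before decoding it.
        clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
        clf.fit(english_utts, dialogue_acts)
        return clf.predict(translate(french_test, direction="fr-en"))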

Abstract:

In this study a total of 75 species were identified, of which 17 species, 9 genera and 6 families belonged to green algae; 18 species, 7 genera and 4 families belonged to brown algae; and 40 species, 18 genera and 11 families belonged to red algae. Across the sampling effort, Lengeh harbor, with 6 species, had the lowest diversity of green algae. For brown algae, the Michael location, with 10 species, had the highest species diversity, and the Tahooneh location, with 5 species, had the lowest. For red algae, the Michael location, with 28 species, had the highest diversity, and the Sayeh Khosh location, with 13 species, had the lowest. Over all sampling locations, the highest species diversity in time and space for all three groups of algae was recorded in late February (20 February) and late March (20 March). Macroalgal coverage data and the Ecological Evaluation Index indicate a high level of eutrophication at Saieh Khosh and Bostaneh, which are classified as zones with bad and poor ecological status; concentrations of biogenic elements and phytoplankton blooming have been shown to be higher in these zones. The best values of the estimated metrics at Tahooneh and Michaeil can be explained by the good ecological conditions in that zone and the absence of pollution sources close to that transect. The abundance values of macroalgae and the Ecological Evaluation Index indicate moderate ecological conditions for Koohin, Lengeh and Chirooieh.

Abstract:

Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents BAGEL, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that BAGEL can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data. © 2010 Association for Computational Linguistics.
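The certainty-based active learning mentioned above can be illustrated with a small, self-contained sketch; it uses a logistic-regression classifier on synthetic data rather than BAGEL's dynamic Bayesian networks, and all names and data are illustrative assumptions showing only the least-confidence query loop.

    # Minimal sketch of certainty-based (least-confidence) active learning.
    # Not BAGEL's generator: a logistic-regression classifier on synthetic data
    # stands in to show only the query-selection loop.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(200, 5))                        # unlabelled candidate pool
    y_oracle = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # annotator's labels (hidden)

    # Seed the labelled set with one example from each class.
    labelled = [int(np.argmax(y_oracle == 0)), int(np.argmax(y_oracle == 1))]
    unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

    model = LogisticRegression()
    for _ in range(20):                                       # annotation budget
        model.fit(X_pool[labelled], y_oracle[labelled])
        # How certain is the current model about each unlabelled candidate?
        certainty = model.predict_proba(X_pool[unlabelled]).max(axis=1)
        # Query the least-certain example and "annotate" it with the oracle label.
        pick = unlabelled[int(np.argmin(certainty))]
        labelled.append(pick)
        unlabelled.remove(pick)

    model.fit(X_pool[labelled], y_oracle[labelled])
    print("accuracy with", len(labelled), "labels:", model.score(X_pool, y_oracle))

The intent of the selection rule is the same as in the abstract: spend scarce annotation effort on the inputs the current model is least certain about, so that performance approaches the fully supervised result with a fraction of the data.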