970 results for Idea of approaches to a number


Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: Many children require tube weaning intervention as a result of increased survival rates of high-risk infants and the temporary use of feeding tubes. This study aimed to describe service delivery models and treatment approaches in a variety of paediatric feeding/tube weaning programs. METHOD: A questionnaire on tube weaning was formulated based on a literature review. Purposive maximum variation sampling was used to include feeding/weaning programs operating in a variety of settings and countries. Eight feeding teams in Australia, Europe and the USA agreed to participate and completed the questionnaire. RESULT: All teams employed sensori-motor interventions, with the majority also offering psychological interventions. Six of eight teams utilised hunger induction during the initiation of tube weaning, and in many cases this preceded eating skill development or controlled sensory modulation. CONCLUSION: A multi-modal tube weaning approach is commonly adopted by many centres worldwide. In many cases, psychological theory and theoretical orientation are fundamental to tube weaning practice. Further investigation into the efficacy and effectiveness of weaning interventions is recommended to ensure clinical practice is based on sound evidence. This may prove challenging given that many interventions occur concomitantly and the psychotherapeutic experience is difficult to evaluate.


This study investigates how the understanding of approaching a number, and the modes of representation, influence the construction of the dynamic conception of limit in secondary-school (Bachillerato) students. The data were analysed using implicative analysis (Gras, Suzuki, Guillet & Spagnolo, 2008). The results indicate that the gradual construction of the dynamic conception of limit proceeds through differentiated approximation processes in the domain and in the range and, within the latter, through distinguishing cases in which the lateral approximations coincide from those in which they do not. Furthermore, our results indicate that the numerical mode and the algebraic-numerical mode of representation play a relevant role in the development of the understanding of the dynamic conception of limit.
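The "numerical mode" of approaching a number that the study highlights can be illustrated with a short sketch; the function and step values below are illustrative only, not taken from the study:

```python
# Numerical mode of approaching a limit: tabulate f(x) for x-values
# approaching a = 1 from the left and from the right.
def f(x):
    return (x**2 - 1) / (x - 1)  # undefined at x = 1 itself

def lateral_approximations(f, a, steps=5):
    """One-sided sequences f(a - 10^-k) and f(a + 10^-k), k = 1..steps."""
    left = [f(a - 10**-k) for k in range(1, steps + 1)]
    right = [f(a + 10**-k) for k in range(1, steps + 1)]
    return left, right

left, right = lateral_approximations(f, 1.0)
# Both lateral approximations close in on the same value, 2, so the
# dynamic conception reads: "f(x) approaches 2 as x approaches 1".
print(left[-1], right[-1])
```

When the two lateral approximations do not coincide, as with f(x) = |x - 1|/(x - 1) at a = 1, the same tabulation makes the non-existence of the limit visible.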


Performance measurement in Australian philanthropic foundations is a hot topic. Foundation staff and board members are concerned with striking the right balance between their need for information with which to assess the effectiveness of their grant-making programs, and the costs in both time and money for grantees. Influenced by normative pressures, the increasing size and professionalism of the Australian philanthropic sector, and trends from the USA and the UK, foundations are talking amongst themselves, seeking expert advice and training, consulting with grantees and trying different approaches. Many resources examine methods of data collection, measurement or analysis. Our study instead treads into less charted but important territory: the motivations and values that are shaping the debate about performance measurement. In a series of 40 interviews with foundations from Queensland, New South Wales, Victoria and South Australia, we asked whether they felt under pressure to measure performance and, if so, why. We queried whether everyone in the foundation shared the same views on the purposes of performance measurement, and the ways in which the act of performance measurement changed their grant-making, their attitude to risk, their relationship with grantees and their collaborations with other funders. Unsurprisingly, a very diverse set of approaches to performance measurement was revealed.


Current views of the nature of knowledge and of learning suggest that instructional approaches in science education should pay closer attention to how students learn rather than to how content is taught. This study examined the use of approaches to teaching science based on two contrasting perspectives on learning, social constructivist and traditional, and the effects they have on students' attitudes and achievement. Four categories of attitudes were measured using the Upper Secondary Attitude Questionnaire: attitude towards school, towards the importance of science, towards science as a career, and towards science as a subject in school. Achievement was measured by average class grades and also with a researcher/teacher-constructed 30-item test comprising three sub-scales of items based on knowledge, and on applications involving near-transfer and far-transfer of concepts. The sample consisted of 202 students in nine intact chemistry classrooms at a large high school in Miami, Florida, and involved two teachers. Results were analyzed using a two-way analysis of covariance (ANCOVA), with a pretest in attitude as the covariate for attitudes and prior achievement as the covariate for achievement. A comparison of the adjusted mean scores was made between the two groups and between females and males.

With constructivist-based teaching, students showed a more favorable attitude towards science as a subject, and obtained significantly higher scores in class achievement, total achievement and achievement on the knowledge sub-scale of the knowledge and application test. Students in the traditional group showed a more favorable attitude towards school. Females showed a significantly more positive attitude towards the importance of science and obtained significantly higher scores in class achievement. No significant interaction effects were obtained for method of instruction by gender.

This study lends some support to the view that constructivist-based approaches to teaching science are a viable alternative to traditional modes of teaching. It is suggested that in science education, more consideration be given to those aspects of classroom teaching that foster closer coordination between social influences and individual learning.


The aim of this dissertation was to explore teaching in higher education from the teachers' perspective. Two of the four studies analysed the effect of pedagogical training on approaches to teaching and on teachers' self-efficacy beliefs about teaching. Of these two studies, Study I analysed the effect of pedagogical training using a cross-sectional setting. The results showed that short training made teachers less student-centred and decreased their self-efficacy beliefs, as reported by the teachers themselves. However, more sustained training enhanced the adoption of a student-centred approach to teaching and increased teachers' self-efficacy beliefs as well. The teacher-focused approach to teaching was more resistant to change. Study II, on the other hand, applied a longitudinal setting. The results implied that among teachers who had not acquired more pedagogical training after Study II, there were no changes on the student-focused approach scale between the measurements. However, teachers who had participated in further pedagogical training scored significantly higher on the scale measuring the student-focused approach to teaching. There were positive changes in self-efficacy beliefs both among teachers who had not participated in further training and among those who had; however, the analysis revealed that those teachers had the least teaching experience. Again, the teacher-focused approach was more resistant to change. Study III analysed approaches to teaching qualitatively, using a large and multidisciplinary sample in order to capture the variation in descriptions of teaching. Two broad categories of description were found: the learning-focused and the content-focused approach to teaching. The results implied that the purpose of teaching separates the two categories. In addition, the study aimed to identify different aspects of teaching in the higher-education context; ten such aspects were identified.

While Study III explored teaching at a general level, Study IV analysed teaching at the individual level. The aim was to explore consonance and dissonance in the combinations of approaches to teaching that university teachers adopt. The results showed that some teachers were clearly and systematically either learning- or content-focused. The profiles of other teachers, by contrast, consisted of combinations of learning- and content-focused approaches or conceptions, making their profiles dissonant. Three types of dissonance were identified. The four studies indicated that pedagogical training organised for university teachers is needed in order to enhance the development of their teaching. The results implied that the shift from content-focused or dissonant profiles towards consonant learning-focused profiles is a slow process, and that teachers' conceptions of teaching have to be addressed first in order to promote learning-focused teaching.


The adequacy and efficiency of existing legal and regulatory frameworks dealing with corporate phoenix activity have been repeatedly called into question over the past two decades through various reviews, inquiries, targeted regulatory operations and the implementation of piecemeal legislative reform. Despite these efforts, phoenix activity does not appear to have abated. While there is no law in Australia that declares 'phoenix activity' to be illegal, the behaviour that tends to manifest in phoenix activity can transgress a vast array of law, including, for example, corporate law, tax law and employment law. This paper explores the notion that the persistence of phoenix activity, despite the sheer extent of this law, suggests that the law is not acting as powerfully as it might as a deterrent. Economic theories of entrepreneurship and innovation can to some extent explain why this is the case, and also offer a sound basis for the evaluation and reconsideration of the existing law. The challenges facing key regulators are significant. Phoenix activity is not limited to a particular corporate demographic: it occurs in SMEs, in large companies and in corporate groups. The range of behaviour that can amount to phoenix activity is so broad that not all phoenix activity is illegal. This paper considers regulatory approaches to these challenges via analysis of approaches to the detection and enforcement of the underlying law capturing illegal phoenix activity. Remedying the mischief of phoenix activity is of practical importance. The benefits include continued confidence in our economy, law that inspires best practice among directors, and law that is articulated in a manner such that penalties act as a sufficient deterrent and the regulatory system is able to detect offenders and bring them to account. Any further reforms must accommodate and tolerate legal phoenix activity, at least to some extent.

Even then, phoenix activity pushes tolerance of repeated entrepreneurial failure to its absolute limit. The more limited liability is misused and abused, the stronger the argument for placing some restrictions on access to it. This paper proposes that such an approach is a legitimate next step for a robust and mature capitalist economy.


Virtual environments can provide, through digital games and online social interfaces, extremely engaging forms of interactive entertainment. Because of their capability to display and manipulate information in natural and intuitive ways, such environments have found extensive application in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content falls entirely on developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry changes of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques.

Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D game and virtual world studios that require a scalable solution for testing their virtual world software and digital content.
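One way to make "quantifying visual correctness" concrete is a frame-differencing check, sketched below; the per-pixel mean-squared-error metric and the threshold are illustrative stand-ins for the far richer model-based and connectionist techniques the thesis develops:

```python
# Hedged sketch: score a rendered frame against a reference rendering
# with a mean-squared-error metric and flag it as inconsistent past a
# threshold. Frames are nested lists of grayscale values here.
def frame_mse(reference, rendered):
    n = 0
    total = 0.0
    for row_a, row_b in zip(reference, rendered):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            n += 1
    return total / n

def is_consistent(reference, rendered, threshold=25.0):
    """True when the rendered frame stays within the error budget."""
    return frame_mse(reference, rendered) <= threshold

ref = [[10, 10], [10, 10]]
ok  = [[10, 12], [9, 10]]   # minor, aliasing-style deviation
bad = [[10, 120], [9, 10]]  # a badly mis-rendered pixel
print(is_consistent(ref, ok), is_consistent(ref, bad))  # → True False
```

A real test harness would compare against golden images across many camera poses and hardware configurations rather than a single threshold.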


Skin cancer is one of the most commonly occurring cancer types, with substantial social, physical, and financial burdens on both individuals and societies. Although the role of UV light in initiating skin cancer development has been well characterized, genetic studies continue to show that predisposing factors can influence an individual's susceptibility to skin cancer and response to treatment. In the future, it is hoped that genetic profiles, comprising a number of genetic markers collectively involved in skin cancer susceptibility and response to treatment or prognosis, will aid in more accurately informing practitioners' choices of treatment. Individualized treatment based on these profiles has the potential to increase the efficacy of treatments, saving both time and money for the patient by avoiding the need for extensive or repeated treatment. Increased treatment responses may in turn prevent recurrence of skin cancers, reducing the burden of this disease on society. Currently existing pharmacogenomic tests, such as those that assess variation in the metabolism of the anticancer drug fluorouracil, have the potential to reduce the toxic effects of anti-tumor drugs used in the treatment of non-melanoma skin cancer (NMSC) by determining individualized appropriate dosage. If the savings generated by reducing adverse events negate the costs of developing these tests, pharmacogenomic testing may increasingly inform personalized NMSC treatment.


"The German word for experience - Erlebnis - the experience of the life, to live through something - underpins this book: making visible scholarly opportunities for richer and deeper contextualizations and examinations of the lived-world experiences of people in everyday contexts as they be, do and become." (Ross Todd, Preface). Information experience is a burgeoning area of research and still unfolding as an explicit research and practice theme. This book is therefore very timely as it distils the reflections of researchers and practitioners from various disciplines, with interests ranging across information, knowledge, user experience, design and education. They cast a fresh analytical eye on information experience, whilst approaching the idea from diverse perspectives. Information Experience brings together current thinking about the idea of information experience to help form discourse around it and establish a conceptual foundation for taking the idea forward. It therefore "provides a number of theoretical lenses for examining people's information worlds in more holistic and dynamic ways." (Todd, Preface)


The standard method for deciding bit-vector constraints is via eager reduction to propositional logic. This is usually done after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. A lazy solver can target such problems by doing many satisfiability checks, each of which only reasons about a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver (LBV). We provide a comparative analysis of the eager and lazy approaches, and show how they are complementary in terms of the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems none of the eager solvers can and that the portfolio solver outperforms other solvers both in terms of total number of problems solved and the time taken to solve them.
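The portfolio idea, running an eager and a lazy strategy in parallel and taking whichever answers first, can be sketched as follows; the two solve functions are placeholders, not the LBV implementation:

```python
# Minimal portfolio-solver sketch: submit an "eager" and a "lazy"
# strategy concurrently and return the first completed result.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def eager_solve(problem):
    # placeholder: bit-blast the whole constraint to SAT and solve
    return ("eager", "sat")

def lazy_solve(problem):
    # placeholder: many cheap satisfiability checks on sub-problems
    return ("lazy", "sat")

def portfolio_solve(problem):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(eager_solve, problem),
                   pool.submit(lazy_solve, problem)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return done.pop().result()

strategy, result = portfolio_solve("x * y == y * x")
print(strategy, result)
```

A production portfolio would also cancel the losing strategy and share learned facts between the two solvers; this sketch only captures the race itself.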


Background The Researching Effective Approaches to Cleaning in Hospitals (REACH) study will generate evidence about the effectiveness and cost-effectiveness of a novel cleaning initiative that aims to improve the environmental cleanliness of hospitals. The initiative is an environmental cleaning bundle, with five interdependent, evidence-based components (training, technique, product, audit and communication) implemented with environmental services staff to enhance hospital cleaning practices. Methods/design The REACH study will use a stepped-wedge randomised controlled design to test the study intervention, an environmental cleaning bundle, in 11 Australian hospitals. All trial hospitals will receive the intervention and act as their own control, with analysis undertaken of the change within each hospital based on data collected in the control and intervention periods. Each site will be randomised to one of the 11 intervention timings with staggered commencement dates in 2016 and an intervention period between 20 and 50 weeks. All sites complete the trial at the same time in 2017. The inclusion criteria allow for a purposive sample of both public and private hospitals that have higher-risk patient populations for healthcare-associated infections (HAIs). The primary outcome (objective one) is the monthly number of Staphylococcus aureus bacteraemias (SABs), Clostridium difficile infections (CDIs) and vancomycin resistant enterococci (VRE) infections, per 10,000 bed days. Secondary outcomes for objective one include the thoroughness of hospital cleaning assessed using fluorescent marker technology, the bio-burden of frequent touch surfaces post cleaning and changes in staff knowledge and attitudes about environmental cleaning. A cost-effectiveness analysis will determine the second key outcome (objective two): the incremental cost-effectiveness ratio from implementation of the cleaning bundle. 
The study uses the integrated Promoting Action on Research Implementation in Health Services (iPARIHS) framework to support the tailored implementation of the environmental cleaning bundle in each hospital. Discussion Evidence from the REACH trial will contribute to future policy and practice guidelines about hospital environmental cleaning. It will be used by healthcare leaders and clinicians to inform decision-making and the implementation of best-practice infection prevention strategies to reduce HAIs in hospitals. Trial registration Australian New Zealand Clinical Trials Registry ACTRN12615000325505
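A stepped-wedge allocation of the kind described above can be sketched in a few lines; the seed and site names are hypothetical, not the trial's actual randomisation:

```python
# Hedged sketch of a stepped-wedge allocation: each of the 11 hospitals
# is randomised to one of 11 staggered crossover steps; every site
# contributes control data before its step and intervention data after.
import random

def stepped_wedge_schedule(sites, n_steps, seed=2016):
    """Randomise each site to one of n_steps crossover times."""
    assert len(sites) == n_steps, "one crossover step per site here"
    rng = random.Random(seed)  # fixed seed: reproducible allocation
    order = list(sites)
    rng.shuffle(order)
    # site -> step at which it switches from control to intervention
    return {site: step for step, site in enumerate(order, start=1)}

hospitals = [f"hospital_{i:02d}" for i in range(1, 12)]
schedule = stepped_wedge_schedule(hospitals, n_steps=11)
print(sorted(schedule.values()))  # every step 1..11 used exactly once
```

Because every site acts as its own control, the analysis then compares within-site outcome rates before and after each site's scheduled crossover.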


The Maillard reaction comprises a complex network of reactions which has proven to be of great importance in both food science and medicine. The majority of methods developed for studying the Maillard reaction in food have focused on model systems containing amino acids and monosaccharides. In this study, a number of electrophoretic techniques, including two-dimensional gel electrophoresis and capillary electrophoresis, are presented. These have been developed specifically for the analysis of the Maillard reaction of food proteins, and are giving important insights into this complex process.


We have developed a novel approach to the synthesis of a cathode material for lithium-ion batteries based on the thermal decomposition of urea. Mixed metal hydroxides (NixMnxCo(1-2x)(OH)2) were prepared (x = 0.00 to 0.50) and subsequently used as precursors for the preparation of the mixed metal oxide (LiNixMnxCo(1-2x)O2). These materials, along with lithium iron phosphate (LiFePO4), are expected to serve as commercial cathode materials for the next generation of lithium-ion batteries. We also developed a new post-synthesis treatment to improve the morphology of the hydroxides. The originality of the urea thermal-decomposition approach lies in the novel use of the hydroxides as precursors for the preparation of mixed lithium oxides via a uniform precipitation technique. In addition, we propose new treatment techniques applicable to traditional synthesis methods. The results obtained by these two methods are summarised in two articles submitted to scientific journals. All materials produced during this research were analysed by X-ray diffraction (XRD), scanning electron microscopy (SEM) and thermogravimetric analysis (TGA), and were characterised electrochemically. The electrochemical performance (cycle number vs. capacity) of the cathode materials was evaluated in galvanostatic mode.


The inverse problem in electroencephalography (EEG) is the localisation of current sources in the brain from the scalp surface potentials these sources generate. An inverse solution typically involves multiple computations of scalp surface potentials, i.e., the EEG forward problem. Solving the forward problem requires models both for the underlying source configuration (the source model) and for the surrounding tissues (the head model). This thesis treats two quite distinct approaches to solving the EEG forward and inverse problems using the boundary element method (BEM): the conventional approach and the reciprocal approach. The conventional approach to the forward problem computes the surface potentials starting from dipolar current sources. The reciprocal approach, by contrast, first determines the electric field at the dipole source sites when the surface electrodes are used to inject and withdraw a unit current. The scalar product of this electric field with the dipole sources then yields the surface potentials. The reciprocal approach promises a number of advantages over the conventional approach, including the possibility of improving the accuracy of the surface potentials and of reducing the computational requirements of inverse solutions. In this thesis, the BEM equations for the conventional and reciprocal approaches are developed using a common formulation, the weighted residual method. The numerical implementation of both approaches to the forward problem is described for a single dipolar source model. A three-layer concentric sphere head model, for which analytical solutions are available, is used. The surface potentials are computed at the centroids or at the vertices of the BEM discretisation elements used.
The performance of the conventional and reciprocal approaches to the forward problem is evaluated for radial and tangential dipoles of varying eccentricity and for two very different values of skull conductivity. We then determine whether the potential advantages of the reciprocal approach suggested by the forward-problem simulations can be exploited to yield more accurate inverse solutions. Single-dipole inverse solutions are obtained using simplex minimisation for both the conventional and reciprocal approaches, each with centroid and vertex variants. Again, the numerical simulations are performed on a three-layer concentric sphere model for radial and tangential dipoles of varying eccentricity. The accuracy of the inverse solutions of the two approaches is compared for the two different skull conductivities, and their relative sensitivities to skull-conductivity error and to noise are assessed. While the conventional vertex approach yields the most accurate forward solutions for a presumably more realistic skull conductivity, both the conventional and reciprocal approaches produce large errors in the scalp potentials for highly eccentric dipoles. The reciprocal approaches show the least variation in forward-solution accuracy across different skull-conductivity values. In terms of single-dipole inverse solutions, the conventional and reciprocal approaches are of comparable accuracy. Localisation errors are small, even for highly eccentric dipoles that produce large errors in the scalp potentials, owing to the nonlinear nature of single-dipole inverse solutions. Both approaches also proved equally robust to skull-conductivity errors in the presence of noise.
Finally, a more realistic head model is obtained from magnetic resonance images (MRI), from which the scalp, skull and brain/cerebrospinal fluid (CSF) surfaces are extracted. Both approaches are validated on this type of model using real somatosensory evoked potentials recorded following median nerve stimulation in healthy subjects. The accuracy of the inverse solutions for the conventional and reciprocal approaches and their variants, compared against known anatomical sites on MRI, is again assessed for the two different skull conductivities. Their respective advantages and drawbacks, including their computational requirements, are also weighed. Once again, the conventional and reciprocal approaches produce small dipole-position errors. Indeed, position errors for single-dipole inverse solutions are inherently robust to inaccuracies in the forward solutions, but depend on the superimposed activity of other neural sources. Contrary to expectations, the reciprocal approaches do not improve dipole-position accuracy relative to the conventional approaches. However, reduced computational requirements in time and memory are the principal advantages of the reciprocal approaches. This kind of localisation is potentially useful in planning neurosurgical interventions, for example in patients with refractory focal epilepsy who have often already undergone EEG and MRI.
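The reciprocal relation described above, in which the scalar product of the reciprocally computed electric field with the dipole source yields the surface potential, can be written compactly; the sign convention depends on the chosen direction of current injection:

```latex
% Reciprocity: the potential difference V_{ab} between scalp electrodes
% a and b produced by a dipole \mathbf{p} at location \mathbf{r} equals
% the projection of \mathbf{p} onto the electric field
% \mathbf{E}_{ab}(\mathbf{r}) obtained when a unit current I is
% injected at electrode a and withdrawn at electrode b.
V_{ab} \;=\; \frac{\mathbf{E}_{ab}(\mathbf{r}) \cdot \mathbf{p}}{I}
```

The computational saving follows because $\mathbf{E}_{ab}$ depends only on the electrode pair and head model, so it can be precomputed once and reused for every candidate dipole during the inverse search.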


Feed samples received by commercial analytical laboratories are often undefined or mixed varieties of forages, originate from various agronomic or geographical areas of the world, are mixtures (e.g., total mixed rations) and are often described incompletely or not at all. Six unified single-equation approaches to predicting the metabolizable energy (ME) value of feeds, as determined in sheep fed at maintenance ME intake, were evaluated utilizing 78 individual feeds representing 17 different forages, grains, protein meals and by-product feedstuffs. The predictive approaches evaluated were two each from the National Research Council [National Research Council (NRC), Nutrient Requirements of Dairy Cattle, seventh revised ed. National Academy Press, Washington, DC, USA, 2001], the University of California at Davis (UC Davis) and ADAS (Stratford, UK). Slopes and intercepts for the two ADAS approaches, which utilized in vitro digestibility of organic matter and either measured gross energy (GE) or a prediction of GE from component assays, and for one UC Davis approach, based upon in vitro gas production and some component assays, differed from unity and zero, respectively, while this was not the case for the two NRC approaches and the other UC Davis approach. However, within these latter three approaches, the goodness of fit (r²) increased from the NRC approach utilizing lignin (0.61) to the NRC approach utilizing 48 h in vitro digestion of neutral detergent fibre (NDF; 0.72) and to the UC Davis approach utilizing a 30 h in vitro digestion of NDF (0.84). The reason for the difference in precision between the NRC procedures was the failure of assayed lignin values to accurately predict 48 h in vitro digestion of NDF.
However, differences among the six predictive approaches in the number of supporting assays, and their costs, as well as the fact that the NRC approach is actually three related equations requiring categorical description of feeds (making them unsuitable for mixed feeds) while the ADAS and UC Davis approaches are single equations, suggest that the procedure of choice will vary depending upon local conditions, specific objectives and the feedstuffs to be evaluated. In contrast to the evaluation of the procedures among feedstuffs, no procedure was able to consistently discriminate the ME values of individual feeds within feedstuffs as determined in vivo, suggesting that an accurate and precise ME predictive approach, among and within feeds, may remain to be identified.