953 results for Multimedia Learning Simulation


Relevance: 80.00%

Abstract:

The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods depends strongly on proper modeling of connection behavior. The primary challenge in modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches, mathematical models and informational models, are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined, followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through the modeling of beam-to-column connections.

Component-based modeling is a compromise between two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of excessive deformation are identified. The constitutive relationships of angles, column panel zones, and the contact between angles and column flanges are derived using only material and geometric properties together with theoretical mechanics considerations. Those of slip and bolt hole ovalization are simplified using empirically suggested mathematical representations and expert opinion. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationships of the components. Finally, the moment-rotation curves of the mathematical models are compared with those of experimental tests. For a top-and-seat angle connection with double web angles, the pinched hysteretic response is predicted quite well by complete mechanical models, which rely only on material and geometric properties. On the other hand, to capture the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires slip and bolt hole ovalization components, which are more amenable to informational modeling.

An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically generated and experimental, are used to examine the performance of informational models. Both informational models show acceptable agreement with the experimental moment-rotation curves, and adding a degradation parameter improves them when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore provide no insight into the underlying mechanics of the components.
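As a rough, hedged illustration of such an informational model (this sketch is not from the dissertation; the synthetic data generator, the feature set, the degradation proxy, and the network size are all assumptions), a small feed-forward network can be trained to map a short rotation history plus a cumulative degradation feature to the connection moment:

```python
# Illustrative only: an "informational" (data-driven) model of hysteretic
# moment-rotation behaviour. The synthetic data generator, feature set,
# degradation proxy, and network size are assumptions made for this sketch.
import numpy as np
from sklearn.neural_network import MLPRegressor

def synthetic_connection(rotations, k=30000.0, m_yield=120.0):
    """Elastic-perfectly-plastic spring used only to fabricate training data."""
    moments, m, prev = [], 0.0, 0.0
    for r in rotations:
        m = float(np.clip(m + k * (r - prev), -m_yield, m_yield))
        moments.append(m)
        prev = r
    return np.array(moments)

# Cyclic rotation history (rad) with growing amplitude.
t = np.linspace(0.0, 6.0 * np.pi, 2000)
theta = 0.01 * (t / t.max()) * np.sin(t)
moment = synthetic_connection(theta)

# Features: current/previous rotation, previous moment, and a cumulative
# rotation "degradation" proxy (the abstract notes such a parameter helps).
damage = np.cumsum(np.abs(np.diff(theta, prepend=0.0)))
X = np.column_stack([theta[1:], theta[:-1], moment[:-1], damage[1:]])
y = moment[1:]

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(X, y)
print("training R^2:", round(net.score(X, y), 3))
```

In practice the training pairs would come from analytically generated or experimental moment-rotation histories, as the abstract describes, rather than from the toy spring used here.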
In this study, a new hybrid modeling framework is therefore proposed, in which a conventional mathematical model is complemented by informational methods. The basic premise of the hybrid methodology is that not all features of the system response are amenable to mathematical modeling, and that informational alternatives should be considered for those that are not. This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of the informational methods is to model the aspects that the mathematical model leaves out; the autoprogressive algorithm and self-learning simulation extract these missing aspects from the system response. In the hybrid framework, experimental data is an integral part of modeling rather than being used strictly for validation.

The potential of the hybrid methodology is illustrated by modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as angles, flange plates, and the column panel zone, are idealized in a mathematical model using a complete mechanical approach. Although this mathematical model reproduces the envelope curves in terms of initial stiffness and yield strength, it cannot capture the pinching effects, which are caused mainly by separation between angles and column flanges as well as slip between angles/flange plates and beam flanges; these components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections, and the developed hybrid model is successfully used to predict the behavior of a newly designed connection.
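A minimal sketch of the hybrid premise, assuming a simple bilinear backbone as the mechanics-based part and a small network that learns only the residual the backbone misses (for example, pinching); the function names, backbone, and features below are hypothetical, not the dissertation's formulation:

```python
# Sketch of the hybrid premise: a mechanics-based backbone plus a learned
# correction for what the backbone misses (e.g., pinching). The bilinear
# backbone, feature set, and data interface are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

def backbone_moment(theta, k=30000.0, m_yield=120.0):
    """Mechanics-based envelope: a simple bilinear (elastic-perfectly-plastic) curve."""
    return np.clip(k * np.asarray(theta, dtype=float), -m_yield, m_yield)

def fit_hybrid(theta_meas, moment_meas):
    """Train a small network on the residual the backbone cannot reproduce."""
    residual = moment_meas - backbone_moment(theta_meas)
    X = np.column_stack([theta_meas[1:], theta_meas[:-1], moment_meas[:-1]])
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
    net.fit(X, residual[1:])
    return net

def hybrid_moment(net, theta_hist, moment_hist):
    """Hybrid prediction = backbone envelope + learned correction."""
    x = np.array([[theta_hist[-1], theta_hist[-2], moment_hist[-1]]])
    return float(backbone_moment(theta_hist[-1]) + net.predict(x)[0])
```

Fed with measured cyclic (rotation, moment) histories, the network learns only the pinching-type residual, while initial stiffness and yield strength remain the backbone's responsibility, mirroring the division of labor described in the abstract.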

Relevance: 40.00%

Abstract:

High fidelity simulation as a teaching and learning approach is being embraced by many schools of nursing. Our school embarked on integrating high fidelity (HF) simulation into the undergraduate clinical education program in 2011. Low and medium fidelity simulation had been used for many years, but this did not simplify the integration of HF simulation. Alongside considerations of how and where HF simulation would be integrated, issues arose with student consent and participation for observed activities; data management of video files; staff development; and conceptualising how methods for student learning could be researched.

Simulation for undergraduate student nurses commenced as a formative learning activity undertaken in groups of eight, where four students take the ‘doing’ role and four are structured observers, who then take a formal role in the simulation debrief. Challenges for integrating simulation into student learning included conceptualising and developing scenarios to trigger students’ decision making and their application of the skills, knowledge and attitudes needed to solve clinical ‘problems’. Developing and planning scenarios for students to ‘try out’ skills and make decisions for problem solving went beyond choosing the pre-existing scenarios built into the software: the supplied scenarios were not concept based but rather focussed on knowledge, skills and the technology of the manikin. The challenge lay in using the technology to build conceptual mastery rather than using technology simply because it was available. As we integrated HF simulation into the final year of the program, the focus was on building skills, knowledge and attitudes that went beyond technical skill and provided an opportunity to bridge the gap with theory-based knowledge that students often found difficult to link to clinical reality. We wished to provide opportunities to develop experiential knowledge based on application and clinical reasoning processes in team environments where problems are encountered and, to solve them, the nurse must show leadership and direction.

Other challenges included students consenting to simulations being videotaped and the ethical considerations this raises. For example, if one student in a group of eight did not consent, did this mean they missed the opportunity to undertake simulation, or that others in the group might be disadvantaged by being unable to review their performance? This has implications for freely given consent, but also for equity of access to learning opportunities for students who wished to be taped and those who did not. Alongside this issue were the details of data management, storage and access.

Developing staff with varying levels of computer skills to use the software and to take a different approach to being the ‘teacher’ required innovation, and we took an experiential approach. Deciding which explicit learning approaches to trial was not a difficult proposition, but working out how to enact this as research, with issues of blinding, timetabling of blinded groups, reducing bias when testing the results of different learning approaches, and gaining ethical approval, was problematic. This presentation gives examples of these challenges and how we overcame them.

Relevance: 40.00%

Abstract:

AIMS
This paper reports on the implementation of a research project trialling an educational strategy over six months of an undergraduate third year nursing curriculum. The project aims to explore the effectiveness of ‘think aloud’ as a strategy for learning clinical reasoning for students in simulated clinical settings.

BACKGROUND
Nurses are required to apply and utilise critical thinking skills to enable clinical reasoning and problem solving in the clinical setting [1]. Nursing students are expected to develop and display clinical reasoning skills in practice, but may struggle to articulate the reasons behind decisions about patient care. For students learning to manage complex clinical situations, teaching approaches are required that make these instinctive cognitive processes explicit and clear [2-5]. In line with professional expectations, third year nursing students at Queensland University of Technology (QUT) are expected to display clinical reasoning skills in practice. This can be a complex proposition for students in practice situations, particularly as the degree of uncertainty or decision complexity increases [6-7]. The ‘think aloud’ approach is an innovative learning/teaching method which can create an environment suitable for developing clinical reasoning skills in students [4, 8]. This project aims to use the ‘think aloud’ strategy within a simulation context to provide a safe learning environment in which third year students are assisted to uncover the cognitive approaches that best help them make effective patient care decisions, and to improve their confidence, clinical reasoning and active critical reflection on their practice.

METHODS
In semester 2, 2011 at QUT, third year nursing students will undertake high fidelity simulation, some for the first time, commencing in September 2011. There will be two cohorts for strategy implementation in relation to problem solving patient needs (group 1 uses think aloud as a strategy within the simulation; group 2 is not given a specific strategy beyond nursing assessment frameworks). Students will be briefed about the scenario, given a nursing handover and placed into a simulation group and an observer group; the facilitator/teacher will run the simulation from a control room and have no contact (as a ‘teacher’) with students during the simulation. Debriefing will then occur as a whole group outside the simulation room, where the session can be reviewed on screen. The think aloud strategy will be described to students in their pre-simulation briefing, with an opportunity for clarification at that time. All other aspects of the simulations remain the same (resources, suggested nursing assessment frameworks, simulation session duration, size of simulation teams, preparatory materials).

RESULTS
The methodology of the project and the challenges of implementation will be the focus of this presentation, including ethical considerations in designing the project, recruitment of students, and running a voluntary research project within a busy educational curriculum that in third year targets 669 students over two campuses.

CONCLUSIONS
In an environment of increasingly constrained clinical placement opportunities, exploration of alternative strategies to improve critical thinking skills and develop clinical reasoning and problem solving for nursing students is imperative in preparing nurses to respond to changing patient needs.

References
1. Lasater, K., High-fidelity simulation and the development of clinical judgement: students' experiences. Journal of Nursing Education, 2007. 46(6): p. 269-276.
2. Lapkin, S., et al., Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clinical Simulation in Nursing, 2010. 6(6): p. e207-22.
3. Kaddoura, M., New graduate nurses' perceptions of the effects of clinical simulation on their critical thinking, learning, and confidence. The Journal of Continuing Education in Nursing, 2010. 41(11): p. 506.
4. Banning, M., The think aloud approach as an educational tool to develop and assess clinical reasoning in undergraduate students. Nurse Education Today, 2008. 28: p. 8-14.
5. Porter-O'Grady, T., Profound change: 21st century nursing. Nursing Outlook, 2001. 49(4): p. 182-186.
6. Andersson, A.K., M. Omberg, and M. Svedlund, Triage in the emergency department: a qualitative study of the factors which nurses consider when making decisions. Nursing in Critical Care, 2006. 11(3): p. 136-145.
7. O'Neill, E.S., N.M. Dluhy, and C. Chin, Modelling novice clinical reasoning for a computerized decision support system. Journal of Advanced Nursing, 2005. 49(1): p. 68-77.
8. Lee, J.E. and N. Ryan-Wenger, The "Think Aloud" seminar for teaching clinical reasoning: a case study of a child with pharyngitis. Journal of Pediatric Health Care, 1997. 11(3): p. 101-10.

Relevance: 40.00%

Abstract:

This is the final report of an Australian Learning and Teaching Council Teaching Fellowship which addressed the needs of two separate groups of learners: (1) final year law students studying ethics, and (2) law academics and other interested educators in higher education wishing to use information and communication technologies (ICT) to create engaging learning environments for their students but lacking the capacity to do so. The Fellowship gave final year law students a deeper appreciation of ethical practice than they receive through traditional lecture/tutorial means, via the development of an integrated program of blended learning that includes an online program entitled "Entry into Valhalla". This "ethics capstone" utilises multimedia produced with cost-effective resources (including the "Second Life" virtual environment) to create engaging, contextualised learning experiences. The Fellowship also built knowledge of producing cost-effective multimedia projects among other law academics and educators in higher education through staff development activities comprising workshops, conference presentations and an interactive website that uses the "Entry into Valhalla" program as a case study exemplar.

Relevance: 40.00%

Abstract:

Process models are used to convey semantics about business operations that are to be supported by an information system. Such models target a wide variety of professionals, including people with little modeling or domain expertise. We identify important user characteristics that influence the comprehension of process models. Through a free simulation experiment, we provide evidence that selected cognitive abilities, learning style, and learning strategy influence the development of process model comprehension. These insights draw attention to the importance of research that views process model comprehension as an emergent learning process rather than as an attribute of the models as objects. Based on our findings, we identify a set of organizational intervention strategies that can lead to more successful process modeling workshops.

Relevance: 40.00%

Abstract:

A guide to utilising multi-media for teaching and learning.

Relevance: 40.00%

Abstract:

In this paper we report findings from the first phase of an investigation that explored the experience of learning amongst high-level managers, project leaders and visitors in QUT’s “Cube”. “The Cube” is a giant, interactive, multi-media display: an award-winning configuration that hosts several interactive projects. The research team worked with three groups of participants to understand the relationship between a) the learning experiences that were intended in the establishment phase; b) the learning experiences that were enacted through the design and implementation of specific projects; and c) the lived experiences of learning of visitors interacting with the system. We adopted phenomenography as a research approach to understand variation in people’s understandings and lived experiences of learning in this environment. The project was conducted within the first twelve months of The Cube being open to visitors.

Relevance: 40.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system:

(1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects.

(2) Interpreting an OCT image is also hard, and this challenge is more profound than it appears. For instance, it would take a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, and even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from.

(3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides the underlying ground truth of the simulated images, because we specify that structure at the beginning of the simulation. This is one of the key contributions of this thesis. Building such a powerful simulation tool required a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
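For orientation only, the sketch below shows the bare bones of a Monte Carlo photon random walk with Henyey-Greenstein scattering and depth-resolved backscatter scoring. It is a toy, not the thesis simulator, and it omits the importance sampling, photon splitting, voxel mesh, and parallel A-scan machinery described above; all parameter values are placeholders.

```python
# Toy Monte Carlo photon transport in a homogeneous slab (illustration only).
# Photons take exponentially distributed steps, scatter according to the
# Henyey-Greenstein phase function, and a small fraction of any weight heading
# back toward the surface is scored per depth bin as "detected" signal.
import numpy as np

rng = np.random.default_rng(0)
mu_s, g = 10.0, 0.9              # scattering coefficient (1/mm), anisotropy
max_depth, n_photons = 1.0, 20000  # slab thickness (mm), photon count
bins = np.zeros(100)             # depth-resolved backscatter score

for _ in range(n_photons):
    z, uz, w = 0.0, 1.0, 1.0     # depth, z-direction cosine, photon weight
    while 0.0 <= z <= max_depth and w > 1e-4:
        z += uz * (-np.log(rng.random()) / mu_s)   # free path ~ Exp(mu_s)
        if not (0.0 <= z <= max_depth):
            break
        # Sample the scattering angle from Henyey-Greenstein.
        tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
        cos_t = (1 + g * g - tmp * tmp) / (2 * g)
        sin_t = np.sqrt(max(0.0, 1 - cos_t * cos_t))
        phi = 2 * np.pi * rng.random()
        uz = uz * cos_t - np.sqrt(max(0.0, 1 - uz * uz)) * sin_t * np.cos(phi)
        w *= 0.99                                  # small absorption per event
        if uz < 0:                                 # heading back toward the detector
            bins[min(int(z / max_depth * 100), 99)] += 0.01 * w

print("relative backscatter in the first five depth bins:", bins[:5])
```

Even this toy version makes the case for the acceleration techniques: very little photon weight ever returns from the deeper bins, so brute-force sampling of deep signal is extremely inefficient.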

Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would allow an OCT image to be interpreted completely and precisely without the help of a trained expert. It turns out that we can do this very well: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) the accuracy is about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure to predict the thickness of the different layers and thereby reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
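As a schematic of this classify-then-regress pipeline (a sketch under assumptions: the random-forest models, array shapes, and random stand-in data below are placeholders, not the thesis's actual committee-of-experts architecture), the two stages fit together roughly like this:

```python
# Schematic two-stage reconstruction: classify the structure type of an
# A-scan, then hand it to a structure-specific regressor for layer
# thicknesses. Models, shapes, and stand-in data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 1000, 64                                  # number of A-scans, samples per A-scan
X = rng.normal(size=(n, d))                      # stand-in for simulated OCT A-scans
structure = rng.integers(0, 3, size=n)           # stand-in structure-type labels (0-2)
thickness = rng.uniform(0.1, 1.0, size=(n, 4))   # stand-in per-layer thicknesses

# Stage 1: determine which structure is present.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, structure)

# Stage 2: one regressor per structure type (the "experts").
experts = {
    s: RandomForestRegressor(n_estimators=100, random_state=0).fit(
        X[structure == s], thickness[structure == s])
    for s in np.unique(structure)
}

def reconstruct(a_scan):
    """Predict the structure type, then layer thicknesses with the matching expert."""
    s = int(clf.predict(a_scan[None, :])[0])
    return s, experts[s].predict(a_scan[None, :])[0]

print(reconstruct(X[0]))
```

With simulated (image, truth) pairs standing in for the random placeholder arrays, the same two-stage structure applies: the classifier selects the expert, and the expert recovers the layer geometry.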

It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.