28 results for LEARNING OBJECTS REPOSITORIES - MODELS
Abstract:
The demands of developing modern, highly dynamic applications have led to an increasing interest in dynamic programming languages and mechanisms. Not only must applications evolve over time, but the object models themselves may need to be adapted to the requirements of different run-time contexts. Class-based models and prototype-based models, for example, may need to co-exist to meet the demands of dynamically evolving applications. Multi-dimensional dispatch, fine-grained and dynamic software composition, and run-time evolution of behaviour are further examples of diverse mechanisms which may need to co-exist in a dynamically evolving run-time environment. How can we model the semantics of these highly dynamic features, yet still offer some reasonable safety guarantees? To this end we present an original calculus in which objects can adapt their behaviour at run-time to changing contexts. Both objects and environments are represented by first-class mappings between variables and values. Message sends are dynamically resolved to method calls. Variables may be dynamically bound, making it possible to model a variety of dynamic mechanisms within the same calculus. Despite the highly dynamic nature of the calculus, safety properties are assured by a type assignment system.
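To make the flavour of such a calculus concrete, the following minimal Python sketch (an illustration only, not the authors' formal calculus or its type assignment system) represents objects as plain mappings from variable names to values and resolves message sends to method calls at run-time; rebinding a variable changes behaviour on the fly.

```python
# Minimal sketch: objects as first-class mappings (dicts), with message sends
# resolved at run-time. Illustrative only -- not the paper's formal calculus.

def send(obj, msg, *args):
    """Resolve a message send to a method call at run-time."""
    slot = obj.get(msg)
    if callable(slot):
        return slot(obj, *args)          # method call with the receiver bound
    if slot is not None:
        return slot                      # plain attribute lookup
    parent = obj.get("__parent__")
    if parent is not None:
        return send(parent, msg, *args)  # prototype-style delegation
    raise AttributeError(f"message not understood: {msg}")

# A prototype-style object: just a mapping from variables to values.
point = {"x": 1, "y": 2,
         "norm": lambda self: self["x"] ** 2 + self["y"] ** 2}
print(send(point, "norm"))   # 5

# Behaviour adapts at run-time by rebinding a variable in the mapping.
point["norm"] = lambda self: abs(self["x"]) + abs(self["y"])
print(send(point, "norm"))   # 3
```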
Abstract:
In this paper two models for the simulation of glucose-insulin metabolism of children with Type 1 diabetes are presented. The models are based on the combined use of Compartmental Models (CMs) and artificial Neural Networks (NNs). Data from children with Type 1 diabetes, stored in a database, have been used as input to the models. The data are taken from four children with Type 1 diabetes and contain information about glucose levels obtained from a continuous glucose monitoring system, insulin intake and food intake, along with the corresponding times. The influence of the administered insulin on plasma insulin concentration, as well as the effect of food intake on glucose input into the blood from the gut, are estimated from the CMs. The outputs of the CMs, along with previous glucose measurements, are fed to a NN, which provides short-term prediction of glucose values. For comparison, two different NN architectures have been tested: a Feed-Forward NN (FFNN) trained with the back-propagation algorithm with adaptive learning rate and momentum, and a Recurrent NN (RNN) trained with the Real Time Recurrent Learning (RTRL) algorithm. The results indicate that the best prediction performance can be achieved by the use of the RNN.
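As a rough illustration of the pipeline described above (and only that: the clinical data, the compartmental-model equations and the RNN/RTRL variant are not reproduced), the following Python sketch feeds toy compartmental-model outputs and recent glucose history into a small feed-forward regressor.

```python
# Toy sketch of the CM + NN pipeline: synthetic stand-ins for the CM outputs
# (plasma insulin, gut glucose) plus recent glucose history predict the next
# glucose value with a small feed-forward network. Illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
plasma_insulin = rng.uniform(0, 1, n)     # toy CM estimate from insulin intake
gut_glucose = rng.uniform(0, 1, n)        # toy CM estimate from food intake
glucose = 6 + 2 * gut_glucose - 1.5 * plasma_insulin + 0.1 * rng.standard_normal(n)

# Features: CM outputs plus the two most recent glucose measurements.
X = np.column_stack([plasma_insulin[2:], gut_glucose[2:],
                     glucose[1:-1], glucose[:-2]])
y = glucose[2:]                           # value to predict at the next step

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```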
Abstract:
In studies related to deep geological disposal of radioactive waste, it is current practice to transfer external information (e.g. from other sites, from underground rock laboratories or from natural analogues) to safety cases for specific projects. Transferable information most commonly includes parameters, investigation techniques, process understanding, conceptual models and high-level conclusions on system behaviour. Prior to transfer, the basis of transferability needs to be established. In argillaceous rocks, the most relevant common feature is the microstructure of the rocks, essentially determined by the properties of clay minerals. Examples from the Swiss and French programmes show how the transfer of information was handled and justified. These examples illustrate how transferability depends on the stage of development of a repository safety case and highlight the need for adequate system understanding at all sites involved to support the transfer.
Abstract:
The diversity of European culture is reflected in its healthcare training programs. In intensive care medicine (ICM), the differences in national training programs were so marked that it was unlikely that they could produce specialists of equivalent skills. The Competency-Based Training in Intensive Care Medicine in Europe (CoBaTrICE) program was established in 2003 as a Europe-based worldwide collaboration of national training organizations to create core competencies for ICM using consensus methodologies to establish common ground. The group's professional and research ethos created a social identity that facilitated change. The program was easily adaptable to different training structures and incorporated the voice of patients and relatives. The CoBaTrICE program has now been adopted by 15 European countries, with another 12 countries planning to adopt the training program, and is currently available in nine languages, including English. ICM is now recognized as a primary specialty in Spain, Switzerland, and the UK. There are still wide variations in structures and processes of training in ICM across Europe, although there has been agreement on a set of common program standards. The combination of a common "product specification" for an intensivist, combined with persisting variation in the educational context in which competencies are delivered, provides a rich source of research inquiry. Pedagogic research in ICM could usefully focus on the interplay between educational interventions, healthcare systems and delivery, and patient outcomes, for example whether competency-based programs are associated with lower error rates, whether communication skills training is associated with greater patient and family satisfaction, how multisource feedback might best be used to improve reflective learning and teamworking, or whether increasing the proportion of specialists trained in acute care in the hospital at weekends results in better patient outcomes.
Abstract:
OBJECTIVES The generation of learning goals (LGs) that are aligned with learning needs (LNs) is one of the main purposes of formative workplace-based assessment. In this study, we aimed to analyse how often trainer–student pairs identified corresponding LNs in mini-clinical evaluation exercise (mini-CEX) encounters and to what degree these LNs aligned with recorded LGs, taking into account the social environment (e.g. clinic size) in which the mini-CEX was conducted. METHODS Retrospective analyses of adapted mini-CEX forms (trainers’ and students’ assessments) completed by all Year 4 medical students during clerkships were performed. Learning needs were defined by the lowest score(s) assigned to one or more of the mini-CEX domains. Learning goals were categorised qualitatively according to their correspondence with the six mini-CEX domains (e.g. history taking, professionalism). Following descriptive analyses of LNs and LGs, multi-level logistic regression models were used to predict LGs by identified LNs and social context variables. RESULTS A total of 512 trainers and 165 students conducted 1783 mini-CEXs (98% completion rate). Concordantly, trainer–student pairs most often identified LNs in the domains of ‘clinical reasoning’ (23% of 1167 complete forms), ‘organisation/efficiency’ (20%) and ‘physical examination’ (20%). At least one ‘defined’ LG was noted on 313 student forms (18% of 1710). Of the 446 LGs noted in total, the most frequently noted were ‘physical examination’ (49%) and ‘history taking’ (21%). Corresponding LNs as well as social context factors (e.g. clinic size) were found to be predictors of these LGs. CONCLUSIONS Although trainer–student pairs often agreed in the LNs they identified, many assessments did not result in aligned LGs. The sparseness of LGs, their dependency on social context and their partial non-alignment with students’ LNs raise questions about how the full potential of the mini-CEX as not only a ‘diagnostic’ but also an ‘educational’ tool can be exploited.
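The rule used above to derive learning needs can be made concrete with a small sketch; the function, the scoring scale and the example values below are hypothetical, and the multi-level regression step is not reproduced.

```python
# Hypothetical sketch of the "learning need = lowest-scored domain(s)" rule
# described above; the scoring scale and example values are invented.

def learning_needs(scores):
    """Return the mini-CEX domain(s) given the lowest score on one form."""
    lowest = min(scores.values())
    return [domain for domain, score in scores.items() if score == lowest]

form = {"history taking": 7, "physical examination": 5, "professionalism": 9,
        "clinical reasoning": 5, "organisation/efficiency": 6}
print(learning_needs(form))   # ['physical examination', 'clinical reasoning']
```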
Abstract:
Background: Defining learning goals (LG) in alignment with learning needs (LN) is one of the key purposes of formative workplace-based assessment, but studies about this topic are scarce. Summary of Work: We analysed quantitatively and qualitatively how often trainer-student pairs identified the same LN during Mini Clinical Evaluation Exercises (Mini-CEX) in clerkships and to what degree those LNs were in line with the recorded LGs. Multilevel logistic regression models were used to predict LGs by identified LNs, controlling for context variables. Summary of Results: 512 trainers and 165 students conducted 1783 Mini-CEX (98% completion rate). Concordantly, trainer-student pairs most often identified LNs in the domains ‘clinical reasoning’ (23% of 1167 complete forms), ‘organisation / efficiency’ (20%) and ‘physical examination’ (20%). At least one ‘defined’ LG was noted on 313 student forms (18% of 1710), with a total of 446 LGs. Of these, the most frequent LGs were ‘physical examination’ (49% of 446 LGs) and ‘history taking’ (21%); corresponding LNs as well as context variables (e.g. clinic size) were found to be predictors of these LGs. Discussion and Conclusions: Although trainer-student pairs often agreed in their identified LNs, many assessments did not result in an aligned LG, or in any LG at all. Interventions are needed to enhance the proportion of (aligned) LGs in Mini-CEX in order to tap into its full potential not only as a ‘diagnostic’ but also as an ‘educational tool’. Take-home messages: The sparseness of LGs, their dependency on context variables and their partial non-alignment with students’ LNs raise the question of how the effectiveness of Mini-CEX can be further enhanced.
Abstract:
In this paper we present the results from the coverage and orbit determination accuracy simulations performed within the recently completed ESA study “Assessment Study for Space Based Space Surveillance (SBSS) Demonstration System” (Airbus Defence and Space consortium). The study investigated the capability of a space-based optical sensor (SBSS) orbiting in low Earth orbit (LEO) to detect and track objects in GEO (geosynchronous orbit), MEO (medium Earth orbit) and LEO, and to determine and improve initial orbits from such observations. Space-based systems may achieve better observation conditions than ground-based sensors in terms of astrometric accuracy, detection coverage, and timeliness. The primary observation mode of the proposed SBSS demonstrator is GEO surveillance, i.e. the systematic search and detection of unknown and known objects. GEO orbits are specific and unique orbits from a dynamical point of view. A space-based sensor may scan the whole GEO ring within one sidereal day if the orbit and pointing directions are chosen properly. For an efficient survey, our goal was to develop a leak-proof GEO fence strategy. We also show that MEO, LEO and other objects (GTO, Molniya, etc.) could be observed by the system, and that for a considerable number of LEO objects down to 1 cm in size we can obtain meaningful statistical data for the improvement and validation of space debris environment models.
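A back-of-the-envelope sketch (my own assumptions, not the study's actual fence design) of why an inertially fixed "fence" can cover the whole GEO ring in one sidereal day: GEO objects complete one revolution per sidereal day, so they drift through a fixed field of view at roughly 15 degrees per hour.

```python
# Back-of-the-envelope sketch; the fence width below is an assumed value, not
# the study's design. GEO objects revolve once per sidereal day, so they drift
# through an inertially fixed fence at about 15 deg/h.
SIDEREAL_DAY_H = 23.9345                        # sidereal day in hours
geo_rate = 360.0 / SIDEREAL_DAY_H               # deg/h past a fixed fence
fence_width_deg = 6.0                           # assumed field-of-view width
dwell_min = fence_width_deg / geo_rate * 60.0   # time an object spends in view
print(f"GEO drift rate: {geo_rate:.2f} deg/h")
print(f"Dwell time in a {fence_width_deg:.0f} deg fence: {dwell_min:.1f} min")
```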
Abstract:
While sequence learning research models complex phenomena, previous studies have mostly focused on unimodal sequences. The goal of the current experiment is to put implicit sequence learning into a multimodal context: to test whether it can operate across different modalities. We used the Task Sequence Learning paradigm to test whether sequence learning varies across modalities, and whether participants are able to learn multimodal sequences. Our results show that implicit sequence learning is very similar regardless of the source modality. However, the presence of correlated task and response sequences was required for learning to take place. The experiment provides new evidence for implicit sequence learning of abstract conceptual representations. In general, the results suggest that correlated sequences are necessary for implicit sequence learning to occur. Moreover, they show that elements from different modalities can be automatically integrated into one unitary multimodal sequence.
Abstract:
Patient-specific biomechanical models including local bone mineral density and anisotropy have gained importance for assessing musculoskeletal disorders. However, the trabecular bone anisotropy captured by high-resolution imaging is only available at the peripheral skeleton in clinical practice. In this work, we propose a supervised learning approach to predict trabecular bone anisotropy that builds on a novel set of pose-invariant feature descriptors. The statistical relationship between trabecular bone anisotropy and feature descriptors was learned from a database of pairs of high-resolution QCT and clinical QCT reconstructions. On a set of leave-one-out experiments, we compared the accuracy of the proposed approach with that of previous ones, and report a mean prediction error of 6% for the tensor norm, 6% for the degree of anisotropy and 19° for the principal tensor direction. These findings show the potential of the proposed approach to predict trabecular bone anisotropy from clinically available QCT images.
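A minimal leave-one-out regression skeleton in the spirit of the evaluation above; the descriptors and targets below are synthetic stand-ins (the paper's pose-invariant descriptors and tensor-valued targets are not reproduced), and only a mean absolute error is reported.

```python
# Generic leave-one-out regression sketch with synthetic data, standing in for
# the supervised prediction of a tensor-derived quantity (e.g. tensor norm).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 8))                   # toy feature descriptors
w = rng.standard_normal(8)
y = 5.0 + X @ w + 0.1 * rng.standard_normal(30)    # toy scalar target

errors = []
for train, test in LeaveOneOut().split(X):
    model = Ridge(alpha=1.0).fit(X[train], y[train])
    errors.append(abs(model.predict(X[test])[0] - y[test][0]))

print(f"mean absolute leave-one-out error: {np.mean(errors):.3f}")
```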
Abstract:
We present a novel surrogate model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called Sparse Pseudo-input GP as a global model. The algorithm is tested on four benchmark problems, whose number of starting points ranges from 10² to 10⁴. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and it even provides significant advantages when compared with state-of-the-art EI algorithms.
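The standard expected improvement criterion mentioned above can be written down in a few lines; the sketch below uses a scikit-learn GP posterior on a toy objective and shows only the single-model EI step, not the sparse/local multi-scale machinery of SpLEGO.

```python
# Standard expected improvement (EI) for minimization on a toy 1-D problem.
# Only the EI step is shown; SpLEGO's sparse global / local GP models are not.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, f_min):
    """EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu) / sigma."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3 * x) + 0.5 * x             # toy objective to minimize
X = np.array([[0.2], [1.1], [2.3], [3.0]])        # points evaluated so far
y = f(X).ravel()

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
Xc = np.linspace(0.0, 3.5, 200).reshape(-1, 1)    # candidate locations
mu, sigma = gp.predict(Xc, return_std=True)
ei = expected_improvement(mu, sigma, y.min())
print("next evaluation at x =", Xc[np.argmax(ei), 0])
```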
Abstract:
Despite the strong increase in observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the high number of extrasolar planets that have been discovered, it is now possible to look at the planets as a population that puts statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis where synthetic planetary populations are generated and compared to the actual population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models that address one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as models describing the protoplanetary disc (of gas and solids), those that describe one (proto)planet (its solid core, gaseous envelope and atmosphere), and finally those that describe the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then briefly address important results of planetary population synthesis like the planetary mass function or the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and specialized models describing them can be put to the observational test. Owing to their nature as meta-models, global models depend on the results of specialized models, and therefore on the development of the field of planet formation theory as a whole. Because there are important uncertainties in this theory, it is likely that the global models will in future undergo significant modifications. Despite these limitations, global models can already now yield many testable predictions. With future global models addressing the geophysical characteristics of the synthetic planets, it should eventually become possible to make predictions about the habitability of planets based on their formation and evolution.
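A toy Monte Carlo illustration of the population-synthesis idea (every distribution and scaling below is invented for illustration and is not any published global model): draw disc initial conditions, push them through a deliberately oversimplified "global model", and inspect the resulting synthetic mass function.

```python
# Toy population synthesis: invented disc distributions and an invented
# disc-to-planet scaling, used only to illustrate the method's structure.
import numpy as np

rng = np.random.default_rng(42)
n_systems = 10_000

# Disc initial conditions: log-normal disc mass (Jupiter masses), uniform lifetime (Myr).
disc_mass = 10 ** rng.normal(1.0, 0.5, n_systems)
lifetime = rng.uniform(1.0, 10.0, n_systems)

# Oversimplified "global model": planet mass grows with disc mass and lifetime.
planet_mass = 0.03 * disc_mass * np.sqrt(lifetime) * 10 ** rng.normal(0.0, 0.3, n_systems)

# Synthetic planetary mass function -- the kind of statistic compared to surveys.
edges = np.logspace(-2, 2, 9)
counts, _ = np.histogram(planet_mass, bins=edges)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:8.2f} - {hi:8.2f} M_Jup : {c:5d} planets")
```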