967 results for Learning Choices
Abstract:
Rats with fornix transection, or with cytotoxic retrohippocampal lesions that removed entorhinal cortex plus ventral subiculum, performed a task that permits incidental learning about either allocentric (Allo) or egocentric (Ego) spatial cues without the need to navigate by them. Rats learned eight visual discriminations among computer-displayed scenes in a Y-maze, using the constant-negative paradigm. Every discrimination problem included two familiar scenes (constants) and many less familiar scenes (variables). On each trial, the rats chose between a constant and a variable scene, with the choice of the variable rewarded. In six problems, the two constant scenes had correlated spatial properties: either Allo (each constant always appeared in the same maze arm), Ego (each constant always appeared in a fixed direction from the start arm), or both (Allo + Ego). In two No-Cue (NC) problems, the two constants appeared in randomly determined arms and directions. Intact rats learn problems with an added Allo or Ego cue faster than NC problems; this facilitation provides indirect evidence that they learn the associations between scenes and spatial cues, even though that is not required for problem solution. Fornix and retrohippocampal-lesioned groups learned NC problems at a similar rate to sham-operated controls and showed as much facilitation of learning by added spatial cues as did the controls; therefore, both lesion groups must have encoded the spatial cues and incidentally learned their associations with particular constant scenes. Similar facilitation was seen in subgroups that had short or long prior experience with the apparatus and task. Therefore, neither major hippocampal input-output system is crucial for learning about allocentric or egocentric cues in this paradigm, which does not require rats to control their choices or navigation directly by spatial cues.
Abstract:
It is often necessary to selectively attend to important information, at the expense of less important information, especially if you know you cannot remember large amounts of information. The present study examined how younger and older adults select valuable information to study when given unrestricted choices about how to allocate study time. Participants were shown a display of point values ranging from 1 to 30. Participants could choose which values to study, and the associated word was then shown. Study time, and the choice to restudy words, were under the participant's control during the 2-minute study session. Overall, both age groups selected high-value words to study and studied these more than the lower-value words. However, older adults allocated a disproportionately greater amount of study time to the higher-value words, and age differences in recall were reduced or eliminated for the highest-value words. In addition, older adults capitalized on recency effects in a strategic manner, studying high-value items often but also immediately before the test. A multilevel mediation analysis indicated that participants strategically remembered items with higher point values, and older adults showed a similar or even stronger strategic process that may help to compensate for poorer memory. These results demonstrate efficient (and different) metacognitive control operations in younger and older adults, which can allow for strategic regulation of study choices and allocation of study time when remembering important information. The findings are interpreted in terms of life span models of agenda-based regulation and discussed in terms of practical applications. (PsycINFO Database Record (c) 2013 APA, all rights reserved)
Abstract:
Research shows that people with diabetes want their lives to proceed as normally as possible, but some patients experience difficulty in reaching their desired treatment goals. The learning process is a complex phenomenon interwoven into every facet of life. Patients and healthcare providers often have different perspectives on care, which give rise to different expectations about what patients need to learn and cope with. The aim of this study, therefore, is to describe the experience of learning to live with diabetes. Interviews were conducted with 12 patients afflicted with type 1 or type 2 diabetes. The interviews were then analysed with reference to the reflective lifeworld research approach. The analysis shows that when the afflicted realize that their bodies undergo changes and that blood sugar levels are no longer balanced as they were earlier in life, they can begin adjusting to their new conditions early. The afflicted must take responsibility for balancing their blood sugar levels and incorporating the illness into their lives. Achieving such goals necessitates knowledge. The search for knowledge and sensitivity to changes are constant requirements for people with diabetes. Learning is driven by the tension caused by the need for and dependence on safe blood sugar control, the fear of losing such control, and the fear of future complications. The most important responsibilities for these patients are aspiring to understand their bodies as lived bodies, ensuring safety and security, and acquiring the knowledge essential to making conscious choices.
Abstract:
Time-place learning based on food association was investigated in eight food-restricted Nile tilapias. Each fish was individually housed for 10 days in an experimental tank to adjust to laboratory conditions, and fed daily in excess. Feeding was then interrupted for 17 days. Training then started under a food-restricted regime in a tank divided into three interconnected compartments. Daily food was offered in one compartment (left or right side) of the tank in the morning and on the opposite side in the afternoon, for a continuous 30-day period. Frequency of choices on the right side was measured on days 10, 20 and 30 (during these test days, fish were not fed). Following this 30-day conditioning period, the Nile tilapias were able to switch sides at the correct period of the day to get food, suggesting that food restriction facilitates time-place learning discrimination. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
Time-place learning based on food association was investigated in the fish Nile tilapia. During a 30-day period, food was placed at one side of the aquarium (containing three compartments) in the morning and at the opposite side in the afternoon. Learning was inferred from the number of correct side choices made by all fish on each test day (days 15 and 30). During the test days, fish were not fed. The Nile tilapia did not learn to switch sides at the correct period of the day in order to get food, thus suggesting that this species does not have time-place learning ability.
Abstract:
Time-place learning based on food association was investigated in the cichlids angelfish (Pterophyllum scalare) and pearl cichlid (Geophagus brasiliensis) reared in isolation, thereby eliminating social influence on foraging. During a 30-day period, food was placed on one side of the aquarium (containing three compartments) in the morning and on the opposite side in the afternoon. Learning was inferred from the number of correct side choices made by all fish on each test day (days 15 and 30). During the test days, fish were not fed. The angelfish learned to switch sides at the correct period of the day in order to get food, suggesting that this species has time-place learning ability when individually reared. On the other hand, the same was not observed for the pearl cichlid. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
One important metaphor, drawn from biological theories and used to investigate organizational and business strategy issues, is that of heredity; an area requiring further investigation is the extent to which the characteristics of blueprints inherited from the parent help explain the subsequent development of spawned ventures. In order to shed light on the tension between inherited patterns and the new trajectory that may characterize spawned ventures' development, we propose a model aimed at investigating which blueprint elements might exert an effect on business model design choices, and to what extent their persistence (or abandonment) determines subsequent business model innovation. Under the assumption that academic and corporate institutions transmit different genes to their spin-offs, we expect heterogeneity in the elements that affect business model design choices and their subsequent evolution. This is why we carry out a twofold analysis in the biotech (meta)industry: under a multiple-case research design, the business model, and especially the fundamental design elements and themes that scholars have identified to decompose the construct, is thoroughly analysed. Our purpose is to isolate the dimensions of the business model that may have been the object of legacy and those along which an experimentation and learning process is more likely to happen, bearing in mind that differences between academic and corporate spin-offs might not be as evident as expected, especially considering that business model innovation may occur.
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view within the big tent of deep learning and are the best choices for understanding and pointing out the strengths and weaknesses of each. CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook for solving face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
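The automatic feature extraction that the abstract above attributes to CNNs rests on the convolution operation; the following is a minimal, framework-free sketch of that operation, where the edge-detecting kernel is hand-picked for illustration rather than learned, as a real CNN would learn it:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in CNNs).

    `image` and `kernel` are lists of rows; the kernel slides over every
    position where it fits entirely inside the image.
    """
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge kernel applied to a tiny image whose right half is bright:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(image, edge))  # responds only where the dark/bright boundary lies
```

The output feature map is zero everywhere except over the dark-to-bright boundary, which is the sense in which a bank of such (learned) kernels extracts features automatically.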
Abstract:
There has been a greater emphasis over the past few years on encouraging high school students to take up engineering as a career. This is due to a greater need for engineers in society, particularly in areas suffering a skills shortage. Both the engineering profession and universities across Australia have moved to address this shortage, resulting in a proliferation of engineering outreach activities and programs. The Engineering Link Group (TELG) began the Engineering Link Project (ELP) over a decade ago with a focus on helping motivated high school students make an informed choice about engineering as a career. It also aimed to encourage more high school students to study maths and science at high school. From the start, the ELP was designed so that the students became engineers, rather than just hearing from or watching engineers. Real working engineers pose problems to groups of students for them to solve over the course of a day. In this way, students experience what it is like to be an engineer. It has been found that the project does help high school students make more informed career choices about engineering. The project also gave the students real-life and practical reasons for studying sciences and mathematics at high school. © 2005, Australasian Association for Engineering Education
Abstract:
This poster outlines the system which the Business School Undergraduate Programme has developed to manage the choice of options by students studying on its programmes. This involves the production of a networked computer package which presents students with the options available to them and leads them through the process of choosing their options on-line. The reasons for developing this system are outlined and the advantages which it has brought to the administration of large numbers of students are discussed.
Abstract:
Ontology construction for any domain is a labour-intensive and complex process. Any methodology that can reduce the cost and increase efficiency has the potential to make a major impact in the life sciences. This paper describes an experiment in ontology construction from text for the animal behaviour domain. Our objective was to see how much could be done in a simple and relatively rapid manner using a corpus of journal papers. We used a sequence of pre-existing text processing steps, and here describe the different choices made to clean the input, to derive a set of terms and to structure those terms in a number of hierarchies. We describe some of the challenges, especially that of focusing the ontology appropriately given a starting point of a heterogeneous corpus. Results - Using mainly automated techniques, we were able to construct an 18,055-term ontology-like structure with 73% recall of animal behaviour terms, but a precision of only 26%. We were able to clean unwanted terms from the nascent ontology using lexico-syntactic patterns that tested the validity of term inclusion within the ontology. We used the same technique to test for subsumption relationships between the remaining terms, to add structure to the initially broad and shallow structure we generated. All outputs are available at http://thirlmere.aston.ac.uk/~kiffer/animalbehaviour/. Conclusion - We present a systematic method for the initial steps of ontology or structured-vocabulary construction for scientific domains that requires limited human effort and can contribute both to ontology learning and to maintenance. The method is useful both for the exploration of a scientific domain and as a stepping stone towards formally rigorous ontologies. The filtering of recognised terms from a heterogeneous corpus, to focus upon those that are the topic of the ontology, is identified as one of the main challenges for research in ontology learning.
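A lexico-syntactic subsumption test of the kind mentioned in the abstract above can be sketched as follows; the single "X such as Y" (Hearst-style) pattern and the example sentence are illustrative stand-ins, not the paper's actual pattern set:

```python
import re

# One Hearst-style pattern: "NP such as NP1, NP2 and NP3" suggests that
# NP1/NP2/NP3 are subsumed by (are kinds of) NP.  Real systems use a
# battery of such patterns over parsed noun phrases; this is a toy version.
HEARST = re.compile(r"(\w+) such as ([\w ,]+?)(?:\.|$)")

def subsumptions(sentence):
    """Extract (hyponym, hypernym) pairs from one sentence, if the pattern fires."""
    m = HEARST.search(sentence)
    if not m:
        return []
    hypernym = m.group(1)
    hyponyms = re.split(r",\s*|\s+and\s+", m.group(2))
    return [(h.strip(), hypernym) for h in hyponyms if h.strip()]

print(subsumptions("behaviours such as grooming, foraging and nesting."))
```

The same firing-or-not signal can be used in reverse, as the abstract describes: a candidate term that never appears in such patterns with any accepted term is evidence against its inclusion in the ontology.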
Abstract:
Ontology construction for any domain is a labour-intensive and complex process. Any methodology that can reduce the cost and increase efficiency has the potential to make a major impact in the life sciences. This paper describes an experiment in ontology construction from text for the Animal Behaviour domain. Our objective was to see how much could be done in a simple and rapid manner using a corpus of journal papers. We used a sequence of text processing steps, and describe the different choices made to clean the input, to derive a set of terms and to structure those terms in a hierarchy. We were able, in a very short space of time, to construct a 17,000-term ontology with a high percentage of suitable terms. We describe some of the challenges, especially that of focusing the ontology appropriately given a starting point of a heterogeneous corpus.
Abstract:
Networked Learning, e-Learning and Technology Enhanced Learning have each been defined in different ways, as people's understanding about technology in education has developed. Yet each could also be considered as a terminology competing for a contested conceptual space. Theoretically this can be a ‘fertile trans-disciplinary ground for represented disciplines to affect and potentially be re-orientated by others’ (Parchoma and Keefer, 2012), as differing perspectives on terminology and subject disciplines yield new understandings. Yet when used in government policy texts to describe connections between humans, learning and technology, terms tend to become fixed in less fertile positions linguistically. A deceptively spacious policy discourse that suggests people are free to make choices conceals an economically-based assumption that implementing new technologies, in themselves, determines learning. Yet it actually narrows choices open to people as one route is repeatedly in the foreground and humans are not visibly involved in it. An impression that the effective use of technology for endless improvement is inevitable cuts off critical social interactions and new knowledge for multiple understandings of technology in people's lives. This paper explores some findings from a corpus-based Critical Discourse Analysis of UK policy for educational technology during the last 15 years, to help to illuminate the choices made. This is important when through political economy, hierarchical or dominant neoliberal logic promotes a single ‘universal model’ of technology in education, without reference to a wider social context (Rustin, 2013). Discourse matters, because it can ‘mould identities’ (Massey, 2013) in narrow, objective economically-based terms which 'colonise discourses of democracy and student-centredness' (Greener and Perriton, 2005:67). 
This undermines subjective social, political, material and relational (Jones, 2012: 3) contexts for those learning when humans are omitted. Critically confronting these structures is not considered a negative activity. Whilst deterministic discourse for educational technology may leave people unconsciously restricted, I argue that, through a close analysis, it offers a deceptively spacious theoretical tool for debate about the wider social and economic context of educational technology. Methodologically it provides insights about ways technology, language and learning intersect across disciplinary borders (Giroux, 1992), as powerful, mutually constitutive elements, ever-present in networked learning situations. In sharing a replicable approach for linguistic analysis of policy discourse I hope to contribute to visions others have for a broader theoretical underpinning for educational technology, as a developing field of networked knowledge and research (Conole and Oliver, 2002; Andrews, 2011).
Abstract:
Translation training in the university context needs to train students in the processes, in order to enhance and optimise the product as outcome of these processes. Evaluation of a target text as product has often been accused of being a subjective process, which does not easily lend itself to the type of feedback that could enable students to apply criteria more widely. For students, it often seems as though they make different inappropriate or incorrect choices every time they translate a new text, and the learning process appears unpredictable and haphazard. Within functionalist approaches to translation, with their focus on the target text in terms of functional adequacy to the intended purpose, as stipulated in the translation brief, there are guidelines for text production that can help to develop a more systematic approach not only to text production, but also to translation evaluation. In the context of a focus on user knowledge needs, target language conventions and acceptability, the use of corpora is an indispensable tool for the trainee translator. Evaluation can take place against the student's own reasoned selection process, based on hard evidence, against criteria which currently obtain in the TL and the TL culture. When trainee and evaluator work within the same guidelines, there is more scope for constructive learning and feedback.
Abstract:
This study examined the construct validity of the Choices questionnaire, which purports to support the theory of Learning Agility. Specifically, Learning Agility attempts to predict an individual's potential performance in new tasks. Construct validity was measured by examining the convergent/discriminant validity of the Choices Questionnaire against a cognitive ability measure and two personality measures. The Choices Questionnaire did tap a construct distinct from the cognitive ability and personality measures, suggesting that this measure may have considerable value in personnel selection. This study also examined the relationship of this new measure to job performance and job promotability. Results of this study found that the Choices Questionnaire predicted job performance and job promotability above and beyond cognitive ability and personality. Data from 107 law enforcement officers, along with two of their co-workers and a supervisor, resulted in a correlation of .08 between Learning Agility and cognitive ability. Learning Agility correlated .07 with Learning Goal Orientation and .17 with Performance Goal Orientation. Correlations with the Big Five personality factors ranged from −.06 to .13, with Conscientiousness and Openness to Experience, respectively. Learning Agility correlated .40 with supervisory ratings of job promotability and .37 with supervisory ratings of overall job performance. Hierarchical regression analysis found incremental validity for Learning Agility over cognitive ability and the Big Five factors of personality for supervisory ratings of both promotability and overall job performance. A literature review was completed to integrate the Learning Agility construct into a nomological net of personnel selection research. Additionally, practical applications and future research directions are discussed.
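The incremental-validity logic in the abstract above (does Learning Agility add predictive power beyond cognitive ability?) can be sketched for the reduced two-predictor case, where R² is computable from pairwise correlations alone. The correlation values below are placeholders loosely echoing the reported figures (r = .37 with rated performance, r = .08 with cognitive ability); the .10 for cognitive ability vs. performance is assumed here, and the actual study entered the full Big Five as well:

```python
def r2_two_predictors(ry1, ry2, r12):
    """R^2 of y regressed on x1 and x2, from the three pairwise correlations."""
    return (ry1**2 + ry2**2 - 2 * ry1 * ry2 * r12) / (1 - r12**2)

# Placeholder correlations echoing the abstract (the .10 is an assumption):
ry_cog, ry_la, r_cog_la = 0.10, 0.37, 0.08

step1 = ry_cog**2                                   # Step 1: cognitive ability only
step2 = r2_two_predictors(ry_cog, ry_la, r_cog_la)  # Step 2: add Learning Agility
print(f"Step 1 R^2 = {step1:.3f}")
print(f"Step 2 R^2 = {step2:.3f}")
print(f"Incremental validity (delta R^2) = {step2 - step1:.3f}")
```

Because Learning Agility is nearly uncorrelated with cognitive ability, almost all of its predictive variance is non-redundant, which is why the delta-R² at step 2 is large relative to step 1.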