881 results for knowing-what (pattern recognition) element of knowing-how knowledge
Abstract:
The implementation of the new Architecture Degree and the important normative changes in the building sector imply the need for new teaching methodologies that enhance skills and competencies in order to respond to the increasing requirements that society places on the future architect. The aim of this paper is to present, analyze and discuss the development of multidisciplinary workshops as a new teaching methodology used in several Construction subjects of the Architecture Degree at the University of Alicante. These workshops are conceived to synthesize and complement the technical knowledge acquired by students during the Degree and to enhance the skills and competencies necessary for professional practice. To that end, we experimented with current subjects of the degree during this academic year, applying the requirements defined in the future Architecture Degree in a practical way through workshops shared between different subjects, combining technical knowledge with the resolution of construction problems in the development of an architectural project. By developing these workshops across subjects, we dissolve the traditional boundaries between different areas of the Degree. This multidisciplinary workshop methodology allows students to use all the global knowledge acquired during their studies and, at the same time, enhances their ability to communicate and discuss their ideas and solutions in public. It also increases their capacity for self-criticism and fosters their ability to undertake learning strategies and research autonomously. The methodology is based on the development of a practical project common to several subjects from different knowledge areas within the "Technology Block" of the future Architecture Degree. Students thus approach the problem globally, discussing it simultaneously with teachers from different areas. These workshops encourage an interactive class rather than a traditional lecture. Work is evaluated continuously, valuing students' participative attitude, group work during class time and the achievement of weekly objectives, while stimulating individual responsibility and positive interdependence within the working group. The exercises are designed to improve students' ability to present their ideas and solutions in public, to discuss and defend their technical solutions before peers and teachers (peer review), to exercise self-criticism, and to undertake autonomous learning strategies while carrying out personal research into new technologies, systems and materials. In recent years, a majority of students have shown a preference for this multidisciplinary workshop methodology, with very satisfactory academic results. In conclusion, the viability of introducing the new contents and teaching methodologies needed to acquire the skills of the future Architecture Degree can now be verified through workshops shared between several subjects, which have been well received by students and have produced demonstrably positive academic results.
Abstract:
Human behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition communities have made great efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early using only a small portion of the input, in addition to its ability to predict behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of a person's trajectory in the scene, which allows a high-level understanding of global human behaviour. The trajectory representation is used as a descriptor of the individual's activity, and these descriptors serve as input to a classification stage for pattern recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence, while partial sequences are processed to evaluate the early-prediction capabilities for a given observation time of the scene. The experiments were carried out using three different datasets of the CAVIAR database, taking into account the behaviour of an individual. Additionally, several classic classifiers were used in the experiments in order to evaluate the robustness of the proposal. Results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
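A minimal sketch may make the pipeline this abstract describes concrete: a fixed-length trajectory descriptor, a classifier trained on complete sequences, and early prediction evaluated on truncated prefixes. The synthetic trajectories, descriptor length and nearest-neighbour classifier are illustrative assumptions, not the authors' implementation.

```python
# Sketch of trajectory-descriptor classification with early prediction.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def trajectory_descriptor(points, n_samples=16):
    """Resample an (x, y) trajectory to a fixed-length flat descriptor."""
    points = np.asarray(points, dtype=float)
    t = np.linspace(0, 1, len(points))
    t_new = np.linspace(0, 1, n_samples)
    x = np.interp(t_new, t, points[:, 0])
    y = np.interp(t_new, t, points[:, 1])
    return np.concatenate([x, y])

# Hypothetical training data: full trajectories with behaviour labels.
rng = np.random.default_rng(0)
full_trajs = [rng.random((50, 2)).cumsum(axis=0) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit([trajectory_descriptor(tr) for tr in full_trajs], labels)

# Early prediction: classify using only the first 30% of each trajectory.
ratio = 0.3
partial = [tr[: max(2, int(len(tr) * ratio))] for tr in full_trajs]
early_preds = clf.predict([trajectory_descriptor(tr) for tr in partial])
print(early_preds)
```

Varying `ratio` reproduces the kind of observation-time sweep the experiments describe: accuracy on prefixes is compared against accuracy on the complete sequence.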
Abstract:
Intercultural competence (IC), as an essential part of the conceptualization of the cultural dimension in FLT, has been promoted by educationalists as the preferred type of competence. One of the challenges of incorporating IC into FLT is to move from the recognition of IC as a model of teaching (Byram, Nichols and Stevens, 2001) to the development of practical applications. This may be because teachers do not have sufficient knowledge of the theory behind the concept and consequently have difficulty implementing the curriculum requirements regarding IC in their teaching. The purpose of this study was to investigate how teachers of English in upper secondary schools in Sweden interpret the concept of IC and, accordingly, what their view of culture in English language teaching is. In order to answer the research question, I carried out an exploratory investigation using a qualitative research method in the form of semi-structured interviews. The results are similar to those of previous studies (Lundgren, 2002; Larzén, 2005) and suggest that teachers lack theoretical background and central guidance with regard to IC and do not always integrate language and culture into an intercultural model of English language pedagogy.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-05
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are increasingly used in a wide range of pattern recognition problems such as image segmentation. However, the EM algorithm requires considerable computational time when applied to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be sped up further by adopting a multiresolution kd-tree structure for the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain.
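For reference, here is a plain EM loop for a two-component 1D Gaussian mixture, showing the E-step/M-step structure the paper accelerates; the kd-tree variant evaluates the E-step per tree node rather than per data point. This is a standard textbook sketch, not the multiresolution version itself.

```python
# Minimal EM for a two-component 1D Gaussian mixture.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

# Initial parameters: mixing weights, means, variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = (w / np.sqrt(2 * np.pi * var)
            * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    n_k = resp.sum(axis=0)
    w = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

print(w, mu, var)
```

The E-step dominates the cost on large volumes because `resp` is recomputed for every voxel at every iteration, which is exactly the step the multiresolution kd-tree approach amortizes.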
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations and often constraints are built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, their orientation or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large for even very small voxel spaces (a 5 × 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
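A toy enumeration can illustrate the non-uniqueness result: count every occupancy assignment of a tiny voxel grid that is consistent with two orthographic "images" (here, per-row and per-column occupancy, standing in for two orthogonal silhouettes). The grid size, the consistency rule, and all names are simplifications for illustration, not the paper's first-order-logic formulation.

```python
# Count voxel scenes consistent with two orthographic projections.
from itertools import product

N = 3  # 3 x 3 voxel space

def project(grid):
    """Occupancy seen from the side (rows) and from above (columns)."""
    rows = tuple(any(grid[r][c] for c in range(N)) for r in range(N))
    cols = tuple(any(grid[r][c] for r in range(N)) for c in range(N))
    return rows, cols

# Observed images produced by some true scene.
true_scene = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
observed = project(true_scene)

# Brute force: every voxel assignment whose projections match.
solutions = 0
for bits in product([0, 1], repeat=N * N):
    grid = tuple(tuple(bits[r * N + c] for c in range(N)) for r in range(N))
    if project(grid) == observed:
        solutions += 1

print(f"{solutions} scenes are consistent with the two images")
```

Even this 3 × 3 example admits many consistent scenes, which mirrors the paper's point that the solution count explodes without added constraints.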
Abstract:
Participants in contingent valuation studies may be uncertain about a number of aspects of the policy and survey context. The uncertainty management model of fairness judgments states that individuals will evaluate a policy in terms of its fairness when they do not know whether they can trust the relevant managing authority, or when they experience uncertainty due to insufficient knowledge of the general issues surrounding the environmental policy. Similarly, some researchers have suggested that participants who do not know how to answer willingness-to-pay (WTP) questions convey their general attitudes toward the public good rather than report well-defined economic preferences. These contentions were investigated in a sample of 840 residents of four urban catchments across Australia who were interviewed about their WTP for stormwater pollution abatement. Four sources of uncertainty were measured: amount of prior issue-related thought, trustworthiness of the water authority, insufficient scenario information, and WTP response uncertainty. A logistic regression model was estimated in each subsample to test the main effects of the uncertainty sources on WTP as well as their interactions with fairness and proenvironmental attitudes. Results supported the uncertainty management model in only one of the four samples. Similarly, proenvironmental attitudes rarely interacted significantly with uncertainty, and when they did, the interactions were more complex than hypothesised. It was concluded that uncertain individuals were generally no more likely than other participants to draw on either fairness evaluations or proenvironmental attitudes when deciding whether to pay for stormwater pollution abatement.
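A hedged sketch of the kind of model described: a logistic regression of a binary WTP response on uncertainty sources plus uncertainty-by-attitude interaction terms. The variable names and the synthetic data frame are hypothetical; the study's exact specification may differ.

```python
# Logistic regression with uncertainty x attitude interactions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "wtp_yes": rng.integers(0, 2, n),          # 1 = willing to pay
    "trust": rng.normal(0, 1, n),              # trust in the water authority
    "info_insufficient": rng.normal(0, 1, n),  # perceived scenario info gaps
    "fairness": rng.normal(0, 1, n),
    "proenv": rng.normal(0, 1, n),
})

# Main effects of the uncertainty sources plus interaction terms.
model = smf.logit(
    "wtp_yes ~ trust + info_insufficient"
    " + trust:fairness + info_insufficient:proenv",
    data=df,
).fit(disp=0)
print(model.summary())
```

The study's test reduces to whether the interaction coefficients (e.g. `trust:fairness`) are significant, i.e. whether uncertain respondents lean on fairness or attitudes when answering.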
Abstract:
In this paper, we present a new scheme for off-line recognition of multi-font numerals using the Takagi-Sugeno (TS) model. In this scheme, the binary image of a character is partitioned into a fixed number of sub-images called boxes. The features consist of normalized vector distances (γ) from each box. Each feature extracted from different fonts gives rise to a fuzzy set. However, when we have a small number of fonts, as in the case of multi-font numerals, the choice of a proper fuzzification function is crucial. Hence, we have devised a new fuzzification function involving parameters that take into account the variations in the fuzzy sets. The new fuzzification function is employed in the TS model for the recognition of multi-font numerals.
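A minimal sketch of the box-based feature extraction described above: the binary character image is split into a fixed grid of boxes and a normalized vector distance is computed per box. The exact distance definition used here (mean distance of the "on" pixels from the box's top-left corner, normalized by the box diagonal) is an illustrative assumption, not necessarily the paper's γ formula.

```python
# Box-partition feature extraction for a binary character image.
import numpy as np

def box_features(img, grid=(4, 4)):
    h, w = img.shape
    bh, bw = h // grid[0], w // grid[1]
    diag = np.hypot(bh, bw)
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            box = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            ys, xs = np.nonzero(box)
            if len(xs) == 0:
                feats.append(0.0)  # empty box contributes zero
            else:
                # Mean pixel distance from the box origin, normalized.
                feats.append(np.hypot(ys, xs).mean() / diag)
    return np.array(feats)

# Toy 16 x 16 binary "numeral": a crude vertical stroke like a "1".
img = np.zeros((16, 16), dtype=int)
img[2:14, 7:9] = 1
print(box_features(img))
```

Each font then yields a distribution of these per-box values, and it is over those distributions that the paper's fuzzification function builds its fuzzy sets.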
Abstract:
What characterizes academic research in a doctoral programme is the presentation of two fundamental requirements. The first is the element of innovation, capable of enriching research on the proposed theme. The second is that of pointers that open up possibilities for new paths of rereading. In this sense, we are convinced that the present thesis meets this expectation, since the innovative element of this research is the deconstruction of the concept of the saga of creation proposed by Karl Barth. It is new because we did not find, as we suspected, any author, work or study that has proposed this same task. On the contrary, there are even some authors who praise the research carried out by Karl Barth, as is the case with Coats and Brueggemann. Although they react to some points of Barth's theology, they did not do so, specifically, with regard to the concept of saga. The second requirement, structurally linked to the first, is the one that opens up possibilities for rereading. Drawing on the thought of Paul Ricoeur, we propose a new biblical hermeneutics, grounded in what Ricoeur called the via longa (the long route), which makes use of various methods, including the historical-critical one, to seek an interpretation of the world of the text that generates meaning for the world in front of the text. We believe this proposal is capable of overcoming a purely dogmatic reading of the world of the text. Following Ricoeur, we believe that the founding elements that shaped hermeneutics around Dasein, or around the subject/object relation, can contribute to a new hermeneutics, provided they do not make the same concessions to the knowing subject. Thus, the possibility of a new rereading emerges from what Ricoeur defined, in the dialectical relation between the world of the text and the world in front of the text, as a revealing and transforming "representance" (représentance). And it is at this point that the Holy Scriptures occupy the place of source of revelation and inspiration.
Abstract:
Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
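The failure mode described here is easy to demonstrate: for a multi-valued inverse problem, a least-squares fit predicts the conditional mean, which can fall between the valid branches. In this small sketch (an assumed toy problem, not the paper's example) the forward map is x → x² on [-1, 1], so the inverse y → x has two branches and the least-squares fit collapses toward 0, which is not a valid inverse for most y.

```python
# Least-squares regression averaging the branches of a two-valued inverse.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = x ** 2 + rng.normal(0, 0.02, 500)

# Naive approach: least-squares polynomial regression of x on y.
coeffs = np.polyfit(y, x, deg=3)
y_test = np.array([0.25, 0.5, 0.81])
x_pred = np.polyval(coeffs, y_test)

print("least-squares inverse:", x_pred)        # near 0: averages +/- branches
print("valid inverses:       ", np.sqrt(y_test), -np.sqrt(y_test))
```

Recognising that least squares estimates a conditional mean is precisely the statistical insight the abstract appeals to; the more powerful formalism models the full conditional distribution instead, so both branches are preserved.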
Abstract:
A practical Bayesian approach for inference in neural network models has been available for ten years, and yet it is not used frequently in medical applications. In this chapter we show how both regularisation and feature selection can bring significant benefits in diagnostic tasks through two case studies: heart arrhythmia classification based on ECG data and the prognosis of lupus. In the first of these, the number of variables was reduced by two thirds without significantly affecting performance, while in the second, only the Bayesian models had an acceptable accuracy. In both tasks, neural networks outperformed other pattern recognition approaches.
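A hedged sketch connecting the chapter's two themes on a stand-in dataset: regularisation (an L2 weight penalty, the MAP counterpart of a Gaussian prior on the weights) and feature selection (dropping low-relevance inputs). This uses scikit-learn conveniences on a public dataset rather than the full Bayesian machinery (evidence framework, ARD) or the ECG and lupus data the chapter describes.

```python
# Regularised neural network with feature selection, cross-validated.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Keep the 10 most relevant of the 30 inputs, then fit a small network;
# alpha is the L2 penalty playing the role of the Gaussian weight prior.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),
    MLPClassifier(hidden_layer_sizes=(8,), alpha=1.0, max_iter=2000,
                  random_state=0),
)
print(cross_val_score(model, X, y, cv=5).mean())
```

Comparing scores with `k=10` against the full input set mirrors the chapter's finding that a large reduction in variables need not hurt performance.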
Abstract:
The thesis is concerned with relationships between profit, technology and environmental change. Existing work has concentrated on only a few questions, treated at either micro or macro levels of analysis. And there has been something of an impasse, since the neoclassical and neomarxist approaches are either in direct conflict (macro level) or hardly interact (micro level). The aim of the thesis was to bypass this impasse by starting to develop a meso level of analysis that focuses on issues largely ignored in the traditional approaches: questions about distribution. The first set of questions was descriptive: what were the patterns of distribution over time of the variability in types and rates of environmental change, and in particular, was there any evidence of periodization? Two case studies were used to examine these issues. The first looked at environmental change in the iron and steel industry since 1700, and the second studied pollution in five industries in the basic processing sector. It was established that environmental change has been markedly periodized, with an apparently fairly regular 'cycle length' of about fifty years. The second set of questions was explanatory: whether and how this periodization could be accounted for by reference to variations in aspects of profitability and technical change. In the iron and steel industry, it was found that diffusion rates and the rate and nature of innovation were periodized on the same pattern as environmental change. The same sort of variation was also present in the realm of profits, as evidenced by cyclical changes in output growth. Simple theoretical accounts could be given for all the empirically demonstrable links, and it was suggested that the most useful models at this meso level of analysis are provided by structural change models of economic development.
Abstract:
Structural analysis in handwritten mathematical expressions focuses on interpreting the recognized symbols using geometrical information such as the relative sizes and positions of the symbols. Most existing approaches rely on hand-crafted grammar rules to identify semantic relationships among the recognized mathematical symbols. They can easily fail when writing errors occur. Moreover, they assume the availability of the whole mathematical expression before being able to analyze the semantic information of the expression. To tackle these problems, we propose a progressive structural analysis (PSA) approach for dynamic recognition of handwritten mathematical expressions. The proposed PSA approach is able to provide an analysis result immediately after each written input symbol. This has the advantage that users can detect any recognition errors immediately and correct only the mis-recognized symbols rather than the whole expression. Experiments conducted on the 57 most commonly used mathematical expressions have shown that the PSA approach achieves very good performance.
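A simplified sketch of the geometric reasoning such structural analysis relies on: classify the relation between a newly written symbol and the previous one from their bounding boxes (relative vertical position and size). The thresholds and the three-relation set are illustrative assumptions, not the paper's grammar or its progressive analysis machinery.

```python
# Bounding-box relation classification for adjacent symbols.
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left corner
    y: float
    w: float  # size
    h: float

    @property
    def cy(self):
        return self.y + self.h / 2  # vertical centre

def relation(prev: Box, new: Box) -> str:
    if new.h < 0.7 * prev.h:                   # clearly smaller symbol
        if new.cy < prev.y + 0.3 * prev.h:
            return "superscript"
        if new.cy > prev.y + 0.7 * prev.h:
            return "subscript"
    return "horizontal"

# 'x' followed by a small raised '2', i.e. x^2, then a same-size neighbour.
print(relation(Box(0, 0, 10, 10), Box(11, -4, 5, 5)))   # superscript
print(relation(Box(0, 0, 10, 10), Box(12, 1, 9, 9)))    # horizontal
```

Running such a classifier after every stroke group is what makes the analysis progressive: each new symbol is attached to the partial expression tree immediately, so a mis-recognition surfaces at once instead of after the whole expression is written.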