888 results for Knowledge representation (Information theory)
Abstract:
Hallux rigidus (HR) affects the first metatarsophalangeal joint (MTPJ) in 35% to 60% of the population over 65 years, and there are multiple treatment options. The radiological stage of the deformity determines the procedure to be performed: in the early stages, cheilectomy and corrective osteotomy are used, while in more advanced stages the surgeon opts for joint-destructive techniques such as arthrodesis and arthroplasty. This final degree project focuses on destructive techniques for the first MTPJ, to clarify which procedure yields better results across a number of parameters: scores on the American Orthopaedic Foot and Ankle Society Hallux Metatarsophalangeal-Interphalangeal scale (AOFAS), range of motion (ROM) of the first MTPJ, and radiological classification. Regarding the implant arthroplasty technique, this review reports which materials and designs produce better results in relation to patient characteristics such as age, inflammatory joint disease, and the viability and durability of the implant. The conclusion of this review is that AOFAS scores after arthrodesis decrease owing to loss of mobility, but that both techniques show similar effectiveness; the technique to be used must therefore be chosen by weighing various factors and patient characteristics. Keywords: Hallux rigidus; Hallux rigidus and surgical treatment; Hallux rigidus arthrodesis; Hallux rigidus arthroplasty; Hallux rigidus (arthroplasty and arthrodesis).
Abstract:
A natural way to generalize tensor network variational classes to quantum field systems is via a continuous tensor contraction. This approach is first illustrated for the class of quantum field states known as continuous matrix-product states (cMPS). As a simple example of the path-integral representation we show that the state of a dynamically evolving quantum field admits a natural representation as a cMPS. A completeness argument is also provided showing that all states in Fock space admit a cMPS representation when the number of variational parameters tends to infinity. Beyond this, we obtain a well-behaved field limit of projected entangled-pair states (PEPS) in two dimensions, providing an abstract class of quantum field states with natural symmetries. We demonstrate how symmetries of the physical field state are encoded within the dynamics of an auxiliary field system of one dimension less. In particular, the imposition of Euclidean symmetries on the physical system requires that the auxiliary system involved in the class's definition be Lorentz-invariant. The physical field states automatically inherit entropy area laws from the PEPS class, and are fully described by the dissipative dynamics of a lower-dimensional virtual field system. Our results lie at the intersection of many-body physics, quantum field theory, and quantum information theory, and facilitate future exchanges of ideas and insights between these disciplines.
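For readers less familiar with the cMPS class discussed in this abstract, its standard closed form from the literature (symbols as usually defined there; this is the generic textbook expression, not a formula taken from this particular paper) is:

```latex
% Standard cMPS form: the D x D variational matrices Q(x), R(x) act on an
% auxiliary space, \hat{\psi}^{\dagger}(x) creates a physical particle at x,
% |\Omega\rangle is the Fock vacuum, and \mathcal{P}\exp denotes path ordering.
\[
  |\Psi[Q,R]\rangle
    = \operatorname{Tr}_{\mathrm{aux}}\!\left[
        \mathcal{P}\exp\int_{0}^{L} dx\,
        \bigl( Q(x)\otimes\hat{\mathbb{1}}
             + R(x)\otimes\hat{\psi}^{\dagger}(x) \bigr)
      \right] |\Omega\rangle
\]
```

The continuous tensor contraction mentioned in the abstract is the integral inside the path-ordered exponential; letting the bond dimension D grow without bound corresponds to the completeness statement quoted above.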
Abstract:
In economics of information theory, credence products are those whose quality is difficult or impossible for consumers to assess, even after they have consumed the product (Darby & Karni, 1973). This dissertation is focused on the content, consumer perception, and power of online reviews for credence services. Economics of information theory has long assumed, without empirical confirmation, that consumers will discount the credibility of claims about credence quality attributes. The same theories predict that because credence services are by definition obscure to the consumer, reviews of credence services are incapable of signaling quality. Our research aims to question these assumptions. In the first essay we examine how the content and structure of online reviews of credence services systematically differ from the content and structure of reviews of experience services and how consumers judge these differences. We have found that online reviews of credence services have either less important or less credible content than reviews of experience services and that consumers do discount the credibility of credence claims. However, while consumers rationally discount the credibility of simple credence claims in a review, more complex argument structure and the inclusion of evidence attenuate this effect. In the second essay we ask, “Can online reviews predict the worst doctors?” We examine the power of online reviews to detect low quality, as measured by state medical board sanctions. We find that online reviews are somewhat predictive of a doctor’s suitability to practice medicine; however, not all the data are useful. Numerical or star ratings provide the strongest quality signal; user-submitted text provides some signal but is subsumed almost completely by ratings. Of the ratings variables in our dataset, we find that punctuality, rather than knowledge, is the strongest predictor of medical board sanctions. 
These results challenge the definition of credence products, which is a long-standing construct in economics of information theory. Our results also have implications for online review users, review platforms, and for the use of predictive modeling in the context of information systems research.
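The second essay's headline finding, that numeric ratings carry most of the quality signal while review text adds comparatively little, can be illustrated with a purely synthetic sketch. Every number below is invented for illustration; this is not the dissertation's data or model, only a toy showing how a noisier proxy of the same latent quality yields a weaker predictive signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical latent "doctor quality"; sanctioned doctors sit in the low tail.
quality = rng.normal(size=n)
sanctioned = (quality < np.percentile(quality, 5)).astype(float)

# Star ratings track quality closely; a text-derived score is a noisier proxy.
stars = quality + 0.3 * rng.normal(size=n)
text_score = quality + 1.5 * rng.normal(size=n)

def abs_corr(x, y):
    """Absolute Pearson correlation as a crude measure of signal strength."""
    return abs(np.corrcoef(x, y)[0, 1])

r_stars = abs_corr(stars, sanctioned)
r_text = abs_corr(text_score, sanctioned)
print(f"ratings signal: {r_stars:.2f}, text signal: {r_text:.2f}")
```

Under these assumptions the ratings-based signal dominates, mirroring the pattern the essay reports for its real data.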
Abstract:
The theme of teacher education has always generated rich discussion and an abundant literature. Historically, the topic has raised concerns both in development agencies and in the universities and schools where teachers are trained or carry out their professional work. Training teachers is complex, and this complexity makes a review of the paradigms of initial and continuing education necessary. Despite the efforts of recent decades, the shortage of teachers in some areas of knowledge remains a major concern, and it may worsen in the future, which reinforces the importance of new decisions and new directions to change this situation. The university-school relationship is therefore of fundamental importance, linking and articulating theory and school practice, contextualizing knowledge, and renewing and adapting curricula to current times and spaces in order to improve and recover the social and professional value of teachers. From this perspective, public education policies should turn to encouraging and rescuing values and principles in quality teacher training. In this context emerges the Institutional Teaching Initiation Scholarship Program (PIBID), an innovative teacher-education program that adds essential factors to the university-school relationship, reinforcing good teaching practices and assigning schools the role of co-educators. This research aims to analyze the factors that PIBID introduces into the university-school relationship within IFPR Campus Palmas. The theoretical route was marked by authors such as Edgar Morin (2003, 2010a, 2010b, 2012), Enrique Leff (2002a, 2002b, 2003, 2010), Boaventura Sousa Santos (1988, 2010a, 2010b, 2013), Menga Lüdke (2005, 2013), Demerval Saviani (2000, 2013), and Paulo Freire (2011), among others; official PIBID documents were also used in this research.
The methodological approach, exploratory and descriptive-explanatory, was of fundamental importance, drawing on data collected through documentary analysis (BRAZIL, 2007, 2009, 2013) and focus-group activities (GATTI, 2012). The focus-group interlocutors comprised three groups: area coordinators, supervisors, and teaching-initiation scholarship holders. Categories were defined a priori from the Program's objectives, and emerging categories were identified during the analysis process. After analyzing both the documents and the interlocutors, it was possible to identify that PIBID introduces the following factors into the university-school relationship: recognition of the profession, an innovative program, and dialogues of knowledge. It promotes recognition of the profession mainly because it is an initial and continuing education program: it brings theory and practice closer together, upgrades the role of the teacher at school, and motivates methodological innovations. As an innovative program, it promotes the school's role as co-educator, brings participants closer to the knowledge of the school reality, and fosters continuous training. The third, emerging category, the university-school relationship, promotes dialogues of knowledge: it brings theory and practice together, allows information exchange, and opens new perspectives for teacher training. Finally, it is possible to see that, despite being a new program, PIBID has promoted visible changes through the actions carried out by all subprojects in partnerships between universities and schools, restoring and giving new meanings to pedagogical practices.
Abstract:
The key to graduating professionals with sufficient capacity to meet the research demands of users lies in the vision and operation of schools and their academic libraries. All these efforts must be systematically linked to ensure the use of the data recorded in their knowledge and information units.
Abstract:
From the moment a child enters the teaching-learning process, he or she is made aware that natural resources are divided into two kinds: renewable and nonrenewable. The child is also taught that a natural resource is useless unless it is exploited, and that exploiting such resources requires knowledge. Today we realize that the biological systems of living things (renewable resources) are themselves transmitters of information. Humankind uses that information to nurture its knowledge, which is essential for the use and conservation of resources, validating the principle that "there is no knowledge without information."
Abstract:
Humans use their grammatical knowledge in more than one way. On one hand, they use it to understand what others say. On the other hand, they use it to say what they want to convey to others (or to themselves). In either case, they need to assemble the structure of sentences in a systematic fashion, in accordance with the grammar of their language. Despite the fact that the structures that comprehenders and speakers assemble are systematic in an identical fashion (i.e., obey the same grammatical constraints), the two 'modes' of assembling sentence structures might or might not be performed by the same cognitive mechanisms. Currently, the field of psycholinguistics implicitly adopts the position that they are supported by different cognitive mechanisms, as evident from the fact that most psycholinguistic models seek to explain either comprehension or production phenomena. The potential existence of two independent cognitive systems underlying linguistic performance doubles the problem of linking the theory of linguistic knowledge and the theory of linguistic performance, making the integration of linguistics and psycholinguistics harder. This thesis thus aims to unify the structure building system in comprehension, i.e., the parser, and the structure building system in production, i.e., the generator, into one, so that the linking theory between knowledge and performance can also be unified. I will discuss and unify both existing and new data pertaining to how structures are assembled in understanding and speaking, and attempt to show that the unification of parsing and generation is at least a plausible research enterprise. In Chapter 1, I will discuss the previous and current views on how parsing and generation are related to each other. I will outline the challenges for the view that the parser and the generator are the same cognitive mechanism. This single system view is discussed and evaluated in the rest of the chapters.
In Chapter 2, I will present new experimental evidence suggesting that the grain size of the pre-compiled structural units (henceforth simply structural units) is rather small, contrary to some models of sentence production. In particular, I will show that the internal structure of the verb phrase in a ditransitive sentence (e.g., The chef is donating the book to the monk) is not specified at the onset of speech, but is specified before the first internal argument (the book) needs to be uttered. I will also show that this timing of structural processes with respect to the verb phrase structure is earlier than the lexical processes of the verb's internal arguments. These two results in concert show that the size of structure building units in sentence production is rather small, contrary to some models of sentence production, yet structural processes still precede lexical processes. I argue that this view of generation resembles the widely accepted model of parsing that utilizes both top-down and bottom-up structure building procedures. In Chapter 3, I will present new experimental evidence suggesting that the structural representation strongly constrains the subsequent lexical processes. In particular, I will show that conceptually similar lexical items interfere with each other only when they share the same syntactic category in sentence production. A mechanism that I call syntactic gating will be proposed, and this mechanism characterizes how the structural and lexical processes interact in generation. I will present two Event Related Potential (ERP) experiments showing that lexical retrieval in (predictive) comprehension is also constrained by syntactic categories. I will argue that the syntactic gating mechanism is operative both in parsing and generation, and that the interaction between structural and lexical processes in both parsing and generation can be characterized in the same fashion.
In Chapter 4, I will present a series of experiments examining the timing at which verbs’ lexical representations are planned in sentence production. It will be shown that verbs are planned before the articulation of their internal arguments, regardless of the target language (Japanese or English) and regardless of the sentence type (active object-initial sentence in Japanese, passive sentences in English, and unaccusative sentences in English). I will discuss how this result sheds light on the notion of incrementality in generation. In Chapter 5, I will synthesize the experimental findings presented in this thesis and in previous research to address the challenges to the single system view I outlined in Chapter 1. I will then conclude by presenting a preliminary single system model that can potentially capture both the key sentence comprehension and sentence production data without assuming distinct mechanisms for each.
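The single system view defended in this thesis can be caricatured with a toy grammar in which one rule table drives both a generator and a recognizer. This sketch is purely illustrative (the grammar, rule names, and example sentence are invented around the abstract's ditransitive example, and the thesis' actual proposal is far richer), but it shows the core idea that parsing and generation can share one structure-building resource:

```python
import itertools

# One rule table shared by both "modes" of structure building.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the chef"], ["the book"], ["the monk"]],
    "VP": [["V", "NP", "PP"]],
    "V":  [["is donating"]],
    "PP": [["to", "NP"]],
}

def generate(symbol="S"):
    """Production mode: expand the grammar top-down into terminal strings."""
    if symbol not in RULES:
        yield symbol  # terminal
        return
    for production in RULES[symbol]:
        parts = [list(generate(s)) for s in production]
        for combo in itertools.product(*parts):
            yield " ".join(combo)

def parse(sentence, symbol="S"):
    """Comprehension mode: recognize a sentence with the very same rules."""
    return sentence in set(generate(symbol))

print(parse("the chef is donating the book to the monk"))  # True
```

Because both functions consult the same `RULES` table, any change to the grammar automatically constrains comprehension and production alike, which is the flavor of the single system view.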
Abstract:
Part 11: Reference and Conceptual Models
Abstract:
(Deep) neural networks are increasingly being used for various computer vision and pattern recognition tasks due to their strong ability to learn highly discriminative features. However, quantitative analysis of their classification ability and their design philosophies remain nebulous. In this work, we use information theory to analyze concatenated restricted Boltzmann machines (RBMs) and propose a mutual-information-based RBM neural network (MI-RBM). We develop a novel pretraining algorithm to maximize the mutual information between RBMs. Extensive experimental results on various classification tasks show the effectiveness of the proposed approach.
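The quantity being maximized here, mutual information between the activations of adjacent RBMs, can be estimated with a simple plug-in estimator. The sketch below is illustrative only (synthetic binary activations, not the paper's architecture or pretraining algorithm): a unit that copies another with occasional flips carries high mutual information, while an independent unit carries almost none.

```python
import numpy as np

def binary_mi(x, y):
    """Plug-in mutual information (in nats) between two binary sequences."""
    pxy = np.zeros((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            pxy[a, b] = np.mean((x == a) & (y == b))
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())

rng = np.random.default_rng(1)
h1 = rng.integers(0, 2, 10_000)        # activations of one hidden unit
h2 = h1 ^ (rng.random(10_000) < 0.1)   # a coupled unit: copies h1, 10% flips
noise = rng.integers(0, 2, 10_000)     # an independent, uninformative unit

print(f"MI(h1, h2)    = {binary_mi(h1, h2):.3f} nats")
print(f"MI(h1, noise) = {binary_mi(h1, noise):.3f} nats")
```

A pretraining objective of the kind described would push representations toward the first situation and away from the second.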
Abstract:
In recent decades, the oil, gas, and petrochemical industries have registered a series of huge accidents. Influenced by this context, companies have felt the necessity of engaging in processes to protect the external environment, which can be understood as an ecological concern. In the particular case of the nuclear industry, sustainable education and training, which depend heavily on the quality and applicability of the knowledge base, have been considered key points in the safe application of this energy source. Consequently, this research was motivated by the use of the ontology concept as a tool to improve knowledge management in a refinery, through the representation of a fuel gas sweetening plant, combining many pieces of information associated with its normal operation mode. In terms of methodology, this research can be classified as applied and descriptive: many pieces of information were analysed, classified, and interpreted to create the ontology of a real plant. The DEA plant was modeled according to its process flow diagram, piping and instrumentation diagrams, descriptive documents of its normal operation mode, and the list of all alarms associated with its instruments, complemented by a non-structured interview with a specialist in the plant's operation. The ontology was verified by comparing its descriptive diagrams with the original plant documents and by discussion with other members of the research group. All the concepts applied in this research can be expanded to represent other plants in the same refinery or even in other kinds of industry. An ontology can be considered a knowledge base that, because of its formal representation nature, can serve as one of the elements for developing tools to navigate through the plant, simulate its behavior, and diagnose faults, among other possibilities.
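The core of such a plant ontology is a set of typed relations between equipment, instruments, and alarms that tools can navigate. A minimal triple-store sketch of the idea follows; the equipment tags and relation names here are invented for illustration and are not the plant's real P&ID tags or the thesis' actual vocabulary:

```python
# Hypothetical (subject, relation, object) triples for a DEA-style plant.
triples = {
    ("DEA_Plant", "hasUnit", "Absorber_T101"),
    ("DEA_Plant", "hasUnit", "Regenerator_T102"),
    ("Absorber_T101", "hasInstrument", "PI_101"),
    ("PI_101", "hasAlarm", "PAH_101"),
    ("Absorber_T101", "feeds", "Regenerator_T102"),
}

def objects(subject, relation):
    """Navigate the plant model: everything `subject` relates to via `relation`."""
    return sorted(o for s, r, o in triples if s == subject and r == relation)

print(objects("DEA_Plant", "hasUnit"))
# ['Absorber_T101', 'Regenerator_T102']
```

Queries like `objects("PI_101", "hasAlarm")` are the building blocks for the navigation and fault-diagnosis tools the abstract mentions; a production system would typically express the same triples in a standard ontology language such as OWL.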
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using the principal components analysis (PCA) and IPCA algorithms; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA and IPCA algorithms in compression, recognition, and detection; and to compare the performance of the digital and optical models in recognition and detection. MATLAB® software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because graphical representation is impossible; PCA is therefore a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. In general, the IPCA algorithm behaves better than the PCA algorithm in most applications. It is better in image compression because it obtains higher compression, more accurate reconstruction, and faster processing with acceptable errors; in addition, it is better in real-time image detection because it achieves the smallest error rate as well as remarkable speed.
On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and a reasonable speed. Finally, in detection and recognition, the performance of the digital model is better than the performance of the optical model.
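The baseline against which IPCA is compared, PCA-based compression, amounts to projecting data onto the top-k principal directions and reconstructing. A minimal NumPy sketch of that standard step follows (this is plain PCA on synthetic low-rank data, not the thesis' IPCA variant or its MATLAB models):

```python
import numpy as np

def pca_compress(X, k):
    """Project rows of X onto the top-k principal components and reconstruct."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Principal directions via SVD of the centered data matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                 # k x d basis of principal directions
    codes = Xc @ W.T           # compressed representation (n x k)
    return codes @ W + mu      # reconstruction (n x d)

rng = np.random.default_rng(0)
# A toy "image" dataset: 100 samples of 16 pixels with intrinsic rank 3.
basis = rng.normal(size=(3, 16))
X = rng.normal(size=(100, 3)) @ basis + 0.01 * rng.normal(size=(100, 16))

err = np.linalg.norm(X - pca_compress(X, 3)) / np.linalg.norm(X)
print(f"relative reconstruction error with k=3: {err:.4f}")
```

Because the synthetic data is essentially rank 3, keeping only 3 of 16 components reconstructs it almost perfectly, which is the compression/accuracy trade-off the thesis measures for both algorithms.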
Abstract:
Knowledge-Based Management Systems enable new ways to process and analyse knowledge, yielding better insights for solving problems and aiding decision making. In the police force, such systems provide a solution for enhancing operations and improving client administration in terms of knowledge management. The main objectives of every police officer are to ensure the security of life and property, promote lawfulness, and prevent and detect wrongdoing. The administration of knowledge and information is an essential part of policing, and the police ought to be proactive in managing both explicit and implicit knowledge, while adding to their abilities in knowledge sharing. In this paper the potential for a knowledge-based system for the Mauritius police was analysed, and recommendations were made based on requirements captured from interviews with several long-standing officers and a survey of previous work in the area.
Abstract:
The intersection of Artificial Intelligence and the Law is a multifaceted matter whose effects shape advances in culture, organization, and social affairs when emergent information technologies are taken into consideration. From this point of view, the weight of formal and informal Conflict Resolution settings should be highlighted, and the use of defective data, information, or knowledge must be emphasized; indeed, such defects are hard to handle with traditional problem-solving methodologies. Therefore, this work focuses on the development of decision support systems, in terms of their knowledge representation and reasoning procedures, under a formal framework based on Logic Programming, complemented with an approach to computing centered on Artificial Neural Networks. The aim is to evaluate the Quality-of-Judgments and the respective Degree-of-Confidence that one may have in such outcomes.
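The Degree-of-Confidence idea in this line of work rests on scoring how complete each piece of case information is. The sketch below is a deliberate caricature under simple assumptions (fully known values score 1, unknown values 0, and a value known only to lie among n alternatives scores 1/n; the attribute names are invented), not the formal Logic Programming framework itself:

```python
def quality_of_information(value):
    """Score one attribute: 1 if known, 0 if unknown, 1/n for n candidates."""
    if value == "unknown":
        return 0.0
    if isinstance(value, (set, frozenset)):
        return 1.0 / len(value)
    return 1.0

# A hypothetical case with one known, one unknown, and one set-valued attribute.
case = {"age": 34, "priors": "unknown", "plea": {"guilty", "no_contest"}}

scores = {k: quality_of_information(v) for k, v in case.items()}
degree_of_confidence = sum(scores.values()) / len(scores)
print(scores, degree_of_confidence)
```

Averaging the per-attribute scores gives a single confidence figure for the judgment, which is the role the Degree-of-Confidence plays in the decision support systems described.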
Abstract:
Dyscalculia is a brain-based condition that makes it hard to make sense of numbers and mathematical concepts. Some adolescents with dyscalculia cannot grasp basic number concepts. They work hard to learn and memorize basic number facts. They may know what to do in mathematics classes but do not understand why they are doing it; in other words, they miss the logic behind it. However, the condition may be worked on in order to decrease its degree of severity. For example, disMAT, an app developed for Android, may help children apply mathematical concepts without much effort, which makes it, in itself, a promising tool for dyscalculia treatment. Thus, this work focuses on the development of an Intelligent System to estimate children's evidence of dyscalculia, based on data obtained on-the-fly with disMAT. The computational framework is built on top of a Logic Programming framework for Knowledge Representation and Reasoning, complemented with a Case-Based problem-solving approach to computing that allows for the handling of incomplete, unknown, or even contradictory information.
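The Case-Based step of such a framework can be sketched as nearest-neighbour retrieval over exercise scores: a new child's results are matched against stored cases with known assessments. The feature names, score values, and labels below are invented for illustration (they are not disMAT's actual features or the thesis' case base):

```python
import math

# Toy case base: each past case pairs exercise scores with a known assessment.
case_base = [
    ({"counting": 0.9, "arithmetic": 0.8, "estimation": 0.9}, "low risk"),
    ({"counting": 0.4, "arithmetic": 0.3, "estimation": 0.5}, "high risk"),
    ({"counting": 0.6, "arithmetic": 0.5, "estimation": 0.4}, "high risk"),
]

def distance(a, b):
    """Euclidean distance between two score dictionaries with the same keys."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve(new_case):
    """Case-based step: reuse the label of the most similar stored case."""
    return min(case_base, key=lambda c: distance(c[0], new_case))[1]

print(retrieve({"counting": 0.5, "arithmetic": 0.4, "estimation": 0.45}))
```

A full case-based reasoner would go on to adapt and retain the solved case; this sketch covers only the retrieval step, and handling unknown or contradictory scores would need the quality-of-information machinery the abstract alludes to.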