888 results for Fuzzy Domain Ontology, Fuzzy Subsumption, Granular Computing, Granular IR Systems, Information Retrieval
Abstract:
Protecting modern interconnected power systems from blackouts is an important and difficult challenge. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as the Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on modeling power systems and various control systems in the Alternative Transients Program (ATP). ATP is a time-domain power system modeling program in which all power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. The Transient Analysis of Control Systems (TACS) feature is used to model the excitation control system, power system stabilizer and turbine governor system of the synchronous machine. Several base cases of a single-machine system are modeled and benchmarked against PSS/E. A two-area system is modeled and inter-area and intra-area oscillations are observed. The two-area system is reduced to a two-machine system using reduced dynamic equivalencing. The original and the reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models. The advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP. The benchmarked models correctly simulate the power system dynamics. The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing the dynamic behavior. Other aspects such as relaying can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
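For intuition about what such a time-domain stability simulation computes, the sketch below integrates the classical swing equation for a hypothetical single-machine infinite-bus case with a temporary fault. The machine constants, fault timing and per-unit values are illustrative assumptions, not parameters from the benchmarked ATP/PSS/E models.

```python
import math

# Illustrative single-machine infinite-bus swing-equation sketch (all values assumed, per unit).
H, D, f0 = 3.5, 1.0, 60.0                         # inertia constant (s), damping, nominal frequency (Hz)
Pm = 0.8                                          # mechanical power input
Pmax_pre, Pmax_fault, Pmax_post = 1.8, 0.4, 1.6   # power-transfer limit before/during/after the fault
t_on, t_off = 1.0, 1.1                            # fault applied and cleared (s)
omega_s = 2 * math.pi * f0

delta = math.asin(Pm / Pmax_pre)                  # initial rotor angle (rad), pre-fault steady state
dw = 0.0                                          # rotor speed deviation (rad/s)
dt, steps = 1e-3, 5000                            # 5 s of simulated time

for k in range(steps):
    t = k * dt
    Pmax = Pmax_pre if t < t_on else (Pmax_fault if t < t_off else Pmax_post)
    Pe = Pmax * math.sin(delta)
    # swing equation: (2H / omega_s) * d(dw)/dt = Pm - Pe - (D / omega_s) * dw
    dw += (omega_s / (2 * H)) * (Pm - Pe - D * dw / omega_s) * dt
    delta += dw * dt
    if k % 500 == 0:                              # report every 0.5 s
        print(f"t = {t:4.2f} s   rotor angle = {math.degrees(delta):7.2f} deg")
```

A stable case shows the rotor angle oscillating and settling; an unstable one shows it growing without bound, which is the behavior a transient study looks for.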
Abstract:
Currently more than half of Electronic Health Record (EHR) projects fail. Most of these failures are due not to flawed technology, but rather to the lack of systematic consideration of human issues. Among the barriers to EHR adoption, function mismatching among users, activities, and systems is a major area that has not been systematically addressed from a human-centered perspective. A theoretical framework called the Functional Framework was developed for identifying and reducing functional discrepancies among users, activities, and systems. The Functional Framework is composed of three models – the User Model, the Designer Model, and the Activity Model. The User Model was developed by conducting a survey (N = 32) that identified the functions needed and desired from the user's perspective. The Designer Model was developed by conducting a systematic review of an Electronic Dental Record (EDR) and its functions. The Activity Model was developed using an ethnographic method called shadowing, in which EDR users (5 dentists, 5 dental assistants, 5 administrative personnel) were followed quietly and observed during their activities. These three models were combined to form a unified model. From the unified model the work domain ontology was developed by asking users to rate the functions in the unified model (a total of 190 functions) along the dimensions of frequency and criticality in a survey. The functional discrepancies, as indicated by the regions of the Venn diagrams formed by the three models, were consistent with the survey results, especially with user satisfaction. The survey for the Functional Framework also indicated the preference of one system over the other (R = 0.895). The results of this project showed that the Functional Framework provides a systematic method for identifying, evaluating, and reducing functional discrepancies among users, systems, and activities. Limitations and the generalizability of the Functional Framework are discussed.
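As a rough illustration of how the Venn-diagram regions of the three models expose functional discrepancies, the set operations below compute which functions are needed but unsupported, or supported but never used. The function names are hypothetical stand-ins, not items from the study's 190-function survey.

```python
# Hypothetical function sets for the three models (stand-ins, not the study's data).
user_model     = {"schedule appointment", "view x-ray", "enter treatment note"}
designer_model = {"view x-ray", "enter treatment note", "print billing report"}
activity_model = {"schedule appointment", "enter treatment note", "print billing report", "call patient"}

# Venn-diagram regions that indicate functional (mis)matches.
regions = {
    "needed, supported and used":            user_model & designer_model & activity_model,
    "needed and used but unsupported":       (user_model & activity_model) - designer_model,
    "supported but neither needed nor used": designer_model - (user_model | activity_model),
}
for region, functions in regions.items():
    print(f"{region}: {sorted(functions)}")
```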
Abstract:
Fibrillin-1 and -2 are large secreted glycoproteins that are known to be components of extracellular matrix microfibrils located in the vasculature, basement membrane and various connective tissues. These microfibrils are often associated with a superstructure known as the elastic fiber. During the development of elastic tissues, fibrillin microfibrils precede the appearance of elastin and may provide a scaffold for the deposition and crosslinking of elastin. Using RT-PCR, we cloned and sequenced 3.85 kbp of the FBN2 gene. Five differences were found between our contig sequence and that published by Zhang et al. (1995). Like many extracellular matrix proteins, the fibrillins are modular proteins. We compared analogous domains of the two fibrillins and also members of the latent TGF-β binding protein (LTBP) family to determine their phylogenetic relationship. We found that the two families are homologous. LTBP-2 is the most similar to the fibrillin family, while FBN-1 is the most similar to the LTBP family. The fibrillin-1 carboxy-terminal domain is proteolytically processed. Two eukaryotic protein expression systems, baculoviral and CHO-K1, were developed to examine the proteolytic processing of the carboxy-terminal domain of the fibrillin-1 protein. Both expression systems successfully processed the domain, and both processed a mutant less efficiently. In the CHO-K1 cells, processing occurred intracellularly.
Abstract:
We present the data structures and algorithms used in our approach for building domain ontologies from folksonomies and linked data. In this approach we extract domain terms from folksonomies and enrich them with semantic information from the Linked Open Data cloud. As a result, we obtain a domain ontology that combines the emergent knowledge of social tagging systems with formal knowledge from ontologies.
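A minimal sketch of the enrichment step is shown below: a folksonomy tag is looked up in DBpedia (one Linked Open Data source) to retrieve candidate resources and classes. The SPARQL query and the choice of DBpedia are illustrative assumptions about how such enrichment could be done, not the authors' exact pipeline.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

def enrich_tag(tag: str):
    """Look up a folksonomy tag in DBpedia and return candidate (resource, type) pairs."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT DISTINCT ?resource ?type WHERE {{
            ?resource rdfs:label "{tag}"@en ;
                      rdf:type ?type .
            FILTER(STRSTARTS(STR(?type), "http://dbpedia.org/ontology/"))
        }}
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [(b["resource"]["value"], b["type"]["value"])
            for b in results["results"]["bindings"]]

# Example: enrich a tag taken from a social tagging system.
for resource, rdf_type in enrich_tag("Jaguar"):
    print(resource, "is a", rdf_type)
```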
Abstract:
The use of cloud computing is extending to all kinds of systems, including those that are part of critical infrastructures, and measuring their reliability is becoming more difficult. Computing is becoming the fifth utility, in part thanks to the use of cloud services. Cloud computing is now used by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud computing by critical infrastructure systems and the reliability and continuity-of-service risks associated with that use. Examples are presented of its use by different critical industries; even though the use of cloud computing by such systems is not yet widespread, this paper presents the future risk it entails. The concepts of macro and micro dependability and the model we introduce are useful for defining inter-dependencies and for analyzing the resilience of systems that depend on other systems, specifically in the cloud model.
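To make the notion of hidden inter-dependency concrete, the sketch below models services and providers as a directed dependency graph and lists which services transitively depend on a given cloud provider. The node names and edges are hypothetical and are not taken from the paper's examples.

```python
import networkx as nx

# Hypothetical dependency graph: an edge A -> B means "A depends on B".
deps = nx.DiGraph()
deps.add_edges_from([
    ("traffic_management", "telemetry_platform"),
    ("telemetry_platform", "public_cloud_provider"),
    ("billing_portal", "public_cloud_provider"),
    ("scada_frontend", "private_datacenter"),
])

# Every service that directly or transitively depends on the public cloud provider
# (the macro-level dependability question: what is exposed if that provider fails?).
affected = nx.ancestors(deps, "public_cloud_provider")
print("Services exposed to a public-cloud outage:", sorted(affected))
```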
Abstract:
In the beginning of the 90s, ontology development was similar to an art: ontology developers did not have clear guidelines on how to build ontologies but only some design criteria to be followed. Work on principles, methods and methodologies, together with supporting technologies and languages, made ontology development become an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as well as the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science, (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics, education, and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, mentioning the most outstanding and used methodologies, languages, and tools for building ontologies. In addition, we include some words on how all these elements can be used in the Linked Data initiative.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
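This dependency between annotation levels can be seen in a toy pipeline like the one below, where an off-the-shelf POS tagger feeds a Lesk-style sense tagger; if the POS tag is wrong, the sense tagger is constrained to the wrong part of WordNet. NLTK is used here purely as an illustration and is not the tool chain discussed in this work.

```python
import nltk
from nltk.wsd import lesk
# Requires the usual NLTK data, e.g. nltk.download('punkt'),
# nltk.download('averaged_perceptron_tagger'), nltk.download('wordnet').

sentence = "The bank approved the loan yesterday"
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)                 # low-level (morphosyntactic) annotation

def to_wordnet_pos(penn_tag):
    """Map Penn Treebank tags to the coarse POS expected by the sense tagger."""
    return {"N": "n", "V": "v", "J": "a", "R": "r"}.get(penn_tag[0])

for word, penn_tag in tagged:
    wn_pos = to_wordnet_pos(penn_tag)
    if wn_pos:                                # the higher-level (semantic) annotation reuses the POS tag
        sense = lesk(tokens, word, pos=wn_pos)
        print(f"{word}/{penn_tag} -> {sense}")
```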
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
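A minimal sketch of what an ontology-based annotation could look like is given below, using rdflib to link a text span to a linguistic category expressed as an ontological term rather than an ad hoc tag. The namespace and property names are made up for illustration and do not reproduce the OntoTag vocabularies.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical annotation namespace (not the OntoTag vocabulary itself).
ANN = Namespace("http://example.org/annotation#")

g = Graph()
g.bind("ann", ANN)

annotation = URIRef("http://example.org/doc1#ann-42")
g.add((annotation, RDF.type, ANN.Annotation))
g.add((annotation, ANN.coversText, Literal("banks")))
g.add((annotation, ANN.hasLinguisticCategory, ANN.CommonNoun))   # ontological term, not an ad hoc tag
g.add((ANN.CommonNoun, RDFS.subClassOf, ANN.Noun))               # categories are linked inside an ontology

print(g.serialize(format="turtle"))
```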
Abstract:
Currently, there is a great deal of well-founded explicit knowledge formalizing general notions, such as time concepts and the part_of relation. Yet, it is often the case that instead of reusing ontologies that implement such notions (the so-called general ontologies), engineers create procedural programs that implicitly implement this knowledge. They do not save time and code by reusing explicit knowledge, and they devote effort to solving problems that other people have already adequately solved. Consequently, we have developed a methodology that helps engineers to: (a) identify the type of general ontology to be reused; (b) find out which axioms and definitions should be reused; (c) make a decision, using formal concept analysis, on which general ontology is going to be reused; and (d) adapt and integrate the selected general ontology into the domain ontology to be developed. To illustrate our approach we have employed use cases. For each use case, we provide a set of heuristics with examples. Each of these heuristics has been tested in either OWL or Prolog. Our methodology has been applied to develop a pharmaceutical product ontology. Additionally, we have carried out a controlled experiment with graduate students taking an MSc in Artificial Intelligence. This experiment has yielded some interesting findings concerning what kind of features future extensions of the methodology should have.
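As an illustration of the decision step based on formal concept analysis, the sketch below enumerates the formal concepts of a tiny, made-up context whose objects are candidate general ontologies and whose attributes are the notions they cover. The ontology names and attributes are hypothetical and not those evaluated in the methodology.

```python
from itertools import combinations

# Hypothetical formal context: candidate general ontologies -> notions they axiomatize.
context = {
    "TimeOntologyA": {"time", "instants", "intervals"},
    "TimeOntologyB": {"time", "instants", "intervals", "durations"},
    "MereologyC":    {"part_of", "proper_part"},
}
all_attributes = set().union(*context.values())

def extent(attrs):
    """Objects that have every attribute in attrs."""
    return frozenset(o for o, a in context.items() if attrs <= a)

def intent(objs):
    """Attributes shared by every object in objs."""
    return frozenset(set.intersection(*(context[o] for o in objs))) if objs else frozenset(all_attributes)

# Enumerate all formal concepts (extent, intent) by closing every attribute subset;
# exponential, but fine for a context this small.
concepts = set()
for r in range(len(all_attributes) + 1):
    for combo in combinations(sorted(all_attributes), r):
        objs = extent(set(combo))
        concepts.add((objs, intent(objs)))

for objs, attrs in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(objs), "<->", sorted(attrs))
```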
Abstract:
The purpose of this thesis is the automatic construction of ontologies from texts, within the area known as Ontology Learning. This discipline aims to automate the building of domain models from structured or unstructured information sources; it originated around the turn of the millennium, as a result of the exponential growth in the volume of information accessible on the Internet. Since most information on the web takes the form of text, automatic ontology learning has focused on this type of source, drawing over the years on a wide range of techniques from areas such as Information Retrieval, Information Extraction, Summarization and, in general, from fields related to natural language processing. The main contribution of this thesis is that, in contrast with most current techniques, the proposed method does not analyze the surface syntactic structure of language but works at its deep semantic level. Its aim, therefore, is to infer the domain model from the way the meanings of sentences are articulated in natural language. Because the deep semantic level is language-independent, the method can operate in multilingual scenarios in which information from texts in different languages must be combined. To access this level of language, the method uses the interlingua model. These formalisms, originating in machine translation, represent the meaning of sentences independently of any particular language. In particular, UNL (Universal Networking Language) is used, considered the only standardized general-purpose interlingua. The approach continues previous work by the author and by the research group to which he belongs, which studied how to use the interlingua model in multilingual information extraction and retrieval. In essence, the procedure defined by the method identifies, in the UNL representation of the texts, certain regularities from which the pieces of the domain ontology can be deduced. Since UNL is a formalism based on semantic networks, these regularities take the form of graphs, generalized into structures called linguistic patterns. UNL also preserves certain discourse-cohesion mechanisms inherited from natural languages, such as anaphora. In order to improve the understanding of expressions, the method provides, as another relevant contribution, an algorithm for resolving pronominal anaphora within the interlingua model, limited to third-person personal pronouns whose antecedent is a proper noun. The proposed method rests on a formal framework, built by adapting definitions from graph theory and adding new ones, in order to situate the notions of UNL expression and linguistic pattern, as well as the pattern-matching operations that are the basis of the method's processes.
Both the formal framework and all the processes defined by the method have been implemented in order to carry out the experimentation, which was applied to an article from the UNESCO EOLSS (Encyclopedia of Life Support Systems) collection.
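To give an idea of how graph-shaped linguistic patterns can be matched against an interlingua representation, the sketch below uses networkx subgraph isomorphism with labelled edges. The tiny "UNL-like" graph, the relation labels and the pattern are invented for illustration and are not the thesis' formal framework or its pattern language.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy UNL-like semantic graph: nodes stand for Universal Words, edges carry relation labels.
text_graph = nx.DiGraph()
text_graph.add_edge("acid", "corrode", label="agt")    # agent relation
text_graph.add_edge("corrode", "metal", label="obj")   # object relation

# A linguistic pattern: "X --agt--> V --obj--> Y" suggests a candidate triple (X, V, Y).
pattern = nx.DiGraph()
pattern.add_edge("X", "V", label="agt")
pattern.add_edge("V", "Y", label="obj")

matcher = isomorphism.DiGraphMatcher(
    text_graph, pattern,
    edge_match=lambda e1, e2: e1["label"] == e2["label"],
)
for mapping in matcher.subgraph_isomorphisms_iter():
    inv = {v: k for k, v in mapping.items()}            # pattern node -> text node
    print(f"candidate triple: ({inv['X']}, {inv['V']}, {inv['Y']})")
```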
Abstract:
The initial premise of the thesis examines how the aftermath of the Second World War motivated a general revision of science and fostered a new relationship between mankind and its environment. Mathematics, Physics and Biology gave rise to Computer Science as a discipline of convergence. At a time when the object of science was being redefined, a number of architects saw an opportunity to transform certain disciplinary conventions. By incorporating ontologies and procedures from cybernetics and computation, they traced out a new architectural space. Legitimized by an unquestionable technological take-off, they challenged the limits of the profession by exploring fields open to new programs and actions, extending the natural domain of Architecture beyond the (finished) object towards the (open) process.
The thesis begins by describing the background that leads to that scenario of change, noting aspects of Systems Theory, Computing, Biology and certain architectural references relevant to this new approach; in that background lie the arguments for orienting the discipline towards working with processes. The central line of argument addresses the work of Christopher Alexander, Nicholas Negroponte and Cedric Price through a theoretical and practical production transformed by computation, and examines the conceptual contribution of each author. The comparative analysis of their models is organized through the dissection of three converging concepts: System, Code and Process. The critical discussion is articulated through a triangulation between the authors, identifying, by pairwise comparison, the points of agreement and controversy between them. This procedure serves to build a conceptual bridge to the current architectural scene and to assess the impact of their proposals. Their contribution is evaluated in the shift from the closed program to speculation, from the formal to the informal, from the single to the multiple, and from the architecture studio to the research laboratory. To guide this account of each author's significance in the digital development of the discipline, two essential figures are brought onto the scene: computing experts who acted as links between the authors, nuancing the meaning of their models. The work of Gordon Pask and John Frazer constitutes the vehicle that transmits the findings of those years and extends the paths opened then into the architecture of today and the architecture already being designed for tomorrow.
Abstract:
The competitive, globalized environment in which companies operate, especially since the beginning of the 21st century, combined with ever shorter product life cycles, strict quality requirements, environmental-protection policies aimed at reducing energy and water consumption, and legal demands for better working conditions, has produced a paradigm shift in production processes as previously conceived. One response to this new production scenario is the extensive use of industrial automation, which has led to increasingly complex systems, both structurally, because of the large number of components, and in terms of the complexity of their control systems. Predicting every possible state of such a system becomes practically impossible. Among the possible states are fault states which, depending on the severity of the effects associated with their occurrence, may cause serious harm to people, the environment and the installations themselves if they are not correctly diagnosed and handled. Recent catastrophes involving production systems show the need to implement measures to prevent faults and to mitigate the effects of their occurrence, with the aim of avoiding catastrophes. According to specialists, Safety Instrumented Systems (SIS), as described in standards such as IEC 61508 and IEC 61511, are a solution to this type of problem. Published work addresses methods for implementing SIS prevention layers, but work on SIS mitigation layers is scarce. Because the dynamics of the system in a fault state are unknown, traditional modeling techniques become unfeasible. In this case, the use of artificial intelligence, for example fuzzy logic, can be a solution for developing the control algorithm, together with tools for editing, modeling and generating the control code. The purpose of this work is to present a systematic approach for implementing a control system for the mitigation of critical faults in production systems, with reference to the IEC 61508/61511 standards and with anticipatory action against catastrophes.
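A minimal, self-contained sketch of the kind of fuzzy inference such a mitigation layer could use is shown below: two monitored variables are fuzzified, two rules are evaluated, and a valve-opening command is obtained by weighted-average defuzzification. All membership ranges, rules and variable names are assumptions for illustration, not the systematics proposed in this work or any requirement of IEC 61508/61511.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mitigation_level(temperature, pressure):
    """Toy fuzzy inference: two inputs -> relief-valve opening command in [0, 100] %."""
    # Fuzzify the inputs (hypothetical ranges).
    temp_high = tri(temperature, 60, 90, 120)
    pres_high = tri(pressure, 5, 8, 11)
    temp_ok   = tri(temperature, 0, 30, 70)
    pres_ok   = tri(pressure, 0, 3, 6)

    # Rule base (min for AND); each rule activates one output level.
    open_full = min(temp_high, pres_high)   # IF temp high AND pressure high THEN open fully
    keep_shut = min(temp_ok, pres_ok)       # IF temp ok AND pressure ok THEN keep shut

    # Weighted-average defuzzification over singleton output levels (100 % and 0 %).
    total = open_full + keep_shut
    if total == 0:
        return 0.0
    return (open_full * 100.0 + keep_shut * 0.0) / total

print(mitigation_level(temperature=95, pressure=9))   # high temperature and pressure -> open
```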
Abstract:
This paper presents the use of an ontology in the financial domain for query expansion, with the aim of improving the results of a financial information retrieval (IR) system. The system is composed of an ontology and a Lucene index that supports retrieval of concepts identified through natural language processing. An evaluation was carried out with a limited set of queries, and the results indicate that ambiguity remains a problem when expanding queries. In some cases, choosing the appropriate entities when expanding the query (filtering by sector, company, etc.) makes it possible to resolve that ambiguity.
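The expansion step can be pictured with the toy sketch below, where query terms are expanded with related labels from a small in-memory "ontology" and a sector filter is used to limit ambiguity. The data and the filtering rule are hypothetical, not the paper's financial ontology or its Lucene configuration.

```python
# Toy financial "ontology": term -> related entity labels, each tagged with a sector (hypothetical data).
ONTOLOGY = {
    "santander": [("Banco Santander", "banking"), ("Santander Consumer Finance", "banking")],
    "ibex":      [("IBEX 35", "index")],
}

def expand_query(query, sector=None):
    """Expand query terms with related ontology labels, optionally filtered by sector."""
    terms = query.lower().split()
    expanded = list(terms)
    for t in terms:
        for label, s in ONTOLOGY.get(t, []):
            if sector is None or s == sector:   # sector filter limits ambiguity
                expanded.append(f'"{label}"')
    return " OR ".join(expanded)

print(expand_query("santander dividend", sector="banking"))
```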
Abstract:
The development of applications and services for mobile systems faces a varied range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to respond to this issue by developing a computational model that formalizes the problem and defines methods for adjusting the computation. The proposal combines imprecise computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, scheduling imprecise computation over the workload of the embedded system provides the means to move computation to the cloud according to the priority and response time of the tasks to be executed, and thereby to meet the desired productivity and quality of service. A technique to estimate network delays and to schedule tasks more accurately is illustrated in this paper. An application example is described in which this technique is tested in running contexts with heterogeneous workloads in order to check the validity of the proposed model.
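The scheduling decision described above can be sketched as a simple rule: run the whole task locally if the deadline allows; otherwise offload the optional part to the cloud when the estimated network delay still leaves room before the deadline; and fall back to the mandatory part only (an imprecise result) otherwise. The function below is a hypothetical illustration of that logic, not the paper's scheduling method.

```python
def run_locally_or_offload(local_wcet, mandatory_fraction, cloud_exec_time,
                           est_network_delay, deadline, priority, priority_threshold=5):
    """Decide where to run the optional part of an imprecise-computation task (illustrative sketch)."""
    local_full = local_wcet                             # mandatory + optional parts on the device
    local_mand = local_wcet * mandatory_fraction        # degraded: mandatory part only
    offloaded  = local_mand + cloud_exec_time + est_network_delay

    if local_full <= deadline:
        return "run full task locally"
    if offloaded <= deadline and priority >= priority_threshold:
        return "offload optional part to the cloud"
    if local_mand <= deadline:
        return "run mandatory part only (imprecise result)"
    return "deadline cannot be met"

print(run_locally_or_offload(local_wcet=80, mandatory_fraction=0.4, cloud_exec_time=20,
                             est_network_delay=15, deadline=70, priority=7))
```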
Abstract:
Semantic data models provide a map of the components of an information system. The characteristics of these models affect their usefulness for various tasks (e.g., information retrieval). The quality of information retrieval has obvious important consequences, both economic and otherwise. Traditionally, database designers have produced parsimonious logical data models. In spite of their increased size, ontologically clearer conceptual models have been shown to facilitate better performance for both problem-solving and information retrieval tasks in experimental settings. The experiments producing evidence of enhanced performance for ontologically clearer models have, however, used application domains of modest size. Data models in organizational settings are likely to be substantially larger than those used in these experiments. This research used an experiment to investigate whether the benefits of improved information retrieval performance associated with ontologically clearer models are robust as the size of the application domain increases. The experiment used an application domain approximately twice the size of those tested in prior experiments. The results indicate that, relative to the users of the parsimonious implementation, end users of the ontologically clearer implementation made significantly more semantic errors, took significantly more time to compose their queries, and were significantly less confident in the accuracy of their queries.