906 results for object-oriented programming


Relevance:

80.00%

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs or are specific and tied to particular analyses. We also argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins", as well as multivariant and flow-sensitive. It is also based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.
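
To make the parametricity concrete, the following is a minimal sketch (ours, not the paper's algorithm) of a worklist fixpoint engine that treats the abstract domain as a plug-in; all names are illustrative, and the multivariance, flow-sensitivity and iteration-reducing optimizations described above are omitted:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Map;

// The abstract domain is a plug-in: the solver only needs a lattice
// and a transfer function.
interface AbstractDomain<A> {
    A join(A x, A y);            // least upper bound
    boolean leq(A x, A y);       // ordering test, used to detect stability
    A transfer(int block, A in); // abstract effect of one program block
}

final class Fixpoint {
    // Classic worklist iteration: a block is revisited only when the
    // abstract state flowing into it actually grows.
    static <A> Map<Integer, A> solve(AbstractDomain<A> dom,
                                     Map<Integer, A> state, // pre-initialized, e.g. to bottom
                                     Map<Integer, List<Integer>> succ) {
        Deque<Integer> worklist = new ArrayDeque<>(state.keySet());
        while (!worklist.isEmpty()) {
            int b = worklist.pop();
            A out = dom.transfer(b, state.get(b));
            for (int s : succ.getOrDefault(b, List.of())) {
                A merged = dom.join(state.get(s), out);
                if (!dom.leq(merged, state.get(s))) { // state grew: reprocess s
                    state.put(s, merged);
                    worklist.push(s);
                }
            }
        }
        return state;
    }
}
```

Any lattice implementing AbstractDomain can be plugged into the same solver, which is the sense in which such an algorithm is domain-independent.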

Relevance:

80.00%

Abstract:

Finding useful sharing information between instances in object-oriented programs has recently been the focus of much research. The applications of such static analysis are multiple: by knowing which variables definitely do not share in memory we can apply conventional compiler optimizations, find coarse-grained parallelism opportunities, or, more importantly, verify certain correctness aspects of programs even in the absence of annotations. In this paper we introduce a framework for deriving precise sharing information based on abstract interpretation for a Java-like language. Our analysis achieves precision in various ways, including supporting multivariance, which allows separating different contexts. We propose a combined Set Sharing + Nullity + Classes domain which captures which instances do not share and which ones are definitively null, and which uses the classes to refine the static information when inheritance is present. The use of a set sharing abstraction allows a more precise representation of the existing sharings and is crucial in achieving precision during interprocedural analysis. Carrying the domains in a combined way facilitates the interaction among them in the presence of multivariance in the analysis. We show through examples and experimentally that both the set sharing part of the domain as well as the combined domain provide more accurate information than previous work based on pair sharing domains, at reasonable cost.
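
As a small illustration of what the set-sharing and nullity components track, consider the following toy fragment (ours, not taken from the paper); the abstract states in the comments are the kind of information the analysis maintains:

```java
import java.util.ArrayList;
import java.util.List;

public class SharingExample {
    public static void main(String[] args) {
        List<Object> a = new ArrayList<>();
        List<Object> b = a;                 // b aliases a: a and b share
        List<Object> c = new ArrayList<>(); // c shares with no other variable
        List<Object> d = null;              // nullity component: d is definitely null
        // A set-sharing + nullity abstraction here could be:
        //   sharing = {{a, b}, {c}}  (maximal groups that may share)
        //   nullity = {d}            (variables that are definitely null)
        // A pair-sharing domain would record only the pair (a, b) and
        // lose the fact that c forms a singleton group.
        b.add(c); // c is now reachable from a and b: sharing = {{a, b, c}}
        System.out.println(a.size() + " " + (d == null));
    }
}
```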

Relevance:

80.00%

Abstract:

Finding useful sharing information between instances in object-oriented programs has recently been the focus of much research. The applications of such static analysis are multiple: by knowing which variables share in memory we can apply conventional compiler optimizations, find coarse-grained parallelism opportunities, or, more importantly, verify certain correctness aspects of programs even in the absence of annotations. In this paper we introduce a framework for deriving precise sharing information based on abstract interpretation for a Java-like language. Our analysis achieves precision in various ways. The analysis is multivariant, which allows separating different contexts. We propose a combined Set Sharing + Nullity + Classes domain which captures which instances share and which ones do not or are definitively null, and which uses the classes to refine the static information when inheritance is present. Carrying the domains in a combined way facilitates the interaction among the domains in the presence of multivariance in the analysis. We show that both the set sharing part of the domain as well as the combined domain provide more accurate information than previous work based on pair sharing domains, at reasonable cost.
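
A minimal sketch, under assumed names, of how one element of such a combined domain might be represented, with the nullity component refining the sharing component (the paper's actual domain and its abstract operations are considerably richer):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy representation of one abstract state in a combined
// Sharing + Nullity + Classes domain (illustrative names only).
record AbsState(Set<Set<String>> sharing,    // maximal groups of variables that may share
                Set<String> definitelyNull,  // variables known to be null here
                Map<String, String> classOf) // most precise known class per variable
{
    // Interaction between components: marking a variable definitely null
    // also removes it from every sharing group, refining the sharing part.
    AbsState markNull(String v) {
        Set<Set<String>> sh = new HashSet<>();
        for (Set<String> g : sharing) {
            Set<String> g2 = new HashSet<>(g);
            g2.remove(v);
            if (!g2.isEmpty()) sh.add(g2);
        }
        Set<String> nulls = new HashSet<>(definitelyNull);
        nulls.add(v);
        return new AbsState(sh, nulls, classOf);
    }
}
```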

Relevance:

80.00%

Abstract:

Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the results of the analysis still need to be further investigated. This thesis tackles this need from the following four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first extension is to consider array accesses (in addition to object fields), while the second extension focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development, programs are constantly modified, but most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods in an incremental way. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on the development of a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA is able to derive upper bounds of Java programs while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used for automatically producing verified resource guarantees. (4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
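
As a small illustration of the kind of guarantee at stake, here is a hypothetical example (not actual COSTA output) of a linear-time method together with the shape of the upper bound a cost analyzer would infer for it:

```java
// Illustrative only: the comment shows the shape of an inferred bound.
class Demo {
    // An analyzer would report an upper bound of the shape c(n) = a*n + b,
    // where n is the length of xs and a, b are constants determined by
    // the chosen cost model (e.g. executed instructions).
    static int sum(int[] xs) {
        int s = 0;
        for (int x : xs) s += x; // loop body executes n times
        return s;
    }
}
```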

Relevance:

80.00%

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, more precisely, Java source and bytecode. However, while most of the existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques to solve inter-procedural call graphs or are specific and tied to particular analyses. We argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. Also, the algorithm is parametric in the sense that it is independent of the abstract domain used and it can be applied to different domains as "plug-ins". It is also incremental in the sense that, if desired, analysis data can be saved so that only a reduced amount of reanalysis is needed after a small program change, which can be instrumental for large programs. The algorithm is also multivariant and flow-sensitive. Finally, another interesting characteristic of the algorithm is that it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are provided and discussed with an example.
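
The incremental aspect can be sketched as follows (an assumed shape, not the paper's implementation): per-method analysis results are cached, and an edit invalidates only the changed method and its transitive callers, so everything else keeps its saved result:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class IncrementalCache<A> {
    private final Map<String, A> results = new HashMap<>();
    private final Map<String, Set<String>> callers = new HashMap<>();

    void record(String method, A result, Set<String> callersOfMethod) {
        results.put(method, result);
        callers.put(method, callersOfMethod);
    }

    // Invalidate the edited method and everything that transitively
    // depends on it; untouched methods keep their saved results.
    Set<String> invalidate(String changed) {
        Set<String> dirty = new HashSet<>();
        ArrayDeque<String> todo = new ArrayDeque<>();
        todo.push(changed);
        while (!todo.isEmpty()) {
            String m = todo.pop();
            if (dirty.add(m)) {
                results.remove(m);
                todo.addAll(callers.getOrDefault(m, Set.of()));
            }
        }
        return dirty; // only these methods need re-analysis
    }
}
```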

Relevance:

80.00%

Abstract:

The Kinect camera, developed by PrimeSense in collaboration with Microsoft for the Xbox console, provides depth images thanks to an infrared sensor. The device also includes an RGB camera that provides color images, as well as an array of microphones arranged so that the direction a sound comes from can be determined. Kinect was initially created for home entertainment, but its low price (compared with other cameras of similar characteristics) and its acceptance by developers have expanded its possibilities. The objective of this project is to obtain, from these data, kinematic variables such as the position, velocity and acceleration of certain control points of a person's body, such as the head, neck, shoulders, elbows, wrists, hips, knees and ankles, from which movement patterns can be extracted. This requires middleware running on a free (GNU), cross-platform environment. Processing, an open-source environment created for design projects, has been used as the IDE. In addition, the SimpleOpenNI wrapper, developed by students and researchers working with Kinect, has been used. This makes it possible to dispense with the Microsoft SDK, which is proprietary and requires its operating system, Windows. Using these tools, a viable solution for several operating systems is obtained. The methods and facilities of the object-oriented language Java (from which Processing inherits) have been used, and a solution based on a client-server model has been proposed, which gives the project scalability. The result of the project is useful in applications for populations at risk of exclusion (such as the autism spectrum), in remote diagnosis, and in general in environments where habits and behaviors need to be studied from human movement. The project is intended to have continuity through other applications that analyze the data it provides.
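
A minimal sketch, assuming each joint is delivered as a per-frame 3D position sampled at a fixed interval dt (our simplification, not the project's actual code), of deriving velocity and acceleration by finite differences:

```java
// Estimating kinematic variables of one tracked joint (e.g. a wrist)
// from successive Kinect frames.
class Kinematics {
    // positions[i] = {x, y, z} of the joint at frame i; dt in seconds.
    static double[][] velocities(double[][] positions, double dt) {
        double[][] v = new double[positions.length - 1][3];
        for (int i = 0; i + 1 < positions.length; i++)
            for (int k = 0; k < 3; k++)
                v[i][k] = (positions[i + 1][k] - positions[i][k]) / dt;
        return v;
    }

    // Acceleration is the finite difference of the velocity series.
    static double[][] accelerations(double[][] positions, double dt) {
        return velocities(velocities(positions, dt), dt);
    }
}
```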

Relevance:

80.00%

Abstract:

The aim of the paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field where increasing attention is paid to conceptual modeling. Then, the current state of knowledge modeling techniques is described where increased reliability is available through the modern knowledge acquisition techniques and supporting tools. The KSM (Knowledge Structure Manager) tool is described next. First, the concept of knowledge area is introduced as a building block where methods to perform a collection of tasks are included together with the bodies of knowledge providing the basic methods to perform the basic tasks. Then, the CONCEL language to define vocabularies of domains and the LINK language for methods formulation are introduced. Finally, the object-oriented implementation of a knowledge area is described and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example of a system for intelligent traffic management in a road network is described. This example is followed by a proposal of generalization for reuse of the resulting architecture. Finally, some concluding comments are proposed about the feasibility of using the knowledge modeling tools and methods for general application design.
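
One possible object-oriented rendering of a knowledge area, sketched under assumed names (illustrative only, not KSM's actual implementation): an area bundles the methods that perform its tasks and may build on other areas that provide the basic methods:

```java
import java.util.List;
import java.util.Map;

interface Task {
    Object perform(Map<String, Object> inputs);
}

class KnowledgeArea {
    final String name;
    final Map<String, Task> methods;    // methods to perform the area's tasks
    final List<KnowledgeArea> subAreas; // areas providing the basic methods

    KnowledgeArea(String name, Map<String, Task> methods,
                  List<KnowledgeArea> subAreas) {
        this.name = name;
        this.methods = methods;
        this.subAreas = subAreas;
    }

    Object solve(String task, Map<String, Object> inputs) {
        Task t = methods.get(task);
        if (t == null)
            throw new IllegalArgumentException("No method for task " + task);
        return t.perform(inputs);
    }
}
```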

Relevance:

80.00%

Abstract:

The following work covers the whole process carried out to redesign an automatic tutoring system that integrates with virtual laboratories developed for students to carry out practical exercises within three-dimensional virtual environments. The main goals of this redesign are to improve the performance of the automatic tutoring system, making it more efficient and therefore allowing more students to perform an exercise at the same time, and to allow the tutor to be integrated with other graphics engines at a relatively low cost. First, the main concepts used in this work are introduced, together with some aspects of the work preceding this redesign, namely the previous version of the tutor, which was tied to the OpenSim platform. Next, the functional requirements the new design meets and the advantages it provides are detailed. The development of the work is then explained, showing how the old tutoring system was restructured, how an object-oriented design was applied, and the packages and classes that make it up. Finally, the conclusions drawn during the development of the work are presented, together with the implications of the work shown here for future developments.
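
The goal of integrating the tutor with other graphics engines at low cost suggests an adapter-style decoupling; the following is a hedged sketch with illustrative names, not the redesign's actual classes:

```java
// The tutor depends only on an engine-neutral interface, so porting to a
// new graphics engine only requires writing a new adapter.
interface VirtualWorld {
    void showMessage(String studentId, String text);
}

class OpenSimWorld implements VirtualWorld {
    public void showMessage(String studentId, String text) {
        // Calls into the OpenSim-specific API would go here.
        System.out.println("[OpenSim] to " + studentId + ": " + text);
    }
}

class Tutor {
    private final VirtualWorld world;
    Tutor(VirtualWorld world) { this.world = world; }

    void correct(String studentId, String step) {
        // Tutoring logic is engine-independent.
        world.showMessage(studentId, "Check step " + step + " again.");
    }
}
```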

Relevance:

80.00%

Abstract:

Software development companies seek to reduce costs through designs that provide: a) ease of distributing the development work, with the least communication between the parties; b) changeability, allowing a module to be changed without disturbing the other parts; and c) understandability, allowing one module of the system to be studied at a time. These basic software design characteristics are achieved through the design of quasi-decomposable systems, whose theoretical model was introduced by Simon in his search for a general theory of systems. In the field of software design, Parnas proposed a practical way to achieve quasi-decomposable systems, called the Information Hiding Principle. The Information Hiding Principle is a different criterion for decomposition into modules, whose application achieves the desirable characteristics of an efficient design at the level of the development and maintenance process. The Principle and the object-oriented approach are related because the object-oriented approach facilitates the implementation of the Principle; this is why, when objects began to take hold, difficulties in learning object-oriented software design appeared alongside them, difficulties which remain to this day, as reported in the literature. Difficulties in learning object-oriented software design have a great impact both in the classroom and in the profession. Detecting these difficulties will allow teachers to correct or redirect them before they move into industry; industry, in turn, can be warned of potential problems in the software development process. This thesis aims to investigate the difficulties in learning object-oriented software design through an empirical study. The study was conducted as a qualitative case study consisting of three parts. The first was an initial study aimed at understanding students' knowledge of the Information Hiding Principle before instruction began. The second was a study conducted throughout the period of instruction in order to identify the software design difficulties and their level of persistence. Finally, the third part studied the essential learning difficulties and their possible origins. The participants in this study belonged to the Software Design course of the European Master in Software Engineering at the Escuela Técnica Superior de Ingenieros Informáticos of the Universidad Politécnica de Madrid. The qualitative data used for the analysis came from observations during lectures and presentations, interviews with the students, and exercises submitted over the period of instruction. The difficulties presented in this thesis, from their different perspectives, provide concrete knowledge of a particular case study, making relevant contributions in the areas of software design, teaching and industry, and at the methodological level.
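
A classic illustration of the Information Hiding Principle (our example, not material from the thesis): the module's design decision, here the internal representation of a point, is hidden behind its interface and can change without disturbing any client:

```java
public class Point {
    // Hidden decision: polar representation. Switching to Cartesian
    // fields (x, y) would not affect any client of this class.
    private double radius;
    private double angle;

    public Point(double x, double y) {
        this.radius = Math.hypot(x, y);
        this.angle = Math.atan2(y, x);
    }

    public double x() { return radius * Math.cos(angle); }
    public double y() { return radius * Math.sin(angle); }
}
```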

Relevance:

80.00%

Abstract:

So-called creative processes in general, and those of architectural design in particular, maintain approaches to the object that are centered mainly on procedure, that is, on the strategic, the methodological and/or the paradigmatic. In addition, they do not usually contemplate the full potential of information or, when they do, it is handled unconsciously or referred back to the procedural. Likewise, the interest of these approaches is focused either on the proposed object or result, or on the process, but without attending to their constituent, namely the information itself. Therefore, and as physics now demands, the informational constituent basis of these approaches has neither been considered so far nor systematized. Along with this omission, such approaches do not allow each human being to configure his or her own process autonomously, independently and integrally, because the procedures mentioned rest on contextual, cultural and/or procedural frameworks, thus reflecting an orientation limited in space-time. A potency is therefore proposed, in the sense Aristotle gives it in the "Metaphysics" as "those qualities possessed by things in virtue of which they are totally impassive or immutable, or are not easily changed...", as the possibility of simultaneously alluding to a complete informational range and pointing toward the integral elaboration of one's own personal process. From the informational standpoint, which in turn is energetic depending on the scientific discipline considered, a minimum set of attributes or terms is first distinguished: potencies that condense the complete informational range. That is, they are maximum minimums. These attributes form the qualitative phase of the information, which is called accompaniment. Secondly, supported by brain functioning, by quantum space-time, and by new experiments and research in biology, physics and especially neuroscience, new lines are opened for accessing information, previously contemplated linearly, locally and as a separate entity. This second approach thus presents a potency of data intensification, which allows an increase of data in the brain and, with it, the possibility of avoiding the "blank paper" problem. This phase is named promotion. Thirdly, both phases constitute generation, which is the thesis proposal: a change of any kind in which someone is the agent of something, specifically when a human being is a mediator between events. Fusing the two, a global potential con-formation is added simultaneously, which is synergistically more than the sum of the previous two. In this grouping manner, and in order to materialize and systematize this generation or potency, an implementation is presented: an analytical-geometrical-parametrical model is developed and its application to a case study is shown. The model also exhibits self-referential or holographic behavior, reflecting both the scientific studies reviewed and the functioning of the attributes and/or of all the potencies presented in this research.

Relevance:

80.00%

Abstract:

The Zebrafish Information Network, ZFIN, is a WWW community resource of zebrafish genetic, genomic and developmental research information (http://zfin.org). ZFIN provides an anatomical atlas and dictionary, developmental staging criteria, research methods, pathology information and a link to the ZFIN relational database (http://zfin.org/ZFIN/). The database, built on a relational, object-oriented model, provides integrated information about mutants, genes, genetic markers, mapping panels, publications and contact information for the zebrafish research community. The database is populated with curated published data, user-submitted data and large dataset uploads. The data are supported by a broad range of data types, including text, images, graphical representations and genetic maps. ZFIN incorporates links to other genomic resources that provide sequence and ortholog data. Zebrafish nomenclature guidelines and an automated registration mechanism for new names are provided. Extensive usability testing has resulted in an easy-to-learn and easy-to-use forms interface with complex searching capabilities.

Relevance:

80.00%

Abstract:

GOBASE (http://megasun.bch.umontreal.ca/gobase/) is a network-accessible biological database, which is unique in bringing together diverse biological data on organelles with taxonomically broad coverage, and in furnishing data that have been exhaustively verified and completed by experts. So far, we have focused on mitochondrial data: GOBASE contains all published nucleotide and protein sequences encoded by mitochondrial genomes, selected RNA secondary structures of mitochondria-encoded molecules, genetic maps of completely sequenced genomes, taxonomic information for all species whose sequences are present in the database and organismal descriptions of key protistan eukaryotes. All of these data have been integrated and organized in a formal database structure to allow sophisticated biological queries using terms that are inherent in biological concepts. Most importantly, data have been validated, completed, corrected and standardized, a prerequisite of meaningful analysis. In addition, where critical data are lacking, such as genetic maps and RNA secondary structures, they are generated by the GOBASE team and collaborators, and added to the database. The database is implemented in a relational database management system, but features an object-oriented view of the biological data through a Web/Genera-generated World Wide Web interface. Finally, we have developed software for database curation (i.e. data updates, validation and correction), which will be described in some detail in this paper.

Relevance:

80.00%

Abstract:

One of the great challenges faced by hydraulic turbine manufacturers is preventing the onset of flow-induced vibrations in the stay vanes of the pre-distributor and in the runner blades. Considering the stay vanes alone, 28 cases of cracks or abnormal noise attributed to such vibrations have been reported in recent decades, leading to enormous losses associated with repairs, delays and lost generation. The state of the art in preventing these problems relies on sophisticated, and expensive, commercial computational fluid dynamics programs for the transient computation of the phenomenon. This work presents a broad literature review and a survey of crack and abnormal-noise events in stay vanes over the last 50 years. It then proposes an alternative approach based exclusively on open-source tools. Starting from duly justified simplifying hypotheses, the problem is formulated mathematically in two dimensions, in the plane of the stay vane cross-section, taking fluid-structure interaction into account. In this strategy, the Navier-Stokes equations are solved by the finite element method using the free oomph-lib library. A special C++ code is developed for the fluid-structure interaction problem, in which turbulence is taken into account by means of an algorithm based on the Baldwin-Lomax model. The proposed method is validated by comparing the results obtained with references and measurements available in the literature for elastically supported rectangular bars. The work concludes with the application of the method to a case study involving a particular stay vane.
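
For reference, the equations such a fluid-structure analysis solves are the incompressible Navier-Stokes equations, shown here in standard form with density ρ, velocity u, pressure p and dynamic viscosity μ (the thesis's exact formulation and nondimensionalization may differ):

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
      + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0 .
```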

Relevance:

80.00%

Abstract:

Paper presented at the V Jornadas Iberoamericanas de Ingeniería de Requisitos y Ambientes Software (IDEAS'02), Havana, Cuba, April 2002.

Relevance:

80.00%

Abstract:

Paper presented at the VII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2002), within the II Taller sobre Ingeniería del Software Orientada al Web (Web Engineering) WebE'2002, El Escorial, Madrid, 19 November 2002.