919 results for Visualization Using Computer Algebra Tools
Abstract:
A major portion of the academic production developed in Latin America within the discipline of international relations and, in particular, within the field of foreign policy has focused on autonomy. Indeed, renowned scholars have approached this concept using diverse theoretical frameworks in search of its potential application to the Latin American reality. This article sets out to study the main proposals on autonomy, using the analytical tools provided by the debate between rationalists and reflectivists. That debate goes back to the deepest aspects of the process of knowledge creation, namely method and worldview: that which underlies the assumptions or premises of any theory and sustains it from beginning to end. The intention is to move beyond the traditional reflections that focus on classifying these notions according to their theoretical affiliation or around similar assumptions. In this way, a state of the art is presented from a new analytical perspective, one that lays bare and questions our traditional way of understanding international reality. This, in turn, reveals the effects of adopting specific methods and worldviews to solve problems of enormous complexity, problems that demand a broader analytical and theoretical spectrum for their solution.
Abstract:
Human butyrylcholinesterase (BChE; EC 3.1.1.8) is a polymorphic enzyme synthesized in the liver and in adipose tissue, widely distributed throughout the body, and responsible for hydrolysing certain choline esters such as procaine, aliphatic esters such as acetylsalicylic acid, drugs such as methylprednisolone, mivacurium and succinylcholine, and drugs of use and/or abuse such as heroin and cocaine. It is encoded by the BCHE gene (OMIM 177400), for which more than 100 variants have been identified, some not yet fully characterised, in addition to the most frequent form, called the usual or wild-type variant. Different polymorphisms of the BCHE gene have been associated with the synthesis of enzymes showing varying levels of catalytic activity. The molecular bases of some of these genetic variants have been reported, among them the Atypical (A), fluoride-resistant types 1 and 2 (F-1 and F-2), silent (S), Kalow (K), James (J) and Hammersmith (H) variants. In this study, the validated Lifetime Severity Index for Cocaine Use Disorder (LSI-C) instrument was administered to a group of patients to assess the severity of cocaine use over their lifetime. In addition, Single Nucleotide Polymorphisms (SNPs) in the BCHE gene known to be responsible for adverse reactions in cocaine-using patients were determined by gene sequencing, and the effect of the SNPs on protein function and structure was predicted using bioinformatics tools. The LSI-C instrument yielded results in four dimensions: lifetime use, recent use, psychological dependence, and attempts to quit. Molecular analysis revealed two non-synonymous coding SNPs (cSNPs) in 27.3% of the sample, c.293A>G (p.Asp98Gly) and c.1699G>A (p.Ala567Thr), located in exons 2 and 4, which correspond, functionally, to the Atypical (A) variant [dbSNP: rs1799807] and the Kalow (K) variant [dbSNP: rs1803274] of the BChE enzyme, respectively. In silico prediction studies assigned a pathogenic character to the p.Asp98Gly SNP, whereas the p.Ala567Thr SNP showed a neutral behaviour. Analysis of the results supports the existence of a relationship between polymorphisms or genetic variants responsible for low catalytic activity and/or low plasma concentration of the BChE enzyme and some of the adverse reactions occurring in cocaine-using patients.
Abstract:
Monograph with the title 'Educación matemática y tecnologías de la información' (mathematics education and information technologies). Abstract based on that of the publication.
Abstract:
Following an introduction to the diagonalization of matrices, one of the more difficult topics for students to grasp in linear algebra is the concept of Jordan normal form. In this note, we show how the important notions of diagonalization and Jordan normal form can be introduced and developed through the use of the computer algebra package Maple®.
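The note itself works in Maple®; as a rough parallel only (not the authors' worksheets), the same two computations can be demonstrated with the SymPy computer algebra library in Python. The matrices below are invented examples.

```python
from sympy import Matrix, pprint

# A diagonalizable matrix: eigenvalues 2 and 5, so P**-1 * A * P == D.
A = Matrix([[4, 1], [2, 3]])
P, D = A.diagonalize()
pprint(D)

# A defective matrix: the eigenvalue 4 is repeated but has only one
# independent eigenvector, so diagonalize() would fail here; the
# Jordan normal form [[4, 1], [0, 4]] still exists.
B = Matrix([[5, 1], [-1, 3]])
Pj, J = B.jordan_form()
pprint(J)
```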
Abstract:
We have developed a highly parallel design for a simple genetic algorithm using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second. Genetic algorithms (GAs) are established search and optimization techniques which have been applied to a range of engineering and applied problems with considerable success [1]. They operate by maintaining a population of trial solutions encoded using a suitable encoding scheme.
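The mutation array described above is a hardware pipeline; as a rough software analogue (not the paper's implementation), the sketch below applies bitwise mutation to a stream of fixed-length chromosomes, which is the operation a systolic mutation stage would carry out gene by gene in parallel. The function name and mutation rate are illustrative assumptions.

```python
import random

def mutate_stream(chromosomes, mutation_rate=0.01, rng=None):
    """Apply bitwise mutation to each chromosome in a stream.

    A systolic mutation array would process every gene position in
    parallel; here we simply loop over genes, flipping each 0/1 gene
    with probability `mutation_rate`.
    """
    rng = rng or random.Random()
    for chrom in chromosomes:
        yield [gene ^ 1 if rng.random() < mutation_rate else gene
               for gene in chrom]

# Illustrative usage: five 8-bit chromosomes, mirroring the
# "5-chromosome mutation array" scale mentioned above.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(5)]
mutated = list(mutate_stream(population, mutation_rate=0.05))
print(mutated)
```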
Abstract:
The infrared spectrum of the stretching fundamentals of SiF2 has been obtained at a resolution of ≈ 0.1 cm−1 using an FTIR spectrometer. The spectrum has been analysed using computer simulation based on a coupled Hamiltonian for ν1 and ν3, giving ν1 = 855.01 cm−1 and ν3 = 870.40 cm−1. The relative magnitude and sign of the vibrational transition moments have been determined from the ζc13 Coriolis coupling.
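The coupled Hamiltonian is not reproduced in the abstract; as a generic illustration only (an assumption, not the paper's exact model), a pair of Coriolis-interacting vibrational states ν1 and ν3 is commonly treated with a 2×2 effective Hamiltonian of the form

```latex
H_{\mathrm{eff}} =
\begin{pmatrix}
E_{1}(J,K) & W_{13} \\
W_{13} & E_{3}(J,K)
\end{pmatrix},
\qquad
W_{13} \propto \zeta^{c}_{13},
```

where E1 and E3 are the unperturbed rovibrational term values (containing the band origins quoted above) and the off-diagonal element scales with the Coriolis coupling constant.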
Abstract:
It is problematic to use standard ontology tools when describing vague domains. Standard ontologies are designed to formally define one view of a domain, and although it is possible to define disagreeing statements, it is not advisable, as the resulting inferences could be incorrect. Two different solutions to this problem, in two different vague domains, have been developed and are presented. The first domain is the knowledge base of conversational agents (chatbots). An ontological scripting language has been designed to access ontology data from within chatbot code. The solution developed is based on reifications of user statements. It enables a new layer of logics based on the different views of the users, allowing the body of knowledge to grow automatically. The second domain is competencies and competency frameworks. An ontological framework has been developed to model different competencies using the emergent standards. It enables comparison of competencies using a mix of linguistic logics and descriptive logics. The comparison results are non-binary, that is, not simple yes and no answers, highlighting the vague nature of the comparisons. The solution has been developed with small ontologies which can be added to and modified so that the competency user can build a total picture that fits the user's purpose. Finally, these two approaches are viewed in the light of how they could aid future work in vague domains; further work is described in both domains and also in other domains such as the semantic web. This demonstrates two different approaches to achieving inferences using standard ontology tools in vague domains.
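The chatbot solution rests on reifying user statements so that conflicting views can coexist without the ontology itself becoming inconsistent. The sketch below is only a minimal illustration of that general idea using the standard RDF reification vocabulary via the rdflib Python library; the namespace, statement content and attribution property are assumptions for illustration, not the paper's scripting language.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/chatbot#")  # hypothetical namespace
g = Graph()

def assert_user_view(graph, user, subject, predicate, obj):
    """Record a user's claim as a reified statement attributed to that user.

    The claim itself is not asserted as a fact, so two users can hold
    contradictory views without the ontology becoming inconsistent.
    """
    stmt = URIRef(f"{EX}stmt_{user}_{abs(hash((subject, predicate, obj)))}")
    graph.add((stmt, RDF.type, RDF.Statement))
    graph.add((stmt, RDF.subject, subject))
    graph.add((stmt, RDF.predicate, predicate))
    graph.add((stmt, RDF.object, obj))
    graph.add((stmt, EX.claimedBy, EX[user]))  # attribution of the view
    return stmt

# Two users asserting opposite views about the same vague proposition.
assert_user_view(g, "alice", EX.Pluto, EX.isA, EX.Planet)
assert_user_view(g, "bob", EX.Pluto, EX.isA, Literal("dwarf planet"))
print(g.serialize(format="turtle"))
```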
Abstract:
There is growing concern about reducing greenhouse gas emissions all over the world. In its recent Post-Copenhagen Report on Climate Change, the U.K. set targets of a 34% reduction in emissions by 2020 and an 80% reduction by 2050, compared with 1990 levels. In practice, Life Cycle Cost (LCC) and Life Cycle Assessment (LCA) tools have been introduced to the construction industry to help achieve this. However, there is a clear disconnect between costs and environmental impacts over the life cycle of a built asset when using these two tools. Furthermore, changes in Information and Communication Technologies (ICTs) have changed the way information is represented; in particular, information is fed more easily and distributed more quickly to different stakeholders through tools such as Building Information Modelling (BIM), with little consideration given to incorporating LCC and LCA and maximising their usage within the BIM environment. The aim of this paper is to propose the development of a model-based LCC and LCA tool that supports sustainable building design decisions for clients, architects and quantity surveyors, so that an optimal investment decision can be made by studying the trade-off between costs and environmental impacts. An application framework is also proposed, as future work, showing how the proposed model can be incorporated into the BIM environment in practice.
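The trade-off study mentioned above is not specified in the abstract; as an illustration of the general idea only (the option names and figures are invented for the example, not taken from the paper), the sketch below compares design options by life cycle cost and life cycle carbon and keeps those that are not dominated on both criteria, i.e. the Pareto set from which an investment decision could be made.

```python
from dataclasses import dataclass

@dataclass
class DesignOption:
    name: str
    life_cycle_cost: float    # e.g. GBP over the asset's life
    life_cycle_carbon: float  # e.g. tonnes CO2e over the asset's life

def pareto_front(options):
    """Return options not dominated on both cost and carbon.

    An option is dominated if another option is no worse on both
    criteria and strictly better on at least one.
    """
    front = []
    for a in options:
        dominated = any(
            b.life_cycle_cost <= a.life_cycle_cost
            and b.life_cycle_carbon <= a.life_cycle_carbon
            and (b.life_cycle_cost < a.life_cycle_cost
                 or b.life_cycle_carbon < a.life_cycle_carbon)
            for b in options
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical design options for illustration only.
options = [
    DesignOption("baseline facade", 1_000_000, 900),
    DesignOption("high-insulation facade", 1_150_000, 650),
    DesignOption("premium glazing", 1_300_000, 700),
]
for opt in pareto_front(options):
    print(opt.name, opt.life_cycle_cost, opt.life_cycle_carbon)
```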
Resumo:
In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that computer vision systems usually use imaging devices that are specifically designed for the human vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows bricks etc. The camera is an ordinary commercial camcorder moving along the axial axis of the scene and is allowed to rotate freely within the range +/- 10 degrees in all directions. This makes it possible that the camera be held by a walking unprofessional cameraman with normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequence of images of a variety of structured, but fairly cluttered scenes taken by different walking cameramen. The potential application areas of the system include medicine, robotics and photogrammetry.
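As an illustration of one relationship such a system can exploit (this is textbook pinhole geometry, not necessarily the paper's actual algorithm), a pure camera rotation shifts the vanishing point of the scene's axial direction in the image, so a small pan angle can be recovered from that shift and the focal length. The pixel values below are invented.

```python
import math

def pan_angle_from_vanishing_point(u_vp, u0, focal_px):
    """Estimate the camera pan angle (radians) from a vanishing point.

    For a pinhole camera, the vanishing point of lines parallel to the
    optical axis sits at the principal point u0 when the camera looks
    straight down the axis; a pan by angle theta moves it horizontally
    to u_vp, with u_vp - u0 = f * tan(theta).
    """
    return math.atan2(u_vp - u0, focal_px)

# Example: vanishing point found 120 px right of the principal point
# with an assumed focal length of 800 px -> roughly 8.5 degrees of pan.
theta = pan_angle_from_vanishing_point(760, 640, 800)
print(math.degrees(theta))
```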
Abstract:
The urban heat island (UHI) is a well-known effect of urbanisation and is particularly important in world megacities. Overheating in such cities is expected to be exacerbated in the future as a result of further urban growth and climate change. Demonstrating and quantifying the impact of individual design interventions on the UHI is currently difficult using available software tools. The tools developed in the LUCID (‘The Development of a Local Urban Climate Model and its Application to the Intelligent Design of Cities’) research project will enable the related impacts to be better understood, quantified and addressed. This article summarises the relevant literature and reports on the ongoing work of the project.
Abstract:
Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data, and where they have, this was primarily restricted to the study of single articulators. If AOS reflects a basic neuromotor dysfunction, this should somehow be evident in the production of both dysfluent and perceptually fluent speech. The current study compared motor control strategies for the production of perceptually fluent speech between a young woman with apraxia of speech (AOS) and Broca’s aphasia and a group of age-matched control speakers, using concepts and tools from articulation-based theories. In addition, to examine the potential role of specific movement variables in gestural coordination, the second part of this study compared fluent and dysfluent speech samples from the speaker with AOS. Movement data from the lips, jaw and tongue were acquired using the AG-100 EMMA system during the reiterated production of multisyllabic nonwords. The findings indicated that, although the kinematic parameters of fluent speech in the subject with AOS and Broca’s aphasia were in general similar to those of the age-matched controls, speech task-related differences were observed in upper lip movements and lip coordination. The comparison between fluent and dysfluent speech characteristics suggested that fluent speech was achieved through the use of specific motor control strategies, highlighting the potential association between the stability of coordinative patterns and movement range, as described in Coordination Dynamics theory.
Abstract:
Traditionally, the representation of competencies using computer-based techniques has been very difficult. This paper introduces competencies and how they are represented, the related concept of competency frameworks, and the difficulties in using traditional ontology techniques to formalise them. A “vaguely” formalised framework, developed within the EU project TRACE, is presented. The framework can be used to represent different competencies and competency frameworks. Through a case study using an example from the IT sector, it is shown how these can be used by individuals and organisations to specify their individual competency needs. Furthermore, it is described how these representations are used for comparisons between different specifications, applying ontologies and ontology toolsets. The end result is a comparison that is not binary but ternary, providing “definite matches”, possible / partial matches, and “no matches”, using a “traffic light” analogy.
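The abstract does not give the comparison procedure; the sketch below is only a schematic illustration of the ternary "traffic light" outcome it describes, assuming a hypothetical representation of competencies as a required proficiency level per skill (the enum names, thresholds and IT-sector example are invented, not taken from TRACE).

```python
from enum import Enum

class Match(Enum):
    GREEN = "definite match"
    AMBER = "possible / partial match"
    RED = "no match"

def compare_competency(required: dict, offered: dict) -> Match:
    """Compare an offered competency profile against a required one.

    Profiles map skill names to proficiency levels (1-5). The outcome is
    ternary rather than yes/no: GREEN if every required skill is met,
    AMBER if all skills are present but some fall short, RED if a
    required skill is missing entirely.
    """
    if any(skill not in offered for skill in required):
        return Match.RED
    if all(offered[skill] >= level for skill, level in required.items()):
        return Match.GREEN
    return Match.AMBER

# Hypothetical IT-sector example.
required = {"java": 3, "sql": 2}
offered = {"java": 4, "sql": 1}
print(compare_competency(required, offered))  # Match.AMBER
```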
Abstract:
This paper presents a quantitative evaluation of a tracking system on the PETS 2015 Challenge datasets using well-established performance measures. Using existing tools, the tracking system implements an end-to-end pipeline that includes object detection, tracking and post-processing stages. The evaluation results are presented on the provided sequences of both the ARENA and P5 datasets of the PETS 2015 Challenge. The results show an encouraging performance of the tracker in terms of accuracy, but a greater tendency to be prone to cardinality errors and ID changes on both datasets. Moreover, the analysis shows a better performance of the tracker on visible imagery than on thermal imagery.
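The abstract does not name the specific measures; cardinality errors and ID changes are the quantities that enter the widely used CLEAR MOT accuracy score (MOTA), so the sketch below shows that standard formula as a point of reference. It is an assumption that MOTA is among the measures used here, and the counts in the example are invented.

```python
def mota(false_negatives, false_positives, id_switches, num_gt_objects):
    """Multiple Object Tracking Accuracy (CLEAR MOT).

    MOTA = 1 - (FN + FP + IDSW) / total ground-truth objects, summed
    over all frames. Cardinality errors (missed or spurious targets)
    and identity switches all lower the score.
    """
    if num_gt_objects == 0:
        raise ValueError("no ground-truth objects")
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_objects

# Invented counts for illustration: 100 misses, 80 false positives and
# 20 identity switches over 2000 ground-truth object instances.
print(f"MOTA = {mota(100, 80, 20, 2000):.3f}")  # MOTA = 0.900
```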
Abstract:
Background— The NADPH oxidase, by generating reactive oxygen species, is involved in the pathophysiology of many cardiovascular diseases and represents a therapeutic target for the development of novel drugs. A single-nucleotide polymorphism (SNP), C242T, of the p22phox subunit of NADPH oxidase has been reported to be negatively associated with coronary heart disease (CHD) and may predict disease prevalence. However, the underlying mechanisms remain unknown. Methods and Results— Using computer molecular modelling, we discovered that the C242T SNP causes significant structural changes in the extracellular loop of p22phox and reduces the stability of its interaction with the Nox2 subunit. Gene transfection of human pulmonary microvascular endothelial cells showed that C242T p22phox significantly reduced Nox2 expression but had no significant effect on basal endothelial O2·− production or on the expression of Nox1 and Nox4. When cells were stimulated with TNFα (or high glucose), C242T p22phox significantly inhibited TNFα-induced Nox2 maturation, O2·− production, MAPK and NFκB activation, and inflammation (all p<0.05). These C242T effects were further confirmed using p22phox shRNA-engineered HeLa cells and Nox2-/- coronary microvascular endothelial cells. Clinical significance was investigated using saphenous vein segments from non-CHD subjects after phlebectomies. The TT (C242T) allele was common (prevalence of ~22%) and, compared to CC, veins bearing the TT allele had significantly lower levels of Nox2 expression and O2·− generation in response to high glucose challenge. Conclusions— The C242T SNP causes p22phox structural changes that inhibit endothelial Nox2 activation and the oxidative response to TNFα or high glucose stimulation. The C242T SNP may represent a natural protective mechanism against inflammatory cardiovascular diseases.
Abstract:
Visualization of high-dimensional data requires a mapping to a visual space. Whenever the goal is to preserve similarity relations, a frequent strategy is to use 2D projections, which afford intuitive interactive exploration, e.g., by users locating and selecting groups and gradually drilling down to individual objects. In this paper, we propose a framework for projecting high-dimensional data to 3D visual spaces, based on a generalization of the Least-Square Projection (LSP). We compare projections to 2D and 3D visual spaces both quantitatively and through a user study considering certain exploration tasks. The quantitative analysis confirms that 3D projections outperform 2D projections in terms of precision. The user study indicates that certain tasks can be more reliably and confidently answered with 3D projections. Nonetheless, as 3D projections are displayed on 2D screens, interaction is more difficult. Therefore, we incorporate suitable interaction functionalities into a framework that supports 3D transformations, predefined optimal 2D views, coordinated 2D and 3D views, and hierarchical 3D cluster definition and exploration. For visually encoding data clusters in a 3D setup, we employ color coding of projected data points as well as four types of surface renderings. A second user study evaluates the suitability of these visual encodings. Several examples illustrate the framework's applicability both for visual exploration of multidimensional abstract (non-spatial) data and for the feature space of multivariate spatial data.
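For readers unfamiliar with LSP, the sketch below illustrates the general least-squares idea behind it, assuming (as in the original technique) that a small set of control points has already been positioned and that every other point should lie near the average of its high-dimensional neighbours; it is a dense, simplified toy, not the authors' 3D generalization, and the data are random. Using three output columns yields a 3D layout, two columns a 2D one.

```python
import numpy as np

def lsp_project(X, control_idx, control_pos, n_neighbors=8):
    """LSP-style projection: neighbourhood constraints + control points.

    Builds one row per point forcing x_i - mean(neighbours of i) = 0,
    appends rows fixing the control points to their given positions,
    and solves the over-determined system in the least-squares sense
    (all output coordinates at once).
    """
    n, d_out = len(X), control_pos.shape[1]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    L = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:n_neighbors + 1]  # skip the point itself
        L[i, i] = 1.0
        L[i, nbrs] = -1.0 / len(nbrs)
    C = np.zeros((len(control_idx), n))
    C[np.arange(len(control_idx)), control_idx] = 1.0
    A = np.vstack([L, C])
    b = np.vstack([np.zeros((n, d_out)), control_pos])
    coords, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coords

# Toy usage: 100 random 10-D points, 10 control points at random 3D positions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
control_idx = rng.choice(100, size=10, replace=False)
control_pos = rng.normal(size=(10, 3))  # 3 columns -> 3D projection
print(lsp_project(X, control_idx, control_pos).shape)  # (100, 3)
```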