859 results for graphical attractiveness
Abstract:
In human population genetics, principal component techniques are routinely applied. Population biologists make widespread use of certain discrete classifications of human samples into haplotypes, the monophyletic units of phylogenetic trees constructed from several hierarchically ordered single nucleotide polymorphisms. Compositional frequencies of the haplotypes are recorded within the different samples. Principal component techniques are then required as a dimension-reducing strategy to bring the dimension of the problem to a manageable level, say two, to allow for graphical analysis. Population biologists at large are not aware of the special features of compositional data and normally use the crude covariance of compositional relative frequencies to construct principal components. In this short note we present our experience of using traditional linear principal components versus compositional principal components based on logratios, with reference to a specific dataset.
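As an illustration of the logratio alternative, here is a minimal sketch of compositional PCA via the centred logratio (clr) transform, in Python with NumPy. The frequencies and all names below are hypothetical illustrations, not the dataset discussed in the note:

```python
import numpy as np

def clr(x):
    """Centred logratio transform: log of each part minus the row-wise
    mean log (i.e. log of part over the geometric mean of the row)."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

def compositional_pca(freqs, n_components=2):
    """PCA on clr-transformed compositions (Aitchison geometry).

    freqs: (n_samples, n_parts) array of strictly positive relative
    frequencies. Returns scores of shape (n_samples, n_components).
    """
    z = clr(freqs)
    z = z - z.mean(axis=0)                       # centre across samples
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[:n_components].T

# hypothetical haplotype frequencies: 4 samples, 3 haplotypes
f = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.1, 0.6, 0.3]])
scores = compositional_pca(f)                    # 2-D scores for plotting
```

Zero frequencies must be handled (e.g. by replacement strategies) before taking logs; the crude alternative criticized in the note would simply run PCA on `f` itself.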
Abstract:
It is well known that regression analyses involving compositional data need special attention because the data are not of full rank. For a regression analysis where both the dependent and independent variables are compositions, we propose a transformation of the components emphasizing their role as dependent and independent variables. A simple linear regression can be performed on the transformed components. The regression line can be depicted in a ternary diagram, facilitating the interpretation of the analysis in terms of components. An example with time-budgets illustrates the method and the graphical features.
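A rough sketch of the general idea under one common choice of transformation: map 3-part compositions to isometric logratio (ilr) coordinates and fit an ordinary least-squares regression between them (Python/NumPy; the data and this particular ilr basis are illustrative assumptions, not necessarily the transformation proposed in the paper):

```python
import numpy as np

def ilr(x):
    """ilr coordinates of 3-part compositions for one standard balance basis."""
    x = np.asarray(x, float)
    z1 = np.sqrt(0.5) * np.log(x[:, 0] / x[:, 1])
    z2 = np.sqrt(2.0 / 3.0) * np.log(np.sqrt(x[:, 0] * x[:, 1]) / x[:, 2])
    return np.column_stack([z1, z2])

# hypothetical time-budget compositions (each row sums to 1)
X = np.array([[0.5, 0.3, 0.2], [0.4, 0.4, 0.2],
              [0.3, 0.4, 0.3], [0.2, 0.5, 0.3]])   # independent compositions
Y = np.array([[0.6, 0.2, 0.2], [0.5, 0.3, 0.2],
              [0.4, 0.3, 0.3], [0.3, 0.4, 0.3]])   # dependent compositions

zx, zy = ilr(X), ilr(Y)
A = np.column_stack([np.ones(len(zx)), zx])        # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, zy, rcond=None)      # (3, 2) coefficient matrix
fitted = A @ coef                                  # fitted ilr coordinates
```

Back-transforming `fitted` to the simplex would give the curve that appears as the regression line in the ternary diagram.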
Abstract:
The statistical analysis of compositional data should be based on logratios of parts, which are difficult to handle correctly in standard statistical packages. For this reason a freeware package, named CoDaPack, was created. This software implements most of the basic statistical methods suitable for compositional data. In this paper we describe the new version of the package, now called CoDaPack3D. It is developed in Visual Basic for Applications (associated with Excel©), Visual Basic and OpenGL, and it is oriented towards users with a minimum knowledge of computers, with the aim of being simple and easy to use. This new version includes new graphical output in 2D and 3D. These outputs can be zoomed and, in 3D, rotated. A customization menu is also included, and outputs can be saved in JPEG format. The new version also includes interactive help, and all dialog windows have been improved to facilitate use. To use CoDaPack one opens Excel© and enters the data in a standard spreadsheet, organized as a matrix where Excel© rows correspond to the observations and columns to the parts. The user executes macros that return numerical or graphical results. There are two kinds of numerical results: new variables and descriptive statistics, and both appear on the same sheet. Graphical output appears in independent windows. In the present version there are 8 menus, with a total of 38 submenus which, after some dialogue, directly call the corresponding macro. The dialogues ask the user to input variables and further parameters needed, as well as where to put the results. The web site http://ima.udg.es/CoDaPack contains this freeware package; only Microsoft Excel© under Microsoft Windows© is required to run the software. Key words: compositional data analysis, software
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions, the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1-p. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample scope for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008).
This leads to attractive graphical representations where large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE, implemented in R software, will be shown using SNP data from different human populations.
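The classical chi-square test mentioned above can be sketched in a few lines (a minimal numerical illustration only; the ternary-plot acceptance regions of Graffelman & Morales are not reproduced here, and the genotype counts are hypothetical):

```python
import math

def hwe_chisq(n_AA, n_AB, n_BB):
    """Classical chi-square test for Hardy-Weinberg equilibrium
    (1 degree of freedom, no continuity correction).
    Note: in exact HWE the genotype frequencies satisfy
    f_AB**2 == 4 * f_AA * f_BB, the parabola in the ternary plot."""
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)          # estimated allele frequency of A
    q = 1 - p
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_AA, n_AB, n_BB]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # survival function of a chi-square with 1 df: erfc(sqrt(x/2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

stat, pval = hwe_chisq(25, 50, 25)           # counts in exact HWE (p = 0.5)
```

For counts in exact HWE the statistic is zero; a heterozygote deficit such as `hwe_chisq(50, 10, 40)` yields a large statistic and a tiny p-value.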
Abstract:
The theory of compositional data analysis is often focused on the composition alone. In practical applications, however, we often treat a composition together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a coloured set of ternary diagrams might be a good idea for a first look at the data, but it will quickly hide important aspects if the composition has many parts or takes extreme values. On the other hand, coloured scatterplots of ilr components may not be very instructive for the analyst if a conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections which, on the one hand, show the compositional geometry and, on the other, remain comprehensible to a non-expert analyst and readable for all locations and scales of the data. This is done, for example, by defining special balance displays with carefully selected axes. Following this idea, we need to ask systematically how to display, explore, describe, and test the relation of a composition to complementary or explanatory data on categorical, real, ratio or again compositional scales. This contribution shows that it is sufficient to use some basic concepts and very few advanced tools from multivariate statistics (principal components, multivariate linear models, trellis or parallel plots, etc.) to build appropriate procedures for all these combinations of scales. This has some fundamental implications for their software implementation, and for how they might be taught to analysts who are not already experts in multivariate analysis.
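One building block of such displays, a balance contrasting two groups of parts, can be computed as follows (Python/NumPy sketch; the grouping and the data are hypothetical illustrations):

```python
import numpy as np

def balance(x, num_idx, den_idx):
    """Balance between two groups of parts:
    sqrt(r*s/(r+s)) * log( gmean(numerator parts) / gmean(denominator parts) ),
    where r and s are the group sizes."""
    x = np.asarray(x, float)
    r, s = len(num_idx), len(den_idx)
    g_num = np.exp(np.log(x[:, num_idx]).mean(axis=1))   # geometric means
    g_den = np.exp(np.log(x[:, den_idx]).mean(axis=1))
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

# hypothetical 4-part compositions; contrast parts {0,1} against {2,3}
comp = np.array([[0.40, 0.30, 0.20, 0.10],
                 [0.25, 0.25, 0.25, 0.25]])
b = balance(comp, [0, 1], [2, 3])
```

Plotting such a balance against a real covariate, or as boxplots per category, gives exactly the kind of readable, interpretable display the abstract advocates; the balance is zero when both groups have the same geometric mean.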
Abstract:
This paper presents a Graphical User Interface for GRASS GIS, developed with Python and the graphics library wxPython. This GUI gives access to several modules through a graphical interface written in Spanish. Its main purpose is to serve as a teaching tool, which is why it only gives access to a few basic but crucial modules. It also allows the user to organize the elements presented, so as to stress the aspects to be highlighted in a particular working session with the program.
Abstract:
This paper presents a vision-based localization approach for an underwater robot in a structured environment. The system is based on a coded pattern placed on the bottom of a water tank and an onboard down-looking camera. Its main features are absolute and map-based localization, landmark detection and tracking, and real-time computation (12.5 Hz). The proposed system provides the three-dimensional position and orientation of the vehicle along with its velocity. The accuracy of the drift-free estimates is very high, allowing them to be used as feedback measures for a velocity-based low-level controller. The paper details the localization algorithm, showing some graphical results and the accuracy of the system.
Abstract:
This paper overviews the field of graphical simulators used for AUV development, presents a taxonomy of these applications and proposes a classification. It also presents Neptune, a multivehicle, real-time, graphical simulator based on OpenGL that allows hardware-in-the-loop simulations.
Abstract:
The JModel suite consists of a number of models of aspects of the Earth System. They can all be run from the JModels website. They are written in the Java language for maximum portability, and are capable of running on most computing platforms, including Windows, MacOS and Unix/Linux. The models are controlled via graphical user interfaces (GUIs), so no knowledge of computer programming is required to run them. The models currently available from the JModels website are: ocean phosphorus cycle; ocean nitrogen and phosphorus cycles; ocean silicon and phosphorus cycles; ocean and atmosphere carbon cycle; and an energy radiation balance model (under development). The main purpose of the models is to investigate how material and energy cycles of the Earth system are regulated and controlled by different feedbacks. While the central focus is on these feedbacks and Earth System stabilisation, the models can also be used in other ways. These resources have been developed by the National Oceanography Centre, Southampton, in a project led by Toby Tyrrell and Andrew Yool, focusing on how the Earth system works.
Abstract:
Matlab is a high-level language that is very easy to use and very powerful. It comes with a wealth of libraries and toolboxes that you can use directly, so you don't need to program low-level functions. It enables you to display results very easily in graphs and images. To get started with it, you need to understand how to manipulate and represent data, and how to find information about the available functions. During this self-study tutorial, you will learn: 1- How to start Matlab. 2- How you can find all the information you need. 3- How to create simple vectors and matrices. 4- What functions are available and how to find them. 5- How to plot graphs of functions. 6- How to write a script. After this (it should take about an hour), you will know most of what you need to know about Matlab and should definitely know how to go on learning about it on your own…
Abstract:
These resources are designed to support students in gaining more confidence with Matlab. The PDFs provide guidance and information. Objectives: introduce basic syntax and data preparation for graphing with Matlab by providing some data, examples of code and some background documents. Outcomes: how to write an m-file script; the importance of syntax; how to load files; how to produce simple graphs; where to get help and further examples. There are also some data files providing example data for students to work with in producing Matlab resources.
Abstract:
This work focuses on the phenomenon of internationalization in the oil sector. Three companies were chosen: Ecopetrol, as the largest Colombian company; Petrobras, the largest Latin American representative; and Exxon Mobil, a global oil giant. These companies, which are at different stages of their internationalization process, show similar strategic behaviours. It is precisely these similarities that made it possible to propose a generalized internationalization model for the different companies that make up this economic sector. To arrive at this model, different internationalization theories developed by various business schools around the world were drawn upon, such as the Eclectic Model, the Uppsala Model and Network Theory. It should be noted that the proposed model is a theoretical approximation to the business reality of oil companies, using a small sample of this type of organization as a frame of reference. Within this model, the high complexity inherent to the internationalization phenomenon is considerably reduced, as part of the academic exercise proposed in this study.
Abstract:
The sectoral strategic study carried out in five university hospitals in the city of Bogotá D.C. has as its general objective the application of the structural analysis methodology for strategic sectors, in order to determine the level of crowding in the sector, the opportunities for its development, the analysis of market forces and the level of sectoral attractiveness. The study concludes that the sector tends towards competitive equilibrium and the absence of differentiating factors among the organizations studied. The sectoral analysis detects the need to create development opportunities that allow the organizations to escape the imitation processes in which they are immersed. Finally, the need to strengthen relationships and alliances among competitors, suppliers and buyers is highlighted.
Abstract:
The sectoral study applied to a State Social Enterprise (E.S.E.) in the city of Bogotá was carried out by applying the structural analysis methodology for strategic sectors, in order to determine the degree of crowding in the sector, locate opportunities not yet exploited, establish the sector's level of attractiveness and identify the behaviour of competitors, so as to finally characterize the strategic sector in terms of the elements that determine its behaviour. The tests revealed four determining elements of the sector's behaviour: the absence of its own entry barriers, the existence of assumptions that limit decision-making, the existence of strategic convergence, and the erosion of strategy and productivity. These findings allowed the research team to make contributions aimed at generating strategic reflection in the E.S.E., broadening its outlook and implementing conceptual innovation towards differentiating behaviours that seek to improve the value chain.
Abstract:
This study examines how organizations use the concept of community and community strategies through marketing. Although there are several ways to build customer loyalty, in the wholesale trade sector a great difficulty in approaching the consumer was observed. One of the easiest and most effective ways to do so is through communities, since they remove market barriers and create links between customer and company. A case analysis was used, selecting a relevant organization in the wholesale sector. No direct use of the community strategic relationship was found in this organization, although an interest in incorporating it was observed. This was assessed through sources of evidence such as interviews, the company's financial reports, documents provided by the organization, its website and information on the wholesale trade, together with a theoretical foundation to support the analysis. The company is advised to incorporate the community strategic relationship into its strategic plans; this would help remove market barriers and identify needs or new market niches.