933 results for abstract data type


Relevance:

30.00%

Publisher:

Abstract:

In this thesis, I designed and implemented a virtual machine (VM) for a monomorphic variant of Athena, a type-omega denotational proof language (DPL). This machine attempts to maintain the minimum state required to evaluate Athena phrases. The thesis also includes the design and implementation of a compiler for monomorphic Athena that targets the VM. Finally, it includes details of my implementation of a read-eval-print loop that glues together the VM core and the compiler to provide a full, user-accessible interface to monomorphic Athena. The Athena VM provides the same basis for DPLs that the SECD machine does for pure functional programming and the Warren Abstract Machine does for Prolog.
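A minimal sketch of the compile-then-evaluate structure described above, assuming hypothetical names (AthenaVM, compile_phrase, repl) and a toy arithmetic language standing in for Athena phrases; the thesis's actual instruction set and interfaces are not shown here.

```python
# Illustrative only: a toy stack VM plus a read-eval-print loop gluing a
# "compiler" to the VM core. Names and the instruction set are invented.

class AthenaVM:
    """Keeps only the state needed to evaluate one phrase: an operand stack."""
    def __init__(self):
        self.stack = []

    def run(self, code):
        for op, arg in code:
            if op == "PUSH":
                self.stack.append(arg)
            elif op == "ADD":
                b, a = self.stack.pop(), self.stack.pop()
                self.stack.append(a + b)
        return self.stack.pop()

def compile_phrase(phrase):
    # Stand-in compiler: turns "1 + 2" into VM instructions.
    a, _, b = phrase.split()
    return [("PUSH", int(a)), ("PUSH", int(b)), ("ADD", None)]

def repl(lines):
    vm = AthenaVM()
    for line in lines:                  # read
        code = compile_phrase(line)     # compile
        print(vm.run(code))             # eval + print

repl(["1 + 2", "3 + 4"])                # prints 3, then 7
```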

Relevance:

30.00%

Publisher:

Abstract:

The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration an unserviceable solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size. Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
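A back-of-the-envelope check of the figures quoted above; every input comes from the abstract itself, and nothing here reflects the actual simulator.

```python
# Cycle counts reported in the abstract.
null_thread_cycles = 66
typical_low, typical_high = 4 * null_thread_cycles, 5 * null_thread_cycles
print(typical_low, typical_high)        # 264 to 330 cycles for a typical thread

# "Over a factor of 1000 improvement" implies earlier migration schemes
# cost on the order of 66 * 1000 = 66,000 cycles or more per null thread.
print(null_thread_cycles * 1000)
```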

Relevance:

30.00%

Publisher:

Abstract:

Modeling and predicting co-occurrences of events is a fundamental problem of unsupervised learning. In this contribution we develop a statistical framework for analyzing co-occurrence data in a general setting where elementary observations are joint occurrences of pairs of abstract objects from two finite sets. The main challenge for statistical models in this context is to overcome the inherent data sparseness and to estimate the probabilities of pairs that were rarely observed, or even unobserved, in a given sample set. Moreover, it is often of considerable interest to extract grouping structure or to find a hierarchical data organization. A novel family of mixture models is proposed which explains the observed data by a finite number of shared aspects or clusters. This provides a common framework for statistical inference and structure discovery, and also includes several recently proposed models as special cases. Adopting the maximum likelihood principle, EM algorithms are derived to fit the model parameters. We develop improved versions of EM which largely avoid overfitting problems and overcome the inherent locality of EM-based optimization. Among the broad variety of possible applications, e.g., in information retrieval, natural language processing, data mining, and computer vision, we have chosen document retrieval, the statistical analysis of noun/adjective co-occurrence, and the unsupervised segmentation of textured images to test and evaluate the proposed algorithms.
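A minimal EM sketch for a shared-aspect mixture of co-occurrence counts, in the spirit of the family described above: P(x, y) = Σ_a P(a) P(x|a) P(y|a). This is plain NumPy EM; the abstract's improved variants that curb overfitting are not included, and all names and dimensions are illustrative.

```python
import numpy as np

def aspect_model_em(N, K, iters=100, seed=0):
    """Fit P(a), P(x|a), P(y|a) to a count matrix N[i, j] with K aspects."""
    rng = np.random.default_rng(seed)
    I, J = N.shape
    p_a = np.full(K, 1.0 / K)
    p_x = rng.dirichlet(np.ones(I), size=K)      # rows of P(x|a)
    p_y = rng.dirichlet(np.ones(J), size=K)      # rows of P(y|a)
    for _ in range(iters):
        # E-step: posterior P(a | x_i, y_j), shape (K, I, J)
        post = p_a[:, None, None] * p_x[:, :, None] * p_y[:, None, :]
        post /= post.sum(axis=0, keepdims=True) + 1e-12
        # M-step: reweight the posteriors by the observed counts
        weighted = post * N[None, :, :]
        p_a = weighted.sum(axis=(1, 2))
        p_x = weighted.sum(axis=2) / (p_a[:, None] + 1e-12)
        p_y = weighted.sum(axis=1) / (p_a[:, None] + 1e-12)
        p_a /= p_a.sum()
    return p_a, p_x, p_y

N = np.array([[4, 1, 0], [3, 2, 0], [0, 1, 5]])  # toy co-occurrence counts
p_a, p_x, p_y = aspect_model_em(N, K=2)
joint = np.einsum("a,ai,aj->ij", p_a, p_x, p_y)  # smoothed P(x, y), no hard zeros
print(joint.round(3))
```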

Relevance:

30.00%

Publisher:

Abstract:

A version of Matheron’s discrete Gaussian model is applied to cell composition data. The examples are map patterns of felsic metavolcanics in two different areas. Q-Q plots of the model for cell values representing the proportion of a 10 km x 10 km cell area underlain by this rock type are approximately linear, and the line of best fit can be used to estimate the parameters of the model. It is also shown that the felsic metavolcanics in the Abitibi area of the Canadian Shield can be modeled as a fractal.
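An illustrative sketch of the Q-Q fitting step only: when model quantiles plot approximately linearly against sample quantiles, the intercept and slope of the line of best fit estimate the model's location and scale parameters. This is not Matheron's full discrete Gaussian machinery, and the cell proportions below are synthetic.

```python
import numpy as np
from scipy import stats

def qq_fit(cell_values):
    """Fit sample quantiles against normal model quantiles."""
    x = np.sort(cell_values)                        # empirical quantiles
    probs = (np.arange(len(x)) + 0.5) / len(x)      # plotting positions
    q = stats.norm.ppf(probs)                       # model quantiles
    slope, intercept, r, *_ = stats.linregress(q, x)
    return intercept, slope, r                      # location, scale, linearity

rng = np.random.default_rng(1)
props = rng.normal(0.2, 0.05, size=200).clip(0.0, 1.0)  # fake cell proportions
print(qq_fit(props))    # r close to 1 indicates an approximately linear Q-Q plot
```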

Relevance:

30.00%

Publisher:

Abstract:

Compositional data arise naturally from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis to methods based on the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper demonstrates work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes very powerful statistical facilities available at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example, that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods will be discussed.
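A sketch, in Python rather than in the R library described above, of the core pipeline such a library supports: close the compositions, apply a centred log-ratio (clr) transform, then run ordinary multivariate methods (here, PCA) on the transformed data. The "ceramic" compositions are simulated.

```python
import numpy as np

def clr(X):
    """Centred log-ratio transform of compositions in the rows of X."""
    X = X / X.sum(axis=1, keepdims=True)            # closure
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
oxides = rng.dirichlet([8, 4, 2, 1], size=50)       # fake 4-part compositions
Z = clr(oxides)

Zc = Z - Z.mean(axis=0)                             # PCA via SVD
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
scores = U * s                                      # component scores per artefact
print(scores[:3, :2])                               # first two PCs, first artefacts
```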

Relevance:

30.00%

Publisher:

Abstract:

"compositions" is a new R package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting, and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and the object-oriented programming paradigm of R. In this way, the functions called automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedrons, various compositional plots (boxplots, barplots, piecharts), and extensive graphical tools for principal components. Afterwards, proportion lines, straight lines, and ellipses in all geometries can be added to plots. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities, and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all necessary tools to add their own analysis routines. A complete example is included in the appendix.
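The "working in coordinates" principle in a Python sketch (not the package's own API): a composition is mapped to isometric log-ratio (ilr) coordinates, where ordinary Euclidean statistics apply; perturbation in the simplex then corresponds exactly to vector addition in coordinates.

```python
import numpy as np

def ilr(x):
    """Pivot-style ilr: coordinate i contrasts the geometric mean of the
    first i parts against part i + 1."""
    x = np.asarray(x, dtype=float)
    D = len(x)
    y = np.empty(D - 1)
    for i in range(1, D):
        g = np.exp(np.log(x[:i]).mean())    # geometric mean of first i parts
        y[i - 1] = np.sqrt(i / (i + 1)) * np.log(g / x[i])
    return y

a = np.array([0.2, 0.3, 0.5])
b = np.array([0.5, 0.25, 0.25])
pert = a * b / np.sum(a * b)                # perturbation a ⊕ b in the simplex
print(np.allclose(ilr(pert), ilr(a) + ilr(b)))   # True: ⊕ becomes + in coordinates
```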

Relevance:

30.00%

Publisher:

Abstract:

The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, and more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry, where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows one to point out the statistical laws able to generate the values and to govern their variability. The changes, compared, for example, with the mean values of the random variables assumed as models, or with other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, will be illustrated.
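A sketch of the indicator idea on invented data: a simplicial principal component defines a log-contrast (a zero-sum vector of clr coefficients), and each sample's score on it is a single value that can be compared against a baseline mean to flag anomalous waters. Nothing below comes from the Chiavenna Valley data.

```python
import numpy as np

def clr(X):
    L = np.log(X / X.sum(axis=1, keepdims=True))
    return L - L.mean(axis=1, keepdims=True)

rng = np.random.default_rng(2)
baseline = rng.dirichlet([10, 6, 3, 2, 1], size=100)  # fake reference waters
Z = clr(baseline)
Zc = Z - Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)
contrast = Vt[0]                    # first log-contrast (coefficients sum to ~0)

scores = Zc @ contrast              # baseline indicator values
new_sample = rng.dirichlet([2, 2, 3, 6, 10])          # a suspect sample
new_score = (clr(new_sample[None, :]) - Z.mean(axis=0)) @ contrast
print(new_score, scores.mean(), scores.std())   # flag if far from the baseline mean
```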

Relevance:

30.00%

Publisher:

Abstract:

The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases is the use of so-called ipsative items. Ipsative items require the respondent to make between-scale comparisons within each item; the selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted an equal total score, which each can distribute differently over the scales. This type of response format therefore yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline for when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year psychology students according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios, as sketched below. The addition of this second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
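A minimal sketch of the kind of comparison compositional techniques enable: because ipsative scores share a fixed total, only the ratios between scales carry information, and pairwise log-ratios put ipsative and normative scale scores on a comparable footing. The scores below are invented, not taken from the study.

```python
import numpy as np

ipsative = np.array([12.0, 8.0, 10.0])    # three scales, fixed total of 30
normative = np.array([34.0, 21.0, 27.0])  # same scales, free totals

def pairwise_logratios(x):
    """Matrix of ln(x_i / x_j) for all scale pairs."""
    return np.log(x[:, None] / x[None, :])

print(pairwise_logratios(ipsative))
print(pairwise_logratios(normative))      # compare the two ratio patterns
```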

Relevance:

30.00%

Publisher:

Abstract:

The self-organizing map (Kohonen 1997) is a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm involves the use of the Euclidean metric in the process of adaptation of the model vectors, rendering the methodology, in theory, incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem: 1. whether the conventional approach using the Euclidean metric can yield valid results with compositional data; 2. whether a modification of the conventional approach, replacing vectorial sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering), can converge to an adequate solution. Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data are pathological. Moreover, the conventional approach converges faster to a solution when the data are "well-behaved". Key words: self-organizing map; artificial neural networks; compositional data
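A one-step sketch of the modified update rule discussed above: the usual SOM adaptation w ← w + h(x − w) rewritten with the simplex operators, w ← w ⊕ (h ⊙ (x ⊖ w)), where ⊕ is perturbation and ⊙ is powering. Illustrative, not the authors' implementation.

```python
import numpy as np

def close(x):
    return x / x.sum()

def perturb(a, b):                         # a ⊕ b: componentwise product, reclosed
    return close(a * b)

def power(t, a):                           # t ⊙ a: componentwise power, reclosed
    return close(a ** t)

def simplex_som_update(w, x, h):
    diff = perturb(x, close(1.0 / w))      # x ⊖ w = x ⊕ w⁻¹
    return perturb(w, power(h, diff))

w = np.array([0.3, 0.3, 0.4])              # model vector (a composition)
x = np.array([0.6, 0.2, 0.2])              # data point (a composition)
print(simplex_som_update(w, x, 1.0))       # h = 1 lands exactly on x
print(simplex_som_update(w, x, 0.5))       # h = 0.5 moves w halfway along the geodesic
```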

Relevance:

30.00%

Publisher:

Abstract:

Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal components analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
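A sketch of one of the compared approaches, assuming each density has been discretised on a common grid: the grid evaluations are treated as a (high-dimensional) composition, clr-transformed, and then fed to ordinary PCA. The "income" densities below are synthetic log-normals.

```python
import numpy as np
from scipy import stats

grid = np.linspace(0.1, 10, 200)
densities = np.array([
    stats.lognorm.pdf(grid, s=sigma, scale=np.exp(mu))
    for mu, sigma in [(0.5, 0.4), (0.7, 0.5), (1.0, 0.3), (0.9, 0.6)]
])

P = densities / densities.sum(axis=1, keepdims=True)   # close each discretised density
L = np.log(P)
Z = L - L.mean(axis=1, keepdims=True)                  # clr per density

Zc = Z - Z.mean(axis=0)                                # functional PCA via SVD
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
print((s**2 / np.sum(s**2))[:2])    # share of variance in the first two components
```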

Relevance:

30.00%

Publisher:

Abstract:

In recent years, the use of every type of Digital Elevation Model has grown. LiDAR (Light Detection and Ranging) technology, based on the scanning of the territory by airborne laser telemeters, allows the construction of Digital Surface Models (DSMs) in an easy way, by simple data interpolation.
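A minimal sketch of the "simple data interpolation" step, assuming scattered (x, y, z) LiDAR returns are already in memory; a real survey would read them from a LAS/LAZ file rather than simulate them.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, size=(5000, 2))              # x, y positions in metres
z = 50 + 5 * np.sin(pts[:, 0] / 10) + rng.normal(0, 0.2, 5000)  # surface heights

xi = np.linspace(0, 100, 101)                          # 1 m raster grid
yi = np.linspace(0, 100, 101)
XI, YI = np.meshgrid(xi, yi)
dsm = griddata(pts, z, (XI, YI), method="linear")      # the interpolated DSM
print(dsm.shape, np.nanmean(dsm))                      # NaN outside the convex hull
```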

Relevance:

30.00%

Publisher:

Abstract:

This work describes the nature and acquisition sequence of partial (wh-) interrogatives in Catalan- and/or Spanish-speaking children, within a framework of analysis according to which the acquisition of linguistic structures is built up gradually, from concrete structures towards more abstract ones. The sample comprises 10 boys and girls drawn from longitudinal corpora, with ages ranging from 17 months to 3 years. The analysis considered the syntactic structure of the sentence, errors, interrogative pronouns and adverbs, and verb typology. The results show that the acquisition sequence passes through an initial stage characterised by stereotyped productions or formulas, during which only some interrogative particles appear, in very specific structures. Later, interrogation appears with other pronouns and adverbs and diversifies to other verbs; furthermore, no errors are observed in the syntactic construction. These results represent a point of difference with respect to previous studies of English.

Relevance:

30.00%

Publisher:

Abstract:

Within the Catalan education system, language immersion programmes have been, and remain, the main way for pupils whose family language is not Catalan to learn a new language, Catalan, without the learning process undermining either the development of their own language or their academic achievement. The success of language immersion in previous decades has frequently been used as one of the guiding arguments to justify the language policy followed in the schooling of foreign children. However, the results obtained by recent research do not appear to support this argument empirically. This article analyses those results and, starting from the Plan for Language and Social Cohesion (Plan para la Lengua y Cohesión Social) launched by the Department of Education of the Generalitat de Cataluña, sets out the challenges facing the Catalan education system within the new context created by the increase in cultural and linguistic diversity in present-day Catalan society.

Relevance:

30.00%

Publisher:

Abstract:

Wednesday 23rd April 2014
Speaker(s): Willi Hasselbring
Organiser: Leslie Carr
Time: 23/04/2014 11:00-11:50
Location: B32/3077
File size: 669 Mb

Abstract: For good scientific practice, it is important that research results can be properly checked by reviewers, and possibly repeated and extended by other researchers. This is of particular interest for "digital science", i.e. for in-silico experiments. In this talk, I'll discuss some issues of how software systems and services may contribute to good scientific practice. In particular, I'll present our PubFlow approach to automating publication workflows for scientific data. The PubFlow workflow management system is based on established technology. We integrate institutional repository systems (based on EPrints) and world data centers (in marine science). PubFlow collects provenance data automatically via our monitoring framework Kieker. Provenance information describes the origins and history of scientific data over its life cycle, and the process by which it arrived. Thus, provenance information is highly relevant to the repeatability and trustworthiness of scientific results. In our evaluation in marine science, we collaborate with the GEOMAR Helmholtz Centre for Ocean Research Kiel.

Relevance:

30.00%

Publisher:

Abstract:

Abstract: This seminar is a research discussion around a very interesting problem, which may be a good basis for a WAISfest theme. A little over a year ago, Professor Alan Dix came to tell us of his plans for a magnificent adventure: to walk all of the way round Wales -- 1000 miles ('Alan Walks Wales'). The walk was a personal journey, but also a technological and community one, exploring the needs of the walker and the people along the way. While walking he recorded his thoughts in an audio diary, took lots of photos, wrote a blog, and collected data from the tech instruments he was wearing. As a result, Alan has extensive quantitative data (bio-sensing and location) and qualitative data (text, images and some audio). There are challenges in analysing the individual kinds of data, including merging similar data streams, entity identification, time-series and textual data mining, dealing with provenance, and ontologies for paths and journeys. There are also challenges for author and third-party annotation, for linking the datasets, and for visualising the merged narrative or facets of it.