933 results for abstract data type
Abstract:
A spatial object consists of data assigned to points in a space. Spatial objects, such as memory states and three-dimensional graphical scenes, are diverse and ubiquitous in computing. We develop a general theory of spatial objects by modelling abstract data types of spatial objects as topological algebras of functions. One useful algebra is that of continuous functions, with operations derived from operations on space and data, and equipped with the compact-open topology. Terms are used as abstract syntax for defining spatial objects, and conditional equational specifications are used for reasoning. We pose a completeness problem: Given a selection of operations on spatial objects, do the terms approximate all the spatial objects to arbitrary accuracy? We give some general methods for solving the problem and consider their application to spatial objects with real-number attributes. © 2011 British Computer Society.
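For reference, the compact-open topology mentioned above is the standard one on a function space C(X, A); this definition is textbook material rather than something taken from the paper itself:

    % Subbasic open sets of the compact-open topology on C(X, A):
    % for each compact K in X and each open U in A,
    \[
      V(K, U) \;=\; \{\, f \in C(X, A) : f(K) \subseteq U \,\}.
    \]
    % Open sets are unions of finite intersections of such V(K, U).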
Extracting S-matrix poles for resonances from numerical scattering data: Type-II Padé reconstruction
Abstract:
We present a FORTRAN 77 code for evaluation of resonance pole positions and residues of a numerical scattering matrix element in the complex energy (CE) as well as in the complex angular momentum (CAM) planes. Analytical continuation of the S-matrix element is performed by constructing a type-II Padé approximant from given physical values (Bessis et al. (1994) [42]; Vrinceanu et al. (2000) [24]; Sokolovski and Msezane (2004) [23]). The algorithm involves iterative 'preconditioning' of the numerical data by extracting its rapidly oscillating potential phase component. The code has the capability of adding non-analytical noise to the numerical data in order to select 'true' physical poles, investigate their stability and evaluate the accuracy of the reconstruction. It has an option of employing the multiple-precision (MPFUN) package (Bailey (1993) [45]) developed by D.H. Bailey wherever double-precision calculations fail due to the large number of input partial waves (energies) involved. The code has been successfully tested on several models, as well as the F + H2 -> HF + H, F + HD -> HF + D, Cl + HCl -> ClH + Cl and H + D2 -> HD + D reactions. Some detailed examples are given in the text.
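As a rough illustration of the underlying idea (a minimal Python sketch, not the published FORTRAN 77 code; the function name and degree parameters are invented for this example), a type-II Padé approximant is a rational function fitted to sampled S-matrix values, and the roots of its denominator give candidate pole positions:

    import numpy as np

    def pade_poles(E, S, n_num=4, n_den=4):
        """Fit a rational approximant P(E)/Q(E) to sampled S-matrix
        values via the linearized conditions P(E_i) - S_i*Q(E_i) = 0,
        then return the poles (roots of Q) in the complex E plane."""
        E = np.asarray(E, dtype=complex)
        S = np.asarray(S, dtype=complex)
        # Fix the leading denominator coefficient to 1 to remove the
        # overall scale ambiguity; solve for the rest by least squares.
        rows = [np.concatenate([Ei ** np.arange(n_num + 1),      # P coefficients
                                -Si * Ei ** np.arange(n_den)])   # low-order Q coefficients
                for Ei, Si in zip(E, S)]
        coef, *_ = np.linalg.lstsq(np.array(rows), S * E ** n_den, rcond=None)
        q_poly = np.concatenate([[1.0], coef[n_num + 1:][::-1]])  # highest degree first
        return np.roots(q_poly)

In the spirit of the stability test described above, only poles that remain in place as the degrees and the added noise level are varied would be retained as 'true' resonance candidates.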
Abstract:
In this paper we continue our investigation into the development of computational-science software based on the identification and formal specification of Abstract Data Types (ADTs) and their implementation in Fortran 90. In particular, we consider the consequences of using pointers when implementing a formally specified ADT in Fortran 90. Our aim is to highlight the resulting conflict between the goal of information hiding, which is central to the ADT methodology, and the space efficiency of the implementation. We show that the issue of storage recovery cannot be avoided by the ADT user, and present a range of implementations of a simple ADT to illustrate various approaches to satisfactory storage management. Finally, we propose a set of guidelines for implementing ADTs using pointers in Fortran 90. These guidelines offer a graceful way to provide disposal operations in Fortran 90. Such an approach is desirable since Fortran 90 does not provide the automatic garbage collection offered by many object-oriented languages, including Eiffel, Java, Smalltalk, and Simula.
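As a sketch of the interface pattern these guidelines point toward (illustrated here in Python with an invented Stack class, since the paper's Fortran 90 code is not reproduced in the abstract), the key point is that information hiding forces the disposal operation to be part of the ADT's public interface:

    class Stack:
        """Opaque stack ADT: the representation (a list here, standing
        in for Fortran 90 pointer components) is hidden from clients."""
        def __init__(self):
            self._items = []      # hidden representation

        def push(self, x):
            self._items.append(x)

        def pop(self):
            return self._items.pop()

        def dispose(self):
            # Because clients cannot see the representation, only the
            # ADT itself can release its storage; in a language without
            # garbage collection (such as Fortran 90), dispose must
            # therefore be a public operation.
            self._items = None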
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
E-learning represents an innovation in teaching that has arisen from the development of new technologies. It is based on a set of educational resources, including, among others, multimedia or interactive contents accessible through Internet or intranet networks. A whole spectrum of tools and services supports e-learning; some of them include self-assessment and automated correction of test-like exercises. However, this sort of exercise is strongly constrained by its nature: fixed contents and fixed correct answers limit the ways in which teachers may evaluate students. In this paper we propose a new engine that allows validating complex exercises in the area of Data Structures and Algorithms. Correct solutions to exercises do not depend only on how well the code executes, or on whether the results are as expected; a set of criteria on algorithmic complexity and on correct use of the data structures is also required. The engine presented in this work covers a wide set of exercises with these characteristics, allowing teachers to establish the set of requirements for a solution, and students to obtain a measure of the quality of their solution in the same terms that are later required in exams.
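As an illustration of the kind of check such an engine might perform (a hypothetical Python sketch; the paper's engine is not described at this level of detail, and all names here are invented), a grader can validate both the output of a student's solution and a bound on the operations it performs on the data structure:

    class CountingStack:
        """Stack that counts operations, so a grader can check not
        just results but how the data structure was used."""
        def __init__(self):
            self.items, self.ops = [], 0

        def push(self, x):
            self.ops += 1
            self.items.append(x)

        def pop(self):
            self.ops += 1
            return self.items.pop()

    def grade(solution, tests, max_ops):
        """Pass only if the output is correct AND the operation count
        stays within the complexity budget set by the teacher."""
        for inp, expected in tests:
            s = CountingStack()
            if solution(s, inp) != expected or s.ops > max_ops:
                return False
        return True

    # Example: reversing a list with a stack in O(n) stack operations.
    def reverse_with_stack(s, xs):
        for x in xs:
            s.push(x)
        return [s.pop() for _ in xs]

    assert grade(reverse_with_stack, [([1, 2, 3], [3, 2, 1])], max_ops=6)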
Abstract:
Concurrent data types are concurrent implementations of classical data abstractions, specifically designed to exploit the great deal of parallelism available in modern multiprocessor and multi-core architectures. The correct manipulation of concurrent data types is essential for the overall correctness of the software systems built using them. A major difficulty in designing and verifying concurrent data types arises from the need to reason about any number of threads invoking the data type simultaneously, which requires considering parametrized systems. In this work we study the formal verification of temporal properties of parametrized concurrent systems, with a special focus on programs that manipulate concurrent data structures. The main difficulty in reasoning about concurrent parametrized systems comes from the combination of their inherently high concurrency and their manipulation of dynamic memory. This parametrized verification problem is very challenging, because it requires reasoning about complex concurrent data structures being accessed and modified by threads which simultaneously manipulate the heap using unstructured synchronization methods. In this work, we present a formal framework based on deductive methods which is capable of dealing with the verification of safety and liveness properties of concurrent parametrized systems that manipulate complex data structures. Our framework includes special proof rules and techniques adapted for parametrized systems, which work in collaboration with specialized decision procedures for complex data structures. A novel aspect of our framework is that it cleanly separates the analysis of the program control flow from the analysis of the data being manipulated. The program control flow is analyzed using deductive proof rules and verification techniques specifically designed for coping with parametrized systems. Starting from a concurrent program and a temporal specification, our techniques generate a finite collection of verification conditions whose validity entails the satisfaction of the temporal specification by any client system, regardless of the number of threads. The verification conditions correspond to the data manipulation. We study the design of specialized decision procedures to deal with these verification conditions fully automatically. We investigate decidable theories capable of describing rich properties of complex pointer-based data types such as stacks, queues, lists and skiplists. For each of these theories we present a decision procedure and a practical implementation on top of existing SMT solvers. These decision procedures are ultimately used for automatically verifying the verification conditions generated by our specialized parametrized verification techniques. Finally, we show how, using our framework, it is possible to prove not only safety but also liveness properties of concurrent versions of some mutual exclusion protocols and of programs that manipulate concurrent data structures. The approach we present in this work is very general and can be applied to verify a wide range of similar concurrent data types.
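To make the pipeline concrete (a hypothetical miniature using the Z3 SMT solver's Python bindings, not the decision procedures developed in the thesis), a generated verification condition is discharged by checking that its negation is unsatisfiable:

    from z3 import Ints, Implies, And, Not, Solver, unsat

    # Toy verification condition: if a shared counter is non-negative
    # and some thread increments it, it stays non-negative -- an
    # inductive-invariant check that holds regardless of how many
    # threads the parametrized system contains.
    c, c1 = Ints("c c1")
    vc = Implies(And(c >= 0, c1 == c + 1), c1 >= 0)

    s = Solver()
    s.add(Not(vc))            # the VC is valid iff its negation is unsat
    assert s.check() == unsat
    print("verification condition discharged")

The thesis's decision procedures target far richer theories (heaps, lists, skiplists), but the validity-as-unsatisfiability step is the same.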
Abstract:
Data in germplasm collections contain a mixture of data types: binary, multistate and quantitative. Given the multivariate nature of these data, the pattern analysis methods of classification and ordination have been identified as suitable techniques for statistically evaluating the available diversity. The proximity (or resemblance) measure, which is in part the basis of the complementary nature of classification and ordination techniques, is often specific to particular data types. The use of a combined resemblance matrix has an advantage over data type specific proximity measures. This measure accommodates the different data types without manipulating them to be of a specific type. Descriptors are partitioned into their data types and an appropriate proximity measure is used on each. The separate proximity matrices, after range standardisation, are added as a weighted average, and the combined resemblance matrix is then used for classification and ordination. Germplasm evaluation data for 831 accessions of groundnut (Arachis hypogaea L.) from the Australian Tropical Field Crops Genetic Resource Centre, Biloela, Queensland were examined. Data for four binary, five ordered multistate and seven quantitative descriptors have been documented. The interpretative value of different weightings (equal and unequal weighting of data types to obtain a combined resemblance matrix) was investigated by using principal co-ordinate analysis (ordination) and hierarchical cluster analysis. Equal weighting of data types was found to be more valuable for these data, as the results provided a greater insight into the patterns of variability available in the Australian groundnut germplasm collection. The complementary nature of pattern analysis techniques enables plant breeders to identify relevant accessions in relation to the descriptors which distinguish amongst them. This additional information may provide plant breeders with a more defined entry point into the germplasm collection for identifying sources of variability for their plant improvement program, thus improving the utilisation of germplasm resources.
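A minimal sketch of the combined-matrix construction (illustrative Python with invented names; simple matching is used for binary descriptors and range-standardised city-block distance for quantitative ones, which may differ from the paper's exact choices):

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage

    def combined_resemblance(binary, quant, w_bin=0.5, w_quant=0.5):
        """Weighted average of per-data-type distance matrices, each
        range-standardised to [0, 1] before combining."""
        d_bin = pdist(binary, metric="hamming")   # simple matching on 0/1 data
        rng = quant.max(axis=0) - quant.min(axis=0)
        d_quant = pdist(quant / np.where(rng == 0, 1, rng), metric="cityblock")
        d_quant /= quant.shape[1]                 # scale to [0, 1]
        return w_bin * d_bin + w_quant * d_quant

    # Hierarchical cluster analysis on the combined matrix:
    binary = np.random.randint(0, 2, size=(10, 4))  # 4 binary descriptors
    quant = np.random.rand(10, 7)                   # 7 quantitative descriptors
    tree = linkage(combined_resemblance(binary, quant), method="average")

Equal versus unequal weighting, as investigated in the study, corresponds to varying w_bin and w_quant.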
Abstract:
Background: Traffic offences have been considered an important predictor of crash involvement, and have often been used as a proxy safety variable for crashes. However, the association between crashes and offences has never been meta-analysed and the population effect size never established. Research is yet to determine the extent to which this relationship may be spuriously inflated through systematic measurement error, with obvious implications for researchers endeavouring to accurately identify salient factors predictive of crashes. Methodology and Principal Findings: Studies yielding a correlation between crashes and traffic offences were collated, and a meta-analysis of 144 effects drawn from 99 road safety studies was conducted. The potential impact of factors such as age, time period, crash and offence rates, crash severity and data type, sourced from either self-report surveys or archival records, was considered and discussed. After weighting for sample size, an average correlation of r = .18 was observed over the mean time period of 3.2 years. Evidence emerged suggesting the strength of this correlation is decreasing over time. Stronger correlations between crashes and offences were generally found in studies involving younger drivers. Consistent with common method variance effects, a within-country analysis found stronger effect sizes in self-reported data, even after controlling for crash means. Significance: The effectiveness of traffic offences as a proxy for crashes may be limited. Inclusion of elements such as independently validated crash and offence histories, or accurate measures of exposure to the road, would facilitate a better understanding of the factors that influence crash involvement.
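For reference, the standard way a sample-size-weighted average correlation is pooled in meta-analysis (a generic Python sketch, not necessarily the authors' exact procedure) uses Fisher's z transform with inverse-variance weights n - 3:

    import numpy as np

    def pooled_correlation(r, n):
        """Fixed-effect pooled correlation: transform each r to
        Fisher's z, average with weights n_i - 3 (inverse variance),
        then transform back."""
        r, n = np.asarray(r, float), np.asarray(n, float)
        z = np.arctanh(r)                      # Fisher's z transform
        z_bar = np.average(z, weights=n - 3)   # inverse-variance weights
        return np.tanh(z_bar)

    # e.g. three hypothetical studies with correlations .10, .20, .25:
    print(pooled_correlation([0.10, 0.20, 0.25], [500, 150, 80]))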
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Data were extracted from the U.S. Navy Fleet Numerical Oceanographic Center Master Oceanographic Observation Data Set for a 200 km to 300 km wide coastal strip on the west coast of the United States. These data were averaged for the September through February (winter) and March through August (summer) intervals. The resulting winter temperature anomaly values show the El Niño signal in the CCC [Coastal California Current] as positive temperature anomalies from the surface to at least 300 m.