895 results for Concurrent Java components
Abstract:
BACKGROUND: Clinical disorders often share common symptoms and aetiological factors. Bifactor models acknowledge the role of an underlying general distress component and more specific sub-domains of psychopathology, which specify the unique components of disorders over and above a general factor. METHODS: A bifactor model jointly calibrated data on subjective distress from The Mood and Feelings Questionnaire and the Revised Children's Manifest Anxiety Scale. The bifactor model encompassed a general distress factor and specific factors for (a) hopelessness-suicidal ideation, (b) generalised worrying and (c) restlessness-fatigue at age 14, which were related to lifetime clinical diagnoses established by interviews at age 14 (concurrent validity) and to current diagnoses at age 17 (predictive validity) in a British population sample of 1159 adolescents. RESULTS: Diagnostic interviews confirmed the validity of a symptom-level bifactor model. The underlying general distress factor was a powerful but non-specific predictor of affective, anxiety and behaviour disorders. The specific factors for hopelessness-suicidal ideation and generalised worrying contributed to predictive specificity. Hopelessness-suicidal ideation predicted concurrent and future affective disorder; generalised worrying predicted concurrent and future anxiety, specifically concurrent generalised anxiety disorders. Generalised worrying was negatively associated with behaviour disorders. LIMITATIONS: The analyses of gender differences and the prediction of specific disorders were limited by the low frequency of disorders other than depression. CONCLUSIONS: The bifactor model was able to differentiate concurrent clinical diagnoses and predict future ones. This can inform the development of targeted as well as non-specific interventions for the prevention and treatment of different disorders.
Abstract:
We present paleomagnetic data from basaltic pillow and lava flows drilled at four Ocean Drilling Program (ODP) Leg 192 sites through the Early Cretaceous (~120 Ma) Ontong Java Plateau (OJP). Altogether, 270 of 331 samples yielded well-defined characteristic remanent magnetization components, all of which have negative inclinations, i.e. normal polarity. Dividing the data into inclination groups, we obtain 5, 7, 14 and 15 independent inclination estimates for the four sites. Statistical analysis suggests that paleosecular variation has been sufficiently sampled and that site-mean inclinations therefore represent time-averaged fields. Of particular importance is the finding that all four site-mean inclinations are statistically indistinguishable, strongly supporting the indirect seismic observation, from the flat-lying sediments blanketing the OJP, that the studied basalts have suffered little or no tectonic disturbance since their emplacement. Moreover, the corresponding paleomagnetic paleolatitudes agree very well with paleomagnetic data from a previous ODP site (Site 807) drilled into the northern portion of the OJP. Two important conclusions can be drawn from the presented dataset: (i) the Leg 192 combined mean inclination (Inc. = -41.4°, N = 41, kappa = 66.0, alpha95 = 2.6°) is inconsistent with the Early Cretaceous part of the Pacific apparent polar wander path, indicating that previous paleomagnetic poles derived mainly from seamount magnetic anomaly modeling must be used with care; (ii) the Leg 192 paleomagnetic paleolatitude for the central OJP is ~20° north of the paleogeographic location calculated from Pacific hotspot tracks assuming the hotspots have remained fixed. The difference between the paleomagnetic and hotspot-calculated paleolatitudes cannot be explained by true polar wander estimates derived from other lithospheric plates; our results are therefore consistent with, and extend, recent paleomagnetic studies of younger hotspot features in the northern Pacific Ocean that suggest Late Cretaceous to Eocene motion of Pacific hotspots.
Abstract:
The record of eolian deposition on the Ontong Java Plateau (OJP) since the Oligocene (approximately 33 Ma) has been investigated using dust grain size, dust flux, and dust mineralogy, with the goal of interpreting the paleoclimatology and paleometeorology of the western equatorial Pacific. Studies of modern dust dispersal in the Pacific have indicated that the equatorial regions receive contributions from both the Northern Hemisphere westerly winds and the equatorial easterlies; limited meteorological data suggest that low-altitude westerlies could also transport dust to OJP from proximal sources in the western Pacific. Previous studies have established the characteristics of the grain-size, flux, and mineralogy records of dust deposited in the North Pacific by the mid-latitude westerlies and in the eastern equatorial Pacific by the low-latitude easterlies since the Oligocene. By comparing the OJP records with these well-defined records of the mid-latitude westerlies and the low-latitude easterlies, the importance of multiple sources of dust to OJP can be recognized. OJP dust is composed of quartz, illite, kaolinite/chlorite, plagioclase feldspar, smectite, and heulandite. Mineral abundance profiles and principal components analysis (PCA) of the mineral abundance data have been used to identify assemblages of minerals that covary through all or part of the OJP record. Abundances of quartz, illite, and kaolinite/chlorite covary throughout the interval studied, defining a mineralogical assemblage supplied from Asia. Some plagioclase and smectite were also supplied as part of this assemblage during the late Miocene and Pliocene/Pleistocene, but other source areas have supplied significant amounts of plagioclase, smectite, and heulandite to OJP since the Oligocene. OJP dust is generally coarser than dust deposited by the Northern Hemisphere westerlies or the equatorial easterlies, and it accumulates 1-2 orders of magnitude more rapidly. These relationships indicate the importance of local sources for dust deposition at OJP. The grain-size and flux records of OJP dust do not exhibit most of the events observed in the corresponding records of the Northern Hemisphere westerlies or the equatorial easterlies, because these features are masked by the mixing of dust from several sources at OJP. The abundance record of the Asian dust assemblage at OJP, however, does contain most of the features characteristic of dust flux via the Northern Hemisphere westerlies, indicating that the paleoclimatic and paleometeorologic signal of a particular source area and wind system can be preserved in areas well beyond the region dominated by that source and those winds. Identifying such a signal requires "unmixing" the various dust assemblages, which can be accomplished by combining grain-size, flux, and mineralogic data.
Abstract:
This petrological study of the lower Aptian Oceanic Anoxic Event (OAE1a) focused on the nature of the organic-rich interval as well as the tuffaceous units above and below it. The volcaniclastic debris deposited just prior to the OAE1a is consistent with reactivation of volcanic centers across the Shatsky Rise, concurrent with volcanism on the Ontong Java Plateau. This reactivation may have been responsible for the sub-OAE1a unconformity. Soon after this volcanic pulse, anomalous amounts of organic matter accumulated on the rise, forming a black shale horizon. The complex textures in the organic-rich intervals suggest a history of periodic anoxia, overprinted by bioturbation. Components include pellets, radiolarians, and fish debris. The presence of carbonate-cemented radiolarite under the OAE1a intervals suggests that there has been large-scale remobilization of carbonate in the system, which in turn may explain the absence of calcareous microfossils in the section. The volcanic debris in the overlying tuffaceous interval differs in that it is significantly epiclastic and glauconitic. It was likely derived from an emergent volcanic edifice.
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While powerful resource analysis frameworks for object-oriented programs existed before this thesis, advanced aspects affecting the efficiency, the accuracy and the reliability of the analysis results still need to be investigated further. This thesis tackles this need from four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses (in addition to object fields), while the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development programs are constantly modified, but most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change to the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this end, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on developing a formal framework for verifying the resource guarantees obtained by the analyzers, instead of verifying the tools themselves. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code: COSTA derives upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that this cooperation between the tools can be used to automatically produce verified resource guarantees.
(4) Distribution and concurrency are today mainstream. Concurrent objects form a well established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
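The thesis itself targets a formal concurrent-objects language and the COSTA analyzer; the following is only a minimal Java sketch of the concurrency model the abstract describes, objects as concurrency units communicating via asynchronous method calls, with all class and method names (Counter, incrementAsync, Demo) invented for illustration:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: each "concurrent object" owns a single-threaded
// executor serving as its task queue; asynchronous method calls return
// futures instead of blocking the caller.
final class Counter {
    private final ExecutorService self = Executors.newSingleThreadExecutor();
    private int value = 0; // touched only by this object's own thread

    CompletableFuture<Integer> incrementAsync() {
        return CompletableFuture.supplyAsync(() -> ++value, self);
    }

    void shutdown() { self.shutdown(); }
}

public class Demo {
    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        // Calls on distinct objects run concurrently; calls on the same
        // object are serialized by its executor, so no locks are needed.
        int sum = a.incrementAsync()
                   .thenCombine(b.incrementAsync(), Integer::sum)
                   .join();
        System.out.println(sum); // prints 2
        a.shutdown();
        b.shutdown();
    }
}
```
Because each object processes one call at a time, a per-component cost analysis can attribute the work queued on each executor to that component separately, which is the intuition behind keeping the costs of distributed components apart.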
Abstract:
We present an undergraduate course on concurrent programming where formal models are used at different stages of the learning process. The main practical difference from other approaches lies in the fact that the ability to develop correct concurrent software relies on a systematic transformation of formal models of inter-process interaction (so-called shared resources), rather than on the specific constructs of some programming language. Using a resource-centric rather than a language-centric approach has benefits for both teachers and students. Besides the obvious advantage of being independent of the programming language, the models help in the early validation of concurrent software designs, provide students and teachers with a lingua franca that greatly simplifies communication in the classroom and during supervision, and help in the automatic generation of tests for the practical assignments. This method has been in use, with slight variations, for some 15 years, surviving changes in the programming language and course length. In this article, we describe the components and structure of the current incarnation of the course, which uses Java as the target language, and some tools used to support our method. We provide a detailed description of the different outcomes that the model-driven approach delivers (validation of the initial design, automatic generation of tests, and mechanical generation of code) from a teaching perspective. A critical discussion of the perceived advantages and risks of our approach follows, including some proposals on how these risks can be minimized. We include a statistical analysis showing that our method has a positive impact on students' ability to understand concurrency and to generate correct code.
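The article's shared resources are formal specifications from which code is later derived; purely as an illustration of the kind of Java artifact such a model might be transformed into, here is a classic monitor-style shared resource, a bounded buffer (the class name and capacity handling are not taken from the course materials):

```java
// Hypothetical monitor-style shared resource: a bounded buffer.
// Invariant: 0 <= count <= items.length; callers block until their
// precondition (free space for put, available data for take) holds.
public class BoundedBuffer<T> {
    private final Object[] items;
    private int head = 0, count = 0;

    public BoundedBuffer(int capacity) { items = new Object[capacity]; }

    public synchronized void put(T item) throws InterruptedException {
        while (count == items.length) wait();   // await free space
        items[(head + count) % items.length] = item;
        count++;
        notifyAll();                            // wake waiting consumers
    }

    @SuppressWarnings("unchecked")
    public synchronized T take() throws InterruptedException {
        while (count == 0) wait();              // await available data
        T item = (T) items[head];
        head = (head + 1) % items.length;
        count--;
        notifyAll();                            // wake waiting producers
        return item;
    }
}
```
The while-loop re-check of each precondition after wait() is exactly the kind of property a formal resource model lets students validate before writing any Java.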
Abstract:
The commonly accepted approach to specifying libraries of concurrent algorithms is library abstraction. Its idea is to relate a library to another one that abstracts away from the details of its implementation and is simpler to reason about. A library abstraction relation has to validate the Abstraction Theorem: while proving a property of the client of a concurrent library, the library can be soundly replaced with its abstract implementation. Typically a library abstraction relation, such as linearizability, assumes complete information hiding between a library and its client, which prevents them from communicating through shared memory. However, such communication may be used in a program, and the correctness of interactions on shared memory depends on an implicit contract between the library and the client. In this work we approach library abstraction without any assumptions about information hiding. To be able to formulate the contract between the components of the program, we augment the machine states of the program with two abstract states, the views of the client and of the library. This enables formalising the contract as internal safety, which requires each component to preserve the other's view whenever one of its commands is executed. We define library abstraction as a correspondence between the possible uses of a concrete and an abstract library. For our library abstraction relation, and for traces of a program whose components follow their contract, we prove an Abstraction Theorem.
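The paper's contribution is metatheory rather than code; as a loose Java illustration of the basic abstraction idea (all names invented): a client verified against a simple coarse-grained library can, by an Abstraction Theorem, soundly run against a concrete implementation related to it by the abstraction relation.

```java
import java.util.concurrent.atomic.AtomicInteger;

interface Counter { int increment(); }

// "Abstract" library: coarse-grained and trivially correct under a single
// lock; this is the version a client proof would reason about.
class AbstractCounter implements Counter {
    private int value = 0;
    public synchronized int increment() { return ++value; }
}

// "Concrete" library: lock-free. If it is related to AbstractCounter by a
// library abstraction relation (e.g. linearizability), the Abstraction
// Theorem lets a client verified against AbstractCounter soundly use
// ConcreteCounter instead.
class ConcreteCounter implements Counter {
    private final AtomicInteger value = new AtomicInteger();
    public int increment() { return value.incrementAndGet(); }
}
```
The paper's point is precisely that this substitution step becomes subtle once client and library also share memory, which classical linearizability rules out.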
Abstract:
We define a language and a predicative semantics to model concurrent real-time programs. We consider different communication paradigms between the concurrent components of a program: communication via shared variables and asynchronous message passing (for different models of channels). The semantics is the basis for a refinement calculus to derive machine-independent concurrent real-time programs from specifications. We give some examples of refinement laws that deal with concurrency.
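The paper defines these paradigms in a formal language with a predicative semantics; the following Java fragment is only an informal analogue of the two communication styles it considers, shared variables and asynchronous message passing over a channel (the names Paradigms, sharedFlag and channel are invented):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Paradigms {
    // Paradigm 1: communication via a shared variable; volatile makes the
    // sender's write visible to the receiver's read.
    static volatile int sharedFlag = 0;

    // Paradigm 2: asynchronous message passing over an unbounded,
    // order-preserving channel (one of several possible channel models).
    static final BlockingQueue<String> channel = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        Thread sender = new Thread(() -> {
            sharedFlag = 1;             // write the shared variable
            channel.offer("hello");     // asynchronous (non-blocking) send
        });
        sender.start();

        String msg = channel.take();    // blocking receive
        System.out.println(msg + ", flag = " + sharedFlag);
        sender.join();
    }
}
```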
Abstract:
Keyword identification in one of two simultaneous sentences is improved when the sentences differ in F0, particularly when they are almost continuously voiced. Sentences of this kind were recorded, monotonised using PSOLA, and re-synthesised to give a range of harmonic ΔF0s (0, 1, 3, and 10 semitones). They were additionally re-synthesised by LPC with the LPC residual frequency shifted by 25% of F0, to give excitation with inharmonic but regularly spaced components. Perceptual identification of frequency-shifted sentences showed a similar large improvement with nominal ΔF0 as seen for harmonic sentences, although overall performance was about 10% poorer. We compared performance with that of two autocorrelation-based computational models comprising four stages: (i) peripheral frequency selectivity and half-wave rectification; (ii) within-channel periodicity extraction; (iii) identification of the two major peaks in the summary autocorrelation function (SACF); (iv) a template-based approach to speech recognition using dynamic time warping. One model sampled the correlogram at the target-F0 period and performed spectral matching; the other deselected channels dominated by the interferer and performed matching on the short-lag portion of the residual SACF. Both models reproduced the monotonic increase observed in human performance with increasing ΔF0 for the harmonic stimuli, but not for the frequency-shifted stimuli. A revised version of the spectral-matching model, which groups patterns of periodicity that lie on a curve in the frequency-delay plane, showed a closer match to the perceptual data for frequency-shifted sentences. The results extend the range of phenomena originally attributed to harmonic processing to grouping by common spectral pattern.
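As a simplified sketch of stages (ii)-(iii) of the models described above, within-channel autocorrelation summed into an SACF whose two largest peaks are taken as candidate voice periods, here is a self-contained Java fragment; the peripheral filterbank, rectification and recognition stages are omitted, and all names and parameters are invented:

```java
public class Sacf {
    // Stage (ii): raw autocorrelation of one channel's output (assumed
    // already band-pass filtered and half-wave rectified by stage (i)),
    // for lags 0..maxLag in samples.
    static double[] autocorrelate(double[] x, int maxLag) {
        double[] ac = new double[maxLag + 1];
        for (int lag = 0; lag <= maxLag; lag++)
            for (int i = 0; i + lag < x.length; i++)
                ac[lag] += x[i] * x[i + lag];
        return ac;
    }

    // Stage (iii): sum the channel ACFs into the SACF and return the lags
    // of its two largest local maxima, taken as candidate F0 periods of
    // the two competing voices (-1 if fewer than two peaks are found).
    static int[] twoLargestPeaks(double[][] channels, int maxLag) {
        double[] sacf = new double[maxLag + 1];
        for (double[] ch : channels) {
            double[] ac = autocorrelate(ch, maxLag);
            for (int lag = 0; lag <= maxLag; lag++) sacf[lag] += ac[lag];
        }
        int best = -1, second = -1;
        for (int lag = 2; lag < maxLag; lag++) {
            boolean isPeak = sacf[lag] > sacf[lag - 1] && sacf[lag] >= sacf[lag + 1];
            if (!isPeak) continue;
            if (best < 0 || sacf[lag] > sacf[best]) { second = best; best = lag; }
            else if (second < 0 || sacf[lag] > sacf[second]) { second = lag; }
        }
        return new int[]{best, second};
    }
}
```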
Abstract:
A sudden increase in the amplitude of a component often causes its segregation from a complex tone, and shorter rise times enhance this effect. We explored whether this also occurs in implant listeners (n = 8). Condition 1 used a 3.5-s “complex tone” comprising concurrent stimulation on five electrodes distributed across the array of the Nucleus CI24 implant. For each listener, the baseline stimulus level on each electrode was set at 50% of the dynamic range (DR). Two 1-s increments of 12.5%, 25%, or 50% DR were introduced in succession on adjacent electrodes within the “inner” three of those activated. Both increments had rise and fall times of 30 and 970 ms, or vice versa. Listeners reported which increment was higher in pitch. Some listeners performed above chance for all increment sizes, but only for 50% increments did all listeners perform above chance. No significant effect of rise time was found. Condition 2 replaced amplitude increments with decrements. Only three listeners performed above chance even for 50% decrements. One exceptional listener performed well for 50% decrements with fall and rise times of 970 and 30 ms but around chance for fall and rise times of 30 and 970 ms, indicating successful discrimination based on a sudden rise back to baseline stimulation. Overall, the results suggest that implant listeners can use amplitude changes against a constant background to pick out components from a complex, but generally these changes must be large compared with those required in normal hearing. For increments, performance depended mainly on above-baseline stimulation of the target electrodes, not on rise time. With one exception, performance for decrements was typically very poor.
Abstract:
A novel two-box model for the joint compensation of nonlinear distortion introduced by both the in-phase/quadrature modulator and the power amplifier is proposed for concurrent dual-band wireless transmitters. Compensation of the nonlinear distortion is accomplished in two phases, which are identified separately. It is shown that the complexity of the digital predistortion is reduced. The performance of the proposed model is evaluated in terms of ACPR, EVM and NMSE improvements using 1.4 MHz LTE and WCDMA signals.
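The abstract does not spell out the boxes' internals; as a generic, heavily simplified illustration of one common ingredient of such schemes, a memoryless complex polynomial predistorter applied per band, here is a hypothetical Java sketch (the polynomial form and all names are assumptions, the coefficients would in practice be identified from transmitter measurements, and records require Java 16+):

```java
// Minimal complex arithmetic for the sketch (requires Java 16+ records).
record Complex(double re, double im) {
    Complex times(Complex o) {
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }
    Complex scale(double s) { return new Complex(re * s, im * s); }
    Complex plus(Complex o) { return new Complex(re + o.re, im + o.im); }
    double magSquared() { return re * re + im * im; }
}

public class Predistorter {
    // Hypothetical memoryless odd-order polynomial predistorter applied
    // sample by sample to one band's baseband signal:
    //   y = sum_k c[k] * x * |x|^(2k)
    // The coefficients c would be identified from transmitter measurements.
    static Complex predistort(Complex x, Complex[] c) {
        Complex y = new Complex(0, 0);
        double gain = 1.0;              // |x|^(2k), starting at k = 0
        for (Complex ck : c) {
            y = y.plus(x.times(ck).scale(gain));
            gain *= x.magSquared();
        }
        return y;
    }
}
```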