911 results for Minimal sets
Abstract:
Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) based solely on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to ~54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash -17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radiometric geochronology is much more challenging than previously thought.
Abstract:
The software PanGet is a special tool for downloading multiple data sets from PANGAEA. It uses the PANGAEA data set ID, which is unique and part of the DOI. In a first step, a list of the IDs of the data sets to be downloaded must be created; there are two ways to define this individual collection of data sets. Based on the ID list, the tool then downloads the data sets. Failed downloads are written to the file *_failed.txt. The functionality of PanGet is also part of the programs Pan2Applic (choose File > Download PANGAEA datasets...) and PanTool2 (choose Basic tools > Download PANGAEA datasets...).
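The batch-download workflow that PanGet automates can be sketched in a few lines of Python. This is an illustrative sketch only: the tab-delimited export URL pattern, file naming, and error handling are assumptions, not PanGet's actual implementation.

```python
import urllib.request

# Assumed URL pattern for PANGAEA's tab-delimited text export; the real
# endpoint used by PanGet may differ.
BASE_URL = "https://doi.pangaea.de/10.1594/PANGAEA.{id}?format=textfile"

def dataset_url(dataset_id):
    """Build a download URL from a PANGAEA data set ID (the unique part of the DOI)."""
    return BASE_URL.format(id=dataset_id)

def download_all(id_list_file, out_dir="."):
    """Download every data set named in the ID list, one ID per line.

    IDs that fail to download are collected and, mirroring PanGet's
    behaviour, written to a *_failed.txt file next to the ID list.
    """
    with open(id_list_file) as f:
        ids = [line.strip() for line in f if line.strip()]
    failed = []
    for ds_id in ids:
        try:
            urllib.request.urlretrieve(dataset_url(ds_id), f"{out_dir}/{ds_id}.tab")
        except OSError:
            failed.append(ds_id)
    if failed:
        with open(id_list_file.rsplit(".", 1)[0] + "_failed.txt", "w") as f:
            f.write("\n".join(failed))
    return failed
```

A run would then reduce to `download_all("my_ids.txt")`, with any unreachable data sets listed in `my_ids_failed.txt`.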
Abstract:
Managing large medical image collections is an increasingly important and demanding issue in many hospitals and other medical settings. A huge amount of this information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to images and particular regions. A distinctive aspect of our work is that a single image can be divided so that the resulting sub-images can be stored and managed separately by different agents to improve performance in data access and processing. The system also offers the possibility of applying region-based operations and filters on images, facilitating image classification. These operations can be performed directly on the data structures in the database.
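A minimal sketch of what such a relational structure and image partitioning might look like, assuming a simple tiling scheme and round-robin assignment of sub-images to agents. All table, column, and agent names here are hypothetical illustrations, not taken from the paper.

```python
import sqlite3

# Hypothetical minimal schema: one row per image, sub-images tiled from it
# and assigned to different agents, plus region-level semantic annotations.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE image    (id INTEGER PRIMARY KEY, patient TEXT, width INT, height INT);
CREATE TABLE subimage (id INTEGER PRIMARY KEY, image_id INT REFERENCES image(id),
                       agent TEXT, x INT, y INT, w INT, h INT);
CREATE TABLE region   (id INTEGER PRIMARY KEY, subimage_id INT REFERENCES subimage(id),
                       label TEXT);
""")

def tile(image_id, width, height, tile_size, agents):
    """Split an image into tiles and assign them round-robin to agents."""
    n = 0
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            conn.execute(
                "INSERT INTO subimage (image_id, agent, x, y, w, h) "
                "VALUES (?, ?, ?, ?, ?, ?)",
                (image_id, agents[n % len(agents)], x, y,
                 min(tile_size, width - x), min(tile_size, height - y)))
            n += 1
    return n

conn.execute("INSERT INTO image VALUES (1, 'anon-001', 512, 512)")
tiles = tile(1, 512, 512, 256, ["agent-a", "agent-b"])
```

Because each agent owns disjoint sub-image rows, region-based queries (e.g. counting labelled regions per agent) run directly against the database, as the abstract describes.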
Abstract:
Trillas et al. (1999, Soft computing, 3 (4), 197–199) and Trillas and Cubillo (1999, On non-contradictory input/output couples in Zadeh's CRI, proceedings, 28–32) introduced the study of contradiction in the framework of fuzzy logic because of the significance of avoiding contradictory outputs in inference processes. Later, the study of contradiction in the framework of Atanassov's intuitionistic fuzzy sets (A-IFSs) was initiated by Cubillo and Castiñeira (2004, Contradiction in intuitionistic fuzzy sets, proceedings, 2180–2186). The axiomatic definition of contradiction measure was stated in Castiñeira and Cubillo (2009, International journal of intelligent systems, 24, 863–888). Likewise, the concept of continuity of these measures was formalized through several axioms. To be precise, they defined continuity when the sets 'are increasing', termed continuity from below, and continuity when the sets 'are decreasing', termed continuity from above. The aim of this paper is to provide some geometrical construction methods for obtaining contradiction measures in the framework of A-IFSs and to study which continuity properties these measures satisfy. Furthermore, we show the geometrical interpretations motivating the measures.
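For readers unfamiliar with the objects involved, the setting can be stated compactly. This is only a sketch of the standard A-IFS notation that the geometrical constructions operate on; the concrete measures themselves are in the cited papers.

```latex
% Atanassov intuitionistic fuzzy set on a universe X: each element gets a
% membership degree \mu_A(x) and a non-membership degree \nu_A(x), with
A = \{\, (x, \mu_A(x), \nu_A(x)) : x \in X \,\},
\qquad 0 \le \mu_A(x) + \nu_A(x) \le 1 .

% Every point (\mu_A(x), \nu_A(x)) therefore lies in the triangle
T = \{\, (u,v) \in [0,1]^2 : u + v \le 1 \,\},
% which is what makes *geometrical* constructions natural: a contradiction
% measure can be built from the region of T occupied by the image
% \{ (\mu_A(x), \nu_A(x)) : x \in X \} of the set A.
```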
Abstract:
In this paper, we commence the study of the so-called supplementarity measures. They are introduced axiomatically and are then related to incompatibility measures by antonyms. To do this, we first establish what we mean by an antonymous measure. We then prove that, under certain conditions, supplementarity and incompatibility measures are antonymous. In addition, with the aim of constructing antonymous measures, we introduce the concept of involution on the set made up of all ordered pairs of fuzzy sets. Finally, we obtain some antonymous supplementarity measures from incompatibility measures by means of involutions.
Abstract:
In recent years, applications in domains such as telecommunications, network security or large-scale sensor networks have shown the limits of the traditional store-then-process paradigm. In this context, Stream Processing Engines emerged as a candidate solution for all these applications demanding high processing capacity with low processing latency guarantees. With Stream Processing Engines, data streams are not persisted but rather processed on the fly, producing results continuously. Current Stream Processing Engines, either centralized or distributed, do not scale with the input load due to single-node bottlenecks. Moreover, they are based on static configurations that lead to either under- or over-provisioning. This Ph.D. thesis discusses StreamCloud, an elastic parallel-distributed stream processing engine that enables the processing of large data stream volumes. StreamCloud minimizes the distribution and parallelization overhead by introducing novel techniques that split queries into parallel subqueries and allocate them to independent sets of nodes. Moreover, StreamCloud's elastic and dynamic load balancing protocols enable effective adjustment of resources depending on the incoming load. Together with the parallelization and elasticity techniques, StreamCloud defines a novel fault tolerance protocol that introduces minimal overhead while providing fast recovery. StreamCloud has been fully implemented and evaluated using several real-world applications such as fraud detection or network traffic analysis applications. The evaluation, conducted on a cluster with more than 300 cores, demonstrates the high scalability of StreamCloud and the effectiveness of its elasticity and fault tolerance mechanisms.
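The core idea behind splitting a query into parallel subqueries can be illustrated with a simple hash-partitioning sketch: tuples of an input stream are routed by key so that each node of a subquery's node set sees a disjoint key range. This is my own minimal illustration of the general technique; the abstract does not disclose StreamCloud's actual routing protocol, and all names below are hypothetical.

```python
from collections import defaultdict

def partition(stream, nodes, key):
    """Hash-partition the tuples of a stream across the nodes running one
    parallel subquery, so that all tuples with the same key are processed
    by the same node (a prerequisite for stateful operators such as
    per-key aggregates)."""
    buckets = defaultdict(list)
    for tup in stream:
        node = nodes[hash(key(tup)) % len(nodes)]
        buckets[node].append(tup)
    return buckets

# e.g. a fraud-detection subquery keyed by account id
calls = [("acct1", 9.5), ("acct2", 1.0), ("acct1", 3.2), ("acct3", 7.7)]
out = partition(calls, ["node-0", "node-1"], key=lambda t: t[0])
```

Elasticity then amounts to changing the node list and migrating the affected key ranges, which is where the load-balancing protocols mentioned above come in.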
Abstract:
The study of k-sets is a very relevant topic in the research area of computational geometry. In particular, the study of the maximum and minimum number of k-sets in sets of points of the plane in general position has been developed at great length in the literature. With respect to the maximum number of k-sets, lower bounds for this maximum have been provided by Erdős et al., Edelsbrunner and Welzl, and later by Tóth; Dey also stated an upper bound for this maximum number of k-sets. With respect to the minimum number of k-sets, this has been determined by Erdős et al. and, independently, by Lovász et al. In this paper the authors give an example of a set of n points in the plane in general position (no three collinear) in which the minimum number of points that can take part in at least one k-set is attained for every k with 1 ≤ k < n/2. The authors also extend the result of Erdős et al., 1973, about the minimum number of points in general position that can take part in a k-set to sets of n points not necessarily in general position. This work thus complements the classic works mentioned above.
Abstract:
In this work, a new two-dimensional optics design method is proposed that enables the coupling of three ray sets with two lens surfaces. The method is especially important for optical systems designed for a wide field of view and with clearly separated optical surfaces. Fermat's principle is used to deduce a set of functional differential equations fully describing the entire optical system. The presented general analytic solution makes it possible to calculate the lens profiles. Ray tracing results for the calculated 15th-order Taylor polynomials describing the lens profiles demonstrate excellent imaging performance and the versatility of this new analytic design method.
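The condition from which such functional differential equations are derived can be stated generically: Fermat's principle requires every ray of a coupled ray set to accumulate the same optical path length between its wavefronts. The notation below is an illustrative sketch, not the paper's own formulation.

```latex
% For every ray of the i-th coupled ray set, the optical path length from
% its source point S_i through the two (unknown) lens profiles to its
% target point T_i is a constant C_i:
n_0 \,\lvert \mathbf{P}_1 - \mathbf{S}_i \rvert
  + n_L \,\lvert \mathbf{P}_2 - \mathbf{P}_1 \rvert
  + n_0 \,\lvert \mathbf{T}_i - \mathbf{P}_2 \rvert
  = C_i , \qquad i = 1, 2, 3,
% where P_1, P_2 are the intersection points of the ray with the first and
% second lens profile, and n_0, n_L are the ambient and lens refractive
% indices. Imposing this for three ray sets with only two unknown profiles
% couples the profiles and yields functional differential equations for them.
```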
Abstract:
The two-dimensional analytic optics design method presented in a previous paper [Opt. Express 20, 5576–5585 (2012)] is extended in this work to the three-dimensional case, enabling the coupling of three ray sets with two free-form lens surfaces. Fermat’s principle is used to deduce additional sets of functional differential equations which make it possible to calculate the lens surfaces. Ray tracing simulations demonstrate the excellent imaging performance of the resulting free-form lenses described by more than 100 coefficients.