914 results for Multiple-scale processing
Abstract:
The deformation behavior of atomically clean, nanometer-sized tungsten/gold contacts was studied at room temperature in ultra-high vacuum. An instrument that combines atomic force microscopy (AFM), scanning tunneling microscopy (STM), and field ion microscopy (FIM) into a single experimental apparatus was designed, constructed, and calibrated. A cross-hair force sensor with a spring constant of ~442 N/m was developed, and its motion was monitored during indentation experiments with a differential interferometer. Tungsten tips of controlled size (12.8 nm < tip radius < 21.6 nm) were first shaped and characterized using FIM and then indented into a Au(110) single crystal to depths ranging from 1.5 nm to 18 nm using the force sensor. Continuum mechanics models were found to be valid in predicting elastic deformation during initial contact and plastic zone depths despite our small size regime. Multiple discrete yielding events lasting < 1.5 ms were observed during the plastic deformation regime; at the yield points a maximum value for the principal shear stress was measured to be 5 ± 1 GPa. During tip withdrawal, "pop-out" events relating to material relaxation within the contact were observed. Adhesion between the tip and sample led to experimental signatures that suggest neck formation prior to the break of contact. STM images of indentation holes revealed various shapes that can be attributed to the (111)(110) crystallographic slip system in gold. FIM images of the tip after indentation showed no evidence of tip damage.
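The continuum model for the initial elastic contact is the classic Hertzian sphere-on-flat solution, which can be sketched numerically. The tip radius, indentation depth, and reduced modulus below are illustrative stand-ins, not values from the study; the 0.31 factor relating peak pressure to maximum principal shear is the standard Hertzian result for a Poisson ratio near 0.3.

```python
import math

def hertz_contact(R, delta, E_star):
    """Hertzian sphere-on-flat contact: returns load, contact radius,
    peak contact pressure, and the maximum principal shear stress
    (~0.31 * p0, reached just below the surface)."""
    a = math.sqrt(R * delta)                                # contact radius (m)
    F = (4.0 / 3.0) * E_star * math.sqrt(R) * delta ** 1.5  # load (N)
    p0 = 3.0 * F / (2.0 * math.pi * a * a)                  # peak pressure (Pa)
    tau_max = 0.31 * p0                                     # max shear (Pa)
    return F, a, p0, tau_max

# Illustrative numbers only: 15 nm tip, 2 nm elastic depth, E* ~ 100 GPa
F, a, p0, tau = hertz_contact(15e-9, 2e-9, 100e9)
print(f"F = {F * 1e9:.0f} nN, tau_max = {tau / 1e9:.1f} GPa")
```

With these assumed inputs the predicted maximum shear lands in the GPa range, the same order as the 5 ± 1 GPa yield-point value reported above.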
Abstract:
Bone marrow ablation, i.e., the complete sterilization of the active bone marrow, followed by bone marrow transplantation (BMT), is a common treatment of hematological malignancies. The use of targeted bone-seeking radiopharmaceuticals to selectively deliver radiation to the adjacent bone marrow cavities while sparing normal tissues is a promising technique. Current radiopharmaceutical treatment planning methods do not properly compensate for the patient-specific variable distribution of radioactive material within the skeleton. To improve the current method of internal dosimetry, novel methods for measuring the radiopharmaceutical distribution within the skeleton were developed. 99mTc-MDP was proven to be an adequate surrogate for measuring 166Ho-DOTMP skeletal uptake and biodistribution, allowing these measures to be obtained faster, safer, and with higher spatial resolution. This translates directly into better measurements of the radiation dose distribution within the bone marrow. The resulting bone marrow dose-volume histograms allow prediction of the patient disease response where conventional organ-scale dosimetry failed. They indicate that complete remission is only achieved when greater than 90% of the bone marrow receives at least 30 Gy. Comprehensive treatment planning requires combining target and non-target organ dosimetry. Organs in the urinary tract were of special concern. The kidney dose is primarily dependent upon the mean transit time of 166Ho-DOTMP through the kidney. Deconvolution analysis of renograms predicted a mean transit time of 2.6 minutes for 166Ho-DOTMP. The radiation dose to the urinary bladder wall is dependent upon numerous factors including patient hydration and void schedule. For beta-emitting isotopes such as 166Ho, reduction of the bladder wall dose is best accomplished through good patient hydration and ensuring a partially full bladder at the time of injection. 
Encouraging the patient to void frequently, or catheterizing the patient without irrigation, will not significantly reduce the bladder wall dose. The results from this work yield the most advanced treatment planning methodology currently available for bone marrow ablation therapy using radioisotopes. Treatments can be tailored specifically for each patient, including the addition of concomitant total body irradiation for patients with unfavorable dose distributions, to deliver a desired patient disease response while minimizing the dose or toxicity to non-target organs.
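The dose-volume criterion quoted above (complete remission only when more than 90% of the marrow receives at least 30 Gy) is a simple computation over a per-voxel marrow dose map. A minimal sketch, with a purely synthetic dose array standing in for a real dosimetry result:

```python
import numpy as np

def fraction_at_least(dose_gy, threshold_gy):
    """Fraction of marrow voxels receiving at least `threshold_gy`."""
    dose = np.asarray(dose_gy, dtype=float)
    return float((dose >= threshold_gy).mean())

def meets_ablation_criterion(dose_gy, threshold_gy=30.0, coverage=0.90):
    """The abstract's criterion: >90% of marrow volume gets >= 30 Gy."""
    return fraction_at_least(dose_gy, threshold_gy) > coverage

# Synthetic marrow dose map (Gy), illustrative only
rng = np.random.default_rng(0)
doses = rng.normal(loc=40.0, scale=5.0, size=10_000)
print(meets_ablation_criterion(doses))
```

The same `fraction_at_least` sweep over a range of thresholds produces the cumulative dose-volume histogram used for the response prediction.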
Abstract:
Ray (1998) developed measures of input- and output-oriented scale efficiency that can be computed directly from an estimated translog frontier production function. This note extends the earlier results from Ray (1998) to the multiple-output, multiple-input case.
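The flavor of such a measure can be illustrated in the single-input case (the note itself treats the multiple-output, multiple-input extension). For ln y = a + b ln x + (c/2)(ln x)² with c < 0, scale elasticity is ε(x) = b + c ln x, the most productive scale size solves ε = 1, and log average productivity falls off quadratically around it, giving SE(x) = exp((1 − ε)²/(2c)) ≤ 1. The coefficients below are illustrative:

```python
import math

def scale_elasticity(ln_x, b, c):
    """eps(x) = b + c*ln x for the single-input translog."""
    return b + c * ln_x

def scale_efficiency(ln_x, b, c):
    """SE = exp((1 - eps)^2 / (2c)); with c < 0 this is <= 1 and
    equals 1 exactly at the most productive scale size (eps = 1)."""
    eps = scale_elasticity(ln_x, b, c)
    return math.exp((1.0 - eps) ** 2 / (2.0 * c))

b, c = 1.4, -0.2              # illustrative translog coefficients
ln_x_mpss = (1.0 - b) / c     # eps = 1 here
print(scale_efficiency(ln_x_mpss, b, c))  # prints 1.0 at the MPSS
```

Moving away from `ln_x_mpss` in either direction drives ε away from 1 and SE below 1, which is the quantity the multi-output extension generalizes.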
Abstract:
Multiple holes were cored at Ocean Drilling Program Leg 178 Sites 1098 and 1099 in two subbasins of the Palmer Deep in order to recover complete and continuous records of sedimentation. By correlating measured properties of cores from different holes at a site, we have established a common depth scale, referred to as the meters composite depth scale (mcd), for all cores from Site 1098. For Site 1098, distinct similarities in the magnetic susceptibility records obtained from three holes provide tight constraints on between-hole correlation. Additional constraints come from lithologic features. Specific intervals from other data sets, particularly gamma-ray attenuation bulk density, magnetic intensity, and color reflectance, contain distinctive anomalies that correlate well when placed into the preferred composite depth scale, confirming that the scale is accurate. Coring in two holes at Site 1099 provides only a few meters of overlap. None of the data sets within this limited overlap region provide convincing correlations. Thus, the preferred composite depth scale for Site 1099 is the existing depth scale in meters below seafloor (mbsf).
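The between-hole correlation described above amounts to finding the depth shift that best aligns two measured records, such as magnetic susceptibility logs. A minimal sketch using discrete cross-correlation, assuming both records share a uniform sample spacing (the actual shipboard correlation also uses lithologic tie points):

```python
import numpy as np

def best_depth_offset(ref, other, spacing_m):
    """Signed depth shift (m) of `other` relative to `ref` that
    maximizes their cross-correlation; negative values mean features
    in `other` sit deeper than in `ref`."""
    ref = (ref - np.mean(ref)) / np.std(ref)
    other = (other - np.mean(other)) / np.std(other)
    corr = np.correlate(ref, other, mode="full")
    lag = int(np.argmax(corr)) - (len(other) - 1)
    return lag * spacing_m

# Synthetic susceptibility logs: hole B is hole A shifted 3 samples deeper
depth = np.arange(0, 20, 0.1)
hole_a = np.sin(depth) + 0.3 * np.sin(3.7 * depth)
hole_b = np.roll(hole_a, 3)   # illustrative 0.3 m offset
print(best_depth_offset(hole_a, hole_b, spacing_m=0.1))
```

Applying the recovered shift to each core's depths is what places all holes onto a common composite (mcd) scale; where the overlap is too short, as at Site 1099, no lag stands out and the mbsf scale is retained.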
Abstract:
A composite section, which reconstructs a continuous stratigraphic record from cores of multiple nearby holes, and its associated composite depth scale are important tools for analyzing sediment recovered from a drilling site. However, the standard technique for creating composite depth scales on drilling cruises does not correct for depth distortion within each core. Additionally, the splicing technique used to create composite sections often results in a 10-15% offset between composite depths and measured drill depths. We present a new automated compositing technique that better aligns stratigraphy across holes, corrects depth offsets, and could be performed aboard ship. By analyzing 618 cores from seven Ocean Drilling Program (ODP) sites, we estimate that ~80% of the depth offset in traditional composite depth scales results from core extension during drilling and extraction. Average rates of extension are 12.4 ± 1.5% for calcareous and siliceous cores from ODP Leg 138 and 8.1 ± 1.1% for calcareous and clay-rich cores from ODP Leg 154. Also, average extension decreases as a function of depth in the sediment column, suggesting that elastic rebound is not the dominant extension mechanism.
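In the simplest case, removing a known extension amounts to compressing observed within-core depths back by the extension factor. This is a uniform-extension simplification of the correction, not the paper's automated alignment method:

```python
def corrected_depth(top_mbsf, depth_in_core, extension_rate):
    """Map an observed within-core depth to a corrected depth by
    removing uniform core extension. `extension_rate` is a fraction,
    e.g. 0.124 for the 12.4% Leg 138 average quoted above."""
    return top_mbsf + depth_in_core / (1.0 + extension_rate)

# A feature logged 5.0 m into a core whose top is at 100.0 mbsf,
# assuming the Leg 138 average extension of 12.4%
print(corrected_depth(100.0, 5.0, 0.124))
```

Because extension rates vary between cores and with burial depth, a practical correction estimates the rate per core from the inter-hole alignment rather than applying one fixed value.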
Abstract:
The calcareous nannofossil assemblages of Ocean Drilling Program Hole 963D from the central Mediterranean Sea have been investigated to document oceanographic changes in surface waters. The studied site is located in an area sensitive to large-scale atmospheric and climatic systems and to high- and low-latitude climate connections. It is characterized by a high sedimentation rate (the achieved mean sampling resolution is <70 years) that allowed the Sicily Channel environmental changes to be examined in great detail over the last 12 ka BP. We focused on the species Florisphaera profunda, which lives in the lower photic zone. Its distribution pattern shows repeated abundance fluctuations of about 10-15%. Such variations could be related to different primary production levels, given that the study of the distribution of this species on the Sicily Channel seafloor demonstrates a significant correlation to productivity changes as provided by satellite imagery. Productivity variations were quantitatively estimated and were interpreted on the basis of the relocation of the nutricline within the photic zone, driven by the dynamics of the summer thermocline. Productivity changes were compared with oceanographic, atmospheric, and cosmogenic nuclide proxies. The good match with Holocene master records, such as ice-rafted detritus in the subpolar North Atlantic, and the near-1500-year periodicity suggest that the Sicily Channel environment responded to worldwide climate anomalies. Enhanced Northern Hemisphere atmospheric circulation, which has been reported as one of the most important forcing mechanisms for Holocene coolings in previous Mediterranean studies, had a remarkable impact on the water column dynamics of the Sicily Channel.
Abstract:
High-latitude ecosystems play an important role in the global carbon cycle and in regulating the climate system, and are presently undergoing rapid environmental change. Accurate land cover data sets are required both to document these changes and to provide land-surface information for benchmarking and initializing Earth system models. Earth system models also require specific land cover classification systems based on plant functional types (PFTs), rather than species or ecosystems, and so post-processing of existing land cover data is often required. This study compares, over Siberia, multiple land cover data sets against one another and with auxiliary data to identify key uncertainties that contribute to variability in PFT classifications and would introduce errors in Earth system modeling. Land cover classification systems from GLC 2000, GlobCover 2005 and 2009, and MODIS collections 5 and 5.1 are first aggregated to a common legend, and then compared to high-resolution land cover classification systems, vegetation continuous fields (MODIS VCFs) and satellite-derived tree heights (to discriminate between sparse, shrub, and forest vegetation). The GlobCover data set, which uses a lower tree-cover threshold and taller tree heights and has a finer spatial resolution, tends to have better distributions of tree cover compared to high-resolution data. It has therefore been chosen to build new PFT maps for the ORCHIDEE land surface model at 1 km scale. Compared to the original PFT data set, the new PFT maps based on GlobCover 2005 and an updated cross-walking approach mainly differ in the characterization of forests and degree of tree cover. The partition of grasslands and bare soils now appears more realistic compared with ground truth data. This new vegetation map provides a framework for further development of new PFTs in the ORCHIDEE model, like shrubs, lichens and mosses, to better represent the water and carbon cycles in northern latitudes. 
Updated land cover data sets are critical for improving and maintaining the relevance of Earth system models for assessing climate and human impacts on biogeochemistry and biophysics.
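The cross-walking step mentioned above aggregates source land-cover classes into model PFT fractions via a lookup table. The class names and fractions below are hypothetical, not the actual GlobCover or ORCHIDEE tables; they only illustrate the mechanics:

```python
# Hypothetical cross-walk: each source land-cover class contributes
# fractions of model plant functional types (PFTs).
CROSS_WALK = {
    "closed_needleleaf_forest": {"boreal_needleleaf_evergreen": 0.9,
                                 "bare_soil": 0.1},
    "mosaic_grassland":         {"c3_grass": 0.6,
                                 "boreal_broadleaf_deciduous": 0.2,
                                 "bare_soil": 0.2},
    "sparse_vegetation":        {"c3_grass": 0.3, "bare_soil": 0.7},
}

def pft_fractions(class_counts):
    """Aggregate pixel counts per land-cover class into PFT fractions
    for one grid cell."""
    total = sum(class_counts.values())
    out = {}
    for lc, n in class_counts.items():
        for pft, frac in CROSS_WALK[lc].items():
            out[pft] = out.get(pft, 0.0) + frac * n / total
    return out

print(pft_fractions({"closed_needleleaf_forest": 50, "sparse_vegetation": 50}))
```

In an updated cross-walking approach, it is the entries of such a table (and the legend it keys on) that are revised against auxiliary data like VCF tree cover and tree heights.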
Abstract:
The spatial and temporal dynamics of seagrasses have been studied from the leaf to patch (100 m²) scales. However, landscape scale (> 100 km²) seagrass population dynamics are unresolved in seagrass ecology. Previous remote sensing approaches have lacked the temporal or spatial resolution, or ecologically appropriate mapping, to fully address this issue. This paper presents a robust, semi-automated object-based image analysis approach for mapping dominant seagrass species, percentage cover and above ground biomass using a time series of field data and coincident high spatial resolution satellite imagery. The study area was a 142 km² shallow, clear water seagrass habitat (the Eastern Banks, Moreton Bay, Australia). Nine data sets acquired between 2004 and 2013 were used to create seagrass species and percentage cover maps through the integration of seagrass photo transect field data, and atmospherically and geometrically corrected high spatial resolution satellite image data (WorldView-2, IKONOS and Quickbird-2) using an object based image analysis approach. Biomass maps were derived using empirical models trained with in-situ above ground biomass data per seagrass species. Maps and summary plots identified inter- and intra-annual variation of seagrass species composition, percentage cover level and above ground biomass. The methods provide a rigorous approach for field and image data collection and pre-processing, a semi-automated approach to extract seagrass species and cover maps and assess accuracy, and the subsequent empirical modelling of seagrass biomass. The resultant maps provide a fundamental data set for understanding landscape scale seagrass dynamics in a shallow water environment. Our findings provide proof of concept for the use of time-series analysis of remotely sensed seagrass products for use in seagrass ecology and management.
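Per-species empirical biomass models of the kind described, trained on in-situ biomass against mapped cover, can be sketched with a simple log-linear fit. The functional form and the synthetic training data are assumptions for illustration; the study's actual models may differ:

```python
import numpy as np

def fit_biomass_model(cover_pct, biomass_g_m2):
    """Fit biomass ~ exp(a + b * cover) for one species by regressing
    log-biomass on percentage cover. Returns (a, b)."""
    b, a = np.polyfit(np.asarray(cover_pct, float),
                      np.log(np.asarray(biomass_g_m2, float)), 1)
    return a, b

def predict_biomass(cover_pct, a, b):
    """Apply the fitted model to mapped cover values."""
    return np.exp(a + b * np.asarray(cover_pct, float))

# Synthetic training data for one species (illustrative only)
cover = np.array([10.0, 25.0, 40.0, 60.0, 80.0])
biomass = np.exp(0.5 + 0.03 * cover)
a, b = fit_biomass_model(cover, biomass)
print(round(a, 3), round(b, 3))  # recovers 0.5 and 0.03
```

Applying `predict_biomass` pixel-by-pixel to a species-specific cover map is what turns the cover product into the biomass map.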
Abstract:
For the qualitative description of surface properties like vegetation cover or land-water ratio of Samoylov Island, as well as for the evaluation of fetch homogeneity considerations of the eddy covariance measurements and for the up-scaling of chamber flux measurements, a detailed surface classification of the island at the sub-polygonal scale is necessary. However, until now only grey-scale Corona satellite images from the 1960s with a resolution of 2 x 2 m and recent multi-spectral LandSat images with a resolution of 30 x 30 m were available for this region. Neither is usable for the desired classification, because of missing spectral information and inadequate resolution, respectively. During the Lena 2003 expedition, a survey of the island by air photography was carried out in order to obtain images for surface classification. The photographs were taken from a helicopter on 10.07.2002, using a Canon EOS100 reflex camera, a Soligor 19-23 mm lens and colour slide film. The height from which the photographs were taken was approximately 600 meters. Due to limited flight time, not all the area of the island could be photographed, and some regions could only be photographed with a slanted view. As a result, the images are of varying quality and resolution. In Potsdam, after processing, the films were scanned using a Nikon LS-2000 scanner at the maximum resolution setting. This resulted in a ground resolution of the scanned images of approximately 0.3 x 0.3 m. The images were subsequently geo-referenced using the ENVI software and a referenced Corona image dating from 18.07.1964 (Spott, 2003). Geo-referencing was only possible for the Holocene river terrace areas; the floodplain regions in the western part of the island could not be referenced due to the lack of ground reference points. In Figure 3.7-1, the aerial view of Samoylov Island composed of the geo-referenced images is shown. Further work is necessary for the classification and interpretation of the images. 
If possible, air photography surveys will be carried out during future expeditions in order to determine changes in surface pattern and composition.
Abstract:
This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
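The descriptor idea, pooling gradient orientations over concentric rectangular rings of an image patch, can be sketched as follows. The number of rings and orientation bins, and the pooling details, are assumptions for illustration rather than the article's exact design:

```python
import numpy as np

def concentric_descriptor(patch, n_rings=3, n_bins=8):
    """Magnitude-weighted histograms of gradient orientation, pooled
    over concentric rectangular rings of a square patch, concatenated
    and L2-normalized. Ring/bin counts are illustrative."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # orientation in [0, pi)
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # Chebyshev distance from the centre yields concentric rectangles
    ring = np.minimum((np.maximum(np.abs(yy - cy) / cy,
                                  np.abs(xx - cx) / cx)
                       * n_rings).astype(int), n_rings - 1)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    desc = np.zeros(n_rings * n_bins)
    for r in range(n_rings):
        m = ring == r
        np.add.at(desc, r * n_bins + bins[m], mag[m])
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

patch = np.tile(np.arange(16, dtype=float), (16, 1))  # horizontal ramp
d = concentric_descriptor(patch)
print(d.shape)  # (24,): 3 rings x 8 orientation bins
```

The appeal of this family of descriptors is the short feature vector (here 24 values versus hundreds for dense cell grids), which keeps the SVM evaluation cheap enough for real-time use.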
Abstract:
The manipulation and handling of an ever increasing volume of data by current data-intensive applications require novel techniques for efficient data management. Despite recent advances in every aspect of data management (storage, access, querying, analysis, mining), future applications are expected to scale to even higher degrees, not only in terms of volumes of data handled but also in terms of users and resources, often making use of multiple, pre-existing autonomous, distributed or heterogeneous resources.
Abstract:
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and works easily with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
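The PU/Central Unit split can be sketched with a worker pool: a supervisor fans frames out to per-camera processing units running in parallel and gathers their results. Everything below is a hypothetical stand-in (threads and a pixel-count "detector"), not the paper's implementation, where PUs would run real detectors on separate processes or hosts:

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(args):
    """Stand-in for one Processing Unit (PU): here it just counts
    'bright' pixels; a real PU would run the 2D object detector on
    frames acquired from its camera."""
    cam_id, frame = args
    return cam_id, sum(1 for px in frame if px > 128)

def central_unit(frames_by_camera, max_workers=4):
    """Minimal Central Unit: dispatch one frame per camera to the
    PU pool in parallel and collect per-camera results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(process_frame, frames_by_camera.items()))

result = central_unit({0: [0, 200, 255], 1: [10, 20, 30], 2: [130] * 5})
print(result)  # → {0: 2, 1: 0, 2: 5}
```

Scaling with the number of cameras then reduces to sizing the pool, which mirrors the load tests reported above.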
Abstract:
In recent years, applications in domains such as telecommunications, network security and large-scale sensor networks have shown the limits of the traditional store-then-process paradigm. In this context, Stream Processing Engines emerged as a candidate solution for all these applications demanding high processing capacity with low processing latency guarantees. With Stream Processing Engines, data streams are not persisted but rather processed on the fly, producing results continuously. Current Stream Processing Engines, either centralized or distributed, do not scale with the input load due to single-node bottlenecks. Moreover, they are based on static configurations that lead to either under- or over-provisioning. This Ph.D. thesis discusses StreamCloud, an elastic parallel-distributed stream processing engine that enables processing of large data stream volumes. StreamCloud minimizes the distribution and parallelization overhead by introducing novel techniques that split queries into parallel subqueries and allocate them to independent sets of nodes. Moreover, StreamCloud's elastic and dynamic load balancing protocols enable effective adjustment of resources depending on the incoming load. Together with the parallelization and elasticity techniques, StreamCloud defines a novel fault tolerance protocol that introduces minimal overhead while providing fast recovery. StreamCloud has been fully implemented and evaluated using several real-world applications such as fraud detection and network analysis applications. The evaluation, conducted on a cluster with more than 300 cores, demonstrates the large scalability of StreamCloud and the effectiveness of its elasticity and fault tolerance.
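Splitting a query into parallel subqueries over independent node sets relies on partitioning tuples by key, so each instance sees a disjoint key range and keyed operators need no cross-node coordination. A minimal hash-partitioning sketch, not StreamCloud's actual protocol (which additionally handles elasticity and state movement):

```python
import hashlib

def partition(tuple_key, n_nodes):
    """Route a tuple to one of n parallel subquery instances by a
    stable hash of its key."""
    h = int(hashlib.sha1(str(tuple_key).encode()).hexdigest(), 16)
    return h % n_nodes

# Each node receives only its own keys, so a keyed aggregate (e.g. a
# per-account count in a fraud-detection query) runs fully in parallel.
stream = [("acct-17", 120.0), ("acct-9", 45.5), ("acct-17", 310.0)]
nodes = {}
for key, amount in stream:
    nodes.setdefault(partition(key, 4), []).append((key, amount))
print({n: len(t) for n, t in sorted(nodes.items())})
```

Because the hash is stable, every tuple of a given key always lands on the same instance; elasticity protocols then have to reassign key ranges (and migrate their state) when nodes are added or removed.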
Abstract:
The creation of atlases, or digital models where information from different subjects can be combined, is a field of increasing interest in biomedical imaging. When a single image does not contain enough information to appropriately describe the organism under study, it is necessary to acquire images of several individuals, each of them containing complementary data with respect to the rest of the components in the cohort. This approach allows creating digital prototypes, ranging from anatomical atlases of human patients and organs, obtained for instance from Magnetic Resonance Imaging, to gene expression cartographies of embryo development, typically achieved from Light Microscopy. Within this context, in this PhD Thesis we propose, develop and validate new dedicated image processing methodologies that, based on image registration techniques, bring information from multiple individuals into alignment within a single digital atlas model. We also elaborate a dedicated software visualization platform to explore the resulting wealth of multi-dimensional data, and novel analysis algorithms to automatically mine the generated resource in search of biological insights. In particular, this work focuses on gene expression data from developing zebrafish embryos imaged at the cellular resolution level with Two-Photon Laser Scanning Microscopy. Disposing of quantitative measurements relating multiple gene expressions to cell position and their evolution in time is a fundamental prerequisite to understand embryogenesis multi-scale processes. However, the number of gene expressions that can be simultaneously stained in one acquisition is limited due to optical and labeling constraints. These limitations motivate the implementation of atlasing strategies that can recreate a virtual gene expression multiplex. The developed computational tools have been tested in two different scenarios. The first one is early zebrafish embryogenesis, where the resulting atlas constitutes a link between the phenotype and the genotype at the cellular level. The second one is the late zebrafish brain, where the resulting atlas allows studies relating gene expression to brain regionalization and neurogenesis. The proposed computational frameworks have been adapted to the requirements of both scenarios, such as the integration of partial views of the embryo into a whole-embryo model with cellular resolution, or the registration of anatomical traits with deformable transformation models not dependent on any specific labeling. The software implementation of the atlas generation tool (Match-IT) and the visualization platform (Atlas-IT), together with the gene expression atlas resources developed in this Thesis, are to be made freely available to the scientific community. Lastly, a novel proof-of-concept experiment integrates for the first time 3D gene expression atlas resources with cell lineages extracted from live embryos, opening up the door to correlating genetic and cellular spatio-temporal dynamics.