993 results for direct mapping


Relevance:

100.00%

Publisher:

Abstract:

We propose in this paper a new method for mapping hippocampal (HC) surfaces to establish correspondences between points on HC surfaces and enable localized HC shape analysis. A novel geometric feature, the intrinsic shape context, is defined to capture the global characteristics of HC shapes. Based on this intrinsic feature, an automatic algorithm is developed to detect a set of landmark curves that are stable across the population. The direct map between a source and a target HC surface is then solved as the minimizer of a harmonic energy function defined on the source surface with landmark constraints. For numerical solutions, we compute the map with the approach of solving partial differential equations on implicit surfaces. The direct mapping method has the following properties: (1) it is fully automatic; (2) it is invariant to the pose of HC shapes. In our experiments, we apply the direct mapping method to study temporal changes of HC asymmetry in Alzheimer's disease (AD) using HC surfaces from 12 AD patients and 14 normal controls. Our results show that the AD group has a different trend in temporal changes of HC asymmetry than the group of normal controls. We also demonstrate the flexibility of the direct mapping method by applying it to construct spherical maps of HC surfaces. Spherical harmonics (SPHARM) analysis is then applied, and it confirms our results on temporal changes of HC asymmetry in AD.
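The landmark-constrained harmonic-energy minimization can be sketched in its simplest discrete form: relax a map toward the average of its neighbors while holding landmark vertices fixed. The toy graph version below is illustrative only; the paper solves the corresponding PDE on implicit surfaces.

```python
# Toy illustration of landmark-constrained harmonic mapping: minimize
# the discrete harmonic (Dirichlet) energy of a map on a graph, holding
# landmark vertices fixed, via Jacobi relaxation. Names and the graph
# are illustrative, not the paper's implicit-surface solver.

def harmonic_map(neighbors, landmarks, iters=2000):
    """neighbors: dict vertex -> list of adjacent vertices.
    landmarks: dict vertex -> fixed target value (float).
    Returns dict vertex -> value minimizing the discrete harmonic
    energy subject to the landmark constraints."""
    f = {v: landmarks.get(v, 0.0) for v in neighbors}
    for _ in range(iters):
        new = {}
        for v, nbrs in neighbors.items():
            if v in landmarks:                 # landmark constraint: hold fixed
                new[v] = landmarks[v]
            else:                              # harmonic condition: neighbor average
                new[v] = sum(f[u] for u in nbrs) / len(nbrs)
        f = new
    return f

# On a path graph 0-1-2-3-4 with its endpoints as landmarks, the
# harmonic map linearly interpolates between the landmark values.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
f = harmonic_map(path, landmarks={0: 0.0, 4: 1.0})
print([round(f[v], 3) for v in range(5)])      # -> [0.0, 0.25, 0.5, 0.75, 1.0]
```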

Relevance:

60.00%

Publisher:

Abstract:

A modular image capture system with close integration to CCD cameras has been developed. The aim is to produce a system capable of integrating CCD sensor, image capture and image processing into a single compact unit. This close integration provides a direct mapping between CCD pixels and digital image pixels. The system has been interfaced to a digital signal processor board for the development and control of image processing tasks. These have included characterization and enhancement of noisy images from an intensified camera, and measurement to subpixel resolutions. A highly compact form of the image capture system is at an advanced stage of development. It consists of a single FPGA device and a single VRAM, providing a two-chip image capture system capable of being integrated into a CCD camera. A miniature compact PC has been developed using a novel modular interconnection technique, providing a processing unit in a three-dimensional format highly suited to integration into a CCD camera unit. Work is under way to interface the compact capture system to the PC using this interconnection technique, combining CCD sensor, image capture and image processing into a single compact unit. ©2005 Copyright SPIE - The International Society for Optical Engineering.
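Measurement to subpixel resolutions, as mentioned above, is commonly done with an intensity-weighted centroid; a minimal sketch follows. The profile data and the centroid method are illustrative assumptions, since the abstract does not specify the algorithm used.

```python
# Sketch of subpixel feature location: an intensity-weighted centroid
# places a peak between pixel centres, giving fractional-pixel accuracy.
# Illustrative only; the system's DSP processing chain is not described.

def subpixel_centroid(row):
    """Centroid of a 1-D intensity profile, in (fractional) pixels."""
    total = sum(row)
    return sum(i * v for i, v in enumerate(row)) / total

profile = [0, 1, 6, 8, 6, 1, 0]           # symmetric peak centred on pixel 3
print(subpixel_centroid(profile))         # -> 3.0
profile2 = [0, 1, 6, 8, 7, 2, 0]          # slightly skewed peak
print(round(subpixel_centroid(profile2), 3))   # -> 3.125
```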

Relevance:

60.00%

Publisher:

Abstract:

In every language, the terms and phrases for representing spatial location and orientation, and the ways of sharing spatial knowledge, are very rich. The basic way of sharing spatial information is to map our experience of, and actions within, the environment onto terms and utterances that represent spatial relations. How to build the mapping relation among them, and what factors affect the process of mapping, are the questions this study set out to answer. The whole course of expressing a projective spatial relation includes both the perception of the relation and its verbal expression. In Experiment 1, the perceptual characteristics of perceiving projective spatial relations were studied by analyzing production latencies from the presentation of stimuli in different directions (at five levels: 0°, 22.5°, 45°, 67.5°, and 90°) to the onset of the corresponding button press on the keyboard; the study verified the results of prior research and revealed the foundation of expressing projective spatial relations. In Experiments 2 and 3, the manner and role of verbal expression were investigated: subjects were asked to state the spatial relation between an intended object and a reference object using verbal locative expressions, in Chinese in Experiment 2 and in English in Experiment 3. Experiment 4 was similar to Experiment 3, but the timing of the voice-key trigger was controlled and balanced across trials to further verify the results of Experiment 3. Experiment 5 investigated the effect of pre-cues on the course of expressing projective spatial relations. There were two kinds of cues: spatial locative utterances, and a perceptual coordinate framework, such as a cross "+" drawn in a circle to imply four quadrants. The main conclusions of this research were as follows: 1. When speaking a spatial relation aloud, different sets of spatial terms, such as "left and right" or "north and south", affected the speed of verbal expression. The verbal coding process was affected by how well the perceptually salient direction matched the spatial terms, which made the speed of verbal expression differ. 2. When using composite spatial terms to express diagonal directions, people tended to map directly from the spatial conceptual representation to the composite spatial terms rather than combining the two axes, which implies a direct one-to-one mapping between spatial conceptual representations and spatial terms. During a specific developmental period, however, the strategy of combining two axes was also employed for spatial expression, which means that perceptually salient directions play a critical role in perceiving and expressing projective spatial relations. 3. The verbal expression of projective spatial relations improved with the familiarity of the spatial utterances, but this improvement was not the result of an enhanced effect of the prototypical diagonal directions.

Relevance:

60.00%

Publisher:

Abstract:

The Healthy and Biologically Diverse Seas Evidence Group (HBDSEG) has been tasked with providing the technical advice for the implementation of the Marine Strategy Framework Directive (MSFD) with respect to descriptors linked to biodiversity. A workshop was held in London to address one of the Research and Development (R&D) proposals, entitled ‘Mapping the extent and distribution of habitats using acoustic and remote techniques, relevant to indicators for area/extent/habitat loss’. The aim of the workshop was to identify, define and assess the feasibility of potential indicators of benthic habitat distribution and extent, and to identify the R&D work that could be required to fully develop these indicators. The main points that came out of the workshop were: (i) There are many technical aspects of marine habitat mapping that still need to be resolved if cost-effective spatial indicators are to be developed. Many of the technical aspects that need addressing concern issues of consistency, confidence and repeatability. These areas should be tackled by the JNCC Habitat Mapping and Classification Working Group and the HBDSEG Seabed Mapping Working Group. (ii) There is a need for benthic ecologists (through the HBDSEG Benthic Habitats Subgroup and the JNCC Marine Indicators Group) to finalise the list of habitats for which extent and/or distribution indicators should be considered for development, building upon the recommendations from this report. When reviewing the list of indicators, benthic habitats could also be divided into those defined/determined primarily by physical parameters (although including biological assemblages, e.g. subtidal shallow sand) and those defined primarily by their biological assemblage (e.g. seagrass beds). This distinction is important because some anthropogenic pressures may influence the biological component of the ecosystem despite not having a quantifiable effect on the physical habitat distribution/extent. 
(iii) The scale and variety of UK benthic habitats make any attempt to undertake comprehensive direct mapping exercises prohibitively expensive (especially where there is a need for repeat surveys for assessment). There is therefore a clear need to develop a risk-based approach that uses indirect indicators (e.g. modelling), such as habitats at risk from pressures caused by current human activities, to develop priorities for information gathering. The next steps that came out of the workshop were: (i) A combined approach should be developed by the JNCC Marine Indicators Group together with the HBDSEG Benthic Habitats Subgroup, which will compile and ultimately synthesise all the criteria used by the three different groups from the workshop. The agreed combined approach will be used to undertake a final review of the habitats considered during the workshop, and to evaluate any remaining habitats, in order to produce a list of habitats for which extent and/or distribution indicators could be appropriate for development. (ii) The points of advice raised at this workshop, alongside the aforementioned combined approach and the final list of habitats for extent and/or distribution indicator development, will be used to develop a prioritised list of actions to inform the next round of R&D proposals for benthic habitat indicator development in 2014. This will be done through technical discussions within JNCC and the relevant HBDSEG Subgroups. The preparation of recommendations by these groups should take into account existing work programmes and consider the limited resources available to undertake any further R&D work.

Relevance:

60.00%

Publisher:

Abstract:

The highly structured nature of many digital signal processing operations allows these to be implemented directly as regular VLSI circuits. This feature has been successfully exploited in the design of a number of commercial chips, some examples of which are described. While many of the architectures on which such chips are based were originally derived on a heuristic basis, there is increasing interest in the development of systematic design techniques for the direct mapping of computations onto regular VLSI arrays. The purpose of this paper is to show how the technique proposed by Kung can be readily extended to the design of VLSI signal processing chips where the organisation of computations at the level of individual data bits is of paramount importance. The technique in question allows architectures to be derived using the projection and retiming of data dependence graphs.
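The projection-and-retiming idea can be sketched on a matrix-vector product: the 2-D dependence graph at indices (i, j) is projected onto a linear array by the space-time mapping processor = i, time = i + j. This is a generic textbook-style example under assumed mapping functions, not the bit-level design the paper develops.

```python
# Sketch of mapping a 2-D dependence graph onto a 1-D systolic array.
# For y[i] = sum_j A[i][j] * x[j], the (illustrative) space-time map
#   processor = i,  time = i + j
# schedules exactly one multiply-accumulate per processor per tick.

def systolic_matvec(A, x):
    n, m = len(A), len(x)
    y = [0] * n
    for t in range(n + m - 1):            # global clock
        for i in range(n):                # processor index
            j = t - i                     # retimed data index for this tick
            if 0 <= j < m:
                y[i] += A[i][j] * x[j]    # one MAC per processor per tick
    return y

A = [[1, 2], [3, 4]]
x = [5, 6]
print(systolic_matvec(A, x))              # -> [17, 39]
```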

Relevance:

60.00%

Publisher:

Abstract:

Non-conventional database management systems are used to achieve better performance when dealing with complex data. One fundamental concept of these systems is object identity (OID): each object in the database has a unique identifier that is used to access it and to reference it in relationships with other objects. Two approaches can be used to implement OIDs: physical or logical OIDs. To manage complex data, the Multimedia Data Manager Kernel (NuGeM) was proposed; it uses a logical technique named Indirect Mapping. This paper proposes an improvement to the technique used by NuGeM whose original contribution is the management of OIDs with fewer disk accesses and less processing, thus reducing page-management time and eliminating the problem of OID exhaustion. The technique presented here can also be applied to other OODBMSs. © 2011 IEEE.
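A minimal sketch of the general indirection-table technique that logical OIDs rely on: every reference stores a logical OID, and relocating an object updates one table entry rather than every reference. Names and structure are illustrative, not NuGeM's actual layout.

```python
# Logical OIDs via an indirection table (general technique, not NuGeM's
# concrete implementation): the OID indexes an entry holding the
# object's current physical page, so objects can be moved freely.

class IndirectionTable:
    def __init__(self):
        self._next_oid = 0
        self._table = {}                 # logical OID -> physical page id

    def create(self, page):
        oid = self._next_oid
        self._next_oid += 1
        self._table[oid] = page
        return oid                       # references store this logical OID

    def locate(self, oid):
        return self._table[oid]          # one extra lookup vs. physical OIDs

    def relocate(self, oid, new_page):
        self._table[oid] = new_page      # all references stay valid

tab = IndirectionTable()
oid = tab.create(page=7)
tab.relocate(oid, new_page=42)           # move the object; OID unchanged
print(oid, tab.locate(oid))              # -> 0 42
```

The trade-off the papers address is visible here: the logical scheme pays one extra table lookup per access (a potential extra disk access), which is why reducing I/O on the indirection table matters.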

Relevance:

60.00%

Publisher:

Abstract:

Non-conventional database management systems are used to achieve better performance when dealing with complex data. One fundamental concept of these systems is object identity (OID). Two techniques can be used to implement OIDs: physical or logical. A logical implementation of OIDs, based on an indirection table, is used by NuGeM, the multimedia data manager kernel described in this paper. NuGeM's indirection table allows the relocation of all pages in a database. The proposed strategy modifies the workings of this table so that the number of I/O operations during the request and release of pages containing objects and their OIDs can be considerably reduced. Tests show a reduction of 84% in read operations and of 67% in write operations when pages are requested. Although no changes were observed in write operations during the release of pages, a 100% reduction in read operations was obtained. © 2012 IEEE.

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance:

60.00%

Publisher:

Abstract:

RDB to RDF Mapping Language (R2RML) is a W3C recommendation that allows specifying rules for transforming relational databases into RDF. This RDF data can be materialized and stored in a triple store, so that SPARQL queries can be evaluated by the triple store. However, there are several cases where materialization is not adequate or possible, for example, if the underlying relational database is updated frequently. In those cases, RDF data is better kept virtual, and hence SPARQL queries over it have to be translated into SQL queries against the underlying relational database system, a translation that has to take the specified R2RML mappings into account. The first part of this thesis focuses on query translation. We discuss the formalization of the translation from SPARQL to SQL queries that takes R2RML mappings into account. Furthermore, we propose several optimization techniques so that the translation procedure generates SQL queries that can be evaluated more efficiently over the underlying databases. We evaluate our approach using a synthetic benchmark and several real cases, and report the positive results we obtained. Direct Mapping (DM) is another W3C recommendation for the generation of RDF data from relational databases. 
While R2RML allows users to specify their own transformation rules, DM establishes fixed transformation rules. Although both recommendations were published at the same time, in September 2012, there has been no formal study of the relationship between them. The second part of this thesis therefore studies the relationship between R2RML and DM, in two directions: from R2RML to DM, and from DM to R2RML. From R2RML to DM, we study a fragment of R2RML having the same expressive power as DM. From DM to R2RML, we represent DM transformation rules as R2RML mappings, and also add the implicit semantics (subclass, 1-N and M-N relationships) that can be found encoded in the database. This thesis shows that, by formalizing and optimizing R2RML-based SPARQL-to-SQL query translation, it is possible to use R2RML engines in real cases without materializing the data, as the resulting SQL is efficient enough to be evaluated by the underlying relational database. In addition, this thesis deepens the understanding of the bidirectional relationship between the two W3C recommendations, something that had not been studied before.
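The core rule of the W3C Direct Mapping can be sketched in a few lines: each row becomes an RDF subject IRI built from the table name and primary key, and each non-null column value becomes a triple. This is a deliberate simplification (no foreign keys, datatypes, or IRI percent-encoding), and the base IRI is an assumption.

```python
# Minimal sketch of the W3C Direct Mapping's row-to-triples rule.
# Simplified: literal values only, no foreign-key reference triples,
# no datatype annotations, no IRI escaping.

BASE = "http://example.com/base/"   # assumed base IRI

def direct_map(table, pk, rows):
    triples = []
    for row in rows:
        # Subject IRI: <base>/<table>/<pk-column>=<pk-value>
        subject = f"<{BASE}{table}/{pk}={row[pk]}>"
        for col, val in row.items():
            if val is not None:                       # null columns emit nothing
                predicate = f"<{BASE}{table}#{col}>"  # "literal property" IRI
                triples.append((subject, predicate, f'"{val}"'))
    return triples

rows = [{"id": 1, "name": "Alice"}]
for t in direct_map("People", "id", rows):
    print(" ".join(t), ".")
```

An R2RML mapping can express exactly this behaviour (which is why a fragment of R2RML has the same expressive power as DM), but it can also override the IRI templates and predicates, which DM cannot.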

Relevance:

60.00%

Publisher:

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties, such as slow processing, thermal annealing, aqueous solvents and difficult readout, limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
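The kind of continuous-time Markov chain the RET network is said to implement can be sketched by simulation: exponential holding times and rate-proportional transitions are drawn until an absorbing state (the detected photon) is reached, so the absorption time follows a phase-type distribution. The rate matrix below is a hypothetical toy chain, not a fitted chromophore network.

```python
# Toy CTMC simulation: the time to absorption is a phase-type sample,
# the class of distributions the RET network generates physically.
import random

def sample_absorption_time(rates, absorbing, state=0):
    """rates[s] = {next_state: rate}; returns the time to reach `absorbing`."""
    t = 0.0
    while state != absorbing:
        out = rates[state]
        total = sum(out.values())
        t += random.expovariate(total)            # exponential holding time
        r, acc = random.random() * total, 0.0     # next state, chosen by rate
        for nxt, rate in out.items():
            acc += rate
            if r <= acc:
                state = nxt
                break
    return t

random.seed(0)
# Hypothetical 3-state chain; state 2 is absorbing ("photon detected").
rates = {0: {1: 2.0}, 1: {0: 1.0, 2: 1.0}}
samples = [sample_absorption_time(rates, absorbing=2) for _ in range(20000)]
print(round(sum(samples) / len(samples), 2))      # mean absorption time (theory: 2.0)
```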

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms for a wide range of applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor or GPU as specialized functional units, or organized as a discrete accelerator, to bring substantial speedups and power savings.

Relevance:

60.00%

Publisher:

Abstract:

Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has been increasingly recognized. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning, in terms of better precision and higher reliability of the related actions. The behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. However, the overwhelming part of both research lines is basic research. Apart from research in the domains of music, dance and motor rehabilitation, there is almost no evidence for the enhanced effectiveness of multisensory information in the learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with the perceptual feedback streams of the visual and proprioceptive modalities. With ongoing training, synchronously processed auditory information should be integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to other approaches that use acoustic information as error feedback in motor learning settings, we try to generate additional movement information that is suitable for accelerating and enhancing adequate sensorimotor representations and that is processable below the level of consciousness. 
In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition in indoor rowing). One group was trained with visual information and two groups with audiovisual information (sonification vs. natural sounds). For all three groups learning became evident and remained stable. Participants who received additional movement sonification showed better performance than both other groups. The results indicate that movement sonification enhances motor learning of a complex gross motor skill, even exceeding the usually expected acoustic rhythmic effects on motor learning.
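The direct mapping of motion parameters to electronic sounds described above can be sketched as a linear parameter-to-pitch mapping. The choice of handle velocity as the input parameter and the frequency range are illustrative assumptions, not the authors' actual sonification design.

```python
# Sketch of continuous movement sonification: a kinematic parameter
# stream (e.g. handle velocity on a rowing ergometer, hypothetical here)
# is mapped linearly onto a synthesis parameter such as pitch.

def sonify(value, in_lo, in_hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a motion parameter onto a frequency in hertz."""
    x = (value - in_lo) / (in_hi - in_lo)
    x = min(1.0, max(0.0, x))                 # clamp to the calibrated range
    return f_lo + x * (f_hi - f_lo)

velocities = [0.0, 0.5, 1.0, 2.0]             # m/s, hypothetical stroke profile
print([sonify(v, 0.0, 2.0) for v in velocities])  # -> [220.0, 385.0, 550.0, 880.0]
```

Driving an oscillator with this frequency stream yields the continuous auditory feedback channel; the same parameter stream presented alongside vision gives the convergent audiovisual stimulus array the abstract describes.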

Relevance:

40.00%

Publisher:

Abstract:

Alongside the great advantages of plasmonics in nanoscale light confinement, the inevitable ohmic loss results in significant Joule heating in plasmonic devices. Therefore, understanding optically induced heat generation and heat transport in integrated on-chip plasmonic devices is of major importance. Specifically, there is a need for in situ visualization of the electromagnetically induced thermal energy distribution with high spatial resolution. This paper studies the heat distribution in silicon plasmonic nanotips. Light is coupled to the plasmonic nanotips from a silicon nanowaveguide that is integrated with the tip on chip. Heat is generated by light absorption in the metal surrounding the silicon nanotip. The steady-state thermal distribution is studied numerically and measured experimentally using scanning thermal microscopy. It is shown that, following nanoscale heat generation by a 10 mW light source within a silicon photonic waveguide, the temperature in the region of the nanotip increases by ∼15 °C above ambient. Furthermore, we also perform a numerical study of the dynamics of the heat transport. Given the nanoscale dimensions of the structure, significant heating is expected to occur within a time frame of picoseconds. The capability of measuring the temperature distribution of plasmonic structures at the nanoscale is shown to be a powerful tool and may be used in future applications related to thermal plasmonics, such as controlled heating of liquids, thermophotovoltaics, nanochemistry, medicine, heat-assisted magnetic memories, and nanolithography.

Relevance:

40.00%

Publisher:

Abstract:

Since 1999, the National Commission for the Knowledge and Use of Biodiversity (CONABIO) in Mexico has been developing and managing the “Operational program for the detection of hot-spots using remote sensing techniques”. This program uses images from the MODerate resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites and from the Advanced Very High Resolution Radiometer of the National Oceanic and Atmospheric Administration (NOAA-AVHRR), which are operationally received through the Direct Readout (DR) station at CONABIO. This allows near-real-time monitoring of fire events in Mexico and Central America. In addition to the detection of active fires, the locations of hot spots are classified with respect to vegetation types, accessibility, and risk to Nature Protection Areas (NPAs). Beyond the fast detection of fires, further analysis is necessary because of the considerable effects of forest fires on biodiversity and human life. This fire impact assessment is crucial to support the needs of resource managers and policy makers for adequate fire recovery and restoration actions. CONABIO attempts to meet these requirements by providing post-fire assessment products as part of the management system, in particular satellite-based burnt area mapping. This paper provides an overview of the main components of the operational system and presents an outlook on future activities and system improvements, especially the development of a burnt area product. A special focus is also placed on fire occurrence within the NPAs of Mexico.

Relevance:

30.00%

Publisher:

Abstract:

The New Zealand creative sector was responsible for almost 121,000 jobs at the time of the 2006 Census (6.3% of total employment). These are divided between: • 35,751 creative specialists – persons employed doing creative work in creative industries; • 42,300 support workers – persons providing management and support services in creative industries; • 42,792 embedded creative workers – persons engaged in creative work in other types of enterprise. The most striking feature of this breakdown is that the largest group of creative workers is employed outside the creative industries, i.e. in other types of businesses. Even within the creative industries, fewer people are directly engaged in creative work than in providing management and support. Creative sector employees earned incomes of approximately $52,000 per annum at the time of the 2006 Census. This is relatively uniform across all three types of creative worker, and is significantly above the average for all employed persons (approximately $40,700). Creative employment and incomes grew strongly over both five-year periods between the 1996, 2001 and 2006 Censuses. However, when we compare creative and general trends, we see two distinct phases in the development of the creative sector: • rapid structural growth over the five years to 2001 (especially led by developments in ICT), with creative employment and incomes increasing rapidly at a time when they were growing modestly across the whole economy; • subsequent consolidation, with growth driven more by national economic expansion than by structural change, and creative employment and incomes moving in parallel with strong economy-wide growth. Other important trends revealed by the data are that: • the strongest growth during the decade was in embedded creative workers, especially over the first five years. 
The weakest growth was in creative specialists, with support workers in creative industries in the middle rank; • by far the strongest growth in creative industries’ employment was in Software & Digital Content, which trebled in size over the decade. Comparing New Zealand with the United Kingdom and Australia, the two southern-hemisphere nations have significantly lower proportions of total employment in the creative sector (both in creative industries and in embedded employment). New Zealand’s and Australia’s creative shares in 2001 were similar (5.4% each), but over the following five years New Zealand’s share expanded (to 5.7%) whereas Australia’s fell slightly (to 5.2%) – in both cases through changes in creative industries’ employment. The creative industries generated $10.5 billion in total gross output in the March 2006 year. Resulting from this was value added totalling $5.1b, representing 3.3% of New Zealand’s total GDP. Overall, value added in the creative industries represents 49% of industry gross output, which is higher than the average across the whole economy, 45%. This reflects the relatively high labour intensity and high earnings of the creative industries. Industries which have an above-average ratio of value added to gross output are usually labour-intensive, especially when wages and salaries are above average. This is true for Software & Digital Content and Architecture, Design & Visual Arts, with ratios of 60.4% and 55.2% respectively. However, there is significant variation in this ratio between different parts of the creative industries: some parts (e.g. Software & Digital Content and Architecture, Design & Visual Arts) generate even higher value added relative to output, and others (e.g. TV & Radio, Publishing and Music & Performing Arts) less, because of high capital intensity and import content. 
When we take into account the impact of the creative industries’ demand for goods and services from their suppliers, and consumption spending from incomes earned, we estimate an addition to economic activity of: • $30.9 billion in gross output, $41.4b in total; • $15.1b in value added, $20.3b in total; • 158,100 people employed, 234,600 in total. The total economic impact of the creative industries is approximately four times their direct output and value added, and three times their direct employment. Their effect on output and value added is roughly in line with the average over all industries, although the effect on employment is significantly lower. This is because of the relatively high labour intensity (and high earnings) of the creative industries, which generate below-average demand from suppliers but normal levels of demand through expenditure from incomes. Drawing on these numbers and conclusions, we suggest some (slightly speculative) directions for future research. The goal is to better understand the contribution the creative sector makes to productivity growth, and in particular the distinctive contributions of creative firms and embedded creative workers. The ideas for future research can be organised into several categories: • Understand the categories of the creative sector – who is doing the business? In other words, examine via more fine-grained research (perhaps at firm level) just what the creative contribution is from the different parts of the creative sector industries. It may be possible to categorise these in terms of more or less striking innovations. • Investigate the relationship between the characteristics and the performance of the various creative industries/sectors. • Look more closely at innovation at an industry level, e.g. using an index of relative growth of exports, and see whether this can be related to the intensity of use of creative inputs. • Undertake case studies of the creative sector. • Undertake case studies of the embedded contribution to growth in the firms and industries that employ creative workers, by examining several high-performing non-creative industries (in the same way as proposed for the creative sector). • Look at the aggregates – drawing on the broad picture of the numbers of creative workers embedded within the different industries, consider the extent to which these might explain aspects of the industries’ varied performance in terms of exports, growth and so on. • This might be extended to examine issues such as the type of creative workers that are most effective when embedded, or to test the hypothesis that each industry has its own particular requirements for embedded creative workers that overwhelm any generic contributions from, say, design or IT.
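The multiplier claims in this abstract can be cross-checked with a few lines of arithmetic. All figures are taken from the text itself; the direct-employment figure is inferred as total minus flow-on employment, which is an assumption.

```python
# Consistency check of the reported multipliers: total impact vs. direct
# output, value added and employment, plus the 49% value-added share.

direct_output, total_output = 10.5, 41.4        # $ billion, gross output
direct_va, total_va = 5.1, 20.3                 # $ billion, value added
direct_jobs, total_jobs = 234_600 - 158_100, 234_600   # inferred direct jobs

print(round(total_output / direct_output, 1))   # -> 3.9  ("approximately four times")
print(round(total_va / direct_va, 1))           # -> 4.0
print(round(total_jobs / direct_jobs, 1))       # -> 3.1  ("three times")
print(round(direct_va / direct_output, 2))      # -> 0.49 (the 49% value-added share)
```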