912 results for Colour and image sensitive detectors


Relevance: 100.00%

Publisher:

Abstract:

Silicoflagellate assemblages from ODP Leg 104 Neogene sequences are the basis for an interpretation of changes in the Neogene paleoenvironment of the Norwegian Sea. Fluctuations in the percentages of temperature- and nutrient-sensitive taxonomic groups document major changes in sea-surface conditions. A brief but distinct cooling event occurred at 18.0-17.5 Ma, resulting in the disappearance of Naviculopsis. Following this early Miocene cooling, a long period of increasing surface-water temperatures culminated in a thermal high in the early middle Miocene (14.0 Ma). The early late Miocene (10.0-9.0 Ma) was distinctly cooler than the middle Miocene, but warmer than the remainder of the Neogene. Conditions between 13.0 and 10.0 Ma are unrecorded because of a regional hiatus, which is the earliest evidence for an end to the more temperate and stable conditions of the early to middle middle Miocene. Temperatures plunged between 8.5 and 7.4 Ma, and subpolar conditions prevailed from 7.4 to 2.65 Ma, through the remainder of the late Miocene and the Pliocene. Silicoflagellates disappeared, except for sporadic occurrences, at 2.64 Ma with the onset of dominantly glacial sedimentation. Biogenic opal is absent from sediments younger than 0.76 Ma, indicating the dominance of glacial conditions with extensive sea ice.

Relevance: 100.00%

Publisher:

Abstract:

Specimens of Bolivina argentea and Bulimina marginata, two widely distributed temperate benthic foraminiferal species, were cultured at constant temperature and controlled pCO2 (ambient, 1000 ppmv, and 2000 ppmv) for six weeks to assess the effect of elevated atmospheric CO2 concentrations on survival and fitness, using adenosine triphosphate (ATP) analyses, and on shell microfabric, using high-resolution SEM and image analysis. To characterize the carbonate chemistry of the incubation seawater, total alkalinity and dissolved inorganic carbon were measured approximately every two weeks. Survival and fitness were not directly affected by elevated pCO2 and the concomitant decrease in seawater pH and calcite saturation state (Ωc), even when seawater was undersaturated with respect to calcite. These results differ from some previous observations that ocean acidification can cause a variety of effects on benthic foraminifera, including test dissolution, decreased growth, and mottling (loss of symbiont color in symbiont-bearing species), suggesting that the benthic foraminiferal response to ocean acidification may be species-specific. If so, this implies that ocean acidification may lead to ecological winners and losers even within the same taxonomic group.
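For reference, the calcite saturation state Ωc invoked in this abstract has a standard definition, stated below in our own notation; K'sp is the stoichiometric solubility product of calcite evaluated at the in-situ temperature, salinity and pressure.

```latex
% Standard definition of the calcite saturation state (notation is ours):
\[
  \Omega_c \;=\; \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K'_{sp}},
  \qquad
  \Omega_c < 1 \;\Rightarrow\; \text{seawater undersaturated; calcite tends to dissolve.}
\]
```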

Relevance: 100.00%

Publisher:

Abstract:

The spatial and temporal dynamics of seagrasses have been studied from the leaf to the patch (100 m²) scale. However, landscape-scale (>100 km²) seagrass population dynamics remain unresolved in seagrass ecology. Previous remote sensing approaches have lacked the temporal or spatial resolution, or the ecologically appropriate mapping, to fully address this issue. This paper presents a robust, semi-automated object-based image analysis approach for mapping dominant seagrass species, percentage cover and above-ground biomass using a time series of field data and coincident high-spatial-resolution satellite imagery. The study area was a 142 km² shallow, clear-water seagrass habitat (the Eastern Banks, Moreton Bay, Australia). Nine data sets acquired between 2004 and 2013 were used to create seagrass species and percentage cover maps by integrating seagrass photo-transect field data with atmospherically and geometrically corrected high-spatial-resolution satellite imagery (WorldView-2, IKONOS and QuickBird-2) in an object-based image analysis approach. Biomass maps were derived using empirical models trained with in-situ above-ground biomass data per seagrass species. Maps and summary plots identified inter- and intra-annual variation in seagrass species composition, percentage cover and above-ground biomass. The methods provide a rigorous approach to field and image data collection and pre-processing, a semi-automated approach to extracting seagrass species and cover maps and assessing their accuracy, and the subsequent empirical modelling of seagrass biomass. The resulting maps provide a fundamental data set for understanding landscape-scale seagrass dynamics in a shallow-water environment. Our findings provide proof of concept for the use of time series of remotely sensed seagrass products in seagrass ecology and management.
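As an illustration of the final empirical-modelling step, the sketch below fits a per-species biomass model from in-situ observations and applies it to image-derived cover. The linear form, the function names and the numbers are our assumptions; the paper only states that empirical models were trained per species.

```python
# Hedged sketch: per-species regression from image-derived percentage cover
# to above-ground biomass, trained on in-situ field observations.
import numpy as np

def fit_biomass_model(cover_pct, biomass_gdw_m2):
    """Least-squares fit biomass = a * cover + b for one seagrass species."""
    a, b = np.polyfit(cover_pct, biomass_gdw_m2, deg=1)
    return lambda cover: a * np.asarray(cover, float) + b

# Illustrative (invented) field data for one species: cover %, g DW per m^2.
model = fit_biomass_model([5, 20, 40, 60, 80], [3, 15, 33, 52, 70])
print(model([10, 50]))  # predicted biomass for two mapped cover levels
```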

Relevance: 100.00%

Publisher:

Abstract:

To document changes in Holocene glacier extent and activity in NE Greenland (~73° N), we study marine sediment records that extend from the fjords (PS2631 and PS2640), across the shelf (PS2623 and PS2641), to the Greenland Sea (JM07-174GC). The primary bedrock geology of the source areas is the Caledonian sediment outcrop, including Devonian red beds, plus early Neoproterozoic gneisses and early Tertiary volcanics. We examine variations in colour (CIE*), grain size, and bulk mineralogy (from X-ray diffraction of the <2 mm sediment fraction). Fjord core PS2640 in Sofia Sund, with a marked red hue, is distinct in grain size, colour and mineralogy from the other fjord and shelf cores. Five distinct grain-size modes are distinguished, of which only one is associated with a coarse ice-rafting signal; this mode is rare in the mid- and late Holocene. A sediment unmixing program (SedUnMixMC) is used to characterize down-core changes in sediment composition, based on the upper late Holocene sediments from cores PS2640 (Sofia Sund), PS2631 (Kaiser Franz Joseph Fjord) and PS2623 (south of Shannon Is), and on surface samples from the Kara Sea (as an indicator of transport from the Russian Arctic shelves). Major changes in mineral composition are noted in all cores, with possibly coeval shifts centred c. 2.5, 4.5 and 7.5 cal. ka BP (±0.5 ka), but these are rarely linked with changes in the grain-size spectra. Coarse IRD (>2 mm) and IRD grain-size spectra are rare in the last 9-10 cal. ka BP and, in contrast with areas farther south (~68° N), there is no distinct IRD signal at the onset of neoglaciation. Our paper demonstrates the importance of quantitative analysis of sediment properties in clarifying source-to-sink changes in glacial marine environments.
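The unmixing idea behind programs such as SedUnMixMC is to express each down-core sample as a non-negative mixture of candidate source ("end-member") compositions. The sketch below illustrates the generic technique with non-negative least squares; the source matrix, mineral list and numbers are invented for illustration and do not reproduce SedUnMixMC's algorithm or the paper's data.

```python
# Generic end-member unmixing sketch (not SedUnMixMC itself).
import numpy as np
from scipy.optimize import nnls

# Columns: candidate sources (e.g. red beds, gneiss, Kara Sea surface sample);
# rows: mineral fractions (e.g. quartz, hematite, feldspar, clay minerals).
sources = np.array([
    [0.30, 0.45, 0.40],
    [0.25, 0.05, 0.10],
    [0.25, 0.30, 0.20],
    [0.20, 0.20, 0.30],
])
sample = np.array([0.38, 0.12, 0.27, 0.23])  # one down-core XRD composition

weights, residual = nnls(sources, sample)  # non-negative mixing coefficients
weights /= weights.sum()                   # normalize to mixing proportions
print(weights, residual)
```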

Relevance: 100.00%

Publisher:

Abstract:

A particle accelerator is any device that uses electromagnetic fields to impart energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them to the level required for its purpose. The applications of particle accelerators are countless, ranging from the common TV CRT, through medical X-ray devices, to the large ion colliders used to probe the finest details of matter. Other engineering applications include ion implantation devices for producing better semiconductors and materials with remarkable properties. Research on materials able to withstand irradiation in future nuclear fusion plants also benefits from particle accelerators. A particle accelerator requires many devices for its correct operation. The most important are the particle sources; the guiding, focusing and correcting magnets; the radiofrequency accelerating cavities; the fast deflection devices; the beam diagnostic mechanisms; and the particle detectors. Historically, most fast particle-deflection devices have been built from copper coils and ferrite cores, which could achieve relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, yielding response times in the microsecond range. Beam-stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response, in the nanosecond range. This can only be achieved with an electromagnetic deflection device based on a transmission line. Such a device (a strip-line kicker) produces a transverse displacement of a particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses: the particles are diverted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This Thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases, which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. Among the concepts analyzed are scattering parameters, resonant higher-order modes and wakefields. Several contributions are presented in the calculation process, dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and electrode connections are also detailed, with some interesting contributions on these concepts. The electromagnetic and vacuum tests, required to ensure that the manufactured devices fulfil the specifications, are then analyzed. Finally, and from the analytical point of view only, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs). The solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with good flat-tops.
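As a rough illustration of the deflection mechanism described above, the sketch below estimates the kick angle of a strip-line kicker in the ultrarelativistic, small-angle limit, where for a beam counter-propagating with matched pulses the magnetic force equals the electric one and doubles the kick. All parameter values are illustrative assumptions, not the CTF3 specifications, and geometric coverage factors are neglected.

```python
# Back-of-the-envelope strip-line kicker deflection estimate (illustrative).
# Momentum is expressed as pc in eV so the elementary charge cancels out.

V = 12.5e3   # pulse voltage on each electrode, +/-V [V] (assumed)
d = 0.04     # electrode-to-electrode gap [m] (assumed)
L = 1.7      # electrode length [m] (assumed)
pc = 150e6   # beam momentum times c [eV] (assumed ~150 MeV beam)

E_field = 2 * V / d          # differential voltage 2V across the gap [V/m]
# Matched line, beam counter-propagating with the pulse: B = E/c, so the
# magnetic contribution doubles the electric one.
effective_field = 2 * E_field
theta = effective_field * L / pc   # small-angle kick [rad]
print(f"kick angle ~ {theta * 1e3:.2f} mrad")
```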

Relevance: 100.00%

Publisher:

Abstract:

The growth of multimedia services delivered over packet-based networks has raised the quality expectations of end-users. This has led to intensive research on techniques for evaluating the quality of experience (QoE) perceived by viewers of audiovisual content, considering the different degradations it may suffer along the broadcasting chain. In this paper, a comprehensive study of the impact of transmission errors affecting video and audio in IPTV is presented. To this end, subjective assessment tests were carried out using a novel methodology designed to reproduce home viewing conditions as closely as possible. 3DTV content in side-by-side format was also used in the experiments to compare the impact of the degradations. The results provide a better understanding of the effects of transmission errors, and show that the QoE of this first approach to 3DTV is acceptable, although the visual discomfort it causes should be reduced.

Relevance: 100.00%

Publisher:

Abstract:

A method for fast colour and geometric correction of a tiled display system is presented in this paper. Such displays are a common choice for virtual reality applications and simulators, where a high-resolution image is required. They are the cheapest and most flexible alternative for large image generation, but they require precise geometric and colour correction. The purpose of the proposed method is to correct the projection system as fast as possible, so that any recalibration does not interfere with the normal operation of the simulator or virtual reality application. The technique uses a single conventional webcam for both geometric and photometric correction. Some prior assumptions are made, such as a planar projection surface and negligible intra-projector colour variation and black-offset levels. If these assumptions hold, geometric and photometric seamlessness can be achieved for this kind of display system. The method described in this paper scales to an arbitrary number of projectors and is completely automatic.
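For context, on a planar screen the geometric half of such a correction reduces to estimating one homography per projector from the camera view. The sketch below, assuming OpenCV and a projected checkerboard (our illustrative choices, not necessarily the paper's exact procedure), estimates the camera-to-projector mapping and inverts it for prewarping.

```python
# Hedged sketch of webcam-based geometric correction for one projector,
# assuming a planar screen and a projected checkerboard pattern.
import cv2
import numpy as np

def estimate_prewarp(cam_img, proj_w, proj_h, pattern=(9, 6)):
    """Estimate projector->camera homography from a projected checkerboard,
    then invert it to obtain the prewarp applied to the projector output."""
    ok, cam_pts = cv2.findChessboardCorners(cam_img, pattern)
    if not ok:
        raise RuntimeError("calibration pattern not found")
    # Ideal corner positions in projector space (the pattern as projected);
    # assumes the detected corner ordering matches this generated grid.
    xs = np.linspace(0.1 * proj_w, 0.9 * proj_w, pattern[0])
    ys = np.linspace(0.1 * proj_h, 0.9 * proj_h, pattern[1])
    proj_pts = np.array([[x, y] for y in ys for x in xs], np.float32)
    H, _ = cv2.findHomography(proj_pts, cam_pts.reshape(-1, 2), cv2.RANSAC)
    return np.linalg.inv(H)  # camera (screen) -> projector prewarp

# Usage: warped = cv2.warpPerspective(frame, prewarp, (proj_w, proj_h))
```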

Relevance: 100.00%

Publisher:

Abstract:

We propose a level-set-based variational approach that incorporates shape priors into edge-based and region-based models. The evolution of the active contour depends on both local and global information, and has been implemented using an efficient narrow-band technique. For each boundary pixel we calculate its dynamics according to its gray level, its neighborhood, and geometric properties established by the training shapes. We also propose a criterion for shape alignment based on an affine transformation, using an image normalization procedure. Finally, we illustrate the benefits of our approach on liver segmentation from CT images.
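To make the narrow-band idea concrete, the sketch below advances a level-set function only near its zero level under a caller-supplied speed field. It is a generic illustration of the technique under our own assumptions (the speed F is taken to already combine the image-driven and shape-prior terms), not the authors' implementation.

```python
# Generic narrow-band level-set update step (illustrative).
import numpy as np

def narrow_band_step(phi, F, dt=0.5, band=3.0):
    """Advance the level set phi by one step of phi_t = -F * |grad phi|,
    restricted to a narrow band of half-width `band` around the zero level."""
    band_mask = np.abs(phi) < band            # work only near the contour
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx**2 + gy**2) + 1e-8  # avoid division by zero
    phi_new = phi.copy()
    phi_new[band_mask] -= dt * F[band_mask] * grad_mag[band_mask]
    return phi_new
```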

Relevance: 100.00%

Publisher:

Abstract:

A high-resolution focused beam line has recently been installed on the AIFIRA ("Applications Interdisciplinaires des Faisceaux d'Ions en Région Aquitaine") facility at CENBG. This nanobeam line, based on a doublet-triplet configuration of Oxford Microbeam Ltd. OM-50™ quadrupoles, offers the opportunity to focus protons, deuterons and alpha particles in the MeV energy range to a sub-micrometer beam spot. The beam optics design was studied in detail and optimized using detailed ray-tracing simulations, and the full mechanical design of the beam line was reported at the ICNMTA conference in Debrecen in 2008. During the last two years, the lenses have been carefully aligned and the target chamber has been fully equipped with particle and X-ray detectors, microscopes and precise positioning stages. The beam line is now operational and has been used for its first applications to ion beam analysis. Interestingly, this set-up has turned out to be a very versatile tool for a wide range of applications. Indeed, even though it was not intended during the design phase, the ion optics configuration makes it possible to work either with a high-current microbeam (using the triplet only) or with a lower-current beam offering sub-micrometer resolution (using the doublet-triplet configuration). The performance of the CENBG nanobeam line is presented for both configurations. Quantitative data on the beam lateral resolution at different beam currents are provided. Finally, the first results obtained for different types of application are shown, including nuclear reaction analysis at the micrometer scale and the first results on biological samples.

Relevance: 100.00%

Publisher:

Abstract:

This thesis analyzes the factors affecting performance evaluation in positron emission tomography (PET) imaging, focusing on preclinical scanners. It explores the possibilities of standard assessment protocols in three respects: their use as tools to validate Monte Carlo simulation programs, their usefulness as a method for comparing scanners, and their validity for studying the effect of alternative radioisotopes on image quality. We first study evaluation methods oriented to validating PET simulations. To this end we present the GAMOS program as a simulation framework and show the results of its validation based on the NEMA NU 4-2008 standard for preclinical PET scanners. This was accomplished by comparing simulated results against experimental acquisitions on the ClearPET scanner, describing the methodology for the evaluation and selection of the NEMA parameters. This part also covers the contributions developed in GAMOS for PET applications, such as the inclusion of tools for image reconstruction. Furthermore, the evaluation of the ClearPET scanner is used to compare its performance against another preclinical scanner, the rPET-1 system. This is the first complete NEMA NU 4 characterization of both systems; at the same time, we analyze how the significant design differences between the two systems, especially the axial size of the field of view and the detector configuration, affect their performance. 68Ga is one of the unconventional radioisotopes in PET imaging whose use is increasing significantly; however, it has the disadvantage of a long positron range (the distance traveled by the emitted positron before annihilating with an electron). Besides the positron range, additional gamma-photon emission is another physical property of PET radioisotopes that can affect the reconstructed image quality, as happens with the isotope 48V. In this thesis we assess these effects through NEMA spatial resolution and image quality studies. Finally, we analyze the scope of the NEMA NU 4-2008 protocol when used for this purpose, adapting it and proposing possible modifications.
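NEMA NU 4-2008 spatial resolution is reported as the full width at half maximum (FWHM) of point-source profiles, obtained by linear interpolation between samples. The sketch below illustrates that measurement on a 1-D profile; the function name and the toy profile are ours.

```python
# FWHM of a 1-D point-source profile via linear interpolation (illustrative).
import numpy as np

def fwhm(profile, pixel_mm):
    """Width at half maximum of a peaked profile, in millimetres."""
    y = np.asarray(profile, float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the half-maximum crossing on each flank of the peak.
    left = i0 - (y[i0] - half) / (y[i0] - y[i0 - 1])
    right = i1 + (y[i1] - half) / (y[i1] - y[i1 + 1])
    return (right - left) * pixel_mm

print(fwhm([0, 1, 5, 9, 10, 8, 4, 1, 0], pixel_mm=0.4))  # -> 1.5 mm
```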

Relevance: 100.00%

Publisher:

Abstract:

It is easy to get frustrated at spoken conversational agents (SCAs), perhaps because they seem to be callous. By and large, the quality of human-computer interaction suffers from the inability of SCAs to recognise and adapt to the user's emotional state. With the mass appeal of artificially mediated communication, there is an increasing need for SCAs to be socially and emotionally intelligent, that is, to infer and adapt to their human interlocutors' emotions on the fly, in order to achieve an affective, empathetic and natural interaction. An enhanced quality of interaction would reduce users' frustration and consequently increase their satisfaction. These reasons have motivated the development of SCAs that include socio-emotional elements, turning them into affective and socially sensitive interfaces. One barrier to the creation of such interfaces has been the lack of methods for modelling emotions in a task-independent environment: most emotion models for spoken dialog systems are task-dependent and thus cannot be used "as-is" in different applications. This Thesis focuses on improving this situation; it concerns the computational modelling of emotion, personality and their interrelationship for task-independent autonomous SCAs, with the generation of emotion driven by needs, inspired by human motivational systems. The work in this Thesis is organised in three stages, each with its own contribution. The first stage involved defining, integrating and quantifying the psychologically based motivational models taken as a starting point and the emotional models derived from them. These were then implemented as software entities to form a computational model, which was incorporated into, and put to the test with, an existing SCA host: a HiFi-control agent. The second stage concerned the automatic prediction of affect, which has been the main challenge towards the greater aim of infusing social intelligence into the HiFi agent. In recent years, studies on affect detection from voice have moved on to using realistic, non-acted data, in which emotions are subtler; such emotions are harder to perceive, as demonstrated in tasks such as labelling and machine prediction. In this stage, we addressed part of this challenge by using user satisfaction ratings as the target and conversational/dialog features as the predictors for discriminating contentment from frustration, two emotions known to be prevalent in spoken human-computer interaction. The final stage concerned the evaluation of the emotional model through the HiFi agent. A series of user studies with 70 subjects was conducted in a real-time environment, each in a different phase and with its own conditions. All the studies compared the unmodified baseline agent with the modified agent. The findings have gone some way towards enhancing our understanding of the utility of emotion in spoken dialog systems in several ways: first, an SCA should not express its emotions blindly, even positive ones, but should adapt them to the user's state; second, low performance in an SCA may be compensated by the exploitation of emotion; third, expressing emotion through prosody can improve users' perceptions of an SCA more than expressing it through lexical content alone. Taken together, these findings not only support the success of the emotional model but also provide substantial evidence of the benefits of adding emotion to an SCA, especially in mitigating users' frustration and ultimately improving their satisfaction.
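The second-stage setup, discriminating contentment from frustration with dialog features as predictors and satisfaction ratings as the target, maps naturally onto a simple supervised classifier. The sketch below shows the shape of such a pipeline; the feature names, the toy data and the choice of logistic regression are our illustrative assumptions, not the thesis' actual feature set or model.

```python
# Hedged sketch: contentment vs. frustration from dialog features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Per-dialog features (assumed): n_turns, n_reprompts, mean_response_delay_s.
X = np.array([
    [5, 0, 0.8],
    [14, 4, 2.1],
    [6, 1, 1.0],
    [18, 6, 2.7],
    [7, 0, 0.9],
    [15, 5, 2.4],
])
# Binarized satisfaction rating: 1 = content, 0 = frustrated.
y = np.array([1, 0, 1, 0, 1, 0])

clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=3).mean())  # cross-validated accuracy
```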

Relevance: 100.00%

Publisher:

Abstract:

In this work, novel imaging designs with a single optical surface (either refractive or reflective) are presented. In some of these designs, both the object and image shapes are given, and the mapping from object to image is obtained as a result of the design. In other designs, not only the mapping but also the shape of the object is found in the design process. In the examples considered, the image is virtual, located at infinity, and seen from a known pupil, which can emulate a human eye. In the first, introductory part, 2D designs are carried out using three different design methods: an SMS design, a compound Cartesian oval surface, and a differential equation method for the limit case of a small pupil. It is proven that these three methods coincide in the point-size pupil limit. In the second part, the previous 2D designs are extended to 3D by rotation, and the astigmatism of the image is studied. As an advanced variation, the differential equation method is used to provide the freedom to control the tangential and sagittal rays simultaneously. As a result, designs without astigmatism (in the small pupil limit) on a curved object surface have been obtained. Finally, this anastigmatic differential equation method has been extended to 3D for the general case, in which freeform surfaces are designed.
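For context, the compound Cartesian oval method mentioned above rests on the classical constant optical-path-length condition for stigmatic imaging through a single refractive surface; the statement below uses our own notation.

```latex
% Constant optical-path-length condition defining a Cartesian oval surface S
% that stigmatically images object point O onto image point I; n_o and n_i
% are the refractive indices on the object and image sides (notation ours).
\[
  n_o\,\lVert \mathbf{P} - \mathbf{O} \rVert
  \;+\; n_i\,\lVert \mathbf{I} - \mathbf{P} \rVert
  \;=\; \mathrm{const}, \qquad \forall\, \mathbf{P} \in S .
\]
```

For a virtual image at infinity, as in the designs above, the second term is replaced by the optical path from the surface to a reference plane wavefront orthogonal to the image direction.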

Relevance: 100.00%

Publisher:

Abstract:

The deployment of nodes in Wireless Sensor Networks (WSNs) is one of the biggest challenges in this field, as it involves distributing a large number of embedded systems to fulfil a specific application. The connectivity of WSNs is difficult to estimate because of the irregularity of the physical environment, and it affects designers' decisions on where to deploy sensor nodes. In this paper, a new method is therefore proposed to improve the efficiency and accuracy of ZigBee propagation simulation in indoor environments. The method consists of two steps: automatic 3D indoor reconstruction and 3D ray-tracing-based radio simulation. The automatic 3D indoor reconstruction employs an unattended image classification algorithm and an image vectorization algorithm to build the environment database accurately, which also significantly reduces the time and effort spent on non-radio propagation issues. The 3D ray tracing is built on a kd-tree space-division algorithm and a modified polar sweep algorithm, which accelerate the search for rays over the entire space. A signal propagation model is proposed for the ray-tracing engine that considers both the materials of the obstacles and their positions along each ray path. Three different WSN deployments are realized in the indoor environment of an office, and the results are verified to be accurate. Experimental results also indicate that the proposed method is efficient in its pre-simulation strategy and 3D ray-searching scheme, and is suitable for different indoor environments.
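To illustrate the kind of per-ray estimate such a propagation model produces, the sketch below combines free-space path loss at 2.4 GHz with a per-material penalty for every obstacle a ray crosses. The material losses and link geometry are textbook-style illustrative assumptions, not the paper's calibrated model.

```python
# Illustrative per-ray path-loss estimate for an indoor 2.4 GHz ZigBee link.
import math

WALL_LOSS_DB = {"concrete": 12.0, "brick": 8.0, "drywall": 3.0}  # assumed

def path_loss_db(distance_m, walls, freq_hz=2.4e9):
    """Free-space path loss plus a fixed penalty per traversed obstacle."""
    c = 3e8
    fspl = 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)
    return fspl + sum(WALL_LOSS_DB[m] for m in walls)

# A 15 m link crossing one concrete wall and one drywall partition:
print(f"{path_loss_db(15.0, ['concrete', 'drywall']):.1f} dB")
```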

Relevance: 100.00%

Publisher:

Abstract:

In developing instrumentation for the measurement of fruit quality, there is a need for fast, non-destructive, sensor-based devices that can be installed on-line. For some fruits, like peaches, post-harvest ripeness, which is closely related to high quality for the consumer, is a priority. During ripening, external appearance (colour) and internal mechanical (firmness) and chemical (sugars and acids) quality are the main features that evolve rapidly from an unripe to a ripe (high-quality) stage. When the evolution of fruit quality is considered in this scheme, external colour and firmness evolve in parallel if monitored from the time of harvest to full consumer ripeness (Rood, 1957; Crisosto et al., 1995; Kader, 1996). The visible (VIS) reflectance spectrum is a fast and easy reference that can be used to estimate the quality of peaches, provided it can be shown to be reliably correlated with the peach ripening rate during postharvest (Genard et al., 1994; Moras, 1995; Delwiche and Baumgartner, 1983; Delwiche et al., 1987; Slaughter, 1995; Lleo et al., 1998). Taste, described by an expert acceptance score, improves with ripeness (firmness and colour evolution), both for fruit on the tree and post-harvest.

Relevance: 100.00%

Publisher:

Abstract:

One of the main concerns of evolvable and adaptive systems is the need for a training mechanism, which is normally implemented using a training reference and a test input. The fitness function to be optimized during the evolution (training) phase is obtained by comparing the output of the candidate systems against the reference. The adaptivity that this type of system may provide by re-evolving during operation is especially important for applications with runtime-variable conditions. However, fully automated self-adaptivity poses additional problems. For instance, in some cases it is not possible to have such a reference, because the changes in the environmental conditions are unknown, so it becomes difficult to autonomously identify which problem needs to be solved and, hence, which conditions are representative for an adequate re-evolution. In this paper, a solution to this dependency is presented and analyzed. The system consists of an image filter application mapped on an evolvable hardware platform, able to evolve using two consecutive frames from a camera as the test and reference images. The system is entirely mapped on an FPGA, and native dynamic and partial reconfiguration is used for evolution. It is also shown that using such images, both of them noisy, as the input and reference images in the evolution phase is equivalent or even superior to evolving the filter with offline images. The combination of both techniques results in a completely autonomous filtering system, agnostic to noise type and level, with no reference-image requirement, as described throughout the paper.
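The reference-free fitness idea is simple to state in software: filter frame t and score it against the raw frame t+1, both noisy. The sketch below illustrates this under our own assumptions (mean absolute error as the fitness, a fixed mean filter standing in for an evolved candidate); the actual system is FPGA-based, so this is only a software analogue.

```python
# Hedged software analogue of the reference-free fitness evaluation.
import numpy as np

def fitness(candidate_filter, frame_t, frame_t1):
    """Lower is better: compare the filtered frame with the next raw frame."""
    filtered = candidate_filter(frame_t)
    return np.mean(np.abs(filtered - frame_t1))

def mean3(img):
    """Simple 3x3 mean filter standing in for an evolved candidate."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

# Two consecutive noisy views of the same (synthetic) scene:
rng = np.random.default_rng(0)
scene = rng.integers(0, 255, (64, 64)).astype(float)
f_t = scene + rng.normal(0, 20, scene.shape)
f_t1 = scene + rng.normal(0, 20, scene.shape)
print(fitness(mean3, f_t, f_t1))
```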