921 results for Multi-channel access
Abstract:
An experimental method is described which enables the inelastically scattered X-ray component to be removed from diffractometer data prior to radial density function analysis. At each scattering angle an energy spectrum is generated from a Si(Li) detector combined with a multi-channel analyser, from which the coherently scattered component is separated. The data obtained from organic polymers have an improved signal/noise ratio at high values of scattering angle, and a commensurate enhancement of resolution of the RDF at low r is demonstrated for the case of PMMA (ICI 'Perspex'). The method obviates the need for the complicated correction for multiple scattering.
Abstract:
We have developed a new Bayesian approach to retrieve oceanic rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated with measurements of rain gauges located on Japanese islands. To demonstrate improvement, retrievals are also compared with those from the TRMM/Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We have found that, qualitatively, all methods retrieve similar horizontal distributions in terms of the locations of typhoon eyes and rain bands. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS of our retrievals are 0.95 and ~2 mm hr-1, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rate for typhoon cases. Accurate rain rate estimates from this method can be assimilated into models to improve forecasts and mitigate potential damage in Taiwan during typhoon season.
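The validation statistics quoted above, the correlation coefficient and the RMS error against gauge measurements, can be sketched as follows; the rain-rate values are hypothetical and only illustrate the computation:

```python
import math

def validation_stats(retrieved, gauge):
    """Correlation coefficient and RMS error between two rain-rate series (mm/hr)."""
    n = len(retrieved)
    mr = sum(retrieved) / n
    mg = sum(gauge) / n
    cov = sum((r - mr) * (g - mg) for r, g in zip(retrieved, gauge))
    sr = math.sqrt(sum((r - mr) ** 2 for r in retrieved))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gauge))
    corr = cov / (sr * sg)
    rms = math.sqrt(sum((r - g) ** 2 for r, g in zip(retrieved, gauge)) / n)
    return corr, rms

# Hypothetical retrieval/gauge pairs for one overpass (mm/hr)
corr, rms = validation_stats([4.8, 10.1, 19.5, 30.2], [5.0, 10.0, 20.0, 30.0])
```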
Abstract:
Purpose – This paper aims to provide a brief résumé of previous research which has analysed the impact of e-commerce on retail real estate in the UK, and to examine the important marketing role of the internet for shopping centre managers, and retail landlords. Design/methodology/approach – Based on the results from a wider study carried out in 2003, the paper uses case studies from two different shopping centres in the UK, and documents the innovative uses of both web-based marketing and online retailing by organisations that historically have not directly been involved in the retailing process. Findings – The paper highlights the importance of considering online sales within a multi-channel approach to retailing. The two types of emerging shopping centre model which are identified are characterised by their ultimate relationship with the physical shopping centre on whose web site they reside. These can be summarised as: the “centre-led” approach, and the “brand-led” or “marketing-led” approach. Research limitations/implications – The research is based on a limited number of in-depth case studies and secondary data. Further research is needed to monitor the continuing impact of e-commerce on retail property and the marketing strategies of shopping centre managers and owners. Practical implications – Internet-based sales provide an important adjunct to conventional retail sales and an important source of potential risk for landlords and tenants in the real estate investment market. Regardless of whether retailers use the internet as a sales channel, as a product-sourcing tool, or merely to provide information to the consumer, the internet has become a keystone within the greater retail marketing mix. The findings have ramifications for understanding the way in which landlords are structuring their retail property to defray potential risks.
Originality/value – The paper examines shopping centre online marketing models for the first time in detail, and will be of value to retail occupiers, owners and other stakeholders of shopping centres.
Abstract:
Optimal estimation (OE) improves sea surface temperature (SST) estimated from satellite infrared imagery in the “split-window”, in comparison to SST retrieved using the usual multi-channel (MCSST) or non-linear (NLSST) estimators. This is demonstrated using three months of observations of the Advanced Very High Resolution Radiometer (AVHRR) on the first Meteorological Operational satellite (Metop-A), matched in time and space to drifter SSTs collected on the global telecommunications system. There are 32,175 matches. The prior for the OE is forecast atmospheric fields from the Météo-France global numerical weather prediction system (ARPEGE), the forward model is RTTOV8.7, and a reduced state vector comprising SST and total column water vapour (TCWV) is used. Operational NLSST coefficients give mean and standard deviation (SD) of the difference between satellite and drifter SSTs of 0.00 and 0.72 K. The “best possible” NLSST and MCSST coefficients, empirically regressed on the data themselves, give zero mean difference and SDs of 0.66 K and 0.73 K respectively. Significant contributions to the global SD arise from regional systematic errors (biases) of several tenths of kelvin in the NLSST. With no bias corrections to either prior fields or forward model, the SSTs retrieved by OE minus drifter SSTs have mean and SD of − 0.16 and 0.49 K respectively. The reduction in SD below the “best possible” regression results shows that OE deals with structural limitations of the NLSST and MCSST algorithms. Using simple empirical bias corrections to improve the OE, retrieved minus drifter SSTs are obtained with mean and SD of − 0.06 and 0.44 K respectively. Regional biases are greatly reduced, such that the absolute bias is less than 0.1 K in 61% of 10°-latitude by 30°-longitude cells. OE also allows a statistic of the agreement between modelled and measured brightness temperatures to be calculated. 
We show that this measure is more efficient than the current system of confidence levels at identifying reliable retrievals, and that the best 75% of satellite SSTs by this measure have negligible bias and retrieval error of order 0.25 K.
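For context, the regression estimators that OE is compared against have simple algebraic forms. A minimal sketch of the MCSST split-window form is given below; the coefficient values are invented placeholders, not the operational AVHRR coefficients:

```python
def mcsst(t11, t12, sec_theta, a=(-283.21, 1.0346, 2.58, 0.73)):
    """Split-window multi-channel SST estimator (illustrative form only).

    t11, t12: brightness temperatures (K) in the 11 and 12 micron channels;
    sec_theta: secant of the satellite zenith angle. The default coefficients
    are made-up placeholders; real coefficients come from regression against
    drifter matchups.
    """
    a0, a1, a2, a3 = a
    # The (t11 - t12) terms correct for water-vapour absorption,
    # which differentially attenuates the two channels.
    return a0 + a1 * t11 + a2 * (t11 - t12) + a3 * (t11 - t12) * (sec_theta - 1.0)
```

A larger channel difference implies more water vapour, so the correction term grows with (t11 - t12).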
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes possible an efficient, yet fine-grained, search in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in systems that use this communication platform.
A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints and number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are firstly identified and the current solutions are discussed. The problems being tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated for the recently released ITC’02 SoC Test Benchmarks, and further compared to other test planning methods of the literature. This comparison confirms the efficiency of the proposed methods.
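As a toy illustration of power-constrained test scheduling (this is a deliberately simplified session-based greedy heuristic, not the thesis's algorithm), core tests can be packed under a power budget, with each session lasting as long as its longest test:

```python
def schedule_tests(tests, power_budget):
    """Greedy power-constrained test scheduling, longest test first.

    tests: list of (core_name, test_time, power) tuples (hypothetical units).
    Returns a list of sessions; tests in a session run in parallel, and the
    session length is the longest test it contains.
    """
    pending = sorted(tests, key=lambda t: -t[1])  # longest first
    sessions = []
    for test in pending:
        for s in sessions:
            # Add to the first session whose total power stays within budget
            if sum(p for _, _, p in s) + test[2] <= power_budget:
                s.append(test)
                break
        else:
            sessions.append([test])
    return sessions

# Hypothetical cores: (name, test time, power consumption)
sessions = schedule_tests([("c1", 100, 3), ("c2", 80, 2), ("c3", 60, 2)], 5)
total_time = sum(max(t for _, t, _ in s) for s in sessions)
```

Here c1 and c2 share a session (power 3 + 2 = 5), c3 must wait, so the total test time is 100 + 60 = 160.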
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This article describes the simulation and analysis of a collisionless optical interconnection network, whose objective is to achieve a high performance level based on a single control protocol. The optical coupler has one shared control channel and N communication channels. Each network node has two communication modules: one for packet transmission/reception and another for control-channel access. We show by simulation that the system achieves high performance and ensures high scalability.
Abstract:
Plant phenology has gained importance in the context of global change research, stimulating the development of new technologies for phenological observation. Digital cameras have been successfully used as multi-channel imaging sensors, providing measures of leaf color change information (RGB channels), or leafing phenological changes in plants. We monitored leaf-changing patterns of a cerrado-savanna vegetation by taking daily digital images. We extracted the RGB channels from the digital images and correlated them with phenological changes. Our first goals were: (1) to test if the color change information is able to characterize the phenological pattern of a group of species; and (2) to test if individuals from the same functional group may be automatically identified using digital images. In this paper, we present a machine learning approach to detect phenological patterns in the digital images. Our preliminary results indicate that: (1) extreme hours (morning and afternoon) are the best for identifying plant species; and (2) different plant species present a different behavior with respect to the color change information. Based on those results, we suggest that individuals from the same functional group might be identified using digital images, and introduce a new tool to help phenology experts in species identification and location on the ground. ©2012 IEEE.
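A common way to extract the color change information mentioned above is the green chromatic coordinate, the green fraction of the RGB sum; the daily RGB values below are hypothetical:

```python
def green_chromatic_coordinate(r, g, b):
    """Green fraction G/(R+G+B), a standard index for tracking leafing phenology."""
    total = r + g + b
    return g / total if total else 0.0

# Hypothetical daily mean RGB values for one tree crown region:
# greener pixels over time suggest leaf flushing.
series = [(90, 110, 70), (85, 130, 65), (80, 150, 60)]
gcc = [green_chromatic_coordinate(*rgb) for rgb in series]
```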
Abstract:
Plant phenology is one of the most reliable indicators of species responses to global climate change, motivating the development of new technologies for phenological monitoring. Digital cameras or near remote systems have been efficiently applied as multi-channel imaging sensors, where leaf color information is extracted from the RGB (Red, Green, and Blue) color channels, and the changes in green levels are used to infer leafing patterns of plant species. In this scenario, texture information is a great ally for image analysis that has been little used in phenology studies. We monitored leaf-changing patterns of Cerrado savanna vegetation by taking daily digital images. We extract RGB channels from the digital images and correlate them with phenological changes. Additionally, we benefit from the inclusion of textural metrics for quantifying spatial heterogeneity. Our first goals are: (1) to test if color change information is able to characterize the phenological pattern of a group of species; (2) to test if the temporal variation in image texture is useful to distinguish plant species; and (3) to test if individuals from the same species may be automatically identified using digital images. In this paper, we present a machine learning approach based on multiscale classifiers to detect phenological patterns in the digital images. Our results indicate that: (1) extreme hours (morning and afternoon) are the best for identifying plant species; (2) different plant species present a different behavior with respect to the color change information; and (3) texture variation along temporal images is promising information for capturing phenological patterns. Based on those results, we suggest that individuals from the same species and functional group might be identified using digital images, and introduce a new tool to help phenology experts in the identification of new individuals from the same species in the image and their location on the ground. © 2013 Elsevier B.V. 
All rights reserved.
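A minimal example of the kind of textural metric that can quantify the spatial heterogeneity mentioned above is the contrast of a grey-level co-occurrence matrix; this sketch uses horizontal pixel pairs only:

```python
def glcm_contrast(img):
    """Contrast of the horizontal grey-level co-occurrence matrix of a 2-D image.

    img: list of rows of integer grey levels. Counts horizontally adjacent
    pixel pairs and returns the mean squared grey-level difference, a simple
    measure of local spatial heterogeneity (texture).
    """
    counts = {}
    n = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
            n += 1
    return sum(((a - b) ** 2) * c for (a, b), c in counts.items()) / n

# A uniform patch has zero contrast; a checkered patch has high contrast.
flat = [[1, 1], [1, 1]]
checker = [[0, 3], [3, 0]]
```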
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We analyse the influence of colour information in optical flow methods. Typically, most of these techniques compute their solutions using grayscale intensities, for simplicity and faster processing, ignoring colour features. However, current processing systems have reduced the computational cost and, on the other hand, it is reasonable to assume that a colour image offers more detail from the scene, which should help in finding better flow fields. The aim of this work is to determine whether a multi-channel approach brings enough improvement to justify its use. To address this evaluation, we use a multi-channel implementation of the well-known TV-L1 method. Furthermore, we review the state of the art in colour optical flow methods. In the experiments, we study various solutions using grayscale and RGB images from recent evaluation datasets to verify the benefits of colour in motion estimation.
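A multi-channel data term of the kind used in colour variants of TV-L1 simply sums the linearized brightness-constancy residual over channels; the per-channel gradients below are hypothetical values at a single pixel:

```python
def multichannel_data_term(grads, u, v):
    """Sum over colour channels of the squared linearized brightness-constancy
    residual Ix*u + Iy*v + It at one pixel, for a candidate flow (u, v).

    grads: list of (Ix, Iy, It) tuples, one per channel.
    """
    return sum((ix * u + iy * v + it) ** 2 for ix, iy, it in grads)

# Hypothetical per-channel spatial/temporal gradients (R, G, B) at one pixel,
# consistent with a horizontal motion of about 0.5 pixels.
rgb_grads = [(1.0, 0.0, -0.5), (0.8, 0.1, -0.4), (1.2, -0.1, -0.6)]
```

Minimizing this term (plus a TV regularizer) over (u, v) per pixel is the core of the variational method; with three channels the minimum is better constrained than with one.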
Abstract:
Human reactions to vibration have been extensively investigated in the past. Vibration, as well as whole-body vibration (WBV), has been commonly considered an occupational hazard for its detrimental effects on human condition and comfort. Although long-term exposure to vibrations may produce undesirable side effects, a great part of the literature is dedicated to the positive effects of WBV when used as a method for muscular stimulation and as an exercise intervention. Whole body vibration training (WBVT) aims to mechanically activate muscles by eliciting neuromuscular activity (muscle reflexes) via vibrations delivered to the whole body. The most frequently cited mechanism to explain the neuromuscular outcomes of vibration is the elicited neuromuscular activation. Local tendon vibrations induce activity of the muscle spindle Ia fibers, mediated by monosynaptic and polysynaptic pathways: a reflex muscle contraction known as the Tonic Vibration Reflex (TVR) arises in response to such vibratory stimulus. In WBVT, mechanical vibrations in the range 10 to 80 Hz, with peak-to-peak displacements from 1 to 10 mm, are usually transmitted to the patient's body by oscillating platforms. Vibrations are then transferred from the platform to a specific muscle group through the subject's body. To customize WBV treatments, surface electromyography (SEMG) signals are often used to reveal the best stimulation frequency for each subject. Use of concise SEMG parameters, such as root mean square values of the recordings, is also common practice; frequently a preliminary session takes place in order to discover the most appropriate stimulation frequency.
Soft tissues act as wobbling masses vibrating in a damped manner in response to mechanical excitation. The muscle-tuning hypothesis suggests that the neuromuscular system works to damp the soft-tissue oscillation that occurs in response to vibrations: muscles alter their activity to damp the vibrations, preventing any resonance phenomenon. Muscle response to vibration is, however, a complex phenomenon, as it depends on different parameters such as muscle tension, muscle or segment stiffness, and the amplitude and frequency of the mechanical vibration. Additionally, while in TVR studies the applied vibratory stimulus and the muscle conditions are completely characterised (a known vibration source is applied directly to a stretched/shortened muscle or tendon), in WBV studies only the stimulus applied to a distal part of the body is known. Moreover, the mechanical response changes with posture: the transmissibility of the vibratory stimulus along the body segment strongly depends on the position held by the subject. The aim of this work was to investigate the effects that vibrations, in particular whole body vibrations, may have on muscular activity. A new approach to discovering the most appropriate stimulus frequency, by the use of accelerometers, was also explored. Different subjects, not affected by any known neurological or musculoskeletal disorders, were voluntarily involved in the study and gave their informed, written consent to participate. The device used to deliver vibration to the subjects was a vibrating platform. Vibrations impressed by the platform were exclusively vertical; platform displacement was sinusoidal with an intensity (peak-to-peak displacement) set to 1.2 mm and a frequency ranging from 10 to 80 Hz. All the subjects familiarized themselves with the device and the proper positioning. Two different postures were explored in this study: position 1 - hack squat; position 2 - subject standing on toes with heels raised.
SEMG signals from the Rectus Femoris (RF), Vastus Lateralis (VL) and Vastus Medialis (VM) were recorded. SEMG signals were amplified using a multi-channel, isolated biomedical signal amplifier. The gain was set to 1000 V/V and a band-pass filter (-3 dB frequencies 10 - 500 Hz) was applied; no notch filters were used to suppress line interference. Tiny and lightweight (less than 10 g) three-axial MEMS accelerometers (Freescale Semiconductors) were used to measure accelerations on the patient's skin, at the EMG electrode level. Acceleration signals provided information related to the oscillation of the muscle belly of the individuals' RF, Biceps Femoris (BF) and Gastrocnemius Lateralis (GL); they were pre-processed in order to exclude the influence of gravity. As demonstrated by our results, vibrations generate a peculiar, non-negligible motion artifact on skin electrodes. Artifact amplitude is generally unpredictable; it appeared in all the quadriceps muscles analysed, but in different amounts. Artifact harmonics extend throughout the EMG spectrum, making classic high-pass filters ineffective; however, their contribution was easy to filter out from the raw EMG signal with a series of sharp notch filters (1.5 Hz wide) centred at the vibration frequency and its higher harmonics. However, use of these simple filters prevents the detection of EMG power variations in the filtered bands. Moreover, our experience suggests that reducing the motion artifact by using particular electrodes and by accurately preparing the subject's skin is not easily viable; even though some small improvements were obtained, it was not possible to substantially decrease the artifact. In any case, removing those artifacts leads to some loss of true EMG signal. Nevertheless, our preliminary results suggest that the use of notch filters at the vibration frequency and its harmonics is suitable for motion artifact filtering.
In RF SEMG recordings during vibratory stimulation, only a small EMG power increment should be contained in the filtered bands due to synchronous electromyographic activity of the muscle. Moreover, it is better to remove the artifact, which in our experience was found to be more than 40% of the total signal power. In summary, many variables have to be taken into account: in addition to the amplitude, frequency and duration of the vibration treatment, other fundamental variables were found to be subject anatomy, individual physiological condition and the subject's positioning on the platform. Studies on WBV treatments that include surface EMG analysis to assess muscular activity during vibratory stimulation should take into account the presence of motion artifacts. Appropriate filtering of artifacts, to reveal the actual effect on muscle contraction elicited by the vibration stimulus, is mandatory. As a result of our preliminary study, simple multi-band notch filtering may help to reduce the randomness of the results. The muscle-tuning hypothesis seemed to be confirmed. Our results suggested that the effects of WBV are linked to the actual muscle motion (displacement): the greater the muscle-belly displacement, the higher the muscle activity. The maximum muscle activity was found in correspondence with the local mechanical resonance, suggesting a more effective stimulation at the specific system resonance frequency. Holding the hypothesis that muscle activation is proportional to muscle displacement, treatment optimization could be obtained by simply monitoring local acceleration (resonance). However, our study revealed only short-term effects of the vibratory stimulus; longer studies should be designed in order to assess the long-term persistence of these results.
Since local stimulus depends on the kinematic chain involved, WBV muscle stimulation has to take into account the transmissibility of the stimulus along the body segment in order to ensure that vibratory stimulation effectively reaches the target muscle. Combination of local resonance and muscle response should also be further investigated to prevent hazards to individuals undergoing WBV treatments.
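The artifact-suppression scheme described above, sharp notch filters at the vibration frequency and its harmonics, can be sketched with standard RBJ biquad notches; the 1.5 Hz bandwidth follows the value mentioned in the text, while the sampling rate and harmonic count are illustrative:

```python
import math

def notch(signal, f0, fs, bw=1.5):
    """Apply an RBJ biquad notch centred at f0 Hz (bandwidth bw Hz) to a sample list."""
    w0 = 2.0 * math.pi * f0 / fs
    q = f0 / bw
    alpha = math.sin(w0) / (2.0 * q)
    cw = math.cos(w0)
    # RBJ cookbook notch coefficients, normalized by a0
    a0 = 1.0 + alpha
    b0, b1, b2 = 1.0 / a0, -2.0 * cw / a0, 1.0 / a0
    a1, a2 = -2.0 * cw / a0, (1.0 - alpha) / a0
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in signal:  # Direct Form I difference equation
        out = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(out)
        x1, x2, y1, y2 = x, x1, out, y1
    return y

def remove_vibration_artifact(emg, f_vib, fs, n_harmonics=5):
    """Cascade of notches at the vibration frequency and its harmonics."""
    for k in range(1, n_harmonics + 1):
        if k * f_vib < fs / 2:  # stay below Nyquist
            emg = notch(emg, k * f_vib, fs)
    return emg
```

A 30 Hz platform-vibration component is suppressed in steady state, while signal content far from the notched bands (e.g. a 5 Hz component) passes essentially unchanged.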
Abstract:
This research work focuses on a novel multiphase multilevel AC motor drive system well suited to low-voltage, high-current power applications: specifically, a six-phase asymmetrical induction motor with an open-end stator winding configuration, fed from four standard two-level three-phase voltage source inverters (VSIs). The proposed synchronous reference frame control algorithm shares the total DC source power among the four VSIs in each switching cycle, with three degrees of freedom. The first degree of freedom concerns the current sharing between the two three-phase stator windings. A modified multilevel space vector pulse width modulation shares the voltage between the individual VSIs of the two three-phase stator windings through the second and third degrees of freedom, producing proper multilevel output waveforms. A complete model of the whole AC motor drive, based on a three-phase space vector decomposition approach, was developed in PLECS, a numerical simulation tool working in the MATLAB environment. The proposed synchronous reference control algorithm was implemented in MATLAB together with the modified multilevel space vector pulse width modulator. The effectiveness of the entire AC motor drive system was tested. Detailed simulation results are given for symmetrical and asymmetrical power-sharing conditions. Furthermore, the three degrees of freedom are exploited to investigate fault-tolerant capabilities in post-fault conditions. A complete set of simulation results is provided for the cases in which one, two, or three VSIs are faulty. A hardware prototype of the quad-inverter was implemented with two passive three-phase open-winding loads using two TMS320F2812 DSP controllers. A McBSP (multi-channel buffered serial port) communication algorithm was developed to control the four VSIs for PWM communication and synchronization.
An open-loop control scheme based on an inverse three-phase decomposition approach was developed to control the entire quad-inverter configuration, and was tested under balanced and unbalanced operating conditions with simplified PWM techniques. Both simulation and experimental results are in good agreement with the theoretical developments.
Abstract:
Wireless Sensor Networks (WSNs) have received widespread attention since they became easily accessible thanks to their low cost. One of the key elements of WSNs is distributed sensing. When the precise location of a signal of interest is unknown across the monitored region, distributing many sensors randomly or uniformly may yield a better representation of the monitored random process than a traditional sensor deployment. In a typical WSN application the data sensed by nodes is usually sent to one (or more) central device, denoted as a sink, which collects the information and can either act as a gateway towards other networks (e.g. the Internet), where the data can be stored, or process it in order to command actuators to perform special tasks. In such a scenario, a dense sensor deployment may create bottlenecks when many nodes compete for channel access. Even though there are mitigation methods for channel access, concurrent (parallel) transmissions may still occur. In this study, always within the scope of monitoring applications, the development of two industrial projects with dense sensor deployments (the eDIANA Project, funded by the European Commission, and the Centrale Adriatica Project, funded by Coop Italy) and the measurement results from several different test-beds motivated a mathematical analysis of concurrent transmissions. To the best of our knowledge, there is no mathematical analysis in the literature of concurrent transmission in the 2.4 GHz PHY of IEEE 802.15.4. In this thesis, experiences from the eDIANA and Centrale Adriatica Projects are presented, together with a mathematical analysis of concurrent transmissions, from O-QPSK chip demodulation to the packet reception rate, for several different types of theoretical demodulators. The analysis is in very good agreement with the measurements reported so far in the literature.
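A sketch of the kind of link-level computation involved: a commonly used approximation for the bit-error rate of the IEEE 802.15.4 2.4 GHz O-QPSK DSSS PHY, turned into a packet reception rate under an independent-bit-error assumption. The frame length and SNR values are illustrative:

```python
import math

def ber_oqpsk(snr_linear):
    """Approximate bit-error rate of the IEEE 802.15.4 2.4 GHz O-QPSK PHY
    (16-ary orthogonal signalling, non-coherent detection approximation)."""
    return (8.0 / 15.0) * (1.0 / 16.0) * sum(
        (-1) ** k * math.comb(16, k) * math.exp(20.0 * snr_linear * (1.0 / k - 1.0))
        for k in range(2, 17)
    )

def prr(snr_db, frame_bytes=133):
    """Packet reception rate for a frame, assuming independent bit errors.
    133 bytes is the maximum 802.15.4 frame including the synchronization header."""
    ber = ber_oqpsk(10.0 ** (snr_db / 10.0))
    return (1.0 - ber) ** (8 * frame_bytes)
```

The resulting PRR-vs-SNR curve has the characteristic sharp transition region in which concurrent transmissions move a link between near-perfect and near-zero reception.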
Abstract:
Coordinated patterns of electrical activity are important for the early development of sensory systems. The spatiotemporal dynamics of these early activity patterns and the role of the peripheral sensory input in their generation are essentially unknown. There are two projects in this thesis. In project 1, we performed extracellular multielectrode recordings in the somatosensory cortex of postnatal day 0 to 7 rats in vivo and observed three distinct patterns of synchronized oscillatory activity. (1) Spontaneous and periphery-driven spindle bursts of 1–2 s in duration and ~10 Hz in frequency occurred approximately every 10 s. (2) Spontaneous and sensory-driven gamma oscillations of 150–300 ms duration and 30–40 Hz in frequency occurred every 10–30 s. (3) Long oscillations appeared only every ~20 min and showed the largest amplitude (250–750 µV) and longest duration (>40 s). These three distinct patterns of early oscillatory activity synchronized the neonatal cortical network in different ways. Whereas spindle bursts and gamma oscillations did not propagate and synchronized a local neuronal network of 200–400 µm in diameter, long oscillations propagated at 25–30 µm/s and synchronized ensembles 600–800 µm across. All three activity patterns were triggered by sensory activation. Single electrical stimulation of the whisker pad or tactile whisker activation elicited neocortical spindle bursts and gamma activity. Long oscillations could be evoked only by repetitive sensory stimulation. The neonatal oscillatory patterns in vivo depended on NMDA-receptor-mediated synaptic transmission and gap junctional coupling. Whereas spindle bursts and gamma oscillations may represent an early functional columnar-like pattern, long oscillations may serve as a propagating activation signal consolidating these immature neuronal networks.
In project 2, using voltage-sensitive dye imaging and simultaneous multi-channel extracellular recordings in the barrel cortex and somatosensory thalamus of newborn rats in vivo, we found that spontaneous and whisker-stimulation-induced activity patterns were restricted to functional cortical columns already at the day of birth. Spontaneous and stimulus-evoked cortical activity consisted of gamma oscillations followed by spindle bursts. Spontaneous events were mainly generated in the thalamus or by spontaneous whisker movements. Our findings indicate that during early developmental stages cortical networks self-organize into ontogenetic columns via spontaneous gamma oscillations triggered by the thalamus or the sensory periphery.
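A crude sketch of how the two oscillation types could be told apart from their frequency content alone (~10 Hz spindle bursts vs 30–40 Hz gamma), using a direct DFT band-power estimate; the sampling rate and band edges are illustrative, not the thesis's detection pipeline:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Summed squared DFT magnitude of `signal` within [f_lo, f_hi] Hz
    (direct O(N^2) DFT, adequate for short segments)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n ** 2
    return power

def classify_event(lfp, fs):
    """Label a segment as 'spindle' (~8-12 Hz dominant) or 'gamma' (30-40 Hz)."""
    spindle = band_power(lfp, fs, 8.0, 12.0)
    gamma = band_power(lfp, fs, 30.0, 40.0)
    return "spindle" if spindle > gamma else "gamma"
```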