870 results for Peak-to-average Ratio (PAR)
Abstract:
Recent decreases in costs, and improvements in performance, of silicon array detectors open a range of potential applications of relevance to plant physiologists, associated with spectral analysis in the visible and short-wave near infra-red (far-red) spectrum. The performance characteristics of three commercially available ‘miniature’ spectrometers based on silicon array detectors operating in the 650–1050-nm spectral region (MMS1 from Zeiss, S2000 from Ocean Optics, and FICS from Oriel, operated with a Larry detector) were compared with respect to the application of non-invasive prediction of the sugar content of fruit using near infra-red spectroscopy (NIRS). The FICS–Larry gave the best wavelength resolution; however, the narrow slit and small pixel size of the charge-coupled device detector resulted in very low sensitivity, and this instrumentation was not considered further. Wavelength resolution was poor with the MMS1 relative to the S2000 (e.g. full width at half maximum of the 912-nm Hg peak of 13 and 2 nm for the MMS1 and S2000, respectively), but the large pixel height of the array used in the MMS1 gave it sensitivity comparable to the S2000. The ratio of signal to signal standard error of spectra was an order of magnitude greater with the MMS1 than with the S2000, at both near-saturation and low light levels. Calibrations were developed using reflectance spectra of filter paper soaked in a range of concentrations (0–20% w/v) of sucrose, using a modified partial least squares procedure. Calibrations developed with the MMS1 were superior to those developed using the S2000 (e.g. coefficient of correlation of 0.90 and 0.62, and standard error of cross-validation of 1.9 and 5.4%, respectively), indicating that a high signal-to-noise ratio matters more for calibration accuracy than wavelength resolution. The design of a bench-top assembly using the MMS1 for the non-invasive assessment of the mesocarp sugar content of (intact) melon fruit is reported in terms of light source, angle between detector and light source, and optimisation of the math treatment (derivative condition and smoothing function).
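A minimal sketch of the calibration workflow described above (derivative and smoothing pre-treatment followed by partial least squares regression with cross-validation), assuming scikit-learn's standard PLSRegression in place of the authors' modified partial least squares procedure; window sizes, component counts and function names are illustrative assumptions, not the published settings.

```python
# Hedged sketch: derivative "math treatment" + PLS calibration with
# cross-validation; settings are assumptions, not the authors' values.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def build_calibration(spectra, sucrose_pct, window=11, polyorder=2, deriv=2,
                      n_components=8):
    """spectra: (n_samples, n_wavelengths) reflectance; sucrose_pct: (n_samples,)."""
    # Derivative + smoothing applied along the wavelength axis
    treated = savgol_filter(spectra, window, polyorder, deriv=deriv, axis=1)
    pls = PLSRegression(n_components=n_components)
    # Cross-validation to estimate the standard error of cross-validation (SECV)
    predicted = cross_val_predict(pls, treated, sucrose_pct, cv=10).ravel()
    secv = np.sqrt(np.mean((predicted - sucrose_pct) ** 2))
    r = np.corrcoef(predicted, sucrose_pct)[0, 1]
    pls.fit(treated, sucrose_pct)
    return pls, r, secv
```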
Abstract:
Background: Deviated nasal septum (DNS) is one of the major causes of nasal obstruction. The polyvinylidene fluoride (PVDF) nasal sensor is a new technique developed to assess the nasal obstruction caused by DNS. This study evaluates PVDF nasal sensor measurements in comparison with peak nasal inspiratory flow (PNIF) measurements and a visual analog scale (VAS) of nasal obstruction. Methods: Owing to their piezoelectric property, two PVDF nasal sensors provide output voltage signals corresponding to the right and left nostrils when they are subjected to nasal airflow. The peak-to-peak amplitude of the voltage signal corresponding to nasal airflow was analyzed to assess the nasal obstruction. PVDF nasal sensor and PNIF measurements were performed on 30 healthy subjects and 30 DNS patients. Receiver operating characteristic (ROC) analysis was used to compare the two methods' ability to detect DNS. Results: Measurements with the PVDF nasal sensor correlated strongly with PNIF findings (r = 0.67; p < 0.01) in DNS patients. A significant difference (p < 0.001) between the DNS and control groups was observed for both PVDF nasal sensor and PNIF measurements. Cutoffs between normal and pathological of 0.51 Vp-p for the PVDF nasal sensor and 120 L/min for PNIF were calculated. No significant difference was found between the PVDF nasal sensor and PNIF in terms of sensitivity (89.7% versus 82.6%) or specificity (80.5% versus 78.8%). Conclusion: The results show that PVDF measurements closely agree with PNIF findings. The developed PVDF nasal sensor is an objective method that is simple, inexpensive, fast, and portable for determining DNS in clinical practice.
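The two quantities the comparison rests on, the peak-to-peak sensor amplitude (Vp-p) and an ROC-derived cutoff, can be computed as in the sketch below; the Youden-index criterion and the function names are illustrative assumptions rather than the authors' exact analysis.

```python
# Hedged sketch: Vp-p of a PVDF sensor trace and an ROC cutoff separating
# controls from DNS patients (Youden's J criterion assumed for illustration).
import numpy as np
from sklearn.metrics import roc_curve

def peak_to_peak(voltage_trace):
    """Vp-p of one respiratory cycle recorded from a PVDF nasal sensor."""
    return np.max(voltage_trace) - np.min(voltage_trace)

def roc_cutoff(values, is_healthy):
    """Cutoff maximizing sensitivity + specificity (Youden's J).
    values: Vp-p (or PNIF) per subject; is_healthy: 1 = control, 0 = DNS."""
    fpr, tpr, thresholds = roc_curve(is_healthy, values)
    return thresholds[np.argmax(tpr - fpr)]
```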
Abstract:
The authors have further developed the method used by Pianet and Le Hir (Doc. Sci. Cent. ORSTOM Pointe-Noire, 17, 1971) for the study of albacore (Thunnus albacares) in the Pointe-Noire region. The method is based on the fact that, for two fishing gears, the ratio of the number of fish caught per unit of effort is equal to the ratio of their catchability coefficients.
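In the standard catch-equation notation (assumed here; the authors' own symbols are not reproduced), the relation reads:

```latex
% For gear i with catch C_i and effort f_i, catch per unit effort is
% proportional to the abundance N through the catchability coefficient q_i.
\mathrm{CPUE}_i = \frac{C_i}{f_i} = q_i N
\qquad\Longrightarrow\qquad
\frac{\mathrm{CPUE}_1}{\mathrm{CPUE}_2} = \frac{q_1 N}{q_2 N} = \frac{q_1}{q_2}.
```

The abundance N cancels, so the ratio of catch rates between the two gears estimates the ratio of their catchability coefficients directly.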
Abstract:
In the present study, the quality of post-thaw sperm of red seabream Pagrus major frozen with 6-24% DMSO was investigated. The motility, average path velocity and fertilizing capacity of fresh sperm and the corresponding post-thaw sperm were examined to evaluate post-thaw sperm motion characteristics and their association with fertilizing capacity. Sperm motility before and after cryopreservation was analysed using computer-assisted sperm analysis (CASA). For post-thaw sperm frozen with 12-21% DMSO, the percentages of motile sperm were not significantly (P > 0.05) changed 10 s after activation. Moreover, the main motility pattern and swimming velocity of the motile post-thaw sperm were not significantly (P > 0.05) changed, and progressive linear motion was still the dominant pattern. However, the total motility of post-thaw sperm (72.3 +/- 6.3%) 30 s after activation was significantly (P < 0.05) lower than that of the corresponding fresh sperm (82.7 +/- 7.2%). Additionally, the fertilizing capacity of post-thaw sperm was investigated at a standardized sperm-to-egg ratio of 500:1. There was a linear relationship between the percentage of motile post-thaw sperm and fertilizing capability. These data demonstrate that 12-21% DMSO can provide good protection to the sperm during the freezing-thawing process.
Abstract:
This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's law, Fitts' law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm-deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control. These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
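The core VITE dynamics can be illustrated with a short simulation; the equations below follow the usual summary of the model (a difference vector integrated toward the target command, with outflow gated by a volitional GO signal), but the parameter values and GO-signal shape are assumptions for illustration, not the published parameterization.

```python
# Hedged sketch of VITE-style trajectory generation: V is the difference
# vector, P the present-position (outflow) command, G a rising GO signal.
import numpy as np

def vite_trajectory(target=1.0, duration=1.0, dt=0.001, alpha=30.0, g_max=20.0):
    steps = int(duration / dt)
    V, P = 0.0, 0.0
    positions, velocities = [], []
    for k in range(steps):
        t = k * dt
        G = g_max * t / (t + 0.1)          # slowly rising GO signal (assumed form)
        dV = alpha * (-V + target - P)     # difference-vector dynamics
        dP = G * max(V, 0.0)               # outflow command: GO-gated, rectified V
        V += dV * dt
        P += dP * dt
        positions.append(P)
        velocities.append(dP)
    return np.array(positions), np.array(velocities)

# For moderate durations the returned velocity profile is roughly bell-shaped,
# in line with the kinematic regularities the model is used to explain.
```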
Abstract:
Attempts were made to measure the fraction of elemental carbon (EC) in ultrafine aerosol by modifying an Ambient Carbonaceous Particulate Monitor (ACPM, R&P 5400). The main modification consisted of placing a quartz filter in one of the sampling lines of this dual-channel instrument. With the filter, all aerosol, including the EC it contains, is collected, while in the other line of the instrument the standard impactor samples only particles larger than 0.14 μm. The fraction of EC in particles smaller than 0.14 μm is derived from the difference between the concentrations measured via the two sampling lines. Measurements with the modified instrument were made at a suburban site in Amsterdam, The Netherlands. An apparent adsorption artefact, which could not be eliminated by the use of denuders, precluded meaningful evaluation of the data for total carbon. Blanks in the measurements of EC were negligible, and the EC data were therefore evaluated further. We found that the concentration of EC obtained via the channel with the impactor was systematically lower than that in the filter line. The average ratio of the concentrations was close to 0.6, which indicates that approximately 40% of the EC was in particles smaller than 0.14 μm. Alternative explanations for the difference in concentration between the two sampling lines, such as a difference in the extent of oxidation, could be excluded: such an effect should be a function of loading, which was not observed. Another possible reason for the difference could be that less material is collected by the impactor due to rebound, but such bounce of aerosol is very unlikely in The Netherlands because of co-deposition of abundant deliquesced, and thus viscous, ammonium compounds. The conclusion is that a further modification to assess the true fraction of ultrafine EC, by installing an impactor with a cut-off diameter at 0.1 μm, would be worth pursuing.
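The ultrafine fraction follows directly from the two channels; in the notation assumed here (not the authors' symbols):

```latex
% f_uf: fraction of EC in particles smaller than 0.14 um, from the filter
% line (all sizes) and the impactor line (> 0.14 um only).
f_{\mathrm{uf}}
  = \frac{EC_{\mathrm{filter}} - EC_{\mathrm{impactor}}}{EC_{\mathrm{filter}}}
  = 1 - \frac{EC_{\mathrm{impactor}}}{EC_{\mathrm{filter}}}
  \approx 1 - 0.6 = 0.4 .
```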
Abstract:
The ability of tissue engineered constructs to replace diseased or damaged organs is limited without the incorporation of a functional vascular system. To design microvasculature that recapitulates the vascular niche functions for each tissue in the body, we investigated the following hypotheses: (1) cocultures of human umbilical cord blood-derived endothelial progenitor cells (hCB-EPCs) with mural cells can produce the microenvironmental cues necessary to support physiological microvessel formation in vitro; (2) poly(ethylene glycol) (PEG) hydrogel systems can support 3D microvessel formation by hCB-EPCs in coculture with mural cells; (3) mesenchymal cells, derived from either umbilical cord blood (MPCs) or bone marrow (MSCs), can serve as mural cells upon coculture with hCB-EPCs. Coculture ratios between 0.2 (16,000 cells/cm2) and 0.6 (48,000 cells/cm2) of hCB-EPCs, plated upon 3.3 µg/ml fibronectin-coated tissue culture plastic with human aortic smooth muscle cells (SMCs; 80,000 cells/cm2), resulted in robust microvessel structures observable for several weeks in vitro. Endothelial basal media (EBM-2, Lonza) with 9% v/v fetal bovine serum (FBS) could support viability of both hCB-EPCs and SMCs. The coculture spatial arrangement of hCB-EPCs and SMCs significantly affected network formation, with mixed systems showing greater connectivity and increased solution levels of angiogenic cytokines than lamellar systems. We extended this model into a 3D system by encapsulating a 1:1 ratio of hCB-EPCs and SMCs (30,000 cells/µl) within hydrogels of PEG-conjugated RGDS adhesive peptide (3.5 mM) and PEG-conjugated protease-sensitive peptide (6 mM). Robust hCB-EPC microvessels formed within the gel, with invasion up to 150 µm depths and parameters of total tubule length (12 mm/mm2), branch points (127/mm2), and average tubule thickness (27 µm). 3D hCB-EPC microvessels showed quiescence of hCB-EPCs (<1% proliferating cells), lumen formation, expression of the EC proteins connexin 32, VE-cadherin, and eNOS, basement membrane formation by collagen IV and laminin, and perivascular investment of PDGFR-β+/α-SMA+ cells. MPCs present in <15% of isolations displayed >98% expression of the mural markers PDGFR-β, α-SMA, and NG2, and supported hCB-EPCs by day 14 of coculture with total tubule lengths near 12 mm/mm2. hCB-EPCs cocultured with MSCs underwent cell loss by day 10, with a 4-fold reduction in CD31/PECAM+ cells in comparison to controls of hCB-EPCs in SMC coculture. Changing the coculture media to endothelial growth media (EBM-2 + 2% v/v FBS + EGM-2 supplement containing VEGF, FGF-2, EGF, hydrocortisone, IGF-1, ascorbic acid, and heparin) promoted stable hCB-EPC network formation in MSC cocultures over 2 weeks in vitro, with a total segment length per image area of 9 mm/mm2. Taken together, these findings demonstrate a tissue engineered system that can be utilized to evaluate vascular progenitor cells for angiogenic therapies.
Abstract:
We report results on the performance of a free-electron laser operating at a wavelength of 13.7 nm, where unprecedented peak and average powers for a coherent extreme-ultraviolet radiation source have been measured. In the saturation regime, the peak energy approached 170 µJ for individual pulses, and the average energy per pulse reached 70 µJ. The pulse duration was in the region of 10 fs, and peak powers of 10 GW were achieved. At a pulse repetition frequency of 700 pulses per second, the average extreme-ultraviolet power reached 20 mW. The output beam also contained a significant contribution from odd harmonics, of approximately 0.6% and 0.03% for the 3rd (4.6 nm) and the 5th (2.75 nm) harmonics, respectively. At 2.75 nm the 5th harmonic of the radiation reaches deep into the water window, a wavelength range that is crucially important for the investigation of biological samples.
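As an order-of-magnitude consistency check (assuming a roughly rectangular ~10 fs pulse and a representative pulse energy of ~100 µJ), the quoted figures tie together through the usual pulse relations:

```latex
% Peak power ~ pulse energy over duration; average power = mean pulse
% energy times repetition rate (f_rep = 700 s^{-1} here).
P_{\mathrm{peak}} \sim \frac{E_{\mathrm{pulse}}}{\tau}
  \approx \frac{100\ \mu\mathrm{J}}{10\ \mathrm{fs}} = 10\ \mathrm{GW},
\qquad
P_{\mathrm{avg}} = \bar{E}_{\mathrm{pulse}} \, f_{\mathrm{rep}} .
```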
Abstract:
This paper investigated the influence of three micro electrodischarge milling process parameters: feed rate, capacitance, and voltage. The response variables were average surface roughness (Ra), maximum peak-to-valley roughness height (Ry), tool wear ratio (TWR), and material removal rate (MRR). Statistical models of these output responses were developed using a three-level full factorial design of experiment. The developed models were used for multiple-response optimization by the desirability function approach to obtain minimum Ra, Ry, and TWR and maximum MRR. The maximum desirability was found to be 88%. The optimized values of Ra, Ry, TWR, and MRR were 0.04 μm, 0.34 μm, 0.044, and 0.08 mg min⁻¹, respectively, for a feed rate of 4.79 μm s⁻¹, a capacitance of 0.1 nF, and a voltage of 80 V. The optimized machining parameters were used in verification experiments, where the responses were found to be very close to the predicted values.
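The composite desirability behind the multiple-response optimization can be sketched as below; the target directions (minimize Ra, Ry and TWR, maximize MRR) follow the abstract, while the linear desirability transforms and the response bounds are illustrative assumptions chosen only to make the example self-contained.

```python
# Hedged sketch of the desirability function approach: each response is mapped
# to [0, 1] and the geometric mean of the individual desirabilities is the
# composite desirability to be maximized over the factor settings.
import numpy as np

def d_minimize(y, low, high):
    """1 at the best (lowest) value, 0 at the worst (highest)."""
    return float(np.clip((high - y) / (high - low), 0.0, 1.0))

def d_maximize(y, low, high):
    """1 at the best (highest) value, 0 at the worst (lowest)."""
    return float(np.clip((y - low) / (high - low), 0.0, 1.0))

def composite_desirability(ra, ry, twr, mrr):
    d = [
        d_minimize(ra, 0.02, 0.20),    # Ra, um (assumed bounds)
        d_minimize(ry, 0.2, 2.0),      # Ry, um (assumed bounds)
        d_minimize(twr, 0.02, 0.50),   # tool wear ratio (assumed bounds)
        d_maximize(mrr, 0.01, 0.10),   # MRR, mg/min (assumed bounds)
    ]
    return float(np.prod(d) ** (1.0 / len(d)))  # geometric mean

# Example call with the optimized responses reported in the abstract
print(composite_desirability(0.04, 0.34, 0.044, 0.08))
```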
Abstract:
Power dissipation and robustness to process variation have conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor upsizing for parametric-delay variation tolerance can be detrimental for power dissipation. However, for a class of signal-processing systems, an effective tradeoff can be achieved between Vdd scaling, variation tolerance, and output quality. In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that not all computations used in interpolating the pixel values contribute equally to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the less important computations are affected by delay failures. We also propose a different sliding-window size than the conventional one to improve interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of the nominal value, our design provides reasonable image PSNR with 40% power savings.
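The quality metric used above is the standard PSNR; a minimal sketch for 8-bit images (the definition is standard, the code is not from the paper):

```python
# Hedged sketch of the PSNR metric used to quantify output quality under
# voltage over-scaling (8-bit images assumed).
import numpy as np

def psnr(reference, degraded, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```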
Abstract:
Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor up-sizing for process tolerance can be detrimental for power dissipation. However, for certain signal processing systems, such as those used in color image processing, we noted that effective trade-offs can be achieved between Vdd scaling, process tolerance and "output quality". In this paper we demonstrate how these trade-offs can be effectively utilized in the development of novel low-power, variation-tolerant architectures for color interpolation. The proposed architecture supports a graceful degradation in the PSNR (peak signal-to-noise ratio) under aggressive voltage scaling as well as extreme process variations in sub-70-nm technologies. This is achieved by exploiting the fact that some computations are more important and contribute more to the PSNR improvement than others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd scaling and process variations. Simulation results show that even at a scaled voltage of 60% of the nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.
Abstract:
In this paper we present a design methodology for algorithm/architecture co-design of a voltage-scalable, process-variation-aware motion estimator based on significance-driven computation. The fundamental premise of our approach lies in the fact that not all computations are equally significant in shaping the output response of video systems. We use a statistical technique to intelligently identify these significant/not-so-significant computations at the algorithmic level and subsequently change the underlying architecture such that the significant computations are computed in an error-free manner under voltage over-scaling. Furthermore, our design includes an adaptive quality compensation (AQC) block which "tunes" the algorithm and architecture depending on the magnitude of voltage over-scaling and the severity of process variations. Simulation results show average power savings of approximately 33% for the proposed architecture when compared to a conventional implementation in 90 nm CMOS technology. The maximum output quality loss in terms of peak signal-to-noise ratio (PSNR) was approximately 1 dB, without incurring any throughput penalty.
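The baseline computation such a motion estimator performs is block matching with a sum-of-absolute-differences (SAD) cost; the sketch below shows only this underlying kernel, not the paper's significance classification or the AQC block, and the function name and parameters are illustrative.

```python
# Hedged sketch: exhaustive block-matching motion estimation with a SAD cost.
# Which of these additions count as "significant" under voltage over-scaling
# is the paper's contribution and is not reproduced here.
import numpy as np

def best_motion_vector(ref, cur, block_xy, block=16, search=8):
    """Exhaustive search for the displacement minimizing SAD.
    ref, cur: 2D uint8 frames; block_xy: top-left corner of the current block."""
    y0, x0 = block_xy
    current = cur[y0:y0 + block, x0:x0 + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            candidate = ref[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(current - candidate).sum())
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```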
Abstract:
Several methods aimed at the optimal use of specialized radio-frequency coils in magnetic resonance imaging are developed and validated. First, it is shown that an alternative method for combining the signals from the individual receive channels of a coil array leads to a significant reduction of the noise-induced bias in diffusion images, compared with the commonly used sum-of-squares method. This reduction of the noise bias improves the accuracy of the estimation of several diffusion and diffusion-tensor parameters. It is further shown that the method can be used with a standard, unaccelerated acquisition as well as in the presence of parallel imaging. Second, the benefits provided by an intravascular imaging coil are studied. In a phantom study, it is shown that intravascular magnetic resonance imaging has the potential to significantly improve geometric accuracy in vascular morphological measurements, compared with results obtained with conventional surface coils. It is illustrated that a geometric accuracy comparable to that obtained with an intravascular ultrasound probe can be achieved. In addition, several protocols based on a balanced steady-state free-precession acquisition are compared in order to highlight relationships between the acquisition parameters and the geometric accuracy obtained. In particular, dependences between vessel size, signal-to-noise ratio at the vessel wall, spatial resolution, and the geometric accuracy achieved are demonstrated. Along the same lines, it is shown that the use of an intravascular coil markedly improves the visualization of the lumen of a vascular stent. When used together with a balanced steady-state free-precession sequence with a specially selected flip angle, intravascular magnetic resonance imaging completely eliminates the limitations normally caused by the radio-frequency shielding effect of the stent.
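For reference, the conventional sum-of-squares combination that the alternative method is compared against can be written in a few lines (a minimal sketch; the bias-reducing combination studied in the thesis is not reproduced here):

```python
# Hedged sketch of root-sum-of-squares (SoS) channel combination, the standard
# method whose noise bias at low SNR the proposed combination is reported to
# reduce.
import numpy as np

def sum_of_squares(channel_images):
    """channel_images: complex array of shape (n_channels, ny, nx).
    Returns the magnitude image from root-sum-of-squares combination."""
    return np.sqrt(np.sum(np.abs(channel_images) ** 2, axis=0))
```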
Abstract:
This thesis concerns the improvement of high-contrast imaging techniques enabling the direct detection of companions at small separations from their host star. More specifically, it is part of the development of the Gemini Planet Imager (GPI), a second-generation instrument for the Gemini telescopes. This camera will use an integral field spectrometer (IFS) to characterize detected companions and to reduce the speckle noise limiting their detection, and it will correct atmospheric turbulence to a level never before achieved by using two deformable mirrors in its adaptive optics (AO) system: the woofer and the tweeter. The woofer will correct low-spatial-frequency, large-amplitude aberrations, while the tweeter will compensate for higher-frequency aberrations of smaller amplitude. First, the performance achievable with the IFSs currently in operation on 8-10 m telescopes is investigated by observing the companion of the star GQ Lup with the NIFS IFS and the ALTAIR AO system installed on the Gemini North telescope. The angular differential imaging (ADI) technique is used to attenuate the speckle noise by a factor of 2 to 6. The spectra obtained in the JHK bands were used to constrain the mass of the companion, by comparison with the predictions of atmospheric and evolutionary models, to 8-60 MJup, where MJup is the mass of Jupiter. The companion is thus determined to be more likely a brown dwarf than a planet. Since the IFSs currently in operation are general-purpose cameras used across many areas of astrophysics, their design was not optimized for high-contrast imaging. The second stage of this thesis therefore consisted of designing and laboratory-testing a prototype IFS optimized for this task. Four speckle-suppression algorithms were tested on the resulting data: the simple difference, the double difference, spectral deconvolution, and a new algorithm developed in this thesis, named the twin-spectra algorithm. We find that the twin-spectra algorithm performs best for the two types of companions tested: methanated and non-methanated companions. The signal-to-noise ratio of the detection was improved by a factor of up to 14 for a methanated companion and by a factor of 2 for a non-methanated companion. Finally, we address problems related to splitting the command between the two deformable mirrors in the GPI AO system. We first present a method using analytical calculations and Monte Carlo simulations to determine the key woofer parameters, such as its diameter, its number of actuators and their stroke, which subsequently had repercussions on the overall design of the instrument. Then, since the system studied uses a Fourier reconstructor, we propose splitting the command between the two mirrors in Fourier space and limiting the modes transferred to the woofer to those it can accurately reproduce. In the context of GPI, this makes it possible to replace the two 1600×69-element matrices required for a "classical" command split with a single 45×69 matrix, and thus to use an off-the-shelf processor rather than a more complex computing architecture.
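The angular differential imaging step mentioned above can be sketched as follows (classical ADI with a median reference PSF; the parallactic angles and array layout are assumptions for illustration, and this is not the thesis' actual pipeline):

```python
# Hedged sketch of classical ADI speckle subtraction: a reference PSF built as
# the median of the unrotated sequence is subtracted from each frame, the
# residuals are derotated to align the sky, then median-combined.
import numpy as np
from scipy.ndimage import rotate

def classical_adi(cube, parallactic_angles):
    """cube: (n_frames, ny, nx) images taken in pupil-tracking mode;
    parallactic_angles: field rotation (degrees) for each frame."""
    reference_psf = np.median(cube, axis=0)           # quasi-static speckles
    residuals = cube - reference_psf                   # subtract stellar PSF/speckles
    derotated = np.array([
        rotate(frame, -angle, reshape=False, order=1)
        for frame, angle in zip(residuals, parallactic_angles)
    ])
    return np.median(derotated, axis=0)                # companion adds up, residuals average out
```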
Abstract:
The goal of this thesis is to refine, and to better understand the use of, the spectroscopic method, which compares visible spectra of hydrogen-atmosphere (DA) white dwarfs with synthetic spectra to determine their atmospheric parameters (effective temperature and surface gravity). Our approach rests mainly on the development of improved model spectra, which themselves come from model atmospheres of DA white dwarfs. We present a new grid of synthetic DA spectra with the first consistent implementation of the Hummer & Mihalas non-ideal gas theory and the unified Stark-broadening theory of Vidal, Cooper & Smith. This allows an adequate treatment of the overlap of the Balmer-series lines without the need for a free parameter. We show that these improved spectra predict surface gravities that are more stable as a function of effective temperature. We then study the long-standing problem of the high gravities found for cool DA stars. The hypothesis of Bergeron et al., according to which the atmospheres are contaminated by helium, is confronted with observations. Using high-resolution spectra obtained at the Keck telescope in Hawaii, we find upper limits on the amount of helium in the atmospheres that are nearly 10 times lower than those required by the Bergeron et al. scenario. The grid of spectra developed in this work is then applied to a new spectroscopic analysis of the SDSS DA sample. Our careful approach allows us to define a cleaner sample and to identify a significant number of binary white dwarfs. We determine that a cut at a signal-to-noise ratio of S/N > 15 optimizes the size and quality of the sample for computing the mean mass, for which we find a value of 0.613 solar mass. Finally, eight new 3D white dwarf models using a radiation-hydrodynamics treatment of convection are presented. We have also computed models with the same physics but with a standard 1D treatment of convection based on mixing-length theory. A differential analysis between these two sets of models shows that the 3D models predict considerably lower gravities. We conclude that the problem of high gravities in cool DA white dwarfs is most probably caused by a weakness in mixing-length theory.
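In its simplest grid-search form, the spectroscopic method amounts to a chi-square comparison of the observed spectrum with a grid of synthetic spectra to select the best-fitting effective temperature and surface gravity. The sketch below is a minimal illustration with assumed grid axes and names; real analyses typically fit normalized Balmer-line profiles with interpolation in the grid and formal minimization.

```python
# Hedged sketch of a brute-force chi-square fit of an observed spectrum
# against a (Teff, log g) grid of synthetic spectra.
import numpy as np

def fit_atmospheric_parameters(observed_flux, flux_error,
                               grid_fluxes, grid_teff, grid_logg):
    """grid_fluxes: (n_teff, n_logg, n_wavelength) synthetic spectra sampled on
    the same wavelength grid as the observation."""
    best = (None, None, np.inf)
    for i, teff in enumerate(grid_teff):
        for j, logg in enumerate(grid_logg):
            model = grid_fluxes[i, j]
            # Free normalization (solid-angle) factor absorbed analytically
            scale = (np.sum(observed_flux * model / flux_error**2) /
                     np.sum(model**2 / flux_error**2))
            chi2 = np.sum(((observed_flux - scale * model) / flux_error) ** 2)
            if chi2 < best[2]:
                best = (teff, logg, chi2)
    return best  # (Teff, log g, chi-square)
```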