993 results for Visual Pigments
Abstract:
Changing the ratio of light-harvesting pigments is regarded as an efficient way to improve the photosynthesis rate in microalgae, but the underlying mechanism is still unclear. In the present study, a mutant of Anabaena siamensis (called SP) was selected from retrieved satellite cultures. Several parameters related to photosynthesis, such as growth, photosynthesis rate, photosynthetic pigment content, low-temperature (77 K) fluorescence spectra, and electron transport rate, were compared with those of the wild type. It was found that the changed ratio of light-harvesting pigments led to more efficient light energy transfer and usage in the mutant than in the wild type. This may be the reason why the mutant had higher photosynthesis and growth rates.
Abstract:
The effects of solar ultraviolet radiation (UVR) on Spirulina platensis were studied by measuring its photochemical efficiency, photosynthetic pigments, and biomass production under full-spectrum solar radiation or solar radiation depleted of UVR, to understand how and to what extent UVR influences its photosynthetic physiology and production. UVR was found to cause an extra inhibition of photochemical efficiency of 26%-30%. The greatest inhibition of photochemical efficiency in S. platensis was observed at noon, and efficiency then recovered to some extent in the late afternoon regardless of treatment. The contents of chlorophyll a, phycocyanin, and carotenoids increased during the initial stage of exposure but decreased with prolonged exposure. UVR decreased the biomass yield by about 6%. These results indicate that filtering UVR out of solar radiation would raise the productivity of S. platensis, an important factor that should be considered in production.
Abstract:
Cypermethrin is a synthetic pyrethroid that is particularly toxic to crustaceans and is therefore applied as a chemotherapeutant in farms for the treatment of pests. To determine the 96-h median effective concentration (96-h EC50) of cypermethrin for the inhibition of Scenedesmus obliquus growth, cultures were exposed to 50, 100, 150, 200, and 250 mg/L. Algal growth, pigment fractions, and the activity of superoxide dismutase (SOD) in the algal cells were measured in the exponential phase after exposure to cypermethrin. The results show that higher concentrations of cypermethrin inhibit growth and other metabolic activities and that the 96-h EC50 of cypermethrin for S. obliquus is 112 +/- 9 mg/L; the potential application of SOD activity in S. obliquus as a sensitive biomarker for cypermethrin exposure is also discussed. (C) 2004 Elsevier Inc. All rights reserved.
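As a hedged illustration of the dose-response analysis behind a 96-h EC50, the sketch below interpolates the concentration producing 50% growth inhibition on a log-concentration scale. The tested concentrations match the abstract, but the inhibition percentages are invented for the example and are not the study's data.

```python
# Hypothetical sketch: estimating a 96-h EC50 by log-linear interpolation
# between the two test concentrations that bracket 50% growth inhibition.
import math

def ec50_interpolate(concs, inhibitions):
    """Return the concentration giving 50% inhibition, interpolating
    linearly on log10(concentration) between the bracketing doses."""
    pairs = list(zip(concs, inhibitions))
    for (c_lo, i_lo), (c_hi, i_hi) in zip(pairs, pairs[1:]):
        if i_lo <= 50.0 <= i_hi:
            frac = (50.0 - i_lo) / (i_hi - i_lo)
            log_ec50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_ec50
    raise ValueError("50% inhibition not bracketed by the tested concentrations")

doses = [50, 100, 150, 200, 250]             # mg/L, as tested in the study
inhibition = [20.0, 45.0, 62.0, 78.0, 90.0]  # % growth inhibition (assumed)
print(ec50_interpolate(doses, inhibition))
```

With these assumed inhibition values the interpolated EC50 lands near the study's reported 112 mg/L; a full analysis would instead fit a log-logistic dose-response curve to replicated data.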
Abstract:
Experimental research in biology has uncovered a number of different ways in which flying insects use cues derived from optical flow for navigational purposes, such as safe landing, obstacle avoidance, and dead reckoning. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of bees. Specifically, we focus on the mechanisms of course-stabilization behavior and a visually mediated odometer, using a biological model of the motion detector for long-range goal-directed navigation in a 3D environment. Performance tests of the proposed navigation method are conducted on a blimp-type flying robot platform in uncontrolled indoor environments. The results show that the proposed mechanism can be used for goal-directed navigation. Further analysis is also conducted in order to enhance the navigation performance of autonomous aerial vehicles. © 2003 Elsevier B.V. All rights reserved.
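Biological motion-detector models of this kind are commonly based on Reichardt-style elementary motion detectors (an assumption here, since the abstract does not name the model). A minimal sketch of that correlator, which responds with opposite signs to leftward and rightward motion:

```python
# Minimal sketch of a Reichardt elementary motion detector (EMD):
# each photoreceptor signal is correlated with the delayed signal of its
# neighbour, and the two mirror-symmetric correlations are subtracted.
def reichardt_emd(left, right, delay=1):
    """Positive output indicates motion from the left sensor toward
    the right sensor; negative output the reverse."""
    out = []
    for t in range(delay, len(left)):
        out.append(left[t - delay] * right[t] - right[t - delay] * left[t])
    return out

# A bright edge moving rightward reaches the left sensor one step
# before the right sensor.
left = [0, 1, 0, 0, 0]
right = [0, 0, 1, 0, 0]
print(sum(reichardt_emd(left, right)))  # positive -> rightward motion
```

Summing such EMD outputs over time is one simple way a visually mediated odometer can accumulate traveled distance from image motion.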
Abstract:
Recently, sonar signals and other sounds produced by cetaceans have been used for the acoustic detection of individuals and groups in the wild. However, the detection probability, ascertained by concomitant visual survey, has not been demonstrated extensively. Finless porpoises (Neophocaena phocaenoides) produce narrow-band, high-frequency sonar signals that are distinctive from background noise. Underwater sound monitoring with hydrophones (B&K 8103) placed along the sides of a research vessel, concurrent with visual observations, was conducted in the Yangtze River from Wuhan to Poyang Lake, China, in 1998. The peak-to-peak detection threshold was set at 133 dB re 1 μPa; with this threshold level, porpoises could be detected reliably within 300 m of the hydrophone. Over a total cruise of 774 km, 588 finless porpoises were sighted by visual observation and 44,864 ultrasonic pulses were recorded by the acoustical observation system. The acoustic monitoring system detected the presence of the finless porpoises 82% of the time, and false alarms occurred with a frequency of 0.9%. High-frequency acoustical observation is suggested as an effective method for field surveys of small cetaceans that produce high-frequency sonar signals. (C) 2001 Acoustical Society of America.
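The 133 dB re 1 μPa criterion is a peak-to-peak sound pressure level threshold. A small sketch of how recorded pressures would be converted to that scale and thresholded (the pressure values below are invented for illustration):

```python
# Illustrative sketch: thresholding peak-to-peak click levels against the
# 133 dB re 1 uPa criterion used in the survey.
import math

def pp_level_db(pp_pascal):
    """Peak-to-peak sound pressure level in dB re 1 uPa."""
    return 20 * math.log10(pp_pascal / 1e-6)

THRESHOLD_DB = 133.0
clicks_pa = [5.0, 20.0, 0.5]  # hypothetical peak-to-peak pressures (Pa)
detected = [pp_level_db(p) >= THRESHOLD_DB for p in clicks_pa]
print(detected)  # [True, True, False]
```

Counting threshold crossings per unit time against concurrent sightings is what yields the 82% detection rate and 0.9% false-alarm frequency reported above.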
Abstract:
Maps of surface chlorophyllous pigment (Chl a + Pheo a) are currently produced from ocean color sensors. Transforming such maps into maps of primary production can be reliably done only by using light-production models in conjunction with additional information about the column-integrated pigment content and its vertical distribution. As a preliminary effort in this direction, $\sim 4{,}000$ vertical profiles of pigment (Chl a + Pheo a), determined only in oceanic Case 1 waters, have been statistically analyzed. They were scaled according to dimensionless depths (actual depth divided by the depth of the euphotic layer, $Z_e$) and expressed as dimensionless concentrations (actual concentration divided by the mean concentration within the euphotic layer). The depth $Z_e$, generally unknown, was computed with a previously developed bio-optical model. Highly significant relationships were found allowing $\langle C \rangle_{tot}$, the pigment content of the euphotic layer, to be inferred from the surface concentration, $\bar{C}_{pd}$, observed within the layer of one penetration depth. According to their $\bar{C}_{pd}$ values (ranging from $0.01$ to $>10\ \mathrm{mg\,m^{-3}}$), we categorized the profiles into seven trophic situations and computed a mean vertical profile for each. Between a quasi-uniform profile in eutrophic waters and a profile with a strong deep maximum in oligotrophic waters, the shape evolves rather regularly. The well-mixed cold waters, essentially in the Antarctic zone, have been separately examined; on average, their profiles are featureless, without deep maxima, whatever their trophic state. Averaged values of $\rho$, the ratio of Chl a to (Chl a + Pheo a), have also been obtained for each trophic category.
The energy stored by photosynthesizing algae, once normalized with respect to the integrated chlorophyll biomass $\langle C \rangle_{tot}$, is proportional to the photosynthetically available energy at the surface via a parameter $\psi^*$, the cross-section for photosynthesis per unit of areal chlorophyll. By taking advantage of the relative stability of $\psi^*$, we can compute primary production from ocean color data acquired from space. For such a computation, the inputs are the irradiance field at the ocean surface, the "surface" pigment from which $\langle C \rangle_{tot}$ can be derived, the mean $\rho$ value pertinent to the trophic situation as depicted by the $\bar{C}_{pd}$ or $\langle C \rangle_{tot}$ values, and the cross-section $\psi^*$. Instead of a constant $\psi^*$ value, the mean profiles can be used; they allow the climatological field of the $\psi^*$ parameter to be adjusted through the parallel use of a spectral light-production model.
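The production scheme described above chains two steps: infer the column pigment content $\langle C \rangle_{tot}$ from the surface value $\bar{C}_{pd}$, then scale the surface irradiance by the cross-section $\psi^*$. The sketch below uses an assumed power-law relation and an assumed $\psi^*$ value purely for illustration; the paper's fitted coefficients are not reproduced here.

```python
# Hedged sketch of the column-production scheme: surface pigment ->
# column-integrated pigment -> areal production. Coefficients a, b and
# psi_star are illustrative assumptions, not the paper's fitted numbers.
def column_pigment(c_pd, a=40.0, b=0.5):
    """<C>_tot (mg m^-2) from surface pigment C_pd (mg m^-3),
    via an assumed power law a * C_pd**b."""
    return a * c_pd ** b

def primary_production(c_pd, par, psi_star=0.07):
    """Areal production proportional to psi* x <C>_tot x surface PAR
    (arbitrary units)."""
    return psi_star * column_pigment(c_pd) * par

# Oligotrophic vs. mesotrophic surface pigment, same surface light:
print(primary_production(0.1, 30.0), primary_production(1.0, 30.0))
```

Because $b < 1$ in the assumed power law, column pigment (and hence production) grows sublinearly with the surface concentration, qualitatively matching the deep-maximum compensation in oligotrophic profiles.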
Abstract:
The distinction between object appearance and background provides the useful cues available for visual tracking, in which discriminant analysis is widely applied. However, due to the diversity of background observations, there are usually not adequate negative samples from the background, which often leads the discriminant method to tracking failure. A natural solution is therefore to construct an object-background pair constrained by the spatial structure, which not only reduces the number of negative samples but also makes full use of the background information surrounding the object. However, this idea is threatened by variation in both the object appearance and the spatially constrained background observation, especially when the background shifts as the object moves. Thus, an incremental pairwise discriminant subspace is constructed in this paper to delineate this variation. To maintain the subspace's ability to describe the data correctly, we enforce two novel constraints for the optimal adaptation: (1) a pairwise data discriminant constraint and (2) subspace smoothness. The experimental results demonstrate that the proposed approach can alleviate adaptation drift and achieve better visual tracking results for a large variety of nonstationary scenes. (C) 2010 Elsevier B.V. All rights reserved.
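Incremental subspace learners of this kind are built on running statistics that are updated one frame at a time rather than recomputed from scratch. This is not the paper's algorithm, only a minimal sketch of the underlying incremental-mean update on which such adaptation schemes rest:

```python
# Minimal sketch: incremental (Welford-style) mean update, the basic
# building block beneath incremental subspace/discriminant adaptation.
def incremental_mean(old_mean, n, x):
    """Update a d-dimensional mean that currently summarizes n samples
    with one new sample x, without revisiting old data."""
    return [m + (xi - m) / (n + 1) for m, xi in zip(old_mean, x)]

mean = [0.0, 0.0]
data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
for n, x in enumerate(data):
    mean = incremental_mean(mean, n, x)
print(mean)  # [3.0, 4.0]
```

The same one-sample-at-a-time pattern extends to covariance and basis updates, which is what keeps per-frame adaptation cheap in tracking.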
Abstract:
Designing an effective and efficient video quality metric is important for practical applications. The most reliable approach is subjective evaluation; thus, designing an objective metric that simulates the human visual system (HVS) is both reasonable and feasible. In this paper, a video quality assessment metric based on visual perception is proposed. A three-dimensional wavelet transform is utilized to decompose the video and extract features, mimicking the multichannel structure of the HVS. A spatio-temporal contrast sensitivity function (S-T CSF) is employed to weight the coefficients obtained by the three-dimensional wavelet, simulating the nonlinearity of the human eye. A perceptual threshold is exploited to obtain visually sensitive coefficients after S-T CSF filtering. The visually sensitive coefficients are normalized, and visual sensitivity errors are then calculated between the reference and distorted video. Finally, a temporal perceptual mechanism is applied to pool the video quality values, reducing computational cost. Experimental results show that the proposed method outperforms most existing methods and is comparable to LHS and PVQM.
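The core of such a metric is a weighted, thresholded error pool over transform coefficients. The sketch below shows only that pooling step; the per-subband weights and the threshold are illustrative assumptions, and a real implementation would derive them from a 3-D wavelet decomposition and a fitted S-T CSF.

```python
# Hedged sketch of CSF-weighted, thresholded error pooling between
# reference and distorted coefficients (weights/threshold assumed).
def perceptual_error(ref, dist, csf_weights, threshold=0.1):
    """Accumulate squared weighted differences that exceed the
    perceptual threshold; return their root."""
    err = 0.0
    for r, d, w in zip(ref, dist, csf_weights):
        e = abs(w * r - w * d)
        if e > threshold:  # keep only visually significant errors
            err += e * e
    return err ** 0.5

ref = [1.0, 0.5, 0.2, 0.05]   # reference subband coefficients (toy values)
dist = [0.8, 0.5, 0.6, 0.05]  # distorted subband coefficients (toy values)
w = [1.0, 0.8, 0.5, 0.2]      # assumed CSF sensitivity per subband
print(perceptual_error(ref, dist, w))
```

Thresholding before pooling is what lets sub-threshold coefficient differences contribute nothing, mirroring the visibility floor of the HVS.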
Abstract:
Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the cost of printing color images or to help color-blind people see the visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) visual cues are not well defined, so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high computational cost; and 3) some require human-computer interaction to produce a reasonable transformation. To solve or at least reduce these problems, we propose a new algorithm based on a probabilistic graphical model, with the assumption that the image is defined over a Markov random field. The color-to-gray procedure can thus be regarded as a labeling process that preserves the newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that can be extracted from a color image by a perceiver; they indicate the state of some properties of the image that the perceiver is interested in perceiving. Different people may perceive different cues from the same color image, and three cues are defined in this paper, namely, color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model via an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interaction.
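For context, the fixed-weight luma conversion below is the conventional baseline such cue-preserving methods improve on; it is not the paper's MRF model. The weights are the standard BT.601 luma coefficients.

```python
# Baseline color-to-gray (not the MRF method above): a fixed per-channel
# luma weighting applied independently to every pixel, so no spatial
# consistency or structure cue can be preserved.
def to_gray(pixels, weights=(0.299, 0.587, 0.114)):
    """Map (R, G, B) tuples to gray values with fixed luma weights."""
    return [sum(c * w for c, w in zip(px, weights)) for px in pixels]

img = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(to_gray(img))  # [76.245, 149.685, 29.07]
```

Note how pure red, green, and blue map to distinct grays here, yet two different colors with equal luma would collapse to one gray value; that collapse is precisely the cue loss the MRF labeling formulation is designed to avoid.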
Abstract:
Eye detection plays an important role in many practical applications. This paper presents a novel two-step scheme for eye detection. The first step models an eye by a newly defined visual-context pattern (VCP), and the second step applies semisupervised boosting for precise detection. A VCP describes both the spatial and appearance relations between an eye region (region of eye) and a reference region (region of reference). The context feature of a VCP is extracted by using the integral image. To reduce human labeling effort, we apply semisupervised boosting, which integrates the context feature and Haar-like features for precise eye detection. Experimental results on several standard face data sets demonstrate that the proposed approach is effective, robust, and efficient. We finally show that this approach is ready for practical applications.
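The integral image that makes VCP and Haar-like feature extraction fast is a standard construction: each entry stores the sum of all pixels above and to the left, so any rectangular sum costs four lookups. A minimal sketch:

```python
# Sketch of the integral image used for fast rectangular-sum features.
def integral_image(img):
    """Build a (h+1) x (w+1) summed-area table with a zero border."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of img rows y0..y1-1, cols x0..x1-1, in O(1) via 4 lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 10
```

Because every rectangle sum is constant-time after one linear pass, evaluating many VCP or Haar-like features per detection window stays cheap regardless of feature size.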