Abstract:
Any object-detection algorithm in image processing essentially tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various ways to realize these two basic steps, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible at the cost of decision accuracy and higher computational effort, so the automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters that simulate the geometrical form of possible streaks on a CCD image. This is realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution responses are accepted or rejected according to an overall threshold given by the background statistics. At first, this approach yields a huge number of accepted responses due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show very promising sensitivity, reliability and running speed for this detection method. Since all method parameters are based on statistics, both the true-alarm and the false-alarm probabilities are well controllable. Moreover, the proposed method does not place any extraordinary demands on the computer hardware or on the image acquisition process.
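The filter-bank idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the kernel size, the bank of lengths and orientations, and the robust median/MAD-based threshold are all assumptions chosen for the demo.

```python
import numpy as np

def line_kernel(length, angle_deg, size=21):
    """Unit-sum kernel tracing a streak of the given length/orientation."""
    k = np.zeros((size, size))
    c, t = size // 2, np.radians(angle_deg)
    for s in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * length):
        r, q = int(round(c + s * np.sin(t))), int(round(c + s * np.cos(t)))
        if 0 <= r < size and 0 <= q < size:
            k[r, q] = 1.0
    return k / k.sum()

def correlate(img, ker):
    """Circular correlation via FFT, with the kernel center at the origin."""
    K = np.zeros_like(img, dtype=float)
    kh, kw = ker.shape
    K[:kh, :kw] = ker
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(K))))

def detect_streaks(img, lengths, angles, k_sigma=6.0):
    """Best response over a bank of oriented line filters, thresholded at
    median + k_sigma * robust sigma of the response map (background stats)."""
    best = np.full(img.shape, -np.inf)
    for length in lengths:
        for angle in angles:
            best = np.maximum(best, correlate(img, line_kernel(length, angle)))
    med = np.median(best)
    sigma = 1.4826 * np.median(np.abs(best - med))  # MAD-based sigma estimate
    return best > med + k_sigma * sigma

# demo on synthetic data: Gaussian background plus one horizontal streak
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128))
img[64, 50:65] += 4.0  # streak of length 15, amplitude 4 sigma
mask = detect_streaks(img, lengths=[9, 15], angles=[0, 45, 90, 135])
```

As the abstract notes, filters that only partially cover a streak also fire, so a practical method needs the additional acceptance criteria the paper introduces (e.g. keeping only the locally best length/orientation match).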
Abstract:
The population of space debris has increased drastically in recent years. Collisions involving massive objects may produce a large number of fragments, leading to significant growth of the space debris population. An effective remediation measure to stabilize the population in LEO is therefore the removal of large, massive space debris. To remove these objects, not only precise orbits but also more detailed information about their attitude states will be required. One important property of an object targeted for removal is its spin period and spin-axis orientation. When observing a rotating object, the observer sees different surface areas of the object, which leads to changes in the measured intensity. Rotating objects produce periodic brightness variations with frequencies related to their spin periods. Photometric monitoring is a proven tool for the remote diagnostics of a satellite's rotation about its center of mass. This information is also useful, for example, in case of contingency. Moreover, it is important to take the orientation of a non-spherical body (e.g. space debris) into account in the numerical integration of its motion when a close approach with another spacecraft is predicted. We introduce two databases of light curves: the AIUB database, which contains about a thousand light curves of LEO, MEO and high-altitude debris objects (including a few functional objects) obtained over more than seven years, and the database of the Astronomical Observatory of Odessa University (Ukraine), which contains the results of more than 10 years of photometric monitoring of functioning satellites and large space debris objects in low Earth orbit. AIUB used its 1 m ZIMLAT telescope for all light curves. For tracking low-orbit satellites, the Astronomical Observatory of Odessa used the KT-50 telescope, which has an alt-azimuth mount and allows tracking objects moving at a high angular velocity. The diameter of the KT-50 main mirror is 0.5 m, and the focal length is 3 m. The Odessa atlas of light curves includes almost 5,500 light curves for ~500 correlated objects from the period 2005-2014. The processing of light curves and the determination of the rotation period in the inertial frame is challenging. Extracted frequencies and reconstructed phases for some interesting targets, e.g. GLONASS satellites, for which SLR data were also available for confirmation, will be presented. The rotation of the Envisat satellite after its sudden failure will be analyzed. The deceleration of its rotation rate over three years is studied, together with an attempt to determine the orientation of the rotation axis.
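The frequency-extraction step can be illustrated with a toy analysis. This is only a sketch on synthetic, evenly sampled data with made-up numbers, not the AIUB or Odessa pipeline (real light curves are unevenly sampled and typically need, e.g., Lomb-Scargle periodograms plus a synodic-to-sidereal correction to obtain the rotation period in the inertial frame).

```python
import numpy as np

def dominant_period(t, flux):
    """Period of the strongest Fourier peak in an evenly sampled light
    curve (the zero-frequency term is excluded)."""
    power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
    freqs = np.fft.rfftfreq(len(flux), d=t[1] - t[0])
    return 1.0 / freqs[1:][np.argmax(power[1:])]

# toy light curve: a body with two similar reflecting sides spinning with a
# 60 s period shows two brightness maxima per rotation, i.e. a 30 s
# photometric period (all numbers here are illustrative)
rng = np.random.default_rng(1)
t = np.arange(0.0, 600.0, 0.5)  # 10 minutes sampled at 2 Hz
flux = 1.0 + 0.3 * np.cos(2 * np.pi * t / 30.0) + rng.normal(0.0, 0.05, t.size)
p_apparent = dominant_period(t, flux)
spin_period = 2.0 * p_apparent  # assumes the two-maxima-per-rotation case
```

The factor-of-two ambiguity between the photometric period and the spin period must be resolved from the light-curve shape or from auxiliary data such as the SLR measurements mentioned above.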
Abstract:
Perceptual accuracy is known to be influenced by stimulus location within the visual field. In particular, it seems to be enhanced in the lower visual hemifield (VH) for motion and space processing, and in the upper VH for object and face processing. The origins of such asymmetries are attributed to attentional biases across the visual field and to the functional organization of the visual system. In this article, we tested content-dependent perceptual asymmetries in different regions of the visual field. Twenty-five healthy volunteers participated in this study. They performed three visual tests involving the perception of shape, orientation and motion in the four quadrants of the visual field. The results showed that perceptual accuracy was better in the lower than in the upper visual field for motion perception, and better in the upper than in the lower visual field for shape perception. Orientation perception did not show any vertical bias. No difference was found when comparing the right and left VHs. The functional organization of the visual system seems to indicate that the dorsal and ventral visual streams, responsible for motion and shape perception respectively, show a bias for the lower and upper VHs respectively. Such a bias depends on the content of the visual information.