896 results for Objects in art
Abstract:
A detailed study was performed for a sample of low-mass pre-main-sequence (PMS) stars, previously identified as weak-line T Tauri stars, which are compared to members of the Tucanae and Horologium Associations. Aiming to verify whether there is any pattern in the abundances of young stars at different phases, we selected objects in the range from 1 to 100 Myr, which covers most of PMS evolution. High-resolution optical spectra were acquired at the European Southern Observatory and the Observatorio do Pico dos Dias. The fundamental stellar parameters, effective temperature and surface gravity, were calculated from the excitation and ionization equilibria of iron absorption lines. Chemical abundances were obtained via equivalent-width calculations and spectral synthesis for 44 per cent of the sample, which show metallicities within 0.5 dex of solar. A classification was developed based on the equivalent widths of the Li I 6708 Å and Hα lines and the spectral types of the studied stars. This classification allowed a separation of the sample into categories that correspond to different evolutionary stages in the PMS. The position of these stars in the Hertzsprung-Russell diagram was also inspected in order to estimate their ages and masses. Among the studied objects, it was verified that our sample actually contains seven weak-line T Tauri stars, while three are classical T Tauri stars, 12 are Fe/Ge PMS stars, and 21 are post-T Tauri or young main-sequence stars. An estimate of the circumstellar luminosity was obtained using a disc model to reproduce the observed spectral energy distribution. Most of the stars show low levels of circumstellar emission, corresponding to less than 30 per cent of the total emission.
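The equivalent-width classification described above can be sketched as a toy decision rule. The thresholds below (the classical EW(Hα) ≈ 10 Å boundary between classical and weak-line T Tauri stars, and a Li I 6708 Å equivalent-width floor as a youth indicator) are illustrative assumptions, not the cutoffs used in the study:

```python
# Illustrative T Tauri classifier; thresholds are assumed for this sketch,
# not taken from the paper (which also uses spectral types).
def classify_ttauri(ew_halpha_angstrom, ew_li_milliangstrom):
    """Toy classification from Halpha and Li I 6708 A equivalent widths."""
    has_lithium = ew_li_milliangstrom > 100  # assumed youth-indicator floor
    if not has_lithium:
        return "young main-sequence / post-T Tauri candidate"
    if ew_halpha_angstrom > 10:              # classical CTTS/WTTS boundary
        return "classical T Tauri candidate"
    return "weak-line T Tauri candidate"

print(classify_ttauri(25.0, 450))  # strong Halpha emission, Li present
print(classify_ttauri(2.0, 450))   # weak Halpha, Li present
```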
Abstract:
We present the discovery of a wide (67 AU) substellar companion to the nearby (21 pc) young solar-metallicity M1 dwarf CD-35 2722, a member of the ≈100 Myr AB Doradus association. Two epochs of astrometry from the NICI Planet-Finding Campaign confirm that CD-35 2722 B is physically associated with the primary star. Near-IR spectra indicate a spectral type of L4 ± 1 with a moderately low surface gravity, making it one of the coolest young companions found to date. The absorption lines and near-IR continuum shape of CD-35 2722 B agree especially well with those of the dusty field L4.5 dwarf 2MASS J22244381-0158521, while the near-IR colors and absolute magnitudes match those of the 5 Myr old L4 planetary-mass companion 1RXS J160929.1-210524 b. Overall, CD-35 2722 B appears to be an intermediate-age benchmark for L dwarfs, with a less peaked H-band continuum than the youngest objects and near-IR absorption lines comparable to those of field objects. We fit Ames-Dusty model atmospheres to the near-IR spectra and find Teff = 1700-1900 K and log(g) = 4.5 ± 0.5. The spectra also show that the radial velocities of components A and B agree to within ±10 km s^-1, further confirming their physical association. Using the age and bolometric luminosity of CD-35 2722 B, we derive a mass of 31 ± 8 M_Jup from the Lyon/Dusty evolutionary models. Altogether, young late-M to mid-L type companions appear to be overluminous for their near-IR spectral type compared with field objects, in contrast to the underluminosity of young late-L and early-T dwarfs.
Abstract:
Visualization of high-dimensional data requires a mapping to a visual space. Whenever the goal is to preserve similarity relations, a frequent strategy is to use 2D projections, which afford intuitive interactive exploration, e.g., by users locating and selecting groups and gradually drilling down to individual objects. In this paper, we propose a framework for projecting high-dimensional data to 3D visual spaces, based on a generalization of the Least-Square Projection (LSP). We compare projections to 2D and 3D visual spaces both quantitatively and through a user study considering certain exploration tasks. The quantitative analysis confirms that 3D projections outperform 2D projections in terms of precision. The user study indicates that certain tasks can be more reliably and confidently answered with 3D projections. Nonetheless, as 3D projections are displayed on 2D screens, interaction is more difficult. Therefore, we incorporate suitable interaction functionalities into a framework that supports 3D transformations, predefined optimal 2D views, coordinated 2D and 3D views, and hierarchical 3D cluster definition and exploration. For visually encoding data clusters in a 3D setup, we employ color coding of projected data points as well as four types of surface renderings. A second user study evaluates the suitability of these visual encodings. Several examples illustrate the framework's applicability to both visual exploration of multidimensional abstract (non-spatial) data and the feature space of multivariate spatial data.
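As a rough illustration of mapping high-dimensional data to a 3D visual space, the sketch below uses PCA via SVD as a stand-in; the paper's LSP generalization, with its control points and neighborhood structure, is not reproduced here:

```python
import numpy as np

# Minimal sketch: embed n points from d dimensions into 3 dimensions.
# PCA (via SVD) stands in for the Least-Square Projection of the paper.
def project_to_3d(X):
    """Return an n x 3 embedding of the n x d data matrix X."""
    Xc = X - X.mean(axis=0)                          # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:3].T                             # top-3 principal axes

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # synthetic 10-dimensional data
P = project_to_3d(X)
print(P.shape)                   # (100, 3)
```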
Abstract:
Biological systems can readily capture the salient object(s) in a given scene, but this remains a difficult task for artificial vision systems. In this paper, a visual selection mechanism based on an integrate-and-fire neural network is proposed. The model can not only discriminate objects in a given visual scene, but also deliver the focus of attention to the salient object. Moreover, it processes a combination of relevant features of an input scene, such as intensity, color, orientation, and their contrast. In comparison to other visual selection approaches, this model presents several interesting features. It is able to direct attention to objects with complex forms, including those that are not linearly separable. Moreover, computer simulations show that the model produces results similar to those observed in natural vision systems.
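A minimal leaky integrate-and-fire neuron, the building block such networks rest on, can be sketched as follows; the constants are illustrative defaults, not the model's parameters:

```python
# Single leaky integrate-and-fire unit: the membrane potential integrates
# input current, leaks toward rest, and emits a spike on crossing threshold.
def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Integrate an input-current sequence; return the trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_in)  # leaky integration step
        if v >= v_thresh:                       # fire and reset
            spikes.append(t)
            v = v_rest
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif([1.5] * 200)
print(len(spikes) > 0)  # constant drive above threshold produces spikes
```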
Abstract:
This work presents a novel approach to increasing the recognition power of Multiscale Fractal Dimension (MFD) techniques applied to image classification. The proposal uses Functional Data Analysis (FDA) to enhance the precision of the MFD technique, yielding a more representative descriptor vector capable of recognizing and characterizing objects in an image more precisely. FDA is applied to signatures extracted with the Bouligand-Minkowski MFD technique to generate a descriptor vector from them. To evaluate the improvement obtained, an experiment using two datasets of objects was carried out: a dataset of character shapes (26 characters of the Latin alphabet) carrying different levels of controlled noise, and a dataset of fish image contours. A comparison with the well-known Fourier and wavelet descriptor methods was performed to verify the performance of the FDA method. The descriptor vectors were submitted to the Linear Discriminant Analysis (LDA) classification method, and the classification accuracy of the descriptor methods was compared. The results demonstrate that FDA outperforms the literature methods (Fourier and wavelets) in processing the information extracted from the MFD signature. The proposed method can thus be considered an interesting choice for pattern recognition and image classification using fractal analysis.
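The Bouligand-Minkowski signature itself can be sketched with a distance transform: dilate a binary shape by growing radii, record the log of the influence area, and estimate the fractal dimension from the log-log slope. The toy shape and radii below are arbitrary choices for illustration, not the paper's datasets:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Bouligand-Minkowski sketch: the area of the shape dilated by radius r,
# obtained cheaply by thresholding a Euclidean distance map.
def minkowski_signature(shape_mask, radii):
    """Return log(dilation area) for each radius."""
    dist = distance_transform_edt(~shape_mask)  # distance to nearest shape pixel
    return [np.log((dist <= r).sum()) for r in radii]

# Toy shape: a filled 16x16 square on a 64x64 canvas.
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True
radii = np.arange(1, 9)
sig = minkowski_signature(mask, radii)
slope = np.polyfit(np.log(radii), sig, 1)[0]
fractal_dim = 2 - slope  # Minkowski-Bouligand estimate for a 2-D image
print(f"estimated fractal dimension: {fractal_dim:.2f}")
```

In the paper's pipeline, FDA would then be applied to signatures like `sig` before classification.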
Abstract:
In this paper, we present some results on the bounded derived category of Artin algebras, and in particular on the indecomposable objects in these categories, using homological properties. Given a complex X*, we consider the set J(X*) = {i ∈ ℤ | H^i(X*) ≠ 0} and define the map l(X*) = max J(X*) − min J(X*) + 1. We give relationships between some homological properties of an algebra and the corresponding map l. On the other hand, using homological properties again, we determine two subcategories of the bounded derived category of an algebra, which turn out to be the bounded derived categories of quasi-tilted algebras. As a consequence of these results, we obtain new characterizations of quasi-tilted and quasi-tilted gentle algebras.
Abstract:
Objective: To evaluate the flexural strength, microleakage, and degree of conversion of a microhybrid resin polymerized with an argon laser and a halogen lamp. Method and Materials: For both the flexural test and the degree-of-conversion analysis, 5 bar samples of composite resin were prepared and polymerized according to ISO 4049. The halogen light-curing unit was used at 500 mW/cm^2 for 20 seconds, and the argon laser at 250 mW for 10 and 20 seconds. Samples were stored in distilled water in a dark environment at 37°C for 24 hours. The flexural property was quantified by a 3-point loading test. For the microleakage evaluation, 60 bovine incisors were used to prepare standardized Class 5 cavities, which were restored and polished. Specimens were stored in distilled water for 24 hours at 37°C and thermocycled 500 times (6°C to 60°C). Specimens were then immersed in an aqueous solution of basic fuchsin for 24 hours. Longitudinal sections of each restoration were obtained and examined with a stereomicroscope for qualitative evaluation of microleakage. A Fourier-transform (FT) Raman RFS 100/S spectrometer (Bruker) was used to analyze the degree of conversion. Results: ANOVA showed no statistically significant differences in flexural strength between the photoactivation types evaluated. Microleakage data were statistically analyzed by Mann-Whitney and Kruskal-Wallis tests. Enamel margins resulted in a statistically lower degree of leakage than dentin margins. No statistically significant difference was found among the 3 types of photocuring studied. ANOVA also showed no statistically significant difference in the degree of conversion among the studied groups. Conclusion: According to the methodology used in this research, the argon laser is a possible alternative for photocuring, providing the same quality of polymerization as the halogen lamp. None of the photocuring units tested in this study completely eliminated microleakage.
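The statistical workflow described (ANOVA for flexural strength across curing groups, Mann-Whitney for the ordinal microleakage scores) can be sketched with SciPy; the numbers below are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical flexural strengths (MPa), 5 samples per curing group.
halogen_20s = rng.normal(120, 10, 5)
laser_10s = rng.normal(118, 10, 5)
laser_20s = rng.normal(121, 10, 5)
f_stat, p_anova = stats.f_oneway(halogen_20s, laser_10s, laser_20s)

# Hypothetical ordinal microleakage scores for the two margin types.
enamel_scores = [0, 0, 1, 0, 1, 0]
dentin_scores = [2, 3, 2, 1, 3, 2]
u_stat, p_mw = stats.mannwhitneyu(enamel_scores, dentin_scores,
                                  alternative="two-sided")
print(p_mw < 0.05)  # enamel vs dentin margins differ in this toy data
```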
Abstract:
This graduate study was assigned by Unisys Oy Ab. The purpose of this study was to find tools to monitor and manage servers and objects in a hosting environment and to connect remotely to the managed objects. Better solutions for the promised services were also researched. Unisys provides a ServerHotel service to other businesses which do not have the time or resources to manage their own network, servers, or applications. Contracts are based on a Service Level Agreement, where the service level is agreed upon according to the customer's needs. These needs have created a demand for management tools. Unisys wanted to find the most appropriate tools for its hosting environment to fulfill the agreed service level at reasonable cost. The theory consists of literature research focusing on general agreements used in the Finnish IT business, different types of monitoring and management tools, and the common protocols used in them. The theory focuses mainly on the central elements of the above-mentioned topics and on their positive and negative features. The second part of the study focuses on general hosting agreements and on which management tools Unisys selected for hosting and why. It also describes the hosting environment and its features in more detail. Based on the results of the study, Unisys decided to use Servers Alive to monitor the network and Microsoft application services. Cacti was chosen to monitor disk space, which gives an idea of future disk growth. For remote connections, Microsoft's Remote Desktop tool was the most appropriate when the connection was tunneled through Secure Shell (SSH). Finding proper tools for the intended purposes with limited financial resources proved challenging. This study showed that, if required, it is possible to build a professional hosting environment.
Abstract:
This is a study conducted at, and for, the National Museum of History in Stockholm. The aim of the study was to confirm or disconfirm the hypothesis that visitors in a traditional museum environment might not take part in the interactivity of an interactive exhibition, and that, if they do, they might skip the texts and objects on display. To answer this and other questions, a mixed-method approach was used: both non-participant observations and exit interviews were conducted. After a description of the interactive exhibits, theory of knowledge and learning is presented, followed by the gathered data. Altogether, 443 visitors were observed. The observations timed how long visitors spent in the room, on the interactive exhibits, and on the texts and objects. The 40 interviews gathered information about visitors' participation in the interactivity, and which interactive exhibits they found easiest, hardest, most fun, and most boring. The results did not confirm the hypothesis. All kinds of visitors, children and adults alike, took part in the interactivities as well as the texts and objects.
Abstract:
The aim of the present study was to investigate the effect of sensory modality on short-term memory recall. An exploratory, cross-sectional study was performed. A total of 119 individuals participated: 70 female and 49 male subjects, aged 4 to 80 years (M = 34.3). The participants were presented with 12 different objects in auditory, visual, or combined auditory/visual mode over a period of 24 seconds. The participants were then asked to recall as many of the 12 objects as possible, in any order. The study took place at a day nursery, at junior high schools, at meetings with the elderly, and through house calls with adults. Non-probability samples were used. The conclusion was that the visual modality generated the highest short-term memory recall, and that adults had the highest mean recall across the different stimuli. A visual element is therefore recommended to aid recollection.
Abstract:
The movement of graphics and audio programming towards three dimensions aims to better simulate the way we experience our world. In this project I explored methods for coming closer to such simulation via realistic graphics and sound combined with a natural interface. I did most of my work on a Dell OptiPlex with an 800 MHz Pentium III processor and an NVIDIA GeForce 256 AGP Plus graphics accelerator, high-end products in the consumer market as of April 2000. For graphics, I used OpenGL [1], an open-source, multi-platform set of graphics libraries that is relatively easy to use, coded in C. The basic engine I first put together was a system to place objects in a scene and to navigate around the scene in real time. Once I accomplished this, I was able to investigate specific techniques for making parts of a scene more appealing.
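The "navigate around the scene" part of such an engine boils down to a view matrix. Below is a sketch of the right-handed look-at construction (the math behind OpenGL's gluLookAt), written out with NumPy for illustration rather than in the project's C code:

```python
import numpy as np

# Right-handed look-at view matrix: rotates world axes onto camera axes,
# then translates so the eye point lands at the camera-space origin.
def look_at(eye, target, up):
    """Return a 4x4 view matrix placing the camera at `eye` facing `target`."""
    f = target - eye
    f = f / np.linalg.norm(f)      # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)      # right direction
    u = np.cross(s, f)             # true up, orthogonal to f and s
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye    # move the world into camera space
    return m

eye = np.array([0.0, 0.0, 5.0])
view = look_at(eye, np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
# The eye point maps to the origin of camera space.
print(np.allclose(view @ np.append(eye, 1.0), [0, 0, 0, 1]))
```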