16 results for Visual search method
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis for simultaneously determining the CP and selecting the most likely group members and a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
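As a minimal sketch of the geometric core of this idea (not the paper's full maximum-likelihood and minimization pipeline, and with all names illustrative): each star's proper motion defines a great circle whose pole is the cross product of its position and motion-direction vectors, and the CP can be estimated as the direction most nearly orthogonal to all poles, i.e. the smallest-eigenvalue eigenvector of the summed pole outer products.

```python
import numpy as np

def radec_to_unit(ra, dec):
    """Unit vector on the celestial sphere from RA/Dec (radians)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def pm_direction(ra, dec, pmra_cosdec, pmdec):
    """Unit tangent vector along the proper motion at (ra, dec)."""
    east = np.array([-np.sin(ra), np.cos(ra), 0.0])
    north = np.array([-np.sin(dec) * np.cos(ra),
                      -np.sin(dec) * np.sin(ra),
                      np.cos(dec)])
    v = pmra_cosdec * east + pmdec * north
    return v / np.linalg.norm(v)

def convergent_point(stars):
    """Least-squares CP from (ra, dec, pmra*cos(dec), pmdec) tuples:
    the direction most nearly orthogonal to every great-circle pole,
    i.e. the smallest-eigenvalue eigenvector of the summed pole outer
    products (defined only up to the antipodal point)."""
    M = np.zeros((3, 3))
    for ra, dec, pmra, pmdec in stars:
        pole = np.cross(radec_to_unit(ra, dec),
                        pm_direction(ra, dec, pmra, pmdec))
        pole /= np.linalg.norm(pole)
        M += np.outer(pole, pole)
    _, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    cp = vecs[:, 0]
    return np.arctan2(cp[1], cp[0]) % (2 * np.pi), np.arcsin(np.clip(cp[2], -1, 1))
```

Proper motion units only need to be mutually consistent, since each motion vector is normalized before the pole is formed.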
Abstract:
OBJECTIVE: The aim of this study was to assess the subjective visual vertical in patients with bilateral vestibular dysfunction and to propose a new method of analyzing subjective visual vertical data in these patients. METHODS: Static subjective visual vertical tests were performed in 40 subjects split into two groups. Group A consisted of 20 healthy volunteers, and Group B consisted of 20 patients with bilateral vestibular dysfunction. Each subject performed six measurements of the subjective visual vertical test, and the mean values were calculated and analyzed. RESULTS: Analysis of the numerical values of the subjective visual vertical deviations (the conventional method of analysis) showed that the mean deviation was 0.326 +/- 1.13 degrees in Group A and 0.301 +/- 1.87 degrees in Group B. When the absolute values of the subjective visual vertical deviations were analyzed instead (the newly proposed method of analysis), the mean deviation was 1.35 +/- 0.48 degrees in Group A and 2.152 +/- 0.93 degrees in Group B. The difference in subjective visual vertical deviations between groups was statistically significant (p < 0.05) only when the absolute values and the range of deviations were considered. CONCLUSION: Analysis of the absolute values of the subjective visual vertical more accurately reflected the visual vertical misperception in patients with bilateral vestibular dysfunction.
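The proposed analysis can be illustrated with a short sketch (a hypothetical helper, not from the paper): signed deviations to opposite sides tend to cancel in the conventional mean, while the absolute-value mean and the range retain the magnitude of the misperception.

```python
import numpy as np

def svv_summary(deviations_deg):
    """Summarize SVV trials for one subject: conventional signed mean
    versus the proposed absolute-value analysis (degrees)."""
    d = np.asarray(deviations_deg, dtype=float)
    return {
        "signed_mean": d.mean(),           # opposite-sign errors cancel here
        "absolute_mean": np.abs(d).mean(), # magnitude of the misperception
        "range": d.max() - d.min(),
    }

# Example: six trials tilted alternately left/right nearly cancel
# in the signed mean but not in the absolute mean
print(svv_summary([-2.1, 1.8, -2.4, 2.0, -1.9, 2.2]))
```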
Abstract:
A new concept for the in vitro visual evaluation of the sun protection factor (SPF) of cosmetic formulations, based on a supramolecular ultraviolet (UV) dosimeter, is demonstrated. The method closely parallels the method validated for in vivo evaluation and relies on determining the slightest perceptible bleaching of an iron-complex dye/nanocrystalline titanium dioxide interface (the UV dosimeter) in combination with an artificial skin substrate simulating actual human skin, in the presence and absence of a cosmetic formulation. The successful evaluation of SPF was ensured by the similarity of the erythema response of our dosimeter and of human skin to UV irradiation. A good linear correlation of in vitro and in vivo data up to SPF 40 confirmed the effectiveness of this simple, cheap, and fast method. In short, we present a convenient and accessible visual SPF evaluation method that can help improve the quality control of cosmetic products, contributing to the reduction of skin cancer, one of today's critical public health issues. (C) 2011 Wiley Periodicals, Inc. and the American Pharmacists Association. J Pharm Sci 101:726-732, 2012
Abstract:
Observations of cosmic-ray arrival directions made with the Pierre Auger Observatory have previously provided evidence of anisotropy at the 99% CL, using the correlation of ultra-high-energy cosmic rays (UHECRs) with objects drawn from the Véron-Cetty & Véron catalog. In this paper we report on the use of three catalog-independent methods to search for anisotropy. The 2pt-L, 2pt+, and 3pt methods, each giving a different measure of self-clustering in arrival directions, were tested on mock cosmic-ray data sets to study the impact of sample size and magnetic smearing on their results, accounting for both angular and energy resolutions. If the sources of UHECRs follow the same large-scale structure as ordinary galaxies in the local Universe, and if UHECRs are deflected by no more than a few degrees, a study of mock maps suggests that these three methods can efficiently respond to the resulting anisotropy, with a P-value of 1.0% or smaller, with data sets of as few as 100 events. Using data taken from January 1, 2004 to July 31, 2010, we examined the 20, 30, ..., 110 highest-energy events, with a corresponding minimum energy threshold of about 49.3 EeV. The minimum P-values found were 13.5% using the 2pt-L method, 1.0% using the 2pt+ method, and 1.1% using the 3pt method, for the 100 highest-energy events. In view of the multiple (correlated) scans performed on the data set, these catalog-independent methods do not yield strong evidence of anisotropy in the highest-energy cosmic rays.
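For intuition, here is a heavily simplified sketch of a generic two-point self-clustering test (not the actual 2pt-L, 2pt+, or 3pt estimators): it compares the observed number of close pairs against isotropic Monte Carlo skies. A real analysis would also weight by the Observatory's non-uniform exposure.

```python
import numpy as np

def pair_separations(ra, dec):
    """All pairwise angular separations (radians) between directions."""
    x = np.column_stack([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])
    cosang = np.clip(x @ x.T, -1.0, 1.0)
    iu = np.triu_indices(len(ra), k=1)       # each unordered pair once
    return np.arccos(cosang[iu])

def two_point_pvalue(ra, dec, theta_max, n_mc=1000, rng=None):
    """P-value of the observed close-pair count against isotropy,
    estimated with Monte Carlo skies (uniform exposure assumed)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(ra)
    observed = np.sum(pair_separations(ra, dec) < theta_max)
    hits = 0
    for _ in range(n_mc):
        ra_mc = rng.uniform(0.0, 2.0 * np.pi, n)
        dec_mc = np.arcsin(rng.uniform(-1.0, 1.0, n))  # uniform on the sphere
        if np.sum(pair_separations(ra_mc, dec_mc) < theta_max) >= observed:
            hits += 1
    return hits / n_mc
```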
Abstract:
This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
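As a rough sketch of the machinery underlying such methods, the image foresting transform computes optimum paths on the pixel adjacency graph with a Dijkstra-like propagation. The toy version below uses a simple additive cost (closer to live wire); the paper's riverbed connectivity function is not reproduced here.

```python
import heapq
import numpy as np

def boundary_path(gradient, seed, target):
    """Optimum path between two anchor pixels via an image foresting
    transform with an additive path cost (live-wire-like toy version)."""
    h, w = gradient.shape
    arc = float(gradient.max()) - gradient    # cheap to walk along strong edges
    cost = np.full((h, w), np.inf)
    pred = {seed: None}
    cost[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        c, (y, x) = heapq.heappop(heap)
        if (y, x) == target:
            break
        if c > cost[y, x]:
            continue                          # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and c + arc[ny, nx] < cost[ny, nx]:
                cost[ny, nx] = c + arc[ny, nx]
                pred[(ny, nx)] = (y, x)
                heapq.heappush(heap, (cost[ny, nx], (ny, nx)))
    path, node = [], target                   # walk predecessors back to the seed
    while node is not None:
        path.append(node)
        node = pred.get(node)
    return path[::-1]
```

In an interactive tool, each new anchor point placed by the user becomes the seed for the next segment, which is how methods of this family accumulate a full object boundary.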
Abstract:
Little is known about the situational contexts in which individuals consume processed sources of dietary sugars. This study aimed to describe the situational contexts associated with the consumption of sweetened food and drink products in a Catholic Middle Eastern Canadian community. A two-stage exploratory sequential mixed-method design was employed with a rationale of triangulation. In stage 1 (n = 62), items and themes describing the situational contexts of sweetened food and drink product consumption were identified from semi-structured interviews and were used to develop the content of the Situational Context Instrument for Sweetened Product Consumption (SCISPC). Face validity, readability, and cultural relevance of the instrument were assessed. In stage 2 (n = 192), a cross-sectional study was conducted and exploratory factor analysis was used to examine the structure of themes that emerged from the qualitative analysis, as a means of furthering construct validation. The reliability of the SCISPC and its predictive validity for the daily consumption of sweetened products were also assessed. In stage 1, six themes and 40 items describing the situational contexts of sweetened product consumption emerged from the qualitative analysis and were used to construct the first draft of the SCISPC. In stage 2, factor analysis enabled the clarification and expansion of the instrument's initial thematic structure. The revised SCISPC has seven factors and 31 items describing the situational contexts of sweetened product consumption. Initial validation indicated that the instrument has excellent internal consistency and adequate test-retest reliability. Two factors of the SCISPC (Snacking and Energy demands) had predictive validity for the daily consumption of total sugar from sweetened products, while the other factors (Socialization, Indulgence, Constraints, Visual Stimuli and Emotional needs) were instead associated with occasional consumption of these products.
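The internal consistency reported above is conventionally estimated with Cronbach's alpha; a minimal sketch follows (the abstract does not name the exact statistic used, so this is an assumption).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)
```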
Abstract:
Losartan is an antihypertensive agent that lost its patent protection in 2010 and has consequently become available in generic form, which motivated the search for a rapid and precise alternative analytical method. Here, a simple conductometric titration in aqueous medium is described for losartan analysis in pharmaceutical formulations. The first stage of the titration involves the protonation of losartan, producing a white precipitate and a slow increase in conductivity. When the protonation stage is complete, a sharp increase in conductivity occurs, due to the presence of excess acid. The titrimetric method was applied to the determination of losartan in pharmaceutical products, and the results are comparable with values obtained using the chromatographic method recommended by the United States Pharmacopoeia. The relative standard deviation for successive measurements of a 125 mg L^-1 (2.71 x 10^-4 mol L^-1) losartan solution was approximately 2%. Recoveries in tablet samples ranged between 99 and 102.4%. The procedure is fast and simple and represents an attractive alternative for losartan quantification in routine analysis. In addition, it avoids organic solvents, minimizes the operator's risk of exposure, and makes waste treatment easier compared with classical chromatographic methods.
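The endpoint of such a two-branch conductivity curve (slow rise, then sharp rise) is conventionally located by fitting a straight line to each branch and intersecting them; a minimal sketch, with the branch split supplied by the analyst:

```python
import numpy as np

def titration_endpoint(volume, conductivity, split):
    """Equivalence volume from a two-branch conductometric curve: fit a
    line to each branch and return the abscissa of their intersection.
    `split` is the index separating the branches, chosen by inspection."""
    m1, b1 = np.polyfit(volume[:split], conductivity[:split], 1)
    m2, b2 = np.polyfit(volume[split:], conductivity[split:], 1)
    return (b2 - b1) / (m1 - m2)   # where m1*v + b1 == m2*v + b2
```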
Abstract:
We review recent visualization techniques aimed at supporting tasks that require the analysis of text documents, from approaches targeted at visually summarizing the relevant content of a single document to those aimed at assisting exploratory investigation of whole collections of documents. Techniques are organized considering their target input material (either single texts or collections of texts) and their focus, which may be on displaying content, emphasizing relevant relationships, highlighting the temporal evolution of a document or collection, or helping users handle results from a query posed to a search engine. We describe the approaches adopted by the different techniques and briefly review the strategies they employ to obtain meaningful text models, how they extract the information required to produce representative visualizations, the tasks they intend to support, the interaction issues involved, and their strengths and limitations. Finally, we present a summary of the techniques, highlighting their goals and distinguishing characteristics, and briefly discuss open problems and research directions in the fields of visual text mining and text analytics.
Abstract:
The aims of this study were to investigate work conditions, estimate the prevalence, and describe risk factors associated with Computer Vision Syndrome among operators at two call centers in São Paulo (n = 476). The methods included a quantitative cross-sectional observational study and an ergonomic work analysis, using work observation, interviews, and questionnaires. The case definition was the presence of one or more specific ocular symptoms reported as always, often, or sometimes. The multiple logistic regression model was built using the stepwise forward likelihood method, retaining variables with significance levels below 5% (p < 0.05). The operators were mainly female and young (15 to 24 years old). The call centers operated 24 hours a day; the operators worked 36 hours per week, with daily break times of 21 to 35 minutes. The symptoms reported were eye fatigue (73.9%), "weight" in the eyes (68.2%), "burning" eyes (54.6%), tearing (43.9%), and weakening of vision (43.5%). The prevalence of Computer Vision Syndrome was 54.6%. The associations verified were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), organization of work in the call center (OR 1.4, 95% CI 1.1 to 1.7), and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). Organizational and psychosocial factors at work should be included in prevention programs for visual syndrome among call center operators.
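As a hedged sketch of how odds ratios with 95% CIs like those above are derived from a logistic model (OR = exp(beta), CI bounds = exp of the coefficient interval), using statsmodels; the data frame and column names are hypothetical, and the study's stepwise selection is not reproduced:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def odds_ratios(df, outcome, predictors):
    """Odds ratios with 95% CIs from a fitted logistic regression."""
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    ci = fit.conf_int()                       # columns 0/1: lower, upper
    table = pd.DataFrame({"OR": np.exp(fit.params),
                          "CI 2.5%": np.exp(ci[0]),
                          "CI 97.5%": np.exp(ci[1])})
    return table.drop(index="const")

# Hypothetical usage with illustrative column names:
# odds_ratios(survey_df, "cvs_case", ["female", "lack_recognition", "high_demand"])
```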
Abstract:
Recent research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. In a different approach to achieving similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high possibility of containing a global optimum; a metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (originating the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE, and UQODE authors. The results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced, and that the success rate improves, if SS is employed first. These results are also consistent with the literature on the importance of an adequate starting population. Moreover, SS is more effective at finding initial populations of superior quality than the other three algorithms, which employ oppositional learning. Finally, and most importantly, the performance of SS in finding promising regions is independent of the metaheuristic with which it is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
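A toy sketch of the Smart Sampling idea under stated assumptions (the classifier, thresholds, and sizes are illustrative; the paper's actual algorithm differs in its details): label the best-evaluated uniform samples as promising, train a classifier on that labeling, filter fresh candidates through it, and seed DE with the survivors.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.tree import DecisionTreeClassifier

def smart_init(f, bounds, n_samples=200, keep=0.2, rng=None):
    """Toy 'promising region' sampler: classify the search space from an
    initial uniform sample and return a population predicted promising."""
    if rng is None:
        rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
    y = np.apply_along_axis(f, 1, X)
    promising = y <= np.quantile(y, keep)             # best fraction (minimization)
    clf = DecisionTreeClassifier(max_depth=5).fit(X, promising)
    candidates = rng.uniform(lo, hi, size=(10 * n_samples, len(bounds)))
    population = candidates[clf.predict(candidates).astype(bool)]
    population = population[:15 * len(bounds)]        # cap the population size
    return population if len(population) >= 5 else X[promising]

# Seed DE with the promising population (init accepts an array)
bounds = [(-5.12, 5.12)] * 4
sphere = lambda v: float(np.sum(v ** 2))
result = differential_evolution(sphere, bounds, init=smart_init(sphere, bounds))
```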
Abstract:
There is a continuous search for theoretical methods able to describe the effects of the liquid environment on molecular systems. Different methods emphasize different aspects, and treating both local and bulk properties is still a great challenge. In this work, the electronic properties of a water molecule in the liquid environment are studied by relaxing the geometry and electronic distribution using the free energy gradient method. This is done in a series of steps, in each of which we run a purely molecular mechanical (MM) Metropolis Monte Carlo simulation of liquid water and subsequently perform a quantum mechanical/molecular mechanical (QM/MM) calculation of the ensemble averages of the charge distribution, atomic forces, and second derivatives. The MP2/aug-cc-pV5Z level is used to describe the electronic properties of the QM water; B3LYP with specially designed basis functions is used for the magnetic properties. Very good agreement is found for the local properties of water, such as the geometry, vibrational frequencies, dipole moment, dipole polarizability, chemical shift, and spin-spin coupling constants. The very good performance of the free energy method combined with a QM/MM approach, along with its possible limitations, is briefly discussed.
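Schematically, the iteration described here alternates a classical simulation with QM/MM ensemble averages and a geometry step along the mean force. The sketch below only fixes that control flow; `run_mm_simulation` and `qm_mm_averages` are hypothetical caller-supplied stubs standing in for the MC and QM/MM codes.

```python
import numpy as np

def free_energy_gradient_relax(geometry, charges, run_mm_simulation,
                               qm_mm_averages, step=0.1, max_iter=20,
                               force_tol=1e-4):
    """Control flow only: alternate an MM simulation of the liquid with
    QM/MM ensemble averages, then descend along the mean force (the
    negative free-energy gradient) until it vanishes."""
    for _ in range(max_iter):
        frames = run_mm_simulation(geometry, charges)       # classical MC of the liquid
        mean_force, charges = qm_mm_averages(geometry, frames)
        if np.linalg.norm(mean_force) < force_tol:          # free-energy minimum
            break
        geometry = geometry + step * mean_force             # steepest-descent step
    return geometry, charges
```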
Abstract:
A thorough search of the sky exposed at the Pierre Auger Cosmic Ray Observatory reveals no statistically significant excess of events in any small solid angle that would be indicative of a flux of neutral particles from a discrete source. The search covers declinations from -90 to +15 degrees, using four different energy ranges above 1 EeV (10^18 eV). The method used in this search is more sensitive to neutrons than to photons. The upper limit on a neutron flux is derived for a dense grid of directions in each of the four energy ranges. These results constrain scenarios for the production of ultra-high-energy cosmic rays in the Galaxy.
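A much-simplified sketch of such a blind search: count events inside a small circle around each grid direction and compute a Poisson p-value against the isotropic expectation. Here `exposure_frac` (the expected fraction of events per circle) is a hypothetical stand-in for the direction-dependent exposure a real analysis would use.

```python
import numpy as np
from scipy.stats import poisson

def excess_pvalues(ra_ev, dec_ev, grid_ra, grid_dec, radius, exposure_frac):
    """Poisson p-value of the event count inside a circle of `radius`
    (radians) around each grid direction, against isotropy."""
    ev = np.column_stack([np.cos(dec_ev) * np.cos(ra_ev),
                          np.cos(dec_ev) * np.sin(ra_ev),
                          np.sin(dec_ev)])
    grid = np.column_stack([np.cos(grid_dec) * np.cos(grid_ra),
                            np.cos(grid_dec) * np.sin(grid_ra),
                            np.sin(grid_dec)])
    counts = (ev @ grid.T > np.cos(radius)).sum(axis=0)  # events per circle
    mu = len(ra_ev) * np.asarray(exposure_frac)          # isotropic expectation
    return poisson.sf(counts - 1, mu)                    # P(N >= observed)
```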
Abstract:
Dimensionality reduction is employed in visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they offer limited means of adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool, to retrieve latent spaces that focus on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be exhibited using various existing visualization techniques. The training data are important for encoding the user's knowledge into the loop; however, this work also devises a strategy for calculating PLS reduced spaces when no training data are available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge and is capable of working with small and unbalanced training sets.
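A minimal sketch of the central step, assuming scikit-learn's PLS implementation and a class-labeled training set (one-hot indicators as the PLS responses, in the style of PLS-DA); the paper's full pipeline is not reproduced:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_visual_map(X_train, labels_train, X_all, n_dims=2):
    """Fit PLS on a small labeled training set against one-hot class
    indicators, then project the full data set to 2D/3D coordinates."""
    labels_train = np.asarray(labels_train)
    classes = np.unique(labels_train)
    Y = (labels_train[:, None] == classes[None, :]).astype(float)  # one-hot
    pls = PLSRegression(n_components=n_dims)
    pls.fit(X_train, Y)
    return pls.transform(X_all)   # coordinates for any visualization technique
```

Because only the small training set drives the fit, relabeling a few points and refitting is cheap, which matches the feedback loop the abstract describes.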
Abstract:
Background: This study measured grating visual acuity in 173 children between 6 and 48 months of age who had different types of spastic cerebral palsy (CP). Method: Behavioural acuity was measured with the Teller Acuity Cards (TAC) using a staircase psychophysical procedure; electrophysiological visual acuity was estimated using the sweep VEP (sVEP). Results: The proportion of children outside the superior tolerance limits was 44 of 63 (69%) and 50 of 55 (91%) for the tetraplegic group, 36 of 56 (64%) and 42 of 53 (79%) for the diplegic group, and 10 of 48 (21%) and 12 of 40 (30%) for the hemiplegic group, for sVEP and TAC respectively. For the sVEP, the greater visual acuity deficit found in the tetraplegic group was significantly different from that of the hemiplegic group (p < 0.001). In the TAC procedure, the mean visual acuity deficits of the tetraplegic and diplegic groups were significantly different from that of the hemiplegic group (p < 0.001). The differences between the sVEP and TAC mean visual acuity deficits were statistically significant for the tetraplegic (p < 0.001), diplegic (p < 0.001), and hemiplegic (p = 0.004) groups. Discussion: Better visual acuities were obtained with both procedures for hemiplegic children than for diplegic or tetraplegic children. Tetraplegic and diplegic children showed greater discrepancies between the TAC and sVEP results. Inter-ocular acuity differences were more frequent in sVEP measurements. Conclusions: Electrophysiologically measured visual acuity is better than behavioural visual acuity in children with CP.
Abstract:
This paper addresses the m-machine no-wait flow shop problem in which the set-up time of a job is separated from its processing time. The performance measure considered is the total flowtime. A new hybrid metaheuristic, Genetic Algorithm-Cluster Search, is proposed to solve the scheduling problem. The performance of the proposed method is evaluated, and the results are compared with those of the best method reported in the literature. Experimental tests show the superiority of the new method on the test problem set with regard to solution quality. (c) 2012 Elsevier Ltd. All rights reserved.
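For concreteness, here is a sketch of evaluating the total-flowtime objective for a job permutation under the no-wait constraint, assuming anticipatory, sequence-independent set-up times (one common model; the paper's exact assumptions may differ).

```python
import numpy as np

def total_flowtime(seq, p, s):
    """Total flowtime of permutation `seq` in an m-machine no-wait flow
    shop; p[j][k] and s[j][k] are the processing and (anticipatory,
    sequence-independent) set-up times of job j on machine k."""
    p, s = np.asarray(p, float), np.asarray(s, float)
    P = np.cumsum(p, axis=1)       # P[j, k]: offset at which j leaves machine k
    Q = P - p                      # Q[j, k]: offset at which j reaches machine k
    j0 = seq[0]
    start = max(0.0, float(np.max(s[j0] - Q[j0])))   # wait for the first set-ups
    flow = start + P[j0, -1]                         # completion of the first job
    prev = j0
    for j in seq[1:]:
        # smallest start-time gap so that j never waits on any machine and
        # every set-up finishes before j arrives there
        start += float(np.max(P[prev] + s[j] - Q[j]))
        flow += start + P[j, -1]
        prev = j
    return flow

# Tiny instance: 3 jobs, 2 machines
p = [[2, 3], [1, 4], [2, 2]]
s = [[1, 1], [1, 1], [1, 1]]
print(total_flowtime([0, 1, 2], p, s))
```

The delay term is the classic no-wait recurrence (each job's start is pushed until it can flow through all machines without pausing), extended by the set-up completion constraint on each machine.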