955 results for visual methods
Abstract:
Companies require information in order to gain a better understanding of their customers. Data on customers, their interests, and their behavior are collected through various loyalty programs. The amount of data stored in company databases has grown exponentially over the years and has become difficult to handle. This research area attracts much current interest, not only in academia but also in practice, as shown by the many magazines and blogs covering topics such as getting to know your customers, Big Data, information visualization, and data warehousing. In this Ph.D. thesis, the Self-Organizing Map and two extensions of it, the Weighted Self-Organizing Map (WSOM) and the Self-Organizing Time Map (SOTM), are used as data mining methods for extracting information from large amounts of customer data. The thesis focuses on how data mining methods can be used to model and analyze customer data in order to gain an overview of the customer base, as well as to analyze niche markets. The thesis uses real-world customer data to create models for customer profiling. The models built were evaluated by CRM experts from the retailing industry, who considered the information gained with the help of the models to be valuable and useful for decision making and for strategic planning.
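The thesis itself includes no code here, but as a rough illustration of the core technique, the following is a minimal NumPy sketch of a plain Self-Organizing Map (not the WSOM or SOTM extensions). The grid size, learning-rate schedule, and the `customers` feature matrix are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a basic Self-Organizing Map on row-wise customer feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # Grid coordinates, used by the Gaussian neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Decay learning rate and neighbourhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Best-matching unit: map node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Pull the BMU and its map neighbours toward the input vector.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_d2 / (2.0 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

# Hypothetical usage: rows are customers, columns are purchase-behaviour features.
customers = np.random.default_rng(1).random((500, 8))
som = train_som(customers)
```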
Abstract:
This study compared the effectiveness of multifocal visual evoked cortical potentials (mfVEP) elicited by pattern pulse stimulation with that of pattern reversal in producing reliable responses (signal-to-noise ratio > 1.359). Participants were 14 healthy subjects. Visual stimulation was delivered with a 60-sector dartboard display consisting of 6 concentric rings presented in either pulse or reversal mode. Each sector, consisting of 16 checks at 99% Michelson contrast and 80 cd/m² mean luminance, was controlled by a binary m-sequence in the time domain. The signal-to-noise ratio was generally larger in the pattern reversal mode than in the pattern pulse mode. The number of reliable responses was similar in the central sectors for the two stimulation modes; at the periphery, pattern reversal yielded a larger number of reliable responses. Pattern pulse stimuli performed similarly to pattern reversal stimuli in generating reliable waveforms in R1 and R2. The advantage of using both protocols to study mfVEP responses is their complementarity: in some patients, reliable waveforms in specific sectors may be obtained with only one of the two methods. The joint analysis of pattern reversal and pattern pulse stimuli increased the reliability rate for central sectors by 7.14% in R1, 5.35% in R2, 4.76% in R3, 3.57% in R4, 2.97% in R5, and 1.78% in R6. From R1 to R4, the reliability of generating mfVEPs was above 70% when both protocols were used. Thus, for very high reliability and a thorough examination of visual performance, it is recommended to use both stimulation protocols.
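As a rough illustration of the bookkeeping implied by the abstract (a sector's response counts as reliable when its signal-to-noise ratio exceeds 1.359, and the joint analysis counts a sector as reliable if either protocol yields a reliable waveform), here is a small sketch; the `snr_rev` and `snr_pul` arrays are hypothetical stand-ins, not the study's data.

```python
import numpy as np

SNR_CUTOFF = 1.359  # reliability criterion used in the abstract

def reliable(snr):
    """Boolean mask of sectors whose SNR exceeds the cutoff."""
    return np.asarray(snr) > SNR_CUTOFF

def reliability_rate(snr_reversal, snr_pulse=None):
    """Fraction of sectors with at least one reliable response.

    If responses from both protocols are given, a sector counts as reliable
    when either protocol yields a reliable waveform (the 'joint analysis').
    """
    ok = reliable(snr_reversal)
    if snr_pulse is not None:
        ok = ok | reliable(snr_pulse)
    return ok.mean()

# Hypothetical SNR values for the 60 sectors of one subject.
rng = np.random.default_rng(0)
snr_rev, snr_pul = rng.gamma(2.0, 1.0, 60), rng.gamma(2.0, 1.0, 60)
print(reliability_rate(snr_rev), reliability_rate(snr_rev, snr_pul))
```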
Abstract:
The purpose of the present study was to measure contrast sensitivity to equiluminant gratings using the steady-state visual evoked cortical potential (ssVECP) and psychophysics. Six healthy volunteers were evaluated with ssVECPs and psychophysics. The visual stimuli were red-green or blue-yellow horizontal sinusoidal gratings, 5° × 5°, 34.3 cd/m² mean luminance, presented at 6 Hz. Eight spatial frequencies from 0.2 to 8 cpd were used, each presented at 8 contrast levels. The contrast threshold was obtained by extrapolating second-harmonic amplitude values to zero. Psychophysical contrast thresholds were measured using stimuli at 6 Hz and static presentation. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. The ssVECP and both psychophysical contrast sensitivity functions (CSFs) were low-pass functions for red-green gratings. For electrophysiology, the highest contrast sensitivity values were found at 0.4 cpd (1.95 ± 0.15). The ssVECP CSF was similar to the dynamic psychophysical CSF, whereas the static CSF had higher values from 0.4 to 6 cpd (P < 0.05, ANOVA). Blue-yellow chromatic functions showed no specific tuning shape; however, at high spatial frequencies the evoked potentials showed higher contrast sensitivity than the psychophysical methods (P < 0.05, ANOVA). Evoked potentials can be used reliably to evaluate chromatic red-green CSFs in agreement with psychophysical thresholds, particularly if the same temporal properties are applied to the stimulus. For the blue-yellow CSF, the correlation between electrophysiology and psychophysics was poor at high spatial frequencies, possibly due to a greater effect of chromatic aberration on this kind of stimulus.
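The abstract's threshold procedure (extrapolating second-harmonic amplitude to zero and taking contrast sensitivity as the inverse of threshold) can be sketched as follows; the linear fit and the amplitude values are illustrative assumptions, not the study's analysis or data.

```python
import numpy as np

def contrast_threshold(contrasts, amplitudes):
    """Estimate threshold by linearly extrapolating amplitude vs. contrast to zero.

    Fits amplitude = a * contrast + b and returns the contrast at which the
    fitted line reaches zero amplitude.
    """
    a, b = np.polyfit(contrasts, amplitudes, 1)
    return -b / a

# Hypothetical second-harmonic amplitudes (µV) at eight contrast levels.
contrasts = np.array([0.02, 0.04, 0.08, 0.12, 0.16, 0.24, 0.32, 0.48])
amplitudes = np.array([0.1, 0.3, 0.7, 1.1, 1.5, 2.3, 3.1, 4.7])

threshold = contrast_threshold(contrasts, amplitudes)
sensitivity = 1.0 / threshold  # contrast sensitivity as the inverse of threshold
print(f"threshold ≈ {threshold:.3f}, sensitivity ≈ {sensitivity:.1f}")
```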
Abstract:
The impact of automatic and manual shelling methods during manual/visual sorting of different batches of Brazil nuts from the 2010 and 2011 harvests was evaluated in order to investigate aflatoxin prevention. The samples were tested as in-shell nuts, shells, shelled nuts, and pieces in order to evaluate the moisture content (mc), water activity (Aw), and total aflatoxin (LOD = 0.3 µg/kg and LOQ = 0.85 µg/kg) at the Brazil nut processing plant. The aflatoxin results for the manually shelled nut samples ranged from 3.0 to 60.3 µg/g, and those for the automatically shelled samples from 2.0 to 31.0 µg/g. All samples showed mc levels below the 15% limit; on the other hand, shelled samples from both harvests showed Aw levels above the limit. There were no significant differences between the manual and automatic shelling results during the sorting stages. Visual sorting, however, was effective in decreasing aflatoxin contamination for both methods.
Abstract:
This Bachelor's thesis was carried out as part of the PulpVision research project, whose aim is to develop image-based computation and classification methods for pulp quality monitoring in papermaking. As part of this project, a method for detecting curved structures in images had previously been developed and used to locate fibers in images; this method served as the starting point for the thesis. The aim of the work was to investigate whether features computed from different fiber images can be used to identify the species of the fibers in an image. The fiber images contained fibers from four tree species and one plant: acacia, birch, pine, eucalyptus, and wheat. For each species, 100 fiber images were selected and divided into two groups, the first used as a training set and the second as a test set. From the training set, descriptive features were computed for each fiber species and used to identify the fiber species in the test-set images. The images were produced by CEMIS-Oulu (Center for Measurement and Information Systems), a measurement technology unit at the University of Oulu. For each training image, the means and standard deviations of three features were computed: length, width, and curvature. In addition, the number of fibers detected in the image was counted. Different combinations of these features were used to test the identification accuracy on the test-set images with the k-nearest neighbors method and a naive Bayes classifier. The tests yielded promising results; for example, using the mean length and mean width, an accuracy of about 98% was achieved with both algorithms. The mean fiber length appeared to be the most discriminative feature for identification, and there was little variation in accuracy between the algorithms. Based on the test results, identification of fiber images is feasible, and only two features need to be computed from a fiber image for accurate identification. The classification algorithms used were very simple, yet they performed well in the tests.
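As an illustration of the reported setup (k-nearest neighbors and naive Bayes on mean fiber length and width, with half of the images for training and half for testing), here is a minimal scikit-learn sketch; the feature values and class separations are synthetic stand-ins, not data from the thesis.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-image features: [mean fiber length, mean fiber width].
rng = np.random.default_rng(0)
species = ["acacia", "birch", "pine", "eucalyptus", "wheat"]
X = np.vstack([rng.normal(loc=(i + 1, 0.1 * (i + 1)), scale=(0.2, 0.02), size=(100, 2))
               for i in range(len(species))])
y = np.repeat(species, 100)

# Half of the images as the training set, half as the test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for clf in (KNeighborsClassifier(n_neighbors=5), GaussianNB()):
    acc = clf.fit(X_train, y_train).score(X_test, y_test)
    print(type(clf).__name__, f"accuracy = {acc:.3f}")
```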
Abstract:
Convolutional Neural Networks (CNNs) have become the state-of-the-art method for many large-scale visual recognition tasks. For many practical applications, CNN architectures have a restrictive requirement: a large amount of labeled data is needed for training. The idea of generative pretraining is to obtain initial weights of the network by training it in a completely unsupervised way and then to fine-tune the weights for the task at hand using supervised learning. This thesis gives a general introduction to deep neural networks and their training algorithms, and applies these methods to classification of handwritten digits and natural images in order to develop unsupervised feature learning. The goal of the thesis is to find out whether the effect of pretraining is diminished by recent practical advances in the optimization and regularization of CNNs. The experimental results show that pretraining still acts as a substantial regularizer, but it is not a necessary step when training Convolutional Neural Networks with rectified activations. On handwritten digits, the proposed pretraining model achieved a classification accuracy comparable to the state-of-the-art methods.
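As a sketch of the generative-pretraining idea described above (unsupervised initialization of the convolutional weights, followed by supervised fine-tuning), here is a minimal PyTorch example using a convolutional autoencoder as the unsupervised stage; this is one common way to realize the idea, not necessarily the procedure used in the thesis, and the data batches are random stand-ins for real images.

```python
import torch
from torch import nn

# Convolutional encoder whose weights are first trained without labels.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
)

# Stage 1: unsupervised pretraining as an autoencoder (reconstruct the input).
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
)
autoencoder = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
x = torch.rand(64, 1, 28, 28)                 # stand-in for an unlabeled image batch
loss = nn.functional.mse_loss(autoencoder(x), x)
opt.zero_grad(); loss.backward(); opt.step()  # one pretraining step (loop in practice)

# Stage 2: supervised fine-tuning of the pretrained encoder plus a classifier head.
classifier = nn.Sequential(encoder, nn.Flatten(), nn.Linear(32 * 7 * 7, 10))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
y = torch.randint(0, 10, (64,))               # stand-in labels
loss = nn.functional.cross_entropy(classifier(x), y)
opt.zero_grad(); loss.backward(); opt.step()  # one fine-tuning step (loop in practice)
```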
Abstract:
Behavioral researchers commonly use single-subject designs to evaluate the effects of a given treatment. Several different methods of data analysis are used, each with its own set of methodological strengths and limitations. Visual inspection is commonly used as a method of analysing data that assesses the variability, level, and trend both within and between conditions (Cooper, Heron, & Heward, 2007). In an attempt to quantify treatment outcomes, researchers developed two methods for analysing data: Percentage of Non-overlapping Data Points (PND) and Percentage of Data Points Exceeding the Median (PEM). The purpose of the present study is to compare and contrast the use of Hierarchical Linear Modelling (HLM), PND, and PEM in single-subject research. The present study used 39 behaviours across 17 participants to compare the treatment outcomes of a group cognitive behavioural therapy program, applying PND, PEM, and HLM to three response classes of obsessive-compulsive behaviour in children with Autism Spectrum Disorder. Findings suggest that PEM and HLM complement each other and both add invaluable information to the overall treatment results. Future research should consider using both PEM and HLM when analysing single-subject designs, particularly grouped data with variability.
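For reference, PND is the percentage of treatment-phase data points that fall beyond the most extreme baseline point in the therapeutic direction, and PEM is the percentage of treatment-phase points beyond the baseline median. The sketch below assumes lower values indicate improvement (as with counts of obsessive-compulsive behaviour); the session data are made up for illustration.

```python
import numpy as np

def pnd(baseline, treatment, lower_is_better=True):
    """Percentage of Non-overlapping Data Points: share of treatment points
    beyond the most extreme baseline point (the minimum when improvement
    means a decrease, the maximum otherwise)."""
    baseline, treatment = np.asarray(baseline), np.asarray(treatment)
    if lower_is_better:
        return 100.0 * np.mean(treatment < baseline.min())
    return 100.0 * np.mean(treatment > baseline.max())

def pem(baseline, treatment, lower_is_better=True):
    """Percentage of Data Points Exceeding the (baseline) Median."""
    baseline, treatment = np.asarray(baseline), np.asarray(treatment)
    med = np.median(baseline)
    if lower_is_better:
        return 100.0 * np.mean(treatment < med)
    return 100.0 * np.mean(treatment > med)

# Hypothetical frequency counts of a target behaviour per session.
baseline = [8, 7, 9, 8, 10]
treatment = [6, 5, 4, 3, 4, 2, 3]
print(pnd(baseline, treatment), pem(baseline, treatment))
```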
Abstract:
DNA sequence representation methods are used to represent gene structure effectively and to support similarity/dissimilarity analysis of coding sequences. Many different kinds of representations have been proposed in the literature. They can be broadly classified into numerical, graphical, geometrical, and hybrid representation methods. Graphical and geometrical representations simplify DNA structure and function analysis because they give a visual representation of the DNA structure. In numerical methods, numerical values are assigned to the sequence and digital signal processing techniques are used to analyze it. Hybrid approaches for analyzing DNA sequences are also reported in the literature. This paper reviews the latest developments in DNA sequence representation methods, presents a taxonomy of the various methods, and compares them wherever possible.
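As a concrete example of a numerical representation followed by a digital-signal-processing step, here is a short sketch using the well-known Voss binary indicator mapping and a power spectrum; this is a generic illustration, not a method proposed in the reviewed paper.

```python
import numpy as np

def voss_indicators(seq):
    """Map a DNA string to four binary indicator sequences (Voss representation)."""
    seq = seq.upper()
    return {base: np.array([1 if s == base else 0 for s in seq]) for base in "ACGT"}

def spectral_power(indicators):
    """Total power spectrum summed over the four indicator sequences,
    a common DSP step, e.g. for detecting 3-base periodicity in coding regions."""
    return sum(np.abs(np.fft.fft(x)) ** 2 for x in indicators.values())

dna = "ATGGCGTACGTTAGCATGCATGCAAGT"
power = spectral_power(voss_indicators(dna))
print(power[: len(dna) // 2])  # inspect the first half of the spectrum
```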
Abstract:
Presentation at the 1997 Dagstuhl Seminar "Evaluation of Multimedia Information Retrieval", Norbert Fuhr, Keith van Rijsbergen, Alan F. Smeaton (eds.), Dagstuhl Seminar Report 175, 14.04. - 18.04.97 (9716). - Abstract: This presentation will introduce ESCHER, a database editor which supports visualization in non-standard applications in engineering, science, tourism, and the entertainment industry. It was originally based on the extended nested relational data model and is currently being extended to include object-relational properties such as inheritance, object types, integrity constraints, and methods. It serves as a research platform for areas such as multimedia and visual information systems, QBE-like queries, computer-supported cooperative work (CSCW), and novel storage techniques. In its role as a visual information system, a database editor must support browsing and navigation. ESCHER provides this access to data by means of so-called fingers, which generalize the cursor paradigm of graphical and text editors. On the graphical display, a finger is reflected by a colored area corresponding to the object the finger is currently pointing at. In a table, more than one finger may point at objects; one of them is the active finger and is used for navigating through the table. The talk will mostly concentrate on giving examples of this type of navigation and will discuss some of the architectural requirements for fast object traversal and display. ESCHER is available as public-domain software from our ftp site in Kassel. The portable C source can easily be compiled for any machine running UNIX and OSF/Motif, in particular our working environments, IBM RS/6000 and Intel-based LINUX systems. A port to Tcl/Tk is under way.
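As a loose, purely illustrative sketch of the finger idea (a generalized cursor that points at one object in a nested structure, with several fingers per table and one active finger used for navigation), here is a toy Python class; it is not ESCHER's implementation, and the nested table, field names, and methods are all hypothetical.

```python
from dataclasses import dataclass, field

# Toy nested-relational "table": values are scalars, lists, or nested records.
table = {
    "hotel": {"name": "Seeblick", "rooms": [{"no": 101, "beds": 2}, {"no": 102, "beds": 1}]},
}

@dataclass
class Finger:
    """A finger points at one object in the nested structure via a path of keys/indices."""
    root: object
    path: list = field(default_factory=list)

    def object(self):
        obj = self.root
        for step in self.path:
            obj = obj[step]
        return obj

    def descend(self, step):
        self.path.append(step)   # move the finger into a child object
        return self.object()

    def ascend(self):
        if self.path:
            self.path.pop()      # move the finger back to the parent object
        return self.object()

active = Finger(table)               # the active finger used for navigation
other = Finger(table, ["hotel"])     # a second finger may point elsewhere in the same table
active.descend("hotel"); active.descend("rooms"); print(active.descend(0))
print(other.object()["name"])
```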
Abstract:
Stimuli outside classical receptive fields have been shown to exert significant influence over the activities of neurons in primary visual cortex. We propose that contextual influences are used for pre-attentive visual segmentation, in a new framework called segmentation without classification. This means that segmentation of an image into regions occurs without classification of features within a region or comparison of features between regions. This segmentation framework is simpler than previous computational approaches, making it implementable by V1 mechanisms, though higher-level visual mechanisms are needed to refine its output. However, it easily handles a class of segmentation problems that are tricky for conventional methods. The cortex computes global region boundaries by detecting the breakdown of homogeneity or translation invariance in the input, using local intra-cortical interactions mediated by the horizontal connections. The difference between contextual influences near and far from region boundaries makes neural activities near region boundaries higher than elsewhere, making the boundaries more salient for perceptual pop-out. This proposal is implemented in a biologically based model of V1 and demonstrated using examples of texture segmentation and figure-ground segregation. The model performs segmentation in exactly the same neural circuit that solves the dual problem of contour enhancement, as suggested by experimental observations. Its behavior is compared with psychophysical and physiological data on segmentation, contour enhancement, and contextual influences. We discuss the implications of segmentation without classification and the predictions of our V1 model, and relate it to other phenomena such as asymmetry in visual search.
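The following is only a toy illustration of the central intuition (locations where local homogeneity, i.e. translation invariance, breaks down receive higher responses than the interior of uniform regions, so texture borders become salient without any classification of the features themselves); it is not the biologically based V1 model described above, and the orientation values are synthetic.

```python
import numpy as np

def boundary_saliency(features, radius=2):
    """Toy saliency: for each position in a 1-D array of local features
    (e.g. preferred orientations), measure how much the feature differs from
    its neighbourhood average. Homogeneous texture -> low saliency; the
    breakdown of homogeneity at a texture border -> high saliency."""
    n = len(features)
    saliency = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        neighbourhood = np.r_[features[lo:i], features[i + 1:hi]]
        saliency[i] = abs(features[i] - neighbourhood.mean())
    return saliency

# Two abutting textures: orientations near 0 deg on the left, near 90 deg on the right.
rng = np.random.default_rng(0)
texture = np.r_[rng.normal(0, 5, 30), rng.normal(90, 5, 30)]
s = boundary_saliency(texture)
print("most salient position:", int(np.argmax(s)))  # near the texture border (index ~30)
```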
Abstract:
We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, a small set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject, which is specifically designed to elicit one instantiation of each viseme. Using optical flow methods, the correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images can be generated. A complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer is exploited to determine which viseme transitions to use and the rate at which the morphing process should occur. In this manner, we are able to synchronize the visual speech stream with the audio speech stream, and hence give the impression of a photorealistic talking face.
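As a sketch of the general technique (dense optical-flow correspondence between two viseme images, then morphing along it by warping both images and cross-dissolving), here is a minimal OpenCV example; it is not MikeTalk's actual algorithm, the Farnebäck parameters are generic defaults, and the viseme file names are hypothetical.

```python
import cv2
import numpy as np

def warp(img, flow, t):
    """Warp img a fraction t of the way along a dense flow field (backward mapping)."""
    h, w = img.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + t * flow[..., 0]).astype(np.float32)
    map_y = (gy + t * flow[..., 1]).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

def viseme_transition(img_a, img_b, steps=10):
    """Intermediate frames between two viseme images via flow-based morphing."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    # Dense correspondences in both directions (Farnebäck optical flow).
    flow_ab = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    flow_ba = cv2.calcOpticalFlowFarneback(gray_b, gray_a, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        a_warped = warp(img_a, flow_ba, t)        # pull A toward B's geometry
        b_warped = warp(img_b, flow_ab, 1.0 - t)  # pull B toward A's geometry
        frames.append(cv2.addWeighted(a_warped, 1.0 - t, b_warped, t, 0.0))
    return frames

# Hypothetical usage with two equally sized viseme images:
# frames = viseme_transition(cv2.imread("viseme_a.png"), cv2.imread("viseme_b.png"))
```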
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters into their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data), and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations, and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made in which the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving deeper insight into the similarities and differences between these methods.
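One of the parametrizations mentioned, a power transformation, can be sketched as follows: the Box-Cox transform tends to the logarithm as its power tends to zero, so recomputing a map over a grid of power values moves smoothly from an analysis of the (near-)raw compositional data toward its logratio counterpart. The sketch below uses a plain PCA map as a simplified stand-in for the methods named above, with synthetic compositional data; it illustrates the "movie" idea frame by frame rather than the presenter's exact construction.

```python
import numpy as np

def boxcox(x, alpha):
    """Box-Cox power transform; tends to log(x) as alpha -> 0."""
    return np.log(x) if abs(alpha) < 1e-8 else (x ** alpha - 1.0) / alpha

def pca_map(data, n_components=2):
    """Row coordinates of a centred PCA (via SVD) for plotting a 2-D map."""
    centred = data - data.mean(axis=0)
    u, s, _ = np.linalg.svd(centred, full_matrices=False)
    return u[:, :n_components] * s[:n_components]

# Hypothetical compositional data: rows sum to 1, all parts strictly positive.
rng = np.random.default_rng(0)
comp = rng.dirichlet(alpha=[4, 3, 2, 1], size=100)

# "Frames" of the movie: the same map recomputed as the power parameter shrinks toward 0.
frames = {alpha: pca_map(boxcox(comp, alpha)) for alpha in (1.0, 0.5, 0.25, 0.1, 0.0)}
for alpha, coords in frames.items():
    print(f"alpha={alpha}: first row at {coords[0].round(3)}")
```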
Abstract:
Objectives: To compare total macular volume (TMV) and retinal thickness, best-corrected visual acuity (BCVA), and the incidence of macular edema after cataract surgery in eyes treated with nepafenac 0.1%, dexamethasone 0.1%, or the combination of the two. Methods: In a prospective, randomized, controlled, observer-masked study, 147 eyes with cataract were randomized to receive dexamethasone 0.1% every 6 hours for 12 days postoperatively, nepafenac 0.1% every 8 hours before surgery and for 15 postoperative days, or the combination of the two. The measured outcomes were intraocular pressure at the first and sixth week, BCVA, TMV (measured by OCT), foveal and perifoveal thickness, and the presence or absence of macular edema (foveal thickness > 240 microns, measured by OCT) at 6 weeks. Results: In the first week, intraocular pressure was significantly higher in the dexamethasone group (p<0.05) and was similar between groups at the sixth week. There were no significant differences in visual acuity. Perifoveal thickness was significantly lower in the nepafenac group (p<0.05). The incidence of macular edema measured by OCT was 7.5% (11 eyes); of these, 5 belonged to the dexamethasone group, 3 to the nepafenac group, and 3 to the combination group (p>0.05). Conclusion: The results show that nepafenac is at least as effective as dexamethasone in preventing postoperative inflammation and, in addition, may be useful for preventing macular edema after cataract surgery.
Abstract:
We present a study based on visual research methods, concerning the construction of the political identity of May Day demonstrators in Bogotá, Colombia, between 2006 and 2010, from a symbolic interactionist perspective.
Abstract:
Objective: To determine the clinical and imaging prognostic factors for final visual acuity at 3, 6, and 12 months in patients with retinal vein occlusions treated with antiangiogenic therapy. Materials and methods: Longitudinal study of a cohort of 60 patients with retinal vein occlusion treated with intravitreal antiangiogenic therapy (ranibizumab and bevacizumab), collected retrospectively between 2010 and 2012. A retrospective analytical cohort study was then carried out to establish the association with, and prediction of, changes in visual acuity. Results: A statistically significant difference was found between baseline and 3, 6, and 12 months post-treatment (p < 0.001, Friedman test). In the model explaining changes in visual acuity at 3, 6, and 12 months relative to baseline, the presence of the IS/OS junction at 3 months was significantly associated with changes in visual acuity at 6 months (r²=0.232, p<0.001) and at 12 months (r²=0.506, p<0.001); at 12 months, male gender was also associated (r²=0.277, p<0.001). Taking the visual acuity values at 3, 6, and 12 months with initial visual acuity as a covariate, a significant association was found at all three time points: 3 months (r²=0.697, p<0.001), 6 months (r²=0.745, p<0.001), and 12 months (r²=0.786, p<0.001). Branch vein occlusion, compared with central occlusion, was significantly associated with changes in visual acuity at 6 months (r²=0.662, p=0.04). Conclusion: The presence of the IS/OS junction at 3 months and initial visual acuity are strong predictors of visual acuity at 6 and 12 months; male gender is related to improvement in final visual acuity, and branch occlusion is related to improvement in visual acuity at 6 months.