904 results for Network Analysis Methods
Abstract:
Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes that combine both techniques simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method for the analysis of data cubes (data from single-field observations, containing two spatial dimensions and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data onto these coordinates produce images we call tomograms. The association of the tomograms (images) with eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this property is fundamental for their handling and interpretation. When the data cube contains objects that present uncorrelated physical phenomena, the orthogonality of the eigenvectors may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low-ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a previously unknown type 1 active nucleus. Furthermore, we show that this nucleus is displaced from the centre of the stellar bulge.
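A minimal sketch of the PCA decomposition described above, assuming the cube is held as a NumPy array of shape (n_wavelengths, ny, nx); the function name and data layout are illustrative, not taken from the paper:

```python
import numpy as np

def pca_tomography(cube):
    """Minimal PCA tomography sketch.

    cube : ndarray of shape (n_lambda, ny, nx), one spectral and two
           spatial dimensions.
    Returns the eigenspectra (eigenvectors), the tomograms (projections
    of each spaxel onto the eigenspectra) and the associated variances.
    """
    n_lambda, ny, nx = cube.shape
    # Arrange the data as (n_spaxels, n_lambda): one spectrum per row.
    X = cube.reshape(n_lambda, ny * nx).T
    X = X - X.mean(axis=0)                  # remove the mean spectrum
    # SVD of the mean-subtracted data; rows of Vt are the eigenspectra,
    # ordered by decreasing variance (squared singular values).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigenspectra = Vt                       # (n_components, n_lambda)
    variances = s ** 2 / (X.shape[0] - 1)
    # Tomograms: projection of every spaxel onto each eigenspectrum,
    # reshaped back onto the spatial grid.
    tomograms = (X @ Vt.T).T.reshape(-1, ny, nx)
    return eigenspectra, tomograms, variances
```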
Abstract:
Introduction: The characterization of the microbial communities infecting the endodontic system in each clinical condition may help establish a correct prognosis and distinct treatment strategies. The purpose of this study was to determine the bacterial diversity in primary endodontic infections by 16S ribosomal RNA (rRNA) sequence analysis. Methods: Samples from root canals of untreated asymptomatic teeth (n = 12) exhibiting periapical lesions were obtained, 16S rRNA bacterial genomic libraries were constructed and sequenced, and bacterial diversity was estimated. Results: A total of 489 clones were analyzed (mean, 40.7 +/- 8.0 clones per sample). Seventy phylotypes were identified, of which six were novel phylotypes belonging to the family Ruminococcaceae. The mean number of taxa per canal was 10.0, ranging from 3 to 21 per sample; 65.7% of the cloned sequences represented phylotypes for which no cultivated isolates have been reported. The most prevalent taxa were Atopobium rimae (50.0%), Dialister invisus, Prevotella oris, Pseudoramibacter alactolyticus, and Tannerella forsythia (33.3%). Conclusions: Although several key species predominate in endodontic samples of asymptomatic cases with periapical lesions, the primary endodontic infection is characterized by a wide bacterial diversity, mostly represented by members of the phylum Firmicutes belonging to the class Clostridia, followed by the phylum Bacteroidetes. (J Endod 2011;37:922-926)
Abstract:
Sociable robots are embodied agents that are part of a heterogeneous society of robots and humans. They should be able to recognize human beings and each other, and to engage in social interactions. The use of a robotic architecture may strongly reduce the time and effort required to construct a sociable robot. Such an architecture must have structures and mechanisms to allow social interaction, behavior control, and learning from the environment. Learning processes described in the science of Behavior Analysis may lead to the development of promising methods and structures for constructing robots able to behave socially and to learn through interactions with the environment by a process of contingency learning. In this paper, we present a robotic architecture inspired by Behavior Analysis. Methods and structures of the proposed architecture, including a hybrid knowledge representation, are presented and discussed. The architecture has been evaluated in the context of a nontrivial real problem: the learning of shared attention, employing an interactive robotic head. The learning capabilities of this architecture have been analyzed by observing the robot interacting with a human and the environment. The obtained results show that the robotic architecture is able to produce appropriate behavior and to learn from social interaction. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
In a recent paper, the hydrodynamic code NEXSPheRIO was used in conjunction with STAR analysis methods to study two-particle correlations as a function of Δη and Δφ. The various structures observed in the data were reproduced. In this work, we discuss the origin of these structures and present new results.
Abstract:
Estimating the sizes of hard-to-count populations is a challenging and important problem that occurs frequently in social science, public health, and public policy. This problem is particularly pressing in HIV/AIDS research because estimates of the sizes of the most at-risk populations (illicit drug users, men who have sex with men, and sex workers) are needed for designing, evaluating, and funding programs to curb the spread of the disease. A promising new approach in this area is the network scale-up method, which uses information about the personal networks of respondents to make population size estimates. However, if the target population has low social visibility, as is likely to be the case in HIV/AIDS research, scale-up estimates will be too low. In this paper we develop a game-like activity that we call the game of contacts in order to estimate the social visibility of groups, and report results from a study of heavy drug users in Curitiba, Brazil (n = 294). The game produced estimates of social visibility that were consistent with qualitative expectations but of surprising magnitude. Further, a number of checks suggest that the data are of high quality. While motivated by the specific problem of population size estimation, our method could be used by researchers more broadly and adds to long-standing efforts to combine the richness of social network analysis with the power and scale of sample surveys. (C) 2010 Elsevier B.V. All rights reserved.
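For reference, the basic scale-up estimator that the method builds on can be sketched as follows; the game-of-contacts procedure itself is not reproduced here, and the single visibility correction factor, names and numbers below are purely illustrative:

```python
def scale_up_estimate(reported_hidden, network_sizes, total_population,
                      visibility=1.0):
    """Basic network scale-up estimate of a hidden population's size.

    reported_hidden  : list of how many hidden-population members each
                       respondent reports knowing
    network_sizes    : list of each respondent's estimated personal
                       network size (degree)
    total_population : size of the general population sampled from
    visibility       : assumed fraction of ties to hidden-population
                       members that respondents are actually aware of;
                       values below 1 scale the raw estimate upward
    """
    raw = sum(reported_hidden) / sum(network_sizes) * total_population
    return raw / visibility

# Illustrative numbers only.
print(scale_up_estimate([2, 0, 1, 3], [300, 250, 400, 350], 1_750_000,
                        visibility=0.4))
```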
Abstract:
Nowadays, noninvasive methods of diagnosis have increased due to the demands of a population that requires fast, simple and painless exams. These methods have become possible because of the growth of technology that provides the necessary means of collecting and processing signals. New methods of analysis, such as nonlinear dynamics, have been developed to understand the complexity of voice signals, aiming at exploring their dynamic nature. The purpose of this paper is to characterize healthy and pathological voice signals with the aid of relative entropy measures. The phase space reconstruction technique is also used as a way to select interesting regions of the signals. Three groups of samples were used, one from healthy individuals and the other two from people with nodules in the vocal folds and with Reinke's edema. All of them are recordings of the sustained vowel /a/ from Brazilian Portuguese. The paper shows that nonlinear dynamical methods seem to be a suitable technique for voice signal analysis, due to the chaotic component of the human voice. Relative entropy is well suited due to its sensitivity to uncertainties, since the pathologies are characterized by an increase in signal complexity and unpredictability. The results showed that the pathological groups had higher entropy values, in accordance with the other vocal acoustic parameters presented. This suggests that these techniques may improve and complement the voice analysis methods currently available to clinicians. (C) 2008 Elsevier Inc. All rights reserved.
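A small sketch of the two ingredients named above, relative (Kullback-Leibler) entropy between amplitude distributions and phase-space reconstruction by time-delay embedding; the bin count, embedding dimension and delay are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.stats import entropy

def relative_entropy(signal_a, signal_b, bins=64):
    """Relative (Kullback-Leibler) entropy between the amplitude
    distributions of two voice signals; larger values indicate
    distributions that diverge more."""
    lo = min(signal_a.min(), signal_b.min())
    hi = max(signal_a.max(), signal_b.max())
    p, _ = np.histogram(signal_a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(signal_b, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12                     # avoid empty bins in the ratio p/q
    return entropy(p + eps, q + eps)

def delay_embed(x, dim=3, tau=10):
    """Phase-space reconstruction of a 1D signal by time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```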
Abstract:
This work describes a novel methodology for automatic contour extraction from 2D images of 3D neurons (e.g. camera lucida images and other types of 2D microscopy). Most contour-based shape analysis methods cannot be used to characterize such cells because of overlaps between neuronal processes. The proposed framework is specifically aimed at the problem of contour following even in the presence of multiple overlaps. First, the input image is preprocessed in order to obtain an 8-connected skeleton with one-pixel-wide branches, as well as a set of critical regions (i.e., bifurcations and crossings). Next, for each subtree, the tracking stage iteratively labels all valid pixels of branches up to a critical region, where it determines the suitable direction in which to proceed. Finally, the labeled skeleton segments are followed in order to yield the parametric contour of the neuronal shape under analysis. The reported system was successfully tested on several images, and the results for a set of three neuron images are presented here, each pertaining to a different class, i.e. alpha, delta and epsilon ganglion cells, containing a total of 34 crossings. The algorithm successfully resolved all these overlaps. The method has also been found to exhibit robustness even for images with close parallel segments. The proposed method is robust and may be implemented in an efficient manner. The introduction of this approach should pave the way for a more systematic application of contour-based shape analysis methods in neuronal morphology. (C) 2008 Elsevier B.V. All rights reserved.
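The preprocessing stage (binarization, thinning to a roughly one-pixel-wide skeleton, and detection of candidate critical regions) can be sketched with scikit-image as below; this is an illustrative fragment, not the authors' implementation, and the tracking and contour-following stages are omitted:

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def preprocess_neuron_image(gray):
    """Binarize a grayscale neuron image, thin it to a skeleton, and flag
    candidate critical regions (bifurcations/crossings), i.e. skeleton
    pixels with three or more skeleton neighbours."""
    binary = gray > threshold_otsu(gray)   # assumes a bright neuron on a dark background
    skel = skeletonize(binary)
    neighbour_count = convolve(skel.astype(int), np.ones((3, 3), dtype=int),
                               mode='constant') - skel
    critical = skel & (neighbour_count >= 3)
    return skel, critical
```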
Abstract:
Two-dimensional and 3D quantitative structure-activity relationship studies were performed on a series of diarylpyridines that act as cannabinoid receptor ligands, by means of hologram quantitative structure-activity relationship and comparative molecular field analysis methods. The quantitative structure-activity relationship models were built using a data set of 52 CB1 ligands that can be used as anti-obesity agents. Significant correlation coefficients (hologram quantitative structure-activity relationships: r² = 0.91, q² = 0.78; comparative molecular field analysis: r² = 0.98, q² = 0.77) were obtained, indicating the potential of these 2D and 3D models for untested compounds. The models were then used to predict the potency of an external test set, and the predicted (calculated) values are in good agreement with the experimental results. The final quantitative structure-activity relationship models, along with the information obtained from the 2D contribution maps and 3D contour maps in this study, are useful tools for the design of novel CB1 ligands with improved anti-obesity potency.
Abstract:
The motivation for this thesis work is the need to improve the reliability of equipment and the quality of service to railway passengers, as well as the requirement for cost-effective and efficient condition-maintenance management in rail transportation. This thesis work develops a fusion of various machine vision analysis methods to achieve high performance in the automation of wooden rail track inspection.

Condition monitoring in rail transport is done manually by a human operator, where people rely on inference systems and assumptions to develop conclusions. The use of condition monitoring allows maintenance to be scheduled, or other actions to be taken to avoid the consequences of failure, before the failure occurs. Manual or automated condition monitoring of materials in fields of public transportation such as railways, aerial navigation and traffic safety, where safety is of primary importance, requires non-destructive testing (NDT).

In general, wooden railway sleeper inspection is done manually by a human operator, who moves along the rail sleepers and gathers information by visual and sound analysis to check for the presence of cracks. Human inspectors working on the lines visually inspect the wooden rails to judge the quality of each sleeper. In this project a machine vision system is developed based on the manual visual analysis procedure, using digital cameras and image processing software to perform similar inspections. Manual inspection requires much effort, is at times error prone, and discrimination can be difficult even for a human operator because of frequent changes in the inspected material. The machine vision system developed classifies the condition of the material by examining individual pixels of the images, processing them, and attempting to develop conclusions with the assistance of knowledge bases and features.

A pattern recognition approach is developed based on the methodological knowledge from the manual procedure. The pattern recognition approach for this thesis work was developed and realized through a non-destructive testing method to identify flaws in the manually performed condition monitoring of sleepers.

In this method, a test vehicle is designed to capture sleeper images similar to the visual inspection by a human operator, and the raw data for the pattern recognition approach are provided by the captured images of the wooden sleepers. The data from the NDT method were further processed and appropriate features were extracted.

The collection of data by the NDT method aims to achieve high accuracy and reliable classification results. A key idea is to use an unsupervised classifier, based on the features extracted by the method, to discriminate the condition of the wooden sleepers as either good or bad. A self-organising map is used as the classifier for the wooden sleeper classification.

In order to achieve greater integration, the data collected by the machine vision system were combined by a strategy called fusion. Data fusion was examined at two different levels, namely sensor-level fusion and feature-level fusion. As the goal was to reduce human error in classifying rail sleepers as good or bad, the results obtained by feature-level fusion, compared with the actual classification, were satisfactory.
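One way to sketch the unsupervised classification step is a small self-organising map trained directly on the extracted feature vectors; the grid size, learning schedule and feature layout below are assumptions for illustration, not the thesis configuration:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=100, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organising map. 'data' holds one feature vector per
    sleeper image; after training, map units can be inspected and their
    regions labelled as good or bad sleeper conditions."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates, used by the Gaussian neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing='ij'), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood
        for x in rng.permutation(data):
            # Best-matching unit: the closest weight vector to the sample.
            bmu = np.unravel_index(
                np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w))
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights
```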
Abstract:
Objective: To investigate whether spirography-based objective measures are able to effectively characterize the severity of unwanted symptom states (Off and dyskinesia) and discriminate them from the motor state of healthy elderly subjects. Background: Sixty-five patients with advanced Parkinson's disease (PD) and 10 healthy elderly (HE) subjects performed repeated assessments of spirography, using a touch-screen telemetry device in their home environments. On inclusion, the patients were either treated with levodopa-carbidopa intestinal gel or were candidates for switching to this treatment. On each test occasion, the subjects were asked to trace a pre-drawn Archimedes spiral shown on the screen, using an ergonomic pen stylus. The test was repeated three times and was performed with the dominant hand. A clinician used a web interface that animated the spiral drawings, allowing him to observe different kinematic features, such as accelerations and spatial changes, during the drawing process and to rate different motor impairments. Initially, the motor impairments of drawing speed, irregularity and hesitation were rated on a 0 (normal) to 4 (extremely severe) scale, followed by classifying the momentary motor state of the patient into two categories, Off and dyskinesia. A sample of spirals drawn by HE subjects was randomly selected and used in the subsequent analysis. Methods: The raw spiral data, consisting of stylus position and timestamp, were processed using time series analysis techniques such as the discrete wavelet transform, approximate entropy and dynamic time warping in order to extract 13 quantitative measures representing meaningful motor impairment information. A principal component analysis (PCA) was used to reduce the dimensions of the quantitative measures to 4 principal components (PCs). In order to classify the motor states into three categories (Off, HE and dyskinesia), a logistic regression model was used as a classifier to map the 4 PCs to the corresponding clinically assigned motor state categories. A stratified 10-fold cross-validation (also known as rotation estimation) was applied to assess the generalization ability of the logistic regression classifier to future independent data sets. To investigate mean differences of the 4 PCs across the three categories, a one-way ANOVA followed by Tukey multiple comparisons was used. Results: The agreement between computed and clinician ratings was very good, with a weighted area under the receiver operating characteristic curve (AUC) of 0.91. The mean PC scores differed across the three motor state categories, although at different levels. The first 2 PCs were good at discriminating between the motor states, whereas PC3 was good at discriminating between HE subjects and PD patients. The mean scores of PC4 showed a trend across the three states but without significant differences. The Spearman's rank correlations between the first 2 PCs and the clinically assessed motor impairments were as follows: drawing speed (PC1, 0.34; PC2, 0.83), irregularity (PC1, 0.17; PC2, 0.17), and hesitation (PC1, 0.27; PC2, 0.77). Conclusions: These findings suggest that spirography-based objective measures are valid measures of spatial- and time-dependent deficits and can be used to distinguish drug-related motor dysfunctions between Off and dyskinesia in PD. These measures can be potentially useful during the clinical evaluation of individualized drug-related complications, such as over- and under-medication, thus maximizing the amount of time patients spend in the On state.
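The classification pipeline described in the Methods (13 measures reduced to 4 PCs, logistic regression, stratified 10-fold cross-validation) maps naturally onto scikit-learn; the feature matrix and labels below are random placeholders, not the study data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row of 13 quantitative spiral measures per test occasion (placeholder data).
# y: clinician-assigned motor state (0 = Off, 1 = HE, 2 = dyskinesia), placeholder labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))
y = rng.integers(0, 3, size=200)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),                 # reduce the 13 measures to 4 PCs
    LogisticRegression(max_iter=1000),   # map the PCs to motor state categories
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(model, X, y, cv=cv).mean())
```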
Abstract:
Motivated by the development of a graphical representation of networks with a large number of vertices, useful for collaborative filtering applications, this work proposes the use of cohesion surfaces over a multidimensionally scaled thematic base. To this end, it uses a combination of classical multidimensional scaling and Procrustes analysis in an iterative algorithm that produces partial solutions, which are later combined into a global solution. Applied to an example of book-loan transactions at the Karl A. Boedecker Library, the proposed algorithm produces interpretable and thematically coherent outputs and exhibits lower stress than the classical scaling solution.
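A compact sketch of the two building blocks named in the abstract, classical (Torgerson) multidimensional scaling and Procrustes alignment of partial solutions; the toy data and the merging step are illustrative, not the proposed algorithm itself:

```python
import numpy as np
from scipy.spatial import procrustes

def classical_mds(D, k=2):
    """Classical (Torgerson) multidimensional scaling of a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Toy demonstration: embed a distance matrix, then align a rotated copy of
# the configuration with Procrustes analysis, as one would align
# overlapping partial solutions before merging them into a global one.
rng = np.random.default_rng(0)
points = rng.normal(size=(30, 5))
D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
coords_a = classical_mds(D, k=2)
coords_b = coords_a @ np.array([[0.0, -1.0], [1.0, 0.0]])   # rotated copy
aligned_a, aligned_b, disparity = procrustes(coords_a, coords_b)
print(disparity)    # close to 0: the configurations match after alignment
```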
Abstract:
Business strategy is a young discipline. Compared with the fields of economics and sociology, the field of business strategy can be seen as a more recent phenomenon, although it is extremely dynamic in its capacity to create differentiated theoretical approaches. This work discusses the recent proliferation of theories in business strategy, proposing a classification model for these theories based on an empirical analysis of the model of schools of strategic thought developed by Mintzberg, Ahlstrand and Lampel in their book Strategy Safari (1998). The possible consequences of the interaction between theory and practice are also discussed, presenting what we define as the platypus syndrome.
Abstract:
Recommendation systems based on indirect cooperation can be implemented in libraries by applying network analysis concepts and procedures. A thematic distance measure, initially developed for dichotomous variables, was generalized and applied to co-occurrence matrices, allowing all available information about users' behaviour with respect to the consulted items to be exploited. As a result, highly coherent specialized subgroups were formed, for which base lists and personalized lists were generated in the usual way. Programmable applications capable of handling matrices, such as the S-Plus software, were used for the computations (with advantages over the specialized UCINET 5.0 software) and proved sufficient for processing thematic groups of up to 10,000 users.
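A minimal sketch of one way a presence/absence distance can be generalized to co-occurrence counts; the specific measure used in the study is not stated here, so the 1 - sum(min)/sum(max) form below is purely illustrative:

```python
import numpy as np

def generalized_jaccard_distance(u, v):
    """1 - sum(min)/sum(max) on count vectors: one plausible generalization
    of a dichotomous (presence/absence) distance to co-occurrence counts.
    Illustrative only, not necessarily the measure used in the study."""
    denom = np.maximum(u, v).sum()
    return 0.0 if denom == 0 else 1.0 - np.minimum(u, v).sum() / denom

# counts[i, j] = how often user i consulted items in thematic class j (toy data).
counts = np.array([[3, 0, 1],
                   [2, 1, 0],
                   [0, 4, 2]])
D = np.array([[generalized_jaccard_distance(a, b) for b in counts] for a in counts])
print(D)
```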
Abstract:
Evolution is present in world dynamics, and it is in just such a transformational environment that companies find themselves. In a knowledge economy, physical assets alone are unable to provide the profits needed to meet shareholders' demands. An invisible component now comes into play to define strategies and drive results: intangible assets. Banking financing systems, however, have not kept pace with this knowledge revolution and its resulting new income generation techniques. Credit analysis methods at most financing agents do not yet employ any intangible parameters in their methodology. This paper discusses the importance of intangible assets by focusing on their role as an influencing factor in decisions to finance technology-based companies. By studying the credit risk classification system employed by FINEP, Brazil's federal agency for innovation development, we suggest indicators for intangibles that might be put to use in the Financiadora.
Abstract:
Last week I sat down with a Brazilian acquaintance who was shaking his head over the state of national politics. A graduate of a military high school, he'd been getting e-mails from former classmates, many of them now retired army officers, who were irate over the recent presidential elections. "We need to kick these no-good Petistas out of office," one bristled, using the derogatory shorthand for members of the ruling Workers Party, or PT in Portuguese.