983 results for "Software clones Detection"
Abstract:
This work involved the development of a smart system dedicated to detecting surface burning in the grinding process through constant monitoring of the process by acoustic emission and electrical power signals. A program in Visual Basic® for Windows® was developed that collects the signals through an analog-to-digital converter and processes them using known burn-detection algorithms. Three further parameters are proposed here and a comparative study is carried out. When burning occurs, the newly developed software sends a control signal warning the operator or interrupting the process, and delivers process information via the Internet. In parallel, the user can also intervene in the process via the Internet, changing parameters and/or monitoring the grinding process. The findings of the comparative study of the various parameters are also discussed here. Copyright © 2006 by ABCM.
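As a rough illustration of the monitoring loop such a system runs, consider the Python sketch below. The acquisition callbacks and the threshold are hypothetical stand-ins for the analog-to-digital converter interface and a calibrated burn limit, and the statistic computed (RMS of the acoustic emission block times mean power) is just one plausible burn parameter, not necessarily one of those compared in the paper.

```python
# Minimal sketch of an acquisition-and-detection loop for grinding burn
# monitoring. read_ae_block / read_power_block / alarm are hypothetical
# callbacks standing in for the A/D converter interface and the control
# output; BURN_THRESHOLD is a placeholder for a calibrated limit.
import math

BURN_THRESHOLD = 1.0  # hypothetical calibrated burn limit

def rms(samples):
    """Root-mean-square of one acquisition block."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def monitor(read_ae_block, read_power_block, alarm):
    """Poll the converter block by block and raise the alarm on burn."""
    while True:
        ae = read_ae_block()          # acoustic emission samples
        power = read_power_block()    # electrical power samples
        # Illustrative burn statistic: AE activity scaled by mean power
        parameter = rms(ae) * (sum(power) / len(power))
        if parameter > BURN_THRESHOLD:
            alarm(parameter)          # warn operator / interrupt process
```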
Abstract:
Nowadays there is great interest in detecting structural damage in systems using nondestructive tests. Once a failure is detected, for instance a crack, corrective action can be taken. There are several different approaches that can be used to obtain information about the existence, location and extent of a fault in a system by non-destructive tests. Among these methodologies, one can mention different optimization techniques, for instance classical methods, genetic algorithms, neural networks, etc. Most of these techniques, which are based on element-by-element adjustments of a finite element (FE) model, take advantage of the dynamic behavior of the model. In practical situations, however, it is usually almost impossible to obtain an accurate model. In this paper, an experimental technique for damage location is proposed, based on the H∞ norm. The dynamic properties of the structure were identified from experimental data using the eigensystem realization algorithm (ERA). The experimental test was carried out on a beam structure by varying the mass of one element. A piezoelectric sensor was used for the output signal, and the sinusoidal input signal was generated with the SignalCalc® software.
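For reference, here is a compact sketch of the eigensystem realization algorithm mentioned above: from measured impulse-response samples (Markov parameters) it builds block-Hankel matrices, truncates their SVD, and recovers a discrete-time state-space model. The SISO formulation, window sizes, and model order below are user choices, not the paper's exact setup.

```python
# Eigensystem Realization Algorithm (ERA), SISO sketch. Input y is the
# measured impulse response, with y[0] the feedthrough term and y[k] the
# k-th Markov parameter; rows, cols, and model order n are user choices.
import numpy as np

def era(y, rows, cols, n):
    """Identify a discrete-time state-space model (A, B, C) of order n."""
    # Block-Hankel matrices built from the impulse-response samples
    H0 = np.array([[y[i + j + 1] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[y[i + j + 2] for j in range(cols)] for i in range(rows)])
    # Truncated singular value decomposition of H0
    U, s, Vt = np.linalg.svd(H0)
    U, Vt = U[:, :n], Vt[:n, :]
    S_sqrt = np.diag(np.sqrt(s[:n]))
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:n]))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt   # state matrix
    B = (S_sqrt @ Vt)[:, :1]                        # input matrix
    C = (U @ S_sqrt)[:1, :]                         # output matrix
    # Modal poles: np.log(np.linalg.eigvals(A)) / dt for sample period dt
    return A, B, C
```

Damage shifts the modal frequencies and damping ratios identified this way, which is what a norm-based location technique such as the one in the paper exploits.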
Abstract:
Oxacillin is an alternative for the treatment of Staphylococcus spp. infections; however, resistance to this drug has become a major problem over recent decades. The main objective of this study was to epidemiologically characterize coagulase-negative staphylococci (CoNS) strains recovered from blood of patients hospitalized in a Brazilian teaching hospital. Oxacillin resistance was analyzed in 160 strains isolated from blood culture samples by phenotypic methods, detection of the mecA gene, and determination of intermediate sensitivity to vancomycin on brain heart infusion agar supplemented with 4 and 6 μg/mL vancomycin. In addition, characterization of the epidemiological profile by staphylococcal cassette chromosome mec (SCCmec) typing and clonal analysis by pulsed-field gel electrophoresis (PFGE) were performed. The mecA gene was detected in 72.5% of the isolates. Methicillin-resistant CoNS isolates exhibited the highest minimum inhibitory concentrations and multiresistance when compared to methicillin-susceptible CoNS strains. Typing classified 32.8% of the isolates as SCCmec I and 50% as SCCmec III. PFGE typing of the SCCmec III Staphylococcus epidermidis isolates identified 6 clones disseminated in different wards that persisted from 2002 to 2009. The high oxacillin resistance rates found in this study and clonal dissemination in different wards highlight the importance of good practices in nosocomial infection control and of the rational use of antibiotic therapy in order to prevent the dissemination of these clones. © 2013 Elsevier Inc.
Abstract:
Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are well-known and well-established tools in the information security world. However, their lack of integration with network equipment such as switches and routers limits what these tools can do and demands careful sizing of the hardware resources used to implement them, such as processing power, memory, and high-speed network interfaces. In the face of the many limitations encountered by researchers and network administrators, the concept of Software Defined Networking (SDN) emerged: by separating the control and data planes, it allows the operation of the network to be adapted to the needs of each user. Thus, given the standardization and flexibility offered by SDN and the limitations of current IPSs, this master's thesis proposes IPSFlow, a framework that uses an SDN-based network and the OpenFlow protocol to build an IPS with broad coverage, one that can block traffic flagged as malicious by the IDS(s) at the switch closest to the source. To validate the framework, experiments were carried out in the Mininet virtual environment using Snort as the IDS to analyze scan traffic generated by Nmap from one host to another. The collected results show that IPSFlow worked as planned, blocking 85% of the scan traffic.
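As an illustration of the blocking step such a framework automates, below is a minimal Python sketch that tails a Snort "fast"-format alert log and installs an OpenFlow drop rule through the ovs-ofctl utility. The alert-file path and bridge name are assumptions for a local Open vSwitch test bed such as Mininet; the actual IPSFlow framework pushes rules through an SDN controller to the switch closest to the traffic source rather than shelling out locally.

```python
# Sketch: react to Snort alerts by dropping the offending source at an
# Open vSwitch bridge. ALERT_FILE and BRIDGE are hypothetical values.
import re
import subprocess
import time

ALERT_FILE = "/var/log/snort/alert.fast"   # hypothetical path
BRIDGE = "s1"                              # hypothetical OVS bridge

# Source IP in a fast alert line, e.g. "{TCP} 10.0.0.1:1234 -> 10.0.0.2:80"
SRC_RE = re.compile(r"\{\w+\} (\d+\.\d+\.\d+\.\d+)\S* ->")

def block(src_ip):
    """Install a drop rule for all IP traffic from src_ip."""
    subprocess.run(["ovs-ofctl", "add-flow", BRIDGE,
                    f"priority=100,ip,nw_src={src_ip},actions=drop"],
                   check=True)

def follow(path):
    """Yield new lines appended to the alert file (like `tail -f`)."""
    with open(path) as f:
        f.seek(0, 2)                       # start at end of file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(0.5)

blocked = set()
for alert in follow(ALERT_FILE):
    m = SRC_RE.search(alert)
    if m and m.group(1) not in blocked:
        block(m.group(1))
        blocked.add(m.group(1))
```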
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance.
2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use.
3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated.
4. A first step in the analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance (illustrated in the sketch below).
5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap.
6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat modeling, and information about accessing the analysis engines directly from other software.
7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with these theoretical developments, we describe state-of-the-art software that implements the methods and makes them accessible to practicing ecologists.
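As a concrete illustration of point 4, a minimal sketch of the conventional-distance-sampling step follows: it fits a half-normal detection function g(x) = exp(-x²/2σ²) to perpendicular distances by maximum likelihood and converts the resulting effective strip half-width into a density estimate. The truncation distance w and transect length L are inputs; this is a bare-bones stand-in for what the Distance engines do, with no variance estimation, covariates, or stratification.

```python
# Conventional line-transect distance sampling with a half-normal key.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def fit_half_normal(distances, w):
    """MLE of sigma for perpendicular distances truncated at w."""
    def nll(log_sigma):
        sigma = np.exp(log_sigma)
        # Normalizing constant: integral of g(x) over [0, w]
        mu = quad(lambda x: np.exp(-x**2 / (2 * sigma**2)), 0, w)[0]
        return -np.sum(-distances**2 / (2 * sigma**2) - np.log(mu))
    res = minimize_scalar(nll, bounds=(-5, 10), method="bounded")
    return np.exp(res.x)

def density(distances, w, L):
    """Objects per unit area from a line-transect survey of length L."""
    sigma = fit_half_normal(np.asarray(distances, float), w)
    # Effective strip half-width (ESW) under the fitted detection function
    esw = quad(lambda x: np.exp(-x**2 / (2 * sigma**2)), 0, w)[0]
    return len(distances) / (2 * L * esw)   # D = n / (2 * L * ESW)
```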
Abstract:
Software product line (SPL) engineering offers several advantages in the development of families of software products, such as reduced costs, high quality and a short time to market. A software product line is a set of software-intensive systems, each of which shares a common core set of functionalities, but also differs from the other products through customization tailored to fit the needs of individual groups of customers. The differences between products within the family are well understood and organized into a feature model that represents the variability of the SPL. Products can then be built by generating and composing features described in the feature model. Testing of software product lines has become a bottleneck in the SPL development lifecycle, since many of the techniques used in their testing have been borrowed from traditional software testing and do not directly take advantage of the similarities between products. This limits the overall gains that can be achieved in SPL engineering. Recent work proposed by both industry and the research community for improving SPL testing has begun to consider this problem, but there is still a need for better testing techniques that are tailored to SPL development. In this thesis, I make two primary contributions to software product line testing. First, I propose a new definition for testability of SPLs that is based on the ability to re-use test cases between products without a loss of fault detection effectiveness. I build on this idea to identify elements of the feature model that contribute positively and/or negatively towards SPL testability. Second, I provide a graph-based testing approach called the FIG Basis Path method that selects products and features for testing based on a feature dependency graph. This method should increase our ability to re-use results of test cases across successive products in the family and reduce testing effort. I report the results of a case study involving several non-trivial SPLs and show that for these objects, the FIG Basis Path method is as effective as testing all products, but requires us to test no more than 24% of the products in the SPL.
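A rough sketch of the graph-based selection idea follows: given a feature dependency graph, greedily pick root-to-leaf paths so that every edge is covered and each new path contributes at least one previously unused edge. The example graph and the greedy construction are illustrative assumptions only, not the thesis's actual FIG Basis Path algorithm.

```python
# Greedy edge-covering path selection over a feature dependency graph.
from collections import deque

def shortest_path(graph, start, goal_test):
    """Breadth-first path from start to the first node passing goal_test."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if goal_test(path[-1]):
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def basis_paths(graph, root):
    """Cover every edge; each new path adds at least one unused edge."""
    is_leaf = lambda n: not graph.get(n)
    edges = {(u, v) for u, vs in graph.items() for v in vs}
    paths, covered = [], set()
    while covered != edges:
        u, v = next(e for e in edges if e not in covered)
        head = shortest_path(graph, root, lambda n: n == u)
        tail = shortest_path(graph, v, is_leaf)
        path = head + tail
        paths.append(path)
        covered.update(zip(path, path[1:]))
    return paths

# Toy feature dependency graph: edges point from a feature to the
# features it depends on; "base" is the root of the inclusion graph.
fig = {"base": ["f1", "f2"], "f1": ["f3"], "f2": ["f3"], "f3": []}
print(basis_paths(fig, "base"))  # e.g. [['base','f1','f3'], ['base','f2','f3']]
```

Each selected path corresponds to a product configuration to test, which is how a path-based method can exercise all feature dependencies while testing only a fraction of the products.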
Abstract:
Background: The Brazilian population mainly descends from European colonizers, Africans and Native Americans. Some Afro-descendants have lived in small isolated communities since the slavery period. The epidemiological status of HBV infection in Quilombo communities in northeastern Brazil remains unknown. The aim of this study was to characterize the HBV genotypes circulating in an isolated Quilombo community in Maranhão State, Brazil.
Methods: Seventy-two samples from the Frechal Quilombo community in Maranhão were collected. All serum samples were screened by enzyme-linked immunosorbent assays for the presence of hepatitis B surface antigen (HBsAg). HBsAg-positive samples were submitted to DNA extraction, and a fragment of 1306 bp partially comprising the HBsAg and polymerase coding regions (S/POL) was amplified by nested PCR and its nucleotide sequence determined. Viral isolates were genotyped by phylogenetic analysis using reference sequences from each genotype obtained from GenBank (n = 320). Sequences were aligned using the Muscle software and edited in the SE-AL software. Bayesian phylogenetic analyses were conducted using the Markov Chain Monte Carlo (MCMC) method to obtain the MCC tree in BEAST v.1.5.3.
Results: Of the 72 individuals, 9 (12.5%) were HBsAg-positive and 4 of them were successfully sequenced for the 1306 bp fragment. All of these samples were genotype A1 and grouped together with other sequences reported from Brazil.
Conclusions: This study represents the first characterization of the HBV genotypes of this community in Maranhão State, Brazil, where a high HBsAg frequency was found. We report a high frequency of HBV infection and the exclusive presence of subgenotype A1 in an Afro-descendant community in Maranhão State, Brazil.
Abstract:
This thesis aims to assess similarities and mismatches between the outputs of two independent methods for cloud cover quantification and classification that rest on quite different physical bases. One is the SAFNWC software package, designed to process radiance data acquired by the SEVIRI sensor in the VIS/IR range. The other is the MWCC algorithm, which uses the brightness temperatures acquired by the AMSU-B and MHS sensors in their channels centered on the MW water vapour absorption band. At a first stage, their cloud detection capability was tested by comparing the cloud masks they produced. These showed good agreement between the two methods, although some critical situations stand out: the MWCC fails to reveal clouds which, according to SAFNWC, are fractional, cirrus, very low, or high opaque clouds. In the second stage of the inter-comparison, the pixels classified as cloudy by both software packages were compared. The overall tendency observed for the MWCC method is an overestimation of the lower cloud classes. Conversely, the higher the cloud top, the more often the MWCC misses cloud portions that are detected by the SAFNWC tool. This also emerges from a series of tests carried out using the cloud top height information to evaluate the height ranges in which each MWCC category is defined. Therefore, although the two methods are intended to provide the same kind of information, in reality they return quite different details about the same atmospheric column. The SAFNWC retrieval, being very sensitive to the cloud top temperature, reports the actual level reached by the cloud. The MWCC, by exploiting the penetration capability of microwaves, gives information about levels located more deeply within the atmospheric column.
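A minimal sketch of the first inter-comparison stage might look like the following: a pixel-by-pixel agreement count between two binary cloud masks on a common grid. The toy arrays stand in for collocated SAFNWC and MWCC masks, which in practice would first have to be remapped to a common projection.

```python
# Pixel-by-pixel agreement between two binary cloud masks.
import numpy as np

def mask_agreement(mask_a, mask_b):
    """Return hit/miss counts and the fraction of agreeing pixels."""
    both_cloudy = np.sum(mask_a & mask_b)    # cloudy in both
    both_clear = np.sum(~mask_a & ~mask_b)   # clear in both
    only_a = np.sum(mask_a & ~mask_b)        # cloudy only in mask_a
    only_b = np.sum(~mask_a & mask_b)        # cloudy only in mask_b
    agreement = (both_cloudy + both_clear) / mask_a.size
    return both_cloudy, both_clear, only_a, only_b, agreement

# Toy collocated masks (True = cloudy pixel)
safnwc = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
mwcc = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(mask_agreement(safnwc, mwcc))
```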
Abstract:
The multi-target screening method described in this work allows the simultaneous detection and identification of 700 drugs and metabolites in biological fluids using a hybrid triple-quadrupole linear ion trap mass spectrometer in a single analytical run. After standardization of the method, the retention times of 700 compounds were determined and transitions for each compound were selected by a "scheduled" survey MRM scan, followed by an information-dependent acquisition using the sensitive enhanced product ion scan of a Q TRAP hybrid instrument. The identification of the compounds in the samples analyzed was accomplished by searching the tandem mass spectrometry (MS/MS) spectra against the library we developed, which contains electrospray ionization-MS/MS spectra of over 1,250 compounds. The multi-target screening method together with the library was included in a software program for routine screening and quantitation to achieve automated acquisition and library searching. With the help of this software application, the time for evaluation and interpretation of the results could be drastically reduced. This new multi-target screening method has been successfully applied for the analysis of postmortem and traffic offense samples as well as proficiency testing, and complements screening with immunoassays, gas chromatography-mass spectrometry, and liquid chromatography-diode-array detection. Other possible applications are analysis in clinical toxicology (for intoxication cases), in psychiatry (antidepressants and other psychoactive drugs), and in forensic toxicology (drugs and driving, workplace drug testing, oral fluid analysis, drug-facilitated sexual assault).
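As an illustration of the library-search step, here is a minimal sketch that scores an acquired MS/MS spectrum against a library entry using a dot-product (cosine) similarity after binning peaks onto a fixed m/z grid. The bin width and the example peak lists are illustrative assumptions; production library searches such as the one described add intensity weighting, precursor-mass filtering, and curated reference spectra.

```python
# Dot-product spectral matching for MS/MS library searching.
import numpy as np

def binned(spectrum, bin_width=1.0, max_mz=1000.0):
    """Convert (m/z, intensity) pairs into a fixed-length intensity vector."""
    vec = np.zeros(int(max_mz / bin_width))
    for mz, inten in spectrum:
        vec[int(mz / bin_width)] += inten
    return vec

def similarity(query, reference):
    """Cosine similarity between two binned spectra (1.0 = identical)."""
    q, r = binned(query), binned(reference)
    return float(q @ r / (np.linalg.norm(q) * np.linalg.norm(r)))

# Toy peak lists: (m/z, intensity) pairs for an acquired spectrum and
# a hypothetical library entry.
acquired = [(91.05, 100.0), (119.08, 45.0), (164.10, 80.0)]
library_entry = [(91.05, 95.0), (119.08, 50.0), (164.10, 85.0)]
print(round(similarity(acquired, library_entry), 3))
```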
Abstract:
The objective of our study was to compare the effect of dual-energy subtraction and bone suppression software alone and in combination with computer-aided detection (CAD) on the performance of human observers in lung nodule detection.