49 results for Bacteria detection
Abstract:
Oral squamous cell carcinoma (OSCC) is the most common malignancy of the oral cavity, and human papillomavirus (HPV) may play an important role in its development. The aim of this study was to investigate HPV DNA and viral types in 90 cases of OSCC. In addition, a comparative analysis between OSCC cases with and without HPV DNA was performed using the cell cycle markers p21 and pRb, in order to detect a possible correlation between these proteins and HPV infection. DNA was extracted from paraffin-embedded tissue and amplified by PCR (polymerase chain reaction) with primers PCO3+ and PCO4+, targeting a fragment of the human β-globin gene. After this procedure, PCR for HPV DNA detection was performed using the generic primer pair GP5+ and GP6+. The immunohistochemical study was carried out by the streptavidin-biotin technique, employing antibodies against the p21 and pRb proteins. Eighty-eight cases were positive for the human β-globin gene, and HPV DNA was found in 26 (29.5%) of them. No significant correlation was detected between HPV and age, sex, or anatomical site of the lesion. The most prevalent viral type was HPV 18 (80.8%). Regarding the immunohistochemical analysis, a significant association was detected between HPV presence and pRb immunoexpression (p = 0.044), whereas the same was not observed for the p21 protein (p = 0.416). It can be concluded that the low detection of HPV DNA in OSCC in the present study suggests a possible role of the virus in the development and progression of only a subset of this disease.
Abstract:
Studies of Callithrix jacchus addressing sex differences in performance on food-related tasks still leave room for the investigation of several factors, among them differences in color vision, which can directly influence the detection of visual cues on food items. This study aimed to analyze the performance of C. jacchus in tasks involving the detection of food items. The factors analyzed included differences in performance between the sexes and the behavioral categories displayed during the task. There were no differences in performance between the animals in carrying out the task, in any of the situations presented or in the behavioral categories observed. The fact that the task was very simple may have influenced the results, making it impossible to observe differences in performance. Males and females showed the same performance in all analyzed situations. Sex differences may not have been found due to the influence of external factors, such as the structure of the experimental apparatus. The animals were more efficient in carrying out the task during the morning than in the afternoon; light may have been one of the factors that influenced this result. Given the influence of other factors that probably contributed to these results, we believe that different results may be found in future work.
Abstract:
Among placental mammals, primates are the only ones to present trichromatic color vision. However, the distribution of trichromacy among primates is not homogeneous: Old World primates show uniform trichromacy (with all individuals being trichromats), while New World primates exhibit a color vision polymorphism (with dichromatic males and dichromatic or trichromatic females). Visual ecology studies have investigated which selective pressures may have been responsible for the evolution of trichromacy in primates, diverging from the dichromatic standard found in other mammals. Cues associated with foraging and with socio-reproductive status have been analyzed, indicating a trichromatic advantage in the rapid detection of visually conspicuous objects against a green background. Dichromats, however, are characterized by an efficient capture of cryptic and camouflaged stimuli. These phenotype-related advantages may be responsible for the maintenance of the visual polymorphism in New World primates and for the high incidence of color blindness in humans (around 8% in Caucasian men). An important factor that has not yet been taken into account experimentally is predation risk and its effect on the evolution of trichromacy in primates. To address this question, we prepared and edited pictures of animals with different coats: oncillas (Leopardus spp.), puma (Puma concolor) and ferret (Galictis cuja). The specimens were taxidermized and photographed in three different vegetation scenarios (dense forest, cerrado and grassland). The images of the predators were manipulated so that they fit into two categories of stimulus size (small or large). After color calibration and photo editing, the images were presented to 40 humans (20 dichromats and 20 trichromats) by a computer program that displayed a set of four photos at a time (one containing the taxidermized animal amid the background vegetation and three depicting only the background vegetation) and recorded the response latency and success rate of the subjects. The results show a trichromatic advantage in detecting potential predators. Predator detection was influenced by the background, the predator species, the size of the stimulus and the observer's visual phenotype. As humans have a high rate of dyschromatopsias compared to wild Catarrhini or tribal human populations, it is possible that the increased rate of dichromats results from reduced pressure for rapid predator detection. Since our species came to live in more cohesive groups, more resistant to attack by predators, with the advent of agriculture and the formation of villages, it is possible that the lower risk of predation has reduced selection in favor of trichromats.
Abstract:
Soil contamination by pesticides is an environmental problem that needs to be monitored and avoided. However, the lack of fast, accurate and low-cost analytical methods for detecting residual pesticides in complex matrices, such as soil, is a problem that remains unresolved and must be solved before the quality of environmental samples can be assessed. The intensive use of pesticides has increased since the 1960s because of the dependence on their use, causing biological imbalances and promoting resistance and the resurgence of high populations of pests and pathogens. This has contributed to the appearance of new pests that were previously under natural control. Developing analytical methods able to quantify pesticide residues in complex environments is still a challenge for many laboratories. The integration of two analytical methods, one ecotoxicological and one chemical, demonstrates the potential for the environmental analysis of methamidophos. The aim of this study was to evaluate an ecotoxicological method as an analytical screening tool for methamidophos in soil and to confirm the analyte concentration in the samples by a chemical method (LC-MS/MS). Two soils were tested, one clayey and one sandy; for both, the sorption kinetics of methamidophos followed a pseudo-second-order model. The clayey soil showed higher sorption of methamidophos and followed the Freundlich model, while the sandy soil followed the Langmuir model. The LC-MS/MS chemical method was satisfactorily validated, with adequate linearity, range, precision, accuracy and sensitivity. In chronic ecotoxicological tests with C. dubia, the NOEC was 4.93 and 3.24 ng L-1 of methamidophos for the elutriate assays of the sandy and clayey soils, respectively. At these levels, the ecotoxicological method was more sensitive than LC-MS/MS for the detection of methamidophos in the clayey and sandy soils. However, by decreasing the concentration of the analytical methamidophos standard and adjusting the validation conditions, the chemical method achieves a limit of quantification (LOQ) in the ng L-1 range, consistent with the ecotoxicological test. The methods described can be used as an analytical tool for methamidophos in soil, with the ecotoxicological analysis serving as screening and LC-MS/MS as confirmatory analysis of the analyte, fulfilling the objectives of this work.
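For reference, the sorption models named above have the following standard textbook forms (a sketch only; the parameter values fitted in the work are not reproduced here):

```latex
% Pseudo-second-order sorption kinetics
% (q_t: amount sorbed at time t, q_e: amount sorbed at equilibrium, k_2: rate constant)
\frac{dq_t}{dt} = k_2\,(q_e - q_t)^2
\quad\Longrightarrow\quad
\frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e}

% Freundlich isotherm (reported for the clayey soil)
q_e = K_F\,C_e^{1/n}

% Langmuir isotherm (reported for the sandy soil)
q_e = \frac{q_{\max}\,K_L\,C_e}{1 + K_L\,C_e}
```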
Abstract:
Web services are software units that allow access to one or more resources, supporting the deployment of business processes on the Web. They use well-defined interfaces based on standard web protocols, making communication possible between entities implemented on different platforms. Owing to these features, Web services can be integrated into service compositions to form more robust, loosely coupled applications. Web services are subject to failures, unwanted situations that may compromise the business process partially or completely. Failures can occur both in the design and in the execution of compositions. As a result, it is essential to create mechanisms that make the execution of service compositions more robust and able to handle failures. Specifically, we propose support for fault recovery in service compositions described in the PEWS language and executed on PEWS-AM, a graph reduction machine. To support fault recovery in PEWS-AM, we extended the PEWS language specification and adapted the graph translation and reduction rules of this machine. These contributions were made both at the abstract machine model level and at the implementation level.
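PEWS syntax and the PEWS-AM reduction rules are not reproduced here; purely as a language-agnostic illustration, the hypothetical sketch below shows the kind of retry-and-compensate behavior a fault-recovery extension can attach to a composition step (all names, such as `CompositionStep` and `ServiceFault`, are invented for this example).

```python
import time

class ServiceFault(Exception):
    """Raised when a web service invocation fails."""

class CompositionStep:
    """Hypothetical composition step with an attached fault-recovery policy."""

    def __init__(self, operation, retries=2, compensation=None):
        self.operation = operation        # callable that invokes the web service
        self.retries = retries            # how many times to re-invoke on failure
        self.compensation = compensation  # callable run if every attempt fails

    def run(self, payload):
        for attempt in range(1 + self.retries):
            try:
                return self.operation(payload)
            except ServiceFault:
                time.sleep(0.1 * (attempt + 1))  # simple back-off before retrying
        if self.compensation is not None:
            self.compensation(payload)           # undo partial effects
        raise ServiceFault("step failed after retries and compensation")
```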
Abstract:
In February 2011, the Brazilian National Agency of Petroleum, Natural Gas and Biofuels (ANP) published a new Technical Regulation for Onshore Pipelines for the Transport of Petroleum, its Derivatives and Natural Gas (RTDT). Among other requirements, the RTDT made the use of monitoring and leak detection systems compulsory for all onshore pipelines in the country. This document presents a study of a leak detection method based on pressure transients. The study was conducted on an industrial pipeline 16" in diameter and 9.8 km long. The pipeline is fully pressurized and carries a multiphase mixture of crude oil, water and natural gas. For the study, an infrastructure for data acquisition and for the validation of detection algorithms was built. The system was designed with a SCADA architecture. Piezoresistive sensors were installed at the ends of the pipeline, and Digital Signal Processors (DSPs) were used for sampling, storage and processing of the data. The study was based on simulations of leaks through valves and on the search for patterns that characterize the occurrence of such phenomena.
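The detection algorithms developed in the study are not reproduced here; the sketch below only illustrates the general idea of flagging a leak-like pressure transient when the measured pressure drops faster than a threshold within a short window. The sampling rate, window length and threshold are illustrative assumptions.

```python
import numpy as np

def detect_pressure_transients(pressure, fs=100.0, window_s=1.0, drop_threshold=0.5):
    """Flag samples where the pressure falls by more than `drop_threshold` (bar)
    within a sliding window of `window_s` seconds.

    pressure : 1-D array of pressure samples (bar)
    fs       : sampling frequency (Hz)
    """
    win = max(1, int(window_s * fs))
    p = np.asarray(pressure, dtype=float)
    # Difference between each sample and the value `win` samples later.
    drop = p[:-win] - p[win:]
    alarms = np.where(drop > drop_threshold)[0] + win
    return alarms  # indices where a leak-like transient is suspected
```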
Abstract:
This work presents contributions to the detection and identification of faults in multilevel inverters through the study of the converter's behavior under such operating conditions. Basically, the fault considered consists of an open circuit in any switch of a three-level diode-clamped inverter. The converter operation is characterized in the pre-fault and post-fault states. An analysis of the waveform behavior of the pole voltage, phase current and dc-bus current is also carried out, highlighting characteristics that allow the detection of the failure and even, under favorable conditions, the identification of the faulty device. A compensation strategy for the open-switch fault is also investigated, with the purpose of keeping the drive system operational when a failure occurs. The proposed topology uses SCRs in parallel with the internal switches of the inverter, which allows, in some cases, full utilization of the dc bus.
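As a loose illustration of waveform-based open-switch diagnosis (not the detection rules derived in this work), a commonly used indicator is the normalized average of each phase current over one fundamental period: it stays near zero for a healthy leg and drifts toward +1 or -1 when the current becomes unidirectional because one switch of that leg remains open.

```python
import numpy as np

def open_switch_indicator(phase_currents, samples_per_period):
    """Normalized mean of each phase current over the last fundamental period.

    A value near 0 suggests a healthy leg; a value approaching +1 or -1
    suggests the current became unidirectional, as happens when one switch
    of that leg stays open.  Thresholds must be tuned per drive.
    """
    i = np.asarray(phase_currents, dtype=float)[:, -samples_per_period:]
    mean_abs = np.mean(np.abs(i), axis=1) + 1e-9   # avoid division by zero
    return np.mean(i, axis=1) / mean_abs           # one indicator per phase
```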
Abstract:
In this work, we propose a two-stage algorithm for real-time fault detection and identification in industrial plants. Our proposal is based on the analysis of selected features using recursive density estimation and a new evolving classifier algorithm. More specifically, the proposed approach for the detection stage is based on the concept of density in the data space, which is not the same as a probability density function but is a very useful measure for abnormality/outlier detection. This density can be expressed by a Cauchy function and calculated recursively, which makes it memory- and computationally efficient and, therefore, suitable for on-line applications. The identification/diagnosis stage is based on a self-developing (evolving) fuzzy rule-based classifier system proposed in this work, called AutoClass. An important property of AutoClass is that it can start learning from scratch. Not only do the fuzzy rules not need to be prespecified, but neither does the number of classes (the number may grow, with new class labels being added by the on-line learning process), in a fully unsupervised manner. If an initial rule base exists, AutoClass can evolve and develop it further based on newly arriving faulty-state data. In order to validate our proposal, we present experimental results from a didactic level-control process, where the control and error signals are used as features for the fault detection and identification systems; the approach, however, is generic, and the number of features can be large thanks to the computationally lean methodology, since covariance or more complex calculations, as well as storage of old data, are not required. The results obtained are significantly better than those of the traditional approaches used for comparison.
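A minimal sketch of the recursive density computation, following the usual Cauchy-type formulation found in the evolving-systems literature (the authors' exact update and the AutoClass rule base are not reproduced): the sample mean and the mean squared norm are updated recursively, and the density of each new sample is obtained from them without storing past data.

```python
import numpy as np

class RecursiveDensityEstimator:
    """Cauchy-type data density, updated recursively (no data storage needed)."""

    def __init__(self, dim):
        self.k = 0
        self.mean = np.zeros(dim)        # recursive mean of the samples
        self.mean_sq = 0.0               # recursive mean of the squared norms

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        w = 1.0 / self.k
        self.mean = (1 - w) * self.mean + w * x
        self.mean_sq = (1 - w) * self.mean_sq + w * float(x @ x)
        # Cauchy-type density: close to 1 for typical samples, small for outliers.
        dist = float((x - self.mean) @ (x - self.mean))
        scatter = self.mean_sq - float(self.mean @ self.mean)
        return 1.0 / (1.0 + dist + max(scatter, 0.0))
```

A sudden drop of this density for incoming samples can then be used as the abnormality flag in the detection stage.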
Abstract:
The objective is to establish a methodology for monitoring oil spills on the sea surface in the Submerged Exploration Area of the Guamaré Pole Region, State of Rio Grande do Norte, using orbital Synthetic Aperture Radar (SAR) images integrated with meteo-oceanographic products. This methodology was applied in the following stages: (1) creation of a base map of the Exploration Area; (2) processing of NOAA/AVHRR and ERS-2 images for the generation of meteo-oceanographic products; (3) processing of RADARSAT-1 images for the monitoring of oil spills; (4) integration of the RADARSAT-1 images with the NOAA/AVHRR and ERS-2 image products; and (5) structuring of a database. The integration of the RADARSAT-1 image of the Potiguar Basin of 21 May 1999 with the base map of the Exploration Area of the Guamaré Pole Region, for the identification of the probable sources of the oil slicks, was used successfully in the detection of a probable oil slick near the outlet of the submarine outfall in the Exploration Area. To support the integration of RADARSAT-1 images with NOAA/AVHRR and ERS-2 image products, a methodology was developed for the classification of the oil spills identified in the RADARSAT-1 images. For this, the following unsupervised classification algorithms were tested: K-means, Fuzzy K-means and Isodata. These algorithms are part of the PCI Geomatics software, which was also used for the filtering of the RADARSAT-1 images. For validation of the results, the oil spills submitted to unsupervised classification were compared with the results of the Semivariogram Textural Classifier (STC). This classifier was developed especially for oil spill classification and requires the PCI software for the whole processing of the RADARSAT-1 images. Finally, the classification results were analyzed through visual analysis, calculation of size proportionality and statistical analysis. Among the three classification algorithms tested, no significant differences were observed in relation to the spills classified with the STC, in all of the analyses considered. Therefore, considering all the procedures, it has been shown that the described methodology can be successfully applied using the unsupervised classifiers tested, resulting in a decrease in the time needed to identify and classify oil spills, compared with the use of the STC classifier.
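The classifications in the study were run with the PCI Geomatics implementations; purely as an illustration of the unsupervised step, the sketch below clusters SAR backscatter intensities with a plain K-means loop (NumPy only, illustrative parameters).

```python
import numpy as np

def kmeans_pixels(image, k=2, iters=20, seed=0):
    """Cluster SAR backscatter intensities into k classes (e.g. slick vs. sea).

    image : 2-D array of pixel intensities
    Returns an integer label image of the same shape.
    """
    rng = np.random.default_rng(seed)
    x = image.astype(float).ravel()
    centers = rng.choice(x, size=k, replace=False)       # initial cluster centers
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()        # update each center
    return labels.reshape(image.shape)
```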
Abstract:
This study presents the results of an analysis of areas susceptible to degradation, carried out by remote sensing in a semi-arid region. Degradation is a matter of concern that affects the whole population, and the process is catalyzed by the deforestation of the savanna vegetation and by improper land use practices. The objective of this research is to use biophysical parameters from MODIS/Terra and TM/Landsat-5 images to determine areas susceptible to degradation in the semi-arid region of Paraíba. The study area is located in the central interior of Paraíba, in the sub-basin of the Taperoá River, with average annual rainfall below 400 mm and average annual temperature of 28 °C. To draw up the vegetation map, TM/Landsat-5 images were used, specifically the 5R4G3B color composition, commonly used for land use mapping. This map was produced by unsupervised maximum likelihood classification. The legend corresponds to the following targets: sparse and dense savanna vegetation, riparian vegetation and exposed soil. The biophysical parameters used from MODIS were the emissivity, the albedo and the Normalized Difference Vegetation Index (NDVI). The GIS programs used were the MODIS Reprojection Tool and the Georeferenced Information Processing System (SPRING), in which the MODIS and TM data were configured and processed, as well as ArcGIS for producing more customizable maps. Initially, the behavior of the vegetation emissivity was evaluated by adapting Bastiaanssen's NDVI-based equation to spatialize the emissivity and observe its changes during the year 2006. The albedo was used to assess its percentage increase between December 2003 and December 2004. The Landsat TM images were used for December 2005, according to image availability and to periods of low emissivity. These applications were implemented in the LEGAL spatial algebra language, a programming routine of SPRING that allows various types of algebra on spatial data and maps. The detection of areas susceptible to environmental degradation took into account the behavior of the emissivity of the savanna vegetation, which showed a seasonal pattern coinciding with the rainy season, reaching maximum emissivity from April to July and low emissivity in the remaining months. From the albedo images of December 2003 and 2004, the percentage increase was computed, allowing the generation of two distinct classes: areas with a percentage variation of 1 to 11.6% and areas with an albedo change of less than 1%. It was then possible to generate the map of susceptibility to environmental degradation by intersecting the exposed-soil class with the percentage variation of the albedo, resulting in classes of susceptibility to environmental degradation.
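For reference, the NDVI and the commonly cited form of Bastiaanssen's NDVI-based emissivity equation are sketched below, together with an illustrative definition of the percentage albedo change between the two dates; the exact coefficients and thresholds adopted in the work may differ.

```latex
% NDVI from red and near-infrared reflectances
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}}

% Bastiaanssen's empirical surface emissivity (commonly cited form, valid for NDVI > 0)
\varepsilon_0 = 1.009 + 0.047\,\ln(\mathrm{NDVI})

% Percentage albedo change between the two dates (illustrative definition)
\Delta\alpha\,(\%) = 100\,\frac{\alpha_{2004} - \alpha_{2003}}{\alpha_{2003}}
```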
Abstract:
The detection and diagnosis of faults, i.e., finding out how, where and why failures occur, has been an important area of study since humans began to be replaced by machines. However, no technique studied to date solves the problem definitively. Differences among dynamic systems, whether linear or nonlinear, time-varying or time-invariant, with physical or analytical redundancy, hamper the search for a single, general solution. In this work, a technique for fault detection and diagnosis (FDD) in dynamic systems is presented, using state observers in conjunction with other tools in order to create a hybrid FDD scheme. A modified state observer is used to generate a residual that also allows the detection and diagnosis of faults. A bank of fault signatures is created using statistical tools, and finally an approach based on the mean squared error (MSE) assists in the study of the behavior of the fault diagnosis even in the presence of noise. This methodology is then applied to a didactic plant with coupled tanks and to another with industrial instrumentation in order to validate the system.
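As a generic sketch (not the modified observer developed in this work), a discrete-time Luenberger observer, the residual it generates and the MSE evaluated over a window of samples can be written as:

```latex
% Luenberger-type observer for the plant x_{k+1} = A x_k + B u_k,\; y_k = C x_k
\hat{x}_{k+1} = A\hat{x}_k + Bu_k + L\,(y_k - C\hat{x}_k)

% Residual used for detection (ideally near zero in the fault-free case)
r_k = y_k - C\hat{x}_k

% Mean squared error of the residual over a window of N samples
\mathrm{MSE} = \frac{1}{N}\sum_{k=1}^{N} r_k^{\top} r_k
```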
Abstract:
Recent studies have shown evidence of log-periodic behavior in non-hierarchical systems. An interesting fact is the emergence of such properties in the rupture and breakdown of complex materials and in financial crashes. These may be examples of systems with self-organized criticality (SOC). In this work we study the detection of discrete scale invariance, or log-periodicity. We show theoretically the effectiveness of detection methods based on the Fourier transform of the log-periodic signal, not only with prior knowledge of the critical point but also before this point is reached. Specifically, we studied the Brazilian financial market with the objective of detecting discrete scale invariance in the Bovespa (Bolsa de Valores de São Paulo) index. Historical series were selected from periods in 1999, 2001 and 2008. We report evidence of possible log-periodicity before crashes, showing the applicability of the method to the study of systems with likely discrete scale invariance, as in the case of financial crashes, and providing additional evidence of the possibility of forecasting such crashes.
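For context, the standard log-periodic power-law (LPPL) form used in the crash-forecasting literature is sketched below; the Fourier-based detection procedure applied in this work is not reproduced.

```latex
% Standard log-periodic power-law (LPPL) form: t_c is the critical time,
% m the power-law exponent, \omega the log-periodic angular frequency.
\ln p(t) = A + B\,(t_c - t)^{m}\Bigl[1 + C\cos\bigl(\omega\ln(t_c - t) + \phi\bigr)\Bigr]
```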
Abstract:
This thesis is part of research on new materials for more active, sensitive and selective catalysts and gas sensors. The aim of this thesis was to develop and characterize cobalt ferrite with different morphologies, in order to study their influence on the electrical response and on the catalytic activity, and to organize these grains hierarchically for greater gas diffusivity in the material. The powders were produced via hydrothermal and solvothermal routes and were characterized by thermogravimetric analysis, X-ray diffraction, scanning electron microscopy, transmission electron microscopy (electron diffraction, high-resolution imaging and simulations) and energy dispersive spectroscopy. The catalytic and electrical properties were tested in the presence of CO and NO2 gases, the latter at different concentrations (1-100 ppm) and at different temperatures (room temperature to 350 °C). Nanooctahedra with an average size of 20 nm were obtained by the hydrothermal route. It was determined that the shape of the grains is mainly linked to the nature of the precipitating agent and to the presence of OH ions in the reaction medium. By the solvothermal method, spherical CoFe2O4 powders were prepared with grain sizes of 8 and 20 nm. The CoFe2O4 powders exhibit a strong response to small amounts of NO2 (10 ppm at 200 °C). The nanooctahedra have greater sensitivity than spherical grains of the same size, as well as shorter response and recovery times. These results were confirmed by modeling the response and recovery kinetics of the sensor. Initial tests of catalytic activity in the oxidation of CO between 100 °C and 350 °C show that the size effect is predominant over the shape effect with respect to the conversion of the reaction. The morphology of the grains influences the reaction rate: a higher reaction rate is obtained in the presence of the nanooctahedra. In order to improve the detection and catalytic properties of the material, we developed a methodology for hierarchizing the grains which involves the use of carbon-based templates.
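The kinetic model actually fitted in the thesis is not reproduced here; a first-order (Langmuir-type) response/recovery form often used to describe gas-sensor transients is sketched below as an assumption, with tau_res and tau_rec as fitted time constants.

```latex
% Response while the gas is present, and recovery after it is removed.
S(t) = S_{\max}\bigl(1 - e^{-t/\tau_{\mathrm{res}}}\bigr) \quad \text{(gas on)},
\qquad
S(t) = S_{\max}\,e^{-t/\tau_{\mathrm{rec}}} \quad \text{(gas off)}
```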
Abstract:
The conventional control schemes applied to Shunt Active Power Filters (SAPF) are harmonic-extractor-based strategies (HEBSs), because their effectiveness depends on how quickly and accurately the harmonic components of the nonlinear loads are identified. The SAPF can also be implemented without load harmonic extractors. In this case, the harmonic compensating term is obtained from the active power balance of the system. Such systems can be considered balanced-energy-based schemes (BEBSs), and their performance depends on how fast the system reaches the equilibrium state. Here, the phase currents of the power grid are indirectly regulated by double sequence controllers (DSC) with two degrees of freedom, where the internal model principle is employed to avoid reference frame transformations. Additionally, the DSC controller is robust when the SAPF operates under unbalanced conditions. Furthermore, SAPFs implemented without harmonic detection schemes compensate harmonic distortion and the reactive power of the load simultaneously. Their compensation capability, however, is limited by the rating of the SAPF power converter. This restriction can be minimized if the level of reactive power correction is managed. In this work, an estimation scheme for determining the filter currents is introduced in order to manage the compensation of reactive power. Experimental results are shown to demonstrate the performance of the proposed SAPF system.
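Schematically, and only as an illustration of the balanced-energy idea (not the controller design of this work), the grid is regulated to supply the average load power while the filter supplies the oscillating and reactive parts; the grid-current amplitude I_s* is assumed to come from the dc-link voltage regulation.

```latex
% Load instantaneous power split into average and oscillating parts
p_L(t) = \bar{p}_L + \tilde{p}_L(t)

% Indirect regulation: sinusoidal grid-current reference of amplitude I_s^*,
% the filter current supplying whatever the load needs beyond it.
i_s^{*}(t) = I_s^{*}\sin(\omega t), \qquad i_f(t) = i_L(t) - i_s(t)
```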
Abstract:
Valve stiction, or static friction, in control loops is a common problem in modern industrial processes. Recently, many studies have been developed to understand, reproduce and detect this problem, but its quantification still remains a challenge. Since the valve position (mv) is normally unknown in an industrial process, the main challenge is to diagnose stiction knowing only the process output signal (pv) and the control signal (op). This work presents an Artificial Neural Network approach to detect and quantify the amount of static friction using only the pv and op information. Different methods for preprocessing the training set of the neural network are presented; these methods are based on the calculation of the centroid and on the Fourier transform. The proposal is validated using a simulated process, and the results show a satisfactory measurement of stiction.
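The exact preprocessing used for the training set is not reproduced here; the sketch below only illustrates how a centroid of the op-pv data and a few Fourier magnitudes could be packed into a feature vector for a neural network (the function name and feature choices are assumptions).

```python
import numpy as np

def stiction_features(pv, op, n_harmonics=5):
    """Build a small feature vector from the pv-op data of one control loop.

    Features: centroid of the (op, pv) trajectory plus the magnitudes of the
    first `n_harmonics` Fourier components of each signal (dc term excluded).
    """
    pv = np.asarray(pv, dtype=float)
    op = np.asarray(op, dtype=float)
    centroid = np.array([op.mean(), pv.mean()])
    spec_pv = np.abs(np.fft.rfft(pv - pv.mean()))[1:n_harmonics + 1]
    spec_op = np.abs(np.fft.rfft(op - op.mean()))[1:n_harmonics + 1]
    return np.concatenate([centroid, spec_pv, spec_op])
```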