946 results for Pre-processing step


Relevance:

90.00%

Publisher:

Abstract:

We study a cognitive radio scenario in which the network of secondary users wishes to identify which primary user, if any, is transmitting. To achieve this, the nodes will rely on some form of location information. In our previous work we proposed two fully distributed algorithms for this task, with and without a pre-detection step, using propagation parameters as the only source of location information. In a real distributed deployment, each node must estimate its own position and/or propagation parameters. Hence, in this work we study the effect of uncertainty, or error, in these estimates on the proposed distributed identification algorithms. We show that the pre-detection step significantly increases robustness against uncertainty in nodes' locations.

Relevance:

90.00%

Publisher:

Abstract:

The PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ System is an integrated embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing to obtain SHM maps. PAMELA devices are built around a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, in addition to performing tests and transmitting the collected data to the controller, PAMELA devices can perform local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows the developer to download his own algorithm code and add the new data processing algorithm to the device. The SMA is developed in a virtual machine with an Ubuntu Linux distribution that includes all the software tools needed for the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using a delay-and-sum algorithm is provided.
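The delay-and-sum damage index mentioned at the end of the abstract can be sketched in a few lines. This is a minimal illustration, not the PAMELA implementation: the toy traces, the per-sensor delays, and the choice of index (energy of the summed trace) are assumptions made for the example.

```python
def delay_and_sum(signals, delays_samples):
    """Align each sensor trace by its assumed propagation delay
    (in samples) for a candidate damage location, then sum coherently."""
    length = min(len(s) - d for s, d in zip(signals, delays_samples))
    return [
        sum(s[d + n] for s, d in zip(signals, delays_samples))
        for n in range(length)
    ]

def damage_index(summed):
    """A basic damage index: energy of the delayed-and-summed trace."""
    return sum(x * x for x in summed)

# Toy example: three traces containing the same pulse at different offsets.
pulse = [0.0, 1.0, 0.0]
traces = [[0.0] * d + pulse + [0.0] * (6 - d) for d in (1, 2, 3)]
aligned = delay_and_sum(traces, [1, 2, 3])
print(damage_index(aligned))  # → 9.0
```

When the assumed delays match the true arrival times, the pulses add coherently and the index peaks; for wrong delays the sum stays incoherent and the index is small.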

Relevance:

90.00%

Publisher:

Abstract:

Computed tomography (CT) is the reference imaging modality for the study of lung diseases and the pulmonary vasculature. General lung vessel segmentation has been widely explored by the biomedical image processing community; however, the differentiation of arterial from venous irrigations is still an open problem. Indeed, the automatic separation of arterial and venous trees has been considered in recent years one of the main future challenges of the field. Artery-vein (AV) segmentation would be useful in different medical scenarios and in multiple pulmonary diseases or pathological states, allowing the study of the arterial and venous irrigations separately. Features such as density, geometry, topology, and size of blood vessels could be analyzed in diseases that imply vasculature remodeling, even making possible the discovery of new specific biomarkers that remain hidden today. Differentiation between arteries and veins could also help improve methods that process the various pulmonary structures. Nevertheless, despite its undoubted usefulness, AV segmentation has so far been unfeasible in clinical routine. The extreme complexity of the pulmonary vascular trees makes a manual separation of both structures unfeasible in realistic time, further encouraging the design of automatic or semiautomatic tools for the task. This lack of properly labeled cases seriously limits the development of AV segmentation systems, in which reference standards are needed both to train and to validate the algorithms.
For that reason, the design of synthetic lung CT images could overcome these difficulties by providing a database of pseudorealistic cases in a constrained and controlled scenario where each part of the image (including arteries and veins) is unequivocally differentiated. In this Ph.D. thesis we address both of these strongly interrelated problems. First, we describe a complete framework to automatically generate computational CT phantoms of the human lung. Starting from a priori biological and image-based knowledge about the topology of, and relationships between, the pulmonary structures, the system is able to generate synthetic pulmonary airways, arteries, and veins using iterative growth methods, which are then merged into a final simulated lung with realistic features. These synthetic cases, together with non-contrast real CT images, have been used as references in the development of a fully automatic pulmonary AV segmentation/separation method. The approach comprises a first generic extraction of the pulmonary vessels using scale-space particles, followed by an AV classification of those particles using graph cuts (GC), based on arterial/venous similarity scores obtained with a machine learning (ML) pre-classification step and on particle connectivity information. Validation of the pulmonary phantoms, through visual examination and quantitative measurements of intensity distributions, dispersion of structures, and relationships between arteries and airways, shows a good correspondence between real and synthetic lungs. The evaluation of the AV segmentation algorithm, based on different strategies to assess the accuracy of the vessel-particle classification, reveals accurate differentiation between arteries and veins in both real and synthetic cases, opening a wide range of possibilities in the clinical study of cardiopulmonary diseases and in the development of methodologies and new algorithms for the analysis of pulmonary images.
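The graph-cut step described above reduces AV labeling to an s-t minimum cut: unary terms encode arterial/venous similarity, pairwise terms encode particle connectivity. Below is a small self-contained sketch of that construction with a toy Edmonds-Karp max-flow solver; the three-particle example and its costs are invented for illustration and are not the thesis's actual energy model.

```python
from collections import deque

def min_cut_labels(n, artery_cost, vein_cost, edges):
    """Label n particles 'artery' or 'vein' by an s-t minimum cut.
    artery_cost[i] / vein_cost[i]: unary penalties for each label;
    edges: (i, j, w) connectivity terms discouraging label changes."""
    S, T = 'S', 'T'
    cap = {S: {}, T: {}}
    for i in range(n):
        cap[i] = {}
    for i in range(n):
        cap[S][i] = vein_cost[i]    # paid if i ends on the vein (sink) side
        cap[i][T] = artery_cost[i]  # paid if i ends on the artery side
    for i, j, w in edges:
        cap[i][j] = cap[i].get(j, 0) + w
        cap[j][i] = cap[j].get(i, 0) + w
    # Edmonds-Karp max flow on the residual capacities.
    while True:
        parent, q = {S: None}, deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] = cap[v].get(u, 0) + b
    # Nodes still reachable from S in the residual graph are arteries.
    seen, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return ['artery' if i in seen else 'vein' for i in range(n)]

# Toy chain: particle 0 looks arterial, particle 2 venous, 1 is ambiguous
# but strongly connected to 0, so connectivity pulls it to 'artery'.
labels = min_cut_labels(
    3,
    artery_cost=[0, 1, 5],
    vein_cost=[5, 1, 0],
    edges=[(0, 1, 3), (1, 2, 1)],
)
print(labels)  # → ['artery', 'artery', 'vein']
```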

Relevance:

90.00%

Publisher:

Abstract:

This thesis develops a specific methodology for the characterization of acoustic transmission systems based on the parametric array phenomenon. Such structures are well-known representatives of nonlinear acoustics and offer large technological opportunities. Parametric arrays exploit the nonlinear behavior of air to obtain, at the receiver's side, sonic signals that were generated within the ultrasonic range. The underlying physical process results in a complex relationship between the transmitted and received signals, including both strong equalization and distortion appreciable to a human listener, which clearly limits the fidelity attainable by such acoustic systems. Until recently, efforts devoted to the design of commercial systems have tried to mitigate this lack of fidelity through pre-processing schemes strongly dependent on theoretical physical models, derived directly from the nonlinear wave propagation equation; however, only limited enhancement has been achieved. In this thesis we propose a novel approach: obtaining a complete representation of the system through its projection onto Volterra series, which allows the subsequent inference of a computationally light and reliable compensation scheme. The main difficulty in deriving such a representation stems from the need for a complete identification methodology suited to this particular type of structure. For example, whenever identification techniques are involved, preliminary estimates of certain parameters that favor the correct parameterization of the system become indispensable. In this thesis we propose a methodology of our own to derive such initial conditions from simple measurements.
With this information available, we are in a position to formulate a complete nonlinear identification scheme based on pseudorandom signals, which increases the reliability of the system description and enables both the inference of the underlying block-oriented structure and the design of suitable compensation mechanisms. In this particular scenario, where modulation processes are involved, factors such as the operating point or the physical characteristics of the transducer render the usual characterization algorithms, including the proposed identification method, inapplicable. To eliminate this problem, a series of novel correction algorithms is proposed that makes the characterization applicable. The capabilities of these new algorithms are tested on a physical prototype designed and built for this purpose. To this end, we propose the methodology and instrumentation needed to carry out the design, the identification of the system, and its possible correction, all by means of digital processing techniques applied ahead of the transduction system. The algorithms are evaluated in terms of modeling error between the output of the real system and the output synthesized from the estimated model. This criterion ensures that compensation techniques can actually be applied, since they are highly sensitive to estimation errors in both magnitude and phase. Finally, the quality of the overall system is evaluated in terms of phase, coloration, and nonlinear distortion by means of a test protocol proposed in this thesis, as a prior step toward a future subjective evaluation.
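A truncated second-order Volterra model of the kind identified in the thesis can be written as y[n] = Σ_k h1[k]·x[n−k] + Σ_{k1,k2} h2[k1][k2]·x[n−k1]·x[n−k2]. The sketch below simply evaluates that expansion; the kernels and input are arbitrary toy values, not a fitted parametric-array model.

```python
def volterra2(x, h1, h2):
    """Output of a truncated second-order Volterra model with memory M:
    a linear kernel h1 (length M) plus a quadratic kernel h2 (M x M)."""
    M = len(h1)
    y = []
    for n in range(len(x)):
        past = [x[n - k] if n - k >= 0 else 0.0 for k in range(M)]
        linear = sum(h1[k] * past[k] for k in range(M))
        quadratic = sum(h2[k1][k2] * past[k1] * past[k2]
                        for k1 in range(M) for k2 in range(M))
        y.append(linear + quadratic)
    return y

# Toy kernels (memory M = 2): a short FIR filter plus a weak x[n]^2 term.
y = volterra2([1.0, 2.0], h1=[1.0, 0.5], h2=[[0.1, 0.0], [0.0, 0.0]])
print(y)
```

Identification amounts to estimating h1 and h2 from input-output data; once they are known, a compensation scheme can be inferred by approximately inverting the model.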

Relevance:

90.00%

Publisher:

Abstract:

Surface wave methods, with emphasis on Rayleigh waves, form the core of this doctoral work. Initially, Rayleigh waves were modeled, allowing a sensitivity study of their dispersion curves under different configurations of physical parameters representing various layer models; this revealed which parameters have greater or lesser sensitivity, as well as some effects caused by low Poisson's ratios. In the data inversion stage, the Rayleigh wave modeling was used to build the objective function, which, combined with the least-squares approach through the Levenberg-Marquardt method, allowed the implementation of a local-search algorithm responsible for the inversion of surface wave data. Since this is a local-search procedure, the inversion algorithm was complemented by a pre-inversion stage that generates an initial model, making the inversion procedure faster and more efficient. Seeking even greater efficiency, especially for layer models with velocity inversions, a post-inversion algorithm was implemented based on a trial-and-error procedure that minimizes the relative root-mean-square error of the data inversion. More than 50 layer models were used to test the modeling, pre-inversion, inversion, and post-inversion of the data, allowing precise adjustment of the mathematical and physical parameters present in the various scripts implemented in Matlab. Before the data acquired in the field could be inverted, they had to be treated in the processing stage, whose main objective is the extraction of the dispersion curve originated by the surface waves. For this purpose, three processing methodologies with distinct mathematical approaches were implemented, also in Matlab.
These methodologies were tested and evaluated with synthetic and real data, making it possible to assess the strengths and weaknesses of each methodology studied, as well as the limitations caused by the discretization of the field data. Finally, the processing, pre-inversion, inversion, and post-inversion stages were unified into a single program for treating surface (Rayleigh) wave data. The program was applied to real data from the study of a geological problem in the Taubaté Basin, where it was possible to map the geological contacts along the seismic acquisition points and to compare them with an existing initial model based on geomorphological observations of the study area, the geological map of the region, and global and local geological information on tectonic movements in the region. The geophysical information, combined with the geological information, allowed the generation of an analytical profile of the study region with two geological interpretations, confirming the suspicion of neotectonics in the region: the geological contacts between the Tertiary and Quaternary deposits were identified and fit the initial half-graben model dipping to the southeast.
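The local-search inversion combines least squares with Levenberg-Marquardt damping. A minimal one-parameter version of that update rule is shown below; the toy "dispersion" forward model, linear in a single half-space velocity, is an invented stand-in for the full multilayer Rayleigh-wave forward problem.

```python
def lm_fit_1d(f, df, x, y, p0, lam=1e-2, iters=50):
    """Minimal scalar Levenberg-Marquardt: finds p minimizing
    sum_i (f(x_i, p) - y_i)^2, blending Gauss-Newton and gradient descent."""
    def sse(p):
        return sum((f(xi, p) - yi) ** 2 for xi, yi in zip(x, y))
    p = p0
    for _ in range(iters):
        J = [df(xi, p) for xi in x]                  # Jacobian entries
        r = [f(xi, p) - yi for xi, yi in zip(x, y)]  # residuals
        g = sum(Ji * ri for Ji, ri in zip(J, r))     # J^T r
        H = sum(Ji * Ji for Ji in J)                 # J^T J
        step = -g / (H + lam)                        # damped update
        if sse(p + step) < sse(p):
            p, lam = p + step, lam * 0.5  # accept: trust Gauss-Newton more
        else:
            lam *= 2.0                    # reject: damp toward gradient descent
    return p

# Hypothetical forward model: phase velocity decays with frequency toward p.
model = lambda freq, p: p * (1.0 + 1.0 / (1.0 + freq))
dmodel = lambda freq, p: 1.0 + 1.0 / (1.0 + freq)
freqs = [1.0, 2.0, 4.0, 8.0]
observed = [model(fr, 300.0) for fr in freqs]  # synthetic data, true p = 300
p_hat = lm_fit_1d(model, dmodel, freqs, observed, p0=100.0)
print(round(p_hat, 1))  # → 300.0
```

The pre-inversion stage described above corresponds to choosing a good `p0`; the better the initial model, the fewer damped iterations a local search like this needs.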

Relevance:

90.00%

Publisher:

Abstract:

Frequent itemset mining is well explored for various data types, and its computational complexity is well understood; there are methods to deal effectively with the computational problems. This paper takes another approach to further improving the performance of frequent itemset computation. We made a series of observations that led us to invent data pre-processing methods such that the final step of the Partition algorithm, in which the combination of all local candidate sets must be processed, is executed on substantially smaller input data. The paper reports results from several experiments that confirmed our general, formally presented observations.
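The Partition algorithm's structure, whose final step the paper's pre-processing shrinks, can be sketched as follows: each partition yields its locally frequent itemsets, their union forms the global candidate set, and one final pass over all transactions counts those candidates. This is a brute-force skeleton for illustration only; the transactions and support threshold are invented.

```python
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """Brute-force frequent itemsets of a (small) list of transactions."""
    items = sorted({i for t in transactions for i in t})
    found = set()
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            if sum(set(cand) <= t for t in transactions) >= minsup:
                found.add(cand)
    return found

def partition_frequent(transactions, n_parts, rel_minsup):
    """Partition-algorithm skeleton: locally frequent itemsets from each
    partition form the global candidates, counted in one final pass."""
    size = -(-len(transactions) // n_parts)  # ceiling division
    candidates = set()
    for p in range(n_parts):
        part = transactions[p * size:(p + 1) * size]
        candidates |= frequent_itemsets(
            part, max(1, int(rel_minsup * len(part))))
    minsup = int(rel_minsup * len(transactions))
    return {c for c in candidates
            if sum(set(c) <= t for t in transactions) >= minsup}

# Hypothetical transactions, 50% relative minimum support.
trans = [{'a', 'b'}, {'a', 'b', 'c'}, {'a', 'c'}, {'b', 'c'}]
result = partition_frequent(trans, n_parts=2, rel_minsup=0.5)
print(sorted(result))
```

Every globally frequent itemset is locally frequent in at least one partition, so the candidate union is guaranteed complete; the cost of the final pass is driven by the size of that union, which is exactly what the paper's pre-processing reduces.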

Relevance:

90.00%

Publisher:

Abstract:

Currently the data storage industry faces huge challenges with the conventional method of recording data, longitudinal magnetic recording. This technology is fast approaching a fundamental physical limit known as the superparamagnetic limit. One way of deferring the superparamagnetic limit is to pattern the magnetic media, exploiting lithography tools to predetermine the areal density. Nanofabrication schemes employed to pattern the magnetic material include Focused Ion Beam (FIB), E-beam Lithography (EBL), UV Optical Lithography (UVL), self-assembled media synthesis, and Nanoimprint Lithography (NIL). Although manufacturing patterned media poses many challenges, the large potential gains in areal density make it one of the most promising new technologies on the horizon for future hard disk drives. This dissertation thus contributes to the development of future alternative data storage devices, and to deferring the superparamagnetic limit, by designing and characterizing patterned magnetic media using a novel nanoimprint replication process called Step and Flash Imprint Lithography (SFIL). As opposed to hot embossing and other high-temperature, low-pressure processes, SFIL can be performed at low pressure and room temperature. Initial experiments consisted of process flow design for the patterned structures on sputtered Ni-Fe thin films, chiefly a defectivity analysis of the SFIL process, conducted by fabricating devices of varying feature sizes (50 nm to 1 μm), inspecting them optically, and testing them electrically. Once the SFIL process was optimized, a number of Ni-Fe coated wafers were imprinted with a template carrying the patterned topography. A minimum feature size of 40 nm was obtained with varying pitch (1:1, 1:1.5, 1:2, and 1:3).
Characterization involved an extensive SEM study at each processing step as well as Atomic Force Microscopy (AFM) and Magnetic Force Microscopy (MFM) analysis.

Relevance:

90.00%

Publisher:

Abstract:

Sub-optimal recovery of bacterial DNA from whole blood samples can limit the sensitivity of molecular assays to detect pathogenic bacteria. We compared 3 different pre-lysis protocols (none, mechanical pre-lysis, and achromopeptidase pre-lysis) and 5 commercially available DNA extraction platforms for direct detection of Group B Streptococcus (GBS) in spiked whole blood samples, without enrichment culture. DNA was extracted using the QIAamp Blood Mini kit (Qiagen), UCP Pathogen Mini kit (Qiagen), QuickGene DNA Whole Blood kit S (Fuji), Speed Xtract Nucleic Acid Kit 200 (Qiagen) and MagNA Pure Compact Nucleic Acid Isolation Kit I (Roche Diagnostics Corp). Mechanical pre-lysis increased yields of bacterial genomic DNA 51.3-fold (95% confidence interval: 31.6–85.1, p < 0.001) and pre-lysis with achromopeptidase 6.1-fold (95% CI: 4.2–8.9, p < 0.001), compared with no pre-lysis. Differences in yield due to pre-lysis were 2–3 fold larger than differences in yield between extraction methods. Including a pre-lysis step can improve the limits of detection of GBS using PCR or other molecular methods without the need for culture.
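Yield comparisons like the 51.3-fold increase above are typically summarized as a geometric-mean fold change with a confidence interval computed on the log scale. The sketch below shows that calculation; the paired replicate yields are hypothetical numbers, not the study's data.

```python
import math
import statistics

def fold_change_ci(treated, control, z=1.96):
    """Geometric-mean fold change of paired yields, with a
    normal-approximation 95% CI computed on the log scale."""
    logs = [math.log(t / c) for t, c in zip(treated, control)]
    m = statistics.mean(logs)
    se = statistics.stdev(logs) / math.sqrt(len(logs))
    return math.exp(m), math.exp(m - z * se), math.exp(m + z * se)

# Hypothetical paired yields (ng): pre-lysis vs. no pre-lysis replicates.
fc, lo, hi = fold_change_ci([8.0, 4.0, 2.0], [1.0, 1.0, 1.0])
print(fc)  # → 4.0 (geometric mean of 8x, 4x, 2x)
```

Working on the log scale keeps the interval multiplicative and symmetric around the ratio, which is why such CIs (e.g. 31.6–85.1 around 51.3) are asymmetric on the raw scale.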

Relevance:

80.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

80.00%

Publisher:

Abstract:

This work proposes a method based on pre-processing and data mining for identifying harmonic current sources among residential consumers. The methodology can also be applied to identify linear and nonlinear loads. It should be emphasized that the entire database was obtained through laboratory experiments, i.e., real data were acquired from residential loads. The residential system created in the laboratory was fed by a configurable power source, with the loads and power quality analyzers placed at its output (all measurements were stored on a microcomputer). The data were then submitted to pre-processing based on attribute selection techniques in order to reduce the complexity of identifying the loads. A new database was generated keeping only the selected attributes, and Artificial Neural Networks were trained to identify the loads. In order to validate the proposed methodology, the loads were fed both under ideal conditions (without harmonics) and by harmonic voltages within pre-established limits. These limits are in accordance with IEEE Std. 519-1992 and PRODIST (energy delivery procedures employed by Brazilian utilities). The results validate the proposed methodology and furnish a method that can serve as an alternative to conventional ones.
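The attribute-selection pre-processing can be illustrated with a simple filter ranking. The abstract does not name the technique actually used, so the Fisher-style score and the four-sample toy data below are illustrative assumptions: attributes whose class means are far apart relative to their within-class spread rank first and survive the selection.

```python
import statistics

def fisher_scores(X, y):
    """Rank attributes by a Fisher-style separation score:
    between-class spread of the means over within-class variance."""
    classes = sorted(set(y))
    scores = []
    for a in range(len(X[0])):
        overall = statistics.mean(row[a] for row in X)
        num = den = 0.0
        for c in classes:
            vals = [row[a] for row, yi in zip(X, y) if yi == c]
            num += len(vals) * (statistics.mean(vals) - overall) ** 2
            den += len(vals) * statistics.pvariance(vals)
        scores.append(num / den if den else float('inf'))
    # Attribute indices, best separator first.
    return sorted(range(len(X[0])), key=lambda a: -scores[a])

# Attribute 0 separates the two load classes; attribute 1 is noise.
X = [[0.0, 5.0], [0.1, 4.0], [1.0, 5.0], [1.1, 4.0]]
y = [0, 0, 1, 1]
print(fisher_scores(X, y))  # → [0, 1]
```

Keeping only the top-ranked attributes yields the reduced database on which the neural networks are then trained.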

Relevance:

80.00%

Publisher:

Abstract:

This study presents a solid-like finite element formulation to solve geometrically nonlinear three-dimensional inhomogeneous frames. To achieve the desired representation, unconstrained vectors are used instead of the classic rigid director triad; as a consequence, the resulting formulation does not use finite rotation schemes. High-order curved elements with any cross section are developed using a full three-dimensional elastic constitutive relation. Warping and variable-thickness strain modes are introduced to avoid locking. The warping mode is solved numerically in a FEM pre-processing code, which is coupled to the main program; the extra calculations remain relatively small as the number of finite elements with the same cross section increases. The warping mode is based on a 2D free-torsion (Saint-Venant) problem that considers inhomogeneous material. A scheme that automatically generates shape functions and their derivatives allows any degree of approximation to be used for the developed frame element. General examples are solved to check the objectivity, path independence, locking-free behavior, generality, and accuracy of the proposed formulation. (C) 2009 Elsevier B.V. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

The effect of a lipase-rich fungal enzymatic preparation, produced by a Penicillium sp. during solid-state fermentation, was evaluated in an anaerobic digester treating dairy wastewater with 1200 mg of oil and grease/L. The oil and grease hydrolysis step was carried out with 0.1% (w/v) of solid enzymatic preparation at 30 degrees C for 24 h, and resulted in a final free acid concentration eight times higher than the initial value. The digester operated in sequential batches of 48 h at 30 degrees C for 245 days, and had high chemical oxygen demand (COD) removal efficiencies (around 90%) when fed with pre-hydrolyzed wastewater. However, when the pre-hydrolysis step was removed, the anaerobic digester performed poorly (with an average COD removal of 32%), as oil and grease accumulated in the biomass and the effluent oil and grease concentration increased throughout the operational period. PCR-DGGE analysis of the Bacteria and Archaea domains revealed remarkable differences in the microbial profiles of trials conducted with and without the pre-hydrolysis step, indicating that the differences observed in overall parameters were intrinsically related to the microbial diversity of the anaerobic sludge. (C) 2009 Elsevier Ltd. All rights reserved.
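The COD removal efficiency quoted above (around 90% with pre-hydrolysis, 32% without) is the standard influent-to-effluent ratio. A one-line calculation, with hypothetical influent/effluent values chosen only to illustrate the arithmetic:

```python
def cod_removal_efficiency(cod_in, cod_out):
    """Chemical oxygen demand (COD) removal efficiency, in percent."""
    return 100.0 * (cod_in - cod_out) / cod_in

# Hypothetical batch: 1000 mg/L influent COD, 100 mg/L effluent COD.
print(cod_removal_efficiency(1000.0, 100.0))  # → 90.0
```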

Relevance:

80.00%

Publisher:

Abstract:

This work studied the radiation resistance of Listeria monocytogenes and Salmonella species and the effect of irradiation on leaf flavonoid content and sensory acceptability of minimally processed arugula. Immersion in ozone-treated water reduced the analyzed microorganisms by 1 log. L. monocytogenes and Salmonella were not isolated from samples. Samples of this vegetable were inoculated with a cocktail of Salmonella spp. and L. monocytogenes and exposed to gamma irradiation. D-10 values for Salmonella ranged from 0.16 to 0.19 kGy and for L. monocytogenes from 0.37 to 0.48 kGy. Kaempferol glycoside levels were 4 and ca. 3 times higher in samples exposed to 1 and 2 kGy, respectively, than in control samples. An increase in quercetin glycoside was also observed mainly in samples exposed to 1 kGy. In sensory evaluation, arugula had good acceptability, even after exposure to 2 and 4 kGy. These results indicate that irradiation has potential as a practical processing step to improve the safety of arugula.
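The D-10 values reported above are conventionally obtained as the negative reciprocal of the slope of log10(survivor count) versus dose. The sketch below performs that regression on a synthetic survival curve with a known D-10; the counts are fabricated for the example, not the study's data.

```python
import math

def d10_value(doses, counts):
    """D-10: dose (kGy) for a 1-log10 reduction in survivors, i.e. the
    negative reciprocal slope of log10(count) regressed on dose."""
    logs = [math.log10(c) for c in counts]
    md = sum(doses) / len(doses)
    ml = sum(logs) / len(logs)
    slope = (sum((d - md) * (l - ml) for d, l in zip(doses, logs))
             / sum((d - md) ** 2 for d in doses))
    return -1.0 / slope

# Synthetic survival curve built with a known D-10 of 0.4 kGy.
doses = [0.0, 0.2, 0.4, 0.6]
counts = [1e6 * 10 ** (-d / 0.4) for d in doses]
print(round(d10_value(doses, counts), 3))  # → 0.4
```

With a D-10 in hand, the dose for an n-log reduction is simply n times D-10, which is how target doses for pathogens such as Salmonella are set.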

Relevance:

80.00%

Publisher:

Abstract:

Land-related information about the Earth's surface is commonly found in two forms: (1) map information and (2) satellite image data. Satellite imagery provides a good visual picture of what is on the ground, but complex image processing is required to interpret features in an image scene. Increasingly, methods are being sought to integrate the knowledge embodied in map information into the interpretation task, or, alternatively, to bypass interpretation and perform biophysical modeling directly on derived data sources. A cartographic modeling language, as a generic map analysis package, is suggested as a means to integrate geographical knowledge and imagery in a process-oriented view of the Earth. Specialized cartographic models may be developed by users, which incorporate mapping information in performing land classification. In addition, a cartographic modeling language may be enhanced with operators suited to processing remotely sensed imagery. We demonstrate the usefulness of a cartographic modeling language for pre-processing satellite imagery, and define two new cartographic operators that evaluate image neighborhoods as post-processing operations to interpret thematic map values. The language and operators are demonstrated with an example image classification task.
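Neighborhood operators of the kind the abstract defines evaluate each thematic-map cell against its surrounding cells. The paper's two operators are not specified here, so as a stand-in the sketch below implements a classic focal-majority operator, a common post-classification smoothing step of the same family:

```python
from collections import Counter

def focal_majority(grid):
    """Neighborhood operator: each cell takes the most common thematic
    class among itself and its (up to 8) adjacent cells."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = Counter(vals).most_common(1)[0][0]
    return out

# A lone misclassified pixel (class 2) inside a class-1 region is smoothed out.
themap = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]
print(focal_majority(themap))  # → [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

In a cartographic modeling language such an operator composes with map-algebra primitives, so classification, map overlay, and neighborhood post-processing can be expressed in one pipeline.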

Relevance:

80.00%

Publisher:

Abstract:

Water absorption by chicken carcasses during the pre-chilling stage of the slaughter line is an important quality characteristic related to the yield of the final product. One way to maintain the quality standard of a product is to ensure that the process stages are stable and repeatable. By employing Statistical Process Control (SPC) it is possible to achieve stability and process improvements through the reduction of variability. In this context, the objective of this work was to apply control charts, correlation analysis, descriptive statistics, hypothesis tests, and multiple linear regression on the slaughter line of a poultry slaughterhouse in order to monitor the variability of water absorption by chicken carcasses after the pre-chilling stage. The water absorption of the chicken carcasses showed high variability, with 10% (8/80) of the carcasses exceeding the 8% absorption limit established by Brazilian legislation. Of the 16 input variables analyzed, those with the greatest impact on water absorption were the carcass retention time in the pre-chiller and the carcass holding time after the dripping stage. However, the regression model obtained showed a low coefficient of determination (R² = 0.16), which was attributed to the high variability of the response variable. The descriptive statistics showed that the input variables also had high variability, with coefficients of variation between 7.95% and 63.5%. Analysis of the individual-measurement and moving-range control charts showed that 15 of the 16 input variables were out of statistical control, as was the response variable.
Based on the flowchart and the description of the slaughter line stages, prepared beforehand, the lack of standardization in carrying out the stages and the absence of quality-control procedures for the operations on the slaughter line were identified as relevant factors that could be associated with the presence of special causes in the process. It was concluded that, in order to reduce the high variability of the variables and eliminate the special causes present, operational adjustments are needed so as to obtain a more stable and uniform process, guaranteeing the quality standard of the chicken carcasses with respect to water absorption.
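The individual-measurement and moving-range (I-MR) charts used above have standard control-limit formulas, with the short-term sigma estimated from the mean moving range. A minimal sketch, using hypothetical water-absorption measurements rather than the study's data:

```python
def imr_limits(xs):
    """Individuals (I) and moving-range (MR) chart control limits, using
    the standard constants d2 = 1.128 and D4 = 3.267 for subgroup size 2."""
    mr = [abs(b - a) for a, b in zip(xs, xs[1:])]
    mr_bar = sum(mr) / len(mr)          # mean moving range
    x_bar = sum(xs) / len(xs)           # process center line
    sigma = mr_bar / 1.128              # short-term sigma estimate
    return {
        'I': (x_bar - 3 * sigma, x_bar, x_bar + 3 * sigma),   # LCL, CL, UCL
        'MR': (0.0, mr_bar, 3.267 * mr_bar),
    }

# Hypothetical water-absorption readings (%) for successive carcasses.
limits = imr_limits([5.2, 6.1, 5.4, 7.0, 5.1, 6.3])
print(limits['I'])
```

A point beyond the I-chart limits (or an MR point above its UCL) signals a special cause, which is the criterion behind the finding that 15 of the 16 input variables were out of statistical control.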