981 results for Geometry processing
Abstract:
Nowadays, L1 SBAS signals can be used in combined GPS+SBAS data processing. However, such a situation restricts the studies to short baselines. Besides increasing satellite availability, the orbit configuration of SBAS satellites differs from that of GPS. In order to analyze how these characteristics can impact GPS positioning in the southeast region of Brazil, experiments involving GPS-only and combined GPS+SBAS data were performed. Solutions using single point and relative positioning were computed to show the impact on satellite geometry, positioning accuracy and short-baseline ambiguity resolution. Results showed that the inclusion of SBAS satellites can improve positioning accuracy. Nevertheless, the poor quality of the data broadcast by these satellites limits their usage. © Springer-Verlag Berlin Heidelberg 2012.
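Because part of the reported accuracy change is attributed to satellite geometry, a minimal sketch of the usual way geometry is quantified, the dilution of precision derived from the single-point design matrix, may be useful; the receiver and satellite coordinates below are arbitrary placeholders and are not data from the study.

```python
import numpy as np

def dilution_of_precision(sat_pos_ecef, rcv_pos_ecef):
    """Geometric DOP from satellite/receiver positions (metres, ECEF)."""
    los = sat_pos_ecef - rcv_pos_ecef                   # line-of-sight vectors
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    # Design matrix: unit vectors plus a column for the receiver clock term.
    G = np.hstack([unit, np.ones((unit.shape[0], 1))])
    Q = np.linalg.inv(G.T @ G)                          # cofactor matrix
    return np.sqrt(np.trace(Q))                         # GDOP

# Hypothetical example: adding one extra (e.g. SBAS-like geostationary)
# satellite changes the GDOP obtained from four GPS-like satellites.
rcv = np.array([4_000_000.0, -4_200_000.0, -2_500_000.0])
gps = np.array([[15e6, 10e6, 20e6],
                [-12e6, 18e6, 15e6],
                [20e6, -5e6, 17e6],
                [5e6, -20e6, 16e6]])
sbas = np.array([[42e6, -10e6, 0.0]])
print("GPS only   :", dilution_of_precision(gps, rcv))
print("GPS + SBAS :", dilution_of_precision(np.vstack([gps, sbas]), rcv))
```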
Abstract:
The main objective of this research was to obtain two formulations of ablative composites, also known as ablative structural composites, for applications in severe atmospheric conditions arising from the high-temperature flow of hot gaseous products generated by the burning of solid propellants. The formulations were manufactured with phenolic resin reinforced with chopped carbon fiber, and the composites were obtained by hot compression molding. Another purpose of this work was the physical and chemical characterization of the matrix, the reinforcements and the composites. After the characterization, a nozzle divergent section was manufactured from each formulation and its performance was evaluated through rocket motor static firing tests. According to the results, the characterization of the raw materials showed that the phenolic resins had peculiarities in their properties that differentiate one from the other, but they did not exhibit significant differences in performance as composite materials for use under ablation conditions. Both composites showed good performance for use as thermal protection, confirmed by the static firing tests (rocket motor). Composites made with phenolic resin and chopped carbon fiber proved to be a material with excellent resistance to the ablation process, and this composite can be used to produce nozzle parts with complex geometries or shapes at low manufacturing cost.
Abstract:
Ferro- or piezoelectrets are dielectric materials with two elastically very different macroscopic phases and electrically charged interfaces between them. One of the newer piezoelectret variants is a system of two fluoroethylenepropylene (FEP) films that are first laminated around a polytetrafluoroethylene (PTFE) template. Then, by removing the PTFE template, a two-layer FEP structure with open tubular channels is obtained. After electrical charging, the channels form easily deformable macroscopic electric dipoles whose changes under mechanical or electrical stress lead to significant direct or inverse piezoelectricity, respectively. Here, different PTFE templates are employed to generate channel geometries that vary in height or width. It is shown that the control of the channel geometry allows a direct adjustment of the resonance frequencies in the tubular-channel piezoelectrets. By combining several different channel widths in a single ferroelectret, it is possible to obtain multiple resonance peaks that may lead to a rather flat frequency-response region of the transducer material. A phenomenological relation between the resonance frequency and the geometrical parameters of a tubular channel is also presented. This relation may help to design piezoelectrets with a specific frequency response.
Abstract:
A system in a metastable state must overcome a certain free-energy barrier in order to form a droplet of the stable phase. Conventional studies assume spherical droplets. In anisotropic systems (such as crystals), however, this assumption is not appropriate. At low temperatures the anisotropy of the system strongly affects the free energy of its surface; this effect weakens above the roughening temperature T_R. The Ising model is a simple model that exhibits such an anisotropy. We carry out large-scale simulations in order to keep the effects associated with a finite simulation box, as well as statistical uncertainties, as small as possible. The scale of the simulations required to produce meaningful results demands the development of a scalable simulation program for the Ising model that can be used on different parallel architectures (e.g. graphics cards). Platform independence is achieved through abstract interfaces that hide platform-specific implementation details. We use a system geometry that allows a surface to be studied at a variable angle to the crystal plane. The surface is in contact with a hard wall, and the contact angle Θ can be adjusted by a surface field. We derive a differential equation that describes the behavior of the surface free energy in an anisotropic system. Combined with thermodynamic integration, this equation can be used to integrate the anisotropic surface tension over a wide range of angles. Comparisons with earlier measurements in other geometries and with other methods show high agreement and accuracy, achieved above all through simulation domains that are much larger than in previous measurements. The temperature dependence of the surface stiffness κ is measured above T_R from the curvature of the surface free energy at small angles. This measurement can be compared with simulation results from the literature and agrees better with theoretical predictions for the scaling behavior of κ. In addition, we develop a low-temperature model for the behavior around Θ = 90 degrees far below T_R. The angle remains essentially zero up to a critical field H_C; above the critical field it rises rapidly. H_C is related to the free energy of a step, which makes it possible to analyze the critical behavior of this quantity. The hard wall must be included in the analysis. By comparing free energies at suitably chosen system sizes it is possible to measure the contribution of the contact line to the free energy as a function of Θ. This analysis is carried out at several temperatures. In the last chapter, a 2D fluid dynamics simulation, which can be used, among other things, to simulate the dynamics of the atmosphere, is parallelized for graphics cards. We implement a parallel evolution Galerkin operator and achieve
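Since the abstract above centers on large-scale Monte Carlo simulations of the Ising model, a minimal serial Metropolis sweep for the plain 2-D model is sketched below as a point of reference; it is only an illustration of the model itself, not the thesis's scalable GPU/parallel code, and it ignores the hard-wall and surface-field geometry studied there.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, field=0.0):
    """One Metropolis sweep over an L x L Ising lattice (periodic boundaries)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * (nn + field)           # energy cost of a flip
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

L, beta = 32, 0.5                                       # beta > beta_c ~ 0.4407
spins = rng.choice([-1, 1], size=(L, L))
for sweep in range(200):
    metropolis_sweep(spins, beta)
print("magnetisation per spin:", spins.mean())
```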
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous efforts have been devoted over the last fifty years to developing accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show a large error in comparison to the experimental data. Thus, even nowadays, process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required to optimize the process parameters and the extruder channel geometry for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago; however, that work had only limited success because of the computing capability and mathematical algorithms available at the time. The dramatic improvement of computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments and material characterization experiments. Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
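The optimization of screw lead and channel depth described above lends itself to a constrained-optimization sketch. The surrogate `extruder_model` below is a made-up placeholder standing in for the full 3-D two-phase simulation; only the structure of the problem (maximize output subject to an exit-temperature cap) follows the abstract, and all numbers are assumed.

```python
import numpy as np
from scipy.optimize import minimize

T_MAX = 220.0  # allowed melt temperature at the exit, deg C (assumed value)

def extruder_model(lead_mm, depth_mm):
    """Placeholder surrogate for the 3-D simulation: returns (output, exit T)."""
    output = 0.8 * lead_mm * depth_mm                    # kg/h, made-up relation
    exit_temp = 170.0 + 0.9 * lead_mm + 4.0 * depth_mm   # deg C, made-up relation
    return output, exit_temp

def neg_output(x):
    return -extruder_model(*x)[0]

def temp_margin(x):
    return T_MAX - extruder_model(*x)[1]                 # >= 0 when feasible

res = minimize(neg_output, x0=[30.0, 3.0],
               bounds=[(20.0, 60.0), (1.0, 6.0)],        # lead, depth in mm
               constraints=[{"type": "ineq", "fun": temp_margin}],
               method="SLSQP")
print("optimal lead/depth:", res.x, "output:", -res.fun)
```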
Abstract:
We report on a comprehensive signal processing procedure for very low signal levels for the measurement of neutral deuterium in the local interstellar medium from a spacecraft in Earth orbit. The deuterium measurements were performed with the IBEX-Lo camera on NASA's Interstellar Boundary Explorer (IBEX) satellite. Our analysis technique for these data consists of creating a mass relation in three-dimensional time-of-flight space to accurately determine the position of the predicted D events, precisely modelling the tail of the H events in the region where it approaches the expected D events, and then separating the H tail from the observations to extract the very faint D signal. This interstellar D signal, which is expected to amount to a few counts per year, is extracted from a strong terrestrial background signal consisting of sputter products from the sensor's conversion surface. As a reference, we accurately measured the terrestrial D/H ratio in these sputtered products and then discriminated this terrestrial background source. During the three years of the mission when the deuterium signal was visible to IBEX, the observation geometry and orbit allowed for a total observation time of 115.3 days. Because of the spinning of the spacecraft and the stepping through eight energy channels, the actual observing time of the interstellar wind was only 1.44 days. With the optimised data analysis we found three counts that could be attributed to interstellar deuterium. These results update our earlier work.
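The tail-modelling step can be pictured with a generic sideband fit-and-subtract on a synthetic one-dimensional spectrum; the exponential tail, channel numbers and signal window below are invented for illustration and have nothing to do with the actual IBEX-Lo response or analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D time-of-flight spectrum: a large "H" peak with an exponential
# tail plus a tiny "D" bump near channel 80 (all numbers are made up).
channels = np.arange(120)
h_tail = 5000.0 * np.exp(-channels / 15.0)
signal = 3.0 * np.exp(-0.5 * ((channels - 80) / 2.0) ** 2)
spectrum = rng.poisson(h_tail + signal).astype(float)

# Fit the tail in a sideband that excludes the expected signal window,
# extrapolate the fit under the window, and subtract it.
sideband = (channels >= 20) & (channels <= 74)
coeffs = np.polyfit(channels[sideband], np.log(spectrum[sideband] + 0.5), deg=1)
tail_model = np.exp(np.polyval(coeffs, channels))

window = (channels >= 75) & (channels <= 85)
excess = spectrum - tail_model
print("estimated signal counts in window:", excess[window].sum())
```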
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfills the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency, polarization, etc. must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed, normally shielded areas covered with electromagnetic absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be employed independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art was carried out to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, with a total of three alternatives proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near-field, where it is possible to apply spatial filtering. The last is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then to apply spatial filtering as well. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later replacement easier; the second computationally removes the leakage effect without requiring the replacement of the faulty component.
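The third noise-filtering alternative (back-propagating the measured field towards the AUT and then applying a spatial filter) can be sketched for the planar case using a plane-wave-spectrum propagation. The frequency, scan size, separation distance, test field and aperture mask below are assumed values, not data from the Thesis, and the sign convention is stated in the comments.

```python
import numpy as np

c = 299_792_458.0
f = 10e9                              # assumed measurement frequency, 10 GHz
k0 = 2 * np.pi * f / c

# Assumed planar scan: N x N samples at lambda/2 spacing, measured at z = z_m.
N, dx = 128, c / f / 2
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
rng = np.random.default_rng(2)
E_meas = np.exp(-(X**2 + Y**2) / 0.2**2) + 0.05 * rng.standard_normal((N, N))

# Plane-wave spectrum of the measured field.
kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(kx, kx)
KZ = np.sqrt(np.maximum(k0**2 - KX**2 - KY**2, 0.0).astype(complex))
A = np.fft.fft2(E_meas)

# Back-propagate by d towards the antenna (factor exp(+j kz d), assuming the
# exp(+j w t), exp(-j k z) convention) and drop evanescent components, which
# would otherwise be amplified.
d = 0.15                              # metres, assumed separation
propagating = (KX**2 + KY**2) <= k0**2
A_back = np.where(propagating, A * np.exp(1j * KZ * d), 0.0)
E_back = np.fft.ifft2(A_back)

# Spatial filter: keep only the region over the assumed antenna aperture.
mask = (np.abs(X) < 0.3) & (np.abs(Y) < 0.3)
E_filtered = E_back * mask
print("field fraction kept:", np.abs(E_filtered).sum() / np.abs(E_back).sum())
```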
Investigation of the Effect of Array Geometry on the Performance of Free-Space Optical Interconnects
Abstract:
The effect of transmitter and receiver array configurations on the stray-light and diffraction-caused crosstalk in free-space optical interconnects was investigated. The optical system simulation software (Code V) is used to simulate both the stray-light and diffraction-caused crosstalk. Experimentally measured, spectrally-resolved, near-field images of VCSEL higher order modes were used as extended sources in our simulation model. In addition, we have included the electrical and optical noise in our analysis to give more accurate overall performance of the FSOI system. Our results show that by changing the square lattice geometry to a hexagonal configuration, we obtain an overall signal-to-noise ratio improvement of 3 dB. Furthermore, system density is increased by up to 4 channels/mm².
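The density gain reported for the hexagonal array can be illustrated with elementary lattice geometry; the channel pitch below is an arbitrary assumed value, not taken from the paper.

```python
import math

pitch_mm = 0.25                       # assumed channel pitch, not from the paper

# Channels per unit area for a square lattice and for a hexagonal (triangular)
# lattice with the same nearest-neighbour pitch.
density_square = 1.0 / pitch_mm**2
density_hex = 2.0 / (math.sqrt(3.0) * pitch_mm**2)

print(f"square   : {density_square:.1f} channels/mm^2")
print(f"hexagonal: {density_hex:.1f} channels/mm^2 "
      f"({100 * (density_hex / density_square - 1):.0f}% more)")
```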
Abstract:
Whey proteins may be fractionated by isoelectric precipitation followed by centrifugal recovery of the precipitate phase. Transport and processing of protein precipitates may alter the precipitate particle properties, which may affect how they behave in subsequent processes. For example, the transport of precipitate solution through pumps, pipes and valves and into a centrifugal separator may cause changes in particle size and density, which may affect the performance of the separator. This work investigates the effect of fluid flow intensity, flow geometry and exposure time on the breakage of whey protein precipitates. Computational fluid dynamics (CFD) was used to quantify the flow intensity in different geometries. Flow geometry can have a critical impact on particle breakage: sharp geometrical transitions induce large increases in turbulence that can result in substantial particle breakage. As protein precipitate particles break, they tend to form denser, more compact structures; this reduction in particle size and increase in compaction makes the particles more resistant to further breakage. The effect of flow intensity on particle breakage is coupled to exposure time, with greater exposure time producing more breakage. However, it is expected that after prolonged exposure in a constant flow field the particles will attain an equilibrium size and density beyond which no further breakage occurs with exposure time. © 2005 Institution of Chemical Engineers.
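One common way to reason about why highly turbulent regions break particles is through the Kolmogorov microscale, which shrinks as the turbulence energy dissipation rate grows, exposing ever smaller particles to turbulent stresses. The viscosity and dissipation values in this sketch are assumed, and the link between the microscale and the largest stable particle size is a rule of thumb rather than a result from the paper.

```python
# Kolmogorov microscale eta = (nu^3 / eps)^(1/4): higher dissipation (sharper
# geometrical transitions, more intense flow) gives a smaller eta, so smaller
# particles feel disruptive turbulent stresses. All values are assumed.
nu = 1.0e-6                     # kinematic viscosity of the feed, m^2/s (assumed)

for eps in (0.1, 1.0, 10.0, 100.0):        # turbulence dissipation rate, W/kg
    eta = (nu**3 / eps) ** 0.25            # Kolmogorov microscale, m
    print(f"eps = {eps:6.1f} W/kg  ->  eta = {eta * 1e6:6.1f} micrometres")
```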
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
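Since the error expressions above are driven by the principal angles between subspaces, a minimal numpy sketch of how those angles are computed (from the singular values of the product of orthonormal bases) is given below; the random bases are placeholders, not data from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

def principal_angles(A, B):
    """Principal angles (radians) between the column spans of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)      # cosines of the angles
    return np.arccos(np.clip(s, -1.0, 1.0))

# Two random 3-dimensional subspaces of R^20 (placeholder data).
A = rng.standard_normal((20, 3))
B = rng.standard_normal((20, 3))
theta = principal_angles(A, B)

# The low-mismatch error exponent mentioned above involves the product of the
# sines of these angles; the higher-mismatch one involves the sum of squares.
print("product of sines   :", np.prod(np.sin(theta)))
print("sum of squared sines:", np.sum(np.sin(theta) ** 2))
```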
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing a clear advantage of the proposed approaches when the training set is small.
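A minimal version of the ingredients of the second approach, a k-nearest-neighbour adjacency matrix built from manifold-sampled data and its leading eigenvectors, is sketched below; the noisy-circle data, neighbourhood size and kernel width are placeholders, and the construction is not claimed to match the dissertation's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder data: points near a 1-D manifold (a noisy circle) in R^2.
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((200, 2))

# k-nearest-neighbour adjacency matrix with Gaussian edge weights.
k, sigma = 10, 0.3
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)     # squared distances
A = np.zeros_like(D2)
for i in range(len(X)):
    nn = np.argsort(D2[i])[1:k + 1]                     # skip the point itself
    A[i, nn] = np.exp(-D2[i, nn] / (2 * sigma**2))
A = np.maximum(A, A.T)                                  # symmetrise

# Principal components (leading eigenvectors) of the adjacency matrix, i.e.
# directions a feature representation could be encouraged to align with.
evals, evecs = np.linalg.eigh(A)
order = np.argsort(evals)[::-1]
leading = evecs[:, order[:5]]
print("leading eigenvalues:", evals[order[:5]])
```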
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. Deviation (of each datum) from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
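For the streaming-subspace part, a crude stochastic (Oja-style) update that tracks a slowly rotating subspace is sketched below; it only illustrates the idea of tracking from streaming samples and is not the dissertation's multiscale tree scheme. The drift rate, noise level and step size are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, rank, eta = 50, 3, 0.05

# Slowly rotating ground-truth subspace (placeholder generative model).
U_true, _ = np.linalg.qr(rng.standard_normal((dim, rank)))
U_est, _ = np.linalg.qr(rng.standard_normal((dim, rank)))

for step in range(5000):
    # Drift the true subspace slightly, then draw one streaming sample from it.
    U_true, _ = np.linalg.qr(U_true + 1e-3 * rng.standard_normal((dim, rank)))
    x = U_true @ rng.standard_normal(rank) + 0.01 * rng.standard_normal(dim)

    # Oja-style stochastic gradient step followed by re-orthonormalisation.
    U_est += eta * np.outer(x, x @ U_est)
    U_est, _ = np.linalg.qr(U_est)

# Largest principal angle between the tracked and the true subspace.
s = np.linalg.svd(U_est.T @ U_true, compute_uv=False)
angle = np.degrees(np.arccos(np.clip(s.min(), -1.0, 1.0)))
print("largest principal angle (deg):", angle)
```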
Abstract:
The aim of this study was to evaluate fat substitutes in the processing of sausages prepared with surimi made from piramutaba filleting waste. The formulation ingredients were mixed with the fat substitutes added according to a 2^(4-1) fractional factorial design, in which the independent variables manioc starch (Ms), hydrogenated soy fat (F), texturized soybean protein (Tsp) and carrageenan (Cg) were evaluated with respect to the responses pH, texture (Tx), raw batter stability (RBS) and water holding capacity (WHC) of the sausage. The fat substitutes were evaluated in 11 formulations, and the results showed that the greatest effects on the responses were found for Ms, F and Cg, so Tsp was eliminated from the formulation. To find the best formulation for processing piramutaba sausage, a full 2^3 factorial design was then carried out to evaluate the concentrations of the fat substitutes over an enlarged range. The optimum condition found for the fat substitutes in the sausage formulation was carrageenan (0.51%), manioc starch (1.45%) and fat (1.2%).
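A 2^(4-1) fractional factorial of the kind described can be written down directly by aliasing the fourth factor with the three-factor interaction; the sketch below only constructs such a design matrix in coded (-1/+1) units and is not the authors' actual run order or factor levels.

```python
import itertools
import numpy as np

# Full 2^3 factorial in the first three factors (coded -1/+1 levels):
# manioc starch (Ms), hydrogenated fat (F), texturized soy protein (Tsp).
base = np.array(list(itertools.product([-1, 1], repeat=3)))

# 2^(4-1) fraction: alias carrageenan (Cg) with the Ms*F*Tsp interaction.
cg = base[:, 0] * base[:, 1] * base[:, 2]
design = np.column_stack([base, cg])

print("  Ms   F  Tsp  Cg")
print(design)
```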
Abstract:
The aim was to investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Twenty-three children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and the staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing); their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Similar performance between the groups was verified for the auditory closure ability, with pronounced deficits in the selective attention and temporal processing abilities. Most children with stroke showed a moderately impaired auditory ability. Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.
Abstract:
The aim of this research was to analyze temporal auditory processing and phonological awareness in school-age children with benign childhood epilepsy with centrotemporal spikes (BECTS). The patient group (GI) consisted of 13 children diagnosed with BECTS; the control group (GII) consisted of 17 healthy children. After neurological and peripheral audiological assessment, the children underwent a behavioral auditory evaluation and a phonological awareness assessment. The procedures applied were the Gaps-in-Noise (GIN) test, the Duration Pattern test, and the Phonological Awareness test (PCF). Results were compared between the groups, and a correlation analysis was performed between the temporal tasks and phonological awareness performance. GII performed significantly better than the children with BECTS (GI) in both the GIN and Duration Pattern tests (P < 0.001). GI performed significantly worse in all 4 categories of phonological awareness assessed: syllabic (P = 0.001), phonemic (P = 0.006), rhyme (P = 0.015) and alliteration (P = 0.010). Statistical analysis showed a significant positive correlation between the phonological awareness assessment and the Duration Pattern test (P < 0.001). From the analysis of the results, it was concluded that children with BECTS may have difficulties in temporal resolution, temporal ordering, and phonological awareness skills. A correlation was observed between auditory temporal processing and phonological awareness in the studied sample.
Abstract:
The biofilm formation of Enterococcus faecalis and Enterococcus faecium isolated from the processing of ricotta on stainless steel coupons was evaluated, and the effect of cleaning and sanitization procedures in the control of these biofilms was determined. The formation of biofilms was observed while varying the incubation temperature (7, 25 and 39°C) and time (0, 1, 2, 4, 6 and 8 days). At 7°C, the counts of E. faecalis and E. faecium were below 2 log10 CFU/cm². At 25 and 39°C, after 1 day, the counts of E. faecalis and E. faecium were 5.75 and 6.07 log10 CFU/cm², respectively, which is characteristic of biofilm formation. The tested sanitation procedures, namely a) acid-anionic tensioactive cleaning, b) anionic tensioactive cleaning + sanitizer and c) acid-anionic tensioactive cleaning + sanitizer, were effective in removing the biofilms, reducing the counts to levels below 0.4 log10 CFU/cm². The sanitizer biguanide was the least effective, and peracetic acid was the most effective. These studies revealed the ability of enterococci to form biofilms and the importance of the cleaning step and the type of sanitizer used in sanitation processes for the effective removal of biofilms.
Abstract:
The objective of this study was to evaluate children's respiratory patterns in the mixed dentition, by means of acoustic rhinometry, and their relation to upper arch width development. Fifty patients were examined, 25 female and 25 male, with a mean age of eight years and seven months. All of them underwent acoustic rhinometry, and upper and lower arch impressions were taken to obtain plaster models. The upper arch analysis was accomplished by measuring the transverse interdental distances of the upper teeth: deciduous canines (measurement 1), deciduous first molars (measurement 2), deciduous second molars (measurement 3) and first molars (measurement 4). The results showed that an increased left nasal cavity area was associated with an increased interdental distance of the deciduous first and second molars in females, and with an increased interdental distance of the deciduous canines and deciduous first and second molars in males. It was concluded that there is a correlation between the nasal cavity area and the upper arch transverse distance in the anterior and mid maxillary regions for both genders.