131 results for "Processing technique"
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Real-time viscosity measurement remains a necessity for highly automated industries. To address this need, many studies have been carried out using the ultrasonic shear-wave reflectance method. This method is based on determining the magnitude and phase of the complex reflection coefficient at the solid-liquid interface. Although the magnitude is a stable quantity whose measurement is relatively simple and precise, phase measurement is difficult because of its strong temperature dependence. A simplified method that uses only the magnitude of the reflection coefficient, valid under the Newtonian regime, has been proposed by some authors, but the viscosity values it yields do not match conventional viscometry measurements. In this work, a mode-conversion measurement cell was used to measure the viscosity of glycerin as a function of temperature (15 to 25 °C) and of corn syrup-water mixtures as a function of concentration (70 to 100 wt% corn syrup). Tests were carried out at 1 MHz. A novel signal processing technique that calculates the reflection coefficient magnitude over a frequency band, instead of at a single frequency, was studied. The effects of the bandwidth on magnitude and viscosity were analyzed, and the results were compared with the values predicted by the Newtonian liquid model. The frequency-band technique improved the magnitude results: the obtained viscosity values came close to those measured by a rotational viscometer, with percentage errors up to 14%, whereas errors up to 96% were found for the single-frequency method.
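As an illustration of the band-averaging idea, the sketch below (Python) assumes two digitized echoes, a reference from the unloaded interface and one with the liquid load, estimates |R(f)| from their FFT magnitude ratio over a chosen band, and then inverts the magnitude-only Newtonian reflection model for viscosity. The function names, the calibration scheme, and the quadratic inversion are our assumptions, not the authors' code.

```python
# Minimal sketch of the frequency-band magnitude method (illustrative only).
import numpy as np

def band_reflection_magnitude(ref_echo, liq_echo, fs, f_lo, f_hi):
    """|R(f)| from the FFT magnitude ratio of a reference echo (unloaded
    interface) and the echo with liquid loading, averaged over [f_lo, f_hi]."""
    freqs = np.fft.rfftfreq(len(ref_echo), d=1.0 / fs)
    h_ref = np.abs(np.fft.rfft(ref_echo))
    h_liq = np.abs(np.fft.rfft(liq_echo))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.mean(h_liq[band] / h_ref[band])

def newtonian_viscosity(mag, z_solid, rho_liq, freq):
    """Invert |R| = |(Z_L - Z_s)/(Z_L + Z_s)| with Z_L = (1+1j)*sqrt(w*rho*eta/2)
    (Newtonian model). z_solid: shear impedance of the solid (rho_s * c_shear)."""
    m2 = mag ** 2
    # quadratic 2(1-m^2)x^2 - 2*Zs*(1+m^2)x + Zs^2(1-m^2) = 0, x = Re(Z_L)
    a, b, c = 2 * (1 - m2), -2 * z_solid * (1 + m2), z_solid ** 2 * (1 - m2)
    x = (-b - np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)  # physical (smaller) root
    w = 2 * np.pi * freq
    return 2 * x ** 2 / (w * rho_liq)
```

Averaging the ratio over a band around 1 MHz, as the paper does, damps the frequency-local fluctuations that make a single-frequency estimate unstable.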
Abstract:
Objective: This study aimed to develop a nondecalcified bone sample processing technique enabling immunohistochemical labeling of nuclear factor kappa B (NF-κB) with Technovit 7200 VCR® in adult male Wistar rats. Study Method: A 1.8 mm diameter defect was made 0.5 mm from the proximal femoral joint by means of a round bur. Experimental groups were divided according to the fixative solution used prior to histologic processing: Group 1, 70% ethanol; Group 2, 10% buffered formalin; and Group 3, glycerol diluted in 70% ethanol at a 70/30 ratio plus 10% buffered formalin. The post-surgical periods ranged from 1 to 24 hours. Control groups included a nonsurgical procedure group (NSPG) and a group in which the surgical procedure exposed the bone (SPBE) without drilling. Prostate carcinoma served as the positive control (PC), and samples subjected to an incomplete immunohistochemistry protocol were the negative control (NC). Following euthanasia, all samples were kept at 4 °C for 7 days and were dehydrated in a series of alcohols at -20 °C. The polymer embedding procedure was performed at ethanol/polymer ratios of 70%/30%, 50%/50%, and 30%/70%, followed by two baths of 100% polymer, for 72 hours at -20 °C. Polymerization followed the manufacturer's recommendation. The samples were ground and polished to 10-15 μm thickness and were deacrylated. The sections were rehydrated and incubated with the primary polyclonal anti-NF-κB antibody at a 1:75 dilution for 12 hours at room temperature. Results: Microscopy showed that Group 2 presented a positive reaction to NF-κB, the NSPG and SPBE groups showed diffuse reactions, and the NC group showed no reaction. Conclusion: The results obtained support the feasibility of the developed immunohistochemistry technique.
Abstract:
We evaluated the performance of a novel procedure for segmenting mammograms and detecting clustered microcalcifications in two types of image sets obtained by digitizing mammograms with either a laser scanner or a conventional "optical" scanner. Specific regions of the digital mammograms, with and without clustered microcalcifications, were identified and selected. A remarkable increase in image intensity was noticed in the images from the optical scanner compared with the original mammograms. A procedure based on a polynomial correction was developed to compensate for the differences between the characteristic curves of the scanners and those of the films. The processing scheme was applied to both sets, before and after the polynomial correction. The results clearly indicated the influence of mammogram digitization on the performance of processing schemes intended to detect microcalcifications. Without the polynomial intensity correction, the image processing techniques showed better sensitivity in detecting microcalcifications in the images from the laser scanner. However, when the polynomial correction was applied to the images from the optical scanner, no difference in performance was observed between the two types of images. © 2008 SPIE and IS&T [DOI: 10.1117/1.3013544]
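The polynomial correction step might be sketched as below, assuming intensity pairs measured on both devices (for instance from a calibration step wedge); the polynomial order and the names are illustrative, since the abstract does not state them.

```python
# Hedged sketch of a polynomial intensity correction between two digitizers.
import numpy as np

def fit_intensity_correction(scanner_vals, film_vals, order=3):
    """Fit a polynomial mapping the optical scanner's characteristic curve
    onto the reference (film) curve, from paired calibration readings."""
    return np.polyfit(scanner_vals, film_vals, order)

def apply_correction(image, coeffs):
    """Remap every pixel of the digitized mammogram through the fitted curve."""
    corrected = np.polyval(coeffs, image.astype(np.float64))
    return np.clip(corrected, 0, 255).astype(np.uint8)

# usage (hypothetical calibration data):
# coeffs = fit_intensity_correction(scanner_wedge, film_wedge)
# matched = apply_correction(optical_scan, coeffs)
```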
Abstract:
Background: Various neuroimaging studies, both structural and functional, have supported the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation compared with individuals of lower intelligence; this may be understood as a property of a DIPS. Methodology and Principal Findings: In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and on the notion of the brain as a Distributed Intelligent Processing System, was used to investigate the correlations between IQ, evaluated with the WAIS (Wechsler Adult Intelligence Scale) and the WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion: The present results support these claims and the neural efficiency hypothesis.
Abstract:
The goal of this paper is to study and propose a new noise reduction technique for use in the reconstruction of speech signals, particularly in biomedical applications. The proposed method is based on Kalman filtering in the time domain combined with spectral subtraction. Comparison with a discrete Kalman filter in the frequency domain shows the better performance of the proposed technique. Performance is evaluated using the segmental signal-to-noise ratio and the Itakura-Saito distance. Results show that the time-domain Kalman filter combined with spectral subtraction is more robust and efficient, improving the Itakura-Saito distance by up to four times. © 2007 Elsevier Ltd. All rights reserved.
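A compact sketch of the two-stage idea follows. The spectral subtraction stage is standard; the Kalman stage shown is a simplified scalar random-walk filter, whereas a speech system like the paper's would normally drive the time-domain Kalman filter with an autoregressive speech model. Frame sizes and the noise floor are assumptions.

```python
# Sketch: spectral subtraction followed by a simple time-domain Kalman filter.
import numpy as np

def spectral_subtraction(noisy, noise_only, frame=256, hop=128):
    """Subtract an average noise magnitude spectrum frame by frame,
    keeping the noisy phase. noise_only: a noise-only segment >= one frame."""
    window = np.hanning(frame)
    noise_mag = np.abs(np.fft.rfft(noise_only[:frame] * window))
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame, hop):
        seg = noisy[start:start + frame] * window
        spec = np.fft.rfft(seg)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.01 * noise_mag)
        out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out

def kalman_smooth(x, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter over the time-domain samples."""
    x = np.asarray(x, dtype=float)
    est, p, out = 0.0, 1.0, np.empty_like(x)
    for i, z in enumerate(x):
        p += q                 # predict: process noise inflates the variance
        k = p / (p + r)        # Kalman gain
        est += k * (z - est)   # update with the new sample
        p *= 1 - k
        out[i] = est
    return out
```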
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, undermining the modelling advantage of a pure boundary discretization. This paper introduces a novel visualization technique that preserves the basic properties of the boundary element method. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of visualizing domain results in BE analysis is the effort required to evaluate results at interior points. To address this issue, the present article also compares the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical scan-line algorithms and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.
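For context, the conventional baseline the paper contrasts against (sampling the BE solution at interior points of an auxiliary grid, then interpolating isolines over it) can be sketched as below; the grid, the levels, and the evaluate_at callback are assumptions, and this is explicitly not the paper's discretization-free algorithm.

```python
# The auxiliary-grid baseline: evaluate interior points, then interpolate
# isolines with marching squares. The paper's method avoids this grid.
import numpy as np
from skimage import measure

def gridded_isolines(evaluate_at, xs, ys, levels):
    """evaluate_at(x, y) -> scalar from the BE interior-point formula."""
    grid = np.array([[evaluate_at(x, y) for x in xs] for y in ys])
    return {lv: measure.find_contours(grid, lv) for lv in levels}
```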
Abstract:
The objective was to study the flow pattern in a plate heat exchanger (PHE) through residence time distribution (RTD) experiments. The tested PHE had flat plates and was part of a laboratory-scale pasteurization unit. Series-flow and parallel-flow configurations were tested with a variable number of passes and channels per pass. Owing to the small scale of the equipment and the short residence times, it was necessary to take into account the influence of the tracer detection unit on the RTD data. Four theoretical RTD models were fitted: combined, series combined, generalized convection, and axial dispersion. The combined model provided the best fit and was useful for quantifying the active and dead space volumes of the PHE and their dependence on its configuration. The results suggest that the axial dispersion model would perform well for a larger number of passes because of the turbulence associated with the changes of pass. This type of study can be useful for comparing the hydraulic performance of different plates or for providing data for the evaluation of heat-induced changes that occur in the processing of heat-sensitive products. © 2011 Elsevier Ltd. All rights reserved.
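The axial dispersion fit mentioned above can be sketched with the standard open-open form of E(theta) from the RTD literature (Levenspiel); the initial guesses and names below are assumptions, not values from the paper.

```python
# Hedged sketch: least-squares fit of the axial dispersion RTD model.
import numpy as np
from scipy.optimize import curve_fit

def axial_dispersion_E(t, tau, Pe):
    """E(t) for the axial dispersion model (open-open vessel), theta = t/tau."""
    theta = np.maximum(t / tau, 1e-9)
    e_theta = np.sqrt(Pe / (4 * np.pi * theta)) * np.exp(
        -Pe * (1 - theta) ** 2 / (4 * theta))
    return e_theta / tau  # convert E(theta) to E(t)

# t_data, E_data: measured tracer response normalized to unit area;
# tau0, Pe0: rough initial guesses for mean residence time and Peclet number
# popt, _ = curve_fit(axial_dispersion_E, t_data, E_data, p0=(tau0, Pe0))
```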
Abstract:
The present work reports the fabrication of porous alumina structures and a quantitative study of their structural characteristics based on mathematical morphology analysis of SEM images. The algorithm used in this work was implemented in MATLAB 6.2. Using the algorithm, it was possible to obtain the distribution of the maximum, minimum, and average radii of the pores in the porous alumina structures. Additionally, by calculating the area occupied by the pores, it was possible to obtain the porosity of the structures. The quantitative results could be related to the characteristics of the fabrication process, proving reliable and promising for controlling the pore formation process. This technique could thus provide a more accurate determination of pore sizes and pore distributions. © 2008 Elsevier Ltd. All rights reserved.
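A present-day sketch of this kind of analysis is given below using scikit-image (the original was implemented in MATLAB 6.2); Otsu thresholding and the assumption that pores appear dark in the SEM images are ours.

```python
# Sketch: pore radius statistics and porosity from a grayscale SEM image.
import numpy as np
from skimage import filters, measure

def pore_statistics(sem_image):
    """Binarize, label the pores, and report radius statistics and porosity."""
    pores = sem_image < filters.threshold_otsu(sem_image)  # pores assumed dark
    labels = measure.label(pores)
    # equivalent radius of each pore from its area: r = sqrt(A / pi)
    radii = [np.sqrt(region.area / np.pi) for region in measure.regionprops(labels)]
    porosity = pores.mean()  # pore-area fraction of the image
    return min(radii), max(radii), float(np.mean(radii)), porosity
```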
Abstract:
Hydrophilic dentin adhesives are prone to water sorption, which adversely affects the durability of resin-dentin bonds. This study examined the feasibility of bonding to dentin with hydrophobic resins by adapting electron microscopy tissue-processing techniques. Hydrophobic primers were prepared by diluting 2,2-bis[4-(2-hydroxy-3-methacryloyloxy-propyloxy)-phenyl]propane/triethyleneglycol dimethacrylate resins with known ethanol concentrations. They were applied to acid-etched moist dentin using an ethanol wet-bonding technique that involved: (1) stepwise replacement of water with a series of increasing ethanol concentrations to prevent the demineralized collagen matrix from collapsing; and (2) stepwise replacement of the ethanol with different concentrations of hydrophobic primers and subsequently with neat hydrophobic resin. Using the ethanol wet-bonding technique, the experimental primer versions with 40, 50, and 75% resin exhibited tensile strengths that were not significantly different from those of commercially available hydrophilic three-step adhesives bonded with the water wet-bonding technique. The concept of ethanol wet bonding may be explained in terms of solubility parameter theory. The technique is sensitive to water contamination, as shown by the lower tensile strengths obtained with partial dehydration protocols. It must be further improved by incorporating means of reducing dentin permeability, to prevent water from the dentinal tubules from contaminating water-free resin blends during bonding. © 2007 Wiley Periodicals, Inc. J Biomed Mater Res 84A: 19-29, 2008.
Abstract:
We present a new technique for fitting models to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for fitting two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed with our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. The images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique recovers the parameters of the sources with an accuracy similar to that obtained with the traditional Astronomical Image Processing System task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology can also quantitatively indicate the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions with more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As with any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
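The core cross-entropy loop can be sketched for a single elliptical Gaussian source (the paper sums several per jet); the population size, elite fraction, iteration count, and parameterization below are illustrative assumptions.

```python
# Cross-entropy optimization of one elliptical Gaussian source (6 parameters).
import numpy as np

def gaussian_image(p, X, Y):
    """Elliptical Gaussian: position (x0, y0), peak, axes (a, b), angle phi."""
    x0, y0, peak, a, b, phi = p
    xr = (X - x0) * np.cos(phi) + (Y - y0) * np.sin(phi)
    yr = -(X - x0) * np.sin(phi) + (Y - y0) * np.cos(phi)
    return peak * np.exp(-(xr / a) ** 2 - (yr / b) ** 2)

def ce_fit(observed, X, Y, mu, sigma, n_samples=200, n_elite=20, iters=50):
    """Minimize sum((model - observed)**2): sample parameter vectors,
    keep the elite, re-estimate the sampling distribution, repeat."""
    for _ in range(iters):
        pop = np.random.normal(mu, sigma, size=(n_samples, len(mu)))
        costs = [np.sum((gaussian_image(p, X, Y) - observed) ** 2) for p in pop]
        elite = pop[np.argsort(costs)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu
```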
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers because of its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least-squares approximations, which compute the coordinates of a set of projected points from the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projections (LSP). From an initial projection of the control points, LSP positions their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points, given by a metric in the multidimensional space. To perform the projection, only a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution of satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability by applying it to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly where it was mostly tested, that is, for mapping text sets.
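A compact dense sketch of the LSP system described above, with each projected point placed at the average of its multidimensional neighbors and the control points entering as least-squares constraint rows, is given below; the paper's implementation uses sparse solvers, and k and the solver choice here are assumptions.

```python
# Dense LSP sketch: neighborhood Laplacian rows plus control-point rows.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lsp_project(data, ctrl_idx, ctrl_2d, k=10):
    """data: (n, m) points in mD; ctrl_idx/ctrl_2d: control points and their
    initial 2D projection. Returns (n, 2) projected coordinates."""
    n = len(data)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(data)
    _, idx = nbrs.kneighbors(data)       # idx[:, 0] is the point itself
    A = np.zeros((n + len(ctrl_idx), n))
    for i in range(n):
        A[i, i] = 1.0
        A[i, idx[i, 1:]] = -1.0 / k      # x_i - mean(neighbors) = 0
    b = np.zeros((n + len(ctrl_idx), 2))
    for row, (ci, pos) in enumerate(zip(ctrl_idx, ctrl_2d)):
        A[n + row, ci] = 1.0             # pin each control point
        b[n + row] = pos
    proj, *_ = np.linalg.lstsq(A, b, rcond=None)
    return proj
```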
Abstract:
The present work reports on the thermo-optical properties of photorefractive sillenite Bi12SiO20 (BSO) crystals obtained by applying the thermal lens spectrometry (TLS) technique. These crystals present high photorefractive sensitivity in the blue-green region of the spectrum, so the measurements were carried out at two pump-beam wavelengths (514.5 nm and 750 nm) to study the light-induced effects (thermal and/or photorefractive) in this material. We determined thermo-optical parameters such as the thermal diffusivity (D), the thermal conductivity (K), and the temperature coefficient of the optical path length change (ds/dT) of the sillenite crystals. To our knowledge, these aspects had not been studied in detail using thermal lens spectrometry until now, and they are very important in view of the promising applications of these crystals in nonlinear optics, real-time holography, and optical data processing.
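The transient fitting behind TLS parameter extraction might be sketched with the standard mode-mismatched thermal lens model (Shen et al.); the geometry factors m and V and the initial guesses are typical-setup assumptions, not values from the paper.

```python
# Hedged sketch: fit the thermal lens transient for theta and tc,
# then D = we**2 / (4 * tc), with we the excitation beam radius at the sample.
import numpy as np
from scipy.optimize import curve_fit

def tl_transient(t, i0, theta, tc, m=30.0, V=1.7):
    """Probe-beam intensity I(t) for a mode-mismatched thermal lens (t > 0)."""
    num = 2 * m * V
    den = ((1 + 2 * m) ** 2 + V ** 2) * (tc / (2 * t)) + 1 + 2 * m + V ** 2
    return i0 * (1 - 0.5 * theta * np.arctan(num / den)) ** 2

# t_data, i_data: measured transient; p0 values are rough guesses
# popt, _ = curve_fit(tl_transient, t_data, i_data, p0=(1.0, 0.1, 1e-3))
# D = we ** 2 / (4 * popt[2])
```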
Rehabilitation of severely resorbed edentulous mandible using the modified visor osteotomy technique
Abstract:
The prosthetic rehabilitation of an atrophic mandible is usually unsatisfactory owing to the lack of supporting tissues, mainly bone and keratinized mucosa, for treatment with osseointegrated implants or even a conventional prosthesis. Prosthetic instability leads to social and functional limitations, and the resulting chronic physical trauma decreases the patient's quality of life. A 53-year-old female patient sought care at our surgical service complaining of impaired masticatory function associated with instability of her lower complete denture. Clinical and complementary exams revealed edentulism in both arches; the mandibular arch presented severe resorption, resulting in denture instability and chronic trauma to the oral mucosa. The proposed treatment plan consisted of mandibular rehabilitation with osseointegrated implants and a fixed Brånemark-protocol prosthesis after mandibular reconstruction by the modified visor osteotomy technique. The proposed technique offered predictable results for reconstruction of the severely resorbed edentulous mandible and subsequent rehabilitation with osseointegrated implants.
Abstract:
OBJECTIVES: To evaluate the color stability and hardness of two denture liners obtained by direct and indirect techniques after thermal cycling and immersion in beverages that can stain teeth. MATERIAL AND METHODS: Seventy disc-shaped specimens (18 x 3 mm), processed by the direct (DT) and indirect (IT) techniques, were made from Elite Soft (n=35) and Kooliner (n=35) denture liners. For each material and technique, 10 specimens were subjected to thermal cycling (3,000 cycles) and 25 specimens were stored in water, coffee, tea, soda, or red wine for 36 days. Color change, Shore A hardness (Elite Soft), and Knoop hardness (Kooliner) values were obtained. The data were subjected to ANOVA, Tukey's multiple-comparison test, and the Kruskal-Wallis test (P<0.05). RESULTS: Thermal cycling decreased the hardness of Kooliner regardless of the technique used (initial: 9.09 ± 1.61; thermal cycling: 7.77 ± 1.47) and increased the hardness of Elite Soft in the DT (initial: 40.63 ± 1.07; thermal cycling: 43.53 ± 1.03); the hardness of Kooliner (DT: 8.76 ± 0.95; IT: 7.70 ± 1.62) and Elite Soft (DT: 42.75 ± 1.54; IT: 39.30 ± 2.31) specimens from the DT increased after immersion in the beverages. Thermal cycling promoted a color change only for Kooliner in the IT. Immersion in the beverages did not promote a color change for Elite Soft in either technique. The control group of the Kooliner DT showed a significant color change. Wine and coffee produced the greatest color change in the DT, for Elite Soft only, compared with the other beverages. CONCLUSION: The three variation factors promoted alterations in the hardness and color of the tested denture lining materials.
Abstract:
Owing to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy instances. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. Such is the case of gene expression data, where the equipment and tools currently used frequently produce noisy measurements. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. The evaluation analyzes the effectiveness of the investigated techniques in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers over the pre-processed data.
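One widely used distance-based filter of this family, Edited Nearest Neighbors, can be sketched as follows; this is a generic example of the approach, not necessarily the exact technique the paper evaluates.

```python
# Edited Nearest Neighbors: drop training instances whose class label
# disagrees with the majority vote of their k nearest neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def enn_filter(X, y, k=3):
    X, y = np.asarray(X), np.asarray(y)
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        others = np.arange(len(X)) != i          # leave-one-out neighborhood
        knn = KNeighborsClassifier(n_neighbors=k).fit(X[others], y[others])
        if knn.predict(X[i:i + 1])[0] != y[i]:
            keep[i] = False                      # neighbors disagree: noisy
    return X[keep], y[keep]
```

Comparing classifier accuracy on the filtered set against the raw set is the kind of evaluation the paper describes.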