Abstract:
Monitoring of sewage sludge has revealed the presence of many polar anthropogenic pollutants since LC/MS techniques came into routine use. While advanced techniques may improve characterization, flawed sample processing procedures may disturb or disguise the presence and fate of many target compounds in this complex matrix before the analytical process starts. Freeze-drying or oven-drying, combined with centrifugation or filtration, were performed as sample processing techniques, followed by visual pattern recognition of target compounds to assess the pretreatment processes. The results showed that oven-drying affected the sludge characterization, while freeze-drying led to fewer analytical misinterpretations.
Abstract:
Amaranth has attracted a great deal of interest in recent decades due to its valuable nutritional, functional, and agricultural characteristics. Amaranth seeds can be cooked, popped, roasted, flaked, or extruded for consumption. This study compared the in vitro starch digestibility of processed amaranth seeds to that of white bread. Raw seeds yielded a rapidly digestible starch (RDS) content of 30.7% db and a predicted glycemic index (pGI) of 87.2, the lowest among the studied products. Cooked, extruded, and popped amaranth seeds had starch digestibility similar to that of white bread (pGI of 92.4, 91.2, and 101.3, respectively), while flaked and roasted seeds generated a slightly higher glycemic response (106.0 and 105.8, respectively). Cooking and extrusion did not alter the RDS contents of the seeds. No significant differences were observed among the RDS contents of popped, flaked, and roasted seeds (38.0%, 46.3%, and 42.9%, respectively), which were all lower than the RDS content of bread (51.1%). Amaranth seed is a high-glycemic food, most likely because of its small starch granule size, low resistant starch content (< 1%), and tendency to completely lose its crystalline and granular starch structure during these heat treatments.
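The abstract does not restate how pGI is derived; a relation commonly used in in vitro starch digestibility work (Goñi et al., 1997) computes it from the hydrolysis index (HI), the area under the sample's hydrolysis curve relative to the white-bread reference. A minimal sketch under that assumption (the study summarized above may have used a different reference equation):

```python
def predicted_gi(auc_sample: float, auc_reference: float) -> float:
    """Predicted glycemic index from in vitro hydrolysis curve areas.

    Assumes the widely cited relation pGI = 39.71 + 0.549 * HI, where
    HI = 100 * AUC(sample) / AUC(white bread reference).
    """
    hi = 100.0 * auc_sample / auc_reference
    return 39.71 + 0.549 * hi

# A sample hydrolyzing exactly as readily as the white-bread reference:
print(round(predicted_gi(1.0, 1.0), 2))  # 94.61
```

Under this relation, a sample with a hydrolysis curve area below the reference always yields a lower pGI, which is the ordering logic behind comparing processed seeds against white bread.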
Abstract:
Patients with congenital malformations, traumatic or pathological mutilations, and maxillofacial developmental disorders can be rehabilitated, aesthetically and emotionally, through the production and use of facial prostheses. The aim of this study was to review the literature on the retention and processing methods of facial prostheses and to discuss their characteristics. A literature review of the Medline (PubMed) database was performed using the keywords maxillofacial prosthesis, silicone, resin, pigment, cosmetic, and prosthetic nose, based on articles published from 1956 to 2010. Several methods of retention, from adhesives to the placement of implants, and different processing methods, such as laser, CAD/CAM, and rapid prototyping technologies, have been reported. Each procedure has advantages and disadvantages, and none can be classified as superior to the others.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this PhD thesis proposal, the principles of diffusion MRI (dMRI) as applied to mapping human brain connectivity are reviewed. The background section covers the fundamentals of dMRI, with special focus on the distortions caused by susceptibility inhomogeneity across tissues. A thorough survey of the available correction methodologies for this common dMRI artifact is also presented, and two methodological approaches to improved correction are introduced. Finally, the proposal describes its objectives, the research plan, and the necessary resources.
Abstract:
The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue and to track the pathways of the fiber bundles connecting cortical regions across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. Extracting the connectome from diffusion MRI requires a large processing flow, including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to defining standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and by the lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework, including whole-brain realistic phantoms, to compare the existing methodologies for correcting these artifacts. Additionally, we design and implement an image segmentation and registration method that avoids the correction task altogether and enables processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that addresses the lack of gold-standard data. The three correction methodologies under comparison performed reasonably well, and it is difficult to determine which method is the most advisable.
We demonstrate that correcting susceptibility-derived distortion is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increasing the sensitivity of the whole pipeline without any loss in specificity.
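The connectome described above is analyzed with graph theory. As a minimal illustration of that final step (a generic sketch, not code from PySDCev), a small weighted connectome can be reduced to two common graph measures, node degree and connection density:

```python
import numpy as np

def degree_and_density(connectome: np.ndarray):
    """Node degree and density for an undirected, weighted connectome.

    The connectome is a symmetric adjacency matrix; an entry > 0 means a
    fiber bundle links two cortical regions.
    """
    adj = (connectome > 0).astype(int)
    np.fill_diagonal(adj, 0)             # ignore self-connections
    degree = adj.sum(axis=1)             # edges incident to each region
    n = adj.shape[0]
    density = adj.sum() / (n * (n - 1))  # fraction of possible edges present
    return degree, density

# Toy 4-region connectome: region 0 connects to every other region.
C = np.array([[0, 2, 1, 3],
              [2, 0, 0, 0],
              [1, 0, 0, 0],
              [3, 0, 0, 0]], dtype=float)
deg, dens = degree_and_density(C)
# deg = [3, 1, 1, 1]; dens = 0.5
```

Distortion in the upstream tracking step perturbs exactly these entries, which is why sensitivity and specificity of the whole pipeline are judged on the recovered connectome.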
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace- and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is often difficult. In this thesis, methods for solving some of these queueing problems using digital signal processing techniques are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
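The thesis' specific inversion methods are not reproduced in the abstract; one standard numerical route is to sample the transform on the unit circle and recover the probabilities with an FFT. A minimal sketch under that assumption, checked against the M/M/1 queue-length distribution, whose closed form (geometric) is known:

```python
import numpy as np

def invert_pgf(P, N=64):
    """Recover p_0 .. p_{N-1} from a probability generating function.

    Samples P(z) at N points on the unit circle; the FFT of those samples,
    divided by N, returns the series coefficients. The aliasing error decays
    geometrically for queue-length distributions with light tails.
    """
    z = np.exp(2j * np.pi * np.arange(N) / N)
    return np.real(np.fft.fft(P(z)) / N)

rho = 0.5                                    # server utilization
pgf = lambda z: (1 - rho) / (1 - rho * z)    # M/M/1 queue-length PGF
p = invert_pgf(pgf)
# p[n] ≈ (1 - rho) * rho**n, e.g. p[0] ≈ 0.5, p[3] ≈ 0.0625
```

The same routine applies to any queue whose length distribution is known only through its z-transform; only the `pgf` callable changes.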
Abstract:
One of the main objectives of this study was to functionalise various rubbers (i.e. ethylene propylene copolymer (EP), ethylene propylene diene terpolymer (EPDM), and natural rubber (NR)) using the functional monomers maleic anhydride (MA) and glycidyl methacrylate (GMA) via reactive processing routes. The functionalisation of the rubber was carried out via different reactive processing methods in an internal mixer. GMA was free-radically grafted onto EP and EPDM in the melt state in the absence and presence of a comonomer, trimethylolpropane triacrylate (TRIS). To optimise the grafting conditions and the compositions, the effects of various parameters on the grafting yields and the extent of side reactions were investigated. Precipitation and Soxhlet extraction methods were established to purify the GMA-modified rubbers, and the grafting degree was determined by FTIR and titration. It was found that without TRIS the grafting degree of GMA increased with increasing peroxide concentration. However, grafting was low, and the homopolymerisation of GMA and crosslinking of the polymers were identified as the main side reactions competing with the desired grafting reaction for EP and EPDM, respectively. The use of the tri-functional comonomer TRIS was shown to greatly enhance the GMA grafting and reduce the side reactions, as seen in the higher GMA grafting degree, less alteration of the rheological properties of the polymer substrates, and very little formation of polyGMA. The grafting mechanisms were investigated. MA was grafted onto NR using both thermal initiation and peroxide initiation. The results showed clearly that the reaction of MA with NR could be thermally initiated above 140°C in the absence of peroxide. At a preferred temperature of 200°C, the grafting degree increased with increasing MA concentration. The grafting reaction could also be initiated with peroxide.
It was found that 2,5-dimethyl-2,5-bis(tert-butylperoxy)hexane (T101) was a suitable peroxide to initiate the reaction efficiently above 150°C. The second objective of the work was to utilise the functionalised rubbers in a second step to achieve in-situ compatibilisation of blends based on poly(ethylene terephthalate) (PET), in particular with GMA-grafted EP and EPDM; the reactive blending was carried out in an internal mixer. The effects of the GMA grafting degree, the viscosities of the GMA-grafted EP and EPDM, and the presence of polyGMA in the rubber samples on the compatibilisation of the PET blends were investigated in terms of morphology, dynamic mechanical properties, and tensile properties. It was found that the GMA-modified rubbers were very efficient in compatibilising the PET blends, as supported by the much finer morphology and the better tensile properties. The evidence obtained from the analysis of the PET blends strongly supports the formation of copolymers through interfacial reactions between the grafted epoxy groups in the GMA-modified rubber and the terminal groups of PET in the blends.
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios, and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computing power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
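The Transputer implementation itself is not shown in the abstract, but the channel-parallel idea generalizes to any worker pool: each response channel can be curve-fitted independently, so channels are simply mapped across processors. A toy sketch of that decomposition (simple peak-picking stands in for the full Rational Fraction Polynomial fit):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def fit_channel(frf):
    """Stand-in for a per-channel modal fit: return the frequency at the
    largest FRF magnitude (the real RFP method fits a rational polynomial)."""
    freqs, mags = frf
    return freqs[int(np.argmax(mags))]

freqs = np.linspace(0.0, 100.0, 1001)
# Three synthetic channels, each peaking at a different natural frequency (Hz).
channels = [(freqs, 1.0 / (1.0 + (freqs - f0) ** 2))
            for f0 in (10.0, 25.0, 40.0)]

with ThreadPoolExecutor(max_workers=3) as pool:  # one worker per channel
    peaks = list(pool.map(fit_channel, channels))
# peaks ≈ [10.0, 25.0, 40.0]
```

Adding response channels only lengthens the `channels` list and the worker pool, which mirrors the thesis' point that more channels simply require more processors.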
Abstract:
Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study statistically investigated the effect of the interaction between landscape spatial pattern and geospatial processing methods, unlike most studies, which consider the effect of each factor only in isolation. This matters because data used to calculate landscape metrics typically undergo a series of data abstraction processing tasks that are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared with two components of landscape pattern: spatial heterogeneity and the proportion of landcover class area. The interactions and their effect on the final landcover map were described using landscape metrics to measure landscape pattern and classification accuracy (the response variables). All landscape metrics and classification accuracy were shown to be affected both by landscape pattern and by processing methods. Large variability in the response of those variables, and interactions between the explanatory variables, were observed. However, even though interactions occurred, they only affected the magnitude of the difference in landscape metric values. Thus, provided that the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared. For example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. But the magnitude of the difference between landscapes may change, and therefore absolute values of landscape metrics may need to be interpreted with caution.
The explanatory variables with the largest effects were spatial heterogeneity and pixel size; these tended to produce large main effects and large interactions. The high variability in the response variables and the interaction of the explanatory variables indicate that it would be difficult to generalise about the impact of processing on landscape pattern, as only two processing methods were tested, and untested processing methods may result in even greater spatial uncertainty. © 2013 Elsevier B.V.
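As a concrete illustration of one response variable, "number of patches" can be computed by connected-component labeling of a landcover grid before and after aggregation to a coarser pixel size. A self-contained sketch (4-connectivity, binary cover, strict-majority aggregation; real studies use dedicated tools such as FRAGSTATS):

```python
def count_patches(grid):
    """Number of 4-connected patches of cells with value 1."""
    rows, cols = len(grid), len(grid[0])
    seen, patches = set(), 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                patches += 1
                stack = [(r, c)]  # flood-fill one patch
                while stack:
                    i, j = stack.pop()
                    if not (0 <= i < rows and 0 <= j < cols):
                        continue
                    if (i, j) in seen or grid[i][j] != 1:
                        continue
                    seen.add((i, j))
                    stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return patches

def aggregate_majority(grid, k=2):
    """Coarsen pixel size by k: each k-by-k block becomes 1 only if a strict
    majority of its cells are 1 (ties become 0)."""
    return [[int(sum(grid[i + di][j + dj]
                     for di in range(k) for dj in range(k)) * 2 > k * k)
             for j in range(0, len(grid[0]), k)]
            for i in range(0, len(grid), k)]

land = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 0]]
# count_patches(land) -> 2; after aggregation one patch disappears entirely,
# so the metric changes with pixel size even though the landscape did not.
```

This is exactly the kind of processing-induced shift in a landscape metric's absolute value that the study warns should be interpreted with caution.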
Abstract:
Current state of the art techniques for landmine detection in ground penetrating radar (GPR) utilize statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. This research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection. This GPR specific augmentation is used to mitigate issues caused by insufficient training sets. This work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces the use of convolutional neural networks as a method to learn feature extraction parameters. These learned convolutional features outperform hand-designed features in GPR detection tasks. This work presents a number of methods, both borrowed from and motivated by the substantial work in visual image processing. The methods developed and presented in this work show an improvement in overall detection performance and introduce a method to improve the robustness of statistical classification.
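The physics behind the hyperbolic signature is simple two-way travel time: a point scatterer at depth d below along-track position x0 is seen at t(x) = (2/v)·sqrt(d² + (x − x0)²) as the antenna moves along x. A hedged sketch of how synthetic hyperbolas for augmentation might be generated (the parameter values are illustrative, not taken from the thesis):

```python
import math

def hyperbola_travel_times(x_positions, x0, depth, v):
    """Two-way travel time (s) to a buried point scatterer at each antenna
    position. x0 and depth are in meters; v is the propagation velocity in
    the soil (m/s). Plotted against x, the times trace the hyperbola seen
    in 2-D GPR slices."""
    return [2.0 * math.sqrt(depth ** 2 + (x - x0) ** 2) / v
            for x in x_positions]

# Illustrative target 0.5 m deep under x0 = 1.0 m, v = 1e8 m/s (moist soil).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
t = hyperbola_travel_times(xs, x0=1.0, depth=0.5, v=1.0e8)
# The apex (antenna directly above the target) gives the minimum time,
# 2 * 0.5 / 1e8 = 10 ns, and the curve is symmetric about x0.
```

Sweeping x0, depth, and v yields families of such curves that can be stamped into clutter backgrounds, which is the spirit of physics-based augmentation for small training sets.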
Abstract:
In this paper, processing methods of Fourier optics implemented in a digital holographic microscopy system are presented. The proposed methodology is based on the ability of digital holography to carry out the whole reconstruction of the recorded wave front and, consequently, to determine the phase and intensity distribution in any arbitrary plane located between the object and the recording plane. In this way, in digital holographic microscopy the field produced by the objective lens can be reconstructed along its propagation, allowing the reconstruction of the back focal plane of the lens, so that the complex amplitudes of the Fraunhofer diffraction, or equivalently the Fourier transform, of the light distribution across the object can be known. Manipulating the Fourier-transform plane makes it possible to design digital methods of optical processing and image analysis. The proposed method has great practical utility and represents a powerful tool for image analysis and data processing. The theoretical aspects of the method are presented, and its validity has been demonstrated using computer-generated holograms and simulated images of microscopic objects. (c) 2007 Elsevier B.V. All rights reserved.
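Numerically, the Fourier-plane manipulation described above amounts to a mask applied between a forward and an inverse 2-D FFT. A minimal sketch of such a digital 4f-style operation (a circular low-pass mask; the paper's actual reconstruction chain is more involved):

```python
import numpy as np

def fourier_filter(field, keep_radius):
    """Filter a complex field in its Fourier plane with a circular
    low-pass mask, mimicking a stop placed in the back focal plane."""
    F = np.fft.fftshift(np.fft.fft2(field))  # numerical Fourier plane
    n, m = field.shape
    yy, xx = np.mgrid[:n, :m]
    mask = (yy - n // 2) ** 2 + (xx - m // 2) ** 2 <= keep_radius ** 2
    return np.fft.ifft2(np.fft.ifftshift(F * mask))

# Sanity check: a constant (DC-only) field passes a low-pass mask unchanged.
field = np.ones((32, 32), dtype=complex)
out = fourier_filter(field, keep_radius=4)
```

Replacing the mask with a high-pass ring, a phase plate, or a matched filter gives the other classical Fourier-optics operations the paper alludes to, all on the same forward-mask-inverse pattern.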