959 results for processing method


Relevance:

30.00%

Publisher:

Abstract:

Absolute abundances (concentrations) of dinoflagellate cysts are often determined through the addition of Lycopodium clavatum marker-grains as a spike to a sample before palynological processing. An inter-laboratory calibration exercise was set up in order to test the comparability of results obtained in different laboratories, each using its own preparation method. Each of the 23 laboratories received the same amount of homogenized splits of four Quaternary sediment samples. The samples originated from different localities and consisted of a variety of lithologies. Dinoflagellate cysts were extracted and counted, and relative and absolute abundances were calculated. The relative abundances proved to be fairly reproducible, notwithstanding a need for taxonomic calibration. By contrast, excessive loss of Lycopodium spores during sample preparation resulted in non-reproducibility of absolute abundances. The use of oxidation, KOH, warm acids, acetolysis, mesh sizes larger than 15 µm and ultrasonication longer than 1 min must be avoided to obtain reproducible absolute abundances. The results of this work therefore indicate that the dinoflagellate cyst worker should choose between using the proposed standard method, which circumvents the critical steps; adding Lycopodium tablets at the end of the preparation; or using an alternative method.
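
The marker-grain arithmetic behind such absolute abundances is straightforward; the sketch below uses illustrative counts and weights, not values from this exercise:

```python
def cysts_per_gram(cysts_counted, lycopodium_counted,
                   lycopodium_added, dry_weight_g):
    """Absolute abundance from a Lycopodium marker-grain spike.

    cysts_counted      -- dinoflagellate cysts tallied on the slide
    lycopodium_counted -- marker grains tallied alongside them
    lycopodium_added   -- grains spiked into the sample before processing
    dry_weight_g       -- dry sediment weight processed, in grams
    """
    if lycopodium_counted == 0:
        raise ValueError("no marker grains counted; concentration undefined")
    return (cysts_counted * lycopodium_added
            / (lycopodium_counted * dry_weight_g))

# Example with illustrative numbers: 200 cysts counted against 100 of the
# 12542 spiked grains, in 5 g of sediment -> 5016.8 cysts per gram.
concentration = cysts_per_gram(200, 100, 12542, 5.0)
```

The formula makes clear why losing marker grains during preparation biases the result: any spores lost inflate the ratio and hence the apparent concentration.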

Relevance:

30.00%

Publisher:

Abstract:

The combination of Metal Injection Moulding (MIM) and the Space Holder Method (SHM) is a promising technique for manufacturing porous titanium parts with well-defined porosity, such as biomedical implants, since it allows a high degree of automation and reduces large-scale production costs compared with the traditional technique (SHM and green machining). However, application of this technique has been limited by partial closure of the porosity at the sample surface, which degrades fixation of the implant to the bone. Moreover, until now it has not been possible to achieve stable processing conditions when the space-holder content exceeds 50 vol.%. Yet the literature reports that the best porosity range for titanium spinal implants lies between 60 and 65 vol.%. In the present study, therefore, two approaches were pursued to produce highly porous samples by combining MIM and SHM at a constant space-holder content of 70 vol.% with open porosity at the surface. In the first approach, the optimum amount of space holder was investigated; feedstock homogenization and process parameters were improved to make injection of the feedstock possible. In the second approach, plasma treatment was applied to the samples before the final sintering step. Both routes improved the dimensional stability of the samples during thermal debinding and sintering, allowing highly porous titanium samples to be sintered without deformation of the structure.
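
As a rough bookkeeping of how space-holder content relates to total porosity (sintering shrinkage and surface pore closure, central concerns of the study, are not modelled; the 5% residual matrix porosity is an assumed placeholder):

```python
def expected_porosity(space_holder_frac, matrix_porosity=0.05):
    """Naive total-porosity estimate for a MIM + space-holder part.

    Macro-porosity is taken equal to the space-holder volume fraction,
    plus residual micro-porosity in the sintered metal matrix (the 5%
    default is an assumed placeholder, not a measured value).
    """
    return space_holder_frac + (1.0 - space_holder_frac) * matrix_porosity

# 70 vol.% space holder with 5% residual matrix porosity -> ~71.5% total.
total = expected_porosity(0.70)
```

Even this naive estimate shows why a 70 vol.% space-holder loading is a plausible starting point when the target window after densification is 60-65 vol.%.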

Relevance:

30.00%

Publisher:

Abstract:

This thesis reports on a novel method for building a 3-D model of the above-water portion of icebergs using surface imaging. The goal is to work towards the automation of iceberg surveys, allowing an Autonomous Surface Craft (ASC) to acquire shape and size information. After data and image collection, the core software algorithm comprises three parts: occluding-contour finding, volume intersection, and parameter estimation. A software module is designed that could run on the ASC to perform automatic, fast processing of above-water surface image data and determine iceberg shape and size. The resolution of the method is calculated using data from the iceberg database of the Program of Energy Research and Development (PERD). The method was investigated using data from field trials conducted during the summer of 2014, in which 8 icebergs were surveyed over 3 expeditions. The results were analyzed to determine iceberg characteristics, and the limitations of the method, including its accuracy, are addressed. A surface imaging system and a LIDAR system were developed in 2015 to profile the above-water portion of icebergs.
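
The volume-intersection step can be illustrated with a minimal shape-from-silhouette (voxel carving) sketch; the orthographic projections and axis convention below are simplifying assumptions, not the thesis's camera model:

```python
import numpy as np

def carve(volume_shape, silhouettes):
    """Shape-from-silhouette (volume intersection), orthographic sketch.

    `silhouettes` maps an axis index (0, 1 or 2) to a 2-D boolean mask;
    a voxel survives only if its projection along every listed axis
    falls inside the corresponding silhouette.
    """
    occupied = np.ones(volume_shape, dtype=bool)
    for axis, mask in silhouettes.items():
        # Broadcast the 2-D silhouette back along its projection axis.
        occupied &= np.expand_dims(mask, axis=axis)
    return occupied

# Toy example: a 4x4x4 grid carved by two orthogonal 2x2 silhouettes.
sil = np.zeros((4, 4), dtype=bool)
sil[1:3, 1:3] = True
model = carve((4, 4, 4), {0: sil, 2: sil})
```

The intersection over views is conservative: it always contains the true shape, which is why the thesis pairs it with parameter estimation to refine the model.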

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an image-processing-based method for detecting pitting corrosion in steel structures. High Dynamic Range (HDR) imaging has been carried out in this regard to demonstrate the effectiveness of such relatively inexpensive techniques, which are of immense benefit to the Non-Destructive Testing (NDT) community. Pitting corrosion of a steel sample in a marine environment is successfully detected using the proposed methodology. It is observed that the proposed method has definite potential for application to a wider range of problems.
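
A minimal numpy sketch of the kind of pipeline involved, assuming a simple weighted-average HDR merge and a statistical darkness threshold for pit candidates (the paper's actual processing chain is not reproduced here):

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Naive HDR merge: exposure-normalized weighted average.

    images -- list of float arrays in [0, 1]; a hat-function weight
    favours mid-range pixels, a stand-in for full radiometric recovery.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # hat weighting
        num += w * img / t                  # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-9)

def pit_candidates(radiance, k=1.5):
    """Flag pixels darker than mean - k*std as possible corrosion pits."""
    return radiance < radiance.mean() - k * radiance.std()
```

A single dark pit against an otherwise bright surface is flagged by the threshold, which is the intuition behind using HDR data: pits stay dark across exposures while glare does not.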

Relevance:

30.00%

Publisher:

Abstract:

Current state of the art techniques for landmine detection in ground penetrating radar (GPR) utilize statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. This research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection. This GPR specific augmentation is used to mitigate issues caused by insufficient training sets. This work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces the use of convolutional neural networks as a method to learn feature extraction parameters. These learned convolutional features outperform hand-designed features in GPR detection tasks. This work presents a number of methods, both borrowed from and motivated by the substantial work in visual image processing. The methods developed and presented in this work show an improvement in overall detection performance and introduce a method to improve the robustness of statistical classification.
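
The physics behind such augmentation is the hyperbolic two-way travel time of a buried point target; the sketch below stamps a synthetic response into a B-scan under assumed, illustrative velocity and sampling parameters:

```python
import numpy as np

def add_hyperbola(bscan, x0, t0, velocity=0.1, amplitude=1.0,
                  dx=0.02, dt=0.5):
    """Stamp a synthetic point-target response into a B-scan.

    A buried point target traces the hyperbola
        t(x) = sqrt(t0**2 + (2 * (x - x0) / velocity)**2)
    in two-way travel time (t in ns, x in m, velocity in m/ns). All
    parameter values are illustrative, not calibrated to real GPR data.
    """
    n_t, n_x = bscan.shape
    out = bscan.copy()
    for ix in range(n_x):
        x = ix * dx
        t = np.sqrt(t0 ** 2 + (2.0 * (x - x0) / velocity) ** 2)
        it = int(round(t / dt))
        if 0 <= it < n_t:
            out[it, ix] += amplitude   # apex lands at (t0/dt, x0/dx)
    return out

# Augment an empty B-scan with one target beneath the centre of the scan.
scan = add_hyperbola(np.zeros((64, 50)), x0=0.5, t0=5.0)
```

Synthetic targets like this one can be superimposed on real background clutter to enlarge an otherwise insufficient training set.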

Relevance:

30.00%

Publisher:

Abstract:

There is an increasing demand for DNA analysis because of the sensitivity of the method and its ability to uniquely identify and distinguish individuals with a high degree of certainty. This demand has led to huge backlogs in evidence lockers, since current DNA extraction protocols require long processing times. The procedure becomes more complicated when analyzing sexual assault casework samples, where the evidence contains more than one contributor; the additional processing required to separate different cell types and simplify the final data interpretation further adds to the already cumbersome protocols. The goal of the present project is to develop a rapid and efficient extraction method that permits selective digestion of mixtures. Selective recovery of male DNA was achieved with as little as 15 minutes of lysis upon exposure to high pressure under alkaline conditions. Pressure cycling technology (PCT) is carried out in a barocycler that has a small footprint and is semi-automated. Whereas typically less than 10% of male DNA is recovered using the standard extraction protocol for rape kits, almost seven times more male DNA was recovered from swabs using this novel method. Various parameters, including instrument settings and buffer composition, were optimized to achieve selective recovery of sperm DNA. Developmental validation studies were also performed to determine the efficiency of this method in processing samples exposed to various conditions that can affect the quality of the extraction and the final DNA profile. An easy-to-use interface, minimal manual intervention and the ability to achieve high yields with simple reagents in a relatively short time make this an ideal method for potential application to sexual assault samples.

Relevance:

30.00%

Publisher:

Abstract:

In the casting of metals, tundish flow, welding, converters, and other metal processing applications, the behaviour of the fluid surface is important. In aluminium alloys, for example, oxides formed on the surface may be drawn into the body of the melt, where they act as defects in the solidified product, affecting cast quality. For this reason, accurate description of wave behaviour, air entrapment, and other effects needs to be modelled, in the presence of heat transfer and possibly phase change. The authors have developed a single-phase algorithm for modelling this problem. The Scalar Equation Algorithm (SEA) (see Refs. 1 and 2) enables the transport of the property discontinuity representing the free surface through a fixed grid. An extension of this method to unstructured mesh codes is presented here, together with validation. The new method employs a TVD flux limiter in conjunction with a ray-tracing algorithm to ensure a sharply bounded interface. Applications of the method include the filling and emptying of mould cavities, with heat transfer and phase change.
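
The role of a TVD flux limiter in keeping the interface sharp can be illustrated with a 1-D scalar advection sketch (minmod limiter, periodic domain; this is a generic MUSCL scheme, not the SEA itself):

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller slope, zero across extrema."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u, courant, steps):
    """Advect a marker field with a second-order TVD (MUSCL) scheme.

    1-D linear advection on a periodic grid; `courant` = a*dt/dx must
    satisfy 0 < courant <= 1. The limiter keeps the free-surface
    discontinuity sharp without introducing over- or undershoots.
    """
    u = u.astype(float).copy()
    for _ in range(steps):
        # Limited slope in each cell.
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        # Upwind face value at the right face of each cell (wave speed > 0).
        face = u + 0.5 * (1.0 - courant) * du
        # Conservative update: difference of face fluxes.
        u -= courant * (face - np.roll(face, 1))
    return u

# A step profile (the "free surface" marker) advected 20 cells to the right.
u0 = np.where(np.arange(100) < 50, 1.0, 0.0)
u1 = advect(u0, courant=0.5, steps=40)
```

Two properties matter for a free-surface marker and both hold here: the total marker quantity is conserved exactly (conservative flux form), and the field stays bounded in [0, 1] (TVD property).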

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Compounds exhibiting antioxidant activity have received much interest in the food industry because of their potential health benefits. Carotenoids such as lycopene, which in the human diet mainly derives from tomatoes (Solanum lycopersicum), have attracted much attention in this respect, and the study of their extraction, processing and storage procedures is of importance. Optical techniques potentially offer advantageous non-invasive and specific methods to monitor them.

Objectives: To obtain both fluorescence and Raman information to ascertain whether ultrasound-assisted extraction from tomato pulp has a detrimental effect on lycopene.

Method: Time-resolved fluorescence spectroscopy was used to monitor carotenoids in a hexane extract obtained from tomato pulp with application of ultrasound treatment (583 kHz). The resultant spectra were a combination of scattering and fluorescence. Because of their different timescales, decay-associated spectra could be used to separate the fluorescence and Raman information. This simultaneous acquisition of two complementary techniques was coupled with a very high time-resolution fluorescence lifetime measurement of the lycopene.

Results: Spectroscopic data showed the presence of phytofluene and chlorophyll in addition to lycopene in the tomato extract. The time-resolved spectral measurement containing both fluorescence and Raman data, coupled with high-resolution time-resolved measurements, in which a lifetime of ~5 ps was attributed to lycopene, indicated that lycopene appeared unaltered by ultrasound treatment. Detrimental changes were, however, observed in both the chlorophyll and phytofluene contributions.

Conclusion: Extracted lycopene appeared unaffected by ultrasound treatment, while other constituents (chlorophyll and phytofluene) were degraded.
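
The separation of prompt (Raman/scatter) and fluorescence contributions by timescale can be sketched as a linear unmixing against exponential basis functions with fixed lifetimes (all numbers below are illustrative; a real analysis convolves the basis with the measured instrument response):

```python
import numpy as np

def decay_associated_spectra(data, times, lifetimes, irf_width=0.0005):
    """Unmix time-resolved spectra into lifetime components.

    data      -- (n_times, n_wavelengths) array of measured decays
    lifetimes -- fluorescence decay times (same units as `times`); one
                 extra near-instantaneous column of width `irf_width`
                 stands in for the scattering/Raman contribution.
    Returns one amplitude spectrum per component (illustrative only).
    """
    basis = np.column_stack(
        [np.exp(-times / tau) for tau in lifetimes]
        + [np.exp(-times / irf_width)]       # prompt, Raman-like term
    )
    amps, *_ = np.linalg.lstsq(basis, data, rcond=None)
    return amps

# Toy check: a 5 ps "lycopene-like" decay plus a prompt spike (times in ns).
t = np.linspace(0.0, 0.05, 200)
signal = 2.0 * np.exp(-t / 0.005) + 0.5 * np.exp(-t / 0.0005)
spectra = decay_associated_spectra(signal[:, None], t, [0.005])
```

Because the two components decay on very different timescales, the least-squares fit recovers their amplitudes cleanly, which is the essence of separating fluorescence from Raman scatter in the decay-associated spectra.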

Relevance:

30.00%

Publisher:

Abstract:

Current hearing-assistive technology performs poorly in noisy multi-talker conditions. The goal of this thesis was to establish the feasibility of using EEG to guide acoustic processing in such conditions. To attain this goal, this research developed a model via the constructive research method, relying on literature review. Several approaches have yielded improvements in the performance of hearing-assistive devices under multi-talker conditions, namely beamforming spatial filtering, model-based sparse coding shrinkage, and onset enhancement of the speech signal. Prior research has shown that electroencephalography (EEG) signals contain information about whether the person is actively listening, what the listener is listening to, and where the attended sound source is. This thesis constructed a model for using EEG information to control beamforming, model-based sparse coding shrinkage, and onset enhancement of the speech signal. The purpose of this model is to propose a framework for using EEG signals to control sound processing so as to select a single talker in a noisy environment containing multiple talkers speaking simultaneously. On a theoretical level, the model showed that EEG can control acoustic processing. An analysis of the model identified a requirement for real-time processing and showed that the model inherits the computationally intensive properties of acoustic processing, although the model itself is of low complexity and places a relatively small load on computational resources. A research priority is to develop a prototype that controls hearing-assistive devices with EEG. This thesis concludes by highlighting challenges for future research.
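
Of the acoustic building blocks named above, beamforming is the easiest to sketch; below is a generic delay-and-sum beamformer for a uniform linear array, where the steering angle stands in for an EEG-decoded attention estimate (whole-sample delays are a deliberate simplification of fractional-delay filtering):

```python
import numpy as np

def delay_and_sum(signals, fs, mic_spacing, angle_deg, c=343.0):
    """Delay-and-sum beamformer for a uniform linear microphone array.

    signals -- (n_mics, n_samples) array; `angle_deg` is the steering
    direction, which in the proposed model would come from EEG-decoded
    attention. Delays are rounded to whole samples for simplicity.
    """
    n_mics, _ = signals.shape
    out = np.zeros(signals.shape[1])
    for m in range(n_mics):
        # Plane-wave arrival delay of mic m relative to mic 0.
        delay = m * mic_spacing * np.sin(np.radians(angle_deg)) / c
        shift = int(round(delay * fs))
        out += np.roll(signals[m], -shift)   # undo the propagation delay
    return out / n_mics
```

Steering toward the attended talker's direction aligns that talker's wavefront across microphones before averaging, while sound from other directions is averaged out of phase and attenuated.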

Relevance:

30.00%

Publisher:

Abstract:

We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method to a family of ill-posed linear inverse problems. When the observations of the unknown quantity of interest and the observation operators are known, these inverse problems concern the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of it, called the Hierarchical Reconstruction (HR) method. The HR method traces back to the Hierarchical Decomposition method in image processing. It successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. The hierarchical sum of all the hierarchical terms provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared to the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting these claimed advantages of the HR method over the Tikhonov regularization method.
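
The ladder of scales can be sketched as repeated Tikhonov solves on the running residual with a halved regularization parameter (a generic illustration of the hierarchical decomposition idea; the parameter choices are assumptions):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Closed-form minimizer of ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def hierarchical_reconstruction(A, b, lam0=1.0, levels=6):
    """Hierarchical sum x ~ x_1 + x_2 + ... over a ladder of scales.

    Each level solves a Tikhonov problem on the current residual with a
    halved regularization parameter, so coarse structure is recovered
    first and finer detail at later levels (illustrative parameters).
    """
    x = np.zeros(A.shape[1])
    residual = b.astype(float).copy()
    lam = lam0
    for _ in range(levels):
        term = tikhonov(A, residual, lam)   # hierarchical term at this scale
        x = x + term
        residual = b - A @ x                # passed on to the next, finer scale
        lam *= 0.5
    return x
```

Since the first hierarchical term is exactly the Tikhonov solution, each subsequent term can only shrink the data residual, which mirrors the claimed advantage over a single Tikhonov solve.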

Relevance:

30.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade with the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation; hence a single huge graph can express many different relationships between entities. The Semantic Web represents each fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, and the graph queries of other graph DBMSs, can also be viewed as subgraph matching over large graphs. Although subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can receive a large number of answers in response to a query, and these answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the properties of the matched vertices in each answer, in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method built on top of various subgraphs articulated by the user. The third model, Approximate Importance Query (AIQ), allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the properties of the vertices in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great deal of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
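
The ranked-answer idea common to all four models can be illustrated with a toy top-k matcher over labeled edge triples (brute-force backtracking with a bounded heap; this does not reproduce the thesis's indexing or pruning techniques):

```python
import heapq

def match_pattern(graph, pattern, score, k=3):
    """Top-k subgraph matching over labeled triples, with a bounded heap.

    graph, pattern -- lists of (subject, label, object) triples; every
    pattern vertex is treated as a variable (written "?x" by
    convention). `score` maps a complete substitution (a dict) to a
    number, the user-defined importance.
    """
    heap = []   # min-heap of (score, frozen substitution); keeps best k

    def extend(sub, remaining):
        if not remaining:
            s = score(sub)
            entry = (s, tuple(sorted(sub.items())))
            if len(heap) < k:
                heapq.heappush(heap, entry)
            elif s > heap[0][0]:            # prune: must beat the k-th best
                heapq.heapreplace(heap, entry)
            return
        ps, plabel, po = remaining[0]
        for gs, glabel, go in graph:
            if glabel != plabel:
                continue
            if sub.get(ps, gs) != gs or sub.get(po, go) != go:
                continue                     # clashes with earlier bindings
            extend({**sub, ps: gs, po: go}, remaining[1:])

    extend({}, list(pattern))
    return sorted(heap, reverse=True)
```

The user supplies the scoring function, mirroring the freedom the thesis gives over what counts as an important answer; only the k best substitutions are retained.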

Relevance:

30.00%

Publisher:

Abstract:

In most countries, fish sausage is supplied in various formulations alongside other food products. Unfortunately, in our country, for various reasons, industrial-scale production and supply of fish sausage has not yet succeeded, and the efforts made have either failed or not been well received. Fatty fish is a rich source of polyunsaturated fatty acids (PUFA), including omega-3. In this research, efforts were made to produce sausage enriched with fish oil, and the retention of fatty acids through the heating process was examined using gas chromatography. The stages of producing minced fish and fish sausage were as follows: transferring and preparing the fish; washing the cleaned fish; filleting; separating the fillet steaks; washing and drying them; refining the meat; producing and homogenizing the mixture of basic ingredients in a cutter; filling; knotting; and heat processing. The fish sausage produced by this method was tasted and well received by the panel. In the product in which only fish meat was used, the panellists did not detect a fishy flavour or taste, and when fish oil was added during enrichment in addition to the fish meat, the flavour and taste of fish were rated as highly acceptable. The TVN of the fish sausage kept refrigerated for two months reached a maximum of 16.5, and the peroxide value a maximum of 1.5% over the same period. During this period the colony count reached a maximum of 19.5 x 10^4, the maximum coliform count was 10/g, and mould and yeast reached 83/g, while Escherichia coli, Staphylococcus aureus, Salmonella and Clostridium perfringens were not found. The final product contained 15-18% protein, about 11-15% lipid and 60-65% moisture.
A comparison of the fatty acids, including unsaturated fatty acids, in the minced fish and fish oil used to produce the sausage with those of the finished sausage showed that the heat used in processing had minimal effect on the fatty acids, and the resulting fish sausage can be considered a healthy food.

Relevance:

30.00%

Publisher:

Abstract:

Measuring the extent to which a piece of structural timber has distorted at a macroscopic scale is fundamental to assessing its viability as a structural component. From the sawmill to the construction site, as structural timber dries, distortion can render it unsuitable for its intended purposes. This rejection of unusable timber is a considerable source of waste to the timber industry and the wider construction sector. As such, ensuring accurate measurement of distortion is a key step in addressing inefficiencies within timber processing. Currently, the FRITS frame method is the established approach used to gain an understanding of timber surface profile. The method, while reliable, depends on relatively few measurements taken across a limited area of the overall surface, with a great deal of interpolation required. Further, the process is unavoidably slow and cumbersome: the immobile scanning equipment limits where and when measurements can be taken and constricts the process as a whole. This thesis seeks to introduce LiDAR scanning as a new, alternative approach to distortion feature measurement. Although in its infancy as a measurement technique within timber research, the practicalities of using LiDAR scanning as a measurement method are demonstrated here, exploiting many of the advantages the technology has over current approaches. LiDAR scanning creates a much more comprehensive image of a timber surface, generating input data several orders of magnitude larger than that of the FRITS frame. Set-up and scanning time for LiDAR is also much quicker and more flexible than for existing methods; with LiDAR scanning the measurement process is freed from many of the constraints of the FRITS frame and can be carried out in almost any environment. For this thesis, surface scans were carried out on seven Sitka spruce samples of dimensions 48.5 x 102 x 3000 mm using both the FRITS frame and the LiDAR scanner.
The samples presented marked levels of distortion and were relatively free from knots. A computational measurement model was created to extract feature measurements from the raw LiDAR data, enabling an assessment of each piece of timber to be carried out in accordance with existing standards. Assessment of distortion features focused primarily on the measurement of twist, owing to its strong prevalence in spruce and the considerable concern it generates within the construction industry. Additional measurements of surface inclination and bow were also made with each method to further establish LiDAR's credentials as a viable alternative. Overall, feature measurements generated by the new LiDAR method compared well with those of the established FRITS method. From these investigations, recommendations were made to address inadequacies within existing measurement standards, namely their reliance on generalised and interpretative descriptions of distortion. The potential for further uses of LiDAR scanning within timber research is also discussed.
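
As one concrete example of extracting a distortion feature from scan data, twist can be estimated from two end cross-sections by comparing fitted slopes (a simplified stand-in for the standards-based measure; all geometry below is illustrative):

```python
import numpy as np

def twist_angle(end_a, end_b):
    """Twist between two end cross-sections of a board, in degrees.

    Each input is an (n, 2) array of (y, z) surface points sampled
    across the width at one end of the piece (e.g. a thin slice of the
    LiDAR point cloud). A least-squares line z = m*y + c is fitted at
    each end, and twist is the angle between the two fitted slopes.
    """
    def slope(points):
        m, _ = np.polyfit(points[:, 0], points[:, 1], 1)
        return m
    return np.degrees(np.arctan(slope(end_b)) - np.arctan(slope(end_a)))

# Toy example: a flat end versus an end rising 5 mm over a 100 mm width
# (~2.86 degrees of twist).
end_a = np.column_stack([np.linspace(0, 100, 20), np.zeros(20)])
end_b = np.column_stack([np.linspace(0, 100, 20), np.linspace(0, 5, 20)])
twist = twist_angle(end_a, end_b)
```

Because LiDAR supplies dense points across the whole width rather than a handful of probe locations, the fitted slopes are far less sensitive to local surface defects than interpolation between sparse measurements.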