953 results for dry method processing


Relevance:

30.00%

Publisher:

Abstract:

In the casting of metals, tundish flow, welding, converters, and other metal processing applications, the behaviour of the fluid surface is important. In aluminium alloys, for example, oxides formed on the surface may be drawn into the body of the melt, where they act as faults in the solidified product and affect cast quality. For this reason, wave behaviour, air entrapment, and other free-surface effects need to be modelled accurately, in the presence of heat transfer and possibly phase change. The authors have developed a single-phase algorithm for modelling this problem. The Scalar Equation Algorithm (SEA) (see Refs. 1 and 2) enables the transport of the property discontinuity representing the free surface through a fixed grid. An extension of this method to unstructured mesh codes is presented here, together with validation. The new method employs a TVD flux limiter in conjunction with a ray-tracing algorithm to ensure a sharp, bounded interface. Applications of the method are in the filling and emptying of mould cavities, with heat transfer and phase change.
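As a rough illustration of how a TVD flux limiter keeps a transported free-surface marker sharp and bounded on a fixed grid, the Python sketch below advects a 1D scalar discontinuity with a minmod-limited MUSCL scheme. It is not the authors' Scalar Equation Algorithm (which is three-dimensional, unstructured and coupled to ray tracing); the grid, advection speed and limiter choice are illustrative assumptions.

import numpy as np

def minmod(a, b):
    # Minmod limiter: zero at extrema, otherwise the smaller of the two slopes.
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd(phi, u, dx, dt, nsteps):
    # Advect a marker field phi (0 = air, 1 = metal) at constant speed u > 0
    # on a fixed periodic 1D grid with a MUSCL-type TVD scheme, so the
    # interface stays sharp and bounded in [0, 1].
    for _ in range(nsteps):
        dl = phi - np.roll(phi, 1)             # backward difference
        dr = np.roll(phi, -1) - phi            # forward difference
        slope = minmod(dl, dr)                 # limited slope per cell
        phi_face = phi + 0.5 * (1.0 - u * dt / dx) * slope   # upwind face value
        flux = u * phi_face
        phi = phi - dt / dx * (flux - np.roll(flux, 1))
    return phi

# Transport a sharp free-surface marker across the grid (illustrative values).
phi0 = np.where(np.arange(200) < 100, 1.0, 0.0)
phi1 = advect_tvd(phi0, u=1.0, dx=0.01, dt=0.004, nsteps=200)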

Relevance:

30.00%

Publisher:

Abstract:

Water removal in paper manufacturing is an energy-intensive process. The dewatering process generally consists of four stages, the first three of which remove water mechanically through gravity filtration, vacuum dewatering and wet pressing. In the fourth stage, water is removed thermally, which is the most expensive stage in terms of energy use. In order to analyse water removal during vacuum dewatering, a numerical model was created using a Level-Set method. Several 2D structures of the paper model were created in MATLAB code, with randomly positioned circular fibres of identical orientation. The model considers the influence of the forming fabric, which supports the paper sheet during dewatering, by using volume forces to represent flow resistance in the momentum equation. The models were used to estimate the dry content of the porous structure for various dwell times. The relation between dry content and dwell time was compared to laboratory data for paper sheets with basis weights of 20 and 50 g/m2 exposed to vacuum levels between 20 kPa and 60 kPa. The comparison showed reasonable agreement for dewatering and air flow rates. The random positioning of the fibres influences the dewatering rate slightly. To achieve more accurate comparisons, the random orientation of the fibres needs to be considered, as well as the deformation and displacement of the fibres during the dewatering process.
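The original model was implemented in MATLAB with a level-set solver; the short Python sketch below only illustrates the kind of randomly generated fibre structure described above (circular cross-sections at random positions with identical orientation) and a crude porosity estimate. All dimensions and fibre counts are assumed for illustration.

import numpy as np

rng = np.random.default_rng(seed=1)

def random_fibre_structure(n_fibres, radius, width, height):
    # Place circular fibre cross-sections at random positions in a 2D domain
    # (all fibres share the same orientation, normal to the plane).  Overlap
    # is allowed for simplicity; returns the fibre centres.
    x = rng.uniform(radius, width - radius, n_fibres)
    y = rng.uniform(radius, height - radius, n_fibres)
    return np.column_stack([x, y])

def pore_fraction(centres, radius, width, height, n=400):
    # Crude pore-fraction estimate on a regular grid: a point is solid if it
    # falls inside any fibre cross-section.
    xs, ys = np.meshgrid(np.linspace(0, width, n), np.linspace(0, height, n))
    solid = np.zeros_like(xs, dtype=bool)
    for cx, cy in centres:
        solid |= (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    return 1.0 - solid.mean()

# Illustrative (assumed) dimensions: 1 mm x 0.2 mm domain, 10 micron fibres.
centres = random_fibre_structure(n_fibres=150, radius=10e-6, width=1e-3, height=0.2e-3)
print("pore fraction:", pore_fraction(centres, 10e-6, 1e-3, 0.2e-3))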

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Compounds exhibiting antioxidant activity have received much interest in the food industry because of their potential health benefits. Carotenoids such as lycopene, which in the human diet derives mainly from tomatoes (Solanum lycopersicum), have attracted much attention in this respect, and the study of their extraction, processing and storage procedures is of importance. Optical techniques potentially offer advantageous non-invasive and specific methods to monitor them. Objectives: To obtain both fluorescence and Raman information to ascertain whether ultrasound-assisted extraction from tomato pulp has a detrimental effect on lycopene. Method: Time-resolved fluorescence spectroscopy was used to monitor carotenoids in a hexane extract obtained from tomato pulp with application of ultrasound treatment (583 kHz). The resultant spectra were a combination of scattering and fluorescence. Because of their different timescales, decay associated spectra could be used to separate the fluorescence and Raman information. This simultaneous acquisition of two complementary techniques was coupled with a very high time-resolution fluorescence lifetime measurement of the lycopene. Results: Spectroscopic data showed the presence of phytofluene and chlorophyll in addition to lycopene in the tomato extract. The time-resolved spectral measurement containing both fluorescence and Raman data, coupled with high-resolution time-resolved measurements in which a lifetime of ~5 ps was attributed to lycopene, indicated that lycopene appeared unaltered by ultrasound treatment. Detrimental changes were, however, observed in both the chlorophyll and phytofluene contributions. Conclusion: Extracted lycopene appeared unaffected by ultrasound treatment, while other constituents (chlorophyll and phytofluene) were degraded.
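The separation of scattered (Raman-like) and fluorescence contributions rests on their very different timescales. As a much simplified, hypothetical illustration of that principle (a single-wavelength decay rather than full decay associated spectra), the following Python sketch fits a synthetic photon-counting trace with a fast and a slow exponential component; the lifetimes and noise level are invented.

import numpy as np
from scipy.optimize import curve_fit

def two_component_decay(t, a_fast, tau_fast, a_slow, tau_slow):
    # Fast component (scatter / Raman-like, near-instantaneous) plus a
    # slower fluorescence component.
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

# Synthetic (invented) photon-counting trace, times in picoseconds.
t = np.linspace(0.0, 100.0, 500)
rng = np.random.default_rng(0)
trace = two_component_decay(t, 1.0, 1.0, 0.3, 30.0) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(two_component_decay, t, trace, p0=[1.0, 2.0, 0.2, 20.0], maxfev=10000)
print("fast lifetime (ps):", popt[1], "  slow lifetime (ps):", popt[3])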

Relevance:

30.00%

Publisher:

Abstract:

Current hearing-assistive technology performs poorly in noisy multi-talker conditions. The goal of this thesis was to establish the feasibility of using EEG to guide acoustic processing in such conditions. To attain this goal, the research developed a model via the constructive research method, relying on a literature review. Several approaches have shown improvements in the performance of hearing-assistive devices under multi-talker conditions, namely beamforming spatial filtering, model-based sparse coding shrinkage, and onset enhancement of the speech signal. Prior research has shown that electroencephalography (EEG) signals contain information about whether the person is actively listening, what the listener is listening to, and where the attended sound source is. This thesis constructed a model for using EEG information to control beamforming, model-based sparse coding shrinkage, and onset enhancement of the speech signal. The purpose of the model is to propose a framework for using EEG signals to control sound processing so as to select a single talker in a noisy environment containing multiple talkers speaking simultaneously. On a theoretical level, the model showed that EEG can control acoustical processing. An analysis of the model identified a requirement for real-time processing and showed that the model inherits the computationally intensive properties of acoustical processing, although the model itself is of low complexity and places a relatively small load on computational resources. A research priority is to develop a prototype that controls hearing-assistive devices with EEG. The thesis concludes by highlighting challenges for future research.
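Of the acoustic processing stages the model proposes to control with EEG, beamforming is the most easily sketched. The Python fragment below shows a generic delay-and-sum beamformer steered toward an attended direction; in the proposed framework that direction would be decoded from EEG, which is simply assumed here, and the array geometry and sampling rate are placeholders.

import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    # Steer a microphone array toward `direction` (unit vector) by delaying
    # each channel so that sound from that direction adds coherently, which
    # attenuates competing talkers.  The steering direction is assumed to be
    # supplied externally (e.g. from an EEG-decoded attention estimate).
    n_mics, n_samples = signals.shape
    delays = mic_positions @ direction / c          # per-channel delay, seconds
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        spectrum = np.fft.rfft(signals[m])
        spectrum *= np.exp(-2j * np.pi * freqs * delays[m])   # fractional delay
        out += np.fft.irfft(spectrum, n_samples)
    return out / n_mics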

Relevance:

30.00%

Publisher:

Abstract:

Jerked beef, an industrial meat product obtained from beef with the addition of sodium chloride and curing salts and subjected to a maturing and drying process, is a typical Brazilian product that has gradually been discovered by consumers. Because of possible side effects on consumer health revealed by laboratory tests, the meat industry is replacing synthetic antioxidants with natural substances of antioxidant potential. This study aimed to evaluate the lipid oxidation of jerked beef throughout the storage period when sodium nitrite is replaced by natural extracts of propolis and yerba mate. For jerked beef processing, brisket was used as the raw material and processed in six different formulations: formulation 1 (control, in natura), formulation 2 (sodium nitrite, NO), formulation 3 (yerba mate, EM), formulation 4 (propolis extract, PRO), formulation 5 (sodium nitrite + yerba mate, MS + NO), and formulation 6 (propolis extract + sodium nitrite, PRO + NO). The raw material was subjected to wet salting, dry salting (tombos), drying at 25 °C, packaging, and storage in a BOD incubator at 25 °C. Samples of each formulation were taken every 7 days for analysis of lipid oxidation by the TBARS method. For all formulations, proximate composition was analysed at time zero and at sixty days of storage. Water activity and colour (L*, a*, b*) were monitored at time zero, thirty and sixty days of storage. Counts of Salmonella spp., total coliforms, thermotolerant coliforms and coagulase-positive staphylococci were performed at time zero and at sixty days. The natural antioxidants evaluated reduced lipid oxidation by up to 2.5 times compared with the product in natura, with values showing no significant differences between the NO and EM treatments, confirming their potential to minimise lipid oxidation of jerked beef throughout the 60 days of storage. The results also showed that yerba mate has a higher antioxidant capacity than propolis, except in the PRO + NO formulation. When yerba mate was combined with sodium nitrite, TBARS values were close to those obtained for the samples with added sodium nitrite alone. The proximate composition of the formulations remained within the standards required by IN nº 22/2000 for jerked beef. Samples that differed significantly at the 5% level were directly related to the formulation type. Microbial counts were within the standards of RDC nº 12/2001 for matured meat products. The intensity of red (a*) decreased and the intensity of yellow (b*) increased with storage time, indicating a darkening of the product even though L* also increased. These results suggest that yerba mate, combined with another antioxidant, is a good alternative for the meat industry to reduce the addition of curing salts.

Relevance:

30.00%

Publisher:

Abstract:

We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method to solving a family of ill-posed linear inverse problems. When the observations on the unknown quantity of interest and the observation operators are known, these inverse problems concern the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of it, the so-called Hierarchical Reconstruction (HR) method. The HR method can be traced back to the hierarchical decomposition method in image processing. The HR method successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. As the sum of all the hierarchical terms, the hierarchical sum from the HR method provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared with the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer control of the approximation distance between the hierarchical sum and the unknown, thanks to using a ladder of finitely many hierarchical scales. We report numerical experiments supporting our claims about these advantages of the HR method over the Tikhonov regularization method.
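A minimal sketch of the HR idea, assuming a generic quadratic stabilizing function rather than the problem-specific stabilizers studied in the thesis: each level solves a Tikhonov problem for the current residual with a smaller regularisation parameter, and the hierarchical terms are accumulated into the hierarchical sum. The number of levels and the scale ratio are illustrative.

import numpy as np

def tikhonov(A, b, lam):
    # Classical Tikhonov-regularised least squares:
    # argmin_x ||A x - b||^2 + lam * ||x||^2.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def hierarchical_reconstruction(A, b, lam0=1.0, levels=6, ratio=0.25):
    # At each level, solve a Tikhonov problem for the current residual with a
    # finer (smaller) regularisation parameter and accumulate the hierarchical
    # term into the hierarchical sum.
    x_sum = np.zeros(A.shape[1])
    residual = b.copy()
    lam = lam0
    for _ in range(levels):
        term = tikhonov(A, residual, lam)   # hierarchical term at this scale
        x_sum += term
        residual = b - A @ x_sum            # residual passed to the next level
        lam *= ratio                        # move to a finer hierarchical scale
    return x_sum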

Relevance:

30.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer, in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them according to user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
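As a toy illustration of the SIQ-style scoring semantics (not of the scalable pruning algorithms the proposal develops), the Python sketch below enumerates matches of a small edge-labelled pattern by brute force and ranks answers by the summed importance of their matched vertices; the graph, the importance values and the pattern are invented for the example.

import heapq
import itertools

# A toy edge-labelled graph as (subject, predicate, object) triples, plus a
# vertex-importance table used for scoring answers (all values invented).
triples = {("alice", "knows", "bob"), ("alice", "knows", "carol"),
           ("bob", "worksAt", "acme"), ("carol", "worksAt", "acme")}
importance = {"alice": 0.9, "bob": 0.4, "carol": 0.7, "acme": 0.5}

# A small basic graph pattern with variables prefixed by '?'.
pattern = [("?x", "knows", "?y"), ("?y", "worksAt", "?z")]

def matches(pattern, triples):
    # Naive subgraph matching: try every assignment of graph vertices to the
    # query variables and keep those satisfying all pattern edges.
    vertices = {v for s, _, o in triples for v in (s, o)}
    variables = sorted({t for edge in pattern for t in edge if t.startswith("?")})
    for assignment in itertools.product(vertices, repeat=len(variables)):
        sub = dict(zip(variables, assignment))
        ground = [(sub.get(s, s), p, sub.get(o, o)) for s, p, o in pattern]
        if all(t in triples for t in ground):
            yield sub

def top_k(pattern, triples, k):
    # Score each answer by the summed importance of its matched vertices and
    # keep the k best answers.
    scored = ((sum(importance[v] for v in sub.values()), sub)
              for sub in matches(pattern, triples))
    return heapq.nlargest(k, scored, key=lambda pair: pair[0])

print(top_k(pattern, triples, k=2))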

Relevance:

30.00%

Publisher:

Abstract:

Recent developments in automation, robotics and artificial intelligence have pushed these technologies into wider use in recent years, and driverless transport systems are already state-of-the-art on certain legs of transportation. This has prompted the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data were collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting and the identification of critical success factors, thanks to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques. As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method for cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs in this way, it was argued that the activity-based LCC model can facilitate learning from, and continuous improvement of, the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
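A minimal sketch of activity-based life cycle costing combined with Monte Carlo simulation, of the kind described above: annual costs are drawn per activity from assumed distributions and summed over the vessel's life. The activities and all figures below are hypothetical placeholders, not values from the AAWA model.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual activities with triangular cost distributions
# (low, mode, high) in kEUR; the figures are placeholders, not AAWA data.
activities = {
    "remote operation centre share": (300, 400, 550),
    "maintenance and spares":        (500, 700, 1100),
    "insurance":                     (200, 250, 400),
    "port and fairway dues":         (350, 380, 450),
    "fuel":                          (900, 1200, 1800),
}

def simulate_life_cycle_cost(activities, years=25, n_runs=10_000):
    # Draw each activity's annual cost from its distribution, sum over
    # activities and over the vessel's life, and return the distribution of
    # total life cycle cost.
    totals = np.zeros(n_runs)
    for low, mode, high in activities.values():
        annual = rng.triangular(low, mode, high, size=(n_runs, years))
        totals += annual.sum(axis=1)
    return totals

totals = simulate_life_cycle_cost(activities)
print("expected life cycle cost (kEUR):", totals.mean())
print("5th-95th percentile (kEUR):", np.percentile(totals, [5, 95]))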

Relevance:

30.00%

Publisher:

Abstract:

In most countries, fish sausage is supplied in various formulations alongside other food products. In our country, however, industrial-scale production and supply of fish sausage has not yet been successful for various reasons, and the efforts made so far have either failed or not been well received. Fatty fish is a rich source of polyunsaturated fatty acids (PUFA), including omega-3 fatty acids. In this research, efforts were made to produce sausage enriched with fish oil, and the retention of fatty acids through the heating process was examined using gas chromatography. The stages of producing ground fish and fish sausage were as follows: transferring and preparing the fish, washing the cleaned fish, filleting, separating the fillet meat, washing and drying it, refining the meat, producing and homogenizing the mixture of basic ingredients in a cutter, filling, knotting, and heat processing. The fish sausage produced by this method was tasted and well received by the panellists. In the product in which only fish meat was used, the panellists did not recognize a fish flavour or taste; when fish oil was used for enrichment in addition to fish meat, the fish flavour and taste were considered highly acceptable. The TVN of the fish sausage kept refrigerated for two months was at most 16.5, and the peroxide value was at most 1.5% after the same period. During this period the colony count was at most 19.5 x 10^4, the maximum coliform count was 10/g, and the maximum mould and yeast count was 83/g, while Escherichia coli, Staphylococcus aureus, Salmonella and Clostridium perfringens were not found. The protein content of the resulting product was 15-18%, lipid about 11-15% and moisture 60-65%. A comparison of the fatty acids, including unsaturated fatty acids, in the ground fish and fish oil used in production with those in the finished sausage showed that the heat used in processing had minimal effect on the fatty acids, and the resulting fish sausage can be considered a healthy food.

Relevance:

30.00%

Publisher:

Abstract:

Measuring the extent to which a piece of structural timber has distorted at a macroscopic scale is fundamental to assessing its viability as a structural component. From the sawmill to the construction site, as structural timber dries, distortion can render it unsuitable for its intended purpose. The rejection of unusable timber is a considerable source of waste for the timber industry and the wider construction sector. As such, ensuring accurate measurement of distortion is a key step in addressing inefficiencies within timber processing. Currently, the FRITS frame method is the established approach used to gain an understanding of timber surface profile. The method, while reliable, depends on relatively few measurements taken across a limited area of the overall surface, with a great deal of interpolation required. Further, the process is unavoidably slow and cumbersome, as the immobile scanning equipment limits where and when measurements can be taken and constricts the process as a whole. This thesis introduces LiDAR scanning as a new, alternative approach to distortion feature measurement. Although in its infancy as a measurement technique within timber research, the practicalities of using LiDAR scanning as a measurement method are demonstrated here, exploiting many of the advantages the technology has over current approaches. LiDAR scanning creates a much more comprehensive image of a timber surface, generating input data several orders of magnitude larger than that of the FRITS frame. Set-up and scanning time for LiDAR is also much shorter, and the process is more flexible than existing methods. With LiDAR scanning the measurement process is freed from many of the constraints of the FRITS frame and can be carried out in almost any environment. For this thesis, surface scans were carried out on seven Sitka spruce samples of dimensions 48.5 x 102 x 3000 mm using both the FRITS frame and the LiDAR scanner. The samples used presented marked levels of distortion and were relatively free from knots. A computational measurement model was created to extract feature measurements from the raw LiDAR data, enabling an assessment of each piece of timber to be carried out in accordance with existing standards. Assessment of distortion features focused primarily on the measurement of twist, due to its strong prevalence in spruce and the considerable concern it generates within the construction industry. Additional measurements of surface inclination and bow were also made with each method to further establish LiDAR's credentials as a viable alternative. Overall, feature measurements generated by the new LiDAR method compared well with those of the established FRITS method. From these investigations, recommendations were made to address inadequacies within existing measurement standards, namely their reliance on generalised and interpretative descriptions of distortion. The potential for further uses of LiDAR scanning within timber research is also discussed.
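As a simplified geometric illustration of extracting a twist measurement from a LiDAR point cloud (not the thesis's measurement model nor the definition used in the grading standards), the Python sketch below fits the cross-width surface inclination near each end of the board and reports the change in inclination angle; the coordinate convention and window size are assumptions.

import numpy as np

def cross_width_slope(points, x_centre, window=50.0):
    # Fit the surface inclination across the board width (y) using the points
    # within `window` mm of a given lengthwise position x.
    sel = np.abs(points[:, 0] - x_centre) < window
    y, z = points[sel, 1], points[sel, 2]
    slope, _ = np.polyfit(y, z, 1)           # dz/dy of the best-fit line
    return slope

def twist_degrees(points, length=3000.0):
    # Twist of a board: change in cross-width inclination between the two
    # ends, expressed as an angle.  `points` is an (N, 3) array of LiDAR
    # returns in board coordinates (x along the length, units in mm).
    s0 = cross_width_slope(points, x_centre=0.0)
    s1 = cross_width_slope(points, x_centre=length)
    return np.degrees(np.arctan(s1) - np.arctan(s0))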

Relevance:

30.00%

Publisher:

Abstract:

The research described in this thesis was motivated by the need for a robust model capable of representing 3D data obtained with 3D sensors, which are inherently noisy. In addition, time constraints have to be considered, as these sensors are capable of providing a 3D data stream in real time. This thesis proposes the use of Self-Organizing Maps (SOMs) as a 3D representation model. In particular, we propose the use of the Growing Neural Gas (GNG) network, which has been successfully used for clustering, pattern recognition and topology representation of multi-dimensional data. Until now, Self-Organizing Maps have primarily been computed offline, and their application to 3D data has mainly focused on noise-free models, without considering time constraints. A hardware implementation leveraging the computing power of modern GPUs is proposed, taking advantage of the paradigm coined General-Purpose Computing on Graphics Processing Units (GPGPU). The proposed methods were applied to different problems and applications in the area of computer vision, such as the recognition and localization of objects, visual surveillance, and 3D reconstruction.
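A compact CPU-only sketch of the Growing Neural Gas algorithm applied to a noisy 3D point cloud is given below, following Fritzke's update rules (winner and neighbour adaptation, edge ageing, periodic node insertion). The parameter values are illustrative, and the GPGPU parallelisation proposed in the thesis, which mainly accelerates the nearest-unit search, is not shown.

import numpy as np

rng = np.random.default_rng(0)

def gng_fit(data, max_nodes=100, eps_b=0.05, eps_n=0.006, age_max=50,
            insert_every=100, alpha=0.5, decay=0.995, n_passes=3):
    # Start with two units, then repeatedly move the winning unit and its
    # topological neighbours toward each sample, age edges, and periodically
    # insert a new unit where accumulated error is largest.
    nodes = [data[rng.integers(len(data))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}          # (i, j) with i < j -> age
    step = 0
    for _ in range(n_passes):
        for x in data[rng.permutation(len(data))]:
            step += 1
            dist = [float(np.sum((x - w) ** 2)) for w in nodes]
            order = np.argsort(dist)
            s1, s2 = int(order[0]), int(order[1])
            error[s1] += dist[s1]
            nodes[s1] += eps_b * (x - nodes[s1])
            for (i, j) in list(edges):
                if s1 in (i, j):
                    edges[(i, j)] += 1                    # age edges at the winner
                    other = j if i == s1 else i
                    nodes[other] += eps_n * (x - nodes[other])
            edges[(min(s1, s2), max(s1, s2))] = 0         # refresh winner-runner edge
            edges = {e: a for e, a in edges.items() if a <= age_max}
            if step % insert_every == 0 and len(nodes) < max_nodes:
                q = int(np.argmax(error))                 # unit with largest error
                nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
                if nbrs:
                    f = max(nbrs, key=lambda n: error[n])
                    nodes.append(0.5 * (nodes[q] + nodes[f]))
                    r = len(nodes) - 1
                    error[q] *= alpha
                    error[f] *= alpha
                    error.append(error[q])
                    edges.pop((min(q, f), max(q, f)), None)
                    edges[(min(q, r), max(q, r))] = 0
                    edges[(min(f, r), max(f, r))] = 0
            error = [e * decay for e in error]
    return np.array(nodes), set(edges)

# Learn the topology of a noisy point cloud sampled from a unit sphere.
pts = rng.normal(size=(2000, 3))
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True) + rng.normal(0.0, 0.02, (2000, 3))
nodes, edges = gng_fit(pts)
print(len(nodes), "units,", len(edges), "edges")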

Relevance:

30.00%

Publisher:

Abstract:

We propose an adaptive mesh refinement strategy based on exploiting a combination of a pre-processing mesh re-distribution algorithm employing a harmonic mapping technique, and standard (isotropic) mesh subdivision for discontinuous Galerkin approximations of advection-diffusion problems. Numerical experiments indicate that the resulting adaptive strategy can efficiently reduce the computed discretization error by clustering the nodes in the computational mesh where the analytical solution undergoes rapid variation.
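As a very reduced illustration of the subdivision half of such a strategy (ignoring the harmonic-mapping redistribution step and the discontinuous Galerkin discretisation), the Python sketch below bisects the 1D elements whose slope-jump indicator is largest, which clusters nodes where a test function varies rapidly; the indicator, marking fraction and test function are assumptions.

import numpy as np

def refine_by_indicator(nodes, f, frac=0.3):
    # One sweep of indicator-driven isotropic refinement in 1D: use the jump
    # in slope across interior nodes as a crude smoothness indicator and
    # bisect the elements adjacent to the largest `frac` of the jumps.
    u = f(nodes)
    h = np.diff(nodes)
    slopes = np.diff(u) / h                  # one slope per element
    jumps = np.abs(np.diff(slopes))          # one jump per interior node
    threshold = np.quantile(jumps, 1.0 - frac)
    marked = set()
    for k, jump in enumerate(jumps):
        if jump >= threshold:
            marked.update((k, k + 1))        # the two elements sharing node k+1
    new_nodes = list(nodes)
    for e in marked:
        new_nodes.append(0.5 * (nodes[e] + nodes[e + 1]))
    return np.sort(np.array(new_nodes))

# Nodes cluster where the test solution varies rapidly (a sharp layer at x = 0.5).
mesh = np.linspace(0.0, 1.0, 21)
for _ in range(4):
    mesh = refine_by_indicator(mesh, lambda x: np.tanh(60.0 * (x - 0.5)))
print(len(mesh), "nodes after refinement")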

Relevance:

30.00%

Publisher:

Abstract:

The only method used to date to measure dissolved nitrate concentration (NITRATE) with sensors mounted on profiling floats is based on the absorption of light at ultraviolet wavelengths by the nitrate ion (Johnson and Coletti, 2002; Johnson et al., 2010; 2013; D'Ortenzio et al., 2012). Nitrate has a modest UV absorption band with a peak near 210 nm, which overlaps with the stronger absorption band of bromide, which has a peak near 200 nm. In addition, there is a much weaker absorption due to dissolved organic matter and light scattering by particles (Ogura and Hanya, 1966). The UV spectrum thus consists of three components: bromide, nitrate and a background due to organics and particles. The background also includes thermal effects on the instrument and slow drift. All of these latter effects (organics, particles, thermal effects and drift) tend to be smooth spectra that combine to form an absorption spectrum that is linear in wavelength over relatively short wavelength spans. If the light absorption spectrum is measured in the wavelength range around 217 to 240 nm (the exact range is somewhat at the operator's discretion), then the nitrate concentration can be determined. Two different instruments based on the same optical principles are in use for this purpose. The In Situ Ultraviolet Spectrophotometer (ISUS), built at MBARI or at Satlantic, has been mounted inside the pressure hull of Teledyne/Webb Research APEX and NKE Provor profiling floats, with the optics penetrating through the upper end cap into the water. The Satlantic Submersible Ultraviolet Nitrate Analyzer (SUNA) is placed on the outside of APEX, Provor, and Navis profiling floats in its own pressure housing and is connected to the float through an underwater cable that provides power and communications. Power, communications between the float controller and the sensor, and data processing requirements are essentially the same for both ISUS and SUNA. There are several possible algorithms that can be used for the deconvolution of nitrate concentration from the observed UV absorption spectrum (Johnson and Coletti, 2002; Arai et al., 2008; Sakamoto et al., 2009; Zielinski et al., 2011). In addition, the default algorithm available in Satlantic sensors is a proprietary approach, but this is not generally used on profiling floats. There are some tradeoffs in every approach. To date, almost all nitrate sensors on profiling floats have used the Temperature Compensated Salinity Subtracted (TCSS) algorithm developed by Sakamoto et al. (2009), and this document focuses on that method. It is likely that there will be further algorithm development, and it is necessary that the data systems clearly identify the algorithm that is used. It is also desirable that the data system allow for recalculation of prior data sets using new algorithms. To accomplish this, the float must report not just the computed nitrate, but also the observed light intensity. The rule for obtaining a single NITRATE parameter is therefore: if the spectrum is present, NITRATE should be recalculated from the spectrum; the computation of nitrate concentration can also generate useful diagnostics of data quality.
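A schematic sketch of the spectral deconvolution step is given below in Python: the (temperature-compensated) bromide/seawater absorption is subtracted and the residual spectrum over roughly 217 to 240 nm is fitted with the nitrate extinction plus a linear baseline. The extinction arrays are placeholders that would come from the sensor calibration, and the full TCSS algorithm of Sakamoto et al. (2009) parameterises the seawater term in temperature as well as salinity, which is not reproduced here.

import numpy as np

def fit_nitrate(wavelengths, absorbance, eps_no3, eps_sw, salinity):
    # Subtract the (temperature-compensated) seawater/bromide absorption
    # scaled by salinity, then fit the residual spectrum with the nitrate
    # extinction plus a linear baseline (intercept and slope in wavelength)
    # by least squares.  `eps_no3` and `eps_sw` are per-wavelength extinction
    # coefficients that would come from the sensor calibration file.
    residual = absorbance - salinity * eps_sw
    A = np.column_stack([eps_no3, np.ones_like(wavelengths), wavelengths])
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    nitrate, baseline_intercept, baseline_slope = coeffs
    return nitrate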

Relevance:

30.00%

Publisher:

Abstract:

This study investigated cow characteristics, farm facilities, and herd management strategies during the dry period to examine their joint influence on somatic cell counts (SCC) in early lactation. Data from 52 commercial dairy farms throughout England and Wales were collected over a 2-yr period. For the purpose of analysis, cows were separated into those housed for the dry period (6,419 cow-dry periods) and those at pasture (7,425 cow-dry periods). Bayesian multilevel models were specified with 2 response variables: ln SCC (continuous) and SCC >199,000 cells/mL (binary), both within 30 d of calving. Cow factors associated with an increased SCC after calving were parity, an SCC >199,000 cells/mL in the 60 d before drying off, increasing milk yield 0 to 30 d before drying off, and reduced DIM after calving at the time of SCC estimation. Herd management factors associated with an increased SCC after calving included procedures at drying off, aspects of bedding management, stocking density, and method of pasture grazing. Posterior predictions were used for model assessment, and these indicated that model fit was generally good. The research demonstrated that specific dry-period management strategies have an important influence on SCC in early lactation.
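To illustrate the hierarchical data structure (cow-dry periods nested within farms) and the kind of covariates listed above, a simplified sketch using a frequentist mixed model in Python is shown below; the study itself used Bayesian multilevel models, and the column names here are assumptions.

import pandas as pd
import statsmodels.formula.api as smf

def fit_scc_model(df: pd.DataFrame):
    # Two-level model of early-lactation ln SCC with a farm-level random
    # intercept; a simplified frequentist stand-in for the Bayesian multilevel
    # models used in the study.  Assumed columns of `df` (one row per cow-dry
    # period): ln_scc, parity, high_scc_pre (SCC > 199,000 cells/mL before
    # drying off), yield_pre, dim_at_test, farm.
    model = smf.mixedlm("ln_scc ~ parity + high_scc_pre + yield_pre + dim_at_test",
                        data=df, groups=df["farm"])
    return model.fit()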