993 results for Typical application


Relevance: 30.00%

Abstract:

Differential scanning calorimetry was used to investigate the effect of mixtures of glucose and fructose, and of five types of honey, on starch gelatinisation. At a 1:1 starch:water ratio, glucose generally increased the enthalpy (ΔH_gel) and temperatures (T_onset, T_peak and T_end) of gelatinisation more than fructose. Upon mixing, ΔH_gel of the low-temperature endotherm decreased in comparison to the sole sugars, but was fairly constant (7.7 ± 0.33 J/g dry starch). ΔH_gel of the high-temperature endotherm increased with the fructose content. For both endotherms, the gelatinisation temperatures were unchanged (CV ≤ 3%) for the mixtures. With the honeys (moisture, 14.9-18.0%; fructose, 37.2-44.0%; glucose, 28.3-31.9%) added at 1.1-4.4 g per g dry starch, the enthalpy and temperatures of gelatinisation did not vary significantly (CV ≤ 6%). Typical thermograms are presented, and the results are interpreted in the light of the various proposed mechanisms for starch gelatinisation in sugar-water systems, total sugar content and possible sugar-sugar interactions. The thermograms were broader in the presence of the sugars and honeys, and a biphasic character was consistently exhibited. The application of an exponential equation to the gelatinisation temperatures of the starch-honey mixtures revealed an opposing influence of fructose and glucose during gelatinisation. The mechanism of starch gelatinisation may be better understood if techniques could be perfected to quantify the breakage and formation of hydrogen bonds in the starch granules, and suggested techniques are discussed. © 2004 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Rotating disk voltammetry is routinely used to study electrochemically driven enzyme catalysis because of the assumption that the method produces a steady-state system. This assumption is based on the sigmoidal shape of the voltammograms. We have introduced an electrochemical adaptation of the King-Altman method to simulate voltammograms in which the enzyme catalysis, within an immobilized enzyme layer, is steady-state. This method is readily adaptable to any mechanism and provides a readily programmable means of obtaining closed-form analytical equations for a steady-state system. The steady-state simulations are compared to fully implicit finite difference (FIFD) simulations carried out without any steady-state assumptions. On the basis of our simulations, we conclude that, under typical experimental conditions, steady-state enzyme catalysis is unlikely to occur within electrode-immobilized enzyme layers and that typically sigmoidal rotating disk voltammograms merely reflect a mass-transfer steady state as opposed to a true steady state of enzyme intermediates at each potential.

Relevance: 30.00%

Abstract:

This article first summarizes available experimental results on the frictional behaviour of contact interfaces, briefly recalls typical frictional experiments and relationships applicable to rock mechanics, and then obtains a unified description of the entire frictional behaviour. The description is formulated from the experimental results and combined with a stick-slip decomposition algorithm to capture stick-slip instability phenomena; it reproduces the effects observed in rock experiments without using the so-called state variable, thus avoiding the related numerical difficulties. The formulation has been implemented in our finite element code, which uses the node-to-point contact element strategy proposed by the authors to handle frictional contact between multiple finite-deformation bodies with stick and finite frictional slip, and is applied here to simulate the frictional behaviour of rocks, demonstrating its usefulness and efficiency.

Relevance: 30.00%

Abstract:

The performance of Gallager's error-correcting code is investigated via methods of statistical physics. In this method, the transmitted codeword comprises products of the original message bits selected by two randomly constructed sparse matrices; the number of non-zero row/column elements in these matrices defines a family of codes. We show that Shannon's channel capacity is saturated for many of the codes, while slightly lower performance is obtained for others, which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding.

Relevance: 30.00%

Abstract:

The performance of "typical set (pairs) decoding" for ensembles of Gallager's linear code is investigated using statistical physics. In this decoding method, errors occur either when the information transmission is corrupted by atypical noise, or when multiple typical sequences satisfy the parity-check equation provided by the received corrupted codeword. We show that the average error rate for the second type of error over a given code ensemble can be accurately evaluated using the replica method, including its sensitivity to message length. Our approach generally improves on the existing analysis known in the information theory community, which was recently reintroduced in IEEE Trans. Inf. Theory 45, 399 (1999), and is believed to be the most accurate to date. © 2002 The American Physical Society.

Relevance: 30.00%

Abstract:

Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel noise models.

Relevance: 30.00%

Abstract:

The typical behavior of the relay-without-delay channel under low-density parity-check coding and its multiple-unit generalization, termed the relay array, is studied using methods of statistical mechanics. A demodulate-and-forward strategy is analytically solved using the replica symmetric ansatz, which is exact in the system studied at Nishimori's temperature. In particular, the typical level of improvement in communication performance by relaying messages is shown in the case of a small and a large number of relay units. © 2007 The American Physical Society.

Relevance: 30.00%

Abstract:

Using methods of statistical physics, we study the average number and kernel size of general sparse random matrices over GF(q), with a given connectivity profile, in the thermodynamical limit of large matrices. We introduce a mapping of GF(q) matrices onto spin systems using the representation of the cyclic group of order q as the q-th complex roots of unity. This representation facilitates the derivation of the average kernel size of random matrices using the replica approach, under the replica symmetric ansatz, resulting in saddle point equations for general connectivity distributions. Numerical solutions are then obtained for particular cases by population dynamics. Similar techniques also allow us to obtain an expression for the exact and average number of random matrices for any general connectivity profile. We present numerical results for particular distributions.
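A minimal sketch of the kernel-size computation, specialised to GF(2) for simplicity (the paper treats general GF(q) and the thermodynamic limit of large matrices): the kernel of an m × n matrix A over GF(q) has q^(n − rank A) elements, so it suffices to compute the rank, done here by Gaussian elimination on bit-packed rows. The matrix sizes and the fixed row connectivity are illustrative assumptions.

```python
# Sketch: kernel size of a random sparse matrix over GF(2).  The kernel of
# an m x n matrix A has 2**(n - rank(A)) elements, so we only need the rank.
import random

random.seed(1)

def gf2_rank(rows_as_ints):
    """Rank over GF(2); each row is packed into a Python int bitmask."""
    basis = {}                         # highest set bit -> stored basis row
    rank = 0
    for row in rows_as_ints:
        while row:
            top = row.bit_length() - 1
            if top not in basis:       # new pivot position found
                basis[top] = row
                rank += 1
                break
            row ^= basis[top]          # eliminate the leading bit and retry
    return rank

def random_sparse_rows(m, n, per_row):
    """m x n random matrix with `per_row` ones per row (fixed connectivity)."""
    rows = []
    for _ in range(m):
        cols = random.sample(range(n), per_row)
        rows.append(sum(1 << c for c in cols))
    return rows

m, n, per_row = 30, 40, 3              # illustrative sizes
rows = random_sparse_rows(m, n, per_row)
kernel_dim = n - gf2_rank(rows)        # dim of {x : Ax = 0 over GF(2)}
kernel_size = 2 ** kernel_dim
```

Averaging `kernel_dim` over many draws of the matrix is the finite-size analogue of the ensemble-averaged kernel size the replica calculation targets.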

Relevance: 30.00%

Abstract:

The process of astrogliosis, or reactive gliosis, is a typical response of astrocytes to a wide range of physical and chemical injuries. The up-regulation of the astrocyte-specific glial fibrillary acidic protein (GFAP) is a hallmark of reactive gliosis and is widely used as a marker to identify the response. In order to develop a reliable, sensitive and high-throughput astrocyte toxicity assay that is more relevant to the human response than existing animal-cell-based models, the U251-MG, U373-MG and CCF-STTG1 human astrocytoma cell lines were investigated for their ability to exhibit reactive-like changes following exposure to ethanol, chloroquine diphosphate, trimethyltin chloride and acrylamide. Cytotoxicity analysis showed that the astrocytic cells were generally more resistant to the cytotoxic effects of the agents than the SH-SY5Y neuroblastoma cells. Retinoic acid-induced differentiation of the SH-SY5Y line was also seen to confer some degree of resistance to toxicant exposure, particularly in the case of ethanol. Using a cell-based ELISA for GFAP together with concurrent assays for metabolic activity and cell number, each of the three cell lines responded to toxicant exposure by an increase in GFAP immunoreactivity (GFAP-IR) or by increased metabolic activity. Ethanol, chloroquine diphosphate, trimethyltin chloride and bacterial lipopolysaccharide all induced either GFAP or MTT increases depending upon the cell line, dose and exposure time. Preliminary investigations of additional aspects of astrocytic injury indicated that IL-6, but not TNF-α or nitric oxide, is released following exposure to each of the compounds, with the exception of acrylamide. It is clear that these human astrocytoma cell lines are capable of responding to toxicant exposure in a manner typical of reactive gliosis and are therefore a valuable cellular model for the assessment of in vitro neurotoxicity.

Relevance: 30.00%

Abstract:

The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.
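A minimal sketch of the data-reduction step behind an iso-lux diagram: mapping a two-dimensional array of illuminance samples onto contour bands. The grid values and contour levels below are illustrative stand-ins, not Lucas test data.

```python
# Sketch: reducing a 2-D array of illuminance readings to iso-lux bands,
# the data behind an iso-lux contour diagram.  Grid values and contour
# levels are illustrative assumptions only.
from bisect import bisect_right

def iso_lux_bands(grid, levels):
    """Map each illuminance sample to the index of its contour band.

    `levels` must be sorted ascending; a cell with value v falls in band
    bisect_right(levels, v), so band 0 lies below the first contour line.
    """
    return [[bisect_right(levels, v) for v in row] for row in grid]

# A toy 3x4 scan (lux) with a bright centre, standing in for the tens of
# thousands of points of a real head-lamp beam measurement.
grid = [
    [2.0,  5.0,  6.0,  2.5],
    [4.0, 25.0, 30.0,  5.0],
    [1.0,  8.0,  9.0,  1.5],
]
levels = [3.0, 10.0, 20.0]           # iso-lux contour values (assumed)

bands = iso_lux_bands(grid, levels)  # -> [[0,1,1,0], [1,3,3,1], [0,1,1,0]]
```

Tracing the boundaries between adjacent band indices then yields the illuminance contours of the iso-lux diagram.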

Relevance: 30.00%

Abstract:

In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate, if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software.
It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented, data-driven simulator which can be freely extended. Discussion and work are focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators whilst removing any limitation on its potential range of applications. Reference is made to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.

Relevance: 30.00%

Abstract:

The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.

Relevance: 30.00%

Abstract:

This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch-and-bound algorithm. The algorithm can be formulated in the large-system limit as a branching process, for which critical properties can be analysed. Far from the critical point a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered, and the optimal detection properties are examined in the typical case by use of the replica method and compared to the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models. The model includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
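The critical behaviour of the branching-process formulation can be illustrated with the textbook Galton-Watson result that the expected population after n generations is m^n, where m is the mean offspring number, so m = 1 marks the critical point separating certain extinction from possible unbounded growth. The offspring distributions below are illustrative assumptions, not taken from the thesis.

```python
# Sketch of the Galton-Watson branching-process picture: the expected
# population after n generations is m**n, with m the mean offspring number,
# so m = 1 is the critical point.  Distributions here are illustrative.
def expected_population(offspring_pmf, generations):
    """E[Z_n] = m**n for a Galton-Watson process with mean offspring m."""
    m = sum(k * p for k, p in offspring_pmf.items())
    return m ** generations

subcritical   = {0: 0.5, 1: 0.3, 2: 0.2}    # m = 0.7 -> dies out
critical      = {0: 0.25, 1: 0.5, 2: 0.25}  # m = 1.0 -> critical point
supercritical = {0: 0.2, 1: 0.3, 2: 0.5}    # m = 1.3 -> grows exponentially
```

For the branch-and-bound analysis the offspring number corresponds to the number of subproblems spawned per node, so the critical point separates typically polynomial from typically exponential search trees.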

Relevance: 30.00%

Abstract:

We introduce a general matrix formulation for multiuser channels and analyse the special cases of Multiple-Input Multiple-Output channels, channels with interference, and relay arrays under LDPC coding, using methods developed for the statistical mechanics of disordered systems. We use the replica method to provide results for the typical overlaps of the original and recovered messages and discuss their implications. The results obtained are consistent with belief propagation and density evolution results but also complement them, giving additional insights into the information dynamics of these channels, with unexpected effects in some cases.

Relevance: 30.00%

Abstract:

Adopting another’s visual perspective is exceedingly common and may underlie successful social interaction and empathizing with others. The individual differences responsible for success in perspective-taking, however, remain relatively undiscovered. We assessed whether gender and autistic personality traits in normal college-student adults predict the ability to adopt another’s visual perspective. In a task differentially recruiting VPT-1, which involves following another’s line of sight, and VPT-2, which involves determining how another may perceive an object differently given their unique perspective, we found effects of both gender and autistic personality traits. Specifically, we demonstrate slowed VPT-2 but not VPT-1 performance in males and females with relatively high ASD-characteristic personality traits; this effect, however, was markedly stronger in males than in females. The results contribute to knowledge regarding ASD-related personality traits in the general population and the individual differences modulating perspective-taking abilities.