26 results for threshold voltage model

in Aston University Research Archive


Relevance:

90.00%

Publisher:

Abstract:

Particle breakage due to fluid flow through various geometries can have a major influence on the performance of particle/fluid processes and on the product quality characteristics of particle/fluid products. In this study, whey protein precipitate dispersions were used as a case study to investigate the effect of flow intensity and exposure time on the breakage of these precipitate particles. Computational fluid dynamic (CFD) simulations were performed to evaluate the turbulent eddy dissipation rate (TED) and associated exposure time along various flow geometries. The focus of this work is on the predictive modelling of particle breakage in particle/fluid systems. A number of breakage models were developed to relate TED and exposure time to particle breakage. The suitability of these breakage models was evaluated for their ability to predict the experimentally determined breakage of the whey protein precipitate particles. A "power-law threshold" breakage model was found to provide a satisfactory capability for predicting the breakage of the whey protein precipitate particles. The whey protein precipitate dispersions were propelled through a number of different geometries such as bends, tees and elbows, and the model accurately predicted the mean particle size attained after flow through these geometries. © 2005 Elsevier Ltd. All rights reserved.
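The abstract does not state the functional form of the "power-law threshold" model, so the sketch below is only one plausible reading, with parameter names and values that are our own illustrations: breakage is zero below a critical eddy dissipation rate and grows as a power law of the excess dissipation, weighted by exposure time.

```python
import numpy as np

def powerlaw_threshold_breakage(ted, exposure_time, eps_crit=1.0e4, k=1.0e-6, n=0.75):
    """Hypothetical 'power-law threshold' breakage rule: no breakage below a
    critical turbulent eddy dissipation rate (eps_crit, W/kg); above it, the
    broken fraction grows as a power law of the excess dissipation, weighted
    by exposure time. All parameter values are illustrative."""
    excess = np.maximum(ted - eps_crit, 0.0)
    return 1.0 - np.exp(-k * exposure_time * excess**n)   # fraction broken, in [0, 1)

# Below the threshold no breakage is predicted; above it, breakage grows
# with both flow intensity (TED) and exposure time.
print(powerlaw_threshold_breakage(5.0e3, 0.01))   # 0.0 (below threshold)
print(powerlaw_threshold_breakage(5.0e4, 0.01))   # small positive fraction
```

Fitting eps_crit, k and n to measured mean particle sizes after each geometry would reproduce the kind of predictive comparison the study describes.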

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a novel real-time power-device temperature estimation method that monitors the power MOSFET's junction temperature shift arising from thermal aging effects and incorporates the updated electrothermal models of power modules into digital controllers. Currently, the real-time estimator is emerging as an important tool for active control of device junction temperature as well as online health monitoring for power electronic systems, but its thermal model fails to address the device's ongoing degradation. Because of a mismatch of coefficients of thermal expansion between layers of power devices, repetitive thermal cycling will cause cracks, voids, and even delamination within the device components, particularly in the solder and thermal grease layers. Consequently, the thermal resistance of power devices will increase, making it possible to use thermal resistance (and junction temperature) as key indicators for condition monitoring and control purposes. In this paper, the predicted device temperature via threshold voltage measurements is compared with the real-time estimated ones, and the difference is attributed to the aging of the device. The thermal models in digital controllers are frequently updated to correct the shift caused by thermal aging effects. Experimental results on three power MOSFETs confirm that the proposed methodologies are effective to incorporate the thermal aging effects in the power-device temperature estimator with good accuracy. The developed adaptive technologies can be applied to other power devices such as IGBTs and SiC MOSFETs, and have significant economic implications.
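The threshold voltage is a standard temperature-sensitive electrical parameter (TSEP) for MOSFETs: it falls near-linearly with junction temperature, so a calibration fit can be inverted to read temperature from a threshold-voltage measurement. A minimal sketch of that step, using hypothetical calibration data (the abstract gives no coefficients or circuit details):

```python
def calibrate_tsep(vth_samples, temp_samples):
    """Least-squares linear fit Vth = a + b*Tj for a temperature-sensitive
    electrical parameter (TSEP) calibration. Coefficients are device-specific."""
    n = len(vth_samples)
    mt = sum(temp_samples) / n
    mv = sum(vth_samples) / n
    b = sum((t - mt) * (v - mv) for t, v in zip(temp_samples, vth_samples)) \
        / sum((t - mt) ** 2 for t in temp_samples)
    a = mv - b * mt
    return a, b

def junction_temp(vth, a, b):
    """Invert the calibration to estimate Tj from a measured Vth."""
    return (vth - a) / b

# Hypothetical calibration data: Vth falling by ~6 mV per degree C.
temps = [25.0, 50.0, 75.0, 100.0, 125.0]
vths  = [3.00, 2.85, 2.70, 2.55, 2.40]
a, b = calibrate_tsep(vths, temps)
tj = junction_temp(2.62, a, b)        # estimated junction temperature in degrees C
```

Comparing this Vth-derived temperature with the electrothermal model's real-time estimate, as the paper describes, exposes the aging-induced rise in thermal resistance.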

Relevance:

80.00%

Publisher:

Abstract:

In the wake of the global financial crisis, several macroeconomic contributions have highlighted the risks of excessive credit expansion. In particular, too much finance can have a negative impact on growth. We examine the microeconomic foundations of this argument, positing a non-monotonic relationship between leverage and firm-level productivity growth in the spirit of the trade-off theory of capital structure. A threshold regression model estimated on a sample of Central and Eastern European countries confirms that TFP growth increases with leverage until the latter reaches a critical threshold beyond which leverage lowers TFP growth. This estimate can provide guidance to firms and policy makers on identifying "excessive" leverage. We find similar non-monotonic relationships between leverage and proxies for firm value. Our results are a first step in bridging the gap between the literature on optimal capital structure and the wider macro literature on the finance-growth nexus. © 2012 Elsevier Ltd.
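The single-threshold specification can be illustrated with a grid-search threshold regression in the spirit of Hansen's estimator. This is a simplified sketch on synthetic data, with our own names throughout, not the authors' estimator; it recovers the turning point of a hump-shaped leverage/TFP-growth relation.

```python
import numpy as np

def fit_threshold_model(leverage, tfp_growth):
    """Grid-search a single-threshold regression: separate intercept and slope
    for observations below and above the candidate threshold tau; keep the tau
    that minimises the sum of squared residuals (simplified sketch)."""
    best = (None, np.inf)
    for tau in np.quantile(leverage, np.linspace(0.15, 0.85, 29)):
        lo = np.where(leverage <= tau, leverage, 0.0)
        hi = np.where(leverage > tau, leverage, 0.0)
        X = np.column_stack([(leverage <= tau).astype(float), lo,
                             (leverage > tau).astype(float), hi])
        beta, *_ = np.linalg.lstsq(X, tfp_growth, rcond=None)
        ssr = np.sum((tfp_growth - X @ beta) ** 2)
        if ssr < best[1]:
            best = ((tau, beta), ssr)
    return best[0]

# Synthetic firms: growth rises with leverage up to 0.5, then declines.
rng = np.random.default_rng(0)
lev = rng.uniform(0.0, 1.0, 500)
growth = np.where(lev <= 0.5, 0.1 * lev, 0.05 - 0.08 * (lev - 0.5)) \
         + rng.normal(0.0, 0.005, 500)
tau, beta = fit_threshold_model(lev, growth)   # estimated threshold, near 0.5
```

In the paper's setting, the estimated tau is what would serve as the guide to "excessive" leverage.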

Relevance:

40.00%

Publisher:

Abstract:

Multi-agent algorithms inspired by the division of labour in social insects are applied to a problem of distributed mail retrieval in which agents must visit mail producing cities and choose between mail types under certain constraints. The efficiency (i.e. the average amount of mail retrieved per time step) and the flexibility (i.e. the capability of the agents to react to changes in the environment) are investigated in both static and dynamic environments. New rules for mail selection and specialisation are introduced and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ a genetic algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation. From a more theoretical point of view, in order to avoid finite size effects, most results are obtained for large population sizes. However, we do analyse the influence of population size on the performance. Furthermore, we critically analyse the causes of efficiency loss, derive the exact dynamics of the model in the large system limit under certain conditions, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.

Relevance:

40.00%

Publisher:

Abstract:

A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. 
Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
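The pipeline described above (differentiate, half-wave rectify, differentiate twice more, then find peaks across space and scale) can be sketched directly. For a Gaussian-integral edge of blur b the rectifier is inert, and the edge-centre response at filter scale sigma is proportional to sigma^q / (sigma^2 + b^2)^(3/2); choosing q = 3/2 puts the peak over scale exactly at sigma = b. The sketch below uses that normalisation and a plain half-wave rectifier rather than the fitted smoothed-threshold function; all names and the scale grid are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def estimate_edge_blur(luminance, dx, scales):
    """Multi-scale edge-coding sketch: differentiate the luminance profile,
    half-wave rectify the 1st derivative, apply two further (Gaussian)
    differentiations at each scale, and locate the peak of the inverted,
    sigma^(3/2)-normalised 3rd derivative over space and scale."""
    grad = np.gradient(luminance, dx)
    grad = np.maximum(grad, 0.0)               # half-wave rectify 1st derivative
    best = (0.0, None, None)
    for sigma in scales:
        d3 = gaussian_filter1d(grad, sigma / dx, order=2) / dx**2
        resp = -d3 * sigma**1.5                # invert and scale-normalise
        if resp.max() > best[0]:
            best = (resp.max(), sigma, int(np.argmax(resp)))
    _, est_blur, peak_index = best
    return est_blur, peak_index                # edge scale (blur) and position

# Gaussian-integral edge of blur 4 pixels: the peak over scale should land
# at sigma ~ 4, at the edge centre (index 128).
x = np.arange(-128, 128, 1.0)
edge = 0.5 * (1.0 + erf(x / (4.0 * np.sqrt(2.0))))
blur, idx = estimate_edge_blur(edge, 1.0, np.arange(1.0, 9.0, 0.5))
```

Edge contrast could then be read off from the height of the winning peak, as in the model described above.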

Relevance:

40.00%

Publisher:

Abstract:

A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task allocation in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. The problem is constrained so that agents are penalised for switching mail types. When an agent processes a mail batch of a different type from the previous one, it must undergo a change-over, with repeated change-overs rendering the agent inactive. The efficiency (average amount of mail retrieved) and the flexibility (ability of the agents to react to changes in the environment) are investigated in both static and dynamic environments and with respect to sudden changes. New rules for mail selection and specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ an evolutionary algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation.

Relevance:

40.00%

Publisher:

Abstract:

A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without a priori knowledge of the available mail at the cities or inter-agent communication. In order to process a different mail type than the previous one, agents must undergo a change-over during which they remain inactive. We propose a threshold-based algorithm in order to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents, but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
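The abstract does not spell out the threshold rule, so the sketch below uses the classic response-threshold form from the division-of-labour literature: an agent accepts a mail type with probability s^2 / (s^2 + theta^2), where s is the stimulus and theta the agent's threshold for that type; processing a type lowers its threshold (specialisation) while idle types drift back up. All parameter values are illustrative.

```python
class MailAgent:
    """Minimal response-threshold agent: acceptance probability rises with the
    stimulus for a mail type and falls with the agent's threshold for it.
    Working on a type lowers that threshold; the others drift upward."""
    def __init__(self, n_types, theta0=0.5, learn=0.1, forget=0.05):
        self.theta = [theta0] * n_types
        self.learn, self.forget = learn, forget

    def accept_prob(self, mail_type, stimulus):
        s, t = stimulus, self.theta[mail_type]
        return s * s / (s * s + t * t)

    def work_on(self, mail_type):
        for j in range(len(self.theta)):
            if j == mail_type:
                self.theta[j] = max(0.0, self.theta[j] - self.learn)
            else:
                self.theta[j] = min(1.0, self.theta[j] + self.forget)

agent = MailAgent(n_types=2)
for _ in range(5):
    agent.work_on(0)     # repeatedly handling type 0 specialises the agent
# theta for type 0 is now ~0, so the agent accepts type 0 almost surely,
# while its threshold for type 1 has risen to 0.75.
```

Adding a per-city preference table to such an agent is one way to realise the "memory" whose benefits the abstract reports.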

Relevance:

30.00%

Publisher:

Abstract:

A fundamental problem for any visual system with binocular overlap is the combination of information from the two eyes. Electrophysiology shows that binocular integration of luminance contrast occurs early in visual cortex, but a specific systems architecture has not been established for human vision. Here, we address this by performing binocular summation and monocular, binocular, and dichoptic masking experiments for horizontal 1 cycle per degree test and masking gratings. These data reject three previously published proposals, each of which predict too little binocular summation and insufficient dichoptic facilitation. However, a simple development of one of the rejected models (the twin summation model) and a completely new model (the two-stage model) provide very good fits to the data. Two features common to both models are gently accelerating (almost linear) contrast transduction prior to binocular summation and suppressive ocular interactions that contribute to contrast gain control. With all model parameters fixed, both models correctly predict (1) systematic variation in psychometric slopes, (2) dichoptic contrast matching, and (3) high levels of binocular summation for various levels of binocular pedestal contrast. A review of evidence from elsewhere leads us to favor the two-stage model. © 2006 ARVO.

Relevance:

30.00%

Publisher:

Abstract:

How does the brain combine spatio-temporal signals from the two eyes? We quantified binocular summation as the improvement in 2AFC contrast sensitivity for flickering gratings seen by two eyes compared with one. Binocular gratings in-phase showed sensitivity up to 1.8 times higher, suggesting nearly linear summation of contrasts. The binocular advantage decreased to 1.4 at lower spatial and higher temporal frequencies (0.25 cycle deg-1, 30 Hz). Dichoptic, antiphase gratings showed only a small binocular advantage, by a factor of 1.1 to 1.2, but no evidence of cancellation. We present a signal-processing model to account for the contrast-sensitivity functions and the pattern of binocular summation. It has linear sustained and transient temporal filters, nonlinear transduction, and half-wave rectification that creates ON and OFF channels. Binocular summation occurs separately within ON and OFF channels, thus explaining the phase-specific binocular advantage. The model also accounts for earlier findings on detection of brief antiphase flashes and the surprising finding that dichoptic antiphase flicker is seen as frequency-doubled (Cavonius et al, 1992 Ophthalmic and Physiological Optics 12 153 - 156). [Supported by EPSRC project GR/S74515/01].

Relevance:

30.00%

Publisher:

Abstract:

Contrast sensitivity is better with two eyes than one. The standard view is that thresholds are about 1.4 (√2) times better with two eyes, and that this arises from monocular responses that, near threshold, are proportional to the square of contrast, followed by binocular summation of the two monocular signals. However, estimates of the threshold ratio in the literature vary from about 1.2 to 1.9, and many early studies had methodological weaknesses. We collected extensive new data, and applied a general model of binocular summation to interpret the threshold ratio. We used horizontal gratings (0.25 - 4 cycles deg-1) flickering sinusoidally (1 - 16 Hz), presented to one or both eyes through frame-alternating ferroelectric goggles with negligible cross-talk, and used a 2AFC staircase method to estimate contrast thresholds and psychometric slopes. Four naive observers completed 20 000 trials each, and their mean threshold ratios were 1.63, 1.69, 1.71, 1.81 - grand mean 1.71 - well above the classical √2. Mean ratios tended to be slightly lower (~1.60) at low spatial or high temporal frequencies. We modelled contrast detection very simply by assuming a single binocular mechanism whose response is proportional to (L^m + R^m)^p, followed by fixed additive noise, where L, R are contrasts in the left and right eyes, and m, p are constants. Contrast-gain-control effects were assumed to be negligible near threshold. On this model the threshold ratio is 2^(1/m), implying that m = 1.3 on average, while the Weibull psychometric slope (median 3.28) equals 1.247mp, yielding p = 2.0. Together, the model and data suggest that, at low contrasts across a wide spatiotemporal frequency range, monocular pathways are nearly linear in their contrast response (m close to 1), while a strongly accelerating nonlinearity (p = 2, a 'soft threshold') occurs after binocular summation. [Supported by EPSRC project grant GR/S74515/01]
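The reported algebra checks out directly: with equal contrasts in both eyes the model response (L^m + R^m)^p gives a binocular:monocular threshold ratio of 2^(1/m), independent of p, and the Weibull slope is approximately 1.247mp.

```python
def threshold_ratio(m):
    """Binocular:monocular threshold ratio for response ∝ (L^m + R^m)^p:
    with L = R the ratio reduces to 2**(1/m), independent of p."""
    return 2.0 ** (1.0 / m)

def weibull_slope(m, p):
    """Approximate Weibull psychometric slope, beta ≈ 1.247*m*p."""
    return 1.247 * m * p

print(round(threshold_ratio(1.3), 2))     # ~1.70, close to the grand mean 1.71
print(round(weibull_slope(1.3, 2.0), 2))  # ~3.24, close to the median slope 3.28
```

With m = 1 (fully linear monocular pathways) the ratio would be 2.0, and with m = 2 (the classical squaring view) it would be √2 ≈ 1.41; the observed 1.71 sits between, consistent with m ≈ 1.3.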

Relevance:

30.00%

Publisher:

Abstract:

We propose a simple model that captures the salient properties of distribution networks, and study the possible occurrence of blackouts, i.e., sudden failings of large portions of such networks. The model is defined on a random graph of finite connectivity. The nodes of the graph represent hubs of the network, while the edges of the graph represent the links of the distribution network. Both the nodes and the edges carry dynamical two-state variables representing the functioning or dysfunctional state of the node or link in question. We describe a dynamical process in which the breakdown of a link or node is triggered when the level of maintenance it receives falls below a given threshold. When maintenance levels are themselves dependent on the functioning of the net, this form of dynamics can lead to catastrophic breakdown once maintenance levels locally fall below a critical threshold due to fluctuations. We formulate conditions under which such systems can be analyzed in terms of thermodynamic equilibrium techniques, and under these conditions derive a phase diagram characterizing the collective behavior of the system, given its model parameters. The phase diagram is confirmed qualitatively and quantitatively by simulations on explicit realizations of the graph, thus confirming the validity of our approach. © 2007 The American Physical Society.
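As an illustration only (the paper's analysis uses equilibrium statistical mechanics, not this toy rule), the qualitative behaviour can be seen in a minimal cascade on a sparse random graph, where a node stays functional only while the fraction of functional neighbours, standing in for the maintenance it receives, stays above a threshold:

```python
import random

def simulate_cascade(n, c, threshold, seed_failures, rng):
    """Toy maintenance-threshold cascade on an Erdos-Renyi graph of mean
    degree c: a node fails when the fraction of its functional neighbours
    drops below the threshold; iterate to a stable state and return the
    surviving fraction. Our illustration, not the paper's model."""
    p = c / (n - 1)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    up = [True] * n
    for i in rng.sample(range(n), seed_failures):
        up[i] = False
    changed = True
    while changed:                 # sweep until the failure pattern is stable
        changed = False
        for i in range(n):
            if up[i] and nbrs[i]:
                maint = sum(up[j] for j in nbrs[i]) / len(nbrs[i])
                if maint < threshold:
                    up[i] = False
                    changed = True
    return sum(up) / n

rng = random.Random(1)
survive_low = simulate_cascade(500, 6.0, 0.3, 10, rng)    # lenient threshold
survive_high = simulate_cascade(500, 6.0, 0.9, 10, rng)   # strict threshold
# With a lenient threshold the same ten seed failures stay contained;
# with a strict one they cascade through almost the whole network.
```

The two regimes mirror the phase-diagram picture described above: small fluctuations either heal locally or trigger a system-wide blackout.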

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the investigation of an adaptive method of attenuation control for digital speech signals in an analogue-digital environment and its effects on the transmission performance of a national telecommunication network. The first part gives the design of a digital automatic gain control, able to operate upon a P.C.M. signal in its companded form and whose operation is based upon the counting of peaks of the digital speech signal above certain threshold levels. A study was made of a digital automatic gain control (d.a.g.c.) in open-loop configuration and closed-loop configuration. The former was adopted as the means for carrying out the automatic control of attenuation. It was simulated and tested, both objectively and subjectively. The final part is the assessment of the effects on telephone connections of a d.a.g.c. that introduces gains of 6 dB or 12 dB. This work used a Telephone Connection Assessment Model developed at The University of Aston in Birmingham. The subjective tests showed that the d.a.g.c. gives advantage for listeners when the speech level is very low. The benefit is not great when speech is only a little quieter than preferred. The assessment showed that, when a standard British Telecom earphone is used, insertion of gain is desirable if speech voltage across the earphone terminals is below an upper limit of -38 dBV. People commented upon the presence of an adaptive-like effect during the tests. This could be the reason why they voted against the insertion of gain at levels only a little quieter than preferred, when they may otherwise have judged it to be desirable. A telephone connection with a d.a.g.c. included has a degree of difficulty less than half of that without it. The score Excellent plus Good is 10-30% greater.

Relevance:

30.00%

Publisher:

Abstract:

The electrical and optical characteristics of a cylindrical alumina insulator (94% Al2O3) have been measured under ultra-high vacuum (P < 10⁻⁸ mbar) conditions. A high-resolution CCD camera was used to make real-time optical recordings of DC prebreakdown luminescence from the ceramic, under conditions where DC current magnitudes were limited to less than 50 μA. Two concentric metallized rings formed a pair of co-axial electrodes on the end-face of the alumina tube; a third 'transparent' electrode was employed to study the effect of an orthogonal electric field upon the radial conduction processes within the metallized alumina specimen. The wavelength spectra of the emitted light were quantified using a high-speed scanning monochromator and photo-multiplier tube detector. Concurrent electrical measurements were made alongside the recording of optical-emission images. An observed time-dependence of the photon-emission is correlated with a time-variation observed in the DC current-voltage characteristics of the alumina. Optical images were also recorded of pulsed-field surface-flashover events on the alumina ceramic. An intensified high-speed video technique provided 1 ms frames of surface-flashover events, whilst 100 ns frames were achieved using an ultra high-speed fast-framing camera. By coupling this fast-frame camera to a digital storage oscilloscope, it was possible to establish a temporal correlation between the application of a voltage-pulse to the ceramic and the evolution of photonic emissions from the subsequent surface-flashover event. The electro-optical DC prebreakdown characteristics of the alumina are discussed in terms of solid-state photon-emission processes that are believed to arise from radiative electron-recombination at vacancy-defects and substitutional impurity centres within the surface-layers of the ceramic. 
The physical nature of vacancy-defects within an alumina dielectric is extensively explored, with a particular focus placed upon the trapped electron energy-levels that may be present at these defect centres. Finally, consideration is given to the practical application of alumina in the trigger-ceramic of a sealed triggered vacuum gap (TVG) switch. For this purpose, a physical model describing the initiation of electrical breakdown within the TVG regime is proposed, and is based upon the explosive destabilisation of trapped charge within the alumina ceramic, triggering the onset of surface-flashover along the insulator. In the main-gap prebreakdown phase, it is suggested that the electrical-breakdown of the TVG is initiated by the low-field 'stripping' of prebreakdown electrons from vacancy-defects in the ceramic under the influence of an orthogonal main-gap electric field.

Relevance:

30.00%

Publisher:

Abstract:

We report on a theoretical study of an interferometric system in which half of a collimated beam from a broadband optical source is intercepted by a glass slide, the whole beam subsequently being incident on a diffraction grating and the resulting spectrum being viewed using a linear CCD array. Using Fourier theory, we derive the expression of the intensity distribution across the CCD array. This expression is then examined for non-cavity and cavity sources for different cases determined by the direction from which the slide is inserted into the beam and the source bandwidth. The theoretical model shows that the narrower the source linewidth, the higher the deviation of the Talbot bands' visibility (as it is dependent on the path imbalance) from the previously known triangular shape. When the source is a laser diode below threshold, the structure of the CCD signal spectrum is very complex. The number of components present simultaneously increases with the number of grating lines and decreases with the laser cavity length. The model also predicts the appearance of bands in situations not usually associated with Talbot bands.