997 results for Error diffusion
Abstract:
Diffusion is the process that leads to the mixing of substances as a result of the spontaneous and random thermal motion of individual atoms and molecules. It was first observed by the Scottish botanist Robert Brown in 1827, and the phenomenon became known as ‘Brownian motion’. More specifically, the motion observed by Brown was translational diffusion: thermal motion producing random variations in the position of a molecule. This type of motion was given a correct theoretical interpretation in 1905 by Albert Einstein, who derived the relationship between temperature, the viscosity of the medium, the size of the diffusing molecule, and its diffusion coefficient. It is translational diffusion that is indirectly observed in MR diffusion-tensor imaging (DTI). The relationship obtained by Einstein provides the physical basis for using translational diffusion to probe the microscopic environment surrounding the molecule.
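For context, the relation Einstein derived in 1905 is the Stokes–Einstein relation; the spherical-particle form below is the standard textbook statement, quoted here for reference rather than reproduced from this abstract:

```latex
% Stokes-Einstein relation: diffusion coefficient D of a spherical
% particle of radius r in a medium of viscosity eta at temperature T,
% together with Einstein's mean-squared-displacement law.
\[
  D = \frac{k_B T}{6 \pi \eta r},
  \qquad
  \langle x^2 \rangle = 2 D t \; \text{(per spatial dimension)} .
\]
```

This is what makes translational diffusion a microstructural probe: at fixed temperature and viscosity, the measured D reports on the effective size of the diffusing species and on obstacles in its surroundings.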
Abstract:
Manufacturing organisations are investing increasingly in Business Process Improvement initiatives to remain competitive in a growing global market. This paper presents a Rapid Improvement Workshop (RIW) framework that companies can use to identify the critical factors regulating the diffusion of business process improvement within their organisation. The framework can then be used to address how process improvement can be efficiently implemented. We use the results from case studies at Caterpillar India. The paper identifies the critical factors that contribute to the successful implementation of process improvement programs in manufacturing organisations. We further identify certain technological and cultural barriers to the implementation of process improvement programs and show how Indian manufacturing companies can overcome these barriers to attain competitive advantage in global markets.
Abstract:
In many durable-goods categories, such as TVs, PCs, and DVD players, the largest component of sales is generated by consumers replacing existing units. Aggregate sales models proposed by diffusion-of-innovation researchers for the replacement component of sales have incorporated several different replacement distributions, such as the Rayleigh, Weibull, truncated normal, and gamma. Although these alternative replacement distributions have been tested using both time-series sales data and individual-level actuarial “life-tables” of replacement ages, there is no consensus on which distributions are more appropriate for modelling replacement behaviour. In the current study we are motivated to develop a new “modified gamma” distribution for two reasons. First, we recognise that replacements have two fundamentally different drivers: those forced by failure, and early, discretionary replacements. The replacement distribution for each of these drivers is expected to be quite different. Second, we observed a poor fit of other distributions to our empirical data. We conducted a survey of 8,077 households to empirically examine models of replacement sales for six electronic consumer durables: TVs, VCRs, DVD players, digital cameras, and personal and notebook computers. These data allow us to construct individual-level “life-tables” for replacement ages. We demonstrate that the new modified gamma model fits the empirical data better than existing models for all six products, using both a primary and a hold-out sample.
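The abstract does not give the functional form of the “modified gamma” distribution, but the two-driver idea it describes can be sketched as a mixture of two components, one for early discretionary replacements and one for failure-forced ones. The following is a minimal illustration in which both components are ordinary gamma distributions and all parameter values are invented; it is not the authors' model:

```python
# Two-driver replacement-age sketch: a mixture of an early "discretionary"
# gamma component and a later "failure" gamma component, fitted by maximum
# likelihood. All shapes, scales, and sample sizes are illustrative.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic replacement ages (years).
ages = np.concatenate([
    rng.gamma(shape=2.0, scale=1.5, size=300),   # discretionary, early
    rng.gamma(shape=9.0, scale=1.0, size=700),   # forced by failure, later
])

def neg_log_lik(params):
    """Negative log-likelihood of the two-component gamma mixture."""
    k1, s1, k2, s2, logit_w = params
    if min(k1, s1, k2, s2) <= 0:
        return np.inf
    w = 1.0 / (1.0 + np.exp(-logit_w))           # mixing weight in (0, 1)
    pdf = (w * stats.gamma.pdf(ages, a=k1, scale=s1)
           + (1 - w) * stats.gamma.pdf(ages, a=k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

fit = minimize(neg_log_lik, x0=[1.5, 2.0, 8.0, 1.2, 0.0],
               method="Nelder-Mead", options={"maxiter": 20000})
print("fitted (k1, s1, k2, s2, logit_w):", np.round(fit.x, 3))
```

A fit of this kind can then be compared against single-distribution alternatives (Rayleigh, Weibull, truncated normal, plain gamma) on the same life-table data using a hold-out sample, as the abstract describes.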
Abstract:
As order dependencies between process tasks can become complex, it is easy to make mistakes in process model design, especially behavioral ones such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with correcting the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. Via this technique, a number of process model alternatives are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that response times are short.
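The abstract names simulated annealing but none of its ingredients; the skeleton below shows the generic search loop under two assumed placeholders, a `count_errors` function (e.g. a soundness checker counting behavioral errors) and a `random_edit` function applying one random change to the model. Neither placeholder comes from the paper:

```python
# Generic simulated-annealing skeleton for process-model repair.
# `count_errors` and `random_edit` are hypothetical stand-ins; the
# authors' actual cost function and edit operators are not described
# in the abstract.
import math
import random

def anneal(model, count_errors, random_edit,
           t_start=10.0, t_end=0.01, cooling=0.95, steps_per_temp=50):
    """Search for an edited model with fewer behavioral errors."""
    current, current_cost = model, count_errors(model)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end and best_cost > 0:
        for _ in range(steps_per_temp):
            candidate = random_edit(current)       # one random model edit
            cost = count_errors(candidate)
            # Always accept improvements; accept worse candidates with a
            # probability that shrinks as the temperature drops, which
            # lets the search escape local minima early on.
            if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
                current, current_cost = candidate, cost
                if cost < best_cost:
                    best, best_cost = candidate, cost
        t *= cooling                               # geometric cooling schedule
    return best, best_cost
```

Collecting every distinct zero-cost candidate encountered, rather than only the best one, would yield the "number of process model alternatives" the abstract mentions.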
Abstract:
Gay community media functions as a system with three nodes, in which the flows of information and capital theoretically benefit all parties: the gay community gains a sense of cohesion and citizenship through media; the gay media outlets profit from advertisers’ capital; and advertisers recoup their investments in lucrative ‘pink dollar’ revenue. But if a necessary corollary of all communication systems is error or noise, where, and what, are the errors in this system? In this paper we argue that the ‘error’ in the gay media system is Queerness, and that the gay media system ejects (in a process of Kristevan abjection) these Queer identities in order to function successfully. We examine the ways in which Queer identities are excluded from representation in such media through a discourse and content analysis of The Sydney Star Observer (Australia’s largest gay and lesbian paper). First, we analyse the way Queer bodies are excluded from the discourses that construct and reinforce both the ideal gay male body and the notions of homosexual essence required for that body to be meaningful. We then argue that abject Queerness returns in the SSO’s discourses of public health through the conspicuous absence of the AIDS-afflicted body (which we read as the epitome of the abject Queer), since this absence paradoxically conjures up a trace of that which the system tries to expel. We conclude by arguing that because the ‘Queer error’ is integral to the SSO, gay community media should practise a politics of Queer inclusion rather than exclusion.
Abstract:
The electron collection efficiency in dye-sensitized solar cells (DSCs) is usually related to the electron diffusion length, L = (Dτ)^{1/2}, where D is the diffusion coefficient of mobile electrons and τ is their lifetime, which is determined by electron transfer to the redox electrolyte. Analysis of incident photon-to-current efficiency (IPCE) spectra for front and rear illumination consistently gives smaller values of L than those derived from small-amplitude methods. We show that the IPCE analysis is incorrect if recombination is not first-order in the free electron concentration, and we demonstrate that the intensity dependence of the apparent L derived by first-order analysis of IPCE measurements and the voltage dependence of L derived from perturbation experiments can be fitted using the same reaction order, γ ≈ 0.8. The new analysis presented in this letter resolves the controversy over why L values derived from small-amplitude methods are larger than those obtained from IPCE data.
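To see why the reaction order matters, note that if the recombination flux scales as a power of the free electron density, the small-perturbation lifetime, and with it the measured diffusion length, becomes density-dependent. The notation below is a standard convention assumed for illustration, not reproduced from the letter:

```latex
% Recombination that is not first-order in the free electron density n:
\[
  U(n) = k\, n^{\gamma},
  \qquad
  \tau(n) = \left( \frac{\partial U}{\partial n} \right)^{-1}
          = \frac{1}{\gamma k\, n^{\gamma - 1}},
\]
% so the small-perturbation diffusion length inherits a dependence on the
% steady-state density, i.e. on light intensity or applied voltage:
\[
  L(n) = \sqrt{D\, \tau(n)} \propto n^{(1-\gamma)/2}.
\]
```

For γ = 1 the density dependence vanishes and the two kinds of measurement should agree; for γ ≈ 0.8, L acquires a weak intensity and voltage dependence, consistent with fitting both data sets with a single reaction order.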
Abstract:
In the exclusion-process literature, mean-field models are often derived by assuming that the occupancy statuses of lattice sites are independent. Although this assumption is questionable, it is the foundation of many mean-field models. In this work we develop methods to relax the independence assumption for a range of discrete exclusion-process-based mechanisms motivated by applications from cell biology. Previous investigations that focussed on relaxing the independence assumption were limited to studying initially uniform populations and ignored any spatial variations; by ignoring spatial variations, those studies were greatly simplified by the translational invariance of the lattice. Such corrected mean-field models could not be applied to many important problems in cell biology, such as invasion waves of cells, which are characterised by moving fronts. Here we propose generalised methods that relax the independence assumption for spatially inhomogeneous problems, leading to corrected mean-field descriptions of a range of exclusion-process-based models that incorporate (i) unbiased motility, (ii) biased motility, and (iii) unbiased motility with agent birth and death processes. The corrected mean-field models derived here are applicable to spatially variable processes, including invasion-wave-type problems. We show that there can be large deviations between simulation data and traditional mean-field models that invoke the independence assumption. Furthermore, we show that the corrected mean-field models give an improved match to the simulation data in all cases considered.
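For the unbiased-motility case, the uncorrected (independence-assumption) mean-field limit of such models is the linear diffusion equation, so the deviations mentioned above can be seen by averaging stochastic realisations and comparing against that description. A minimal sketch of the stochastic side, with lattice size, initial condition, and realisation count invented for illustration:

```python
# Minimal 1D exclusion process with unbiased motility (random sequential
# update). Each agent attempts a unit hop left or right with equal
# probability; hops into occupied sites are aborted (exclusion). The
# averaged occupancy profile is what mean-field models try to predict.
import numpy as np

rng = np.random.default_rng(1)
L, steps, reals = 200, 400, 100        # lattice size, sweeps, realisations
profile = np.zeros(L)

for _ in range(reals):
    occ = np.zeros(L, dtype=bool)
    occ[80:120] = True                 # initially occupied block (a front on each side)
    for _ in range(steps):
        agents = np.flatnonzero(occ)
        rng.shuffle(agents)            # random sequential update order
        for site in agents:
            target = site + rng.choice((-1, 1))
            if 0 <= target < L and not occ[target]:
                occ[site], occ[target] = False, True
    profile += occ

profile /= reals                       # averaged occupancy <C_j>
print("peak averaged occupancy:", profile.max())
```

The corrected models in the paper go beyond the naive description by tracking correlations between neighbouring lattice sites instead of assuming the averaged occupancies are independent.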
Abstract:
Despite the benefits of new technologies, safety planners still face difficulties in explaining the errors related to the use of different technologies and in evaluating how these errors affect the performance of safety decision making. This paper presents a preliminary error impact analysis testbed to model the object identification and tracking errors caused by image-based devices and algorithms, and to analyze the impact of these errors on the spatial safety assessment of earthmoving and surface mining activities. More specifically, this research designed a testbed to model workspaces for earthmoving operations, to simulate safety-related violations, and to apply different object identification and tracking errors to the data collected and processed for spatial safety assessment. Three different cases were analyzed based on actual earthmoving operations conducted at a limestone quarry. Using the testbed, the impacts of the errors were investigated for safety planning purposes.
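The abstract does not specify the error models used; one simple way such a testbed can emulate identification and tracking errors is to perturb ground-truth trajectories with localisation noise and random missed detections, then re-run a proximity-based safety check. The sketch below does exactly that, with all trajectories, noise levels, and thresholds invented:

```python
# Inject synthetic tracking errors into ground-truth positions and
# compare proximity-violation counts before and after degradation.
# The error model (Gaussian localisation noise plus missed detections)
# and the 5 m threshold are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(2)
SAFE_DIST = 5.0                                  # metres (illustrative)

# Ground truth: a worker and an excavator crossing paths over 100 frames.
t = np.arange(100)
worker = np.column_stack([0.3 * t, np.full(100, 10.0)])
excavator = np.column_stack([30.0 - 0.2 * t, np.full(100, 8.0)])

def violations(a, b, detected):
    """Count frames flagged as too close, among frames with a detection."""
    d = np.linalg.norm(a - b, axis=1)
    return int(np.sum((d < SAFE_DIST) & detected))

all_frames = np.ones(len(t), dtype=bool)
true_v = violations(worker, excavator, all_frames)

# Degraded data: 1 m localisation noise and 10% missed detections.
noisy_worker = worker + rng.normal(0.0, 1.0, worker.shape)
detected = rng.random(len(t)) > 0.10
observed_v = violations(noisy_worker, excavator, detected)

print(f"violations in ground truth: {true_v}, with injected errors: {observed_v}")
```

Comparing the two counts across many error settings gives the kind of error-impact curve a safety planner could use to judge whether a given sensing setup is reliable enough for spatial safety assessment.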
Abstract:
An existing model for solvent penetration and drug release from a spherically shaped polymeric drug delivery device is revisited. The model has two moving boundaries: one that describes the interface between the glassy and rubbery states of the polymer, and another that defines the interface between the polymer ball and the pool of solvent. The model is extended so that the nonlinear diffusion coefficient of the drug depends explicitly on the concentration of solvent, and the resulting equations are solved numerically using a front-fixing transformation together with a finite difference spatial discretisation and the method of lines. We present evidence that our scheme is much more accurate than a previous scheme. Asymptotic results in the small-time limit are presented, which show how the use of a kinetic law as a boundary condition on the innermost moving boundary dictates the qualitative behaviour, the scalings being very different from those of the similar moving boundary problem that arises from modelling the melting of an ice ball. The implication is that the model considered here exhibits what is referred to as "non-Fickian" or Case II diffusion which, together with the initially constant rate of drug release, has a certain appeal from a pharmaceutical perspective.
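The front-fixing transformation mentioned above can be made explicit. For a single moving boundary at r = s(t), mapping to the scaled coordinate ξ = r/s(t) freezes the domain to [0, 1] at the cost of an extra advective term; the spherically symmetric form below uses assumed notation rather than the paper's:

```latex
% Front-fixing: map the moving domain 0 <= r <= s(t) onto 0 <= xi <= 1
% via xi = r / s(t). Writing c(r, t) = u(xi, t), the chain rule gives
\[
  \left. \frac{\partial c}{\partial t} \right|_{r}
    = \frac{\partial u}{\partial t}
      - \frac{\xi\, \dot{s}(t)}{s(t)} \frac{\partial u}{\partial \xi},
  \qquad
  \frac{\partial c}{\partial r} = \frac{1}{s(t)} \frac{\partial u}{\partial \xi},
\]
% so a spherically symmetric nonlinear diffusion equation
% c_t = r^{-2} (r^2 D(c) c_r)_r becomes
\[
  \frac{\partial u}{\partial t}
    = \frac{\xi\, \dot{s}}{s} \frac{\partial u}{\partial \xi}
    + \frac{1}{s^{2} \xi^{2}} \frac{\partial}{\partial \xi}
      \!\left( \xi^{2} D(u)\, \frac{\partial u}{\partial \xi} \right),
\]
% which lives on a fixed grid in xi and can be advanced with the
% method of lines after a finite difference discretisation in xi.
```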
Abstract:
In this paper, we consider the variable-order Galilei invariant advection diffusion equation with a nonlinear source term. A numerical scheme with first-order temporal accuracy and second-order spatial accuracy is developed to simulate the equation, and the stability and convergence of the scheme are analyzed. In addition, another numerical scheme with improved temporal accuracy is developed. Finally, some numerical examples are given, and the results demonstrate the effectiveness of the theoretical analysis.
Keywords: variable-order Galilei invariant advection diffusion equation with a nonlinear source term; variable-order Riemann–Liouville fractional partial derivative; stability; convergence; numerical scheme with improved temporal accuracy
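For reference, one common convention for the variable-order Riemann–Liouville fractional derivative named in the keywords is given below; the paper's precise definition may differ in details such as the admissible range of the order function:

```latex
% Variable-order Riemann-Liouville fractional derivative (one common
% convention), for an order function 0 < alpha(x, t) < 1:
\[
  {}_{0}D_{t}^{\alpha(x,t)} u(x,t)
    = \frac{1}{\Gamma\!\left( 1 - \alpha(x,t) \right)}
      \frac{\partial}{\partial t}
      \int_{0}^{t} \frac{u(x,s)}{(t - s)^{\alpha(x,t)}} \, ds .
\]
```

When α is constant this reduces to the classical Riemann–Liouville derivative, so discretisations of the variable-order equation are typically built from the same ideas with the order evaluated pointwise on the space-time grid.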