778 results for labour process theory
Abstract:
We demonstrate how a prior assumption of smoothness can be used to enhance the reconstruction of free energy profiles from multiple umbrella sampling simulations using the Bayesian Gaussian process regression approach. The method we derive allows the concurrent use of histograms and free energy gradients and can easily be extended to include further data. In Part I we review the necessary theory and test the method for one collective variable. We demonstrate improved performance with respect to the weighted histogram analysis method and obtain meaningful error bars without any significant additional computation. In Part II we consider the case of multiple collective variables and compare to a reconstruction using least squares fitting of radial basis functions. We find substantial improvements in the regimes of spatially sparse data or short sampling trajectories. A software implementation is made available on www.libatoms.org.
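The core of the approach above is standard Gaussian process regression with a smooth covariance function. A minimal sketch, assuming a toy one-dimensional free-energy profile and a squared-exponential kernel (the actual method also folds in free-energy gradients and histogram statistics, omitted here):

```python
import numpy as np

def rbf_kernel(xa, xb, sigma_f=1.0, ell=0.5):
    # Squared-exponential kernel: encodes the prior assumption of smoothness.
    d = xa[:, None] - xb[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    # Standard GP regression: posterior mean plus pointwise error bars.
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy free-energy profile F(x) = x^2, sampled noisily at a few window centres.
rng = np.random.default_rng(0)
x_train = np.linspace(-1.0, 1.0, 8)
y_train = x_train**2 + 0.05 * rng.standard_normal(8)
x_test = np.linspace(-1.0, 1.0, 50)
mean, err = gp_posterior(x_train, y_train, x_test)
```

The error bars come out of the posterior covariance at no extra sampling cost, mirroring the "meaningful error bars without any significant additional computation" claim.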
Abstract:
An accurate description of atomic interactions, such as that provided by first principles quantum mechanics, is fundamental to realistic prediction of the properties that govern plasticity, fracture or crack propagation in metals. However, the computational complexity associated with modern schemes explicitly based on quantum mechanics limits their applications to systems of a few hundreds of atoms at most. This thesis investigates the application of the Gaussian Approximation Potential (GAP) scheme to atomistic modelling of tungsten - a bcc transition metal which exhibits a brittle-to-ductile transition and whose plasticity behaviour is controlled by the properties of $\frac{1}{2} \langle 111 \rangle$ screw dislocations. We apply Gaussian process regression to interpolate the quantum-mechanical (QM) potential energy surface from a set of points in atomic configuration space. Our training data is based on QM information that is computed directly using density functional theory (DFT). To perform the fitting, we represent atomic environments using a set of rotationally, permutationally and reflection invariant parameters which act as the independent variables in our equations of non-parametric, non-linear regression. We develop a protocol for generating GAP models capable of describing lattice defects in metals by building a series of interatomic potentials for tungsten. We then demonstrate that a GAP potential based on a Smooth Overlap of Atomic Positions (SOAP) covariance function provides a description of the $\frac{1}{2} \langle 111 \rangle$ screw dislocation that is in agreement with the DFT model. We use this potential to simulate the mobility of $\frac{1}{2} \langle 111 \rangle$ screw dislocations by computing the Peierls barrier and model dislocation-vacancy interactions to QM accuracy in a system containing more than 100,000 atoms.
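The invariant parameters mentioned above can be illustrated with a toy descriptor: the sorted list of pairwise distances in an atomic environment, which is unchanged under rotation, reflection, and permutation of atoms. This is a stand-in for illustration only, not the SOAP power spectrum used in the thesis:

```python
import itertools
import numpy as np

def invariant_descriptor(positions):
    # Toy descriptor: sorted pairwise distances are invariant to rotation,
    # reflection, and permutation, the properties required of the regression
    # inputs (a simplistic stand-in for SOAP).
    dists = [np.linalg.norm(a - b)
             for a, b in itertools.combinations(positions, 2)]
    return np.sort(dists)

env = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

# Rotate by 90 degrees about z and permute the atoms: descriptor is identical.
rot = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
env_rot = (rot @ env.T).T[[2, 0, 1]]

d1 = invariant_descriptor(env)
d2 = invariant_descriptor(env_rot)
```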
Abstract:
In practical situations, the causes of image blurring are often unknown or difficult to determine. Traditional methods, however, usually assume that the blur is known prior to the restoration process, which makes them impracticable for blind image restoration. The method proposed in this paper is aimed precisely at blind image restoration. The restoration process is transformed into a problem of point-distribution analysis in high-dimensional space. Experiments show that restoration can be achieved with this method without prior knowledge of the image blur. In addition, the algorithm is guaranteed to converge and is computationally simple.
Abstract:
Fourth-order spatial interference of entangled photon pairs generated by spontaneous parametric down-conversion pumped by a femtosecond pulsed laser has been performed for the first time. The theory takes into account the transverse correlation between the two photons and is used to calculate the visibility of the interference pattern obtained in Young's double-slit experiment. In the experiment, a short-focal-length lens and two narrow-band interference filters were adopted to eliminate the effects of the broadband pump laser and to improve the visibility of the interference pattern under the condition of nearly collinear and degenerate phase matching.
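The fringe visibility discussed above is defined as V = (I_max - I_min) / (I_max + I_min). A minimal sketch, with purely illustrative wavelength, slit-separation, and coherence values, recovering an assumed visibility from a sampled double-slit pattern:

```python
import numpy as np

# Double-slit fringe pattern with partial coherence; the fringe visibility is
# V = (I_max - I_min) / (I_max + I_min). All parameter values are illustrative.
x = np.linspace(-1e-3, 1e-3, 2001)           # detector position (m)
wavelength, slit_sep, distance = 800e-9, 1.0e-3, 1.0
V_true = 0.7                                  # assumed degree of coherence
phase = 2 * np.pi * slit_sep * x / (wavelength * distance)
intensity = 1.0 + V_true * np.cos(phase)

# Estimate visibility back from the sampled pattern.
V_est = (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())
```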
Abstract:
The dissociation of gas hydrate was treated as a gas-solid reaction without a solid product layer when the temperature is above zero degrees Celsius. Based on the shrinking-core model and fractal theory, a fractal-dimension dynamical model for gas hydrate dissociation in porous sediment was established. A new approach for evaluating the fractal dimension of the porous media was also presented. The fractal-dimension dynamical model was examined against previous experimental data on methane hydrate and carbon dioxide hydrate dissociation, respectively. The calculated results indicate that the fractal dimensions of the porous media obtained with this method agree well with previous studies. With an absolute average deviation (AAD) below 10%, the present model provides satisfactory predictions for the dissociation of methane hydrate and carbon dioxide hydrate.
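For orientation, the classical shrinking-core relation without a product layer can be sketched as below. Writing the exponent as a general dimension D is an illustrative assumption for this sketch, not the paper's exact fractal model; D = 3 recovers the classical Euclidean case:

```python
import numpy as np

def conversion(t, k, D=3.0):
    # Surface-reaction-controlled shrinking core without a product layer:
    # 1 - (1 - X)**(1/D) = k * t. D = 3 is the classical Euclidean case;
    # a non-integer D here is an illustrative stand-in for the fractal
    # generalization, not the paper's exact model.
    return 1.0 - np.maximum(1.0 - k * t, 0.0) ** D

t = np.linspace(0.0, 10.0, 101)
x_euclid = conversion(t, k=0.1, D=3.0)   # classical sphere
x_fractal = conversion(t, k=0.1, D=2.6)  # illustrative fractal dimension
```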
Abstract:
Bloch modes can be excited in a planar array owing to its periodic lateral refractive index. The power coupled into each eigenmode of the array waveguides is calculated through the overlap integrals of the input field with the eigenmode fields of the coupled infinite array waveguides projected onto the x-axis. Low losses can be obtained if the transition from the array to the free-propagation region is adiabatic. Owing to the finite resolution of the lithographic process, however, the gap between the waveguides terminates abruptly when the waveguides come too close together. Calculation results show that losses occur at this discontinuity; they depend on the ratio of the gap between the waveguides to the grating pitch and on the confinement of the field in the array waveguides. Tapered waveguides and a low index contrast between the core and cladding layers can lower the transmission losses.
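The power coupled into an eigenmode comes from an overlap integral of the input field with the mode field. A minimal one-dimensional sketch with illustrative Gaussian fields: a fundamental-like even mode receives all the power, while an odd mode receives none by symmetry:

```python
import numpy as np

def coupling_efficiency(field_in, field_mode, dx):
    # Fraction of input power coupled into one mode, via the overlap integral
    # |<E_in, E_mode>|^2 / (<E_in, E_in> <E_mode, E_mode>).
    overlap = np.sum(field_in * np.conj(field_mode)) * dx
    norm_in = np.sum(np.abs(field_in) ** 2) * dx
    norm_mode = np.sum(np.abs(field_mode) ** 2) * dx
    return np.abs(overlap) ** 2 / (norm_in * norm_mode)

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
gauss = np.exp(-x**2)        # illustrative input field
mode0 = np.exp(-x**2)        # even, fundamental-like mode: full overlap
mode1 = x * np.exp(-x**2)    # odd mode: zero overlap by symmetry

eta0 = coupling_efficiency(gauss, mode0, dx)
eta1 = coupling_efficiency(gauss, mode1, dx)
```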
Abstract:
The direct Coulomb ionization process is generally well described by the ECPSSR theory, which is based on the perturbed-stationary-state (PSS) approximation and accounts for energy-loss, Coulomb-deflection, and relativistic effects. However, ECPSSR calculations show significant deviations for heavy projectiles at low impact energies. In this paper we propose a new modified ECPSSR theory, MECUSAR, in which the PSS treatment is replaced by a united- and separated-atom model and the molecular-orbital effect is considered. The MECUSAR calculations agree better with the experimental data at lower impact energies, and agree with the ECPSSR calculations at high energies. Using the OBKN (Oppenheimer-Brinkman-Kramers formula of Nikolaev) theory to describe the contribution of electron capture, we further modified the proposed MECUSAR theory and calculated the target ionization cross sections for different charge states of the projectile.
Abstract:
The aim of this paper is to show that Dempster-Shafer evidence theory can be successfully applied to unsupervised classification in multisource remote sensing. The Dempster-Shafer formulation allows unions of classes to be considered, and represents both imprecision and uncertainty through the definition of belief and plausibility functions. These two functions, derived from mass functions, are generally chosen in a supervised way. In this paper, the authors describe an unsupervised method, based on the comparison of monosource classification results, to select the classes necessary for Dempster-Shafer evidence combination and to define their mass functions. Data fusion is then performed, discarding invalid clusters (e.g. those corresponding to conflicting information) thanks to an iterative process. The unsupervised multisource classification algorithm is applied to MAC-Europe'91 multisensor airborne campaign data collected over the Orgeval French site. Classification results using different combinations of sensors (TMS and AirSAR) or wavelengths (L- and C-bands) are compared. The performance of data fusion is evaluated in terms of identification of land cover types. The best results are obtained when all three data sets are used. Furthermore, some other combinations of data are tried, and their ability to discriminate between the different land cover types is quantified.
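The evidence-combination step can be sketched with Dempster's rule, which multiplies the masses of intersecting focal elements and renormalizes by the conflicting mass. The land-cover classes and mass values below are illustrative, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule of combination over mass functions keyed by frozenset
    # focal elements; renormalize by 1 - K, where K is the conflicting mass.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors assign mass to land-cover hypotheses, including a union of
# classes (illustrative values).
m_optical = {frozenset({"forest"}): 0.6, frozenset({"forest", "crop"}): 0.4}
m_radar = {frozenset({"forest"}): 0.5, frozenset({"crop"}): 0.3,
           frozenset({"forest", "crop"}): 0.2}
fused = dempster_combine(m_optical, m_radar)
```

Note that mass assigned to the union {forest, crop} directly expresses imprecision, which single-class probability assignments cannot.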
Abstract:
By incorporating self-consistent field theory into the lattice Boltzmann method, a model for polymer melts is proposed. Compared with models based on Ginzburg-Landau free energy, our model does not employ phenomenological free energies to describe the system and can account for the topological details of the polymer chains. We use this model to study the effects of hydrodynamic interactions on the dynamics of microphase separation for block copolymers. In the early stage of phase separation, the exponential growth predicted by the Cahn-Hilliard treatment is found. Simulation results also show that the effect of hydrodynamic interactions can be neglected in the early stage.
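The early-stage exponential growth mentioned above follows from the linearized Cahn-Hilliard equation, where a Fourier mode of wavenumber k grows at rate R(k) = -M k^2 (a + kappa k^2); for a < 0 (inside the spinodal), long-wavelength modes grow exponentially. A sketch with illustrative parameters, locating the fastest-growing mode:

```python
import numpy as np

# Linearized Cahn-Hilliard growth rate R(k) = -M * k**2 * (a + kappa * k**2).
# Parameter values are illustrative; a < 0 makes the mixture unstable.
M, a, kappa = 1.0, -1.0, 0.25

def growth_rate(k):
    return -M * k**2 * (a + kappa * k**2)

k = np.linspace(0.01, 2.5, 500)
R = growth_rate(k)
k_fastest = k[np.argmax(R)]   # analytically sqrt(-a / (2 * kappa))
```

Modes with R(k) > 0 grow as exp(R(k) t), which is the exponential regime the simulations reproduce; very short wavelengths (large k) are damped.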
Abstract:
Recently, a debate about the initial crystallization process, which had long ceased to be a hotspot once the theory proposed by Hoffman and Lauritzen (LH) came to dominate the field, has arisen again. The Hoffman-Lauritzen model has long been confronted by criticism; some of the points were taken up and led to modifications, but the foundation remained unchanged, namely the assumption that the system is uniform before nucleation and crystallization. In this article, the classical nucleation and growth theory of polymer crystallization is reviewed, and the confusion in the explanations of polymer crystallization phenomena is pointed out. LH theory assumes that lamellae grow by the direct attachment of chain sequences from the melt onto smooth lateral surfaces.
Abstract:
The effects of hydrodynamic interactions on the lamellar ordering process of two-dimensional quenched block copolymers in the presence of extended defects, and the evolution of topological defects during lamellar ordering, are numerically investigated by means of a model based on the lattice Boltzmann method and self-consistent field theory. By observing the evolution of the average domain size, it is found that domain growth is faster with stronger hydrodynamic effects. The morphological patterns formed also appear different. To study the defect evolution, a defect density is defined and used to explore how defects evolve during lamellar ordering. Our simulation results show that hydrodynamic effects can reduce the density of defects. With our model, the relations between the Flory-Huggins interaction parameter chi, the length of the polymer chains N, and the defect evolution are studied.
Abstract:
Handwriting production is viewed as a constrained modulation of an underlying oscillatory process. Coupled oscillations in horizontal and vertical directions produce letter forms, and when superimposed on a rightward constant-velocity horizontal sweep result in spatially separated letters. Modulation of the vertical oscillation is responsible for control of letter height, either through altering the frequency or altering the acceleration amplitude. Modulation of the horizontal oscillation is responsible for control of corner shape through altering phase or amplitude. The vertical velocity zero crossing in the velocity space diagram is important from the standpoint of control. Changing the horizontal velocity value at this zero crossing controls corner shape, and such changes can be effected through modifying the horizontal oscillation amplitude and phase. Changing the slope at this zero crossing controls writing slant; this slope depends on the horizontal and vertical velocity amplitudes and on the relative phase difference. Letter height modulation is also best applied at the vertical velocity zero crossing to preserve an even baseline. The corner shape and slant constraints completely determine the amplitude and phase relations between the two oscillations. Under these constraints, interletter separation is not an independent parameter. This theory applies generally to a number of acceleration oscillation patterns, such as sinusoidal, rectangular and trapezoidal oscillations. The oscillation theory also provides an explanation for how handwriting might degenerate with speed. An implementation of the theory in the context of the spring muscle model is developed. Here sinusoidal oscillations arise from a purely mechanical source: orthogonal antagonistic spring pairs generate particular cycloids depending on the initial conditions. Modulating between cycloids can be achieved by changing the spring zero settings at the appropriate times.
Frequency can be modulated either by shifting between coactivation and alternating activation of the antagonistic springs or by assuming springs with variable spring constants. An acceleration- and position-measuring apparatus was developed for measurements of human handwriting. The measurements are consistent with the oscillation theory. It is shown that the minimum-energy movement for the spring muscle is bang-coast-bang. For certain parameter values a singular arc solution can be shown to be minimizing. Experimental measurements, however, indicate that handwriting is not a minimum-energy movement.
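The basic picture of letter forms as coupled oscillations superimposed on a constant rightward sweep can be sketched numerically; the amplitudes, frequency, and phase below are illustrative, and the vertical-velocity zero crossings are the control points the theory singles out:

```python
import numpy as np

# Cycloid-like pen trajectory: a horizontal and a vertical oscillation plus a
# constant rightward sweep. All parameter values are illustrative.
t = np.linspace(0.0, 2.0, 2000)
sweep = 1.0                       # constant horizontal sweep velocity
A_x, A_y = 0.3, 1.0               # horizontal / vertical oscillation amplitudes
omega, phi = 4 * np.pi, np.pi / 2 # shared frequency; relative phase sets slant

x = sweep * t + A_x * np.sin(omega * t + phi)
y = A_y * np.sin(omega * t)

# Vertical-velocity zero crossings (letter tops and bottoms), where corner
# shape and letter-height modulation are best applied.
vy = A_y * omega * np.cos(omega * t)
zero_crossings = np.where(np.diff(np.sign(vy)) != 0)[0]
```

With four full vertical cycles over the interval, the trace passes eight such control points, one per letter extremum.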
Abstract:
This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on personal pronouns and noun phrases used with a definite article the, this or that. Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge.
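The focussing mechanism described above (an element in focus, an ordered collection of alternate foci, and a stack of old foci) can be sketched as a small data structure; the method names and example phrases are illustrative, not from the report's algorithm:

```python
class FocusState:
    # Minimal sketch of the focussing mechanism: an element in focus, an
    # ordered list of alternate foci, and a stack of old foci for returning
    # to earlier discourse topics.
    def __init__(self, initial_focus, alternates=()):
        self.focus = initial_focus
        self.alternates = list(alternates)   # ordered candidate foci
        self.old_foci = []                   # stack of previously focused elements

    def shift_to(self, new_focus):
        # The speaker moves attention: push the current focus and promote
        # the new element (removing it from the alternates if present).
        self.old_foci.append(self.focus)
        if new_focus in self.alternates:
            self.alternates.remove(new_focus)
        self.focus = new_focus

    def pop_focus(self):
        # The discourse returns to a previously focused element.
        self.focus = self.old_foci.pop()
        return self.focus

state = FocusState("the meeting", alternates=["the agenda", "the room"])
state.shift_to("the agenda")   # focus moves; "the meeting" is stacked
state.pop_focus()              # discourse pops back to "the meeting"
```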
Abstract:
This report describes the implementation of a theory of edge detection proposed by Marr and Hildreth (1979). According to this theory, the image is first processed independently through a set of different-size filters whose shape is the Laplacian of a Gaussian, $\nabla^2 G$. Zero-crossings in the output of these filters mark the positions of intensity changes at different resolutions. Information about these zero-crossings is then used to derive a full symbolic description of changes in intensity in the image, called the raw primal sketch. The theory is closely tied to early processing in the human visual system. In this report, we first examine the critical properties of the initial filters used in the edge detection process, from both a theoretical and a practical standpoint. The implementation is then used as a test bed for exploring aspects of the human visual system, in particular acuity and hyperacuity. Finally, we present some preliminary results concerning the relationship between zero-crossings detected at different resolutions, and some observations relevant to the process by which the human visual system integrates descriptions of intensity changes obtained at different resolutions.
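The filtering and zero-crossing steps can be sketched directly: sample the Laplacian-of-Gaussian filter, convolve it with an image, and mark sign changes in the output. The filter size, sigma, and test image below are illustrative:

```python
import numpy as np

def laplacian_of_gaussian(size, sigma):
    # Sample the (unnormalized) Laplacian-of-Gaussian filter on a square grid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    return (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))

def convolve2d(image, kernel):
    # Naive 'valid' 2-D convolution, sufficient for a demonstration.
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical step edge: zero-crossings of the LoG output mark its position.
image = np.zeros((32, 32))
image[:, 16:] = 1.0
response = convolve2d(image, laplacian_of_gaussian(9, 1.5))
sign_change = np.diff(np.sign(response), axis=1) != 0
```

Repeating this with several sigma values gives the multi-resolution zero-crossing maps that the raw primal sketch is built from.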