80 results for Space Extended Systems
Abstract:
Based on the Dempster-Shafer (D-S) theory of evidence and G. Yen's (1989) extension of the theory, the authors propose approaches to representing heuristic knowledge by evidential mapping and to pooling the mass distribution in a complex frame by partitioning that frame using Shafer's partition technique. The authors have generalized Yen's model from Bayesian probability theory to the D-S theory of evidence. Based on such a generalized model, an extended framework for evidential reasoning systems is briefly specified in which a semi-graph method is used to describe the heuristic knowledge. The advantage of such a method is that it can avoid the complexity of graphs without losing the explicitness of graphs. The extended framework can be widely used to build expert systems.
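As context for the mass pooling described above, Dempster's rule of combination can be sketched in a few lines. This is a generic illustration of the D-S machinery, not the authors' evidential-mapping or partitioning scheme, and the example frame is illustrative:

```python
# Minimal sketch of Dempster's rule of combination for two mass
# functions over a frame of discernment. Focal elements are frozensets
# of hypotheses; masses over each function sum to 1.

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict  # normalisation constant
    return {s: m / k for s, m in combined.items()}

# Example: two sources of evidence over the frame {rain, sun}
m1 = {frozenset({"rain"}): 0.6, frozenset({"rain", "sun"}): 0.4}
m2 = {frozenset({"sun"}): 0.3, frozenset({"rain", "sun"}): 0.7}
print(combine(m1, m2))
```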
Abstract:
Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space, where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties are a great advantage when implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
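A minimal illustration of the real-space grid idea, independent of the Octopus code itself: discretise the 1D Schrödinger Hamiltonian on a uniform grid with a three-point finite difference and diagonalise it for a harmonic potential (all numbers are illustrative):

```python
# Real-space discretisation of H = -1/2 d^2/dx^2 + V(x) (atomic units)
# on a uniform grid, solved by dense diagonalisation.
import numpy as np

n, L = 401, 20.0                     # grid points, box length
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]                      # grid spacing

# Kinetic energy: 3-point finite-difference Laplacian
T = (-0.5 / h**2) * (np.diag(np.ones(n - 1), -1)
                     - 2 * np.eye(n)
                     + np.diag(np.ones(n - 1), 1))
V = np.diag(0.5 * x**2)              # harmonic potential, exact E_n = n + 1/2

E, psi = np.linalg.eigh(T + V)
print(E[:3])                         # ~ [0.5, 1.5, 2.5]
```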
Abstract:
Coloured effluents from textile industries are a problem in many rivers and waterways. Prediction of adsorption capacities of dyes by adsorbents is important in design considerations. The sorption of three basic dyes, namely Basic Blue 3, Basic Yellow 21 and Basic Red 22, onto peat is reported. Equilibrium sorption isotherms have been measured for the three single component systems. Equilibrium was achieved after twenty-one days. The experimental isotherm data were analysed using Langmuir, Freundlich, Redlich-Peterson, Temkin and Toth isotherm equations. A detailed error analysis has been undertaken to investigate the effect of using different error criteria for the determination of the single component isotherm parameters and hence obtain the best isotherm and isotherm parameters which describe the adsorption process. The linear transform model provided the highest R² regression coefficient with the Redlich-Peterson model. The Redlich-Peterson model also yielded the best fit to experimental data for all three dyes using the non-linear error functions. An extended Langmuir model has been used to predict the isotherm data for the binary systems using the single component data. The correlation between theoretical and experimental data had only limited success due to competitive and interactive effects between the dyes and the dye-surface interactions.
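For reference, the standard forms of the main isotherm models named above are given below; the paper's exact parameterisations may differ. Here $q_e$ is the solid-phase dye concentration and $C_e$ the liquid-phase concentration at equilibrium:

```latex
\begin{align}
  q_e &= \frac{q_m K_L C_e}{1 + K_L C_e}
      && \text{(Langmuir)}\\
  q_e &= K_F C_e^{1/n}
      && \text{(Freundlich)}\\
  q_e &= \frac{K_R C_e}{1 + a_R C_e^{\beta}}
      && \text{(Redlich-Peterson)}\\
  q_{e,i} &= \frac{q_{m,i} K_{L,i} C_{e,i}}{1 + \sum_j K_{L,j} C_{e,j}}
      && \text{(extended Langmuir, binary systems)}
\end{align}
```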
Abstract:
The fundamental controls on the initiation and development of gravel-dominated deposits (beaches and barriers) on paraglacial coasts are particle size and shape, sediment supply, storm wave activity (primarily runup), relative sea-level (RSL) change, and terrestrial basement structure (primarily as it affects accommodation space). This paper examines the stochastic basis for barrier organisation as shown by variation in gravel barrier architecture. We recognise punctuated self-organisation of barrier development that is disrupted by short phases of barrier instability. The latter results from positive feedback causing barrier breakdown when sediment supply is exhausted. We examine published typologies for gravel barriers and advocate a consolidated perspective using rate of RSL change and sediment supply. We also consider the temporal variation in controls on barrier development. These are examined in terms of a simple behavioural model (BARCH) for prograding gravel barrier architecture and its sensitivity to such controls. The nature of macroscale (10²–10³ years) gravel barrier development, including inherited characteristics that influence barrier genesis, as well as forcing from changing RSL, sediment supply, headland control and barrier inertia, is examined in the context of long-surviving barriers along the southern England coastline.
Abstract:
This paper theoretically analyses the recently proposed "Extended Partial Least Squares" (EPLS) algorithm. After pointing out some conceptual deficiencies, a revised algorithm is introduced that covers the middle ground between Partial Least Squares and Principal Component Analysis. It maximises a covariance criterion between a cause and an effect variable set (partial least squares) and allows a complete reconstruction of the recorded data (principal component analysis). The new and conceptually simpler EPLS algorithm has been successfully applied in detecting and diagnosing various fault conditions where the original EPLS algorithm offered only fault detection.
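A minimal numpy sketch of the middle ground described above, not the revised EPLS algorithm itself: each component takes its weight vector from the dominant singular pair of the cross-covariance X'Y (the PLS flavour), while X is deflated by its own loadings so the recorded data can be fully reconstructed (the PCA flavour):

```python
import numpy as np

def epls_like(X, Y, n_components):
    """Covariance-maximising components with reconstructable X."""
    X, Y = X.copy(), Y.copy()
    scores, loadings = [], []
    for _ in range(n_components):
        # Weight = dominant left singular vector of X'Y, i.e. the
        # direction in X of maximal covariance with Y.
        U, _, _ = np.linalg.svd(X.T @ Y)
        w = U[:, 0]
        t = X @ w                             # score vector
        p = X.T @ t / (t @ t)                 # loading for reconstruction
        X -= np.outer(t, p)                   # PCA-style deflation of X
        Y -= np.outer(t, Y.T @ t / (t @ t))   # deflate Y as well
        scores.append(t)
        loadings.append(p)
    return np.array(scores).T, np.array(loadings).T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = X @ rng.normal(size=(5, 2))
T, P = epls_like(X, Y, 2)        # X is approximated by T @ P.T
```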
Abstract:
We establish a mapping between a continuous-variable (CV) quantum system and a discrete quantum system of arbitrary dimension. This opens up the general possibility to perform any quantum information task with a CV system as if it were a discrete system. The Einstein-Podolsky-Rosen state is mapped onto the maximally entangled state in any finite-dimensional Hilbert space and thus can be considered as a universal resource of entanglement. An explicit example of the map and a proposal for its experimental realization are discussed.
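For reference, the maximally entangled state referred to above takes the standard form below in a d-dimensional Hilbert space; the explicit map constructed in the paper may use a different convention:

```latex
\begin{equation}
  |\Phi_d\rangle
    = \frac{1}{\sqrt{d}} \sum_{n=0}^{d-1} |n\rangle \otimes |n\rangle ,
\end{equation}
% with the ideal EPR state recovered formally as d -> infinity.
```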
Abstract:
It is shown how the Debye rotational diffusion model of dielectric relaxation of polar molecules (which may be described in microscopic fashion as the diffusion limit of a discrete time random walk on the surface of the unit sphere) may be extended to yield the empirical Havriliak-Negami (HN) equation of anomalous dielectric relaxation from a microscopic model based on a kinetic equation just as in the Debye model. This kinetic equation is obtained by means of a generalization of the noninertial Fokker-Planck equation of conventional Brownian motion (generally known as the Smoluchowski equation) to fractional kinetics governed by the HN relaxation mechanism. For the simple case of noninteracting dipoles it may be solved by Fourier transform techniques to yield the Green function and the complex dielectric susceptibility corresponding to the HN anomalous relaxation mechanism.
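For reference, the empirical Havriliak-Negami form of the complex susceptibility mentioned above is (notation may vary from the paper's):

```latex
\begin{equation}
  \frac{\chi(\omega)}{\chi(0)}
    = \frac{1}{\left[\,1 + (i\omega\tau)^{\alpha}\,\right]^{\gamma}},
  \qquad 0 < \alpha \le 1, \quad 0 < \gamma \le 1,
\end{equation}
% recovering Debye relaxation for alpha = gamma = 1, Cole-Cole
% behaviour for gamma = 1, and Cole-Davidson behaviour for alpha = 1.
```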
Abstract:
We analyse a picture of transport in which two large but finite charged electrodes discharge across a nanoscale junction. We identify a functional whose minimization, within the space of all bound many-body wavefunctions, defines an instantaneous steady state. We also discuss factors that favour the onset of steady-state conduction in such systems, make a connection with the notion of entropy, and suggest a novel source of steady-state noise. Finally, we prove that the true many-body total current in this closed system is given exactly by the one-electron total current, obtained from time-dependent density-functional theory.
Abstract:
There is a perception that teaching space in universities is a rather scarce resource. However, some studies have revealed that in many institutions it is actually chronically under-used. Often, rooms are occupied only half the time, and even when in use they are often only half full. This is usually measured by the ‘utilization’, which is defined as the percentage of available ‘seat-hours’ that are employed. Within real institutions, studies have shown that this utilization can often take values as low as 20–40%. One consequence of such a low level of utilization is that space managers are under pressure to make more efficient use of the available teaching space. However, better management is hampered because there does not appear to be a good understanding within space management (near-term planning) of why this happens. This is accompanied, within space planning (long-term planning), by a lack of expertise in how best to accommodate the expected low utilizations. This motivates our two main goals: (i) To understand the factors that drive down utilizations, (ii) To set up methods to provide better space planning. Here, we provide quantitative evidence that constraints arising from timetabling and location requirements easily have the potential to explain the low utilizations seen in reality. Furthermore, on considering the decision question ‘Can this given set of courses all be allocated in the available teaching space?’ we find that the answer depends on the associated utilization in a way that exhibits threshold behaviour: There is a sharp division between regions in which the answer is ‘almost always yes’ and those of ‘almost always no’. Through analysis and understanding of the space of potential solutions, our work suggests that better use of space within universities will come about through an understanding of the effects of timetabling constraints and of when it is statistically likely that a given set of courses can be allocated to a particular space. The results presented here provide a firm foundation for university managers to take decisions on how space should be managed and planned for more effectively. Our multi-criteria approach and new methodology together provide new insight into the interaction between the course timetabling problem and the crucial issue of space planning.
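The seat-hours utilization measure defined above can be made concrete with illustrative numbers (not data from the study); rooms occupied half the time and only half full when in use give the low figures quoted:

```python
# Worked example of utilization = occupied seat-hours / available seat-hours.
rooms = [  # (seats, hours available per week) -- illustrative values
    (100, 40),
    (60, 40),
]
available = sum(seats * hours for seats, hours in rooms)   # 6400 seat-hours

# Rooms in use half the time, and only half full when in use:
occupied = sum(seats * 0.5 * hours * 0.5 for seats, hours in rooms)

print(f"utilization = {occupied / available:.0%}")          # 25%
```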
Abstract:
Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services for different users based on their quality requirements is an increasingly pressing issue. For this, routers need to be able to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their non-deterministic performance. Although CAMs are favoured by technology vendors due to their deterministic high lookup rates, they suffer from high power dissipation and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that combines CAMs with algorithms based on multi-level cutting of the classification space into smaller subspaces. The proposed solution exploits the geometrical distribution of rules in the classification space. It provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
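A hedged sketch of the geometric cutting idea, with a hypothetical structure rather than the paper's exact architecture: the rule space is cut recursively along alternating dimensions until each leaf holds few enough rules to fit a small CAM block:

```python
# Rules are lists of (lo, hi) ranges, one per dimension.
LEAF_SIZE = 2  # max rules per leaf (i.e. per small CAM block)

def build(rules, bounds, dim=0):
    """Recursively cut the classification space into subspaces."""
    if len(rules) <= LEAF_SIZE:
        return ("leaf", rules)            # searched by a small CAM
    lo, hi = bounds[dim]
    mid = (lo + hi) / 2
    children = []
    for h in [(lo, mid), (mid, hi)]:
        # keep rules whose range overlaps this half of the cut dimension
        sub = [r for r in rules if r[dim][0] < h[1] and r[dim][1] > h[0]]
        b = bounds[:dim] + [h] + bounds[dim + 1:]
        children.append(build(sub, b, (dim + 1) % len(bounds)))
    return ("node", dim, mid, children)

def classify(node, pkt):
    """Walk the cuts, then match linearly (CAM-style) in the leaf."""
    while node[0] == "node":
        _, dim, mid, kids = node
        node = kids[0] if pkt[dim] < mid else kids[1]
    return [r for r in node[1]
            if all(lo <= p <= hi for p, (lo, hi) in zip(pkt, r))]

rules = [[(0, 50), (0, 100)], [(40, 90), (20, 60)], [(60, 100), (0, 30)]]
tree = build(rules, [[0, 100], [0, 100]])
print(classify(tree, (45, 25)))   # rules matching this 2-field packet
```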
Abstract:
The paper is primarily concerned with the modelling of aircraft manufacturing cost. The aim is to establish an integrated life cycle balanced design process through a systems engineering approach to interdisciplinary analysis and control. The cost modelling is achieved using the genetic causal approach, which enforces product family categorisation and the subsequent generation of causal relationships between deterministic cost components and their design source. This utilises causal parametric cost drivers and the definition of the physical architecture from the Work Breakdown Structure (WBS) to identify product families. The paper presents applications to the overall aircraft design with a particular focus on the fuselage as a subsystem of the aircraft, including fuselage panels and localised detail, as well as engine nacelles. The higher level application to aircraft requirements and functional analysis is investigated and verified relative to life cycle design issues for the relationship between acquisition cost and Direct Operational Cost (DOC), for a range of both metal and composite subsystems. Maintenance is considered in some detail as an important contributor to DOC and life cycle cost. The lower level application to aircraft physical architecture is investigated and verified for the WBS of an engine nacelle, including a sequential build stage investigation of the materials, fabrication and assembly costs. The studies are then extended by investigating the acquisition cost of aircraft fuselages, including the recurring unit cost and the non-recurring design cost of the airframe subsystem. The systems costing methodology is facilitated by the genetic causal cost modelling technique, as the latter is highly generic, interdisciplinary, flexible, multilevel and recursive in nature, and can be applied at the various analysis levels required of systems engineering. Therefore, the main contribution of the paper is a methodology for applying systems engineering costing, supported by the genetic causal cost modelling approach, whether at a requirements, functional or physical level.
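A causal parametric cost roll-up over a WBS can be sketched as follows; the drivers, coefficients and WBS elements are hypothetical, whereas the paper derives its genetic causal relationships per product family from real data:

```python
# Minimal sketch of a parametric cost roll-up over a Work Breakdown
# Structure. Each element carries a cost estimating relationship of the
# form cost = a * driver**b (all values illustrative).
WBS = {
    # element:         (driver value, coefficient a, exponent b)
    "fuselage_panels": (450.0, 3.2, 0.85),   # driver: panel weight (kg)
    "nacelle":         (220.0, 5.1, 0.80),   # driver: nacelle weight (kg)
    "assembly":        (1200,  0.9, 1.00),   # driver: fastener count
}

total = sum(a * driver**b for driver, a, b in WBS.values())
print(f"recurring unit cost estimate: {total:,.0f} (arbitrary units)")
```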
Abstract:
A novel acousto-optic spectrometer (IfU Diagnostic Systems GmbH) for 2-dimensional (2D) optical emission spectroscopy with high spectral resolution has been developed. The spectrometer is based on acousto-optic tuneable filter technology with fast random wavelength access. Measurements for characterisation of the imaging quality, the spatial resolution, and the spectral resolution are presented. The applicability for 2D-space and phase resolved optical emission spectroscopy (2D-PROES) is shown. 2D-PROES has been applied to an inductively coupled plasma with radio frequency excitation at 13.56 MHz.
Abstract:
A complex number λ is called an extended eigenvalue of a bounded linear operator T on a Banach space B if there exists a non-zero bounded linear operator X acting on B such that XT = λTX. We show that there are compact quasinilpotent operators on a separable Hilbert space for which the set of extended eigenvalues is the one-point set {1}.