928 results for essence of operation
Abstract:
This thesis deals with restarting automata and extensions of restarting automata. Restarting automata are a tool for recognizing formal languages. They are motivated by the linguistic technique of analysis by reduction and were introduced in 1995 by Jancar, Mráz, Plátek and Vogel. A restarting automaton consists of a finite control, a read/write window of fixed size, and a flexible tape. Initially the tape contains the input together with tape boundary markers. The computation of a restarting automaton proceeds in so-called cycles. Each cycle begins at the left border of the tape in the initial state, performs a local rewrite on the tape, and ends with a restart, which moves the read/write window back to the left border and re-enters the initial state. This thesis is mainly concerned with two extensions of restarting automata: CD-systems of restarting automata and nonforgetting restarting automata. A nonforgetting restarting automaton may finish a cycle in an arbitrary state, and a CD-system of restarting automata consists of a set of restarting automata that process the input together, their cooperation being governed by a mode of operation similar to that of CD grammar systems. For both extensions it turns out that the deterministic models are more powerful than deterministic standard restarting automata. It is shown that CD-systems of restarting automata can in many cases be simulated by nonforgetting restarting automata, and conversely, nonforgetting restarting automata can also be simulated by CD-systems of restarting automata. Furthermore, restarting automata and nonforgetting restarting automata are studied that are nondeterministic but never make errors. It turns out that these automata can be simulated by deterministic (nonforgetting) restarting automata if they start a new cycle immediately after the rewrite, or if they may move their window both to the left and to the right. Moreover, all (nonforgetting) restarting automata that may make errors, but detect them after finitely many cycles, can be simulated by (nonforgetting) restarting automata that make no errors. Another important result states that the deterministic monotone nonforgetting restarting automata with auxiliary symbols that end the cycle immediately after the rewrite step recognize exactly the deterministic context-free languages, whereas the deterministic monotone nonforgetting restarting automata with auxiliary symbols without this restriction recognize strictly more, namely the left-right regular languages. Thus, for the first time, restarting automata with auxiliary symbols that end their cycle immediately after the rewrite step are separated from restarting automata of the same type without this restriction. It is particularly noteworthy that both types of automata characterize well-known language classes.
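The cycle structure is easiest to see in miniature. The following Python sketch is purely illustrative and not taken from the thesis: the window size, the single rewrite rule ("ab" is deleted) and the accepted language { a^n b^n : n >= 0 } are assumptions chosen to keep the example short.

# Minimal sketch of the cycle structure of a restarting automaton.
# Illustrative only: the rewrite rule ("ab" -> empty word) and the
# window size are assumptions; together they recognize a^n b^n
# by analysis through reduction.

def accepts(word: str, window: int = 2) -> bool:
    tape = word
    while True:
        # Each cycle starts at the left border in the initial state and
        # scans the tape through a window of fixed size.
        for i in range(len(tape) - window + 1):
            if tape[i:i + window] == "ab":
                # Local rewrite: delete the window contents, then restart,
                # i.e. move the window back to the left border.
                tape = tape[:i] + tape[i + window:]
                break
        else:
            # No rewrite is possible: accept iff the tape is empty.
            return tape == ""

print(accepts("aaabbb"))  # True
print(accepts("aabbb"))   # False

Each pass of the outer loop is one cycle: scan from the left border, perform exactly one local rewrite, restart. The nonforgetting automata and CD-systems discussed below relax the requirement that every cycle begins in the same initial state.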
Abstract:
The nonforgetting restarting automaton is a generalization of the restarting automaton that, when executing a restart operation, changes its internal state based on the current state and the actual contents of its read/write window instead of resetting it to the initial state. Another generalization of the restarting automaton is the cooperating distributed system (CD-system) of restarting automata. Here a finite system of restarting automata works together in analyzing a given sentence, where they interact based on a given mode of operation. As it turned out, CD-systems of restarting automata of some type X working in mode =1 are just as expressive as nonforgetting restarting automata of the same type X. Further, various types of determinism have been introduced for CD-systems of restarting automata, called strict determinism, global determinism, and local determinism, and it has been shown that globally deterministic CD-systems working in mode =1 correspond to deterministic nonforgetting restarting automata. Here we derive lower bound results for some types of nonforgetting restarting automata and for some types of CD-systems of restarting automata. In this way we establish separations between the corresponding language classes, thus providing detailed technical proofs for some of the separation results announced in the literature.
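The difference between the two restart operations can be stated in a few lines. This is a hedged sketch, not the formal definition from the paper: the state names and the restart table are invented for illustration.

# Sketch of the two restart operations. All names and the transition
# table are assumptions made for illustration only.

Q0 = "q0"

def standard_restart(state: str, window: str) -> str:
    # A standard restarting automaton resets to the initial state,
    # forgetting everything it learned during the cycle.
    return Q0

def nonforgetting_restart(state: str, window: str, table: dict) -> str:
    # A nonforgetting restarting automaton picks the state for the next
    # cycle from the current state and the current window contents, so
    # information survives the restart.
    return table.get((state, window), Q0)

# Hypothetical restart table: remember which block was deleted last.
table = {("q0", "ab"): "saw_ab", ("q0", "cd"): "saw_cd"}
print(standard_restart("q0", "ab"))              # q0
print(nonforgetting_restart("q0", "ab", table))  # saw_ab

In mode =1 of a CD-system, control passes to a successor component after each cycle, and that choice of successor plays the same role as the carried-over state here.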
Abstract:
We introduce a new mode of operation for CD-systems of restarting automata by providing explicit enable and disable conditions in the form of regular constraints. We show that, for each CD-system M of restarting automata and each mode m of operation considered by Messerschmidt and Otto, there exists a CD-system M' of restarting automata of the same type as M that, working in the new mode ed, accepts the language that M accepts in mode m. Further, we prove that in mode ed, a locally deterministic CD-system of restarting automata of type RR(W)(W) can be simulated by a locally deterministic CD-system of restarting automata of the more restricted type R(W)(W). This is the first time that a non-monotone type of R-automaton without auxiliary symbols is shown to be as expressive as the corresponding type of RR-automaton.
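The new mode can be pictured as a scheduler that consults per-component regular constraints. The sketch below is an illustrative reading of mode ed, not the paper's formal definition; the component, both regular expressions, and the rewrite it applies are assumptions.

import re

# Illustrative reading of mode ed: a component may start a cycle only
# while its enable constraint matches the tape contents and its disable
# constraint does not. Everything below is a made-up example.

components = [
    # (name, enable condition, disable condition, local rewrite)
    ("strip_ab",
     re.compile(r"a+b+"),   # enabled while an a-b block remains
     re.compile(r"^b"),     # disabled once a stray leading b appears
     lambda tape: tape.replace("ab", "", 1)),
]

def ed_step(tape: str):
    for name, enable, disable, rewrite in components:
        if enable.search(tape) and not disable.search(tape):
            return name, rewrite(tape)
    return None, tape  # no component is enabled: the system halts

tape = "aaabbb"
while True:
    who, tape = ed_step(tape)
    if who is None:
        break
print("final tape:", repr(tape))  # '' for a^n b^n input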
Abstract:
Understanding and predicting changes in storm tracks over longer time scales is a challenging problem, particularly in the North Atlantic. This is due in part to the complex range of forcings (land–sea contrast, orography, sea surface temperatures, etc.) that combine to produce the structure of the storm track. The impact of land–sea contrast and midlatitude orography on the North Atlantic storm track is investigated through a hierarchy of GCM simulations using idealized and “semirealistic” boundary conditions in a high-resolution version of the Hadley Centre atmosphere model (HadAM3). This framework captures the large-scale essence of features such as the North and South American continents, Eurasia, and the Rocky Mountains, enabling the results to be applied more directly to realistic modeling situations than was possible with previous idealized studies. The physical processes by which the forcing mechanisms impact the large-scale flow and the midlatitude storm tracks are discussed. The characteristics of the North American continent are found to be very important in generating the structure of the North Atlantic storm track. In particular, the southwest–northeast tilt in the upper tropospheric jet produced by southward deflection of the westerly flow incident on the Rocky Mountains leads to enhanced storm development along an axis close to that of the continent’s eastern coastline. The approximately triangular shape of North America also enables a cold pool of air to develop in the northeast, intensifying the surface temperature contrast across the eastern coastline, consistent with further enhancements of baroclinicity and storm growth along the same axis.
Abstract:
The role of convective processes in moistening the atmosphere during suppressed periods of a Madden-Julian oscillation is investigated in cloud-resolving model (CRM) simulations, and the impact of moistening on the subsequent evolution of convection is assessed as part of a Global Energy and Water Cycle Experiment Cloud System Study (GCSS) intercomparison project. The ability of single-column model (SCM) versions of a number of state-of-the-art climate and numerical weather prediction models to capture these convective processes is also evaluated. During the suppressed periods, the CRMs are found to simulate a maximum moistening around 3 km, which is associated with a predominance of shallow convection. All SCMs produce adequate amounts of shallow convection during the suppressed periods, comparable to that seen in CRMs, but the relatively drier SCMs have higher precipitation rates than the relatively wetter SCMs and CRMs. The relatively drier SCMs dry, rather than moisten, the lower troposphere below the melting level. During the transition periods, convective processes act to moisten the atmosphere above the level at which mean advection changes from moistening to drying, despite an overall drying effect for the column. The SCMs capture some essence of this moistening at upper levels. A gradual transition from shallow to deep convection is simulated by the CRMs and the wetter SCMs during the transition periods, but the onset of deep convection is delayed in the drier SCMs. This results in lower precipitation rates for these SCMs during the active periods, although much better agreement exists between the models at this time.
Abstract:
Building refurbishment is key to reducing the carbon footprint and improving comfort in the built environment. However, quantifying the real benefit of a facade change, which can bring advantages to owners (value), occupants (comfort) and society (sustainability), is not a simple task. At a building physics level, the changes in kWh per m2 of heating/cooling load can be readily quantified. However, there are many subtle layers of operation and maintenance below these headline figures which determine how sustainable a building is in reality, such as quality-of-life factors. This paper considers the range of approaches taken by a facade refurbishment consortium to assess refurbishment solutions for multi-storey, multi-occupancy buildings and how to critically evaluate them. Each of the applied tools spans one or more of the three building parameters of people, product and process. 'Decision-making' analytical network process and parametric building analysis tools are described and their potential impact on the building refurbishment process evaluated.
Abstract:
Building services are worth about 2% of GDP and are essential for the effective and efficient operation of buildings. It is increasingly recognised that the value of a building is related to the way it supports the client organisation's ongoing business operations. Building services are central to the functional performance of buildings and provide the necessary conditions for the health, well-being, safety and security of occupants. They frequently comprise several technologically distinct sub-systems, and their design and construction requires the involvement of numerous disciplines and trades. Designers and contractors working on the same project are frequently employed by different companies. Materials and equipment are supplied by a diverse range of manufacturers. Facilities managers are responsible for the operation of building services in use. Coordination between these participants is crucially important for achieving optimum performance, but it is too often neglected, leaving room for serious faults; effective integration is therefore essential. Modern technology offers increasing opportunities for integrated personal-control systems for lighting, ventilation and security, as well as for interoperability between systems. Opportunities for a new mode of systems integration are provided by the emergence of PFI/PPP procurement frameworks. This paper attempts to establish how systems integration can be achieved in the process of designing, constructing and operating building services. The essence of the paper, therefore, is to envisage the emergent organisational responses to the realisation of building services as an interactive systems network.
Abstract:
Purpose – For many academics in UK universities the nature and orientation of their research is overwhelmingly determined by considerations of how that work will be graded in research assessment exercises (RAEs). The grades awarded to work in a particular subject area can have a considerable impact on the individual and their university. There is a need to better understand those factors which may influence these grades. The paper seeks to address this issue. Design/methodology/approach – The paper considers relationships between the grades awarded and the quantitative information provided to the assessment panels for the 1996 and 2001 RAEs for two subject areas, built environment and town and country planning, and for three other subject areas, civil engineering, geography and archaeology, in the 2001 RAE. Findings – A simple model demonstrating strong and consistent relationships is established. RAE performance relates to numbers of research active staff, the production of books and journal papers, numbers of research studentships and graduations, and research income. Important differences between subject areas are identified. Research limitations/implications – Important issues are raised about the extent to which the new assessment methodology to be adopted for the 2008 RAE will capture the essence of good quality research in architecture and built environment. Originality/value – The findings provide a developmental perspective of RAEs and show how, despite a changed methodology, various research activities might be valued in the 2008 RAE. The basis for a methodology for reviewing the credibility of the judgements of panels is proposed.
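The "simple model" referred to in the findings is of the kind illustrated below: an ordinary least-squares fit of grade against the quantitative inputs the abstract lists. The variable choice follows the abstract, but every number is synthetic; nothing here is RAE data.

import numpy as np

# Hedged illustration of a simple linear model of RAE grade as a
# function of research-active staff, outputs, studentships and research
# income. All values are synthetic placeholders, not RAE data.

X = np.array([
    # staff, papers, studentships, income (arbitrary units)
    [10.0,  40.0,  5.0, 0.5],
    [25.0, 120.0, 15.0, 2.0],
    [40.0, 260.0, 30.0, 4.5],
    [60.0, 420.0, 50.0, 8.0],
    [15.0,  70.0,  8.0, 1.0],
])
grade = np.array([3.0, 4.0, 5.0, 5.5, 3.5])

# Ordinary least squares with an intercept column.
A = np.column_stack([X, np.ones(len(X))])
coef, residuals, rank, _ = np.linalg.lstsq(A, grade, rcond=None)
print("weights:", coef[:-1], "intercept:", coef[-1])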
Abstract:
Time-dependent gas hold-up generated in 0.3 and 0.6 m diameter vessels using high-viscosity castor oil and carboxymethyl cellulose (CMC) solution was compared on the basis of impeller speed (N) and gas velocity (V_G). Two types of hold-up were distinguished: the hold-up due to tiny bubbles (ε_ft) and the total hold-up (ε_f), which included both large and tiny bubbles. It was noted that vessel diameter (i.e. the scale of operation) significantly influences (i) the trends and values of ε_f and ε_ft, and (ii) the value of τ (a constant reflecting the time dependency of hold-up). The results showed that a scale-independent correlation for gas hold-up of the form ε_f or ε_ft = A (N or P_G/V)^a (V_G)^b, where a and b are positive constants, is not appropriate for viscous liquids. This warrants further investigation into the effect of vessel diameter on gas hold-up in impeller-agitated high-viscosity liquids (μ or μ_a > 0.4 Pa·s).
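Written out in code, the correlation under test looks as follows. The constants are placeholders, not fitted values: the abstract's conclusion is precisely that no scale-independent values of A, a and b of this form fit viscous liquids across vessel diameters.

# The correlation form under test: eps = A * (N or P_G/V)**a * (V_G)**b.
# A, a and b are illustrative placeholders, not fitted constants.

def gas_holdup(N: float, V_G: float, A: float = 0.1,
               a: float = 0.5, b: float = 0.4) -> float:
    """Fractional gas hold-up from impeller speed N (1/s) and
    superficial gas velocity V_G (m/s)."""
    return A * N**a * V_G**b

print(gas_holdup(N=5.0, V_G=0.01))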
Abstract:
Nanofiltration (NF) of model sugar solutions and commercial oligosaccharide mixtures was studied in both dead-end and cross-flow modes. Preliminary trials with a dead-end filtration cell demonstrated the feasibility of fractionating monosaccharides from disaccharides and oligosaccharides in mixtures using loose nanofiltration (NF-CA-50, NF-TFC-50) membranes. During the nanofiltration purification of a commercial oligosaccharide mixture, yields of 19% (w/w) for the monosaccharides and 88% (w/w) for the di- and oligosaccharides were obtained for the NF-TFC-50 membrane after four filtration steps, indicating that removal of the monosaccharides is possible with only minor losses of the oligosaccharide content of the mixture. The effects of pressure, feed concentration and filtration temperature were studied in similar experiments carried out in a cross-flow system in full-recycle mode of operation. The rejection rates of the sugar components increased with increasing pressure, and decreased with both increasing total sugar concentration in the feed and increasing temperature. Continuous diafiltration (CD) purification of model sugar solutions and commercial oligosaccharide mixtures using NF-CA-50 (at 25 °C) and DS-5-DL (at 60 °C) membranes gave yield values of 14 to 18% for the monosaccharide, 59 to 89% for the disaccharide and 81 to 98% for the trisaccharide present in the feed. The study clearly demonstrates the potential of cross-flow nanofiltration for purifying oligosaccharide mixtures from contaminant monosaccharides.
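Why continuous diafiltration separates the sugars can be illustrated with the standard constant-volume diafiltration mass balance, under which a solute with observed rejection R retains the fraction exp(-(1 - R) * D) after D diavolumes. This is the textbook relation, not necessarily the exact model used in the study, and the rejection values below are illustrative placeholders, not measurements.

import math

# Constant-volume (continuous) diafiltration mass balance: a solute with
# observed rejection R retains exp(-(1 - R) * D) after D diavolumes.
# The rejections below are invented for illustration.

def retained_fraction(R: float, diavolumes: float) -> float:
    return math.exp(-(1.0 - R) * diavolumes)

for name, R in [("monosaccharide", 0.20),
                ("disaccharide", 0.80),
                ("trisaccharide", 0.95)]:
    print(f"{name}: {retained_fraction(R, diavolumes=3.0):.0%} retained")

With plausible rejections the monosaccharide washes out while the di- and trisaccharides are largely retained, which is the qualitative pattern of the reported yields.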
Abstract:
The Stochastic Diffusion Search (SDS) was developed as a solution to the best-fit search problem. Thus, as a special case, it is capable of solving the transform-invariant pattern recognition problem. SDS is efficient and, although inherently probabilistic, produces very reliable solutions over widely ranging search conditions. However, to date a systematic formal investigation of its properties has not been carried out. This thesis addresses that problem. It reports results pertaining to the global convergence of SDS and characterises its time complexity. The main emphasis of the work, however, is on the resource allocation aspect of Stochastic Diffusion Search operations. The thesis introduces a novel model of the algorithm, generalising an Ehrenfest urn model from statistical physics. This approach makes it possible to obtain a thorough characterisation of the response of the algorithm in terms of the parameters describing the search conditions in the case of a unique best-fit pattern in the search space. The model is then generalised to account for different search conditions: two solutions in the search space, and search for a unique solution in a noisy search space. An approximate solution for the case of two alternative solutions is also proposed and compared with the predictions of the extended Ehrenfest urn model. The analysis performed enabled a quantitative characterisation of the Stochastic Diffusion Search in terms of exploration and exploitation of the search space, and showed that SDS is biased towards the latter mode of operation. This novel perspective on the Stochastic Diffusion Search led to an investigation of extensions of the standard SDS that strike a different balance between these two modes of search-space processing. Thus, two novel algorithms were derived from the standard Stochastic Diffusion Search, 'context-free' and 'context-sensitive' SDS, and their properties were analysed with respect to resource allocation. They share some of the desired features of their predecessor but also possess properties not present in the classic SDS. The theory developed in the thesis is illustrated throughout with carefully chosen simulations of a best-fit search for a string pattern, a simple but representative domain enabling careful control of search conditions.
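For readers unfamiliar with SDS, the test-and-diffuse loop for the string best-fit domain used in the thesis can be sketched in a few lines of Python. This is a generic rendering of standard SDS, not code from the thesis; the agent count and iteration budget are arbitrary.

import random

# Minimal sketch of standard Stochastic Diffusion Search for best-fit
# string search. Parameter choices (agent count, iterations) are
# assumptions.

def sds(text: str, pattern: str, n_agents: int = 100, iters: int = 50):
    positions = range(len(text) - len(pattern) + 1)
    hyps = [random.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # Test phase: each agent checks one randomly chosen component
        # of the pattern at its hypothesised position.
        for i, h in enumerate(hyps):
            j = random.randrange(len(pattern))
            active[i] = text[h + j] == pattern[j]
        # Diffusion phase: an inactive agent polls a random agent and
        # copies its hypothesis if that agent is active (exploitation),
        # otherwise it re-samples uniformly (exploration).
        for i in range(n_agents):
            if not active[i]:
                k = random.randrange(n_agents)
                hyps[i] = hyps[k] if active[k] else random.choice(positions)
    # The largest cluster of agents indicates the best-fit position.
    return max(set(hyps), key=hyps.count)

print(sds("xxxyhellaxhelloyy", "hello"))  # typically 10, the true match

The diffusion phase is exactly what the Ehrenfest urn model in the thesis characterises: active agents recruit inactive ones (exploitation), while failed agents re-sample the search space (exploration).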
Abstract:
The distinction between the essence of an object and its properties has been obscured in contemporary discussion of essentialism. Locke held that the properties of an object are exclusively those features that ‘flow’ from its essence. Here he follows the Aristotelian theory, leaving aside Locke’s own scepticism about the knowability of essence. I defend the need to distinguish sharply between essence and properties, arguing that essence must be given by form and that properties flow from form. I give a precise definition of what the term of art ‘flow’ amounts to, and apply the distinction to various kinds of taxonomic issues.
Abstract:
Dhaka cheese is a semihard artisanal variety originating from Bangladesh, where manual curd kneading is a normal stage in its manufacture. Dhaka cheeses were produced with different degrees of curd kneading to quantify the curd manipulation process in terms of pressure and to standardise the length of the operation. The effect of manipulation on the composition, rheology, texture and microstructure of the fresh cheese was also studied. Manipulation had significant effects (P < 0.05–0.001) on most of the parameters studied. One minute of curd manipulation was found to be sufficient for Dhaka cheesemaking.
Abstract:
The understanding of the statistical properties and of the dynamics of multistable systems is gaining more and more importance in a vast variety of scientific fields. This is especially relevant for the investigation of the tipping points of complex systems. Sometimes, in order to understand the time series of given observables exhibiting bimodal distributions, simple one-dimensional Langevin models are fitted to reproduce the observed statistical properties and are then used to investigate the projected dynamics of the observable. This is of great relevance for studying potential catastrophic changes in the properties of the underlying system, or resonant behaviours such as those related to stochastic resonance-like mechanisms. In this paper we propose a framework for this kind of study, using simple box models of the oceanic circulation and choosing as observable the strength of the thermohaline circulation. We study the statistical properties of the transitions between the two modes of operation of the thermohaline circulation under symmetric boundary forcings and test their agreement with simplified one-dimensional phenomenological theories. We extend our analysis to include stochastic resonance-like amplification processes. We conclude that fitted one-dimensional Langevin models, when closely scrutinised, may turn out to be more ad hoc than they seem, lacking robustness and/or well-posedness. They should be treated with care, more as an empirical descriptive tool than as a methodology with predictive power.
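The kind of one-dimensional Langevin model discussed above can be written down directly: overdamped motion in a double-well potential, integrated with the Euler-Maruyama scheme. The potential, noise level and time step below are illustrative assumptions standing in for a model fitted to the thermohaline circulation strength.

import numpy as np

# Sketch of a bistable one-dimensional Langevin model: overdamped motion
# in the double-well potential V(x) = x**4/4 - x**2/2, integrated with
# Euler-Maruyama. All parameters are illustrative assumptions.

rng = np.random.default_rng(0)

def simulate(sigma: float = 0.4, dt: float = 0.01,
             steps: int = 100_000, x0: float = 1.0) -> np.ndarray:
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        drift = -(x[t - 1] ** 3 - x[t - 1])  # -V'(x)
        x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

traj = simulate()
# Transitions between the two modes appear as zero crossings.
print("zero crossings:", int(np.sum(np.diff(np.sign(traj)) != 0)))

Fitting such a model to a bimodal time series amounts to estimating the drift and the noise level sigma; the paper's caution is that such a fit can look adequate while lacking robustness or well-posedness.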
Abstract:
With the increasing pace of change, organisations have sought new real estate solutions which provide greater flexibility. What appears to be required is not flexibility for all uses but appropriate flexibility for the volatile, risky and temporary part of a business. This is the essence of the idea behind the split between the core and periphery portfolio. The serviced office has emerged to fill the need for absolute flexibility. This market is very diverse in terms of product, services and target market. It has grown and gained credibility with occupiers and, more recently, with the property investment market. Occupiers likewise use this space in a variety of ways: some solely occupy serviced space, while others use it to complement their more permanent space. It therefore appears that the market is fulfilling the role of providing periphery space for at least some occupiers. In all instances the key to this space is a focus on the financial and tenurial flexibility that is not provided by other types of business space.