853 results for General Systems Theory


Relevance: 80.00%

Publisher:

Abstract:

Computer-based, socio-technical systems projects are frequently failures. In particular, computer-based information systems often fail to live up to their promise. Part of the problem lies in the uncertainty of the effect of combining the subsystems that comprise the complete system; i.e. the system's emergent behaviour cannot be predicted from knowledge of the subsystems. This paper suggests that uncertainty management is a fundamental unifying concept in the analysis and design of complex systems, and goes on to indicate that this is due to the co-evolutionary nature of the requirements and implementation of socio-technical systems. The paper presents a model of the propagation of a system change which indicates that introducing two or more changes over time can cause chaotic emergent behaviour.

Relevance: 80.00%

Publisher:

Abstract:

Although collaboration manifestly takes place in time, the role of time in shaping the behaviour of collaborations, and collaborative systems, is not well understood. Time is more than clock-time or the subjective experience of time; its effects on systems include differential rates of change of system elements, temporally non-linear behaviour and phenomena such as entrainment and synchronization. As a system driver, it generates emergent effects shaping systems and their behaviour. In the paper we present a systems view of time, and consider the implications of such a view through the case of collaborative development of a new university timetabling system. Teasing out the key temporal phenomena using the notion of temporal trajectories helps us understand the emergent temporal behaviour and suggests a means for improving outcomes.
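Entrainment and synchronization, named above as temporal phenomena, can be sketched with the standard two-oscillator Kuramoto model (an illustration chosen here, not drawn from the paper; all rates and coupling values are assumptions):

```python
import math

# Two processes with different natural rates w1, w2, coupled with
# strength k. Above the threshold k > |w2 - w1| their phases lock
# (entrainment); below it the phase difference grows without bound.
def phase_difference(k, w1=1.0, w2=1.3, dt=0.01, steps=20000):
    p1, p2 = 0.0, 1.0
    for _ in range(steps):
        d1 = w1 + (k / 2) * math.sin(p2 - p1)
        d2 = w2 + (k / 2) * math.sin(p1 - p2)
        p1, p2 = p1 + dt * d1, p2 + dt * d2
    return p2 - p1

locked = phase_difference(k=1.0)     # above threshold: locks near asin(0.3)
drifting = phase_difference(k=0.05)  # below threshold: drifts apart
```

The phase difference obeys d(delta)/dt = (w2 - w1) - k*sin(delta), so the locked value settles at asin((w2 - w1)/k).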

Relevance: 80.00%

Publisher:

Abstract:

In this paper we consider the co-evolutionary dynamics of IS engagement, where episodic change of implementation increasingly occurs within the context of linkages and interdependencies between systems and processes within and across organisations. Although there are many theories that interpret the various motors of change, be they lifecycle, teleological, dialectic or evolutionary, our paper attempts to move towards a unifying view of change by studying co-evolutionary dynamics from a complex systems perspective. To understand how systems and organisations co-evolve in practice, and how order emerges or fails to emerge, we adopt complex adaptive systems theory to incorporate the evolutionary and teleological motors, and actor-network theory to incorporate the dialectic motors. We illustrate this through the analysis of the implementation of a novel academic scheduling system at a large research-intensive Australian university.

Relevance: 80.00%

Publisher:

Abstract:

Rail corrugation consists of undesirable periodic fluctuations in wear on railway track and costs the railway industry substantially in its removal by regrinding. Much research has been performed on this problem, particularly over the past two decades; however, a reliable cure remains elusive for wear-type corrugations. Recently the growth behaviour of wear-type rail corrugation has been investigated using theoretical and experimental models as part of the RailCRC Project (#18). A critical part of this work is the tuning and validation of these models via an extensive field testing program. Rail corrugations have been monitored for 2 years on sites throughout Australia. Measured rail surface profiles are used to determine corrugation growth rates at each site. Growth rates and other characteristics are compared with theoretical predictions from a computer model for validation. The results from several pertinent sites are presented and discussed.
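One common way to turn a sequence of measured profiles into a growth rate (an illustrative sketch with synthetic numbers, not the project's data or method) is to assume exponential amplitude growth, a(t) = a0*exp(g*t), and fit a least-squares line to log-amplitude versus time:

```python
import math

# Illustrative sketch: estimate a corrugation growth rate g from
# amplitude measurements by ordinary least squares on log-amplitude.
def growth_rate(times, amplitudes):
    logs = [math.log(a) for a in amplitudes]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    num = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den  # slope of the fit = growth rate g, per unit time

# Synthetic example: amplitude roughly doubling every 12 months.
months = [0, 6, 12, 18, 24]
amps = [10.0, 14.1, 20.0, 28.3, 40.0]  # micrometres (made-up values)
g = growth_rate(months, amps)          # close to ln(2)/12 per month
```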

Relevance: 80.00%

Publisher:

Abstract:

This study examines inter-religious marriage. The meeting of two religious cultures can, and probably will, be a source of conflict. The conflicts that emerge may arise not from a different worldview but essentially because the other, by being different, threatens the individual's identity. Faced with this threat, it becomes necessary to strengthen one's own identity. Drawing on interviews with six couples in inter-religious marriages, specifically between Christians and Jews, with children aged between zero and five and between fourteen and twenty-four, living in the city of São Paulo, I set out to analyse how couples deal with the challenges that arise when one spouse belongs to a religious tradition different from the other's. Among these challenges is handling the religious education or spiritual formation of their children. Using Systemic Family Therapy as a framework, principally Paul Watzlawick's work on communication and Murray Bowen's work on human functioning within family systems, along with other supporting references on the intercultural dimension of marriage, I sought to discuss the implications for religious praxis and to offer contributions to clinical psychology and to the sciences of religion. Psychology needs to rethink its practice, setting aside its prejudice towards religion and including religion in its studies, so as to bring the therapist's discourse and practice closer together, since therapists can become aware of their own religious values when seeking to understand the religion and spirituality of their clients. Religious institutions, in turn, need to reflect on their praxis so as to reach the families on the periphery of the religions: families that ask for guidance and religious formation but that, being inter-religious, need to be recognised and respected as such.
Therefore, the churches need to open themselves up, to stop looking only inward, and to serve the world, even if part of that world never formally becomes a member of the community.

Relevance: 80.00%

Publisher:

Abstract:

Historically, newly created media appropriate language resources from pre-existing media. As media technologies develop, so do their languages, adapting simultaneously to the medium and its messages, to modes of production, and to ideal conditions of interaction with users. Digital media, by their nature, offer performative interfaces, "thinking images", that allow more than the mere aesthetic representation of content. This is the context of the research problem: which transdisciplinary theories can contribute to understanding the complex communication processes involved in the relationship between human beings and digital media for learning purposes? The aim of this research was to extend the model developed by Stephen Littlejohn, incorporating new concepts and generalisations from other branches of science with different "world views", in order to expand Littlejohn's proposal into a Transdisciplinary Model for Communication with Digital Media which, in our view, helps explain the phenomena pertaining to the relationship between humans and digital media, particularly in science-learning processes. The research was conducted using bibliographic and descriptive methods.

Relevance: 80.00%

Publisher:

Abstract:

The concept of entropy rate is well defined in dynamical systems theory but cannot be applied directly to finite real-world data sets. With this in mind, Pincus developed Approximate Entropy (ApEn), which uses ideas from Eckmann and Ruelle to create a regularity measure based on entropy rate that can be used to determine the influence of chaotic behaviour in a real-world signal. However, this measure was found not to be robust, so an improved formulation known as Sample Entropy (SampEn) was created by Richman and Moorman to address these issues. We have developed a new, related regularity measure which is not based on the theory provided by Eckmann and Ruelle and which proves to be a better-behaved measure of complexity than the previous measures, while still retaining a low computational cost.
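The new measure itself is not given in the abstract, but the baseline it is compared against, Sample Entropy, can be sketched directly. The parameter choices m=2 and r=0.2*std below are the conventional defaults, not values taken from the paper:

```python
import numpy as np

# Hedged sketch of Sample Entropy (Richman & Moorman): SampEn = -ln(A/B),
# where B counts pairs of length-m templates within tolerance r (Chebyshev
# distance, self-matches excluded) and A counts the same for length m+1.
def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x)

    def count_matches(length):
        # N - m overlapping templates, so both counts compare the same
        # number of templates, as the SampEn definition requires.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Distance to later templates only: no self-matches.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b = count_matches(m)      # template matches of length m
    a = count_matches(m + 1)  # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.RandomState(0)
se_noise = sample_entropy(rng.randn(300))                       # irregular
se_tone = sample_entropy(np.sin(np.linspace(0, 10 * np.pi, 300)))  # regular
```

A regular signal produces many repeated templates, so its SampEn is far lower than that of white noise.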

Relevance: 80.00%

Publisher:

Abstract:

A review is given of general chromatographic theory, the factors affecting the performance of chromatographic columns, and aspects of scale-up of the chromatographic process. The theory of gel permeation chromatography (g.p.c.) is reviewed, and the results of an experimental study to optimize the performance of an analytical g.p.c. system are reported. The design and construction of a novel sequential continuous chromatographic refining unit (SCCR3), for continuous liquid-liquid chromatography applications, is described. Counter-current operation is simulated by sequencing a system of inlet and outlet port functions around a connected series of fixed, 5.1 cm internal diameter x 70 cm long, glass columns. The number of columns may be varied, and, during this research, a series of either twenty or ten columns was used. Operation of the unit for the continuous fractionation of a dextran polymer (M.W. 30,000) by g.p.c. is reported, using 200-400 µm diameter porous silica beads (Spherosil XOB07S) as packing and distilled water as the mobile phase. The effects of feed concentration, feed flow rate, and mobile and stationary phase flow rates have been investigated, by means of both product and on-column concentrations and molecular weight distributions. The ability to operate the unit successfully at on-column concentrations as high as 20% w/v dextran has been demonstrated, and removal of both the high and low molecular weight ends of a polymer feed distribution, to produce products meeting commercial specifications, has been achieved. Equivalent throughputs have been as high as 2.8 tonnes per annum for ten columns, based on continuous operation for 8000 hours per annum. A concentration dependence of the equilibrium distribution coefficient, KD, observed during continuous fractionation studies, is related to evidence in the literature and to experimental results obtained on a small-scale batch column.
Theoretical treatments of the counter-current chromatographic process are outlined, and a preliminary computer simulation of the SCCR3 unit is presented.
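For readers unfamiliar with g.p.c., the role of the distribution coefficient KD can be seen in the standard size-exclusion elution relation V_R = V_0 + KD*V_i, where V_0 is the void (interstitial) volume and V_i the internal pore volume. The volumes below are illustrative assumptions, not the thesis's data:

```python
# Standard size-exclusion relation: retention volume as a function of
# the equilibrium distribution coefficient KD (0 = fully excluded from
# the pores, 1 = fully permeating). Numbers are illustrative only.
def elution_volume(v0, vi, kd):
    return v0 + kd * vi

v0, vi = 500.0, 400.0  # mL, assumed values for a single packed column
excluded = elution_volume(v0, vi, 0.0)    # large molecules elute first
permeating = elution_volume(v0, vi, 1.0)  # small molecules elute last
```

The concentration dependence of KD noted in the abstract means the second term shifts with loading, which is why high on-column concentrations complicate the separation.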

Relevance: 80.00%

Publisher:

Abstract:

A review of the general chromatographic theory and of continuous chromatographic techniques has been carried out. Three methods of inverting sucrose to glucose and fructose in beet molasses were explored: inversion using the enzyme invertase, using hydrochloric acid, and using the resin Amberlite IR118 in the H+ form. The preferred method, on economic and purity considerations, was the use of the enzyme invertase. The continuous chromatographic separation of inverted beet molasses, yielding a fructose-rich product and a product containing glucose and other non-sugars, was carried out using a semi-continuous counter-current chromatographic refiner (SCCR6) consisting of ten 10.8 cm x 75 cm long stainless steel columns packed with a calcium-charged 8% cross-linked polystyrene resin, Zerolit SRC 14. Based on the literature, this is the first time such a continuous separation has been attempted. It was found that the cations present in beet molasses displaced the calcium ions from the resin, resulting in poor separation of the glucose and fructose. Three methods of maintaining the calcium form of the resin during continuous operation of the equipment were established. Passing a solution of calcium nitrate through the purge column for half a switch period was found to be the most effective, as there was no contamination of the main fructose-rich product and the product concentrations were increased by 50%. When a 53% total solids (53 Brix) molasses feedstock was used, the throughput was 34.13 kg sugar solids per m3 of resin per hour. Product purities of 97% fructose in the fructose-rich product (FRP) and 96% glucose in the glucose-rich product (GRP) were obtained, with product concentrations of 10.93 %w/w for the FRP and 10.07 %w/w for the GRP.
The effects of flowrates, temperature and background sugar concentration on the distribution coefficients of fructose, glucose, betaine and an ionic component of beet molasses were evaluated and general relationships derived. The computer simulation of inverted beet molasses separations on an SCCR system has been carried out successfully.
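As a rough consistency check on the quoted throughput, the resin volume implied by ten 10.8 cm x 75 cm columns can be computed directly (assuming, as a reading of the abstract, that 10.8 cm is the internal diameter and the packed resin fills the full column volume):

```python
import math

# Back-of-envelope check using the abstract's figures: ten columns,
# 10.8 cm diameter x 75 cm long, at 34.13 kg sugar solids per m3 of
# resin per hour. Assumes resin volume equals geometric column volume.
radius = 0.108 / 2   # m
length = 0.75        # m
volume = 10 * math.pi * radius ** 2 * length  # total resin volume, m3
throughput = 34.13 * volume                   # kg sugar solids per hour
```

This works out to roughly 0.069 m3 of resin and a little over 2.3 kg of sugar solids per hour for the whole unit.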

Relevance: 80.00%

Publisher:

Abstract:

This thesis presents the results of an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which produce the observed time series are linear, despite a variety of reasons to suspect that nonlinearity is present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio of the recordings. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals are cancelled out by the averaging process. Other problems are the high cost and low portability of state-of-the-art multichannel machines. As a result, the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with procuring and maintaining these machines. In this project, we address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
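The dynamical-systems starting point described above, reconstructing an unobservable state from a single-channel recording, is conventionally done by time-delay embedding (Takens' theorem). A minimal sketch, with a synthetic signal and illustrative embedding parameters rather than anything from the thesis:

```python
import numpy as np

# Time-delay embedding: map a scalar time series x(t) to vectors
# [x(t), x(t + tau), ..., x(t + (dim-1)*tau)], which (under Takens'
# theorem) reconstruct the underlying state space. Parameters dim and
# tau are illustrative; in practice they are chosen from the data.
def delay_embed(x, dim=3, tau=5):
    n = len(x) - (dim - 1) * tau          # number of complete vectors
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.05 * np.random.RandomState(1).randn(2000)  # noisy "channel"
states = delay_embed(signal, dim=3, tau=25)
# Each row of `states` is one reconstructed state vector.
```

Nonlinear measures (correlation dimension, Lyapunov exponents, the entropy measures discussed elsewhere in these results) are then computed on the rows of the embedded matrix rather than on the raw samples.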

Relevance: 80.00%

Publisher:

Abstract:

Risk and knowledge are two concepts and components of business management which have so far been studied almost independently. This is especially true where risk management is conceived mainly in financial terms, as, for example, in the banking sector. The banking sector has sophisticated methodologies for managing risk, such as mathematical risk modeling. However, these methodologies for analyzing risk do not explicitly include knowledge management for risk knowledge creation and risk knowledge transfer. Banks are affected by internal and external changes, with the consequent accommodation to new business models, new regulations and the competition of big players around the world. Thus, banks have different levels of risk appetite and different policies in risk management. This paper takes into consideration that business models are changing and that management is looking across the organization to identify the influence of strategic planning, information systems theory, risk management and knowledge management. These disciplines can handle the risks affecting banking that arise from different areas, but only if they work together, which creates a need to view them in an integrated way. This article sees enterprise risk management as a specific application of knowledge aimed at controlling deviation from strategic objectives, shareholders' values and stakeholders' relationships. Before and after a modeling process, it is necessary to find insights into how the application of knowledge management processes can improve the understanding of risk and the implementation of enterprise risk management. The article proposes a methodology intended to guide the development of risk modeling knowledge and the reduction of knowledge silos, in order to improve the quality and quantity of solutions to risk inquiries across the organization.