866 results for modelling and simulation
Abstract:
This research examines dynamics associated with new representational technologies in complex organizations through a study of the use of a Single Model Environment, prototyping and simulation tools in the mega-project to construct Terminal 5 at Heathrow Airport, London. The ambition of the client, BAA, was to change industrial practices, reducing project costs and time to delivery through new contractual arrangements and new digitally-enabled collaborative ways of working. The research highlights changes over time and addresses two areas of 'turbulence' in the use of: 1) technologies, where there is a dynamic tension between desires to constantly improve, change and update digital technologies and the need to standardise practices, maintaining and defending the overall integrity of the system; and 2) representations, where dynamics result from the responsibilities and liabilities associated with the sharing of digital representations and a lack of trust in the validity of data from other firms. These dynamics are tracked across three stages of this well-managed and innovative project and indicate the generic need to treat digital infrastructure as an ongoing strategic issue.
Abstract:
There are still major challenges in the automatic indexing and retrieval of content across very large multimedia corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on its semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content using an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent this knowledge, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
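A minimal sketch of the topic-map idea described above, assuming hypothetical class and method names (Topic, TopicMap, add_label, associate) rather than the DREAM framework's actual API: labels extracted automatically from the content become topics, their time-coded occurrences point back into the media, and associations link related topics.

    # Minimal sketch (not the DREAM framework's actual API): organizing labels
    # automatically extracted from multimedia content into a topic-map-like
    # structure of topics, associations, and occurrences.
    from dataclasses import dataclass, field


    @dataclass
    class Topic:
        name: str
        # Occurrences point back into the media, e.g. (file, start_s, end_s).
        occurrences: list = field(default_factory=list)


    @dataclass
    class TopicMap:
        topics: dict = field(default_factory=dict)
        associations: list = field(default_factory=list)  # (topic_a, relation, topic_b)

        def add_label(self, label, media_file, start_s, end_s):
            topic = self.topics.setdefault(label, Topic(label))
            topic.occurrences.append((media_file, start_s, end_s))
            return topic

        def associate(self, a, b, relation="related-to"):
            self.associations.append((a, relation, b))


    # Hypothetical usage: labels produced by an automatic labelling stage.
    tm = TopicMap()
    tm.add_label("car chase", "scene_042.mov", 12.0, 47.5)
    tm.add_label("night exterior", "scene_042.mov", 0.0, 60.0)
    tm.associate("car chase", "night exterior", relation="occurs-during")
    print(len(tm.topics), "topics,", len(tm.associations), "association(s)")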
Abstract:
The work reported in this paper is motivated by the need to investigate general methods for pattern transformation. A formal definition of pattern transformation is provided, and four special cases, namely elementary and geometric transformations based on repositioning all or only some of the agents in the pattern, are introduced. The need for a mathematical tool and simulations for visualizing the behavior of a transformation method is highlighted. A mathematical method based on the Moebius transformation is proposed. The transformation method involves discretization of events for planning the paths of individual robots in the pattern. Simulations on a particle physics simulator are used to validate the feasibility of the proposed method.
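The following toy sketch illustrates the general idea of transforming a pattern by a Moebius map w = (az + b)/(cz + d) and discretizing each agent's path; the coefficients, the straight-line interpolation and the agent count are illustrative assumptions, not the planning scheme used in the paper.

    # Illustrative sketch only: moving a pattern of agents under a Moebius map
    # w = (a*z + b) / (c*z + d), with each agent's path discretized by linearly
    # interpolating between its start and target positions.
    import numpy as np

    def moebius(z, a=1 + 0.5j, b=0.2, c=0.1j, d=1):
        return (a * z + b) / (c * z + d)

    # Initial pattern: agents on a unit circle, represented as complex numbers.
    n_agents, n_steps = 8, 20
    z0 = np.exp(2j * np.pi * np.arange(n_agents) / n_agents)
    z1 = moebius(z0)                       # target pattern under the transformation

    # Discretized paths: straight-line interpolation between start and target.
    t = np.linspace(0.0, 1.0, n_steps)[:, None]
    paths = (1 - t) * z0 + t * z1          # shape (n_steps, n_agents)

    print("final positions:", np.round(paths[-1], 3))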
Abstract:
Purpose – To describe research done, as part of an EPSRC-funded project, to assist engineers working together on collaborative tasks. Design/methodology/approach – Distributed finite state modelling and agent techniques are used successfully in a new hybrid self-organising decision-making system applied to collaborative work support. For the particular application, an analysis of the tasks involved has been performed and these tasks are modelled. The system then employs a novel generic agent model, where task and domain knowledge are isolated from the support system, which provides relevant information to the engineers. Findings – The method is applied to the despatch of transmission commands within the control room of The National Grid Company Plc (NGC) – tasks are completed significantly faster when the system is utilised. Research limitations/implications – The paper describes a generic approach and it would be interesting to investigate how well it works in other applications. Practical implications – Although only one application has been studied, the methodology could equally be applied to a general class of cooperative work environments. Originality/value – One key part of the work is the novel generic agent model that enables the task and domain knowledge, which are application specific, to be isolated from the support system, and hence allows the method to be applied in other domains.
Abstract:
Driven by a range of modern applications that includes telecommunications, e-business and on-line social interaction, recent ideas in complex networks can be extended to the case of time-varying connectivity. Here we propose a general framework for modelling and simulating such dynamic networks, and we explain how the long-time behaviour may reveal important information about the mechanisms underlying the evolution.
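As a concrete illustration of the kind of dynamic network the framework addresses, the sketch below evolves an edge set by random birth and death at each time step and inspects the long-time mean degree; the rates and network size are arbitrary choices, not taken from the paper.

    # Minimal sketch of a time-varying network: at each step an absent edge
    # appears with probability alpha and a present edge disappears with
    # probability beta. The long-run mean degree settles near the value implied
    # by the birth/death balance, alpha / (alpha + beta) * (n - 1).
    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha, beta, steps = 50, 0.01, 0.05, 2000

    A = np.zeros((n, n), dtype=bool)       # adjacency matrix
    iu = np.triu_indices(n, k=1)           # upper-triangle edge slots
    mean_degree = []

    for _ in range(steps):
        present = A[iu]
        birth = rng.random(present.size) < alpha
        death = rng.random(present.size) < beta
        new_state = np.where(present, ~death, birth)
        A[iu] = new_state
        A.T[iu] = new_state                # keep the adjacency symmetric
        mean_degree.append(A.sum() / n)

    print("late-time mean degree:", np.mean(mean_degree[-200:]))
    print("birth/death balance prediction:", alpha / (alpha + beta) * (n - 1))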
Abstract:
We investigated the effect of morphological differences on neuronal firing behavior within the hippocampal CA3 pyramidal cell family by using three-dimensional reconstructions of dendritic morphology in computational simulations of electrophysiology. In this paper, we report for the first time that differences in dendritic structure within the same morphological class can have a dramatic influence on the firing rate and firing mode (spiking versus bursting, and type of bursting). Our method consisted of converting morphological measurements from three-dimensional neuroanatomical data of CA3 pyramidal cells into a computational simulator format. In the simulation, active channels were distributed evenly across the cells so that the electrophysiological differences observed in the neurons would only be due to morphological differences. We found that differences in the size of the dendritic tree of CA3 pyramidal cells had a significant qualitative and quantitative effect on the electrophysiological response. Cells with larger dendritic trees: (1) had a lower burst rate, but a higher spike rate within a burst, (2) had higher thresholds for transitions from quiescent to bursting and from bursting to regular spiking, and (3) tended to burst with a plateau. Dendritic tree size alone did not account for all the differences in electrophysiological responses. Differences in apical branching, such as the distribution of branch points and terminations per branch order, appear to affect the duration of a burst. These results highlight the importance of considering the contribution of morphology in electrophysiological and simulation studies.
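A minimal sketch of the kind of summary measures compared in the study (burst rate and spikes per burst), using a simple inter-spike-interval threshold to group spikes into bursts; the spike times and the 15 ms threshold are invented for illustration and this is not the authors' analysis code.

    # Illustrative sketch: classify a spike train into bursts with an
    # inter-spike-interval (ISI) threshold and report burst rate and
    # spikes per burst.
    import numpy as np

    def burst_statistics(spike_times_s, isi_threshold_s=0.015):
        """Group spikes whose ISI is below the threshold into bursts."""
        spike_times_s = np.asarray(spike_times_s)
        isis = np.diff(spike_times_s)
        bursts, current = [], [spike_times_s[0]]
        for t, isi in zip(spike_times_s[1:], isis):
            if isi <= isi_threshold_s:
                current.append(t)
            else:
                bursts.append(current)
                current = [t]
        bursts.append(current)
        duration = spike_times_s[-1] - spike_times_s[0]
        burst_groups = [b for b in bursts if len(b) > 1]
        return {
            "burst_rate_hz": len(burst_groups) / duration if duration > 0 else 0.0,
            "spikes_per_burst": np.mean([len(b) for b in burst_groups]) if burst_groups else 0.0,
        }

    # Hypothetical spike train: three short bursts roughly 0.5 s apart.
    spikes = [0.00, 0.01, 0.02, 0.50, 0.51, 0.52, 0.53, 1.00, 1.01]
    print(burst_statistics(spikes))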
Abstract:
Purpose – Computed tomography (CT) for 3D reconstruction entails a huge number of coplanar fan-beam projections for each of a large number of 2D slice images, and excessive radiation intensities and dosages. For some applications its rate of throughput is also inadequate. A technique for overcoming these limitations is outlined. Design/methodology/approach – A novel method to reconstruct 3D surface models of objects is presented, using, typically, ten 2D projective images. These images are generated by relative motion between the set of objects and a set of ten fan-beam X-ray sources and sensors, with their viewing axes suitably distributed in 2D angular space. Findings – The method entails a radiation dosage several orders of magnitude lower than CT, and requires far less computational power. Experimental results are given to illustrate the capability of the technique. Practical implications – The substantially lower cost of the method and, more particularly, its dramatically lower irradiation make it relevant to many applications precluded by current techniques. Originality/value – The method can be used in many applications, such as aircraft hold-luggage screening and 3D industrial modelling and measurement, and it should also have important applications in medical diagnosis and surgery.
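As a rough illustration of few-view 3D reconstruction, the toy sketch below carves a voxel volume using ten synthetic orthographic silhouettes of a sphere; this is ordinary silhouette carving under assumed geometry, not the authors' fan-beam X-ray method.

    # Toy sketch: recover an occupancy volume from ten 2D views by discarding
    # voxels that project outside any view's silhouette.
    import numpy as np

    n = 32
    lin = np.linspace(-1.0, 1.0, n)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")

    # Ground-truth object, used here only to synthesize the ten views: a sphere.
    truth = X**2 + Y**2 + Z**2 <= 0.5**2

    def pixel_indices(angle):
        """Orthographic projection of every voxel centre into an n x n image,
        for a viewing direction rotated by `angle` about the z axis."""
        u = np.cos(angle) * X + np.sin(angle) * Y
        ui = np.clip(np.round((u + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
        zi = np.clip(np.round((Z + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
        return ui, zi

    angles = np.linspace(0.0, np.pi, 10, endpoint=False)   # ten viewing directions
    occupied = np.ones_like(truth)

    for a in angles:
        ui, zi = pixel_indices(a)
        silhouette = np.zeros((n, n), dtype=bool)
        silhouette[ui[truth], zi[truth]] = True      # synthesized view of the object
        occupied &= silhouette[ui, zi]               # carve voxels outside the view

    print("true voxels:", int(truth.sum()), " reconstructed voxels:", int(occupied.sum()))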
Abstract:
A series of hexadentate ligands, H2Lm (m = 1−4), [1H-pyrrol-2-ylmethylene]{2-[2-(2-{[1H-pyrrol-2-ylmethylene]amino}phenoxy)ethoxy]phenyl}amine (H2L1), [1H-pyrrol-2-ylmethylene]{2-[4-(2-{[1H-pyrrol-2-ylmethylene]amino}phenoxy)butoxy]phenyl}amine (H2L2), [1H-pyrrol-2-ylmethylene][2-({2-[(2-{[1H-pyrrol-2-ylmethylene]amino}phenyl)thio]ethyl}thio)phenyl]amine (H2L3) and [1H-pyrrol-2-ylmethylene][2-({4-[(2-{[1H-pyrrol-2-ylmethylene]amino}phenyl)thio]butyl}thio)phenyl]amine (H2L4), were prepared by the condensation reaction of pyrrol-2-carboxaldehyde with {2-[2-(2-aminophenoxy)ethoxy]phenyl}amine, {2-[4-(2-aminophenoxy)butoxy]phenyl}amine, [2-({2-[(2-aminophenyl)thio]ethyl}thio)phenyl]amine and [2-({4-[(2-aminophenyl)thio]butyl}thio)phenyl]amine, respectively. Reaction of these ligands with nickel(II) and copper(II) acetate gave complexes of the form MLm (m = 1−4), and the synthesized ligands and their complexes have been characterized by a variety of physico-chemical techniques. The solid- and solution-state investigations show that the complexes are neutral. The molecular structures of NiL3 and CuL2, which have been determined by single-crystal X-ray diffraction, indicate that the NiL3 complex has a distorted octahedral coordination environment around the metal while the CuL2 complex has a seesaw coordination geometry. DFT calculations were used to analyse the electronic structure, and simulation of the electronic absorption spectrum of the CuL2 complex using TDDFT gives results that are consistent with the measured spectroscopic behavior of the complex. Cyclic voltammetry indicates that all the copper complexes are electrochemically inactive, but the nickel complexes with softer thioethers are more easily oxidized than their oxygen analogs.
Abstract:
The time at which the signal of climate change emerges from the noise of natural climate variability (Time of Emergence, ToE) is a key variable for climate predictions and risk assessments. Here we present a methodology for estimating ToE for individual climate models, and use it to make maps of ToE for surface air temperature (SAT) based on the CMIP3 global climate models. Consistent with previous studies, we show that the median ToE occurs several decades sooner in low latitudes, particularly in boreal summer, than in mid-latitudes. We also show that the median ToE in the Arctic occurs sooner in boreal winter than in boreal summer. A key new aspect of our study is that we quantify the uncertainty in ToE that arises not only from inter-model differences in the magnitude of the climate change signal, but also from large differences in the simulation of natural climate variability. The uncertainty in ToE is at least 30 years in the regions examined, and as much as 60 years in some regions. Alternative emissions scenarios lead to changes in both the median ToE (by a decade or more) and its uncertainty. The SRES B1 scenario is associated with a very large uncertainty in ToE in some regions. Our findings have important implications for climate modelling and climate policy, which we discuss.
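A minimal sketch of the Time of Emergence idea on synthetic data: ToE is taken as the first year after which a smoothed temperature series permanently exceeds a chosen multiple of the natural-variability noise; the idealized trend, noise level and 2-sigma threshold are illustrative assumptions, not the paper's calibrated procedure.

    # Sketch: Time of Emergence as the first year of permanent exceedance of
    # a noise-based threshold, computed on a synthetic SAT series.
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1900, 2101)
    noise_std = 0.25                                 # natural variability (K), e.g. from a control run
    signal = 0.012 * np.maximum(years - 1970, 0)     # idealized forced warming (K)
    sat = signal + rng.normal(0.0, noise_std, years.size)

    # 11-year running mean to damp year-to-year variability.
    smoothed = np.convolve(sat, np.ones(11) / 11, mode="same")

    exceeds = smoothed > 2.0 * noise_std
    # ToE: first year after which the threshold is never crossed back.
    toe = None
    for i in range(exceeds.size):
        if exceeds[i:].all():
            toe = years[i]
            break
    print("Time of Emergence (signal > 2 sigma):", toe)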
Abstract:
In 2003 the European Commission started using Impact Assessment (IA) as the main empirical basis for its major policy proposals. The aim was to systematically assess, ex ante, the economic, social and environmental impacts of EU policy proposals. In parallel, research proliferated in search of theoretical grounds for IAs and in an attempt to evaluate empirically the performance of the first sets of IAs produced by the European Commission. This paper combines conceptual and evaluative studies carried out in the first five years of EU IAs. It concludes that the great discrepancy between rationale and practice calls for a different theoretical focus and a greater emphasis on empirically evaluating crucial risk-economics aspects of IAs, such as the value of a statistical life, the price of carbon, and the integration of macroeconomic modelling and scenario analysis.
Abstract:
For people with motion impairments, access to and independent control of a computer can be essential. Symptoms such as tremor and spasm, however, can make the typical keyboard and mouse arrangement for computer interaction difficult or even impossible to use. This paper describes three approaches to improving computer input effectiveness for people with motion impairments. The three approaches are: (1) to increase the number of interaction channels, (2) to enhance commonly existing interaction channels, and (3) to make more effective use of all the available information in an existing input channel. Experiments in multimodal input, haptic feedback, user modelling, and cursor control are discussed in the context of the three approaches. A haptically enhanced keyboard emulator with perceptive capability is proposed, combining the approaches in a way that improves computer access for motion-impaired users.
Abstract:
The development of large-scale virtual reality and simulation systems has been driven mostly by the DIS and HLA standards community. A number of issues are coming to light about the applicability of these standards, in their present state, to the support of general multi-user VR systems. This paper pinpoints four issues that must be readdressed before large-scale virtual reality systems become accessible to a larger commercial and public domain: a reduction in the effects of network delays; scalable causal event delivery; update control; and scalable reliable communication. Each of these issues is tackled through a common theme of combining wall-clock and causal time-related entity behaviour, knowledge of network delays, and prediction of entity behaviour, which together overcome many of the effects of network delay.
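One widely used way to mask network delay in distributed VR is dead reckoning: extrapolating a remote entity's state from its last received update, its velocity and the measured delay. The sketch below shows this generic technique under assumed timestamps and units; it is not the specific scheme combining wall-clock and causal time proposed in the paper.

    # Generic dead-reckoning sketch: extrapolate a remote entity's position to
    # the local wall-clock time so the local view does not lag by the transit time.
    from dataclasses import dataclass


    @dataclass
    class EntityUpdate:
        sent_at_s: float      # sender's wall-clock timestamp
        position: tuple       # (x, y, z) at send time
        velocity: tuple       # (vx, vy, vz) at send time


    def predict_position(update: EntityUpdate, local_now_s: float) -> tuple:
        """Extrapolate position to the local wall-clock time, covering the delay."""
        dt = max(0.0, local_now_s - update.sent_at_s)   # network delay + elapsed time
        return tuple(p + v * dt for p, v in zip(update.position, update.velocity))


    # Hypothetical numbers: an update sent 120 ms ago for an entity moving at 5 m/s in x.
    update = EntityUpdate(sent_at_s=10.00, position=(100.0, 0.0, 0.0), velocity=(5.0, 0.0, 0.0))
    print(predict_position(update, local_now_s=10.12))   # -> (100.6, 0.0, 0.0)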