19 results for Imaginary and Real
in Aston University Research Archive
Abstract:
We used magnetoencephalography (MEG) to examine the nature of oscillatory brain rhythms when passively viewing both illusory and real visual contours. Three stimuli were employed: a Kanizsa triangle; a Kanizsa triangle with a real triangular contour superimposed; and a control figure in which the corner elements used to form the Kanizsa triangle were rotated to negate the formation of illusory contours. The MEG data were analysed using synthetic aperture magnetometry (SAM) to enable the spatial localisation of task-related oscillatory power changes within specific frequency bands, and the time-course of activity within given locations-of-interest was determined by calculating time-frequency plots using a Morlet wavelet transform. In contrast to earlier studies, we did not find increases in gamma activity (> 30 Hz) to illusory shapes, but instead a decrease in 10–30 Hz activity approximately 200 ms after stimulus presentation. The reduction in oscillatory activity was primarily evident within extrastriate areas, including the lateral occipital complex (LOC). Importantly, this same pattern of results was evident for each stimulus type. Our results further highlight the importance of the LOC and a network of posterior brain regions in processing visual contours, be they illusory or real in nature. The similarity of the results for both real and illusory contours, however, leads us to conclude that the broadband (< 30 Hz) decrease in power we observed is more likely to reflect general changes in visual attention than neural computations specific to processing visual contours.
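As a rough illustration of the Morlet wavelet time-frequency analysis mentioned above (not the authors' SAM pipeline), the following sketch computes a time-frequency power map for a single virtual-sensor trace; the sampling rate, frequency grid and wavelet width are illustrative assumptions.

```python
import numpy as np

def morlet_tfr(signal, fs, freqs, n_cycles=7.0):
    """Time-frequency power via convolution with complex Morlet wavelets.

    signal : 1-D array (e.g. a virtual-sensor time course)
    fs     : sampling rate in Hz
    freqs  : 1-D array of centre frequencies in Hz
    """
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)            # temporal width of the wavelet
        t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))  # unit-energy normalisation
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic)**2                  # instantaneous power at frequency f
    return power

# Illustrative use: a 1 s trace sampled at 600 Hz, analysed over a 5-45 Hz grid.
fs = 600.0
t = np.arange(0, 1.0, 1 / fs)
trace = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)
tfr = morlet_tfr(trace, fs, np.arange(5, 46, 1.0))
```

A decrease in 10-30 Hz power around 200 ms after stimulus onset would appear as a drop in the corresponding rows of `tfr` relative to a pre-stimulus baseline.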
Abstract:
In this study we investigate whether there exists a relationship between the exchange rate and the trade balance using bilateral data for Mauritius/UK trade. We also investigate whether, following a depreciation or devaluation, the trade balance initially worsens due to contractual agreements and subsequently improves when new contracts for international trade are signed. Using a variety of econometric techniques we are able to establish that there exists a long-run relationship between the trade balance and the real exchange rate. The existence of such a relationship signifies that the authorities would be able to use the exchange rate to steer the trade balance. We also find that, following a depreciation or devaluation, the trade balance initially worsens due to contractual agreements but subsequently improves when new contracts are signed. This signifies that if the authorities want to devalue their currency to improve the trade balance, the desired effect does not occur immediately but with a lag, in this particular case after approximately a year.
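As a hedged illustration of how such a long-run relationship might be tested (the abstract does not specify the authors' exact procedure), the sketch below runs an Engle-Granger cointegration test with statsmodels on simulated stand-ins for the two series; the data and variable names are assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Simulated stand-ins for the study's series (the actual Mauritius/UK data are
# not reproduced here): a random-walk real exchange rate and a trade balance
# that shares its stochastic trend plus noise.
rng = np.random.default_rng(0)
reer = np.cumsum(rng.normal(size=200))               # log real exchange rate
tb = 0.8 * reer + rng.normal(scale=0.5, size=200)    # log trade balance ratio

t_stat, p_value, crit = coint(tb, reer)              # Engle-Granger two-step test
print(f"Engle-Granger t-statistic: {t_stat:.2f}, p-value: {p_value:.3f}")
# A small p-value is consistent with a long-run (cointegrating) relationship.
```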
Abstract:
Two main questions are addressed here: is there a long-run relationship between the trade balance and the real exchange rate for bilateral trade between Mauritius and the UK? Does a J-curve exist for this bilateral trade? Our findings suggest that the real exchange rate is cointegrated with the trade balance and we find evidence of a J-curve effect. We also find bidirectional causality between the trade balance and the real exchange rate in the long run. The real exchange rate also causes the trade balance in the short run. In an out-of-sample forecasting experiment, we also find that the real exchange rate contains useful information that can explain future movements in the trade balance.
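The causality findings could, for example, be probed with a Granger causality test; the sketch below uses statsmodels on simulated stand-ins for the differenced series and is not the authors' specification (the lag length and data are illustrative).

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated stand-ins (not the study's data): exchange-rate changes that lead
# trade-balance changes by one period.
rng = np.random.default_rng(1)
d_reer = rng.normal(size=200)
d_tb = np.r_[0.0, 0.6 * d_reer[:-1]] + rng.normal(scale=0.3, size=200)

# Does the real exchange rate help predict the trade balance?
data = np.column_stack([d_tb, d_reer])   # second column tested as a cause of the first
results = grangercausalitytests(data, maxlag=4)
```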
Abstract:
In the present work, the more important parameters of the heat pump system and of solar assisted heat pump systems were analysed in a quantitative way. Ideal and real Rankine cycles applied to the heat pump, with and without subcooling and superheating, were studied using practical recommended values for their thermodynamic parameters. Comparative characteristics of refrigerants were analysed, looking for their applicability in heat pumps for domestic heating and their effect on the performance of the system. Curves for the variation of the coefficient of performance as a function of condensing and evaporating temperatures were prepared for R12. Air, water and earth as low-grade heat sources, and basic heat pump design factors for integrated heat pumps and thermal stores and for solar assisted heat pump series, parallel and dual systems, were studied. The analysis of the relative performance of these systems demonstrated that the dual system presents advantages in domestic applications. An account of energy requirements for space and water heating in the domestic sector in the U.K. is presented. The expected primary energy savings from using heat pumps to provide for the heating demand of the domestic sector were found to be of the order of 7%. The availability of solar energy under U.K. climatic conditions and the characteristics of the solar radiation were studied. Tables and graphical representations for calculating the incident solar radiation over a tilted roof were prepared and are given in section IV of this study. In order to analyse and calculate the heating load for the system, new mathematical and graphical relations were developed in section V. A domestic space and water heating system is described and studied. It comprises three main components: a solar radiation absorber (the normal roof of a house), a split heat pump and a thermal store. A mathematical study of the heat exchange characteristics in the roof structure was carried out. This permits the evaluation of the energy collected by the roof acting as a radiation absorber, and of its efficiency. An indication of the relative contributions from the three low-grade sources (ambient air, solar boost and heat loss from the house to the roof space during operation) is given in section VI, together with the average seasonal performance and the energy saving for a prototype system tested at the University of Aston. The seasonal performance was found to be 2.6 and the energy saving from using the system studied was 61%. A new store configuration to reduce wasted heat losses is also discussed in section VI.
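Computing the real-cycle coefficient of performance for R12 requires refrigerant property tables, which are not reproduced here; the sketch below shows only the ideal (Carnot) heating COP as a function of condensing and evaporating temperatures, which captures the qualitative trend behind the curves described above. The temperatures used are illustrative.

```python
import numpy as np

def carnot_cop_heating(t_evap_c, t_cond_c):
    """Ideal (Carnot) heating COP from evaporating/condensing temperatures in deg C."""
    t_evap = t_evap_c + 273.15
    t_cond = t_cond_c + 273.15
    return t_cond / (t_cond - t_evap)

# COP falls as the condensing temperature rises or the evaporating temperature drops.
for t_evap in (-10.0, 0.0, 10.0):
    cops = [carnot_cop_heating(t_evap, t_cond) for t_cond in (40.0, 50.0, 60.0)]
    print(f"t_evap = {t_evap:5.1f} C -> COP at 40/50/60 C condensing:",
          ", ".join(f"{c:.1f}" for c in cops))
```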
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
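The extended, fault-tolerant protocols themselves are not reproduced here; as a baseline, the sketch below shows only the decision logic of the standard two-phase commit protocol that they build on, with illustrative site names and no failure handling or timing behaviour. In the thesis this protocol is modelled as a Petri net embedded in the controller design rather than as sequential code.

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

class Participant:
    """A site that votes on, and then applies or discards, a coordinated action."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "ready"

    def prepare(self):                      # phase 1: vote
        return Vote.YES if self.can_commit else Vote.NO

    def commit(self):                       # phase 2: apply the action
        self.state = "committed"

    def abort(self):                        # phase 2: discard the action
        self.state = "aborted"

def two_phase_commit(participants):
    """Coordinator: commit only if every site votes YES, otherwise abort everywhere."""
    votes = [p.prepare() for p in participants]
    decision = "commit" if all(v is Vote.YES for v in votes) else "abort"
    for p in participants:
        p.commit() if decision == "commit" else p.abort()
    return decision

sites = [Participant("conveyor"), Participant("robot_arm", can_commit=False)]
print(two_phase_commit(sites), [p.state for p in sites])   # -> abort
```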
Abstract:
Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of application are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
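The thesis's Timed Petri net analysis is not reproduced here; as a toy illustration of why timing can be bounded when communication is synchronous with bounded delays, the sketch below adds up a worst-case duration for one commit round and compares it with a deadline. All delay figures are illustrative assumptions.

```python
def worst_case_2pc_ms(msg_delay_ms, vote_time_ms, decide_time_ms, apply_time_ms):
    """Worst-case duration of one two-phase commit round under bounded delays.

    Assumes synchronous, bounded-delay messaging: phase 1 is a prepare request
    and a vote reply; phase 2 is the decision message and local application.
    """
    phase1 = msg_delay_ms + vote_time_ms + msg_delay_ms      # prepare out, votes back
    phase2 = decide_time_ms + msg_delay_ms + apply_time_ms   # decision out, sites apply
    return phase1 + phase2

# Illustrative figures: can a commit decision always be reached within 50 ms?
deadline_ms = 50.0
duration = worst_case_2pc_ms(msg_delay_ms=5.0, vote_time_ms=8.0,
                             decide_time_ms=2.0, apply_time_ms=10.0)
print(duration, "ms; meets deadline:", duration <= deadline_ms)
```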
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding while the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between each formalism are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri net based techniques is the complexity associated with generating the reachability graph. This thesis addresses the problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller. This type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered and the resulting nets are analysed using concurrency sets.
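To make the reachability-graph idea concrete, here is a minimal sketch that enumerates the reachable markings of a small place/transition net by breadth-first search; the example net is illustrative, and the thesis's concurrency-set construction of a partial reachability graph is not reproduced.

```python
from collections import deque

# A small place/transition net: each transition maps a name to (consumed, produced)
# token counts per place. The net is illustrative, not the manufacturing controller.
PLACES = ("idle", "requested", "granted")
TRANSITIONS = {
    "request": ({"idle": 1}, {"requested": 1}),
    "grant":   ({"requested": 1}, {"granted": 1}),
    "release": ({"granted": 1}, {"idle": 1}),
}

def enabled(marking, consumed):
    return all(marking[p] >= n for p, n in consumed.items())

def fire(marking, consumed, produced):
    m = dict(marking)
    for p, n in consumed.items():
        m[p] -= n
    for p, n in produced.items():
        m[p] = m.get(p, 0) + n
    return tuple(m[p] for p in PLACES)

def reachability_graph(initial):
    """Breadth-first enumeration of all reachable markings and their successors."""
    graph, frontier = {}, deque([initial])
    while frontier:
        marking = frontier.popleft()
        if marking in graph:
            continue
        state = dict(zip(PLACES, marking))
        graph[marking] = []
        for name, (consumed, produced) in TRANSITIONS.items():
            if enabled(state, consumed):
                successor = fire(state, consumed, produced)
                graph[marking].append((name, successor))
                frontier.append(successor)
    return graph

print(reachability_graph((1, 0, 0)))
```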
Abstract:
A multi-chromosome GA (Multi-GA) was developed, based upon concepts from the natural world, allowing improved flexibility in a number of areas including representation, genetic operators, their parameter rates and real world multi-dimensional applications. A series of experiments were conducted, comparing the performance of the Multi-GA to a traditional GA on a number of recognised and increasingly complex test optimisation surfaces, with promising results. Further experiments demonstrated the Multi-GA's flexibility through the use of non-binary chromosome representations and its applicability to dynamic parameterisation. A number of alternative and new methods of dynamic parameterisation were investigated, in addition to a new non-binary 'Quotient crossover' mechanism. Finally, the Multi-GA was applied to two real world problems, demonstrating its ability to handle mixed-type chromosomes within an individual, the limited use of a chromosome-level fitness function, the introduction of new genetic operators for structural self-adaptation and its viability as a serious real world analysis tool. The first problem involved optimum placement of computers within a building, allowing the Multi-GA to use multiple chromosomes with different type representations and different operators in a single individual. The second problem, commonly associated with Geographical Information Systems (GIS), required a spatial analysis to locate the optimum number and distribution of retail sites over two different population grids. In applying the Multi-GA, two new genetic operators (addition and deletion) were developed and explored, resulting in the definition of a mechanism for self-modification of genetic material within the Multi-GA structure and a study of this behaviour.
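A minimal sketch of the multi-chromosome idea follows: each individual carries two chromosomes of different types (binary and real-valued), each with its own mutation operator. The objective function, operators and parameters are illustrative and are not those of the Multi-GA.

```python
import random

def random_individual():
    """Two chromosomes of different types within a single individual."""
    return {"binary": [random.randint(0, 1) for _ in range(8)],
            "real": [random.uniform(-5.0, 5.0) for _ in range(3)]}

def fitness(ind):
    # Toy objective: many 1-bits and real genes close to zero score highly.
    return sum(ind["binary"]) - sum(x * x for x in ind["real"])

def mutate(ind, p_bit=0.05, sigma=0.3):
    # Per-chromosome operators: bit-flip mutation and Gaussian perturbation.
    return {"binary": [(1 - b) if random.random() < p_bit else b for b in ind["binary"]],
            "real": [x + random.gauss(0.0, sigma) for x in ind["real"]]}

def evolve(pop_size=30, generations=50):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```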
Abstract:
This thesis focuses on the theoretical examination of exchange rate economic (operating) exposure within the context of the theory of the firm, and proposes some hedging solutions using currency options. The examination of economic exposure is based on such parameters as firms' objectives, industry structure and production cost efficiency. In particular, it examines a hypothetical exporting firm with costs in domestic currency, which faces competition from foreign firms in overseas markets and has a market share expansion objective. Within this framework, the hypothesis is established that economic exposure, portrayed in a diagram connecting export prices and real exchange rates, is asymmetric (i.e. the negative effects of a currency appreciation are greater than the positive effects of a currency depreciation). In this case, export business can be seen as a real option, given by exporting firms to overseas customers. Different scenarios about the asymmetry hypothesis can be derived for different assumptions about the determinants of economic exposure. Having established the asymmetry hypothesis, hedging against this exposure is analysed. The hypothesis is established that a currency call option should be used in hedging against asymmetric economic exposure. Further, some advanced currency options strategies are discussed, and their use in hedging several scenarios of exposure is indicated, establishing the hypothesis that the optimal options strategy is a function of the determinants of exposure. Some extensions of the theoretical analysis are examined. These include the hedging of multicurrency exposure using options, and the exposure of a purely domestic firm facing import competition. The empirical work addresses two issues: the empirical validity of the asymmetry hypothesis and the examination of the hedging effectiveness of currency options.
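As a hedged illustration of the hedging instrument discussed (not a result from the thesis), the sketch below values a European currency call with the standard Garman-Kohlhagen formula; the spot, strike, interest rates and volatility are illustrative.

```python
from math import exp, log, sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def currency_call(spot, strike, t, r_dom, r_for, vol):
    """Garman-Kohlhagen value of a European currency call option."""
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * exp(-r_for * t) * norm_cdf(d1) - strike * exp(-r_dom * t) * norm_cdf(d2)

# Illustrative numbers only: an at-the-money call hedging a one-year exposure.
print(f"{currency_call(spot=1.25, strike=1.25, t=1.0, r_dom=0.04, r_for=0.02, vol=0.10):.4f}")
```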
Abstract:
Results of a pioneering study are presented in which, for the first time, crystallization, phase separation and Marangoni instabilities occurring during the spin-coating of polymer blends are directly visualized, in real-space and real-time. The results provide exciting new insights into the process of self-assembly taking place during spin-coating, paving the way for the rational design of processing conditions to allow desired morphologies to be obtained. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and present comparisons between these methods on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
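One of the simplest solvers named above, iterated running medians, can be sketched as follows; the window size, iteration cap and test signal are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.signal import medfilt

def iterated_running_median(x, window=5, max_iter=50):
    """Iterate a running median until the signal stops changing (a fixed point).

    A simple PWC denoising scheme; window size and iteration cap are illustrative.
    """
    y = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        z = medfilt(y, kernel_size=window)
        if np.allclose(z, y):
            break
        y = z
    return y

# Noisy two-level (piecewise constant) test signal.
rng = np.random.default_rng(2)
clean = np.r_[np.zeros(100), np.ones(100)]
noisy = clean + 0.2 * rng.normal(size=clean.size)
denoised = iterated_running_median(noisy)
```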
Abstract:
Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory building on results from the image processing community to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional, that, when minimized, solves the PWC denoising problem. It shows how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using this understanding developed in part I, we introduce several novel PWC denoising methods, which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering are touched on.
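To make the mean shift ingredient concrete, the sketch below applies a value-domain mean shift to a noisy two-level signal, so that samples collapse toward the underlying levels; it does not reproduce the paper's combined mean-shift/total-variation method, and the bandwidth and data are illustrative.

```python
import numpy as np

def mean_shift_levels(x, bandwidth=0.3, n_iter=30):
    """Shift every sample toward the kernel-weighted mean of nearby sample values.

    A value-domain (global) mean shift in the spirit of mean shift clustering
    for PWC signals; bandwidth and iteration count are illustrative.
    """
    y = np.asarray(x, dtype=float).copy()
    for _ in range(n_iter):
        # Gaussian weights between every pair of current sample values.
        w = np.exp(-0.5 * ((y[:, None] - y[None, :]) / bandwidth) ** 2)
        y = (w @ y) / w.sum(axis=1)
    return y

rng = np.random.default_rng(3)
noisy = np.r_[np.zeros(80), np.ones(80)] + 0.15 * rng.normal(size=160)
levels = mean_shift_levels(noisy)   # samples collapse toward the two underlying levels
```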
Abstract:
We report statistical time-series analysis tools providing improvements in the rapid, precision extraction of discrete state dynamics from time traces of experimental observations of molecular machines. By building physical knowledge and statistical innovations into analysis tools, we provide techniques for estimating discrete state transitions buried in highly correlated molecular noise. We demonstrate the effectiveness of our approach on simulated and real examples of steplike rotation of the bacterial flagellar motor and the F1-ATPase enzyme. We show that our method can clearly identify molecular steps, periodicities and cascaded processes that are too weak for existing algorithms to detect, and can do so much faster than existing algorithms. Our techniques represent a step in the direction toward automated analysis of high-sample-rate, molecular-machine dynamics. Modular, open-source software that implements these techniques is provided.
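The statistical machinery described above is not reproduced here; as a minimal illustration of the underlying idea, the following sketch locates a single step in a noisy trace by a least-squares change-point search. The simulated trace and noise level are illustrative.

```python
import numpy as np

def best_single_step(x):
    """Return the index that best splits x into two constant levels (least squares).

    A minimal change-point detector, far simpler than the methods in the paper,
    shown only to illustrate the basic idea of extracting a discrete transition.
    """
    x = np.asarray(x, dtype=float)
    best_k, best_cost = 1, np.inf
    for k in range(1, x.size):
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(4)
trace = np.r_[np.full(120, 0.0), np.full(80, 1.0)] + 0.3 * rng.normal(size=200)
print(best_single_step(trace))   # close to 120
```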
Abstract:
Premium intraocular lenses (IOLs) aim to surgically correct astigmatism and presbyopia following cataract extraction, optimising vision and eliminating the need for cataract surgery in later years. It is usual to fully correct astigmatism and to provide visual correction for distance and near when prescribing spectacles and contact lenses; however, for correction with the lens implanted during cataract surgery, patients are required to purchase premium IOLs and pay surgery fees outside the National Health Service in the UK. The benefit of using toric IOLs was demonstrated, both in standard visual tests and in real-world situations. Orientation of toric IOLs during implantation is critical, and the benefit of using conjunctival blood vessels for alignment was shown. The issue of centration of IOLs relative to the pupil was also investigated, showing changes with the amount of dilation and on repeat dilation, which must be considered during surgery to optimise the visual performance of premium IOLs. Presbyopia is a global issue of growing importance as life expectancy increases, with no real long-term cure. Despite enhanced lifestyles, changes in diet and improved medical care, presbyopia still presents in modern life as a significant visual impairment. The onset of presbyopia was found to vary with risk factors including alcohol consumption, smoking, UV exposure and even weight, as well as age. A new technique to make measurement of accommodation more objective and robust was explored, although the need for further design modifications was identified. Due to the dysphotopsia and lack of intermediate vision associated with most multifocal IOL designs, a trifocal IOL was developed and shown to minimise these problems. The current thesis, therefore, emphasises the challenges of premium IOL surgery and the need for refinement for optimum visual outcome, in addition to outlining how premium IOLs may provide long-term and successful correction of astigmatism and presbyopia.
Abstract:
A recent novel approach to the visualisation and analysis of datasets, and one which is particularly applicable to those of a high dimension, is discussed in the context of real applications. A feed-forward neural network is utilised to effect a topographic, structure-preserving, dimension-reducing transformation of the data, with an additional facility to incorporate different degrees of associated subjective information. The properties of this transformation are illustrated on synthetic and real datasets, including the 1992 UK Research Assessment Exercise for funding in higher education. The method is compared and contrasted to established techniques for feature extraction, and related to topographic mappings, the Sammon projection and the statistical field of multidimensional scaling.
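The neural-network mapping itself is not reproduced here; as a sketch of one of the established baselines it is compared against, the following code performs classical (Torgerson) multidimensional scaling of a small illustrative dataset.

```python
import numpy as np

def classical_mds(X, n_components=2):
    """Classical (Torgerson) multidimensional scaling of the rows of X.

    One of the established techniques the abstract compares against; the
    topographic neural-network mapping is not reproduced here.
    """
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                   # centring matrix
    B = -0.5 * J @ D2 @ J                                  # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

rng = np.random.default_rng(5)
data = rng.normal(size=(50, 10))            # illustrative high-dimensional dataset
embedding = classical_mds(data)             # 50 x 2 low-dimensional configuration
```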