31 results for dynamic and static collection

in Aston University Research Archive


Relevance:

100.00%

Abstract:

We study heterogeneity among nodes in self-organizing smart camera networks, which use strategies based on social and economic knowledge to target communication activity efficiently. We compare homogeneous configurations, when cameras use the same strategy, with heterogeneous configurations, when cameras use different strategies. Our first contribution is to establish that static heterogeneity leads to new outcomes that are more efficient than those possible with homogeneity. Next, two forms of dynamic heterogeneity are investigated: nonadaptive mixed strategies and adaptive strategies, which learn online. Our second contribution is to show that mixed strategies offer Pareto efficiency consistently comparable with the most efficient static heterogeneous configurations. Since the particular configuration required for high Pareto efficiency in a scenario will not be known in advance, our third contribution is to show how decentralized online learning can lead to more efficient outcomes than the homogeneous case. In some cases, outcomes from online learning were more efficient than all other evaluated configuration types. Our fourth contribution is to show that online learning typically leads to outcomes more evenly spread over the objective space. Our results provide insight into the relationship between static, dynamic, and adaptive heterogeneity, suggesting that all have a key role in achieving efficient self-organization.
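The comparisons above rest on Pareto efficiency over multiple objectives. As a minimal illustration (not the paper's actual objectives, strategies, or data), a non-dominated filter over hypothetical (performance, negated communication cost) outcomes for a few camera-network configurations can be sketched as:

```python
def pareto_front(points):
    """Return the subset of points not dominated by any other point.

    A point dominates another if it is at least as good on every
    objective and strictly better on at least one (higher is better).
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (performance, negated communication cost) outcomes for
# four configurations; higher is better on both axes.
outcomes = [(0.9, -0.5), (0.8, -0.2), (0.7, -0.4), (0.6, -0.1)]
print(pareto_front(outcomes))  # the non-dominated configurations
```

Here the third configuration is dominated (the second is better on both axes), so only the remaining three form the efficient frontier.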

Relevance:

100.00%

Abstract:

In this paper we propose a data envelopment analysis (DEA) based method for assessing the comparative efficiencies of units operating production processes where input-output levels are inter-temporally dependent. One cause of inter-temporal dependence between input and output levels is capital stock, which influences output levels over many production periods. Such units cannot be assessed by traditional or 'static' DEA, which assumes input-output correspondences are contemporaneous in the sense that the output levels observed in a time period are the product solely of the input levels observed during that same period. The method developed in the paper overcomes the problem of inter-temporal input-output dependence by using input-output 'paths' mapped out by operating units over time as the basis of assessing them. As an application, we compare the results of the dynamic and static models for a set of UK universities. The results suggest that the dynamic model captures efficiency better than the static model. © 2003 Elsevier Inc. All rights reserved.
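For contrast with the dynamic method described above, a minimal sketch of the 'static' DEA assessment it improves on: an input-oriented CCR envelopment model solved as a linear program with SciPy's `linprog`. The units, inputs, and outputs below are hypothetical, not the UK-university data of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.

    X: (n_units, n_inputs) input matrix, Y: (n_units, n_outputs) outputs.
    Decision variables: [theta, lambda_1 .. lambda_n].
    Minimise theta subject to: composite input <= theta * x_k,
    composite output >= y_k, lambdas >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    # Inputs:  sum_j lam_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    # Outputs: -sum_j lam_j * y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A = np.vstack([A_in, A_out])
    b = np.concatenate([np.zeros(m), -Y[k]])
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Hypothetical single-input, single-output units: unit 0 uses twice
# the input of unit 1 for the same output, so its efficiency is 0.5.
X = np.array([[2.0], [1.0]])
Y = np.array([[1.0], [1.0]])
print(ccr_efficiency(X, Y, 0))
```

A dynamic, path-based assessment would instead score each unit's whole input-output trajectory; the static model here treats each period in isolation.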

Relevance:

100.00%

Abstract:

The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high efficiency cyclones for char removal and a disk and doughnut quench column combined with a wet walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling was undertaken of the reaction system enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a reactor throughput of approximately 30 kg/hr.
This is an area that should be considered for the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry wood feed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset product losses occurring due to the difficulties in collecting all available product from a large scale collection unit. The liquids collection system was highly efficient: modelling determined a liquid collection efficiency above 99% on a mass basis, and this was validated experimentally, as a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit enabled mass measurement of the condensable product exiting the unit, confirming a collection efficiency in excess of 99% on a mass basis.
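The one-second vapour residence-time rule mentioned above can be checked with a simple plug-flow estimate (residence time = vessel volume / volumetric gas flow). The ESP geometry and gas flow below are hypothetical, not the rig's actual dimensions:

```python
import math

def esp_residence_time(diameter_m, height_m, gas_flow_m3_per_h):
    """Plug-flow vapour residence time (s) in a cylindrical
    electrostatic precipitator: cylinder volume / volumetric flow."""
    volume = math.pi * (diameter_m / 2) ** 2 * height_m
    return volume / (gas_flow_m3_per_h / 3600.0)

# Hypothetical ESP geometry and total gas flow; the design rule from
# the text is simply residence_time >= 1 s.
t = esp_residence_time(0.15, 1.0, 40.0)
print(f"{t:.2f} s", "OK" if t >= 1.0 else "too short")
```

Raising the feed rate raises the inert gas and vapour flow, which is why the text notes the gas supply must be adjusted to keep this figure above one second.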

Relevance:

100.00%

Abstract:

Agent-based technology is playing an increasingly important role in today's economy. Usually a multi-agent system is needed to model an economic system such as a market system, in which heterogeneous trading agents interact with each other autonomously. Two questions often need to be answered regarding such systems: 1) How to design an interaction mechanism that facilitates efficient resource allocation among usually self-interested trading agents? 2) How to design an effective strategy for an agent to maximise its economic returns under some specific market mechanism? For automated market systems, the auction is the most popular mechanism for solving resource allocation problems among participants. However, auctions come in hundreds of different formats, some of which are better than others in terms of not only allocative efficiency but also other properties, e.g., whether they generate high revenue for the auctioneer, or whether they induce stable behaviour of the bidders. In addition, different strategies result in very different performance under the same auction rules. Against this background, we investigate auction mechanism and strategy designs for agent-based economics. The international Trading Agent Competition (TAC) Ad Auction (AA) competition provides a very useful platform to develop and test agent strategies in the Generalised Second Price auction (GSP). AstonTAC, the runner-up of TAC AA 2009, is a successful advertiser agent designed for GSP-based keyword auctions. In particular, AstonTAC generates adaptive bid prices according to the Market-based Value Per Click and selects the set of keyword queries with the highest expected profit to bid on, so as to maximise its expected profit under the limit of conversion capacity. Through evaluation experiments, we show that AstonTAC performs well and stably not only in the competition but also across a broad range of environments.
The TAC CAT tournament provides an environment for investigating the optimal design of mechanisms for double auction markets. AstonCAT-Plus is the post-tournament version of the specialist developed for CAT 2010. In our experiments, AstonCAT-Plus not only outperforms most specialist agents designed by other institutions but also achieves high allocative efficiencies, transaction success rates and average trader profits. Moreover, we reveal some insights into the CAT game: 1) successful markets should maintain a stable and high market share of intra-marginal traders; 2) a specialist's performance depends on the distribution of trading strategies. However, typical double auction models assume trading agents have a fixed trading direction of either buy or sell. With this limitation they cannot directly reflect the fact that traders in financial markets (the most popular application of the double auction) decide their trading directions dynamically. To address this issue, we introduce the Bi-directional Double Auction (BDA) market, which is populated by two-way traders. Experiments are conducted under both dynamic and static settings of the continuous BDA market. We find that the allocative efficiency of a continuous BDA market comes mainly from the rational selection of trading directions. Furthermore, we introduce a high-performance Kernel trading strategy for the BDA market, which uses a kernel probability density estimator built on historical transaction data to decide optimal order prices. The Kernel strategy outperforms some popular intelligent double auction trading strategies, including ZIP, GD and RE, in the continuous BDA market, making the highest profit in static games and obtaining the best wealth in dynamic games.
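The Kernel strategy's core idea, choosing order prices from a density estimate built on historical transactions, can be sketched as follows. The bandwidth, price history, and candidate grid are illustrative, not AstonCAT-Plus's or the thesis's actual parameters:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a Gaussian kernel probability density estimator
    built on the given transaction-price samples."""
    n = len(samples)
    def density(x):
        return sum(
            math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
            for s in samples
        ) / (n * bandwidth * math.sqrt(2 * math.pi))
    return density

def kernel_order_price(history, candidates, bandwidth=2.0):
    """Pick the candidate order price where historical
    transactions are densest (most likely to trade)."""
    density = gaussian_kde(history, bandwidth)
    return max(candidates, key=density)

# Hypothetical transaction history clustered around 100.
history = [96, 98, 99, 100, 101, 102, 104]
print(kernel_order_price(history, candidates=range(90, 111)))
```

A fuller strategy would weight the density by the profit margin at each candidate price; this sketch shows only the density-estimation step.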

Relevance:

100.00%

Abstract:

How speech is separated perceptually from other speech remains poorly understood. Recent research indicates that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This study explored the effects of manipulating the depth and pattern of that variation. Three formants (F1+F2+F3) constituting synthetic analogues of natural sentences were distributed across the 2 ears, together with a competitor for F2 (F2C) that listeners must reject to optimize recognition (left = F1+F2C; right = F2+F3). The frequency contours of F1 − F3 were each scaled to 50% of their natural depth, with little effect on intelligibility. Competitors were created either by inverting the frequency contour of F2 about its geometric mean (a plausibly speech-like pattern) or using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Adding a competitor typically reduced intelligibility; this reduction depended on the depth of F2C variation, being greatest for 100%-depth, intermediate for 50%-depth, and least for 0%-depth (constant) F2Cs. This suggests that competitor impact depends on overall depth of frequency variation, not depth relative to that for the target formants. The absence of tuning (i.e., no minimum in intelligibility for the 50% case) suggests that the ability to reject an extraneous formant does not depend on similarity in the depth of formant-frequency variation. Furthermore, triangle-wave competitors were as effective as their more speech-like counterparts, suggesting that the selection of formants from the ensemble also does not depend on speech-specific constraints.
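The two contour manipulations described, inversion about the geometric mean and depth scaling, are both simple operations on a log-frequency axis. A sketch with a hypothetical F2 contour (the frequencies below are illustrative, not the study's stimuli):

```python
import math

def geometric_mean(contour):
    """Geometric mean of a formant-frequency contour (Hz)."""
    return math.exp(sum(math.log(f) for f in contour) / len(contour))

def invert_about_gm(contour):
    """Mirror a contour about its geometric mean: equal-but-opposite
    excursions on a log-frequency axis (f -> gm^2 / f)."""
    gm = geometric_mean(contour)
    return [gm * gm / f for f in contour]

def scale_depth(contour, depth):
    """Scale excursions about the geometric mean: 1.0 = natural depth,
    0.5 = half depth, 0.0 = constant contour at the geometric mean."""
    gm = geometric_mean(contour)
    return [gm * (f / gm) ** depth for f in contour]

f2 = [1200.0, 1500.0, 1875.0]   # hypothetical F2 contour (Hz)
print([round(f) for f in invert_about_gm(f2)])
print([round(f) for f in scale_depth(f2, 0.0)])  # 0%-depth (constant) case
```

The 0%-depth competitor in the study corresponds to `scale_depth(contour, 0.0)`, a constant formant at the geometric mean.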

Relevance:

100.00%

Abstract:

This thesis describes research carried out on the development of a novel hardwired tactile sensing system tailored for a next generation of surgical robotic and clinical devices, namely a steerable endoscope with tactile feedback and a surface plate for assessing patient posture and balance. Two case studies are examined. The first is a one-dimensional sensor for the steerable endoscope retrieving shape and 'touch' information. The second is a two-dimensional surface which interprets the three-dimensional motion of a contacting moving load. This approach can be used to retrieve information from a distributive tactile sensing surface of a different configuration, and can interpret both dynamic and static disturbances. It has the potential to discriminate contact and palpation in minimally invasive surgery (MIS) tools, and posture and balance in patients. The hardwired technology uses an embedded system based on a Field Programmable Gate Array (FPGA) as the platform to perform the sensory signal processing in real time. High-speed, robust operation is an advantage of this system, enabling versatile applications involving dynamic real-time interpretation, as described in this research. The sensory signal processing uses neural networks to derive information from the input patterns produced by the contacting surface. Three neural network architectures, namely single, multiple and cascaded, were introduced in an attempt to find the optimum solution for discriminating the contacting outputs. These architectures were modelled and implemented on the FPGA. With the recent introduction of modern digital design flows and synthesis tools that essentially take a high-level behavioural specification for a design, fast prototyping of the neural network functions can be achieved easily. The thesis outlines the challenges of these implementations and the verification of their performance.
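As a minimal illustration of the sensory-processing idea (not the thesis's actual architecture, weights, or fixed-point arithmetic), the forward pass of a small single-hidden-layer network mapping a vector of sensor readings to an output estimate, the kind of fixed-function block that would be synthesised onto the FPGA:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer network: tanh hidden
    layer followed by a linear output layer. x is the vector of
    sensor readings; weights and biases are plain Python lists."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# Hypothetical 2-input, 2-hidden, 1-output network with toy weights.
W1 = [[1.0, 0.0], [0.0, 1.0]]
b1 = [0.0, 0.0]
W2 = [[1.0, 1.0]]
b2 = [0.0]
print(mlp_forward([0.5, 0.5], W1, b1, W2, b2))
```

A cascaded architecture, as investigated in the thesis, would chain several such blocks, with the output of one feeding the input of the next.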

Relevance:

100.00%

Abstract:

Objectives and Methods: Contact angle, as a representative measure of surface wettability, is often employed to interpret contact lens surface properties. The literature is often contradictory and can lead to confusion. This literature review is part of a series regarding the analysis of hydrogel contact lenses using contact angle techniques. Here we present an overview of contact angle terminology, methodology, and analysis. Having discussed this background material, subsequent parts of the series will discuss the analysis of contact lens contact angles and evaluate differences in published laboratory results. Results: The concepts of contact angle, wettability and wetting are presented as an introduction. Contact angle hysteresis is outlined, highlighting the advantages of dynamic analytical techniques over static methods. The surface free energy of a material illustrates how contact angle analysis is capable of providing supplementary surface characterization. Although single values are able to distinguish individual material differences, surface free energy and dynamic methods provide an improved understanding of material behavior. The frequently used sessile drop, captive bubble, and Wilhelmy plate techniques are discussed. Their use as both dynamic and static methods, along with the advantages and disadvantages of each technique, is explained. Conclusions: No single contact angle technique fully characterizes the wettability of a material surface, and the application of complementary methods allows increased characterization. At present, there is no ISO standard method designed for soft materials. It is important that each contact angle technique has a standard protocol, as small protocol differences between laboratories often contribute to a variety of published data that are not easily comparable. © 2013 Contact Lens Association of Ophthalmologists.
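As an example of how contact angles yield the supplementary surface free energy characterization mentioned above, a sketch of the standard Owens-Wendt two-liquid calculation. The measured angles below are hypothetical; the dispersive/polar component values for the probe liquids are commonly tabulated literature figures:

```python
import math

def owens_wendt(theta1_deg, liquid1, theta2_deg, liquid2):
    """Estimate solid surface free energy (dispersive, polar, total,
    mN/m) from contact angles of two probe liquids via Owens-Wendt:
    gamma_l*(1+cos(theta))/2 = sqrt(gd_s*gd_l) + sqrt(gp_s*gp_l).
    Each liquid is a (gamma_dispersive, gamma_polar) pair."""
    def row(theta_deg, liquid):
        gd, gp = liquid
        rhs = (gd + gp) * (1 + math.cos(math.radians(theta_deg))) / 2
        return math.sqrt(gd), math.sqrt(gp), rhs
    a1, b1, c1 = row(theta1_deg, liquid1)
    a2, b2, c2 = row(theta2_deg, liquid2)
    det = a1 * b2 - a2 * b1                 # solve the 2x2 linear system
    sqrt_d = (c1 * b2 - c2 * b1) / det      # sqrt of dispersive component
    sqrt_p = (a1 * c2 - a2 * c1) / det      # sqrt of polar component
    return sqrt_d ** 2, sqrt_p ** 2, sqrt_d ** 2 + sqrt_p ** 2

# Tabulated (dispersive, polar) components in mN/m for two probe
# liquids, with hypothetical measured static contact angles.
WATER = (21.8, 51.0)
DIIODOMETHANE = (50.8, 0.0)
print(owens_wendt(70.0, WATER, 40.0, DIIODOMETHANE))
```

Dynamic (advancing/receding) angles would feed the same equations; the hysteresis between them is what single static values miss.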

Relevance:

100.00%

Abstract:

Motion is an important aspect of face perception that has been largely neglected to date. Many of the established findings are based on studies that use static facial images, which do not reflect the unique temporal dynamics available from seeing a moving face. In the present thesis a set of naturalistic dynamic facial emotional expressions was purposely created and used to investigate the neural structures involved in the perception of dynamic facial expressions of emotion, with both functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG). Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend the distributed neural system for face perception (Haxby et al., 2000). Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as inferior occipital gyri and superior temporal sulci, along with coupling between superior temporal sulci and amygdalae, as well as with inferior frontal gyri. MEG and Synthetic Aperture Magnetometry (SAM) were used to examine the spatiotemporal profile of neurophysiological activity within this dynamic face perception network. SAM analysis revealed a number of regions showing differential activation to dynamic versus static faces in the distributed face network, characterised by decreases in cortical oscillatory power in the beta band, which were spatially coincident with those regions that were previously identified with fMRI. These findings support the presence of a distributed network of cortical regions that mediate the perception of dynamic facial expressions, with the fMRI data providing information on the spatial co-ordinates paralleled by the MEG data, which indicate the temporal dynamics within this network.
This integrated multimodal approach offers both excellent spatial and temporal resolution, thereby providing an opportunity to explore dynamic brain activity and connectivity during face processing.

Relevance:

100.00%

Abstract:

This paper builds on Granovetter's distinction between strong and weak ties [Granovetter, M. S. 1973. The strength of weak ties. Amer. J. Sociol. 78(6) 1360–1380] in order to respond to recent calls for a more dynamic and processual understanding of networks. The concepts of potential and latent tie are deductively identified, and their implications for understanding how and why networks emerge, evolve, and change are explored. A longitudinal empirical study conducted with companies operating in the European motorsport industry reveals that firms take strategic actions to search for potential ties and reactivate latent ties in order to solve problems of network redundancy and overload. Examples are given, and their characteristics are examined to provide theoretical elaboration of the relationship between the types of tie and network evolution. These conceptual and empirical insights move understanding of the managerial challenge of building effective networks beyond static structural contingency models of optimal network forms to highlight the processes and capabilities of dynamic relationship building and network development. In so doing, this paper highlights the interrelationship between search and redundancy and the scope for strategic action alongside path dependence and structural influences on network processes.

Relevance:

100.00%

Abstract:

The research presented in this thesis was developed as part of DIBANET, an EC funded project aiming to develop an energetically self-sustainable process for the production of diesel miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Mainly consisting of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. Gas yield for AHR gasification was 78 wt%, with 8 wt% tar yields and conversion of volatiles close to 100%. 90 wt% of the AHR was transformed into gas by combustion, with volatile conversions above 90%. Gasification with 5 vol% O2 / 95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m³). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m³, below the typical value for steam gasification due to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation.
The activation energy (EA) and pre-exponential factor (k0, in s⁻¹) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. Activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. Combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction model for pyrolysis and the three-dimensional diffusion model for combustion.
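The Arrhenius fitting step reduces to a straight-line fit of ln k against 1/T, since k = k0·exp(-EA/(R·T)) gives ln k = ln k0 - EA/(R·T). A sketch using synthetic data generated from the pyrolysis parameters reported above (the temperature points are hypothetical, chosen only to show the fit recovering the inputs):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(temperatures_K, rate_constants):
    """Least-squares fit of ln k = ln k0 - EA/(R*T).
    Returns (EA in J/mol, ln k0)."""
    xs = [1.0 / T for T in temperatures_K]
    ys = [math.log(k) for k in rate_constants]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, intercept          # slope = -EA/R

# Synthetic rate constants generated from EA = 80 kJ/mol, ln k0 = 14
# (the pyrolysis values above) at hypothetical temperatures.
EA, lnk0 = 80e3, 14.0
Ts = [700.0, 750.0, 800.0, 850.0]
ks = [math.exp(lnk0 - EA / (R * T)) for T in Ts]
fit_EA, fit_lnk0 = arrhenius_fit(Ts, ks)
print(round(fit_EA / 1000), round(fit_lnk0))  # recovers 80 and 14
```

With real TGA or LEFR data the points scatter about the line, and the quality of the linear fit indicates how well a single-step Arrhenius model describes the decomposition.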

Relevance:

100.00%

Abstract:

In the global Internet economy, e-business, as a driving force that redefines business models and operational processes, is posing new challenges for traditional organizational structures and information system (IS) architectures. These promise a renewed period of innovative thinking in e-business strategies, with new enterprise paradigms and different Enterprise Resource Planning (ERP) systems. In this chapter, the authors consider and investigate how dynamic e-business strategies, as the next evolutionary generation of e-business, can be realized through newly diverse enterprise structures supported by ERP, ERPII and so-called "ERPIII" solutions relying on the virtual value chain concept. Exploratory inductive multi-case studies in the manufacturing and printing industries have been conducted. Additionally, the chapter proposes a conceptual framework to discuss the adoption and governance of ERP systems within the context of three enterprise forms for enabling dynamic and collaborative e-business strategies, and particularly demonstrates how an enterprise can dynamically migrate from its current position to the pattern it desires to occupy in the future - a migration that must and will include dynamic e-business as a core competency, but that also relies heavily on an ERP-based backbone and other robust technological platforms and applications.

Relevance:

100.00%

Abstract:

Purpose: To describe the methodology, sampling strategy and preliminary results for the Aston Eye Study (AES), a cross-sectional study to determine the prevalence of refractive error and its associated ocular biometry in a large multi-racial sample of school children from the metropolitan area of Birmingham, England. Methods: A target sample of 1700 children aged 6–7 years and 1200 aged 12–13 years is being selected from Birmingham schools selected randomly with stratification by area deprivation index (a measure of socio-economic status). Schools with pupils predominantly (>70%) from a single race are excluded. Sample size calculations account for the likely participation rate and the clustering of individuals within schools. Procedures involve standardised protocols to allow for comparison with international population-based data. Visual acuity, non-contact ocular biometry (axial length, corneal radius of curvature and anterior chamber depth) and cycloplegic autorefraction are measured in both eyes. Distance and near oculomotor balance, height and weight are also assessed. Questionnaires for parents and older children will allow the influence of environmental factors on refractive error to be examined. Results: Recruitment and data collection are ongoing (currently N = 655). Preliminary cross-sectional data on 213 South Asian, 44 black African Caribbean and 70 white European children aged 6–7 years, and 114 South Asian, 40 black African Caribbean and 115 white European children aged 12–13 years, found a myopia prevalence of 9.4% and 29.4% for the two age groups respectively. A more negative mean spherical equivalent refraction (SER) was observed in older children (-0.21 D vs +0.87 D). Ethnic differences in myopia prevalence are emerging, with South Asian children having higher levels than white European children (36.8% vs 18.6% for the older children).
Axial length, corneal radius of curvature and anterior chamber depth were normally distributed, while SER was leptokurtic (p < 0.001) with a slight negative skew. Conclusions: The AES will allow ethnic differences in the ocular characteristics of children from a large metropolitan area of the UK to be examined. The findings to date indicate the emergence of higher levels of myopia by early adolescence in second and third generation British South Asians, compared to white European children. The continuation of the AES will allow the early determinants of these ethnic differences to be studied.

Relevance:

100.00%

Abstract:

Operations management (OM) represents a dynamic and significant field of scholarly research and writing. Changes in the business environment over the last fifteen years have driven rapid developments in OM practice, so that the production of goods and provision of services are now market focused rather than technology led. This collection defines the nature and meaning of operations management. It draws together leading-edge papers that reveal the state of operations management today and classic articles that chart the development of practice to the present. These three volumes assemble the work of internationally renowned scholars and look at the following key areas:
Volume One: Operations Management Concepts and Strategy
Volume Two: The Design of Operations Systems
Volume Three: Operations Planning and Control

Relevance:

100.00%

Abstract:

A two-tier study is presented in this thesis. The first tier involves the commissioning of an extant but, at the time, unproven bubbling fluidised bed fast pyrolysis unit. The unit was designed for an intended nominal throughput of 300 g/h of dry biomass and came complete with solids separation, pyrolysis vapour quenching and oil collection systems. Modifications were carried out on various sections of the system, including the reactor heating, quenching and liquid collection systems, allowing fast pyrolysis experiments to be carried out at the appropriate temperatures. Bio-oil was generated using conventional biomass feedstocks including willow, beech wood, pine and miscanthus. Results from this phase of the research showed, however, that although the rig was capable of processing biomass to bio-oil, it was characterised by low mass balance closures and recurrent operational problems. The problems included blockages, poor reactor hydrodynamics and reduced organic liquid yields. The less than optimal performance of individual sections, particularly the feed and reactor systems, culminated in a poor overall performance of the system. The second phase of this research involved the redesign of two key components of the unit. An alternative feeding system was commissioned, comprising an off-the-shelf gravimetric system for accurate metering and efficient delivery of biomass. Similarly, a new bubbling fluidised bed reactor with an intended nominal throughput of 500 g/h of biomass was designed and constructed. The design leveraged experience from the initial commissioning phase together with proven kinetic and hydrodynamic studies. These units were commissioned as part of the optimisation phase of the study.
Also as part of this study, two varieties each of two previously unreported feedstocks, namely Jatropha curcas and Moringa oleifera oil seed press cakes, were characterised to determine their suitability as feedstocks for liquid fuel production via fast pyrolysis. The feedstocks were subsequently used for the production of pyrolysis liquids, whose quality was then investigated via a number of analytical techniques. The oils from the press cakes showed high levels of stability and reduced pH values. The improvements to the design of the fast pyrolysis unit led to higher mass balance closures and increased organic liquid yields. The maximum liquid yield obtained from the press cakes was from African Jatropha press cake, at 66 wt% on a dry basis.
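The mass balance closures reported throughout both commissioning phases are simply the summed product yields on a dry-feed basis. A minimal sketch; the run data below are hypothetical, not measurements from the rig:

```python
def mass_balance_closure(feed_g, liquid_g, char_g, gas_g):
    """Overall mass balance closure (%) for a fast pyrolysis run,
    plus each product yield on a dry-feed basis (wt%)."""
    yields = {product: 100.0 * mass / feed_g
              for product, mass in [("liquid", liquid_g),
                                    ("char", char_g),
                                    ("gas", gas_g)]}
    return sum(yields.values()), yields

# Hypothetical run: 500 g of dry press cake fed, products weighed out.
closure, yields = mass_balance_closure(500.0, 330.0, 90.0, 60.0)
print(f"closure = {closure:.0f}%", yields)
```

A closure well below 100% signals unaccounted losses (e.g. product held up in the collection train), which is why the redesign above targeted higher closures.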