872 results for Agent-based methodologies
Abstract:
Introduction: Lower back pain treatment and compensation cost more than $80 billion overall in the US. 75% of back pain is due to disc degeneration in the lumbar region of the spine. Current treatment comprises painkillers and bed rest or, as a more radical solution, interbody cage fusion. In the early stages of disc degeneration the patient would benefit from the addition of an injectable gel which polymerises in situ to support the degenerated nucleus pulposus. This requires a material that is an analogue of the natural tissue, capable of restoring the biomechanical properties of the natural disc. The nucleus pulposus of the intervertebral disc is an example of a natural proteoglycan, consisting of a protein core with negatively charged keratan sulphate and chondroitin sulphate chains attached. As a result of the high fixed charge density of the proteoglycan, the matrix exerts an osmotic swelling pressure, drawing in sufficient water to support the spinal system. Materials and Methods: NaAMPS (sodium 2-acrylamido-2-methylpropane sulphonate) and KSPA (potassium 3-sulphopropyl acrylate) were selected as monomers, the sulphonate group being used to mimic the natural sulphate group. These are used in dermal applications involving chronic wounds and have acceptably low cytotoxicity. Other hydrophilic carboxyl, amide and hydroxyl monomers such as 2-hydroxyethyl acrylamide, β-carboxyethyl acrylate, acryloyl morpholine, and polyethylene glycol (meth)acrylate were used as diluents, together with polyethylene glycol di(meth)acrylate and hydrophilic multifunctional macromers as cross-linkers. Redox initiation was the chosen method of polymerisation and a range of initiators was investigated. Components were packaged in two solutions, each containing one half of the redox pair. A dual-syringe method of injection into the cavity was used; the required time for polymerisation is approximately 3-7 minutes. The final materials were tested using a Bohlin CVO rheometer cycling from 0.5-25 Hz at 37°C to measure the modulus. An in-house compression testing method was also developed: using dialysis tubing to mimic the cavity, the gels were swollen in solutions of various osmolarity and compressed to ~20%. The pre-gel has also been injected into sheep spinal segments for mechanical compression testing to demonstrate the restoration of properties upon use of the gel. Results and Discussion: Two systems resulted from similar monomer compositions but different initiation and crosslinking agents. NaAMPS and KSPA were used together at a ratio of ~1:1 in both systems with 0.25-2% crosslinking agent, diacrylate or methacrylate. The two initiation systems were ascorbic acid/Oxone and N,N,N′,N′-tetramethylethylenediamine (TEMED)/potassium persulphate; these produced gelation within 3-7 and 3-5 minutes respectively. The two-component system was shown to be stable for approximately one month after mixing when stored in the dark and refrigerated at 1-4°C. The gelation was carried out at 37°C. Literature values for the natural disc give elastic constants ranging from 3-8 kPa. The properties of the polymer can be tailored by altering crosslink density and monomer composition and are able to match those of the natural disc. It is possible to incorporate a radio-opaque agent (Histodenz) to enable X-ray visualisation during and after injection. At an inclusion level of 5% the gel is clearly visible, and polymerisation and mechanical properties are not altered. Conclusion: A two-pack injection system which will polymerise in situ and which can incorporate a radio-opaque agent has been developed.
This will reinforce the damaged nucleus pulposus in degenerative disc disease, restoring adequate hydration and thus biomechanical properties. Tests on sheep spine segments are currently being carried out to demonstrate that a disc containing the gel has properties similar to those of an intact disc, in contrast to a disc with a damaged nucleus.
Abstract:
This study highlights the potential of analytical methods based on Knowledge Discovery in Databases (KDD) methodologies to aid both the resolution of unstructured marketing/business problems and the process of scholarly knowledge discovery. The authors present and discuss the application of KDD in these situations before presenting an analytical method based on fuzzy logic and evolutionary algorithms, developed to analyze marketing databases and uncover relationships among variables. A detailed implementation on a pre-existing data set illustrates the method. © 2012 Published by Elsevier Inc.
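To make the combination of fuzzy logic and evolutionary search more concrete, the Python sketch below evolves the parameters of a single fuzzy membership function so that it best tracks an outcome variable in a toy marketing data set. The data, the triangular membership form, the correlation-based fitness and all names are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented marketing data: customers spending around 60 tend to respond to a campaign.
spend = rng.uniform(0, 100, 500)
responded = (np.abs(spend - 60) < 15).astype(float)
responded = np.where(rng.random(500) < 0.1, 1 - responded, responded)   # 10% label noise

def membership(x, centre, width):
    """Triangular fuzzy membership: 1 at the centre, falling to 0 beyond +/- width."""
    return np.clip(1 - np.abs(x - centre) / max(width, 1e-6), 0, 1)

def fitness(params):
    mu = membership(spend, *params)
    if mu.std() < 1e-9:
        return 0.0
    return abs(np.corrcoef(mu, responded)[0, 1])   # how well the fuzzy set tracks response

# Simple evolutionary loop over (centre, width): keep the best candidates, mutate them.
population = [np.array([rng.uniform(0, 100), rng.uniform(5, 50)]) for _ in range(30)]
for _ in range(60):
    parents = sorted(population, key=fitness, reverse=True)[:10]
    children = [p + rng.normal(0.0, [3.0, 2.0]) for p in parents for _ in range(2)]
    population = parents + children

best = max(population, key=fitness)
print(f"discovered fuzzy set: spend around {best[0]:.1f} (width {best[1]:.1f}), "
      f"fitness {fitness(best):.2f}")
```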
Abstract:
Decentralised supply chain formation involves determining the set of producers within a network able to supply goods to one or more consumers at the lowest cost. This problem is frequently tackled using auctions and negotiations. In this paper we show how it can be cast as an optimisation of a pairwise cost function. Optimising this class of functions is NP-hard, but good approximations to the global minimum can be obtained using Loopy Belief Propagation (LBP). Here we detail an LBP-based approach to the supply chain formation problem, involving decentralised message-passing between potential participants. Our approach is evaluated against a well-known double-auction method and an optimal centralised technique, showing several improvements: it obtains better solutions for most networks that admit a competitive equilibrium (competitive equilibrium, as defined in [3], is used as a means of classifying results on certain networks to allow for minor inefficiencies in the auction protocol and agent bidding strategies), while also solving problems where no competitive equilibrium exists, for which the double-auction method frequently produces inefficient solutions.
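The Python sketch below illustrates the kind of decentralised min-sum message-passing (loopy belief propagation over a pairwise cost function) the abstract refers to, on a deliberately tiny invented network of one producer and one consumer; the cost values and the network itself are illustrative assumptions, not the paper's evaluation networks.

```python
import numpy as np

# Invented toy network: one producer (idle/active) and one consumer
# (not supplied / buys from the producer). Unary terms are local costs,
# the pairwise term forbids buying from an idle producer.
unary = {
    "producer": np.array([0.0, 2.0]),   # state 1 (active) costs 2
    "consumer": np.array([8.0, 1.0]),   # state 0 (not supplied) costs 8
}
pairwise = {
    ("producer", "consumer"): np.array([[0.0, 1e6],   # rows: producer, cols: consumer
                                        [0.0, 0.0]]),
}

def edge_cost(edge, sender, x_sender, x_receiver):
    """Pairwise cost looked up with the right orientation of the edge."""
    cost = pairwise[edge]
    return cost[x_sender, x_receiver] if edge[0] == sender else cost[x_receiver, x_sender]

def min_sum_lbp(iters=20):
    # One message per directed edge, defined over the receiver's states.
    msgs = {(i, j): np.zeros(len(unary[j]))
            for e in pairwise for i, j in (e, e[::-1])}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            edge = (i, j) if (i, j) in pairwise else (j, i)
            incoming = sum(msgs[(k, r)] for (k, r) in msgs if r == i and k != j)
            if not isinstance(incoming, np.ndarray):          # no other neighbours
                incoming = np.zeros(len(unary[i]))
            out = np.array([min(unary[i][xi] + edge_cost(edge, i, xi, xj) + incoming[xi]
                                for xi in range(len(unary[i])))
                            for xj in range(len(unary[j]))])
            new[(i, j)] = out - out.min()                     # normalise messages
        msgs = new
    beliefs = {v: unary[v] + sum(msgs[(k, r)] for (k, r) in msgs if r == v) for v in unary}
    return {v: int(np.argmin(b)) for v, b in beliefs.items()}

print(min_sum_lbp())   # lowest-cost joint assignment: producer active, consumer buys
```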
Abstract:
Automated negotiation is widely applied in various domains. However, the development of such systems is a complex knowledge and software engineering task, so a supporting methodology would be helpful. Unfortunately, none of the existing methodologies offers sufficiently detailed support for such system development. To remove this limitation, this paper develops a new methodology made up of: (1) a generic framework (architectural pattern) for the main task, and (2) a library of modular and reusable design patterns (templates) for subtasks. Thus, it is much easier to build a negotiating agent by assembling these standardised components than by reinventing the wheel each time. Moreover, since these patterns are identified from a wide variety of existing negotiating agents (especially high-impact ones), they can also improve the quality of the final systems developed. In addition, our methodology reveals what types of domain knowledge need to be input into the negotiating agents. This in turn provides a basis for developing techniques to acquire the domain knowledge from human users. This is important because negotiating agents act faithfully on behalf of their human users and thus the relevant domain knowledge must be acquired from them. Finally, our methodology is validated with one high-impact system.
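As an illustration of what an architectural pattern with pluggable subtask templates might look like, the Python sketch below defines a generic negotiating-agent skeleton whose proposal-evaluation and concession components can be swapped. The class names, the linear utility and the time-dependent concession strategy are illustrative assumptions, not the paper's actual pattern library.

```python
from abc import ABC, abstractmethod

class ProposalEvaluator(ABC):
    @abstractmethod
    def utility(self, offer: float) -> float: ...

class LinearUtility(ProposalEvaluator):
    def __init__(self, reservation: float, target: float):
        self.reservation, self.target = reservation, target
    def utility(self, offer: float) -> float:
        # 0 at the reservation value, 1 at the target value, linear in between
        return (offer - self.reservation) / (self.target - self.reservation)

class ConcessionStrategy(ABC):
    @abstractmethod
    def next_offer(self, round_no: int, rounds: int) -> float: ...

class TimeDependentConcession(ConcessionStrategy):
    def __init__(self, start: float, reservation: float):
        self.start, self.reservation = start, reservation
    def next_offer(self, round_no: int, rounds: int) -> float:
        # concede linearly from the opening offer towards the reservation value
        return self.start + (self.reservation - self.start) * round_no / rounds

class NegotiatingAgent:
    """Generic framework: domain knowledge enters only via the plugged-in components."""
    def __init__(self, evaluator: ProposalEvaluator, strategy: ConcessionStrategy,
                 accept_threshold: float = 0.6):
        self.evaluator, self.strategy, self.accept_threshold = evaluator, strategy, accept_threshold
    def respond(self, opponent_offer: float, round_no: int, rounds: int):
        if self.evaluator.utility(opponent_offer) >= self.accept_threshold:
            return ("accept", opponent_offer)
        return ("counter", self.strategy.next_offer(round_no, rounds))

# Usage: a seller with reservation 80 and target 120, conceding over 10 rounds.
seller = NegotiatingAgent(LinearUtility(reservation=80, target=120),
                          TimeDependentConcession(start=120, reservation=80))
print(seller.respond(opponent_offer=95, round_no=3, rounds=10))
```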
Abstract:
Few works address methodological issues of how to conduct strategy-as-practice research, and even fewer focus on how to analyse the subsequent data in ways that illuminate strategy as an everyday, social practice. We address this gap by proposing a quantitative method for analysing observational data, which can complement more traditional qualitative methodologies. We propose that rigorous but context-sensitive coding of transcripts can render everyday practice analysable statistically. Such statistical analysis provides a means for analytically representing patterns and shifts within the mundane, repetitive elements through which practice is accomplished. We call this approach the Event Database (EDB); it consists of five basic coding categories that help us capture the stream of practice. Indexing codes help to index or categorise the data, in order to give context and offer some basic information about the event under discussion. Indexing codes are descriptive codes, which allow us to catalogue and classify events according to their assigned characteristics. Content codes concern the qualitative nature of the event; this is the essence of the event. It is a description that helps to inform judgements about the phenomenon. Nature codes help us distinguish between discursive and tangible events. We include this code to acknowledge that some events differ qualitatively from other events. Type codes are abstracted from the data in order to help us classify events based on their description or nature. This involves significantly more judgement than the index codes but is consequently also more meaningful. Dynamics codes help us capture some of the movement or fluidity of events. This category has been included to let us capture the flow of activity over time.
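A minimal sketch of how an EDB record carrying the five coding categories could be represented is given below, assuming a simple Python data structure; the field names and the example events are invented for illustration and are not the authors' coding scheme.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class EDBEvent:
    index: dict        # indexing codes: descriptive context (who, when, which meeting)
    content: str       # content code: the qualitative essence of the event
    nature: str        # nature code: "discursive" or "tangible"
    event_type: str    # type code: analyst-abstracted classification of the event
    dynamics: str      # dynamics code: movement/flow of activity over time

stream_of_practice = [
    EDBEvent(index={"meeting": 1, "speaker": "CEO"},
             content="Proposes revisiting the pricing strategy",
             nature="discursive", event_type="strategic proposal", dynamics="initiated"),
    EDBEvent(index={"meeting": 1, "speaker": "CFO"},
             content="Circulates a cost breakdown document",
             nature="tangible", event_type="information provision", dynamics="continued"),
]

# Once events are coded, the stream of practice can be analysed statistically,
# e.g. frequency counts of event types across meetings.
print(Counter(event.event_type for event in stream_of_practice))
```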
Abstract:
Introduction - In recent years much progress has been made in the development of tools for systems biology to study the levels of mRNA and protein, and their interactions within cells. However, few multiplexed methodologies are available to study cell signalling directly at the transcription factor level. Methods - Here we describe a sensitive, plasmid-based RNA reporter methodology to study transcription factor activation in mammalian cells, and apply this technology to profiling 60 transcription factors in parallel. The methodology uses two robust and easily accessible detection platforms: quantitative real-time PCR for quantitative analysis and DNA microarrays for parallel, higher-throughput analysis. Findings - We test the specificity of the detection platforms with ten inducers and independently validate the transcription factor activation. Conclusions - We report a methodology for the multiplexed study of transcription factor activation in mammalian cells that is direct and not theoretically limited by the number of available reporters.
Abstract:
This paper describes the design and evaluation of AstonTAC, the runner-up in the Ad Auction Game of the 2009 International Trading Agent Competition. In particular, we focus on how AstonTAC generates adaptive bid prices according to the Market-based Value Per Click and how it selects a set of keyword queries to bid on to maximise the expected profit under limited conversion capacity. Through evaluation experiments, we show that AstonTAC performs well and stably not only in the competition but also across a broad range of environments. © 2010 The authors and IOS Press. All rights reserved.
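The query-selection idea, choosing which keyword queries to bid on so that expected profit is maximised while expected conversions stay within capacity, can be illustrated with the greedy Python sketch below; the numbers, query names and the greedy rule are assumptions made for illustration and do not reproduce AstonTAC's actual strategy.

```python
# Invented query data: (name, expected clicks, conversion rate,
#                       revenue per conversion, cost per click)
queries = [
    ("tv_query",    120, 0.10, 12.0, 0.55),
    ("dvd_query",    80, 0.25,  9.0, 0.90),
    ("audio_query", 200, 0.05, 15.0, 0.40),
]
capacity = 30.0   # maximum expected conversions the agent can fulfil

def expected_profit(clicks, conv_rate, revenue, cpc):
    return clicks * (conv_rate * revenue - cpc)

def expected_conversions(clicks, conv_rate):
    return clicks * conv_rate

# Greedy heuristic: take queries with the best profit per unit of capacity used,
# as long as they fit within the remaining conversion capacity.
ranked = sorted(queries, reverse=True,
                key=lambda q: expected_profit(*q[1:]) / expected_conversions(q[1], q[2]))
selected, used = [], 0.0
for q in ranked:
    need = expected_conversions(q[1], q[2])
    if expected_profit(*q[1:]) > 0 and used + need <= capacity:
        selected.append(q[0])
        used += need

print("bid on:", selected, "expected conversions used:", used)
```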
Abstract:
Relationship-based approaches to leadership (e.g., Leader–Member Exchange theory) currently represent one of the most popular approaches to understanding workplace leadership. Although the concept of “relationship” is central to these approaches, generally this has not been well articulated and is often conceptualized simply in terms of relationship quality between the leader and the follower. In contrast, research in the wider relationship science domain provides a more detailed exposition of relationships and how they form and develop. We propose that research and methodology developed in relationship science (i.e., close relationships) can enhance understanding of the leader–follower relationship and therefore advance theory in this area. To address this issue, we organize our review in two areas. First, we examine how a social cognitive approach to close relationships can benefit an understanding of the leader–follower relationship (in terms of structure, content, and processes). Second, we show how the research designs and methodologies that have been developed in relationship science can be applied to understand better the leader–follower relationship. The cross-fertilization of research from the close relationships literature to understanding the leader–follower relationship provides new insights into leadership processes and potential avenues for further research. Copyright © 2013 John Wiley & Sons, Ltd.
Abstract:
Guest editorial. Ali Emrouznejad is a Senior Lecturer at Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He is on the editorial board of several international journals and co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at Aston Business School. Before joining Aston in 2005, he had worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research. Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector. This special issue focuses on holistic, applied research on performance measurement in energy sector management and aims to publish relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation of 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimations and confidence intervals for the point estimates. The author reveals from the sample that the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and that about 60 percent of oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA), to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as the outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple criteria analysis weighting approach to evaluate energy and climate policy. The proposed approach is akin to the analytic hierarchy process, consisting of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process through an interactive means with verbal, numerical, and visual representations of their preferences. A total of 14 evaluation criteria were considered and classified into four objectives: climate change mitigation, energy effectiveness, socioeconomic, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm's efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds that acquisitions brought no significant change in a firm's efficiency, and that there is weak evidence of efficiency improvements caused by a new shareholder. Moreover, the author discovers that parent companies appear not to influence a subsidiary's efficiency positively. In addition, the analysis shows a negative impact of joint ventures on the technical efficiency of pipeline companies. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
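Since several of the papers above rest on DEA efficiency scores, the following Python sketch shows the input-oriented CCR DEA model in its envelopment form, solved as a small linear programme with SciPy; the three-DMU data set is invented for illustration and does not correspond to any of the papers' data.

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: rows are decision-making units (DMUs, e.g. power stations),
# X holds inputs (e.g. capital, fuel), Y holds outputs (e.g. electricity generated).
X = np.array([[2.0, 5.0],
              [3.0, 4.0],
              [6.0, 8.0]])
Y = np.array([[10.0],
              [ 9.0],
              [12.0]])

def ccr_efficiency(o):
    """Input-oriented CCR score of DMU o: minimise theta subject to
    sum_j lambda_j x_j <= theta * x_o, sum_j lambda_j y_j >= y_o, lambda >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                           # variables: [theta, lambda_1..n]
    A_in = np.hstack([-X[o][:, None], X.T])               # inputs:  X lambda - theta x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # outputs: -Y lambda <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                                        # efficiency score in (0, 1]

for o in range(X.shape[0]):
    print(f"DMU {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```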
Abstract:
The potential benefits of implementing Component-Based Development (CBD) methodologies in a globally distributed environment are many. Lessons from the aeronautics, automotive, electronics and computer hardware industries, in which Component-Based (CB) architectures have been successfully employed for setting up globally distributed design and production activities, have consistently shown that firms have managed to increase the rate of reused components and sub-assemblies, and to speed up the design and production process of new products.
Abstract:
A significant body of scholarly and practitioner-based research has developed in recent years that has sought both to theorize upon and to empirically measure the competitiveness of regions. However, the disparate and fragmented nature of this work has led to the lack of a substantive theoretical foundation underpinning the various analyses and measurement methodologies employed. The aim of this paper is to place the regional competitiveness discourse within the context of theories of economic growth, and more particularly those concerning regional economic growth. It is argued that regional competitiveness models are usually implicitly constructed in the lineage of endogenous growth frameworks, whereby deliberate investments in factors such as human capital and knowledge are considered to be key drivers of growth differentials. This leads to the suggestion that regional competitiveness can be usefully defined as the capacity and capability of regions to achieve economic growth relative to other regions at a similar overall stage of economic development, which will usually be within their own nation or continental bloc. The paper further assesses future avenues for theoretical and methodological exploration, highlighting the role of institutions, resilience and well-being in understanding how the competitiveness of regions influences their long-term evolution.
Abstract:
Large-scale disasters are constantly occurring around the world, and in many cases evacuation of regions of a city is needed. Operational Research/Management Science (OR/MS) has been widely used in emergency planning for over five decades. Warning dissemination, evacuee transportation and shelter management are three 'Evacuation Support Functions' (ESFs) generic to many hazards. This thesis adopts a case study approach to illustrate the importance of an integrated approach to evacuation planning, and particularly the role of OR/MS models. In the warning dissemination phase, uncertainty in households' behaviour as 'warning informants' has been investigated along with uncertainties in the warning system. An agent-based model (ABM) was developed for ESF-1 with households as agents and 'warning informant' behaviour as the agent behaviour. The model was used to study warning dissemination effectiveness under various conditions of the official channel. In the transportation phase, uncertainties in households' behaviour such as departure time (a function of ESF-1), means of transport and destination have been considered. Households could evacuate as pedestrians, by car or by evacuation buses. An ABM was developed to study the evacuation performance (measured in evacuation travel time). In this thesis, a holistic approach for planning public evacuation shelters, called the 'Shelter Information Management System' (SIMS), has been developed. A generic framework was developed to allocate available shelter capacity to the shelter demand by considering the evacuation travel time; this was formulated using integer programming. In the sheltering phase, the uncertainty in household shelter choices (nearest/allocated/convenient) has been studied for its impact on allocation policies using sensitivity analyses. Using analyses from the models and a detailed examination of household states from 'warning to safety', it was found that the three ESFs, though sequential in time, have many interdependencies from the perspective of evacuation planning. This thesis illustrates an OR/MS-based integrated approach that includes and goes beyond single-ESF preparedness. The developed approach will help in understanding the inter-linkages of the three evacuation phases and in preparing multi-agency-based evacuation plans.
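A minimal sketch of the kind of integer-programming shelter-allocation model described above, written with the PuLP modelling library, is given below; the zones, shelters, capacities and travel times are invented for illustration, and the formulation is a generic transportation-style model rather than the thesis's exact SIMS formulation.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpInteger

# Invented example: two evacuation zones with known demand, two shelters with
# known capacity, and travel times (minutes) between each zone-shelter pair.
zones = {"Z1": 120, "Z2": 80}
shelters = {"S1": 150, "S2": 100}
travel = {("Z1", "S1"): 10, ("Z1", "S2"): 25,
          ("Z2", "S1"): 30, ("Z2", "S2"): 12}

prob = LpProblem("shelter_allocation", LpMinimize)
x = {(z, s): LpVariable(f"x_{z}_{s}", lowBound=0, cat=LpInteger)   # evacuees sent z -> s
     for z in zones for s in shelters}

prob += lpSum(travel[z, s] * x[z, s] for z in zones for s in shelters)   # total travel time
for z, demand in zones.items():
    prob += lpSum(x[z, s] for s in shelters) == demand     # everyone is sheltered
for s, capacity in shelters.items():
    prob += lpSum(x[z, s] for z in zones) <= capacity      # shelter capacity respected

prob.solve()
print({f"{z}->{s}": x[z, s].value() for z in zones for s in shelters})
```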
Abstract:
This work attempts to shed light on the fundamental concepts behind the stability of Multi-Agent Systems. We view the system as a discrete-time Markov chain with a potentially unknown transition probability distribution. The system is considered stable when its state has converged to an equilibrium distribution. Faced with the non-trivial task of establishing convergence to such a distribution, we propose a hypothesis testing approach in which we test whether convergence of a particular system metric has occurred. We describe some artificial multi-agent ecosystems that were developed, and we present results based on these systems which confirm that this approach qualitatively agrees with our intuition.
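The hypothesis-testing idea can be illustrated with the Python sketch below, which compares the distribution of a system metric in two separated windows of a run using a two-sample Kolmogorov-Smirnov test; both the toy metric dynamics and the choice of test are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def simulate_metric(steps, x0=10.0):
    """Toy stand-in for a multi-agent system metric: an AR(1)-like process that
    gradually forgets its initial condition and settles into a stationary regime."""
    xs, x = [], x0
    for _ in range(steps):
        x = 0.9 * x + rng.normal(0.0, 1.0)
        xs.append(x)
    return np.array(xs)

def looks_converged(series, window=1000, alpha=0.05):
    """Compare the metric's distribution in two non-overlapping windows; failing
    to reject equality is taken as (qualitative) evidence of convergence."""
    early, late = series[-2 * window:-window], series[-window:]
    _, p_value = ks_2samp(early, late)
    return p_value > alpha, p_value

metric = simulate_metric(5000)
print(looks_converged(metric))   # test over the last two windows of the run
```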
Abstract:
Pyrolytic recycling of materials from the electronics and automotive sectors is attractive because of the possibility of recovering fuel and precious metals from printed circuit boards. Due to the complexity of their composition, an appropriate pre-treatment has to be performed in order to limit the evolution of dangerous halogen-containing compounds, which strongly impair the fuel quality. An advantageous pyrolysis approach involves attempting to mineralise the organic bromine to the non-volatile and harmless inorganic form using strong bases such as NaOH and KOH, which reduces the amount of volatiles and increases the residue. The char stability varies greatly depending on the substrate. Mg(OH)2 and Ca(OH)2 behave in a similar manner but to a lesser extent. Carbonates and reducing agents such as LiAlH4 have been tested as well, and their ability to scavenge bromine is discussed in terms of effectiveness and mechanism.
Abstract:
Smart grid technologies have given rise to a liberalised and decentralised electricity market, enabling energy providers and retailers to better understand the demand side and its response to pricing signals. This paper puts forward a reinforcement-learning-powered tool that helps an electricity retailer define the tariff prices it offers, in a bid to optimise its retail strategy. In a competitive market, an energy retailer aims to simultaneously increase the number of contracted customers and its profit margin. We abstract the retailer's problem of deciding on a tariff price as a semi-Markov decision problem (SMDP). A hierarchical reinforcement learning approach, MaxQ value function decomposition, is applied to solve the SMDP through interactions with the market. To evaluate our trading strategy, we developed a retailer agent (termed AstonTAC) that uses the proposed SMDP framework to act in an open multi-agent simulation environment, the Power Trading Agent Competition (Power TAC). An evaluation and analysis of the 2013 Power TAC finals shows that AstonTAC successfully selects sell prices that attract as many customers as necessary to maximise its profit margin. Moreover, during the competition, AstonTAC was the only retailer agent to perform well across all retail market settings.
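To illustrate the learning-by-interaction loop, the Python sketch below learns a tariff price against a toy market model; note that the paper solves a semi-Markov decision problem with the hierarchical MaxQ decomposition, whereas this sketch deliberately flattens the problem to stateless tabular Q-learning, and the market response model is an invented assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = np.linspace(0.05, 0.25, 5)          # candidate retail tariff prices (per kWh)
wholesale = 0.10                             # assumed average procurement cost

def market_step(price):
    """Invented market response: higher prices attract fewer customers;
    the reward is the profit earned over one trading period."""
    customers = max(0.0, 1000 * (0.30 - price) / 0.30 + rng.normal(0, 20))
    return (price - wholesale) * customers

q = np.zeros(len(prices))                    # one value estimate per candidate tariff
alpha, epsilon = 0.1, 0.1
for _ in range(5000):
    a = rng.integers(len(prices)) if rng.random() < epsilon else int(np.argmax(q))
    reward = market_step(prices[a])
    q[a] += alpha * (reward - q[a])          # stateless Q-learning update

print("learned tariff price:", prices[int(np.argmax(q))])
```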