809 results for Agent-based brokerage platform


Relevance:

30.00%

Publisher:

Abstract:

C.H. Orgill, N.W. Hardy, M.H. Lee, and K.A.I. Sharpe. An application of a multiple agent system for flexible assembly tasks. In Knowledge based environments for industrial applications including cooperating expert systems in control. IEE London, 1989.

Relevance:

30.00%

Publisher:

Abstract:

ROSSI: Emergence of communication in Robots through Sensorimotor and Social Interaction, T. Ziemke, A. Borghi, F. Anelli, C. Gianelli, F. Binkovski, G. Buccino, V. Gallese, M. Huelse, M. Lee, R. Nicoletti, D. Parisi, L. Riggio, A. Tessari, E. Sahin, International Conference on Cognitive Systems (CogSys 2008), University of Karlsruhe, Karlsruhe, Germany, 2008 Sponsorship: EU-FP7

Relevance:

30.00%

Publisher:

Abstract:

We propose Trade & Cap (T&C), an economics-inspired mechanism that incentivizes users to voluntarily coordinate their consumption of the bandwidth of a shared resource (e.g., a DSLAM link) so as to converge on what they perceive to be an equitable allocation, while ensuring efficient resource utilization. Under T&C, rather than acting as an arbiter, an Internet Service Provider (ISP) acts as an enforcer of what the community of rational users sharing the resource decides is a fair allocation of that resource. Our T&C mechanism proceeds in two phases. In the first, software agents acting on behalf of users engage in a strategic trading game in which each user agent selfishly chooses bandwidth slots to reserve in support of primary, interactive network usage activities. In the second phase, each user is allowed to acquire additional bandwidth slots in support of a presumed open-ended need for fluid bandwidth, catering to secondary applications. The acquisition of this fluid bandwidth is subject to the remaining "buying power" of each user and to prevalent "market prices" – both of which are determined by the results of the trading phase and a desirable aggregate cap on link utilization. We present analytical results that establish the underpinnings of our T&C mechanism, including game-theoretic results pertaining to the trading phase, and pricing of fluid bandwidth allocation pertaining to the capping phase. Using real network traces, we present extensive experimental results that demonstrate the benefits of our scheme, which we also show to be practical by highlighting the salient features of an efficient implementation architecture.
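Purely as an illustration of the general two-phase shape described above, and not the paper's actual game-theoretic formulation, the following Python sketch shows slot reservations consuming a budget in a first phase and the leftover "buying power" plus a cap-derived market price determining fluid-bandwidth shares in a second phase. All function names, prices and formulas here are hypothetical simplifications.

```python
# Illustrative sketch of a Trade & Cap-style two-phase allocation.
# NOT the mechanism from the paper; the trading phase and pricing rule
# are deliberately simplified stand-ins.

def trading_phase(reservations, budgets, slot_price):
    """Phase 1: each user reserves slots for interactive use and pays for
    them out of a fixed budget; the remainder becomes 'buying power'."""
    buying_power = {}
    for user, slots in reservations.items():
        cost = slots * slot_price
        buying_power[user] = max(budgets[user] - cost, 0.0)
    return buying_power

def capping_phase(buying_power, fluid_capacity):
    """Phase 2: the fluid bandwidth left under the aggregate cap is sold at
    a market-clearing price, so each user receives a share proportional to
    remaining buying power."""
    total_power = sum(buying_power.values())
    if total_power == 0:
        return {user: 0.0 for user in buying_power}, 0.0
    market_price = total_power / fluid_capacity      # price per unit of fluid bandwidth
    allocation = {user: power / market_price for user, power in buying_power.items()}
    return allocation, market_price

if __name__ == "__main__":
    reservations = {"alice": 4, "bob": 2, "carol": 6}       # interactive slots reserved
    budgets = {"alice": 10.0, "bob": 10.0, "carol": 10.0}   # equal initial budgets
    power = trading_phase(reservations, budgets, slot_price=1.0)
    fluid, price = capping_phase(power, fluid_capacity=20.0)
    print(power, fluid, price)   # fluid shares sum to the capped capacity
```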

Relevance:

30.00%

Publisher:

Abstract:

The therapeutic effects of playing music are being recognized increasingly in the field of rehabilitation medicine. People with physical disabilities, however, often do not have the motor dexterity needed to play an instrument. We developed a camera-based human-computer interface called "Music Maker" to provide such people with a means to make music by performing therapeutic exercises. Music Maker uses computer vision techniques to convert the movements of a patient's body part, for example, a finger, hand, or foot, into musical and visual feedback using the open software platform EyesWeb. It can be adjusted to a patient's particular therapeutic needs and provides quantitative tools for monitoring the recovery process and assessing therapeutic outcomes. We tested the potential of Music Maker as a rehabilitation tool with six subjects who responded to or created music in various movement exercises. In these proof-of-concept experiments, Music Maker has performed reliably and shown its promise as a therapeutic device.
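Music Maker itself is built on the EyesWeb platform; purely to illustrate the underlying idea of turning camera-detected body movement into musical feedback, a minimal OpenCV-based sketch might look like the following. The webcam index, frame-difference measure and note mapping are all assumptions for this example, not details from the paper.

```python
# Minimal illustration of mapping camera-detected motion to a musical
# parameter, in the spirit of Music Maker (which actually uses EyesWeb).
import cv2
import numpy as np

NOTE_NAMES = ["C", "D", "E", "F", "G", "A", "B"]

def motion_to_note(prev_gray, gray):
    """Map the amount of frame-to-frame motion to a note index."""
    diff = cv2.absdiff(prev_gray, gray)
    motion = float(np.mean(diff)) / 255.0              # 0.0 (still) .. 1.0 (max motion)
    index = min(int(motion * len(NOTE_NAMES) * 4), len(NOTE_NAMES) - 1)
    return NOTE_NAMES[index], motion

def main():
    cap = cv2.VideoCapture(0)                           # assumed default webcam
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        note, motion = motion_to_note(prev, gray)
        print(f"motion={motion:.2f} -> play {note}")    # stand-in for audio/visual feedback
        prev = gray
        if cv2.waitKey(30) & 0xFF == ord("q"):
            break
    cap.release()

if __name__ == "__main__":
    main()
```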

Relevance:

30.00%

Publisher:

Abstract:

Making use of very detailed neurophysiological, anatomical, and behavioral data to build biologically-realistic computational models of animal behavior is often a difficult task. Until recently, many software packages have tried to resolve this mismatched granularity with different approaches. This paper presents KInNeSS, the KDE Integrated NeuroSimulation Software environment, as an alternative solution to bridge the gap between data and model behavior. This open source neural simulation software package provides an expandable framework incorporating features such as ease of use, scalability, an XML based schema, and multiple levels of granularity within a modern object oriented programming design. KInNeSS is best suited to simulate networks of hundreds to thousands of branched multi-compartmental neurons with biophysical properties such as membrane potential, voltage-gated and ligand-gated channels, the presence of gap junctions or ionic diffusion, neuromodulation channel gating, the mechanism for habituative or depressive synapses, axonal delays, and synaptic plasticity. KInNeSS outputs include compartment membrane voltage, spikes, local-field potentials, and current source densities, as well as visualization of the behavior of a simulated agent. An explanation of the modeling philosophy and plug-in development is also presented. Further development of KInNeSS is ongoing with the ultimate goal of creating a modular framework that will help researchers across different disciplines to effectively collaborate using a modern neural simulation platform.

Relevance:

30.00%

Publisher:

Abstract:

This Portfolio is about the changes that can be supported and achieved through transformational education that has impact at personal, professional and organisational levels. Having lived through an era of tremendous change over the second half of the twentieth century and into the twenty-first, the author has a great drawing board to contemplate in the context of professional career experience as an engineer. The ability to engage in ‘subject-object’ separation is the means by which Kegan (1994, 2009) explains that transformation takes place, and the Essays in this Portfolio aim to support and bring about such change. Exploration of aspects of ‘Kerry’ is the material selected to both challenge and support a change in the way of knowing, from being subject to certain information and knowledge to being able to consider it more objectively. The task of taking a more detached view of the economy and economic development of Kerry was facilitated by readings of a number of key thinkers, including Kegan, Drucker, Porter and Penrose. The central themes of Kerry and its potential for economic development are built into each Essay. Essay One focuses on reflections of Kerry life - on Kerry people within and without Kerry - and on events as they affected understandings of how people related to and worked with one another. These reflections formed the basis for the transformational goals identified, which required a shift from an engineering mindset to encompass an economics-based view. In Essay Two, knowledge of economic concepts is developed by exploring the writings of Drucker, Penrose and Porter as they pertain to economic development generally, and to Kerry in particular in the form of an ‘entrepreneurial platform’. These concepts and theories were the basis of the explorations presented in Essays Three and Four. Essay Three focuses on Kerry’s potential for economic development given its current economic profile and includes results from interviews with selected businesses. Essay Four is an exercise in the application of Porter’s ‘Cluster’ concept to the equine sector.

Relevance:

30.00%

Publisher:

Abstract:

Real time monitoring of oxygenation and respiration is on the cutting edge of bioanalysis, including studies of cell metabolism, bioenergetics, mitochondrial function and drug toxicity. This thesis presents the development and evaluation of new luminescent probes and techniques for intracellular O2 sensing and imaging. A new oxygen consumption rate (OCR) platform based on commercial microfluidic perfusion channel μ-slides, compatible with extra- and intracellular O2-sensitive probes, different cell lines and measurement conditions, was developed. The design of semi-closed channels allowed cell treatments, multiplexing with other assays, and two-fold higher sensitivity compared with a microtiter plate. We compared three common OCR platforms: hermetically sealed quartz cuvettes for absolute OCRs, 96-well plates (96-WPs) partially sealed with mineral oil for relative OCRs, and open 96-WPs for local cell oxygenation. Both 96-WP platforms were calibrated against the absolute OCR platform using the MEF cell line, the phosphorescent O2 probe MitoXpress-Intra and a time-resolved fluorescence reader. The correlations found allow tracing of cell respiration over time in a high-throughput format, with the possibility of cell stimulation and of changing measurement conditions. A new multimodal intracellular O2 probe, based on the phosphorescent reporter dye PtTFPP, the fluorescent FRET donor and two-photon antenna PFO, and cationic RL-100 nanoparticles, was described. This probe, called MM2, possesses high brightness, photo- and chemical stability, low toxicity and efficient cell staining, and supports high-resolution intracellular O2 imaging of 2D and 3D cell cultures in intensity, ratiometric and lifetime-based modalities with luminescence readers and FLIM microscopes. An extended range of O2-sensitive probes was designed and studied in order to optimize their spectral characteristics and intracellular targeting, using different nanoparticle materials, delivery vectors, ratiometric pairs and IR dyes. The presented improvements provide a useful tool for highly sensitive monitoring and imaging of intracellular O2 in different measurement formats, with a wide range of physiological applications.

Relevance:

30.00%

Publisher:

Abstract:

It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2⁷⁰ bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. The multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question is asked: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and to realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
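The following is a minimal sketch, not the dissertation's actual platform, of the core idea that data flows through a produce → interpret → consume workflow while an explicit provenance trail is kept alongside it, so that an independent third party can trace how a conclusion was derived. All class names, field names and example stages are hypothetical.

```python
# Hypothetical sketch: carry a provenance trail with each data item as it
# moves through production, interpretation and consumption stages.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable, List

@dataclass
class Record:
    value: Any
    provenance: List[dict] = field(default_factory=list)

    def apply(self, stage: str, fn: Callable[[Any], Any]) -> "Record":
        """Run one workflow stage and append what was done to the trail."""
        result = fn(self.value)
        step = {
            "stage": stage,
            "operation": fn.__name__,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        return Record(result, self.provenance + [step])

# Example stages: raw signal -> feature -> decision
def moving_average(samples):           # interpretation step
    return sum(samples) / len(samples)

def threshold_alert(mean_value):       # consumption step
    return "alert" if mean_value > 0.8 else "normal"

if __name__ == "__main__":
    raw = Record([0.7, 0.9, 0.95, 0.85])                  # production
    interpreted = raw.apply("interpretation", moving_average)
    consumed = interpreted.apply("consumption", threshold_alert)
    print(consumed.value)
    for step in consumed.provenance:                      # auditable trail
        print(step["stage"], step["operation"], step["timestamp"])
```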

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE: Docetaxel is an active agent in the treatment of metastatic breast cancer. We evaluated the feasibility of docetaxel-based sequential and combination regimens as adjuvant therapies for patients with node-positive breast cancer. PATIENTS AND METHODS: Three consecutive groups of patients with node-positive breast cancer or locally-advanced disease, aged ≤70 years, received one of the following regimens: a) sequential A-->T-->CMF: doxorubicin 75 mg/m2 q 3 weeks x 3, followed by docetaxel 100 mg/m2 q 3 weeks x 3, followed by i.v. CMF days 1 + 8 q 4 weeks x 3; b) sequential accelerated A-->T-->CMF: A and T were administered at the same doses q 2 weeks; c) combination therapy: doxorubicin 50 mg/m2 + docetaxel 75 mg/m2 q 3 weeks x 4, followed by CMF x 4. When indicated, radiotherapy was administered during or after CMF, and tamoxifen was started after the end of CMF. RESULTS: Seventy-nine patients have been treated. Median age was 48 years. A 30% rate of early treatment discontinuation was observed in patients receiving the sequential accelerated therapy (23% during A-->T), due principally to severe skin toxicity. Median relative dose-intensity was 100% in the three treatment arms. The incidence of G3-G4 major toxicities among treated patients was as follows: skin toxicity a: 5%, b: 27%, c: 0%; stomatitis a: 20%, b: 20%, c: 3%. The incidence of neutropenic fever was a: 30%, b: 13%, c: 48%. After a median follow-up of 18 months, no late toxicity has been reported. CONCLUSIONS: The accelerated sequential A-->T-->CMF treatment is not feasible due to an excess of skin toxicity. The sequential non-accelerated and the combination regimens are feasible and are under evaluation in a phase III trial of adjuvant therapy.

Relevance:

30.00%

Publisher:

Abstract:

Luminescent semiconductor nanocrystals, also known as quantum dots (QDs), have advanced the fields of molecular diagnostics and nanotherapeutics. Much of the initial progress for QDs in biology and medicine has focused on developing new biosensing formats to push the limit of detection sensitivity. Nevertheless, QDs can be more than passive bio-probes or labels for biological imaging and cellular studies. The high surface-to-volume ratio of QDs enables the construction of a "smart" multifunctional nanoplatform, where the QDs serve not only as an imaging agent but also as a nanoscaffold catering for therapeutic and diagnostic (theranostic) modalities. This mini review highlights the emerging applications of functionalized QDs as fluorescence contrast agents for imaging or as nanoscale vehicles for delivery of therapeutics, with special attention paid to the promise of, and challenges facing, QD-based theranostics.

Relevance:

30.00%

Publisher:

Abstract:

Standing and walking generate information about friction underfoot. Five experiments examined whether walkers use such perceptual information for prospective control of locomotion. In particular, do walkers integrate information about friction underfoot with visual cues for sloping ground ahead to make adaptive locomotor decisions? Participants stood on low-, medium-, and high-friction surfaces on a flat platform and made perceptual judgments about possibilities for locomotion over upcoming slopes. Perceptual judgments did not match locomotor abilities: participants tended to overestimate their abilities on low-friction slopes and underestimate them on high-friction slopes (Experiments 1-4). Accuracy improved only for judgments made while participants were in direct contact with the slope (Experiment 5), highlighting the difficulty of incorporating information about friction underfoot into a plan for future actions.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes work towards the deployment of flexible self-management into real-time embedded systems. A challenging project which focuses specifically on the development of a dynamic, adaptive automotive middleware is described, and the specific self-management requirements of this project are discussed. These requirements have been identified through the refinement of a wide-ranging set of use cases requiring context-sensitive behaviours. A sample of these use cases is presented to illustrate the extent of the demands for self-management. The strategy that has been adopted to achieve self-management, based on the use of policies, is presented. The embedded and real-time nature of the target system brings the constraints that dynamic adaptation capabilities must not require changes to the run-time code (except during hot update of complete binary modules), that adaptation decisions must have low latency, and that, because the target platforms are resource-constrained, the self-management mechanism must have low resource requirements (especially in terms of processing and memory). Policy-based computing is thus an ideal candidate for achieving self-management, because the policy itself is loaded at run-time and can be replaced or changed in the future in the same way that a data file is loaded. Policies represent a relatively low-complexity and low-risk means of achieving self-management, with low run-time costs. Policies can be stored internally in ROM (such as default policies) as well as externally to the system. The architecture of a designed-for-purpose, powerful yet lightweight policy library is described. A suitable evaluation platform, supporting the whole life-cycle of feasibility analysis, concept evaluation, development, rigorous testing and behavioural validation, has been devised and is described.
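A minimal sketch of the general policy-based idea, not the project's actual middleware or policy library: the policy lives in data, is loaded at run time the way a data file would be, and can be swapped later without touching the compiled code. The policy format, rule fields and context keys below are all assumptions for illustration.

```python
# Hypothetical illustration of policy-based self-management: adaptation
# rules live in data (here JSON text), are loaded at run time, and can be
# replaced later without changing the run-time code.
import json

DEFAULT_POLICY = """
{
  "rules": [
    {"if": {"cpu_load_above": 0.9}, "then": "shed_low_priority_tasks"},
    {"if": {"battery_below": 0.2},  "then": "enter_power_save_mode"}
  ],
  "default": "no_adaptation"
}
"""

def load_policy(text: str) -> dict:
    """Parse a policy exactly as a data file would be loaded."""
    return json.loads(text)

def decide(policy: dict, context: dict) -> str:
    """Return the first adaptation action whose condition matches the context."""
    for rule in policy["rules"]:
        cond = rule["if"]
        if "cpu_load_above" in cond and context.get("cpu_load", 0.0) > cond["cpu_load_above"]:
            return rule["then"]
        if "battery_below" in cond and context.get("battery", 1.0) < cond["battery_below"]:
            return rule["then"]
    return policy["default"]

if __name__ == "__main__":
    policy = load_policy(DEFAULT_POLICY)   # could equally come from ROM or an external source
    print(decide(policy, {"cpu_load": 0.95, "battery": 0.8}))  # -> shed_low_priority_tasks
    print(decide(policy, {"cpu_load": 0.30, "battery": 0.1}))  # -> enter_power_save_mode
    print(decide(policy, {"cpu_load": 0.30, "battery": 0.9}))  # -> no_adaptation
```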

Relevance:

30.00%

Publisher:

Abstract:

This paper tells the story of how a set of university lectures developed during the last six years. The idea is to show how (1) content, (2) communication and (3) assessment have evolved in steps which are named “generations of web learning”. The reader is offered a stepwise description of both the didactic foundations of the university lectures and their practical implementation on a widely available web platform. The relative weight of directive elements has gradually decreased through the “three generations”, whereas characteristics of self-responsibility and self-guided learning have gained in importance.
- Content was in early times presented and expected to be learned, but in later phases expected to be constructed, for example in case studies.
- Communication meant in early phases delivering assignments to the lecturer, but later on forming teams, exchanging standpoints and reviewing mutually.
- Assessment initially consisted in marks invented and added up by the lecturer, but was later enriched by peer review, mutual grading and voting procedures.
How much “added value” can the web provide for teaching, training and learning? Six years of experience suggest: mainly insofar as new (collaborative and self-directed) didactic scenarios are implemented! (DIPF/Orig.)

Relevance:

30.00%

Publisher:

Abstract:

This paper is concerned with several of the most important aspects of Competence-Based Learning (CBL): course authoring, assignments, and categorization of learning content. The latter is part of the so-called Bologna Process (BP) and can effectively be supported by integrating knowledge resources, such as standardized skill and competence taxonomies, into the target implementation approach, aiming at making effective use of an open integration architecture while fostering the interoperability of hybrid knowledge-based e-learning solutions. Modern scenarios ask for interoperable software solutions that seamlessly integrate existing e-learning infrastructures and legacy tools with innovative technologies while being cognitively efficient to handle. In this way, prospective users are enabled to use them without learning overheads. At the same time, methods of Learning Design (LD) in combination with CBL are becoming more and more important for the production and maintenance of easy-to-facilitate solutions. We present our approach of developing a competence-based course-authoring and assignment support software. It bridges the gaps between contemporary Learning Management Systems (LMS) and established legacy learning infrastructures by embedding existing resources via Learning Tools Interoperability (LTI). Furthermore, the underlying conceptual architecture for this integration approach will be explained. In addition, a competence management structure based on knowledge technologies supporting standardized skill and competence taxonomies will be introduced. The overall goal is to develop a software solution which will not only flawlessly merge into a legacy platform and several other learning environments, but also remain intuitively usable. As a proof of concept, the so-called platform-independent conceptual architecture model will be validated by a concrete use case scenario.
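Purely as an illustration of the data side of such an approach, and not the actual software or the LTI integration described above, the sketch below links course assignments to entries from a standardized competence taxonomy so an author can see which competences each assignment covers. The taxonomy entries, class names and fields are hypothetical.

```python
# Hypothetical sketch: link course assignments to entries of a standardized
# skill/competence taxonomy, as a competence-based authoring tool might.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Competence:
    code: str                       # identifier within the taxonomy
    label: str
    parent: Optional[str] = None    # taxonomies are typically hierarchical

@dataclass
class Assignment:
    title: str
    competence_codes: List[str] = field(default_factory=list)

def coverage(assignments: List[Assignment],
             taxonomy: Dict[str, Competence]) -> Dict[str, List[str]]:
    """For each competence, list the assignments that address it."""
    result: Dict[str, List[str]] = {code: [] for code in taxonomy}
    for assignment in assignments:
        for code in assignment.competence_codes:
            if code in result:
                result[code].append(assignment.title)
    return result

if __name__ == "__main__":
    taxonomy = {
        "DB.1": Competence("DB.1", "Design a relational schema"),
        "DB.2": Competence("DB.2", "Formulate SQL queries", parent="DB.1"),
    }
    assignments = [
        Assignment("ER modelling exercise", ["DB.1"]),
        Assignment("Query-writing lab", ["DB.1", "DB.2"]),
    ]
    for code, titles in coverage(assignments, taxonomy).items():
        print(code, "->", titles)
```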