993 results for adaptive technologies


Relevance: 20.00%

Abstract:

A Fuzzy ART model capable of rapid stable learning of recognition categories in response to arbitrary sequences of analog or binary input patterns is described. Fuzzy ART incorporates computations from fuzzy set theory into the ART 1 neural network, which learns to categorize only binary input patterns. The generalization to learning both analog and binary input patterns is achieved by replacing appearances of the intersection operator (∩) in ART 1 by the MIN operator (∧) of fuzzy set theory. The MIN operator reduces to the intersection operator in the binary case. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy set theory play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Learning stops when the input space is covered by boxes. With fast learning and a finite input set of arbitrary size and composition, learning stabilizes after just one presentation of each input pattern. A fast-commit slow-recode option combines fast learning with a forgetting rule that buffers system memory against noise. Using this option, rare events can be rapidly learned, yet previously learned memories are not rapidly erased in response to statistically unreliable input fluctuations.
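The core operations above can be sketched in a few lines of Python: a minimal illustration of complement coding, the match criterion, and one fast-learning update. The vigilance value and input are made up, and the full ART search cycle over multiple competing categories is omitted.

```python
import numpy as np

def complement_code(a):
    """Complement coding: represent a in [0,1]^M as (a, 1 - a).
    The coded vector's city-block norm is always M, which normalises
    inputs while preserving individual feature amplitudes."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_min(x, w):
    """Fuzzy MIN operator (component-wise minimum); reduces to set
    intersection for binary vectors."""
    return np.minimum(x, w)

def match(I, w):
    """Match value |I ^ w| / |I|, compared against the vigilance rho."""
    return fuzzy_min(I, w).sum() / I.sum()

# One fast-learning step for a single, initially uncommitted category.
I = complement_code([0.2, 0.7])
w = np.ones_like(I)       # uncommitted weights start at 1
rho = 0.5                 # vigilance (smaller -> larger category boxes)
if match(I, w) >= rho:    # resonance: the category accepts the input
    w = fuzzy_min(I, w)   # fast learning: weights can only decrease
# w is now [0.2, 0.7, 0.8, 0.3] (up to floating-point rounding)
```

Note how the update shrinks the weights toward the input, which corresponds to the abstract's growing category "boxes".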

Relevance: 20.00%

Abstract:

This paper introduces a new class of predictive ART architectures, called Adaptive Resonance Associative Map (ARAM), which performs rapid yet stable heteroassociative learning in a real-time environment. ARAM can be visualized as two ART modules sharing a single recognition code layer. The unit for recruiting a recognition code is a pattern pair. Code stabilization is ensured by restricting coding to states where resonance is reached in both modules. Simulation results have shown that ARAM is capable of self-stabilizing association of arbitrary pattern pairs of arbitrary complexity, appearing in arbitrary sequence, through fast learning in a real-time environment. Due to the symmetrical network structure, associative recall can be performed in both directions.

Relevance: 20.00%

Abstract:

Advanced sensory systems address a number of major obstacles to the provision of cost-effective and proactive rehabilitation. Many of these systems employ technologies such as high-speed video or motion capture to generate quantitative measurements. However, these solutions come with major limitations, including extensive set-up and calibration, restriction to indoor use, high cost, and time-consuming data analysis. Additionally, many do not quantify improvement in a rigorous manner: for example, gait analysis for 5 minutes as opposed to 24-hour ambulatory monitoring. This work addresses these limitations using low-cost, wearable wireless inertial measurement as a mobile, minimal-infrastructure alternative. In cooperation with healthcare professionals, the goal is to design and implement a reconfigurable and intelligent movement capture system. A key component of this work is an extensive benchmark comparison with the 'gold standard' VICON motion capture system.

Relevance: 20.00%

Abstract:

A comparison study was carried out between a wireless sensor node with a flip-chip-mounted bare die and its reference board with a BGA-packaged transceiver chip. The main focus is the return loss (S-parameter S11) at the antenna connector, which depends strongly on the impedance mismatch. Modeling, including the different interconnect technologies, substrate properties, and passive components, was performed to simulate the system in Ansoft Designer software. Statistical methods, such as standard deviation and regression, were applied to the RF performance analysis to assess the impact of the different parameters on the return loss. An extreme-value search, building on this analysis, yields the parameter values that minimize the return loss. Measurements fit the analysis and simulation well and showed a large improvement in the return loss, from -5 dB to -25 dB, for the target wireless sensor node.
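The dependence of S11 on impedance mismatch can be shown with a generic reflection-coefficient calculation; this is not the authors' Ansoft Designer model, and the example load impedances below are hypothetical, chosen only to mirror the reported -5 dB to -25 dB range.

```python
import math

def s11_db(z_load, z0=50.0):
    """|S11| in dB at the antenna port for a (possibly complex) load
    impedance z_load against a reference impedance z0.
    S11 = (Z_L - Z0) / (Z_L + Z0); a perfect match gives -inf dB."""
    gamma = (z_load - z0) / (z_load + z0)
    return 20.0 * math.log10(abs(gamma))

# A badly mismatched load versus a nearly matched one, mirroring the
# -5 dB -> -25 dB improvement reported for the sensor node.
bad  = s11_db(160 + 0j)   # about -5.6 dB
good = s11_db(56 + 0j)    # about -24.9 dB
```

The closer the load sits to the 50 Ω reference, the smaller the reflection coefficient and the more negative S11 becomes.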

Relevance: 20.00%

Abstract:

An aim of proactive risk management strategies is the timely identification of safety-related risks. One way to achieve this is by deploying early warning systems. Early warning systems aim to provide useful information on the presence of potential threats to a system, on the system's level of vulnerability, or both, in a timely manner. This information can then be used to take proactive safety measures. The United Nations has recommended that any early warning system have four essential elements: risk knowledge, a monitoring and warning service, dissemination and communication, and a response capability. This research deals with the risk knowledge element of an early warning system, which contains models of possible accident scenarios. These accident scenarios are created using hazard analysis techniques, which can be categorised as traditional or contemporary. Traditional hazard analysis techniques assume that accidents occur due to a sequence of events, whereas contemporary techniques assume that safety is an emergent property of complex systems. The problem is that no software editor is available that lets analysts create models of accident scenarios based on contemporary hazard analysis techniques and, at the same time, generate computer code representing those models. This research aims to enhance the process of generating computer code from graphical models that associate early warning signs and causal factors with a hazard, based on contemporary hazard analysis techniques. For this purpose, the thesis investigates the use of Domain Specific Modeling (DSM) technologies.

The contribution of this thesis is the design and development of a set of three graphical Domain Specific Modeling Languages (DSMLs) that, when combined, provide all of the constructs necessary for safety experts and practitioners to conduct hazard and early warning analysis based on a contemporary hazard analysis approach. The languages represent the elements and relations necessary to define accident scenarios and their associated early warning signs. The three DSMLs were incorporated into a prototype software editor that enables safety scientists and practitioners to create and edit hazard and early warning analysis models in a usable manner and, as a result, to generate executable code automatically. This research shows that DSM technologies can be used to develop a set of three DSMLs that allow users to conduct hazard and early warning analysis in a more usable manner. Furthermore, the three DSMLs and their dedicated editor, as presented in this thesis, may significantly enhance the process of creating the risk knowledge element of computer-based early warning systems.

Relevance: 20.00%

Abstract:

Can my immediate physical environment affect how I feel? The instinctive answer to this question must be a resounding "yes". What might seem a throwaway remark is increasingly borne out by research in environmental and behavioural psychology, and in the more recent discipline of Evidence-Based Design. Research outcomes are beginning to converge with findings in neuroscience and neurophysiology, as we discover more about how the human brain and body function and react to environmental stimuli. What we see, hear, touch, and sense affects each of us psychologically and, by extension, physically, on a continual basis. The physical characteristics of our daily environment thus have the capacity to profoundly affect all aspects of our functioning, from biological systems to cognitive ability. This has long been understood on an intuitive basis, and utilised on a more conscious basis by architects and other designers. Recent research in evidence-based design, coupled with advances in neurophysiology, confirms what were previously held as intuitions, but also illuminates an almost frightening potential to do enormous good, or, alternatively, terrible harm, by virtue of how we make our everyday surroundings. The thesis adopts a design methodology in its approach to exploring the potential use of wireless sensor networks in environments for elderly people. Vitruvian principles of "commodity, firmness and delight" inform the research process and become embedded in the final design proposals and research conclusions. The issue of person-environment fit becomes a key principle in describing a model of continuously evolving responsive architecture which makes the individual user its focus, with the intention of promoting wellbeing. The key research questions are: What are the key system characteristics of an adaptive therapeutic single-room environment? How can embedded technologies be utilised to maximise the adaptive and therapeutic aspects of the personal life-space of an elderly person with dementia?

Relevance: 20.00%

Abstract:

Drug delivery systems influence the processes of release, absorption, distribution and elimination of a drug. Conventional delivery methods administer drugs through the mouth, the skin, transmucosal areas, inhalation or injection. However, one of the current challenges is the lack of effective and targeted oral drug administration. The development of sophisticated strategies, such as micro- and nanotechnology, that can integrate the design and synthesis of drug delivery systems in a one-step, scalable process is fundamental to overcoming the limitations of conventional processing techniques. Thus, the objective of this thesis is to evaluate novel microencapsulation technologies for the production of size-specific and target-specific drug-loaded particles. The first part of this thesis describes the utility of PDMS and silicon microfluidic flow-focusing devices (MFFDs) to produce PLGA-based microparticles. The formation of uniform droplets was dependent on the surface of the PDMS remaining hydrophilic. However, the durability of PDMS was limited to no more than 1 hour before wetting of the microchannel walls with dichloromethane, and subsequent swelling, occurred. Critically, silicon MFFDs showed very good solvent compatibility and were sufficiently robust to withstand elevated fluid flow rates. Silicon MFFDs allowed experiments to run over days with continuous use and re-use of the device, with a narrower microparticle size distribution relative to conventional production techniques. The second part of this thesis demonstrates an alternative microencapsulation technology, SmPill® minispheres, to target CsA delivery to the colon. Characterisation of CsA release in vitro and in vivo was performed. By modulating the ethylcellulose:pectin coating thickness, release of CsA in vivo was controlled more effectively than with current commercial CsA formulations, and a linear in vitro-in vivo relationship was demonstrated. Coated minispheres were shown to limit CsA release in the upper small intestine and enhance localised CsA delivery to the colon.

Relevance: 20.00%

Abstract:

In order to widely use Ge and III-V materials instead of Si in advanced CMOS technology, the processing and integration of these materials has to be well established so that their high-mobility benefit is not swamped by imperfect manufacturing procedures. In this dissertation, a number of key bottlenecks in the realization of Ge devices are investigated. We address the challenge of forming low-resistivity contacts on n-type Ge, comparing conventional rapid thermal annealing (RTA) and advanced laser thermal annealing (LTA) techniques. LTA appears to be a feasible approach for realizing low-resistivity contacts, with a remarkably sharp germanide-substrate interface and contact resistivity on the order of 10⁻⁷ Ω·cm². Furthermore, the influence of RTA and LTA on dopant activation and leakage current suppression in n+/p Ge junctions was compared. While providing a very high active carrier concentration (> 10²⁰ cm⁻³), LTA resulted in higher leakage current than RTA, which provided a lower carrier concentration (~10¹⁹ cm⁻³). This indicates a trade-off between activation level and junction leakage current. A high ION/IOFF ratio of ~10⁷ was obtained, which to the best of our knowledge is the best value reported for n-type Ge so far. Simulations were carried out to investigate how target sputtering, dose retention, and damage formation are generated in thin-body semiconductors by energetic ion impacts, and how they depend on the target's physical material properties. Solid-phase epitaxy studies in wide and thin Ge fins confirmed the formation of twin-boundary defects and random nucleation growth, as in Si, but here a 600 °C annealing temperature was found to be effective in reducing these defects. Finally, a non-destructive doping technique was successfully implemented to dope Ge nanowires, whose resistivity was reduced by 5 orders of magnitude using a PH3-based in-diffusion process.

Relevance: 20.00%

Abstract:

This qualitative research expands understanding of how information about a range of Novel Food Technologies (NFTs) is used and assimilated, and the implications of this for the evolution of attitudes and acceptance. This work enhances theoretical and applied understanding of citizens’ evaluative processes around these technologies. The approach applied involved observations of interactive exchanges between citizens and information providers (i.e., food scientists), during which they discussed a specific technology. This flexible, yet structured, approach revealed how individuals construct meaning around information about specific NFTs. A rich dataset of 42 ‘deliberate discourse’ and 42 post-discourse transcripts was collected. Data analysis encompassed three stages: an initial descriptive account of the complete dataset based on the top-down bottom-up (TDBU) model of attitude formation, followed by inductive and deductive thematic analysis across the selected technology groups. The hybrid thematic analysis identified a Conceptual Model, which represents a holistic perspective on the influences and associated features directing ‘sense-making’ and ultimate evaluations around the technology clusters. How individuals make sense of these technologies is shaped by: their beliefs, values and personal characteristics; their perceptions of power and control over the application of the technology; and the assumed relevance of the technology and its applications within different contexts. These influences form the frame for sense-making around the technologies. Internal negotiations between these influences are evident, and evaluations are based on the relative importance of each influence to the individual, which tends to contribute to attitude ambivalence and instability.
The findings indicate the processes of forming and changing attitudes towards these technologies are: complex; dependent on characteristics of the individual, technology, application and product; and, impacted by the nature and forms of information provided. Challenges are faced in engaging with the public about these technologies, as levels of knowledge, understanding and interest vary.

Relevance: 20.00%

Abstract:

Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) are well-known video compression techniques with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique that balances the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-perceived quality. Additionally, we propose a scheme that combines network coding with SDC to further improve error resiliency. SDC yields upwards of 25% bandwidth savings over MDC. Additionally, our scheme sustains higher quality for longer durations, even at high packet loss rates.
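The abstract does not detail the combined scheme; as a minimal sketch of why network coding adds resiliency to description-based streaming, a single XOR parity packet over two equal-length descriptions lets the receiver rebuild any one lost packet. The payload bytes below are made up for illustration.

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two equal-length descriptions plus one network-coded parity packet:
# any single packet loss is then recoverable at the receiver.
d1 = b"\x10\x22\x34"          # made-up payload bytes
d2 = b"\x0f\xee\x01"
parity = xor_bytes(d1, d2)    # third packet carries d1 XOR d2

# Suppose d2 is lost in transit; rebuild it from d1 and the parity.
recovered = xor_bytes(d1, parity)
assert recovered == d2
```

The cost is one extra packet per pair of descriptions, which is the kind of bandwidth-versus-resiliency tradeoff the paper is optimising.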

Relevance: 20.00%

Abstract:

Recent years have witnessed rapid growth in the demand for streaming video over the Internet and mobile networks, which exposes challenges in coping with heterogeneous devices and varying network throughput. Adaptive schemes, such as scalable video coding, are an attractive solution but fare badly in the presence of packet losses. Techniques that use description-based streaming models, such as multiple description coding (MDC), are more suitable for lossy networks and can mitigate the effects of packet loss by increasing the error resilience of the encoded stream, but at an increased transmission byte cost. In this paper, we present our adaptive scalable streaming technique, adaptive layer distribution (ALD). ALD is a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all packets, thus lessening the impact of network losses on quality. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the resiliency of the scalable video. Subjective testing results illustrate that our techniques and models provided consistently high-quality viewing, at lower transmission cost relative to MDC, irrespective of clip type. This highlights the benefits of selective packetisation in addition to intuitive encoding and transmission.

Relevance: 20.00%

Abstract:

Bandwidth constriction and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further burdens the available bandwidth and can negate the perceived benefits of increased stream quality. In this paper, we propose Adaptive Layer Distribution (ALD) as a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data is spread amongst all datagrams, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the scalable video, while providing increased resilience to the highest quality layers. Our experimental results show that ALD improves the perceived quality and also reduces the bandwidth demand by up to 36% in comparison to the well-known Multiple Description Coding (MDC) technique.
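The layer-distribution principle can be illustrated with a toy packetiser that stripes each layer's bytes round-robin across all datagrams. This is a hypothetical sketch only; ALD's actual packetisation, headers, and dynamic adaptation mechanism are not specified here.

```python
def distribute_layers(layers, n_packets):
    """Stripe each layer's bytes round-robin across all packets, so that
    losing any one packet removes only a 1/n_packets slice of every
    layer instead of an entire critical layer."""
    packets = [bytearray() for _ in range(n_packets)]
    for layer in layers:
        for i, byte in enumerate(layer):
            packets[i % n_packets].append(byte)
    return [bytes(p) for p in packets]

base = b"BBBBBBBB"   # critical base layer (8 bytes, made up)
enh  = b"1111"       # an enhancement layer (4 bytes, made up)
packets = distribute_layers([base, enh], 4)

# Each of the 4 packets now carries 2 base-layer bytes and 1
# enhancement byte; no single datagram loss destroys the base layer.
assert all(p.count(ord("B")) == 2 for p in packets)
```

Contrast this with priority packing, where the whole base layer rides in one datagram and a single loss removes it entirely.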


Relevance: 20.00%

Abstract:

A set of 13 US-based experts in post-combustion and oxy-fuel combustion CO2 capture systems responded to an extensive questionnaire asking their views on the present status and expected future performance and costs of amine-based, chilled-ammonia, and oxy-combustion retrofits of coal-fired power plants. This paper presents the experts' responses on technology maturity, ideal plant characteristics for early adopters, and the extent to which R&D and deployment incentives will impact costs. It also presents the best estimates and 95% confidence limits of the energy penalties associated with amine-based systems. The results show a general consensus that amine-based systems are closer to commercial application, but their potential for improved performance and lower costs is limited; chilled ammonia and oxy-combustion offer greater potential for cost reductions, but not without greater uncertainty regarding scale and technical feasibility. © 2011 Elsevier Ltd.

Relevance: 20.00%

Abstract:

We describe a strategy for Markov chain Monte Carlo analysis of non-linear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis-Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the non-linearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.
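The central device here, a mixture approximation used as a global Metropolis proposal, can be sketched in a univariate toy form: an independence Metropolis-Hastings sampler whose proposal is a normal mixture. The standard-normal target, mixture parameters, and iteration count below are illustrative only; the paper builds such mixture approximations sequentially over the state time series.

```python
import math, random

def mixture_logpdf(x, weights, means, sds):
    """Log density of a univariate normal mixture."""
    total = sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
                for w, m, s in zip(weights, means, sds))
    return math.log(total)

def mixture_sample(weights, means, sds):
    """Draw one sample from the normal mixture."""
    m, s = random.choices(list(zip(means, sds)), weights=weights)[0]
    return random.gauss(m, s)

def independence_mh(log_target, weights, means, sds, n_iter, x0=0.0):
    """Independence Metropolis-Hastings: the mixture approximates the
    target, so each proposal is a global draw rather than a local move."""
    x = x0
    lx = log_target(x) - mixture_logpdf(x, weights, means, sds)
    draws = []
    for _ in range(n_iter):
        y = mixture_sample(weights, means, sds)
        ly = log_target(y) - mixture_logpdf(y, weights, means, sds)
        if math.log(random.random()) < ly - lx:   # accept/reject step
            x, lx = y, ly
        draws.append(x)
    return draws

# Toy run: standard-normal target, crude two-component mixture proposal.
random.seed(1)
draws = independence_mh(lambda x: -0.5 * x * x,
                        weights=[0.5, 0.5], means=[-1.0, 1.0],
                        sds=[1.5, 1.5], n_iter=5000)
mean = sum(draws) / len(draws)   # close to the target mean of 0
```

The better the mixture approximates the target, as in the sequential filtering densities of the paper, the higher the acceptance rate of these global proposals.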