936 results for adaptive resonance theory
Abstract:
Herbert Simon, the author of the theory of bounded rationality, claimed that the effectiveness of our decision-making is determined by the strategies with which, given our bounded cognitive capacities, we cope with the challenges of a complex environment. One premise of the research programmes building on this theory is that the individual problem-solving process is situation-specific, and this adjustment of strategies to actual situational factors is crucial for the effectiveness and efficiency of decision-making. The decision-maker possesses an "adaptive toolbox", from which he chooses the right decision tools in the right situations. Building on the findings of a qualitative study, the author uses the example of supplier selection to present possible answers to a little-researched question: how does the process of adaptivity work? The study examines how adaptation to the decision situation develops at the cognitive level of decision processes.
Abstract:
This study, with the body of knowledge established in the project management literature in view (though without citing it item by item), explores the specific and arguably typical context in which the project marketing activity of project-based organizations manifests itself. The aim of the study is therefore not the topic of project marketing itself, but primarily its project-specific context. In character the study is speculative; that is, it is essentially not built on conclusions drawn from empirical research results. _____ The author analyses the cognitive level of individual decisions by placing the adaptive decision-maker at the centre of interest. The main question is how adaptive processes evolve and what factors determine the adaptive mechanism. The author builds on his own qualitative study, conducted with the Grounded Theory Methodology in the SME sector. The supplier selection decision is chosen from the wide range of business decisions. From the research results, the two elements of the adaptive mechanism – the metastructure and the attitude set – the process of their evolution and the factors determining this process are presented. The findings constitute a middle-range theory, which can be elaborated further, but they already provide some interesting insights.
Abstract:
This dissertation develops a process improvement method for service operations based on the Theory of Constraints (TOC), a management philosophy that has been shown to be effective in manufacturing for decreasing WIP and improving throughput. While TOC has enjoyed much attention and success in the manufacturing arena, its application to services in general has been limited. The contribution to industry and knowledge is a method for improving global performance measures based on TOC principles. The method proposed in this dissertation is tested using discrete event simulation based on the scenario of the service factory of airline turnaround operations. To evaluate the method, a simulation model of aircraft turn operations of a U.S.-based carrier was built and validated using actual data from airline operations. The model was then adjusted to reflect an application of the Theory of Constraints for determining how to deploy the scarce resource of ramp workers. The results indicate that, given slight modifications to TOC terminology and the development of a method for constraint identification, the Theory of Constraints can be applied with success to services. Bottlenecks in services must be defined as those processes for which the process rates and the amount of work remaining are such that completing the process will not be possible without an increase in the process rate. The bottleneck ratio is used to determine the degree to which a process is a constraint. Simulation results also suggest that redefining performance measures to reflect a global business perspective of reducing costs related to specific flights, versus the operational local-optimum approach of turning all aircraft quickly, results in significant savings to the company. Simulated savings to the airline's annual operating costs equalled 30% of possible current expenses for misconnecting passengers, with a modest increase in worker utilization achieved through a more efficient heuristic of deploying workers to the highest-priority tasks. This dissertation contributes to the literature on service operations by describing a dynamic, adaptive dispatch approach to managing service factory operations similar to airline turnaround operations using the management philosophy of the Theory of Constraints.
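[Editor's note: one plausible reading of the bottleneck ratio described above (the dissertation's exact formula is not given here) is the ratio of the time needed to finish the remaining work at the current process rate to the time still available. A minimal sketch under that assumption, with invented task numbers:]

```python
# Hypothetical sketch of the "bottleneck ratio" idea described above.
# Assumption: a process is a constraint when the time needed to finish its
# remaining work at the current rate exceeds the time available, i.e.
# ratio = (work_remaining / process_rate) / time_remaining > 1.

def bottleneck_ratio(work_remaining: float, process_rate: float,
                     time_remaining: float) -> float:
    """Return >1 when the process cannot finish without a rate increase."""
    if time_remaining <= 0 or process_rate <= 0:
        return float("inf")  # already infeasible at the current rate
    return (work_remaining / process_rate) / time_remaining

# Invented turnaround tasks: (units remaining, units/min, minutes left).
tasks = {"baggage": (40, 2.0, 15), "fueling": (1, 0.1, 15), "catering": (6, 1.0, 15)}

# Deploy scarce ramp workers to the tasks with the highest ratio first.
ranked = sorted(tasks, key=lambda t: bottleneck_ratio(*tasks[t]), reverse=True)
print(ranked)  # ['baggage', 'fueling', 'catering']
```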
Abstract:
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs, which rely predominantly on event-related potentials (ERPs) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially for individuals with ALS, who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs are relatively slow compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
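[Editor's note: as background on why repeated measurements help, averaging N trials of a fixed ERP buried in independent noise reduces the noise standard deviation by a factor of sqrt(N). A minimal illustration, not the thesis's actual pipeline:]

```python
# Minimal illustration (not the authors' pipeline) of why P300 spellers
# repeat stimuli: averaging N trials of a fixed ERP in independent noise
# shrinks the noise standard deviation by sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.8, 200)                    # 800 ms epoch
erp = 5.0 * np.exp(-((t - 0.3) ** 2) / 0.002)   # idealised P300 peak near 300 ms
trials = erp + rng.normal(0, 20.0, size=(64, t.size))  # per-trial SNR << 1

avg = trials.mean(axis=0)  # noise std drops from 20 to 20/sqrt(64) = 2.5
print(f"single-trial noise ~20 uV, 64-trial average ~{20/np.sqrt(64):.1f} uV")
```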
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
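[Editor's note: a hedged sketch of the general idea of exploiting language information, not the thesis's actual models: treat the classifier output as a likelihood and a language model as a prior over characters, then pick the maximum-posterior character. The prior, the scores, and the early-stopping remark are invented for illustration.]

```python
# Invented example of fusing classifier evidence with language statistics.
import numpy as np

chars = np.array(list("ABCDE"))
lm_prior = np.array([0.40, 0.25, 0.15, 0.12, 0.08])    # hypothetical P(char | typed text)
eeg_loglik = np.array([-1.2, -0.4, -2.0, -1.8, -2.5])  # hypothetical classifier scores

posterior = lm_prior * np.exp(eeg_loglik)  # Bayes: prior x likelihood
posterior /= posterior.sum()
print(chars[np.argmax(posterior)], posterior.round(3))
# Data collection could stop early once max(posterior) clears a threshold,
# which is one way "dynamic data collection" can shorten selections.
```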
Abstract:
This letter presents novel behaviour-based tracking of people at low resolution using instantaneous priors mediated by head pose. We extend the Kalman Filter to adaptively combine motion information with an instantaneous prior belief about where the person will go based on where they are currently looking. We apply this new method to pedestrian surveillance, using automatically derived head-pose estimates, although the theory is not limited to head-pose priors. We perform a statistical analysis of pedestrian gazing behaviour and demonstrate tracking performance on a set of simulated and real pedestrian observations. We show that by using instantaneous 'intentional' priors our algorithm significantly outperforms a standard Kalman Filter on comprehensive test data.
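[Editor's note: a simplified one-dimensional sketch of this kind of fusion, not the letter's exact extension: the motion-model prediction and a gaze-derived prior are combined as a product of Gaussians, so precisions add and means are precision-weighted.]

```python
# Simplified 1-D illustration (not the letter's formulation) of fusing a
# Kalman motion prediction with an instantaneous head-pose prior.

def fuse(mu_motion, var_motion, mu_gaze, var_gaze):
    """Product of two Gaussians: precisions add, means precision-weighted."""
    w = (1 / var_motion) + (1 / var_gaze)
    mu = (mu_motion / var_motion + mu_gaze / var_gaze) / w
    return mu, 1 / w

# Pedestrian predicted at x = 2.0 m by the motion model, but looking
# towards x = 3.0 m; a confident gaze estimate pulls the prediction over.
print(fuse(2.0, 0.5, 3.0, 0.25))  # -> (2.666..., 0.1666...)
```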
Abstract:
Preparedness for disaster scenarios is progressively becoming an educational agenda for governments because of diversifying risks and threats worldwide. In disaster-prone Japan, disaster preparedness has been a prioritised national agenda, and preparedness education has been undertaken in both formal schooling and lifelong learning settings. This article examines the politics behind one prevailing policy discourse in the field of disaster preparedness, referred to as ‘the four forms of aid’: ‘kojo [public aid]’, ‘jijo [self-help]’ and ‘gojo/kyojo [mutual aid]’. The study looks at the Japanese case; however, its significance is global, given that neo-liberal governments are increasingly having to deal with a range of disaster situations, whether floods or terrorism, while implementing austerity measures. Drawing on the theory of the adaptiveness of neo-liberalism, the article sheds light on the hybridity of the current Abe government’s politics: a ‘dominant’ neo-liberal economic approach (public aid and self-help) and a ‘subordinate’ moral conservative agenda (mutual aid). It is argued that the four forms of aid are an effective ‘balancing act’, and that kyojo in particular is a powerful legitimator in the hybrid politics. The article concludes that a lifelong and life-wide preparedness model could be developed in Japan, which has taken a social approach to lifelong learning.
Abstract:
We present ideas about creating a next-generation Intrusion Detection System (IDS) based on the latest immunological theories. The central challenge in computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats, in conjunction with ever larger IT systems, urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems (AIS): the Human Immune System (HIS) can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System for our computers? Presumably, such systems would then have the same beneficial properties as the HIS, such as error tolerance, adaptation and self-monitoring. Current AIS have been successful on test systems, but the algorithms rely on self-nonself discrimination, as stipulated in classical immunology. However, immunologists are increasingly finding fault with traditional self-nonself thinking, and a new 'Danger Theory' (DT) is emerging. This new theory suggests that the immune system reacts to threats based on the correlation of various (danger) signals, and it provides a method of 'grounding' the immune response, i.e. linking it directly to the attacker. Little is currently understood of the precise nature and correlation of these signals, and the theory is a topic of hot debate. It is the aim of this research to investigate this correlation and to translate the DT into the realm of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination. It should be noted that we do not intend to defend this controversial theory per se, although as a deliverable this project will add to the body of knowledge in this area. Rather, we are interested in its merits for scaling up AIS applications by overcoming self-nonself discrimination problems.
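[Editor's note: to make the signal-correlation idea concrete, a toy sketch follows; the signals, weights, and threshold are invented for illustration and are not part of the cited research.]

```python
# Hypothetical toy fusion of host "danger" signals, loosely in the spirit
# of the Danger Theory discussion above; all names and numbers are invented.

def danger_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted combination of danger/safe signals; positive means danger."""
    return sum(weights[name] * value for name, value in signals.items())

weights = {"cpu_spike": 0.5, "mem_errors": 0.8, "normal_io": -0.6}  # safe signal is negative
observed = {"cpu_spike": 0.9, "mem_errors": 0.7, "normal_io": 0.2}

score = danger_score(observed, weights)
print("ALERT" if score > 0.5 else "ok", round(score, 2))  # ALERT 0.89
```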
Abstract:
This thesis introduces the L1 Adaptive Control Toolbox, a set of tools implemented in Matlab that aid in the design process of an L1 adaptive controller and enable the user to construct simulations of the closed-loop system to verify its performance. Following a brief review of the existing theory on L1 adaptive controllers, the interface of the toolbox is presented, including a description of the functions accessible to the user. Two novel algorithms for determining the required sampling period of a piecewise constant adaptive law are presented and their implementation in the toolbox is discussed. The detailed description of the structure of the toolbox is provided as well as a discussion of the implementation of the creation of simulations. Finally, the graphical user interface is presented and described in detail, including the graphical design tools provided for the development of the filter C(s). The thesis closes with suggestions for further improvement of the toolbox.
Abstract:
The thesis presents experimental results, simulations, and theory on turbulence excited in magnetized plasmas near the ionosphere's upper hybrid layer. The results include: (1) the first experimental observations of super-small striations (SSS) excited by the High Frequency Active Auroral Research Program (HAARP); (2) the first detection of high-frequency (HF) waves from the HAARP transmitter over a distance of 16x10^3 km; (3) the first simulations indicating that upper hybrid (UH) turbulence excites electron Bernstein waves associated with all nearby gyroharmonics; and (4) simulation results indicating that the resulting bulk electron heating near the UH resonance is caused primarily by electron Bernstein waves parametrically excited near the first gyroharmonic. On the experimental side, we present two sets of experiments performed at the HAARP heating facility in Alaska. In the first set of experiments, we present the first detection of super-small (cm-scale) striations (SSS) at the HAARP facility. We detected density structures smaller than 30 cm for the first time through a combination of satellite and ground-based measurements. In the second set of experiments, we present the results of a novel diagnostic implemented at the Ukrainian Antarctic Station (UAS) Vernadsky. The technique allowed the detection of the HAARP signal at a distance of nearly 16 Mm and established that the HAARP signal was injected into the ionospheric waveguide by direct scattering off dekameter-scale density structures induced by the heater. On the theoretical side, we present results of Vlasov simulations near the upper hybrid layer. These results are consistent with the bulk heating required by previous work on the theory of the formation of descending artificial ionospheric layers (DAILs), and with the new observations of DAILs at HAARP's upgraded effective radiated power (ERP). The simulations include frequency sweeps and demonstrate that the heating changes from bulk heating between gyroharmonics to tail acceleration as the pump frequency is swept through the fourth gyroharmonic. These simulations are in good agreement with experiments. We also incorporate test-particle simulations that isolate the effects of specific wave modes on heating, and we find important contributions from both electron Bernstein waves and upper hybrid waves, the former of which have not yet been detected by experiments and have not been previously explored as a driver of heating. In presenting these results, we analyzed data from HAARP diagnostics and assisted in planning the second round of experiments. We integrated the data into a picture of experiments that demonstrated the detection of SSS, hysteresis effects in stimulated electromagnetic emission (SEE) features, and the direct scattering of the HF pump into the ionospheric waveguide. We performed simulations and analyzed simulation data to build understanding of collisionless heating near the upper hybrid layer, and we used these simulations to show that bulk electron heating at the upper hybrid layer is possible, as required by current theories of DAIL formation. We wrote a test-particle simulation to isolate the effects of electron Bernstein waves and upper hybrid waves on collisionless heating, and integrated this code to work with both the output of the Vlasov simulations and the input for simulations of DAIL formation.
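[Editor's note: for readers outside plasma physics, the upper hybrid layer referred to throughout is the altitude where the pump frequency matches the upper hybrid resonance; the standard cold-plasma relation (textbook background, not stated in the abstract) is:]

```latex
\[
  \omega_{\mathrm{UH}}^{2} = \omega_{pe}^{2} + \omega_{ce}^{2},
  \qquad \text{gyroharmonics: } \omega \approx n\,\omega_{ce}
\]
% \omega_{pe}: electron plasma frequency; \omega_{ce}: electron cyclotron frequency.
```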
Abstract:
The analysis of system calls is one method employed by anomaly detection systems to recognise malicious code execution. Similarities can be drawn between this process and the behaviour of certain cells of the human immune system, and these similarities can be applied to construct an artificial immune system. A recently developed hypothesis in immunology, the Danger Theory, states that our immune system responds to the presence of intruders by sensing molecules belonging to those invaders, plus signals generated by the host indicating danger and damage. We propose the incorporation of this concept into a responsive intrusion detection system, where behavioural information about the system and running processes is combined with information regarding individual system calls.
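[Editor's note: an invented sketch of the proposed combination, not the authors' system: a per-system-call anomaly score (rarity under a learned profile) is escalated only when host-level danger signals are also present. All names and numbers are illustrative.]

```python
# Toy combination of per-call rarity with a host danger context.
import math

profile = {"read": 0.55, "write": 0.30, "execve": 0.01}  # learned call frequencies

def call_surprise(call: str) -> float:
    """Rarity of a call under the normal-behaviour profile (-log p)."""
    return -math.log(profile.get(call, 1e-4))

def respond(call: str, host_danger: float, threshold: float = 3.0) -> bool:
    """Flag only when a rare call coincides with host danger signals."""
    return call_surprise(call) * host_danger > threshold

print(respond("execve", host_danger=0.9))  # True: rare call + danger context
print(respond("execve", host_danger=0.1))  # False: rare call, quiet host
```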
Abstract:
Aims and objectives. To explore the psychosocial needs of patients discharged from intensive care, the extent to which they are captured using existing theory on transitions in care, and the potential role development of critical care outreach, follow-up and liaison services. Background. Intensive care patients are at an increased risk of adverse events, deterioration or death following ward transfer. Nurse-led critical care outreach, follow-up or liaison services have been adopted internationally to prevent these potentially avoidable sequelae. The need to provide patients with psychosocial support during the transition to ward-based care has also been identified, but the evidence base for role development is currently limited. Design and methods. Twenty participants were invited to discuss their experiences of ward-based care as part of a broader study on recovery following prolonged critical illness. Psychosocial distress was a prominent feature of their accounts, prompting secondary data analysis using Meleis et al.’s mid-range theory on experiencing transitions. Results. Participants described a sense of disconnection in relation to profound debilitation and dependency and were often distressed by a perceived lack of understanding, indifference or insensitivity among ward staff to their basic care needs. Negotiating the transition between dependence and independence was identified as a significant source of distress following ward transfer. Participants varied in the extent to which they were able to express their needs and negotiate recovery within professionally mediated boundaries. Conclusion. These data provide new insights into the putative origins of the psychosocial distress that patients experience following ward transfer. Relevance to clinical practice. Meleis et al.’s work has resonance in terms of explicating intensive care patients’ experiences of psychosocial distress throughout the transition to general ward-based care, such that the future role development of critical care outreach, follow-up and liaison services may be more theoretically informed.
Abstract:
As climate change continues to impact socio-ecological systems, tools that assist conservation managers to understand vulnerability and target adaptations are essential. Quantitative assessments of vulnerability are rare because available frameworks are complex and lack guidance for dealing with data limitations and integrating across scales and disciplines. This paper describes a semi-quantitative method for assessing vulnerability to climate change that integrates socio-ecological factors to address management objectives and support decision-making. The method applies a framework first adopted by the Intergovernmental Panel on Climate Change and uses a structured 10-step process. The scores for each framework element are normalized and multiplied to produce a vulnerability score, and the assessed components are then ranked from high to low vulnerability. Sensitivity analyses determine which indicators most influence the analysis and the resultant decision-making process, so that data quality for these indicators can be reviewed to increase robustness. Prioritisation of components for conservation considers economic, social and cultural values alongside the vulnerability rankings to target actions that reduce vulnerability to climate change by decreasing exposure or sensitivity and/or increasing adaptive capacity. This framework provides practical decision support and has been applied to marine ecosystems and fisheries, with two case applications provided as examples: (1) food security in Pacific Island nations under climate-driven fish declines, and (2) fisheries in the Gulf of Carpentaria, northern Australia. The step-wise process outlined here is broadly applicable and can be undertaken with minimal resources using existing data, thereby having great potential to inform adaptive natural resource management in diverse locations.
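[Editor's note: a hedged sketch of the scoring step described above. The exposure/sensitivity/adaptive-capacity elements follow the IPCC framing the paper cites, but treating low adaptive capacity as (1 - capacity) and all the numbers below are assumptions for illustration, not the paper's data.]

```python
# Invented example of normalising element scores and multiplying them into
# a vulnerability score, then ranking components from high to low.

def vulnerability(exposure, sensitivity, adaptive_capacity, scale=10.0):
    """Normalise raw 0-10 scores to [0, 1]; low capacity raises the score."""
    e, s, ac = exposure / scale, sensitivity / scale, adaptive_capacity / scale
    return e * s * (1.0 - ac)

components = {            # (exposure, sensitivity, adaptive capacity)
    "reef fishery": (8, 7, 3),
    "prawn trawl":  (5, 6, 6),
    "mud crab":     (6, 4, 8),
}
ranked = sorted(components, key=lambda c: vulnerability(*components[c]), reverse=True)
for name in ranked:
    print(name, round(vulnerability(*components[name]), 3))
# reef fishery 0.392 / prawn trawl 0.12 / mud crab 0.048
```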