Abstract:
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented: a progression of ever more powerful languages is described, complete implementations are presented, and design difficulties and alternatives are discussed.
The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
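The local propagation mechanism described above can be sketched in a few lines of Python. This is only an illustration of the idea, not code from the dissertation; the names `Connector` and `Adder` are invented for the example. Note how the same `Adder` constraint is used "backwards": given `x` and the total `z`, it deduces `y`.

```python
class Connector:
    """A wire that holds a value and notifies attached constraints when set."""
    def __init__(self, name):
        self.name, self.value, self.constraints = name, None, []

    def set(self, value, source=None):
        if self.value is not None:
            return  # local rule: never overwrite an already-deduced value
        self.value = value
        for c in self.constraints:
            if c is not source:
                c.propagate()

class Adder:
    """Constraint a + b = total; deduces whichever terminal is missing."""
    def __init__(self, a, b, total):
        self.a, self.b, self.total = a, b, total
        for wire in (a, b, total):
            wire.constraints.append(self)
        self.propagate()

    def propagate(self):
        a, b, t = self.a.value, self.b.value, self.total.value
        if a is not None and b is not None and t is None:
            self.total.set(a + b, self)
        elif a is not None and t is not None and b is None:
            self.b.set(t - a, self)
        elif b is not None and t is not None and a is None:
            self.a.set(t - b, self)

# x + y = z used in more than one direction: knowing x and z deduces y
x, y, z = Connector("x"), Connector("y"), Connector("z")
Adder(x, y, z)
x.set(3)
z.set(10)
print(y.value)  # deduced: 7
```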
Abstract:
T. Boongoen and Q. Shen. Semi-Supervised OWA Aggregation for Link-Based Similarity Evaluation and Alias Detection. Proceedings of the 18th International Conference on Fuzzy Systems (FUZZ-IEEE'09), pp. 288-293, 2009. Sponsorship: EPSRC
Abstract:
The best-effort nature of the Internet poses a significant obstacle to the deployment of many applications that require guaranteed bandwidth. In this paper, we present a novel approach that enables two edge/border routers-which we call Internet Traffic Managers (ITM)-to use an adaptive number of TCP connections to set up a tunnel of desirable bandwidth between them. The number of TCP connections that comprise this tunnel is elastic in the sense that it increases/decreases in tandem with competing cross traffic to maintain a target bandwidth. An origin ITM would then schedule incoming packets from an application requiring guaranteed bandwidth over that elastic tunnel. Unlike many proposed solutions that aim to deliver soft QoS guarantees, our elastic-tunnel approach does not require any support from core routers (as with IntServ and DiffServ); it is scalable in the sense that core routers do not have to maintain per-flow state (as with IntServ); and it is readily deployable within a single ISP or across multiple ISPs. To evaluate our approach, we develop a flow-level control-theoretic model to study the transient behavior of established elastic TCP-based tunnels. The model captures the effect of cross-traffic connections on our bandwidth allocation policies. Through extensive simulations, we confirm the effectiveness of our approach in providing soft bandwidth guarantees. We also outline our kernel-level ITM prototype implementation.
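The elastic-tunnel idea can be illustrated with a toy flow-level model: each TCP connection (tunnel or cross traffic) receives a fair share of the bottleneck, and a controller adds or removes tunnel connections to track a target bandwidth. This is a much-simplified sketch under that fair-share assumption, not the paper's control-theoretic model; all parameter names are illustrative.

```python
def simulate_tunnel(capacity, target, cross_traffic, n0=1, steps=50):
    """Flow-level sketch: a tunnel of n TCP flows shares a link of given
    capacity with `cross_traffic` competing flows, each flow getting an
    equal share. The controller adjusts n to track `target` bandwidth."""
    n = n0
    history = []
    for _ in range(steps):
        share = capacity / (n + cross_traffic)  # per-flow fair share
        tunnel_bw = n * share
        history.append(tunnel_bw)
        if tunnel_bw < target:
            n += 1                               # open another connection
        elif tunnel_bw - share >= target:
            n = max(1, n - 1)                    # close a surplus connection
    return n, history

# 100 Mb/s link, 10 cross-traffic flows, 60 Mb/s target -> settles at n = 15
n, hist = simulate_tunnel(capacity=100.0, target=60.0, cross_traffic=10)
print(n, round(hist[-1], 1))
```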
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
Abstract:
The increased diversity of Internet application requirements has spurred recent interests in flexible congestion control mechanisms. Window-based congestion control schemes use increase rules to probe available bandwidth, and decrease rules to back off when congestion is detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and packet loss rate. In this paper, we propose a novel window-based congestion control algorithm called SIMD (Square-Increase/Multiplicative-Decrease). Contrary to previous memory-less controls, SIMD utilizes history information in its control rules. It uses multiplicative decrease but the increase in window size is in proportion to the square of the time elapsed since the detection of the last loss event. Thus, SIMD can efficiently probe available bandwidth. Nevertheless, SIMD is TCP-friendly as well as TCP-compatible under RED, and it has much better convergence behavior than TCP-friendly AIMD and binomial algorithms proposed recently.
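The shape of SIMD's control rules can be sketched as follows: a multiplicative decrease on loss, then window growth proportional to the square of the time elapsed since the loss. The constants `alpha` and `beta`, and the `sqrt` scaling factor, are illustrative choices for the sketch, not the paper's exact parameterization.

```python
import math

def simd_window(w_loss, beta=0.5, alpha=0.2, steps=10):
    """Sketch of SIMD-style control: multiplicative decrease after a loss
    at window w_loss, then square increase in the time since the loss."""
    w0 = w_loss * (1 - beta)  # multiplicative decrease
    trace = []
    for t in range(steps):
        # window grows with the square of elapsed time t since the loss
        w = w0 + alpha * math.sqrt(w_loss) * t * t
        trace.append(w)
    return trace

trace = simd_window(w_loss=16.0)
print(trace[0], trace[3])  # slow right after the loss, accelerating later
```

Because growth starts slowly and accelerates, SIMD is smooth just after a loss yet probes aggressively when bandwidth stays unclaimed, which is the history-dependent behavior the abstract describes.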
Abstract:
The increasing diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The control rules are parameterized so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. This paper presents a comprehensive study of a new spectrum of window-based congestion controls, which are TCP-friendly as well as TCP-compatible under RED. Our controls utilize history information in their control rules. By doing so, they improve the transient behavior, compared to recently proposed slowly-responsive congestion controls such as general AIMD and binomial controls. Our controls can achieve better tradeoffs among smoothness, aggressiveness, and responsiveness, and they can achieve faster convergence. We demonstrate analytically and through extensive ns simulations the steady-state and transient behavior of several instances of this new spectrum.
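The TCP-friendliness criterion both abstracts rely on is conventionally expressed through TCP's throughput-loss relationship, the so-called square-root formula (shown here for standard TCP without timeouts):

```latex
\lambda \;\approx\; \frac{1}{RTT}\sqrt{\frac{3}{2p}}
```

where $\lambda$ is long-run throughput, $RTT$ the round-trip time, and $p$ the loss rate. A window-based control is TCP-friendly if its steady-state throughput follows the same $\lambda \propto 1/(RTT\sqrt{p})$ scaling, whatever its transient increase/decrease rules.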
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned.
Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
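The core VITE dynamics described above, DV = TPC − PPC gated by GO, with the PPC integrating the (DV)·(GO) product until the DV reaches zero, can be written as a few lines of numerical integration. This sketch uses scalars in place of the model's neural population vectors; step size and GO value are illustrative.

```python
def vite_reach(tpc, ppc0, go=1.0, dt=0.01, steps=2000):
    """Sketch of the VITE circuit's core dynamics: the Difference Vector
    DV = TPC - PPC is gated by the GO signal, and the PPC integrates the
    (DV)*(GO) product until DV reaches zero (PPC equals TPC)."""
    ppc = ppc0
    for _ in range(steps):
        dv = tpc - ppc       # Difference Vector
        ppc += go * dv * dt  # PPC integrates (DV)*(GO)
    return ppc

# the present position converges smoothly onto the target position
final = vite_reach(tpc=1.0, ppc0=0.0)
print(round(final, 3))
```

A larger GO signal speeds up the same trajectory without changing its endpoint, which is how the model separates speed control from target specification.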
Abstract:
Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase the customers' satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important for them to catch and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum of weights-based user model and the related preference relations. 
Product configuration, scheduling a meeting and the construction of autonomous agents are among several artificial intelligence tasks that involve a process of constrained optimization, that is, optimization of behavior or options subject to given constraints with regards to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as the branch and bound technique, point at directing the search towards the best assignments, thus allowing the bounding functions to prune more branches in the search tree. Several constrained optimization problems may exhibit dominance relations. These dominance relations can be particularly useful in constrained optimization problems as they can instigate new ways (rules) of pruning non optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using the comparative preferences. In this thesis, we develop a set of pruning rules used in the branch and bound technique to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules that we have developed and their different combinations have varying impact on the performance of the branch and bound technique.
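The role pruning plays in branch and bound can be illustrated with a generic sketch. This is not the thesis's preference-based dominance algorithm; it is a standard bound-based pruning example (a small knapsack-style selection problem with invented data) showing the mechanism the abstract describes: a branch is cut as soon as its optimistic bound cannot dominate the incumbent.

```python
def branch_and_bound(values, weights, capacity):
    """Maximize total value under a weight cap, pruning any branch whose
    optimistic bound cannot beat the best solution found so far."""
    best = [0]
    n = len(values)

    def bound(i, value):
        # optimistic bound: pretend every remaining item can be taken
        return value + sum(values[i:])

    def search(i, value, weight):
        if weight > capacity:
            return  # infeasible branch
        if value > best[0]:
            best[0] = value  # new incumbent
        if i == n or bound(i, value) <= best[0]:
            return  # pruned: branch is dominated by the incumbent
        search(i + 1, value + values[i], weight + weights[i])  # take item i
        search(i + 1, value, weight)                           # skip item i

    search(0, 0, 0)
    return best[0]

print(branch_and_bound([6, 5, 4], [3, 2, 3], 5))  # items 0 and 1 -> 11
```

Dominance rules over comparative preferences slot into the same `bound`-style test, replacing the numeric comparison with (more expensive) pairwise preference comparisons, which is why reducing their number matters.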
Abstract:
Surface modification of silicon with organic monolayers tethered to the surface by different linkers is an important process in realizing future (opto-)electronic devices. Understanding the role played by the nature of the linking group and the chain length on the adsorption structures and electronic properties of these assemblies is vital to advance this technology. This Thesis is a study of such properties and contributes in particular to a microscopic understanding of induced changes in the work function of experimentally studied functionalized silicon surfaces. Using first-principles density functional theory (DFT), as a first step, we provide predictions for chemical trends in the work function of hydrogenated silicon (111) surfaces modified with various terminations. For nonpolar terminating atomic species such as F, Cl, Br, and I, the change in the work function is directly proportional to the amount of charge transferred from the surface, thus relating to the difference in electronegativity of the adsorbate and silicon atoms. The change is a monotonic function of coverage in this case, and the work function increases with increasing electronegativity. Polar species such as −TeH, −SeH, −SH, −OH, −NH2, −CH3, and −BH2 do not follow this trend due to the interaction of their dipole with the induced electric field at the surface. In this case, the magnitude and sign of the surface dipole moment need to be considered in addition to the bond dipole to generally describe the change in work function. Compared to hydrogenated surfaces, there is a slight increase in the work function of H:Si(111)-XH, where X = Te, Se, and S, whereas a reduction is observed for surfaces covered with −OH, −CH3, and −NH2. Next, we study the hydrogen passivated Si(111) surface modified with alkyl chains of the general formula H:Si–(CH2)n–CH2 and H:Si–X–(CH2)n–CH3, where X = NH, O, S and n = (0, 1, 3, 5, 7, 9, 11), at half coverage.
For (X)–Hexyl and (X)–Dodecyl functionalization, we also examined various coverages up to full monolayer grafting in order to validate the result of half covered surface and the linker effect on the coverage. We find that it is necessary to take into account the van der Waals interaction between the alkyl chains. The strongest binding is for the oxygen linker, followed by S, N, and C, irrespective of chain length. The result revealed that the sequence of the stability is independent of coverage; however, linkers other than carbon can shift the optimum coverage considerably and allow further packing density. For all linkers apart from sulfur, structural properties, in particular, surface-linker-chain angles, saturate to a single value once n > 3. For sulfur, we identify three regimes, namely, n = 0–3, n = 5–7, and n = 9–11, each with its own characteristic adsorption structures. Where possible, our computational results are shown to be consistent with the available experimental data and show how the fundamental structural properties of modified Si surfaces can be controlled by the choice of linking group and chain length. Later we continue by examining the work function tuning of H:Si(111) over a range of 1.73 eV through adsorption of alkyl monolayers with general formula -[Xhead-group]-(CnH2n)-[Xtail-group], X = O(H), S(H), NH(2). The work function is practically converged at 4 carbons (8 for oxygen), for head-group functionalization. For tail-group functionalization and with both head- and tail-groups, there is an odd-even effect in the behavior of the work function, with peak-to-peak amplitudes of up to 1.7 eV in the oscillations. This behavior is explained through the orientation of the terminal-group's dipole. The shift in the work function is largest for NH2-linked and smallest for SH-linked chains and is rationalized in terms of interface dipoles. 
Our study reveals that the choice of the head- and/or tail-groups effectively changes the impact of the alkyl chain length on the work function tuning using self-assembled monolayers, and this is an important advance in utilizing hybrid functionalized Si surfaces. Bringing together the understanding gained from studying single type functionalization of H:Si(111) with different alkyl chains, and bearing in mind how to utilize head-group, tail-group or both as well as monolayer coverage, in the final part of this Thesis we study functionalized H:Si(111) with binary SAMs. Aiming at enhancing work function adjustment together with SAM stability and coverage, we choose a range of terminations and linker-chains denoted as –X–(Alkyl) with X = CH3, O(H), S(H), NH(2) and investigate the stability and work function of various binary components grafted onto the H:Si(111) surface. Using binary functionalization with -[NH(2)/O(H)/S(H)]-[Hexyl/Dodecyl] we show that the work function can be tuned within the interval of 3.65-4.94 eV and, furthermore, that the SAM's stability is enhanced. Although direct Si-C grafted SAMs are less favourable compared to their counterparts with O, N or S linkage, regardless of the ratio, binary functionalized alkyl monolayers with X-alkyl (X = NH, O) are always more stable than single type alkyl functionalization at the same coverage. Our results indicate that it is possible to go beyond the optimum coverage of pure alkyl functionalized SAMs (50%) with the correct choice of linker. This is very important since densely packed monolayers have fewer defects and deliver higher efficiency. Our results indicate that binary anchoring can modify the charge injection and therefore bond stability while preserving the interface electronic structure.
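The coverage and dipole-orientation dependence of the work-function shifts discussed above is conventionally rationalized with the Helmholtz equation for a dipole layer (standard electrostatics, stated here for context rather than taken from the Thesis):

```latex
\Delta\Phi \;=\; \frac{e\, N\, \mu_{\perp}}{\varepsilon_0\, \varepsilon_r}
```

where $N$ is the areal density of adsorbed molecules, $\mu_{\perp}$ the component of the molecular dipole normal to the surface, and $\varepsilon_r$ an effective dielectric constant of the monolayer. Both the coverage dependence and the odd-even oscillations follow naturally: $N$ scales the shift, while the alternating tilt of the terminal group flips the sign and magnitude of $\mu_{\perp}$.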
Abstract:
BACKGROUND: Outcome assessment can support the therapeutic process by providing a way to track symptoms and functionality over time, providing insights to clinicians and patients, as well as offering a common language to discuss patient behavior/functioning. OBJECTIVES: In this article, we examine the patient-based outcome assessment (PBOA) instruments that have been used to determine outcomes in acupuncture clinical research and highlight measures that are feasible, practical, economical, reliable, valid, and responsive to clinical change. The aims of this review were to assess and identify the commonly available PBOA measures, describe a framework for identifying appropriate sets of measures, and address the challenges associated with these measures and acupuncture. Instruments were evaluated in terms of feasibility, practicality, economy, reliability, validity, and responsiveness to clinical change. METHODS: This study was a systematic review. A total of 582 abstracts were reviewed using PubMed (from inception through April 2009). RESULTS: A total of 582 citations were identified. After screening of title/abstract, 212 articles were excluded. From the remaining 370 citations, 258 manuscripts identified explicit PBOA; 112 abstracts did not include any PBOA. The five most common PBOA instruments identified were the Visual Analog Scale, Symptom Diary, Numerical Pain Rating Scales, SF-36, and depression scales such as the Beck Depression Inventory. CONCLUSIONS: The way a questionnaire or scale is administered can have an effect on the outcome. Also, developing and validating outcome measures can be costly and difficult. Therefore, reviewing the literature on existing measures before creating or modifying PBOA instruments can significantly reduce the burden of developing a new measure.
Abstract:
BACKGROUND: Outpatient palliative care, an evolving delivery model, seeks to improve continuity of care across settings and to increase access to services in hospice and palliative medicine (HPM). It can provide a critical bridge between inpatient palliative care and hospice, filling the gap in community-based supportive care for patients with advanced life-limiting illness. Low capacities for data collection and quantitative research in HPM have impeded assessment of the impact of outpatient palliative care. APPROACH: In North Carolina, a regional database for community-based palliative care has been created through a unique partnership between a HPM organization and academic medical center. This database flexibly uses information technology to collect patient data, entered at the point of care (e.g., home, inpatient hospice, assisted living facility, nursing home). HPM physicians and nurse practitioners collect data; data are transferred to an academic site that assists with analyses and data management. Reports to community-based sites, based on data they provide, create a better understanding of local care quality. CURRENT STATUS: The data system was developed and implemented over a 2-year period, starting with one community-based HPM site and expanding to four. Data collection methods were collaboratively created and refined. The database continues to grow. Analyses presented herein examine data from one site and encompass 2572 visits from 970 new patients, characterizing the population, symptom profiles, and change in symptoms after intervention. CONCLUSION: A collaborative regional approach to HPM data can support evaluation and improvement of palliative care quality at the local, aggregated, and statewide levels.
Abstract:
Externalizing behavior problems of 124 adolescents were assessed across Grades 7-11. In Grade 9, participants were also assessed across social-cognitive domains after imagining themselves as the object of provocations portrayed in six videotaped vignettes. Participants responded to vignette-based questions representing multiple processes of the response decision step of social information processing. Phase 1 of our investigation supported a two-factor model of the response evaluation process of response decision (response valuation and outcome expectancy). Phase 2 showed significant relations between the set of these response decision processes, as well as response selection, measured in Grade 9 and (a) externalizing behavior in Grade 9 and (b) externalizing behavior in Grades 10-11, even after controlling externalizing behavior in Grades 7-8. These findings suggest that on-line behavioral judgments about aggression play a crucial role in the maintenance and growth of aggressive response tendencies in adolescence.
Abstract:
Community-based management and the establishment of marine reserves have been advocated worldwide as means to overcome overexploitation of fisheries. Yet, researchers and managers are divided regarding the effectiveness of these measures. The "tragedy of the commons" model is often accepted as a universal paradigm, which assumes that unless managed by the State or privatized, common-pool resources are inevitably overexploited due to conflicts between the self-interest of individuals and the goals of a group as a whole. Under this paradigm, the emergence and maintenance of effective community-based efforts that include cooperative risky decisions, such as the establishment of marine reserves, could not occur. In this paper, we question these assumptions and show that outcomes of commons dilemmas can be complex and scale-dependent. We studied the evolution and effectiveness of a community-based management effort to establish, monitor, and enforce a marine reserve network in the Gulf of California, Mexico. Our findings build on social and ecological research before (1997-2001), during (2002) and after (2003-2004) the establishment of marine reserves, which included participant observation in >100 fishing trips and meetings, interviews, as well as fishery dependent and independent monitoring. We found that locally crafted and enforced harvesting rules led to a rapid increase in resource abundance. Nevertheless, news about this increase spread quickly at a regional scale, resulting in poaching from outsiders and a subsequent rapid cascading effect on fishing resources and locally-designed rule compliance. We show that cooperation for management of common-pool fisheries, in which marine reserves form a core component of the system, can emerge, evolve rapidly, and be effective at a local scale even in recently organized fisheries. Stakeholder participation in monitoring, where there is rapid feedback of the system's response, can play a key role in reinforcing cooperation.
However, without cross-scale linkages with higher levels of governance, increase of local fishery stocks may attract outsiders who, if not restricted, will overharvest and threaten local governance. Fishers and fishing communities require incentives to maintain their management efforts. Rewarding local effective management with formal cross-scale governance recognition and support can generate these incentives.
Abstract:
BACKGROUND: Computer simulations are of increasing importance in modeling biological phenomena. Their purpose is to predict behavior and guide future experiments. The aim of this project is to model the early immune response to vaccination by an agent based immune response simulation that incorporates realistic biophysics and intracellular dynamics, and which is sufficiently flexible to accurately model the multi-scale nature and complexity of the immune system, while maintaining the high performance critical to scientific computing. RESULTS: The Multiscale Systems Immunology (MSI) simulation framework is an object-oriented, modular simulation framework written in C++ and Python. The software implements a modular design that allows for flexible configuration of components and initialization of parameters, thus allowing simulations to be run that model processes occurring over different temporal and spatial scales. CONCLUSION: MSI addresses the need for a flexible and high-performing agent based model of the immune system.
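The agent-based pattern underlying such a framework can be sketched in a toy form: agents carry state and a step rule, the scheduler advances every agent each tick, and interactions are resolved per time step. This is a deliberately minimal illustration, far simpler than the MSI framework itself; the agent kinds, the random-walk motion, and the "clearance on contact" rule are all invented for the sketch.

```python
import random

class Agent:
    """Minimal agent: a kind label, a grid position, and a step rule."""
    def __init__(self, kind, pos):
        self.kind, self.pos = kind, pos

    def step(self, rng):
        # a lattice random walk stands in for real biophysical motion models
        self.pos = (self.pos[0] + rng.choice((-1, 0, 1)),
                    self.pos[1] + rng.choice((-1, 0, 1)))

def run(n_tcells=5, n_antigen=3, steps=100, seed=42):
    """Toy agent-based loop: T-cell agents 'clear' any antigen agent
    occupying the same grid cell at the end of a time step."""
    rng = random.Random(seed)
    tcells = [Agent("tcell", (0, 0)) for _ in range(n_tcells)]
    antigens = [Agent("antigen", (rng.randint(-5, 5), rng.randint(-5, 5)))
                for _ in range(n_antigen)]
    for _ in range(steps):
        for agent in tcells + antigens:
            agent.step(rng)
        occupied = {t.pos for t in tcells}
        antigens = [a for a in antigens if a.pos not in occupied]
        if not antigens:
            break
    return len(antigens)

remaining = run()
print(remaining)
```

A modular framework like MSI generalizes each piece independently: the motion model, the interaction rules, and the per-agent intracellular dynamics become pluggable components configured at initialization.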
Abstract:
Humans make decisions in highly complex physical, economic and social environments. In order to adaptively choose, the human brain has to learn about, and attend to, sensory cues that provide information about the potential outcome of different courses of action. Here I present three event-related potential (ERP) studies, in which I evaluated the role of the interactions between attention and reward learning in economic decision-making. I focused my analyses on three ERP components (Chap. 1): (1) the N2pc, an early lateralized ERP response reflecting the lateralized focus of visual attention; (2) the feedback-related negativity (FRN), which reflects the process by which the brain extracts utility from feedback; and (3) the P300 (P3), which reflects the amount of attention devoted to feedback-processing. I found that learned stimulus-reward associations can influence the rapid allocation of attention (N2pc) towards outcome-predicting cues, and that differences in this attention allocation process are associated with individual differences in economic decision performance (Chap. 2). Such individual differences were also linked to differences in neural responses reflecting the amount of attention devoted to processing monetary outcomes (P3) (Chap. 3). Finally, the relative amount of attention devoted to processing rewards for oneself versus others (as reflected by the P3) predicted both charitable giving and self-reported engagement in real-life altruistic behaviors across individuals (Chap. 4). Overall, these findings indicate that attention and reward processing interact and can influence each other in the brain. Moreover, they indicate that individual differences in economic choice behavior are associated both with biases in the manner in which attention is drawn towards sensory cues that inform subsequent choices, and with biases in the way that attention is allocated to learn from the outcomes of recent choices.