969 results for pattern matching protocols
Abstract:
It is a neural network truth universally acknowledged, that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission (a product rule, a capacity rule, and a threshold rule) are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight.
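A minimal sketch of the "atrophy due to disuse" update described above, assuming the simple product-rule transmission; the function name, step size, and array layout are illustrative, not the paper's notation:

```python
import numpy as np

def distributed_outstar_step(w, source_activity, target_activity, lr=0.1):
    """One hedged sketch of a distributed-outstar learning step.

    w:               (n_source, n_target) weight matrix
    source_activity: (n_source,) activity pattern across the source field
    target_activity: (n_target,) activity pattern across the target field
    """
    # Product-rule transmission: the signal along each path is source activity times weight.
    path_signal = source_activity[:, None] * w           # (n_source, n_target)
    total_signal = path_signal.sum(axis=0)                # total signal reaching each target node

    # "Atrophy due to disuse": a weight decays in joint proportion to the transmitted
    # path signal and the degree of disuse of its target node, so the total signal
    # to a target node drifts toward that node's activity level.
    disuse = total_signal - target_activity               # positive when signal exceeds activity
    w_new = w - lr * path_signal * disuse[None, :]
    return np.clip(w_new, 0.0, None)                      # keep weights non-negative
```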
Abstract:
Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP synthesize fuzzy logic and ART networks by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. Complement coding preserves individual feature amplitudes while normalizing the input vector, and prevents a potential category proliferation problem. Adaptive weights start equal to one and can only decrease in time. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. A matching criterion controls search, determining how close an input and a learned representation must be for a category to accept the input as a new exemplar. A vigilance parameter (ρ) sets the matching criterion and determines how finely or coarsely an ART system will partition inputs. High vigilance creates fine categories, represented by small boxes. Learning stops when boxes cover the input space. With fast learning, fixed vigilance, and an arbitrary input set, learning stabilizes after just one presentation of each input. A fast-commit slow-recode option allows rapid learning of rare events yet buffers memories against recoding by noisy inputs. Fuzzy ARTMAP unites two fuzzy ART networks to solve supervised learning and prediction problems. A Minimax Learning Rule controls ARTMAP category structure, conjointly minimizing predictive error and maximizing code compression. Low vigilance maximizes compression but may therefore cause very different inputs to make the same prediction. When this coarse grouping strategy causes a predictive error, an internal match tracking control process increases vigilance just enough to correct the error. ARTMAP automatically constructs a minimal number of recognition categories, or "hidden units," to meet accuracy criteria. An ARTMAP voting strategy improves prediction by training the system several times using different orderings of the input set. Voting assigns confidence estimates to competing predictions given small, noisy, or incomplete training sets. ARPA benchmark simulations illustrate fuzzy ARTMAP dynamics. The chapter also compares fuzzy ARTMAP to Salzberg's Nested Generalized Exemplar (NGE) and to Simpson's Fuzzy Min-Max Classifier (FMMC), and concludes with a summary of ART and ARTMAP applications.
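A minimal sketch of the fuzzy ART matching cycle described above, showing complement coding, the component-wise minimum, and the vigilance test; the choice parameter, learning rate, and function names are illustrative assumptions rather than the chapter's notation:

```python
import numpy as np

def complement_code(a):
    """Complement coding: represent input a (features in [0, 1]) as (a, 1 - a)."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
    """One hedged sketch of a fuzzy ART input presentation.

    I:       complement-coded input of length 2M
    weights: list of category weight vectors, each length 2M (start at all ones)
    rho:     vigilance parameter; higher values create finer categories
    Returns the index of the chosen (possibly newly committed) category.
    """
    norm_I = I.sum()
    # Category choice: fuzzy subsethood of each weight vector in the input.
    scores = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(scores)[::-1]:
        match = np.minimum(I, weights[j]).sum() / norm_I
        if match >= rho:                        # vigilance (matching) criterion satisfied
            # Learning only shrinks weights toward I ∧ w (beta=1 is fast learning).
            weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
            return j
    weights.append(I.copy())                    # no existing category matched: commit a new one
    return len(weights) - 1
```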
Abstract:
This article describes a neural pattern generator based on a cooperative-competitive feedback neural network. The two-channel version of the generator supports both in-phase and anti-phase oscillations. A scalar arousal level controls both the oscillation phase and frequency. As arousal increases, oscillation frequency increases, and bifurcations from in-phase to anti-phase oscillations, or from anti-phase to in-phase oscillations, can occur. Coupled versions of the model exhibit oscillatory patterns which correspond to the gaits used in locomotion and other oscillatory movements by various animals.
Abstract:
This paper describes the design of a self-organizing, hierarchical neural network model of unsupervised serial learning. The model learns to recognize, store, and recall sequences of unitized patterns, using either short-term memory (STM) or both STM and long-term memory (LTM) mechanisms. Timing information is learned, and recall (both from STM and from LTM) is performed with a learned rhythmical structure. The network, which bears similarities to ART (Carpenter & Grossberg 1987a), learns to map temporal sequences to unitized patterns, which makes it suitable for hierarchical operation. It is therefore capable of self-organizing codes for sequences of sequences. The capacity is limited only by the number of nodes provided. Selected simulation results are reported to illustrate system properties.
Abstract:
Embedded wireless sensor network (WSN) systems have been developed and used in a wide variety of applications, such as local automatic environmental monitoring; medical applications analysing aspects of fitness and health; energy metering and management in the built environment; and traffic pattern analysis and control. While the purpose and functions of embedded wireless sensor networks have a myriad of applications and possibilities in the future, a particular implementation of these ambient sensors is in the area of wearable electronics incorporated into body area networks and everyday garments. Some of these systems will incorporate inertial sensing devices and other physical and physiological sensors, with a particular focus on the application areas of athlete performance monitoring and e-health. Important physical requirements for wearable antennas are that they be lightweight, small, and robust, and that they use materials compatible with a standard manufacturing process, such as flexible polyimide or FR4 material, where low-cost consumer-market-oriented products are being produced. The substrate material is required to be low loss and flexible, and often necessitates the use of thin dielectric and metallization layers. This paper describes the development of such a wearable, flexible antenna system for ISM band wearable wireless sensor networks. The material selected for the development of the wearable system in question is DE104i, characterized by a dielectric constant of 3.8 and a loss tangent of 0.02. The antenna feed line is a 50 Ohm microstrip topology suitable for use with standard, high-performance and low-cost SMA-type RF connector technologies, widely used for these types of applications. The desired centre frequency is the 2.4 GHz ISM band, to be compatible with IEEE 802.15.4 Zigbee communication protocols and the Bluetooth standard, which operate in this band.
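As a rough illustration of the feed-line dimensioning implied above, the sketch below estimates the trace width of a 50 Ohm microstrip on a dielectric with εr = 3.8 using the standard Hammerstad synthesis formulas; the substrate height is an assumed example value, not a figure taken from the paper:

```python
import math

def microstrip_width(z0=50.0, eps_r=3.8, h_mm=1.6):
    """Estimate microstrip trace width for a target impedance (Hammerstad synthesis).

    z0:     target characteristic impedance in ohms
    eps_r:  substrate relative permittivity (3.8 for the DE104i material cited)
    h_mm:   substrate height in mm (assumed example value)
    """
    # Narrow-strip branch (valid for W/h < 2).
    A = z0 / 60.0 * math.sqrt((eps_r + 1) / 2) \
        + (eps_r - 1) / (eps_r + 1) * (0.23 + 0.11 / eps_r)
    w_h = 8 * math.exp(A) / (math.exp(2 * A) - 2)
    if w_h > 2:
        # Wide-strip branch (valid for W/h > 2).
        B = 377 * math.pi / (2 * z0 * math.sqrt(eps_r))
        w_h = (2 / math.pi) * (B - 1 - math.log(2 * B - 1)
              + (eps_r - 1) / (2 * eps_r) * (math.log(B - 1) + 0.39 - 0.61 / eps_r))
    return w_h * h_mm

print(f"~{microstrip_width():.2f} mm wide trace for 50 ohms")
```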
Abstract:
My original contribution to knowledge is the creation of a WSN system that further improves the functionality of existing technology, whilst achieving lower power consumption and improved reliability. This thesis concerns the development of industrially applicable wireless sensor networks that are low-power, reliable, and latency-aware. This work aims to improve upon the state of the art in networking protocols for low-rate multi-hop wireless sensor networks. Presented is an application-driven co-design approach to the development of such a system. Starting with the physical layer, hardware was designed to meet industry-specified requirements. The end system required further investigation of communications protocols that could achieve the derived application-level system performance specifications. A CSMA/TDMA hybrid MAC protocol was developed, leveraging numerous techniques from the literature and novel optimisations. It extends the current art with respect to power consumption for radio duty-cycled applications and reliability in dense wireless sensor networks, whilst respecting latency bounds. Specifically, it provides 100% packet delivery for 11 concurrent senders transmitting towards a single radio duty-cycled sink node. This represents an order of magnitude improvement over the comparable art, considering MAC-only mechanisms. A novel latency-aware routing protocol was developed to exploit the developed hardware and MAC protocol. It is based on a new weighted objective function with multiple fail-safe mechanisms to ensure extremely high reliability and robustness. The system was empirically evaluated on two hardware platforms: the application-specific custom 868 MHz node and the de facto community-standard TelosB. Extensive empirical comparative performance analyses were conducted against the relevant art to demonstrate the advances made. The resultant system is capable of exceeding a 10-year battery life and exhibits reliability performance in excess of 99.9%.
Abstract:
The objective of this project was to prepare a range of 4-substituted 3(2H)-furanones, and to investigate the relationship between their molecular structures and photoluminescence properties. The effects of substituents and of the conjugated linker unit were also investigated. After generation of the key 3(2H)-furanone heterocycle, extension of the conjugated framework at the C-4 position was achieved through Pd(0)-catalysed coupling reactions. Chapter one of the thesis comprises a review of the relevant literature and is split into three sections. These include information about the prevalence of 3(2H)-furanones as natural products and synthetic routes to 3(2H)-furanones in general. The synthetic routes are divided according to the synthetic precursor employed. The final section of chapter one outlines the fundamental principles and application of photoluminescence to organic compounds in general. Chapter two contains the results of the research achieved in the course of this work and a discussion of the findings. Two routes were successfully employed to generate 4-unsubstituted 3(2H)-furanone moieties: (i) base-induced cyclisation of hydroxyenones and (ii) isoxazole chemistry. A number of methods which proved ineffective in the production of furanones with the desired substitution pattern are also detailed. The majority of this study was focused on the introduction of substituents at the C-4 position of the 3(2H)-furanone ring. This was achieved through the use of Sonogashira and Suzuki cross-coupling protocols for Pd(0)-catalysed C-C bond formation. The further functionalisation of some compounds was performed using transfer hydrogenation and “click chemistry” methodologies. Finally, the photophysical properties of the 3(2H)-furanones prepared in this project are discussed, and the effect of substitution patterns in complementary “push-push” and “push-pull” arrangements has also been investigated. All the experimental data and details of the synthetic methods employed for the compounds prepared during the course of this research are contained in chapter three, together with the spectroscopic and analytical properties of the compounds prepared.
Abstract:
Timing-related defects are major contributors to test escapes and in-field reliability problems for very-deep submicrometer integrated circuits. Small delay variations induced by crosstalk, process variations, power-supply noise, as well as resistive opens and shorts can potentially cause timing failures in a design, thereby leading to quality and reliability concerns. We present a test-grading technique that uses the method of output deviations for screening small-delay defects (SDDs). A new gate-delay defect probability measure is defined to model delay variations for nanometer technologies. The proposed technique intelligently selects the best set of patterns for SDD detection from an n-detect pattern set generated using timing-unaware automatic test-pattern generation (ATPG). It offers significantly lower computational complexity and excites a larger number of long paths compared to a current generation commercial timing-aware ATPG tool. Our results also show that, for the same pattern count, the selected patterns provide more effective coverage ramp-up than timing-aware ATPG and a recent pattern-selection method for random SDDs potentially caused by resistive shorts, resistive opens, and process variations. © 2010 IEEE.
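A hedged sketch of the kind of deviation-based pattern grading the abstract describes: given a per-pattern output-deviation score, greedily keep the patterns that add the most new coverage from the n-detect set. The scoring and data structures below are illustrative assumptions, not the paper's actual gate-delay defect probability measure or selection procedure.

```python
def select_patterns(deviation, budget):
    """Greedy selection of test patterns by output deviation.

    deviation: dict mapping pattern_id -> {output_name: deviation_score}
               (higher score = higher estimated chance of exposing a small-delay defect)
    budget:    number of patterns to keep from the n-detect pattern set
    Returns the list of selected pattern ids.
    """
    best_seen = {}          # best deviation observed so far at each output
    selected = []
    for _ in range(min(budget, len(deviation))):
        def gain(pid):
            # Improvement a pattern offers beyond outputs already well covered.
            return sum(max(0.0, s - best_seen.get(out, 0.0))
                       for out, s in deviation[pid].items())
        pid = max((p for p in deviation if p not in selected), key=gain, default=None)
        if pid is None or gain(pid) <= 0.0:
            break           # remaining patterns add no new deviation coverage
        selected.append(pid)
        for out, s in deviation[pid].items():
            best_seen[out] = max(best_seen.get(out, 0.0), s)
    return selected
```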
Abstract:
Using data on user attributes and interactions from an online dating site, we estimate mate preferences, and use the Gale-Shapley algorithm to predict stable matches. The predicted matches are similar to the actual matches achieved by the dating site, and the actual matches are approximately efficient. Out-of-sample predictions of offline matches, i.e., marriages, exhibit assortative mating patterns similar to those observed in actual marriages. Thus, mate preferences, without resort to search frictions, can generate sorting in marriages. However, we underpredict some of the correlation patterns; search frictions may play a role in explaining the discrepancy.
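For reference, a minimal sketch of the Gale-Shapley deferred acceptance procedure used to compute stable matches; this is a textbook version with strict preference lists, not the paper's estimated-preference implementation:

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    """Proposer-optimal stable matching via deferred acceptance.

    proposer_prefs: dict proposer -> list of receivers, best first
    receiver_prefs: dict receiver -> list of proposers, best first
    Returns dict receiver -> matched proposer.
    """
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}          # next receiver each proposer will try
    free = list(proposer_prefs)
    match = {}                                            # receiver -> tentatively held proposer
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                                      # p has exhausted its list; stays unmatched
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = match.get(r)
        if current is None:
            match[r] = p                                  # r tentatively accepts p
        elif rank[r].get(p, float("inf")) < rank[r].get(current, float("inf")):
            match[r] = p                                  # r trades up; the displaced proposer is free again
            free.append(current)
        else:
            free.append(p)                                # r rejects p; p proposes to its next choice later
    return match
```

In the paper's setting, the preference lists fed to this routine are the mate preferences estimated from the dating-site data rather than observed rankings.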
Abstract:
The design of the New York City (NYC) high school match involved trade-offs among efficiency, stability, and strategy-proofness that raise new theoretical questions. We analyze a model with indifferences (ties) in school preferences. Simulations with field data and the theory favor breaking indifferences the same way at every school (single tiebreaking) in a student-proposing deferred acceptance mechanism. Any inefficiency associated with a realized tiebreaking cannot be removed without harming student incentives. Finally, we empirically document the extent of potential efficiency loss associated with strategy-proofness and stability, and direct attention to some open questions. (JEL C78, D82, I21).
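A small sketch of what single tiebreaking means in practice: draw one lottery number per student and reuse it to break ties in every school's preference list before running the deferred acceptance routine sketched above. The names and data structures are illustrative, not the mechanism's actual implementation:

```python
import random

def single_tiebreak(school_prefs, students, seed=0):
    """Break indifferences identically at every school with one common lottery.

    school_prefs: dict school -> list of tiers, each tier a list of students
                  the school is indifferent between, best tier first
    Returns dict school -> strict preference list over students.
    """
    rng = random.Random(seed)
    lottery = {s: rng.random() for s in students}     # one draw per student, reused at every school
    strict = {}
    for school, tiers in school_prefs.items():
        strict[school] = [s for tier in tiers for s in sorted(tier, key=lottery.get)]
    return strict
```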
Abstract:
Measuring the entorhinal cortex (ERC) is challenging due to lateral border discrimination from the perirhinal cortex. From a sample of 39 nondemented older adults who completed volumetric image scans and verbal memory indices, we examined reliability and validity concerns for three ERC protocols with different lateral boundary guidelines (i.e., Goncharova, Dickerson, Stoub, & deToledo-Morrell, 2001; Honeycutt et al., 1998; Insausti et al., 1998). We used three novice raters to assess inter-rater reliability on a subset of scans (216 total ERCs), with the entire dataset measured by one rater with strong intra-rater reliability on each technique (234 total ERCs). We found moderate to strong inter-rater reliability for two techniques with consistent ERC lateral boundary endpoints (Goncharova, Honeycutt), with negligible to moderate reliability for the technique requiring consideration of collateral sulcal depth (Insausti). Left ERC and story memory associations were moderate and positive for two techniques designed to exclude the perirhinal cortex (Insausti, Goncharova), with the Insausti technique continuing to explain 10% of memory score variance after additionally controlling for depression symptom severity. Right ERC-story memory associations were nonexistent after excluding an outlier. Researchers are encouraged to consider challenges of rater training for ERC techniques and how lateral boundary endpoints may impact structure-function associations.
Abstract:
While genome-wide gene expression data are generated at an increasing rate, the repertoire of approaches for pattern discovery in these data is still limited. Identifying subtle patterns of interest in large amounts of data (tens of thousands of profiles) associated with a certain level of noise remains a challenge. A microarray time series was recently generated to study the transcriptional program of the mouse segmentation clock, a biological oscillator associated with the periodic formation of the segments of the body axis. A method related to Fourier analysis, the Lomb-Scargle periodogram, was used to detect periodic profiles in the dataset, leading to the identification of a novel set of cyclic genes associated with the segmentation clock. Here, we applied to the same microarray time series dataset four distinct mathematical methods to identify significant patterns in gene expression profiles. These methods are called: Phase consistency, Address reduction, Cyclohedron test and Stable persistence, and are based on different conceptual frameworks that are either hypothesis- or data-driven. Some of the methods, unlike Fourier transforms, are not dependent on the assumption of periodicity of the pattern of interest. Remarkably, these methods identified blindly the expression profiles of known cyclic genes as the most significant patterns in the dataset. Many candidate genes predicted by more than one approach appeared to be true positive cyclic genes and will be of particular interest for future research. In addition, these methods predicted novel candidate cyclic genes that were consistent with previous biological knowledge and experimental validation in mouse embryos. Our results demonstrate the utility of these novel pattern detection strategies, notably for detection of periodic profiles, and suggest that combining several distinct mathematical approaches to analyze microarray datasets is a valuable strategy for identifying genes that exhibit novel, interesting transcriptional patterns.
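As a point of reference for the periodicity-detection step mentioned above, a minimal sketch of scoring one expression profile with the Lomb-Scargle periodogram using SciPy; the sampling times, candidate periods, and synthetic signal are illustrative, not the segmentation-clock dataset:

```python
import numpy as np
from scipy.signal import lombscargle

def periodicity_score(times, values, periods):
    """Peak Lomb-Scargle power of one expression profile over candidate periods.

    times:   sampling times of the series (need not be evenly spaced)
    values:  expression levels at those times
    periods: candidate oscillation periods to test
    """
    values = values - values.mean()                 # remove the mean before the periodogram
    ang_freqs = 2 * np.pi / np.asarray(periods)     # lombscargle expects angular frequencies
    power = lombscargle(times, values, ang_freqs)
    return periods[int(np.argmax(power))], power.max()

# Illustrative use on a noisy 2-hour oscillation sampled irregularly.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 12, 40))
y = np.sin(2 * np.pi * t / 2.0) + 0.3 * rng.normal(size=40)
print(periodicity_score(t, y, np.linspace(0.5, 6.0, 200)))
```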
Abstract:
Regular landscape patterning arises from spatially dependent feedbacks, and can undergo catastrophic loss in response to changing landscape drivers. The central Everglades (Florida, USA) historically exhibited regular, linear, flow-parallel orientation of high-elevation sawgrass ridges and low-elevation sloughs that has degraded due to hydrologic modification. In this study, we use a meta-ecosystem approach to model a mechanism for the establishment, persistence, and loss of this landscape. The discharge competence (or self-organizing canal) hypothesis assumes non-linear relationships between peat accretion and water depth, and describes flow-dependent feedbacks of microtopography on water depth. Closed-form model solutions demonstrate that 1) this mechanism can produce spontaneous divergence of local elevation; 2) divergent and homogeneous states can exhibit global bi-stability; and 3) feedbacks that produce divergence act anisotropically. Thus, discharge competence and non-linear peat accretion dynamics may explain the establishment, persistence, and loss of landscape pattern, even in the absence of other spatial feedbacks. Our model provides specific, testable predictions that may allow discrimination between the self-organizing canal hypothesis and competing explanations. The potential for global bi-stability indicated by our model suggests that hydrologic restoration may not re-initiate spontaneous pattern establishment, particularly where distinct soil elevation modes have been lost. As a result, we recommend that management efforts prioritize maintenance of historic hydroperiods in areas of conserved pattern over restoration of hydrologic regimes in degraded regions. This study illustrates the value of simple meta-ecosystem models for investigation of spatial processes.
Abstract:
The Feeding Experiments End-user Database (FEED) is a research tool developed by the Mammalian Feeding Working Group at the National Evolutionary Synthesis Center that permits synthetic, evolutionary analyses of the physiology of mammalian feeding. The tasks of the Working Group are to compile physiologic data sets into a uniform digital format stored at a central source, develop a standardized terminology for describing and organizing the data, and carry out a set of novel analyses using FEED. FEED contains raw physiologic data linked to extensive metadata. It serves as an archive for a large number of existing data sets and a repository for future data sets. The metadata are stored as text and images that describe experimental protocols, research subjects, and anatomical information. The metadata incorporate controlled vocabularies to allow consistent use of the terms used to describe and organize the physiologic data. The planned analyses address long-standing questions concerning the phylogenetic distribution of phenotypes involving muscle anatomy and feeding physiology among mammals, the presence and nature of motor pattern conservation in the mammalian feeding muscles, and the extent to which suckling constrains the evolution of feeding behavior in adult mammals. We expect FEED to be a growing digital archive that will facilitate new research into understanding the evolution of feeding anatomy.
Abstract:
The Dietary Approaches to Stop Hypertension (DASH) trial showed that a diet rich in fruits, vegetables, and low-fat dairy products, with reduced total and saturated fat, cholesterol, and sugar-sweetened products, effectively lowers blood pressure in individuals with prehypertension and stage I hypertension. Limited evidence is available on the safety and efficacy of the DASH eating pattern in special patient populations that were excluded from the trial. Caution should be exercised before initiating the DASH diet in patients with chronic kidney disease, chronic liver disease, and those who are prescribed renin-angiotensin-aldosterone system antagonists, but these conditions are not strict contraindications to DASH. Modifications to the DASH diet may be necessary to facilitate its use in patients with chronic heart failure, uncontrolled type II diabetes mellitus, lactose intolerance, and celiac disease. In general, the DASH diet can be adopted by most patient populations and initiated simultaneously with medication therapy and other lifestyle interventions.