982 results for DETERMINES


Relevance: 10.00%

Abstract:

Inferring types for polymorphic recursive function definitions (abbreviated to polymorphic recursion) is a recurring topic on the mailing lists of popular typed programming languages. This is despite the fact that type inference for polymorphic recursion using ∀-types has been proved undecidable. This report presents several programming examples involving polymorphic recursion and determines their typability under various type systems, including the Hindley-Milner system, an intersection-type system, and extensions of these two. The goal of this report is to show that many of these examples are typable using a system of intersection types as an alternative form of polymorphism. By accomplishing this, we hope to lay the foundation for future research into a decidable intersection-type inference algorithm. We do not provide a comprehensive survey of type systems appropriate for polymorphic recursion, with or without type annotations inserted in the source language. Rather, we focus on examples for which types may be inferred without type annotations.

Relevance: 10.00%

Abstract:

We present an online distributed algorithm, the Causation Logging Algorithm (CLA), in which Autonomous Systems (ASes) in the Internet individually report route oscillations/flaps they experience to a central Internet Routing Registry (IRR). The IRR aggregates these reports and may observe what we call causation chains where each node on the chain caused a route flap at the next node along the chain. A chain may also have a causation cycle. The type of an observed causation chain/cycle allows the IRR to infer the underlying policy routing configuration (i.e., the system of economic relationships and constraints on route/path preferences). Our algorithm is based on a formal policy routing model that captures the propagation dynamics of route flaps under arbitrary changes in topology or path preferences. We derive invariant properties of causation chains/cycles for ASes which conform to economic relationships based on the popular Gao-Rexford model. The Gao-Rexford model is known to be safe in the sense that the system always converges to a stable set of paths under static conditions. Our CLA algorithm recovers the type/property of an observed causation chain of an underlying system and determines whether it conforms to the safe economic Gao-Rexford model. Causes for nonconformity can be diagnosed by comparing the properties of the causation chains with those predicted from different variants of the Gao-Rexford model.
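
As a rough illustration of the aggregation step described above (not the paper's CLA itself), the sketch below shows how a collector like the IRR might stitch per-AS flap reports into causation chains and detect causation cycles. The report format assumed here — a pair (AS that flapped, AS whose flap caused it) — is an assumption made for illustration only.

```python
# Minimal sketch of aggregating per-AS flap reports into causation chains/cycles.
# The report format and function names are illustrative assumptions, not the CLA.
from collections import defaultdict

def build_causation_graph(reports):
    """reports: iterable of (caused_as, causing_as); causing_as may be None."""
    succ = defaultdict(list)
    for caused, causing in reports:
        if causing is not None:
            succ[causing].append(caused)      # edge: causer -> caused
    return succ

def find_chains_and_cycles(succ):
    """Walk the causation graph; classify each maximal walk as a chain or a cycle.
    (Rotations of the same cycle may be reported more than once in this sketch.)"""
    chains, cycles = [], []
    roots = set(succ) - {v for vs in succ.values() for v in vs}
    def walk(node, path, seen):
        if node in seen:                      # causation cycle detected
            cycles.append(path[path.index(node):])
            return
        nxt = succ.get(node, [])
        if not nxt:
            chains.append(path)
        for v in nxt:
            walk(v, path + [v], seen | {node})
    for r in roots or succ:                   # fall back to all nodes if everything is cyclic
        walk(r, [r], set())
    return chains, cycles

# Example: AS1's flap caused AS2's, which caused AS3's -> one causation chain.
chains, cycles = find_chains_and_cycles(build_causation_graph([(2, 1), (3, 2)]))
print(chains, cycles)   # [[1, 2, 3]] []
```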

Relevance: 10.00%

Abstract:

For any q > 1, let MOD_q be a quantum gate that determines if the number of 1's in the input is divisible by q. We show that for any q,t > 1, MOD_q is equivalent to MOD_t (up to constant depth). Based on the case q=2, Moore has shown that quantum analogs of AC^(0), ACC[q], and ACC, denoted QAC^(0)_wf, QACC[2], QACC respectively, define the same class of operators, leaving q > 2 as an open question. Our result resolves this question, implying that QAC^(0)_wf = QACC[q] = QACC for all q. We also prove the first upper bounds for QACC in terms of related language classes. We define classes of languages EQACC, NQACC (both for arbitrary complex amplitudes) and BQACC (for rational number amplitudes) and show that they are all contained in TC^(0). To do this, we show that a TC^(0) circuit can keep track of the amplitudes of the state resulting from the application of a QACC operator using a constant width polynomial size tensor sum. In order to accomplish this, we also show that TC^(0) can perform iterated addition and multiplication in certain field extensions.
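
One common way to write such a gate as a reversible operator — the paper's exact convention may differ — is to have it flip a target qubit according to whether the Hamming weight of the n input bits is divisible by q:

\[
\mathrm{MOD}_q \, |x_1,\dots,x_n\rangle \, |b\rangle \;=\; |x_1,\dots,x_n\rangle \, \bigl|\, b \oplus \bigl[\textstyle\sum_{i=1}^{n} x_i \equiv 0 \ (\mathrm{mod}\ q)\bigr] \bigr\rangle ,
\]

where [·] equals 1 when the condition holds and 0 otherwise.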

Relevance: 10.00%

Abstract:

This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data, such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can thereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism called the Blurred-Highest-Luminance-As-White (BHLAW) rule helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural images under variable lighting conditions.
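
As a rough sketch of the anchoring idea behind the BHLAW rule — inferred only from the rule's name and the description above, not from the article's actual equations — one can blur the luminance image at a chosen spatial scale, take the highest blurred luminance as the "white" anchor, and rescale lightness relative to it:

```python
# Minimal sketch of a Blurred-Highest-Luminance-As-White style anchoring step.
# Illustrative only; the blur scale sigma and the clipping behavior are assumptions,
# not the article's model equations.
import numpy as np
from scipy.ndimage import gaussian_filter

def bhlaw_anchor(luminance, sigma=5.0, white=1.0):
    """Anchor lightness so that the highest *blurred* luminance maps to white."""
    lum = np.asarray(luminance, dtype=float)
    blurred = gaussian_filter(lum, sigma=sigma)
    anchor = blurred.max()                       # highest blurred luminance = "white"
    if anchor <= 0:
        return np.zeros_like(lum)
    return np.clip(lum / anchor, 0.0, white)     # luminances above the anchor clip to white

# Example: blurring keeps a single-pixel "glint" from being treated as the white point,
# so anchoring becomes sensitive to the spatial scale of bright regions.
img = np.full((64, 64), 0.2)
img[30:34, 30:34] = 0.9      # larger bright patch
img[5, 5] = 5.0              # tiny specular highlight
print(bhlaw_anchor(img, sigma=3.0).max())
```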

Relevance: 10.00%

Abstract:

This study develops a neuromorphic model of human lightness perception that is inspired by how the mammalian visual system is designed for this function. It is known that biological visual representations can adapt to a billion-fold change in luminance. How such a system determines absolute lightness under varying illumination conditions to generate a consistent interpretation of surface lightness remains an unsolved problem. Such a process, called "anchoring" of lightness, has properties including articulation, insulation, configuration, and area effects. The model quantitatively simulates such psychophysical lightness data, as well as other data such as discounting the illuminant, the double brilliant illusion, and lightness constancy and contrast effects. The model retina embodies gain control at retinal photoreceptors, and spatial contrast adaptation at the negative feedback circuit between mechanisms that model the inner segment of photoreceptors and interacting horizontal cells. The model can thereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. A new anchoring mechanism, called the Blurred-Highest-Luminance-As-White (BHLAW) rule, helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural color images under variable lighting conditions, and is compared with the popular RETINEX model.

Relevance: 10.00%

Abstract:

The proposed model, called the combinatorial and competitive spatio-temporal memory, or CCSTM, provides an elegant solution to the general problem of having to store and recall spatio-temporal patterns in which states or sequences of states can recur in various contexts. For example, Fig. 1 shows two state sequences that have a common subsequence, C and D. The CCSTM assumes that any state has a distributed representation as a collection of features. Each feature has an associated competitive module (CM) containing K cells. On any given occurrence of a particular feature, A, exactly one of the cells in CM_A will be chosen to represent it. It is the particular set of cells active on the previous time step that determines which cells are chosen to represent instances of their associated features on the current time step. If we assume that typically S features are active in any state, then any state has K^S different neural representations. This huge space of possible neural representations of any state is what underlies the model's ability to store and recall numerous context-sensitive state sequences. The purpose of this paper is simply to describe this mechanism.
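
The cell-selection idea can be illustrated with a short sketch (this is not the CCSTM implementation; the deterministic hash used to pick a winner within each competitive module is an assumption standing in for the model's competitive dynamics):

```python
# Sketch: each active feature owns a competitive module of K cells, and the winner
# within that module is a function of the previous time step's active cells (the
# context), so the same feature gets context-specific codes.
import hashlib

K = 8  # cells per competitive module (assumed value)

def choose_cell(feature, prev_active_cells):
    """Deterministically pick one of K cells in the module for `feature`,
    conditioned on the set of cells active on the previous time step."""
    context = ",".join(sorted(prev_active_cells))
    digest = hashlib.sha256(f"{feature}|{context}".encode()).hexdigest()
    return f"{feature}:{int(digest, 16) % K}"

def step(active_features, prev_active_cells):
    """Compute the set of active cells coding the current state."""
    return {choose_cell(f, prev_active_cells) for f in active_features}

# The same state {"C", "D"} typically receives different neural codes in different
# contexts, giving up to K^S representations of a state with S active features.
code_after_AB = step({"C", "D"}, step({"A", "B"}, set()))
code_after_XY = step({"C", "D"}, step({"X", "Y"}, set()))
print(code_after_AB, code_after_XY, code_after_AB == code_after_XY)
```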

Relevance: 10.00%

Abstract:

Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP synthesize fuzzy logic and ART networks by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. Complement coding preserves individual feature amplitudes while normalizing the input vector, and prevents a potential category proliferation problem. Adaptive weights start equal to one and can only decrease in time. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. A matching criterion controls search, determining how close an input and a learned representation must be for a category to accept the input as a new exemplar. A vigilance parameter (ρ) sets the matching criterion and determines how finely or coarsely an ART system will partition inputs. High vigilance creates fine categories, represented by small boxes. Learning stops when boxes cover the input space. With fast learning, fixed vigilance, and an arbitrary input set, learning stabilizes after just one presentation of each input. A fast-commit slow-recode option allows rapid learning of rare events yet buffers memories against recoding by noisy inputs. Fuzzy ARTMAP unites two fuzzy ART networks to solve supervised learning and prediction problems. A Minimax Learning Rule controls ARTMAP category structure, conjointly minimizing predictive error and maximizing code compression. Low vigilance maximizes compression but may therefore cause very different inputs to make the same prediction. When this coarse grouping strategy causes a predictive error, an internal match tracking control process increases vigilance just enough to correct the error. ARTMAP automatically constructs a minimal number of recognition categories, or "hidden units," to meet accuracy criteria. An ARTMAP voting strategy improves prediction by training the system several times using different orderings of the input set. Voting assigns confidence estimates to competing predictions given small, noisy, or incomplete training sets. ARPA benchmark simulations illustrate fuzzy ARTMAP dynamics. The chapter also compares fuzzy ARTMAP to Salzberg's Nested Generalized Exemplar (NGE) and to Simpson's Fuzzy Min-Max Classifier (FMMC); and concludes with a summary of ART and ARTMAP applications.
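
A compact sketch of the fuzzy ART computations summarized above — complement coding, category choice and match via the fuzzy intersection (component-wise minimum), the vigilance test, and fast learning. The parameter names (alpha, rho) and this particular code organization follow common presentations and are assumptions here, not the chapter's own implementation.

```python
# Fuzzy ART sketch: complement coding, fuzzy-intersection choice/match, vigilance, fast learning.
import numpy as np

def complement_code(a):
    """I = (a, 1 - a); preserves amplitudes while normalizing |I| to len(a)."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

class FuzzyART:
    def __init__(self, dim, rho=0.75, alpha=0.001):
        self.rho, self.alpha = rho, alpha
        self.w = np.empty((0, 2 * dim))              # one weight row per committed category

    def train(self, a):
        I = complement_code(a)
        if len(self.w):
            matches = np.minimum(I, self.w).sum(axis=1)        # |I ^ w_j|
            choice = matches / (self.alpha + self.w.sum(axis=1))
            for j in np.argsort(-choice):                      # search in choice order
                if matches[j] / I.sum() >= self.rho:           # vigilance (match) test
                    # fast learning: w_j <- I ^ w_j, so the category "box" grows as weights shrink
                    self.w[j] = np.minimum(I, self.w[j])
                    return j
        # No category passes vigilance: commit a new one
        # (equivalent to an all-ones uncommitted node after one fast-learning step).
        self.w = np.vstack([self.w, I])
        return len(self.w) - 1

art = FuzzyART(dim=2, rho=0.8)
for x in [[0.1, 0.2], [0.12, 0.22], [0.9, 0.85]]:
    print(art.train(x))      # first two inputs share a category; the third commits a new one
```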

Relevance: 10.00%

Abstract:

Adequate hand-washing has been shown to be a critical activity in preventing the transmission of infections such as MRSA in health-care environments. Hand-washing guidelines published by various health-care related institutions recommend a technique incorporating six hand-washing poses that ensure all areas of the hands are thoroughly cleaned. In this paper, an embedded wireless vision system (VAMP) capable of accurately monitoring hand-washing quality is presented. The VAMP system hardware consists of a low-resolution CMOS image sensor and an FPGA processor, which are integrated with a microcontroller and a ZigBee standard wireless transceiver to create a wireless sensor network (WSN) based vision system that can be retargeted at a variety of health-care applications. The device captures and processes images locally in real time, determines if hand-washing procedures have been correctly undertaken, and then passes the resulting high-level data over a low-bandwidth wireless link. The paper outlines the hardware and software mechanisms of the VAMP system and illustrates that it offers an easy-to-integrate sensor solution to adequately monitor and improve hand-hygiene quality. Future work to develop a miniaturized, low-cost system capable of being integrated into everyday products is also discussed.

Relevance: 10.00%

Abstract:

The objective of this paper is to investigate the effect of the pad size ratio between the chip and board end of a solder joint on the shape of that solder joint, in combination with the solder volume available. The shape of the solder joint is correlated to its reliability and is thus of importance. For low-density chip bond pad applications, Flip Chip (FC) manufacturing costs can be kept down by using larger size board pads suitable for solder application. Using the “Surface Evolver” software package, the solder joint shapes associated with different size/shape solder preforms and chip/board pad ratios are predicted. In this case a so-called Flip-Chip Over Hole (FCOH) assembly format has been used. Assembly trials involved the deposition of lead-free 99.3Sn0.7Cu solder on the board side, followed by reflow, an underfill process, and back die encapsulation. During the assembly work, pad offsets occurred that have been taken into account for the Surface Evolver solder joint shape prediction, which accurately matched the real assembly. Overall, good correlation was found between the simulated solder joint shapes and the actual fabricated solder joint shapes. Solder preforms were found to exhibit better control over the solder volume. Reflow simulation of commercially available solder preform volumes suggests that, for a fixed stand-off height and chip/board pad ratio, the solder volume and the surface tension determine the shape of the joint.

Relevance: 10.00%

Abstract:

Instrumental music education is provided as an extra-curricular activity on a fee-paying basis by a small number of Education and Training Boards, formerly Vocational Education Committees (ETB/VECs) through specialist instrumental Music Services. Although all citizens’ taxes fund the public music provision, participation in instrumental music during school-going years is predominantly accessed by middle class families. A series of semistructured interviews sought to access the perceptions and beliefs of instrumental music education practitioners (N=14) in seven publicly-funded music services in Ireland. Canonical dispositions were interrogated and emergent themes were coded and analysed in a process of Grounded theory. The study draws on Foucault’s conception of discourse as a lens with which to map professional practices, and utilises Bourdieu’s analysis of the reproduction of social advantage to examine cultural assumptions, which may serve to privilege middle-class cultural choice to the exclusion of other social groups. Study findings show that within the Music Services, aesthetic and pedagogic discourses of the 19th century Conservatory system exert a hegemonic influence over policy and practice. An enduring ‘examination culture’ located within the Western art music tradition determines pedagogy, musical genre, and assessment procedures. Ideologies of musical taste and value reinforce the more tangible boundaries of fee-payment and restricted availability as barriers to access. Practitioners are aware of a status duality whereby instrumental teachers working as visiting specialists in primary schools experience a conflict between specialist and generalist educational aims. Nevertheless, study participants consistently advocated siting the point of access to instrumental music education in the primary schools as the most equitable means of access to instrumental music education. This study addresses a ‘knowledge gap’ in the sociology of music education in Ireland. It provides a framework for rethinking instrumental music education as equitable in-school musical participation. The conclusions of the study suggest starting-points for further educational research and may provide key ‘prompts’ for curriculum planning.

Relevance: 10.00%

Abstract:

BACKGROUND: Stroke is one of the most disabling and costly impairments of adulthood in the United States. Stroke patients clearly benefit from intensive inpatient care, but due to the high cost, there is considerable interest in implementing interventions to reduce hospital lengths of stay. Early discharge rehabilitation programs require coordinated, well-organized home-based rehabilitation, yet lack of sufficient information about the home setting impedes successful rehabilitation. This trial examines a multifaceted telerehabilitation (TR) intervention that uses telehealth technology to simultaneously evaluate the home environment, assess the patient's mobility skills, initiate rehabilitative treatment, prescribe exercises tailored for stroke patients, and provide periodic goal-oriented reassessment, feedback, and encouragement. METHODS: We describe an ongoing Phase II, 2-arm, 3-site randomized controlled trial (RCT) that determines primarily the effect of TR on physical function and secondarily the effect on disability, falls-related self-efficacy, and patient satisfaction. Fifty participants with a diagnosis of ischemic or hemorrhagic stroke will be randomly assigned to one of two groups: (a) TR; or (b) Usual Care. The TR intervention uses a combination of three videotaped visits and five telephone calls, an in-home messaging device, and additional telephonic contact as needed over a 3-month study period, to provide a progressive rehabilitative intervention with a treatment goal of safe functional mobility of the individual within an accessible home environment. Dependent variables will be measured at baseline, 3, and 6 months and analyzed with a linear mixed-effects model across all time points. DISCUSSION: For patients recovering from stroke, the use of TR to provide home assessments and follow-up training in prescribed equipment has the potential to effectively supplement existing home health services, assist the transition to home, and increase efficiency. This may be particularly relevant when patients live in remote locations, as is the case for many veterans. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT00384748.
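
A minimal sketch of the style of analysis described (a linear mixed-effects model with group × time fixed effects and a random intercept per participant), using statsmodels. The variable names, the exact model specification, and the synthetic data below are illustrative assumptions, not the trial's dataset or its registered statistical plan.

```python
# Sketch: mixed-effects analysis of repeated measures at baseline, 3, and 6 months.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_arm, times = 25, [0, 3, 6]
rows = []
for subject in range(2 * n_per_arm):
    group = "TR" if subject < n_per_arm else "UsualCare"
    baseline = rng.normal(50, 8)                       # subject-specific starting score
    for t in times:
        gain = (1.5 if group == "TR" else 0.8) * t     # assumed illustrative trajectories
        rows.append({"subject": subject, "group": group, "month": t,
                     "physical_function": baseline + gain + rng.normal(0, 3)})
df = pd.DataFrame(rows)

# Fixed effects: group, time, and their interaction; random intercept per subject.
model = smf.mixedlm("physical_function ~ group * month", df, groups=df["subject"])
print(model.fit().summary())
```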

Relevance: 10.00%

Abstract:

Seminal work by Weitzman (1974) revealed prices are preferred to quantities when marginal benefits are relatively flat compared to marginal costs. We extend this comparison to indexed policies, where quantities are proportional to an index, such as output. We find that policy preferences hinge on additional parameters describing the first and second moments of the index and the ex post optimal quantity level. When the ratio of these variables' coefficients of variation divided by their correlation is less than approximately two, indexed quantities are preferred to fixed quantities. A slightly more complex condition determines when indexed quantities are preferred to prices. Applied to climate change policy, we find that the range of variation and correlation in country-level carbon dioxide emissions and GDP suggests the ranking of an emissions intensity cap (indexed to GDP) compared to a fixed emission cap is not uniform across countries; neither policy clearly dominates the other.
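
Writing CV for a coefficient of variation and ρ for the correlation between the index x and the ex post optimal quantity q*, the stated rule of thumb can be rendered schematically as follows (the precise arrangement of terms in the paper may differ):

\[
\frac{\mathrm{CV}(x)\,/\,\mathrm{CV}(q^{*})}{\rho_{x,q^{*}}} \;\lesssim\; 2
\quad\Longrightarrow\quad \text{indexed quantities are preferred to fixed quantities.}
\]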

Relevance: 10.00%

Abstract:

To maintain a strict balance between demand and supply in the US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. This model determines, for each time period and power plant, the startup and shutdown times, the amount of power produced, and the provisioning of spinning and non-spinning power generation reserves, among other decisions. Such a deterministic optimization model takes as input the characteristics of all the generating units, such as their installed power generation capacity, ramp rates, minimum up- and down-time requirements, and marginal production costs, as well as the forecast of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the level of forecast errors in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than using fixed reserve targets as an input, stochastic market clearing models take different scenarios of wind power into consideration and determine reserve schedules as output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from BPA (Bonneville Power Administration) data, this paper explores a comparison of the performance between a stochastic and a deterministic model in market clearing. The two models are compared in their ability to contribute to the affordability, reliability, and sustainability of the electricity system, measured in terms of total operational costs, load shedding, and air emissions. The process of building the models and running the tests indicates that a fair comparison is difficult to obtain due to the multi-dimensional performance metrics considered here, and the difficulty in setting up the parameters of the models in a way that does not advantage or disadvantage one modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, the value of lost load (VOLL), and wind spillage costs have on the comparison of the performance of stochastic vs. deterministic market clearing models.
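
To make the role of a fixed reserve requirement in a deterministic clearing model concrete, the sketch below solves a much-simplified single-period dispatch problem. It omits unit-commitment binaries, ramping, network constraints, and the stochastic scenario formulation, and all numbers are illustrative assumptions, not PJM or BPA data.

```python
# Single-period economic-dispatch sketch with a fixed reserve requirement.
import numpy as np
from scipy.optimize import linprog

marginal_cost = np.array([20.0, 35.0, 60.0])     # $/MWh for three units
capacity      = np.array([400.0, 300.0, 200.0])  # MW
demand, wind_forecast, reserve_req = 650.0, 100.0, 80.0

# Decision variables: generation g_i of each unit.  Minimize total production cost.
# Balance:  sum(g) = demand - wind_forecast
# Reserve:  sum(capacity - g) >= reserve_req  ->  sum(g) <= sum(capacity) - reserve_req
res = linprog(
    c=marginal_cost,
    A_ub=[np.ones(3)],
    b_ub=[capacity.sum() - reserve_req],
    A_eq=[np.ones(3)],
    b_eq=[demand - wind_forecast],
    bounds=list(zip(np.zeros(3), capacity)),
    method="highs",
)
print("dispatch (MW):", res.x, "total cost ($):", res.fun)
```

A stochastic variant would replace the single wind forecast and fixed reserve target with several wind scenarios and scenario-dependent dispatch, which is the modeling difference the paper compares.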

Relevance: 10.00%

Abstract:

In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; American Psychiatric Association, 2000). The model accounts for important and reliable findings that are often inconsistent with the current diagnostic view and that have been neglected by theoretical accounts of the disorder, including the following observations. The diagnosis needs objective information about the trauma and peritraumatic emotions but uses retrospective memory reports that can have substantial biases. Negative events and emotions that do not satisfy the current diagnostic criteria for a trauma can be followed by symptoms that would otherwise qualify for PTSD. Predisposing factors that affect the current memory have large effects on symptoms. The inability-to-recall-an-important-aspect-of-the-trauma symptom does not correlate with other symptoms. Loss or enhancement of the trauma memory affects PTSD symptoms in predictable ways. Special mechanisms that apply only to traumatic memories are not needed, increasing parsimony and the knowledge that can be applied to understanding PTSD.

Relevance: 10.00%

Abstract:

Nutritional status is critically important for immune cell function. While obesity is characterized by inflammation that promotes metabolic syndrome including cardiovascular disease and insulin resistance, malnutrition can result in immune cell defects and increased risk of mortality from infectious diseases. T cells play an important role in the immune adaptation to both obesity and malnutrition. T cells in obesity have been shown to have an early and critical role in inducing inflammation, accompanying the accumulation of inflammatory macrophages in obese adipose tissue, which are known to promote insulin resistance. How T cells are recruited to adipose tissue and activated in obesity is a topic of considerable interest. Conversely, T cell number is decreased in malnourished individuals, and T cells in the setting of malnutrition have decreased effector function and proliferative capacity. The adipokine leptin, which is secreted in proportion to adipocyte mass, may have a key role in mediating adipocyte-T cell interactions in both obesity and malnutrition, and has been shown to promote effector T cell function and metabolism while inhibiting regulatory T cell proliferation. Additionally, key molecular signals are involved in T cell metabolic adaptation during nutrient stress; among them, the metabolic regulator AMP kinase and the mammalian target of rapamycin have critical roles in regulating T cell number, function, and metabolism. In summary, understanding how T cell number and function are altered in obesity and malnutrition will lead to better understanding of and treatment for diseases where nutritional status determines clinical outcome.