944 results for Hamming ball
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a finer scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for precise acts. Each context of action may require a much different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine system design to serve the lowest and the highest of the three levels of temporal structure treated.
If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
Abstract:
In this paper, we introduce the Generalized Equality Classifier (GEC) for use as an unsupervised clustering algorithm in categorizing analog data. GEC is based on a formal definition of inexact equality originally developed for voting in fault tolerant software applications. GEC is defined using a metric space framework. The only parameter in GEC is a scalar threshold which defines the approximate equality of two patterns. Here, we compare the characteristics of GEC to the ART2-A algorithm (Carpenter, Grossberg, and Rosen, 1991). In particular, we show that GEC with the Hamming distance performs the same optimization as ART2. Moreover, GEC has lower computational requirements than ART2-A on serial machines.
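The abstract describes a clusterer with a single scalar threshold defining approximate equality under a metric. The sketch below is a minimal, hypothetical illustration of that idea (the paper's exact formulation may differ): a pattern joins the first cluster whose prototype lies within the threshold under the Hamming distance, and otherwise founds a new cluster.

```python
def hamming(a, b):
    """Hamming distance between two equal-length tuples."""
    return sum(x != y for x, y in zip(a, b))

def gec_cluster(patterns, threshold):
    """Threshold-based clustering sketch: assign each pattern to the
    first prototype within `threshold` Hamming distance, creating a
    new cluster (with the pattern as its prototype) otherwise."""
    prototypes = []  # one representative pattern per cluster
    labels = []
    for p in patterns:
        for i, proto in enumerate(prototypes):
            if hamming(p, proto) <= threshold:
                labels.append(i)
                break
        else:
            prototypes.append(p)
            labels.append(len(prototypes) - 1)
    return labels, prototypes

# Patterns within Hamming distance 1 of a prototype share its cluster.
data = [(0, 0, 0, 0), (0, 0, 0, 1), (1, 1, 1, 1), (1, 1, 0, 1)]
labels, protos = gec_cluster(data, threshold=1)
# labels -> [0, 0, 1, 1]; two prototypes survive
```

Note how the single threshold plays the role of ART2-A's vigilance parameter: a larger threshold merges more patterns into fewer clusters.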
Abstract:
A neural model is presented of how cortical areas V1, V2, and V4 interact to convert a textured 2D image into a representation of curved 3D shape. Two basic problems are solved to achieve this: (1) Patterns of spatially discrete 2D texture elements are transformed into a spatially smooth surface representation of 3D shape. (2) Changes in the statistical properties of texture elements across space induce the perceived 3D shape of this surface representation. This is achieved in the model through multiple-scale filtering of a 2D image, followed by a cooperative-competitive grouping network that coherently binds texture elements into boundary webs at the appropriate depths using a scale-to-depth map and a subsequent depth competition stage. These boundary webs then gate filling-in of surface lightness signals in order to form a smooth 3D surface percept. The model quantitatively simulates challenging psychophysical data about perception of prolate ellipsoids (Todd and Akerstrom, 1987, J. Exp. Psych., 13, 242). In particular, the model represents a high degree of 3D curvature for a certain class of images, all of whose texture elements have the same degree of optical compression, in accordance with percepts of human observers. Simulations of 3D percepts of an elliptical cylinder, a slanted plane, and a photo of a golf ball are also presented.
Abstract:
Selective isoelectric whey protein precipitation and aggregation is carried out at laboratory scale in a standard configuration batch agitation vessel. Geometric scale-up of this operation is implemented on the basis of constant impeller power input per unit volume and subsequent clarification is achieved by high speed disc-stack centrifugation. Particle size and fractal geometry are important in achieving efficient separation, while aggregates need to be strong enough to resist the more extreme levels of shear that are encountered during processing, for example through pumps, valves and at the centrifuge inlet zone. This study investigates how impeller agitation intensity and ageing time affect aggregate size, strength, fractal dimension and hindered settling rate at laboratory scale in order to determine conditions conducive to improved separation. Particle strength is measured by observing the effects of subjecting aggregates to moderate and high levels of process shear in a capillary rig and through a partially open ball-valve, respectively. The protein precipitate yield is also investigated with respect to ageing time and impeller agitation intensity. A pilot scale study is undertaken to investigate scale-up and how agitation vessel shear affects centrifugal separation efficiency. Laboratory scale studies show that precipitates subject to higher impeller shear-rates during the addition of the precipitation agent are smaller but more compact than those subject to lower impeller agitation, and are better able to resist turbulent breakage. They are thus more likely to provide a better feed for more efficient centrifugal separation. Protein precipitation yield improves significantly with ageing, and 50 minutes of ageing is required to obtain a 70-80% yield of α-lactalbumin.
Geometric scale-up of the agitation vessel at constant power per unit volume results in aggregates of broadly similar size exhibiting similar trends, but with some differences arising from the absence of dynamic similarity, owing to the longer circulation time and higher tip speed in the larger vessel. Disc stack centrifuge clarification efficiency curves show that aggregates formed at higher shear-rates separate more efficiently, in accordance with laboratory scale projections. Exposure of aggregates to highly turbulent conditions, even for short exposure times, can lead to a large reduction in particle size. Thus, separation efficiencies can be improved by identifying high shear zones in a centrifugal process and subsequently eliminating or ameliorating them.
Abstract:
A comparison study was carried out between a wireless sensor node with a flip-chip-mounted bare die and its reference board with a BGA packaged transceiver chip. The main focus is the return loss (S parameter S11) at the antenna connector, which depends strongly on the impedance mismatch. Modeling, including the different interconnect technologies, substrate properties and passive components, was performed to simulate the system in Ansoft Designer software. Statistical methods, such as the use of standard deviation and regression, were applied to the RF performance analysis to assess the impact of the different parameters on the return loss. An extreme value search, following from the previous analysis, can provide the parameter values for the minimum return loss. Measurements fit the analysis and simulation well and showed a great improvement in the return loss, from -5 dB to -25 dB, for the target wireless sensor node.
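The link between impedance mismatch and return loss mentioned above follows the standard reflection-coefficient relation. The snippet below is an illustrative calculation (the impedance values are hypothetical, not from the study): S11 in dB is computed from a load impedance against a 50-ohm reference.

```python
import math

def s11_db(z_load, z0=50 + 0j):
    """Return loss S11 in dB from the reflection coefficient
    Gamma = (Z_L - Z_0) / (Z_L + Z_0); S11 = 20*log10(|Gamma|)."""
    gamma = (z_load - z0) / (z_load + z0)
    return 20 * math.log10(abs(gamma))

# Hypothetical impedances: a strong mismatch keeps S11 near 0 dB,
# while a near-50-ohm match pushes it far below -10 dB.
poor = s11_db(10 + 30j)  # badly mismatched antenna (illustrative)
good = s11_db(48 + 2j)   # well-matched antenna (illustrative)
```

This is why small changes in interconnect parasitics, which shift the effective load impedance, can move S11 by tens of dB.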
Inclusive education policy, the general allocation model and dilemmas of practice in primary schools
Abstract:
Background: Inclusive education is central to contemporary discourse internationally, reflecting societies’ wider commitment to social inclusion. Education has witnessed transforming approaches that have created differing distributions of power, resource allocation and accountability. Multiple actors are being forced to consider changes to how key services and supports are organised. This research constitutes a case study situated within this broader social service dilemma of how to distribute finite resources equitably to meet individual need, while advancing inclusion. It focuses on the national directive with regard to inclusive educational practice for primary schools, Department of Education and Science Special Education Circular 02/05, which introduced the General Allocation Model (GAM) within the legislative context of the Education of Persons with Special Educational Needs (EPSEN) Act (Government of Ireland, 2004). This research could help to inform policy with ‘facts about what is happening on the ground’ (Quinn, 2013). Research Aims: The research set out to unearth the assumptions and definitions embedded within the policy document, to analyse how those who are at the coalface of policy, and who interface with multiple interests in primary schools, understand the GAM and respond to it, and to investigate its effects on students and their education. It examines student outcomes in the primary schools where the GAM was investigated. Methods and Sample: The post-structural study acknowledges the importance of policy analysis which explicitly links the ‘bigger worlds’ of global and national policy contexts to the ‘smaller worlds’ of policies and practices within schools and classrooms. This study insists upon taking the detail seriously (Ozga, 1990). A mixed methods approach to data collection and analysis is applied.
In order to secure the perspectives of key stakeholders, semi-structured interviews were conducted with primary school principals, class teachers and learning support/resource teachers (n=14) in three distinct mainstream, non-DEIS schools. Data from the schools and their environs provided a profile of students. The researcher then used the Pobal Maps Facility (available at www.pobal.ie) to identify the Small Area (SA) in which each student resides, and to assign values to each address based on the Pobal HP Deprivation Index (Haase and Pratschke, 2012). Analysis of the datasets, guided by the conceptual framework of the policy cycle (Ball, 1994), revealed a number of significant themes. Results: Data illustrate that the main model to support student need is withdrawal from the classroom, under policy that espouses inclusion. Quantitative data, in particular, highlighted an association between segregated practice and lower socio-economic status (LSES) backgrounds of students. Up to 83% of the students in special education programmes are from LSES backgrounds. In some schools 94% of students from LSES backgrounds are withdrawn from classrooms daily for special education. While the internal processes of schooling are not solely to blame for class inequalities, this study reveals the power of professionals to order children in school, which has implications for segregated special education practice. Such agency on the part of key actors in the context of practice relates to ‘local constructions of dis/ability’, which are influenced by teacher habitus (Bourdieu, 1984). The researcher contends that inclusive education has not resulted in positive outcomes for students from LSES backgrounds because it is built on faulty assumptions that focus on a psycho-medical perspective of dis/ability; that is, placement decisions do not consider the intersectionality of dis/ability with class or culture.
This study argues that the student need for support is better understood as ‘home/school discontinuity’, not ‘disability’. Moreover, the study unearths the power of some parents to use social and cultural capital to ensure eligibility for enhanced resources. Therefore, a hierarchical system has developed in mainstream schools as a result of funding models to support need in inclusive settings. Furthermore, all schools in the study are ‘ordinary’ schools, yet participants acknowledged that some schools are more ‘advantaged’, which may suggest that ‘ordinary’ schools serve to ‘bury class’ (Reay, 2010) as a key marker in allocating resources. The research suggests that general allocation models of funding to meet the needs of students demand a systematic approach grounded in reallocating funds from where they have less benefit to where they have more. The calculation of the composite Haase Value in respect of the student cohort in receipt of special education support adopted for this study could be usefully applied at a national level to ensure that the greatest level of support is targeted at greatest need. Conclusion: In summary, the study reveals that existing structures constrain and enable agents, whose interactions produce intended and unintended consequences. The study suggests that policy should be viewed as a continuous and evolving cycle (Ball, 1994) where actors in each of the social contexts have a shared responsibility in the evolution of education that is equitable, excellent and inclusive.
Abstract:
Gemstone Team CHIP
Abstract:
The rivalry between the men's basketball teams of Duke University and the University of North Carolina-Chapel Hill (UNC) is one of the most storied traditions in college sports. A subculture of students at each university form social bonds with fellow fans, develop expertise in college basketball rules, team statistics, and individual players, and self-identify as members of a fan group. The present study capitalized on the high personal investment of these fans and the strong affective tenor of a Duke-UNC basketball game to examine the neural correlates of emotional memory retrieval for a complex sporting event. Male fans watched a competitive, archived game in a social setting. During a subsequent functional magnetic resonance imaging session, participants viewed video clips depicting individual plays of the game that ended with the ball being released toward the basket. For each play, participants recalled whether or not the shot went into the basket. Hemodynamic signal changes time locked to correct memory decisions were analyzed as a function of emotional intensity and valence, according to the fan's perspective. Results showed intensity-modulated retrieval activity in midline cortical structures, sensorimotor cortex, the striatum, and the medial temporal lobe, including the amygdala. Positively valent memories specifically recruited processing in dorsal frontoparietal regions, with additional activity in the insula and medial temporal lobe for positively valent shots recalled with high confidence. This novel paradigm reveals how brain regions implicated in emotion, memory retrieval, visuomotor imagery, and social cognition contribute to the recollection of specific plays in the mind of a sports fan.
Abstract:
The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval $[0,1]$ with dependence on a single parameter, $\lambda$. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on $\lambda$ and the behavior of the initial data around $1$. The second scaling leads to a measure-valued Fleming-Viot process, an infinite-dimensional stochastic process that is frequently associated with population genetics.
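The opposing two-scale selection described above can be illustrated with a toy simulation. This is not the paper's ball-and-urn process or its scalings; it is a hypothetical sketch in which each host carries a replication-rate trait in $[0,1]$, within-host selection nudges the trait upward, and transmission favors low-trait strains.

```python
import random

def step(pop, drift=0.01, rng=random):
    """One update of the toy two-scale dynamic (illustrative only):
    within-host selection raises every trait by `drift` (capped at 1),
    then one host is replaced by a copy of a parent drawn with weight
    proportional to (1 - trait), modeling transmission of slow strains."""
    pop = [min(1.0, x + drift) for x in pop]
    weights = [1.0 - x + 1e-9 for x in pop]  # epsilon avoids all-zero weights
    parent = rng.choices(pop, weights=weights, k=1)[0]
    pop[rng.randrange(len(pop))] = parent
    return pop

rng = random.Random(0)
pop = [rng.random() for _ in range(200)]
for _ in range(500):
    pop = step(pop, rng=rng)
# the trait distribution stays on [0, 1] while the two pressures compete
```

The paper's rigorous treatment replaces this kind of simulation with weak-convergence limits: a deterministic integro-PDE in one scaling and a Fleming-Viot process in the other.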
Abstract:
Background: Young children's physical activity (PA) is influenced by their child care environment. This study assessed PA practices in centers from Massachusetts (MA) and Rhode Island (RI), compared them to best practice recommendations, and assessed differences between states and center profit status. We also assessed weather-related practices. Methods: Sixty percent of MA and 54% of RI directors returned a survey, for a total of 254. Recommendations were 1) daily outdoor play, 2) providing outdoor play area, 3) limiting fixed play structures, 4) variety of portable play equipment, and 5) providing indoor play area. We fit multivariable linear regression models to examine adjusted associations between state, profit status, PA, and weather-related practices. Results: MA did not differ from RI in meeting PA recommendations (β = 0.03; 0.15, 0.21; P = .72), but MA centers scored higher on weather-related practices (β = 0.47; 0.16, 0.79; P = .004). For-profit centers had lower PA scores compared with nonprofits (β = -0.20; 95% CI: -0.38, -0.02; P = .03), but they did not differ for weather (β = 0.12; -0.19, 0.44; P = .44). Conclusions: More MA centers allowed children outside in light rain or snow. For-profit centers had more equipment, both fixed and portable. Results from this study may help inform interventions to increase PA in children.
Abstract:
In this article I analyze the papers presented at the fifteenth ICMI Study, on the education of mathematics teachers. This analysis reflects the diversity of contexts in which such teacher education takes place and the consequent multiplicity of models with which researchers and teacher educators approach the question. Within this diversity, is it possible to identify a common core that allows the knowledge of the mathematics teacher to be conceptualized and initial training programmes to be grounded? The "mathematics for teaching" proposal of Ball and her collaborators is one option, arising from the analysis of practice. I describe and critique this proposal, and suggest a complementary option of an analytical character. This option is based on characterizing the activities that a teacher should ideally carry out when planning, implementing and evaluating didactical units. With this approach, it is possible to systematically determine the capabilities that can contribute to developing the professional competences of the mathematics teacher and, therefore, to ground initial training programmes.
Abstract:
This paper details a modelling approach for assessing the in-service (field) reliability and thermal fatigue lifetime of electronic package interconnects for components used in the assembly of an aerospace system. The Finite Element slice model of a Plastic Ball Grid Array (PBGA) package and suitable energy based damage models for crack length predictions are used in this study. Thermal fatigue damage induced in tin-lead solder joints is investigated by simulating the crack growth process under a set of prescribed field temperature profiles that cover the period of operational life. The overall crack length in the solder joint for all the different thermal profiles and the number of cycles for each profile is predicted using a superposition technique. The effect of using an underfill is also presented. A procedure for verifying the field lifetime predictions for the electronic package by using reliability assessment under Accelerated Thermal Cycle (ATC) testing is also briefly outlined.
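The superposition technique mentioned above can be sketched in a few lines: estimate a crack growth increment per cycle for each thermal profile separately, then sum increment times cycle count over all profiles. The numbers below are hypothetical placeholders, not values from the study.

```python
def total_crack_length(profiles):
    """Superposition sketch: overall crack length as the sum of
    (growth-per-cycle * number of cycles) over all thermal profiles.
    `profiles` is a list of (growth_per_cycle_mm, n_cycles) pairs."""
    return sum(da * n for da, n in profiles)

# Hypothetical field profiles over the operational life:
field_profiles = [
    (2.0e-5, 3000),   # deep ground-to-cruise cycles (illustrative)
    (5.0e-6, 20000),  # shallow in-flight fluctuations (illustrative)
]
crack_mm = total_crack_length(field_profiles)  # 0.06 + 0.10 = 0.16 mm
```

In the paper the per-profile increments come from energy-based damage models evaluated on the Finite Element slice model; the linear summation is what allows many distinct field profiles to be combined into one lifetime estimate.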
Abstract:
Self-alignment of soldered electronic components such as flip-chips (FC), ball grid arrays (BGA) and optoelectronic devices during solder reflow is important as it ensures good alignment between components and substrates. Two uncoupled analytical models are presented which provide estimates of the dynamic time scales of both the chip and the solder in the self-alignment process. These predicted time scales can be used to decide whether a coupled dynamic analysis is required for the analysis of the chip motion. In this paper, we will show that for flip-chips, the alignment dynamics can be described accurately only when the chip motion is coupled with the solder motion because the two have similar time-scale values. To study this coupled phenomenon, a dynamic modeling method has been developed. The modeling results show that the uncoupled and coupled calculations result in significantly different predictions. The calculations based on the coupled model predict much faster rates of alignment than those predicted using the uncoupled approach.
Abstract:
Purpose – To present key challenges associated with the evolution of system-in-package technologies and to present technical work in reliability modelling and embedded test that contributes to these challenges. Design/methodology/approach – Key challenges have been identified from the electronics and integrated MEMS industrial sectors. Solutions for optimising the reliability of a typical assembly process and reducing the cost of production test have been studied through simulation and modelling studies based on technology data released by NXP and in collaboration with EDA tool vendors Coventor and Flomerics. Findings – Characterised models that deliver spatial and material-dependent reliability data that can be used to optimize the robustness of SiP assemblies, together with results that indicate the relative contributions of various structural variables. Also, an initial analytical model for solder ball reliability and a solution for embedding a low-cost test for a capacitive RF-MEMS switch, identified as an SiP component presenting a key test challenge. Research limitations/implications – Results will contribute to the further development of NXP wafer-level system-in-package technology. Limitations are that feedback on the implementation of recommendations and the physical characterisation of the embedded test solution remain outstanding. Originality/value – Both the methodology and the associated studies on the structural reliability of an industrial SiP technology are unique. The analytical model for solder ball life is new, as is the embedded test solution for the RF-MEMS switch.