944 results for "least mean-square methods"
Abstract:
The algorithms and graphical user interface software package "OPT-PROx" were developed to meet food engineering needs related to canned food thermal processing simulation and optimization. The "OPT-PROx" package (http://tomakechoice.com/optprox/index.html) uses an adaptive random search algorithm and its modification coupled with a penalty-function approach, together with finite difference methods with cubic spline approximation. The diversity of thermal food processing optimization problems with different objectives and required constraints is solvable by the developed software. The geometries supported by "OPT-PROx" are the following: (1) cylinder, (2) rectangle, (3) sphere. The mean square error minimization principle is used to estimate the heat transfer coefficient of the food to be heated under optimal conditions. The user-friendly dialogue and the numerical procedures employed make the "OPT-PROx" software useful to food scientists in research and education, as well as to engineers involved in the optimization of thermal food processing.
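The mean-square-error principle described above can be sketched as a simple parameter fit. This is only an illustration under assumed conditions: the lumped-capacitance temperature model, all parameter values, and the coarse grid search below are hypothetical stand-ins, not the finite-difference conduction model or adaptive random search that OPT-PROx actually implements.

```python
import math

def model_temp(t, h, T0=20.0, T_inf=121.0, rho_cp_v_over_a=5000.0):
    # Lumped-capacitance response (illustrative, NOT the OPT-PROx model):
    # T(t) = T_inf + (T0 - T_inf) * exp(-h * t / (rho * cp * V / A))
    return T_inf + (T0 - T_inf) * math.exp(-h * t / rho_cp_v_over_a)

def mse(h, times, measured):
    # Mean square error between modeled and "measured" temperatures
    return sum((model_temp(t, h) - y) ** 2 for t, y in zip(times, measured)) / len(times)

# Synthetic measurements generated with a true h of 40 W/(m^2 K)
times = [0, 60, 120, 300, 600, 1200]
measured = [model_temp(t, 40.0) for t in times]

# Coarse grid search over candidate heat transfer coefficients
best_h = min(range(1, 201), key=lambda h: mse(h, times, measured))
```

In practice the forward model would be the finite-difference solution of the heat equation and the minimizer would be the package's adaptive random search rather than a grid, but the fitting criterion is the same.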
Abstract:
Distributed target tracking in wireless sensor networks (WSN) is an important problem in which agreement on the target state can be achieved using conventional consensus methods, which take a long time to converge. We propose distributed particle filtering based on belief propagation (DPF-BP) consensus, a fast method for target tracking. According to our simulations, DPF-BP provides better performance than DPF based on standard belief consensus (DPF-SBC) in terms of disagreement in the network. However, in terms of root-mean-square error, it outperforms DPF-SBC only for a specific number of consensus iterations.
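The iterated-consensus idea such methods build on can be illustrated with a plain average-consensus update, whose disagreement shrinks with every iteration. The 4-node path graph, Metropolis weights, and scalar local estimates below are assumptions for demonstration; neither DPF-BP nor DPF-SBC is implemented here.

```python
# Average consensus with Metropolis weights on a 4-node path graph
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def consensus_step(x):
    x_new = []
    for i, xi in enumerate(x):
        s = xi
        for j in neighbors[i]:
            # Metropolis weight: w_ij = 1 / (1 + max(deg_i, deg_j))
            w = 1.0 / (1 + max(len(neighbors[i]), len(neighbors[j])))
            s += w * (x[j] - xi)
        x_new.append(s)
    return x_new

x = [1.0, 4.0, 2.0, 9.0]        # local target-state estimates (hypothetical)
avg = sum(x) / len(x)           # the value every node converges to
for _ in range(100):
    x = consensus_step(x)
disagreement = max(x) - min(x)  # decays geometrically with iterations
```

The geometric decay rate depends on the network topology, which is why the number of consensus iterations matters for the tracking error.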
Abstract:
This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the root-mean-square error (RMSE), the mean delay (MD), and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter. HFCrms is 2.90±1.37 mg/dl for the original profiles.
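The RMSE assessment parameter can be computed directly from paired predicted and reference values; the glucose numbers below are made up for illustration.

```python
import math

def rmse(pred, ref):
    # Root-mean-square error between predicted and reference glucose (mg/dl)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

ref = [100.0, 110.0, 125.0, 140.0]   # reference CGM readings (mg/dl)
pred = [98.0, 113.0, 121.0, 144.0]   # hypothetical predictor output (mg/dl)
error = rmse(pred, ref)
```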
Abstract:
In the framework of the ITER Control Breakdown Structure (CBS), Plant System Instrumentation & Control (I&C) defines the hardware and software required to control one or more plant systems [1]. For diagnostics, most of the complex Plant System I&C is to be delivered by the ITER Domestic Agencies (DAs). As an example for the DAs, the ITER Organization (IO) has developed several use cases for diagnostics Plant System I&C that fully comply with the guidelines presented in the Plant Control Design Handbook (PCDH) [2]. One such use case is for neutron diagnostics, specifically the Fission Chamber (FC), which is responsible for delivering time-resolved measurements of neutron source strength and fusion power to aid in assessing the functional performance of ITER [3]. ITER will deploy four Fission Chamber units, each consisting of three individual FC detectors. Two of these detectors contain Uranium-235 for neutron detection, while a third "dummy" detector provides gamma and noise detection. The neutron flux from each MFC is measured by three methods: (1) Counting Mode: measures the number of individual pulses and their location in the record; pulse parameters (threshold and width) are user-configurable. (2) Campbelling Mode (Mean Square Voltage): measures the RMS deviation of the signal amplitude from its average value. (3) Current Mode: integrates the signal amplitude over the measurement period.
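The Campbelling (mean square voltage) mode described above amounts to the RMS deviation of the digitized signal from its mean. A minimal sketch, with made-up sample values:

```python
import math

def campbell_msv(samples):
    # RMS deviation of the signal amplitude from its average value,
    # the quantity measured in Campbelling (mean square voltage) mode
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((v - mean) ** 2 for v in samples) / len(samples))

signal = [0.1, 0.5, 0.2, 0.9, 0.3, 0.4]  # hypothetical digitized FC samples (V)
msv = campbell_msv(signal)
```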
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern of integrated circuits as technology scales below 22 nm. Small imperfections in device fabrication now give rise to significant random differences in their electrical characteristics, which must be taken into account during the design phase. The new processes and materials required to fabricate devices of such reduced dimensions are producing new effects that ultimately result in increased static power consumption or in a greater vulnerability to radiation. SRAMs are already the most vulnerable part of an electronic system, not only because they account for more than half of the area of current SoCs and microprocessors, but also because process variations affect them critically, with the failure of a single cell compromising the entire memory. This thesis addresses the different challenges posed by SRAM design in the smallest technologies. In a scenario of increasing variability, it considers problems such as energy consumption, design that accounts for low-level technology effects, and radiation hardening. First, given the increasing variability of the devices belonging to the smallest technology nodes, as well as the appearance of new sources of variability due to the inclusion of new devices and the reduction of their dimensions, accurate modeling of this variability is crucial. The thesis proposes extending the injectors method, which models variability at circuit level while abstracting its physical causes, by adding two new sources to model the sub-threshold slope and DIBL, both of growing importance in FinFET technology.
The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit level. The mean square error when simulating stability and performance metrics of SRAM cells is reduced by at least 1.5 times and by up to 7.5 times, while the estimation of the failure probability is improved by several orders of magnitude. Low-power design is one of the main applications today, given the growing importance of battery-dependent mobile devices. It is equally necessary because of the high power densities of current systems, in order to reduce their thermal dissipation and its consequences in terms of aging. The traditional method of lowering the supply voltage to reduce consumption is problematic for SRAMs, given the growing impact of variability at low voltages. A cell design is proposed that uses negative bit-line values to reduce write failures as the main supply voltage is lowered. Despite using a second power supply for the negative bit-line voltage, the proposed design reduces consumption by up to 20% compared with a conventional cell. A new metric, the hold trip point, is proposed to prevent new types of failure caused by the use of negative voltages, together with an alternative method for estimating read speed that reduces the number of required simulations. As the size of electronic devices continues to shrink, new mechanisms are introduced to ease the fabrication process or to reach the performance required by each new technology generation.
One example is the compressive or tensile stress applied to the fins in FinFET technologies, which alters the mobility of the transistors built from those fins. The effects of these mechanisms are highly layout-dependent: the position of some transistors affects neighboring ones, and the effect may differ between transistor types. The use of a complementary SRAM cell with pMOS pass-gate transistors is proposed, thereby shortening the fins of the nMOS transistors and lengthening those of the pMOS ones, extending them into neighboring cells and up to the limits of the cell array. Considering the effects of STI and SiGe stressors, the proposed design improves both types of transistors, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and the same static power consumption, without requiring any increase in area. Finally, radiation has been a recurrent problem in electronics for space applications, but the reduced currents and voltages of current devices are making them vulnerable to radiation-induced noise, even at ground level. Although technologies such as SOI or FinFET reduce the amount of energy collected by the circuit during a particle strike, the large process variations of the smallest nodes will affect their radiation immunity. It is shown that radiation-induced errors can increase by up to 40% at the 7 nm node when process variations are considered, compared with the nominal case. This increase is larger in magnitude than the improvement obtained by designing radiation-hardened memory cells, suggesting that a reduction of the variability would represent a greater improvement.
ABSTRACT Reliability is becoming the main concern in integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in significant random differences in the devices' electrical characteristics, which must be dealt with during design. New processes and materials, required to allow the fabrication of extremely short devices, are making new effects appear, ultimately resulting in increased static power consumption or a higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they are also critically affected as soon as the different variation sources are considered, with a failure in a single cell making the whole memory fail. This thesis addresses the different challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues like energy consumption, technology-aware design, and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability appearing as a consequence of new devices and shortened lengths, accurate modeling of the variability is crucial. We propose to extend the injectors method, which models variability at circuit level by abstracting its physical sources, to better model the sub-threshold slope and drain-induced barrier lowering that are gaining importance in FinFET technology. The two new proposed injectors bring an increased accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit levels. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude. Low-power design is a major constraint given the fast-growing market of mobile devices that run on battery.
It is also relevant because of the increased power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the voltage to lower the energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that makes use of a negative bit-line write-assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, has been introduced to deal with the new sources of failure affecting cells that use a negative bit-line voltage, as well as an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of the successive nodes. As an example, consider the compressive or tensile strains included in FinFET technology, which alter the mobility of the transistors made out of the concerned fins. The effects of these mechanisms are very dependent on the layout, with transistors being affected by their neighbors, and different types of transistors being affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to reduce the fin length of nMOS devices and achieve long uncut fins for the pMOS devices when the cell is included in its corresponding array. Once shallow trench isolation and SiGe stressors are considered, the proposed design improves both kinds of transistors, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead.
While radiation has been a traditional concern in space electronics, the small currents and voltages used in the latest nodes are making them more vulnerable to radiation-induced transient noise, even at ground level. Even if SOI or FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the significant process variations that the smallest nodes will present will affect their radiation-hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% at the 7 nm node compared to the nominal case. This increase is higher than the improvement achieved by radiation-hardened cells, suggesting that a reduction of process variations would bring a greater improvement.
Abstract:
Progress in homology modeling and protein design has generated considerable interest in methods for predicting side-chain packing in the hydrophobic cores of proteins. Present techniques are not practically useful, however, because they are unable to model protein main-chain flexibility. Parameterization of backbone motions may represent a general and efficient method to incorporate backbone relaxation into such fixed main-chain models. To test this notion, we introduce a method for treating explicitly the backbone motions of alpha-helical bundles based on an algebraic parameterization proposed by Francis Crick in 1953 [Crick, F. H. C. (1953) Acta Crystallogr. 6, 685-689]. Given only the core amino acid sequence, a simple calculation can rapidly reproduce the crystallographic main-chain and core side-chain structures of three coiled coils (one dimer, one trimer, and one tetramer) to within 0.6-Å root-mean-square deviations. The speed of the predictive method [approximately 3 min per rotamer choice on a Silicon Graphics (Mountain View, CA) 4D/35 computer] permits it to be used as a design tool.
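A root-mean-square deviation between two coordinate sets can be sketched as below. The coordinates are made up, and a real comparison would first superpose the structures (e.g. with the Kabsch algorithm), which is omitted here.

```python
import math

def rmsd(coords_a, coords_b):
    # RMSD between two pre-aligned 3D coordinate sets (Å)
    n = len(coords_a)
    ss = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(ss / n)

a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]  # hypothetical model coordinates
b = [(0.0, 0.3, 0.0), (1.5, 0.0, 0.4)]  # hypothetical reference coordinates
value = rmsd(a, b)
```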
Abstract:
This thesis presents an approach for the rapid creation of models in different geometries (complex or highly symmetric) in order to calculate the corresponding scattered intensity, which can be used to describe small-angle scattering experiments. The modeling can be carried out with more than 100 geometries catalogued in a database, in addition to the possibility of building structures from random positions distributed on the surface of a sphere. In all cases the models are generated by means of a finite-element method composing a single geometry, or composing different geometries combined with one another from a small number of parameters. To accomplish this task a Fortran program called Polygen was developed, which allows convex geometries to be modeled in different forms, such as solids, shells, or with spheres or DNA-like structures on the edges, and allows these models to be used to simulate the scattered intensity curve for oriented and randomly oriented systems. The scattering intensity curve is calculated by means of the Debye equation, and the parameters that make up each model can be optimized by fitting against experimental data, using minimization methods based on simulated annealing, Levenberg-Marquardt, and genetic algorithms. The minimization makes it possible to adjust the parameters of the model (or composition of models), such as size, electron density, and radius of the subunits, among others, providing a new tool for the modeling and analysis of scattering data. In another part of this thesis, the design of atomistic models and their simulation by Molecular Dynamics are presented. The geometries of two self-assembled DNA systems in the form of a truncated octahedron, one with linkers of 7 adenines and the other with ATATATA linkers, were chosen for the atomistic modeling and Molecular Dynamics simulation.
For these systems, results are presented for Root Mean Square Deviations (RMSD), Root Mean Square Fluctuations (RMSF), radius of gyration, and twist of the DNA double helices, in addition to an evaluation of the hydrogen bonds, all obtained from the analysis of a 50 ns trajectory.
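The Debye equation used for the orientation-averaged intensity can be sketched for point scatterers. This is a bare illustration with unit form factors and two made-up points, not the Polygen implementation.

```python
import math

def debye_intensity(q, points, f=1.0):
    # Debye formula: I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij),
    # where the i == j (r == 0) and q == 0 terms take the limit value 1
    total = 0.0
    for pi in points:
        for pj in points:
            x = q * math.dist(pi, pj)
            total += f * f * (1.0 if x == 0.0 else math.sin(x) / x)
    return total

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # two unit-separated scatterers
```

At q = 0 the intensity reduces to N² (here 4); at larger q the cross terms oscillate and decay.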
Abstract:
Purpose: To evaluate visual, optical, and quality of life (QoL) outcomes and their intercorrelations after bilateral implantation of posterior chamber phakic intraocular lenses. Methods: Twenty eyes with moderate to high myopia of 10 patients that underwent PRL implantation (Phakic Refractive Lens, Carl Zeiss Meditec AG) were examined. Refraction, visual acuity, photopic and low mesopic contrast sensitivity (CS) with and without glare, ocular aberrations, as well as QoL outcomes (National Eye Institute Refractive Error Quality of Life Instrument-42, NEI RQL-42) were evaluated at 12 months postoperatively. Results: Significant improvements in uncorrected (UDVA) and best-corrected (CDVA) distance visual acuities were found postoperatively (p < 0.01), with a significant reduction in spherical equivalent (p < 0.01). Low mesopic CS without glare was significantly better than measurements with glare for 1.5, 3, and 6 cycles/degree (p < 0.01). No significant correlations of higher order root mean square (RMS) with CDVA (r = −0.26, p = 0.27) or CS (r ≤ 0.45, p ≥ 0.05) were found. Postoperative binocular photopic CS for 12 cycles/degree and 18 cycles/degree correlated significantly with several RQL-42 scales. The glare index correlated significantly with CS measures and scotopic pupil size (r = −0.551, p = 0.04), but not with higher order RMS (r = −0.02, p = 0.94). Postoperative higher order RMS, primary coma, and spherical aberration were significantly higher for a 5-mm pupil diameter (p < 0.01) compared with controls. Conclusions: Correction of moderate to high myopia by means of PRL implantation had a positive impact on CS and QoL. The aberrometric increase induced by the surgery does not seem to limit CS and QoL. However, perception of glare is still a relevant disturbance in some cases, possibly related to the limitation of the optical zone of the PRL.
Reverse Geometry Hybrid Contact Lens Fitting in a Case of Donor-Host Misalignment after Keratoplasty
Abstract:
Purpose: To report the successful outcome obtained after fitting a new hybrid contact lens in a cornea with an area of donor-host misalignment and significant levels of irregular astigmatism after penetrating keratoplasty (PKP). Materials and methods: A 41-year-old female with bilateral asymmetric keratoconus underwent PKP in her left eye due to the advanced status of the disease. One year after surgery, the patient reported poor visual acuity and quality in this eye. Fitting of different types of rigid gas permeable contact lenses was attempted, but with an unsuccessful outcome due to contact lens stability problems and uncomfortable wear. Scheimpflug imaging evaluation revealed that a donor-host misalignment was present at the nasal area. A reverse geometry hybrid contact lens (Clearkone, SynergEyes, Carlsbad) was then fitted. Visual, refractive, and ocular aberrometric outcomes were evaluated during a 1-year period after the fitting. Results: Uncorrected distance visual acuity improved from a prefitting value of 20/200 to a best corrected postfitting value of 20/20. Prefitting manifest refraction was +5.00 sphere and −5.50 cylinder at 75°, with a corrected distance visual acuity of 20/30. Higher order root mean square (RMS) for a 5 mm pupil changed from a prefitting value of 6.83 µm to a postfitting value of 1.57 µm. Contact lens wear was reported as comfortable, with no anterior segment alterations. Conclusion: The SynergEyes Clearkone contact lens seems to be another potentially useful option for visual rehabilitation after PKP, especially in cases of donor-host misalignment.
Abstract:
Purpose: To report a very successful outcome obtained with the fitting of a new-generation hybrid contact lens of reverse geometry in a thin cornea with extreme irregularity due to the presence of a central island after unsuccessful myopic excimer laser refractive surgery. Methods: A 32-year-old man presented to our clinic complaining of very poor vision in his right eye after bilateral laser in situ keratomileusis (LASIK) for myopia correction and some additional retreatments afterward. After a comprehensive ocular evaluation, fitting of a reverse geometry hybrid contact lens (SynergEyes PS, SynergEyes, Carlsbad, CA) was proposed as a solution for this case. Visual, refractive, and ocular aberrometric outcomes with the contact lens were evaluated. Results: Distance visual acuity improved from a prefitting uncorrected value of 20/200 to a postfitting corrected value of 20/16. Prefitting manifest refraction was +6.00 sphere and −3.00 cylinder at 70°, with a corrected distance visual acuity of 20/40. Higher order root mean square for a 5-mm pupil changed from a prefitting value of 1.45 µm to 0.34 µm with the contact lens. Contact lens wear was reported as comfortable, and the patient was very satisfied with this solution. Conclusions: The SynergEyes PS contact lens seems to be an excellent option for the visual rehabilitation of corneas with extreme irregularity after myopic excimer laser surgery, minimizing the level of higher order aberrations and providing an excellent visual outcome.
Abstract:
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. These cause the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding-window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of the models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and the step size of its shift, is sought through a thorough experimental analysis using real data. These analyses lead to the derivation of a model with a temporal resolution higher than that of the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. In addition, according to our computations, empirical models determined with USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than those using IERS 08 C04 over the whole period of VLBI observations. The model is also validated through comparisons with other recognized models, and the level of agreement among them is satisfactory. We remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
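The WRMS of residuals used for such comparisons can be computed as below; the residuals and formal errors are made-up numbers, not actual VLBI results.

```python
import math

def wrms(residuals, sigmas):
    # Weighted root-mean-square of residuals with weights w_i = 1 / sigma_i^2
    weights = [1.0 / s ** 2 for s in sigmas]
    num = sum(w * r ** 2 for w, r in zip(weights, residuals))
    return math.sqrt(num / sum(weights))

res = [0.10, -0.20, 0.05]  # hypothetical nutation-offset residuals (mas)
sig = [0.10, 0.20, 0.05]   # their formal errors (mas)
value = wrms(res, sig)
```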
Abstract:
LIDAR (LIght Detection And Ranging) first return elevation data of the Boston, Massachusetts region from MassGIS at 1-meter resolution. This LIDAR data was captured in Spring 2002. LIDAR first return data (which shows the highest ground features, e.g. tree canopy, buildings etc.) can be used to produce a digital terrain model of the Earth's surface. This dataset consists of 74 First Return DEM tiles. The tiles are 4km by 4km areas corresponding with the MassGIS orthoimage index. This data set was collected using 3Di's Digital Airborne Topographic Imaging System II (DATIS II). The area of coverage corresponds to the following MassGIS orthophoto quads covering the Boston region (MassGIS orthophoto quad ID: 229890, 229894, 229898, 229902, 233886, 233890, 233894, 233898, 233902, 233906, 233910, 237890, 237894, 237898, 237902, 237906, 237910, 241890, 241894, 241898, 241902, 245898, 245902). The geographic extent of this dataset is the same as that of the MassGIS dataset: Boston, Massachusetts Region 1:5,000 Color Ortho Imagery (1/2-meter Resolution), 2001 and was used to produce the MassGIS dataset: Boston, Massachusetts, 2-Dimensional Building Footprints with Roof Height Data (from LIDAR data), 2002 [see cross references].
Abstract:
This dataset consists of 2D footprints of the buildings in the metropolitan Boston area, based on tiles in the orthoimage index (orthophoto quad ID: 229890, 229894, 229898, 229902, 233886, 233890, 233894, 233898, 233902, 237890, 237894, 237898, 237902, 241890, 241894, 241898, 241902, 245898, 245902). This data set was collected using 3Di's Digital Airborne Topographic Imaging System II (DATIS II). Roof height and footprint elevation attributes (derived from 1-meter resolution LIDAR (LIght Detection And Ranging) data) are included as part of each building feature. This data can be combined with other datasets to create 3D representations of buildings and the surrounding environment.
Abstract:
Study Design. Cross-sectional study. Objective. This study compared neck muscle activation patterns during and after a repetitive upper limb task between patients with idiopathic neck pain, patients with whiplash-associated disorders, and controls. Summary of Background Data. Previous studies have identified altered motor control of the upper trapezius during functional tasks in patients with neck pain. Whether the cervical flexor muscles demonstrate altered motor control during functional activities is unknown. Methods. Electromyographic activity was recorded from the sternocleidomastoid, anterior scalene, and upper trapezius muscles. Root mean square electromyographic amplitude was calculated during and on completion of a functional task. Results. A general trend suggested the greatest electromyographic amplitude in the sternocleidomastoid, anterior scalene, and left upper trapezius muscles for the whiplash-associated disorders group, followed by the idiopathic group, with the lowest electromyographic amplitude recorded for the control group. A reverse effect was apparent for the right upper trapezius muscle. The level of perceived disability (Neck Disability Index score) had a significant effect on the electromyographic amplitude recorded between neck pain patients. Conclusions. Patients with neck pain demonstrated greater activation of accessory neck muscles during a repetitive upper limb task compared to asymptomatic controls. Greater activation of the cervical muscles in patients with neck pain may represent an altered pattern of motor control to compensate for reduced activation of painful muscles. Greater perceived disability among patients with neck pain accounted for the greater electromyographic amplitude of the superficial cervical muscles during performance of the functional task.
Abstract:
Study Design. Cross-sectional study. Objective. The present study compared activity of the deep and superficial cervical flexor muscles and craniocervical flexion range of motion during a test of craniocervical flexion between 10 patients with chronic neck pain and 10 controls. Summary of Background Data. Individuals with chronic neck pain exhibit reduced performance on a test of craniocervical flexion, and training of this maneuver is effective in the management of neck complaints. Although this test is hypothesized to reflect dysfunction of the deep cervical flexor muscles, this has not been tested. Methods. Deep cervical flexor electromyographic activity was recorded with custom electrodes inserted via the nose and fixed by suction to the posterior mucosa of the oropharynx. Surface electrodes were placed over the superficial neck muscles (sternocleidomastoid and anterior scalene). Root mean square electromyographic amplitude and craniocervical flexion range of motion were measured during five incremental levels of craniocervical flexion in supine. Results. There was a strong linear relation between the electromyographic amplitude of the deep cervical flexor muscles and the incremental stages of the craniocervical flexion test for controls and individuals with neck pain (P = 0.002). However, the amplitude of deep cervical flexor electromyographic activity was less for the group with neck pain than for controls, and this difference was significant for the higher increments of the task (P < 0.05). Although not significant, there was a strong trend toward greater sternocleidomastoid and anterior scalene electromyographic activity for the group with neck pain. Conclusions. These data confirm that reduced performance of the craniocervical flexion test is associated with dysfunction of the deep cervical flexor muscles and support the validity of this test for patients with neck pain.
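Root mean square electromyographic amplitude over an epoch is computed as below; the sample values are invented for illustration.

```python
import math

def rms_amplitude(samples):
    # RMS amplitude of an EMG epoch: sqrt(mean of squared samples)
    return math.sqrt(sum(v * v for v in samples) / len(samples))

emg = [0.0, 0.3, -0.4, 0.1, -0.2, 0.5]  # hypothetical EMG samples (mV)
amplitude = rms_amplitude(emg)
```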