980 results for universal chip control


Relevance:

30.00%

Publisher:

Abstract:

This project studies the behavior of a system based on the Texas Instruments CC1110 chip for wireless applications. Devices built around this kind of chip are now widespread, given the ever-growing demand for wireless management and control applications. The first part of the project therefore presents the state of the art in this area, covering embedded operating systems, FPGAs, etc., and includes an introduction to the history of unmanned aerial vehicles (UAVs), the vehicle chosen to carry the data link. The second part studies the device on a development board, verifying its range and capabilities with the supplied software. It is worth noting that the board must be controlled through low-level programming in C, which gives great versatility to the applications that can be developed. The third part therefore develops a functional program based on requirements provided by the company collaborating on the project (INDRA). The program is implemented in Matlab, which is well suited to this kind of application because of its versatility and computational power. Finally, specific tests are carried out for each program, including field tests with vehicles as similar as possible to those of the real environment in which the system is expected to operate. A user manual with a highly graphical format accompanies the program so that new users can get started quickly and easily. The document closes with future lines of work for the system, conclusions, a budget, and an annex containing the most important program code.

Relevance:

30.00%

Publisher:

Abstract:

Temperature is a first-class design concern in modern integrated circuits. The sharp increase in power densities brought by recent technology generations has led to the appearance of thermal gradients and hot spots during normal run-time operation. Temperature has a negative impact on several circuit parameters such as gate delay, cooling budgets, reliability and power consumption. To fight these harmful effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information provided by a monitoring system that measures the thermal state of the die surface at run time. On-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis, which approaches the problem from several perspectives and levels and offers solutions to some of its most important issues.

The physical and circuit levels are covered by the design and characterization of two new temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width depends on the temperature dependence of the leakage currents: a circuit node is charged and then left floating so that it discharges through the subthreshold leakage of a transistor, and the discharge time of the node is the width of the pulse. Since the pulse width depends exponentially on temperature, the conversion into a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output. The resulting structure, implemented in a 0.35 µm technology, occupies a very small area (10,250 nm²) and consumes very little power (1.05-65.5 nW at 5 samples/s); these figures outperformed all previous work when they were first published and, at the time of writing this thesis, still outperform every implementation fabricated in the same technology node. Regarding accuracy, the sensor offers good linearity even without calibration, with a 3σ error of 1.97 °C, adequate for DTM applications. The sensor is fully compatible with standard CMOS processes, which, together with its small area and power overhead, makes it particularly suitable for integration into a DTM monitoring system with a set of monitors distributed across the chip.

The increased process variability of recent technology nodes compromises the linearity of this first proposal. To overcome the problem, a new temperature-sensing technique is proposed. It also relies on the thermal dependence of the leakage currents that discharge a floating node, but the measurement is now the ratio of two readings, in one of which a characteristic of the discharging transistor, its gate voltage, is altered. This ratio proves very robust against process variations, and its linearity amply satisfies the requirements of DTM policies, with a 3σ error of 1.17 °C considering process variations and two-point calibration. Implementing the sensing part of this technique raises several design issues, such as the generation of a voltage reference that is independent of process variations, which are analyzed in depth in the thesis. The time-to-digital conversion reuses the digitization structure of the first sensor, and a completely new standard-cell library targeting minimal area and power was built from scratch for its physical implementation. The complete sensor is characterized by an ultra-low energy per conversion (48-640 pJ) and a tiny area of 0.0016 mm², a figure that improves on all previous work; to support this claim, an exhaustive comparison with more than 40 sensor proposals from the scientific literature is carried out.

Moving up to the system level, the third contribution focuses on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip surface. All previous works in the literature aim at maximizing the accuracy of the system with the minimum number of monitors. As a novelty, this proposal introduces additional quality metrics beyond the number of sensors: power consumption, sampling frequency, interconnection costs and the possibility of choosing among different monitor types. The model is embedded in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, its area, power and interconnection constraints, and a collection of monitor types; the algorithm returns the selected monitor type, the number of monitors, their positions and the optimum sampling rate. Several case studies on the Alpha 21364 processor under different constraint configurations are presented to validate the algorithm. Compared with previous works in the literature, the model presented here is the most complete.

Finally, the last contribution addresses the network level: given a set of temperature monitors at known positions, the problem is to connect them in an area- and power-efficient way. The first proposal in this field is the introduction of a new level in the interconnection hierarchy, the threshing level, placed between the monitors and the traditional peripheral buses. This level applies data selectivity to reduce the amount of information sent to the central controller; the idea behind it is that in this kind of network most of the data are useless, because from the controller's point of view only a small amount of data, normally the extreme values, is of interest. To cover the new level, a single-wire monitoring network based on a time-domain signaling scheme is proposed. The scheme significantly reduces both the switching activity on the wire and the power consumption of the network, and the monitor readings arrive at the controller already ordered from maximum to minimum. If this kind of signaling is applied to sensors that perform time-to-digital conversion, the digitization resources can be shared in both time and space, which brings important area and power savings. Two prototypes of complete monitoring systems are presented that significantly outperform previous works in terms of area and, especially, power consumption.
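As a back-of-the-envelope illustration of why the logarithmic counter linearizes this kind of readout, here is a behavioral sketch with made-up constants, not the thesis circuit: if the leakage current is approximated as exponential in temperature, the discharge time of the floating node shrinks exponentially with temperature, so a counter that reports the logarithm of the measured pulse width yields a code that is roughly linear in temperature.

import math

# Behavioral sketch of the leakage-based sensor: exponential leakage model,
# floating-node discharge time as the pulse width, logarithmic counter output.
# Every constant below is hypothetical; none comes from the thesis.
I_LEAK_0 = 1e-12      # assumed leakage at 0 degC (A)
ALPHA = 0.08          # assumed exponential temperature coefficient (1/degC)
C_NODE = 10e-15       # assumed node capacitance (F)
V_SWING = 1.0         # assumed discharge voltage swing (V)
F_CLK = 1e6           # assumed counter reference clock (Hz)

def pulse_width(temp_c):
    """Discharge time of the floating node through the leakage current."""
    i_leak = I_LEAK_0 * math.exp(ALPHA * temp_c)
    return C_NODE * V_SWING / i_leak

def log_counter(width_s):
    """Counter whose code grows with the logarithm of the measured pulse width."""
    return round(math.log2(width_s * F_CLK))

for temp_c in (0, 25, 50, 75, 100):
    w = pulse_width(temp_c)
    print(f"{temp_c:3d} degC  width {w*1e6:10.1f} us  code {log_counter(w)}")
# The code steps down almost uniformly with temperature: the exponential pulse
# width has been linearized by the logarithmic time-to-digital conversion.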

Relevance:

30.00%

Publisher:

Abstract:

A 3-year Project financed by the European Commission is aimed at developing a universal system to de-orbit satellites at the end of their life, as a fundamental contribution to limiting the increase of debris in the space environment. The operational system involves a conductive tape tether left bare to establish anodic contact with the ambient plasma, acting as a giant Langmuir probe. The Project will size the three disparate dimensions of a tape for a selected de-orbit mission and determine scaling laws to allow system design for a general mission. Starting in the second year, mission selection is carried out while developing numerical codes to implement control laws on tether dynamics in and off the orbital plane; performing numerical simulations and plasma chamber measurements of the tether-plasma interaction; and completing the design of the subsystems: electron-ejecting plasma contactor, power module, interface elements, deployment mechanism, and tether tape/end mass. This will be followed by subsystem manufacturing and by current-collection, free-fall, and hypervelocity impact tests.

Relevance:

30.00%

Publisher:

Abstract:

The use of wind tunnels in civil engineering is increasingly in demand owing to current urban development: the need for ever taller buildings in which to concentrate a larger population, bridges and structures that ease the passage of alternative means of transport, the importance of the artistic aspects of construction (in addition to the functional ones), and so on. Many factors can make it necessary to test one of these structures in a wind tunnel, and there is no universal criterion for deciding whether or not it is worth doing so.

Relevance:

30.00%

Publisher:

Abstract:

We have investigated mRNA 3′-end-processing signals in each of six eukaryotic species (yeast, rice, arabidopsis, fruitfly, mouse, and human) through the analysis of more than 20,000 3′-expressed sequence tags. The use and conservation of the canonical AAUAAA element vary widely among the six species and are especially weak in plants and yeast. Even in the animal species, the AAUAAA signal does not appear to be as universal as indicated by previous studies. The abundance of single-base variants of AAUAAA correlates with their measured processing efficiencies. As found previously, the plant polyadenylation signals are more similar to those of yeast than to those of animals, in both the content and the arrangement of the signal elements. In all species examined, the complete polyadenylation signal appears to consist of an aggregate of multiple elements. In light of these and previous results, we present a broadened concept of 3′-end-processing signals in which no single exact sequence element is universally required for processing. Rather, the total efficiency is a function of all elements and, importantly, an inefficient word in one element can be compensated for by strong words in other elements. These complex patterns indicate that effective tools to identify 3′-end-processing signals will require more than consensus sequence identification.
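To make the compensation idea concrete, here is a deliberately toy additive scoring sketch; the element names and weights are invented for illustration and are not the signal model measured in this study.

# Hypothetical additive model of a 3'-end-processing signal: the overall score
# is the sum of per-element scores, so a weak word in one element (e.g. a
# single-base variant of AAUAAA) can be offset by strong words elsewhere.
# Element names and scores below are illustrative, not measured values.
ELEMENT_SCORES = {
    "near_upstream": {"AAUAAA": 3.0, "AAUAAG": 1.2, "AAUGAA": 1.0},
    "far_upstream":  {"UGUA":   1.5, "UAUA":   0.8},
    "downstream":    {"UGUGU":  2.0, "GUGUG":  1.5},
}

def signal_score(words):
    """Sum the scores of the words found in each element (0 if absent)."""
    return sum(ELEMENT_SCORES[element].get(word, 0.0)
               for element, word in words.items())

canonical = {"near_upstream": "AAUAAA", "far_upstream": "UAUA", "downstream": "GUGUG"}
variant   = {"near_upstream": "AAUGAA", "far_upstream": "UGUA", "downstream": "UGUGU"}

print(signal_score(canonical), signal_score(variant))
# 5.3 vs 4.5: the variant's weak near-upstream word is largely compensated
# by stronger words in the flanking elements.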

Relevance:

30.00%

Publisher:

Abstract:

Sugarcane farming is the second largest economic activity in the Brazilian agribusiness chain. It generates wealth through the production of sugar, ethanol and cogenerated electricity, in addition to other by-products. Although considered a renewable energy source, sugarcane initially had its image associated with negative impacts, mainly because of the burning of fields for manual harvesting. In recent years, based on government decrees and the Agro-environmental Protocol, this practice has been phased out. To maintain and even increase harvester productivity in the cane fields, managers have adopted practices that reduce agricultural terracing, with an impact on soil conservation systems. This work therefore aimed to identify the environmental impacts caused by agricultural mechanization resulting from poor management and poor sizing of soil conservation structures. The study also analyzed the Universal Soil Loss Equation (USLE) as a tool for sizing agricultural terraces. It was carried out in a small watershed, Ribeirão da Bocaina, located in the UGRHI-13 (Tietê-Jacaré) water resources management unit. It was possible to identify the sampling variability of the soil relevant to conservation design, generating contour lines with non-uniform vertical intervals, in contrast to the current terracing practice, which follows fixed multiples of elevation or even empirical sizing based on local knowledge and the recent history of the area. Suggestions were also made to turn the equation into an even more efficient tool by considering conditions specific to sugarcane, such as the influence of straw mulch, planting furrows and the various types of terraces as means of erosion control. The methodology proved satisfactory for understanding the relationship between conservation practices and soil loss prediction models, shedding light on the interpretation of existing tools and on the gaps that remain to be filled.
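For reference, the textbook form of the Universal Soil Loss Equation discussed above (the general equation, not the parameter values fitted in this study):

% A: average annual soil loss; R: rainfall erosivity; K: soil erodibility;
% L, S: slope length and steepness factors; C: cover-management factor;
% P: support-practice factor (terracing, contour planting, etc.)
A = R \cdot K \cdot L \cdot S \cdot C \cdot P

In this textbook reading, straw mulch acts mainly through the cover-management factor C, while planting furrows and terraces act through the support-practice factor P and the effective slope-length factor L.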

Relevance:

30.00%

Publisher:

Abstract:

The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution, while at the same time decreasing size and power consumption. The use of Field Programmable Gate Arrays (FPGAs) provides specific reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), reconfigurability and superb performance in the development of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors is being developed in Spain based on FPGAs. In this paper, a review of these developments is presented, also describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.

Relevance:

30.00%

Publisher:

Abstract:

The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is growing concern about the mitigation of single-event upset (SEU) and single-event transient (SET) effects. This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On the one hand, the approach is based on software redundancy techniques for correcting errors produced in the data. On the other hand, control-flow errors can be detected by reusing the on-chip debug interface available in most modern microprocessors. Experimental results show an important increase in system reliability, exceeding two orders of magnitude in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by the technique are readily affordable in low-cost systems.
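As a purely conceptual sketch of the software data-redundancy idea, here is triplication with majority voting, so that a single corrupted copy can be corrected; this is not the paper's implementation, and the control-flow checking through the on-chip debug interface is not modeled here.

from collections import Counter

class TriVar:
    """A value stored three times; reads return the majority and repair outliers."""
    def __init__(self, value):
        self.copies = [value, value, value]

    def read(self):
        value, votes = Counter(self.copies).most_common(1)[0]
        if votes < 3:                      # one copy was corrupted: correct it
            self.copies = [value, value, value]
        return value

    def write(self, value):
        self.copies = [value, value, value]

acc = TriVar(0)
acc.copies[1] = 999                        # inject a fault into one copy
assert acc.read() == 0                     # the majority vote masks and repairs it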

Relevance:

30.00%

Publisher:

Abstract:

I will start by discussing some aspects of Kagitcibasi's Theory of Family Change: its current empirical status and, more importantly, its focus on universal human needs and the consequences of this focus. Family Change Theory treats the basic human needs of autonomy and relatedness as universal, and at the culture level it views cultural norms and family values as reflecting a culture's capacity to fulfill its members' respective needs; the theory therefore advocates balanced cultural norms of independence and interdependence. As a normative theory, it postulates the necessity of a synthetic family model of emotional interdependence as an alternative to the extreme models of total independence and total interdependence. Generalizing from this, I will sketch a theoretical model in which a dynamic and dialectical process of fit between individual and culture, and between culture and universal human needs and related social practices, is central. I will discuss this model using a recent cross-cultural project on implicit theories of self/world and primary/secondary control orientations as an example. Implications for migrating families and acculturating individuals are also discussed.

Relevance:

30.00%

Publisher:

Abstract:

The upper 200 m of the sediments recovered during IODP Leg 302, the Arctic Coring Expedition (ACEX), to the Lomonosov Ridge in the central Arctic Ocean consist almost exclusively of detrital material. The scarcity of biostratigraphic markers severely complicates the establishment of a reliable chronostratigraphic framework for these sediments, which contain the first continuous record of the Neogene environmental and climatic evolution of the Arctic region. Here we present profiles of cosmogenic 10Be together with the seawater-derived fraction of stable 9Be obtained from the ACEX cores. The down-core decrease of 10Be/9Be provides an average sedimentation rate of 14.5 ± 1 m/Ma for the uppermost 151 m of the ACEX record and allows the establishment of a chronostratigraphy for the past 12.3 Ma. The age-corrected 10Be concentrations and 10Be/9Be ratios suggest the existence of an essentially continuous sea ice cover over the past 12.3 Ma.
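The dating principle behind the profile can be summarized by the standard radioactive-decay relation below (a generic statement; the decay constant shown assumes a 10Be half-life of roughly 1.4 Ma, which may differ from the value adopted in the paper):

% Seawater-derived 10Be decays while 9Be is stable, so the ratio records age t:
\left(\frac{^{10}\mathrm{Be}}{^{9}\mathrm{Be}}\right)(t)
  = \left(\frac{^{10}\mathrm{Be}}{^{9}\mathrm{Be}}\right)_{0} e^{-\lambda t},
\qquad
\lambda = \frac{\ln 2}{T_{1/2}}, \quad T_{1/2} \approx 1.4\ \mathrm{Ma}.
% With a constant sedimentation rate s, the age at burial depth z is t = z / s,
% so ln(10Be/9Be) decreases linearly with depth and its slope yields s.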

Relevance:

30.00%

Publisher:

Abstract:

"This publication was supported in part by Grant/Cooperative Agreement US50/CCU523303-04 and Early Hearing Detection and Intervention Award UR#CC1520048 from the U.S. Centers for Disease Control and Prevention. ..."--Leaf ii.

Relevance:

30.00%

Publisher:

Abstract:

When can a quantum system of finite dimension be used to simulate another quantum system of finite dimension? What restricts the capacity of one system to simulate another? In this paper we complete the program of studying what simulations can be done with entangling many-qudit Hamiltonians and local unitary control. By entangling we mean that every qudit is coupled to every other qudit, at least indirectly. We demonstrate that the only class of finite-dimensional entangling Hamiltonians that are not universal for simulation is the class of entangling Hamiltonians on qubits whose Pauli operator expansion contains only terms coupling an odd number of systems, as identified by Bremner [Phys. Rev. A 69, 012313 (2004)]. We show that in all other cases entangling many-qudit Hamiltonians are universal for simulation.
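A toy numerical illustration of simulation through local unitary control (my own example, not taken from the paper): because e^{-i U H U† t} = U e^{-i H t} U†, sandwiching a ZZ evolution between local Hadamards reproduces an XX evolution exactly.

# Toy check that local unitary control turns one entangling evolution into
# another: Hadamard (x) Hadamard conjugation maps exp(-i ZZ t) to exp(-i XX t).
# Illustrative only; the paper's result concerns general many-qudit Hamiltonians.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard

ZZ = np.kron(Z, Z)          # available entangling Hamiltonian
XX = np.kron(X, X)          # target entangling Hamiltonian
U  = np.kron(H, H)          # local (non-entangling) control

t = 0.37
simulated = U @ expm(-1j * ZZ * t) @ U.conj().T   # local control around ZZ evolution
target    = expm(-1j * XX * t)                    # desired XX evolution
assert np.allclose(simulated, target)             # exact, since U ZZ U† = XX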

Relevance:

30.00%

Publisher:

Abstract:

In this study, the authors examined the 2-, 3-, and 4-year outcomes of a school-based, universal approach to the prevention of adolescent depression. Despite initial short-term positive effects, these benefits were not maintained over time. Adolescents who completed the teacher-administered cognitive-behavioral intervention did not differ significantly from adolescents in the monitoring-control condition in terms of changes in depressive symptoms, problem solving, attributional style, or other indicators of psychopathology from preintervention to 4-year follow-up. Results were equivalent irrespective of initial level of depressive symptoms.

Relevance:

30.00%

Publisher:

Abstract:

Background: fall-related hip fractures are one of the most common causes of disability and mortality in older age. The study aimed to quantify the relationship between lifestyle behaviours and the risk of fall-related hip fracture in community-dwelling older people. The purpose was to contribute evidence for the promotion of healthy ageing as a population-based intervention for falls injury prevention. Methods: a case-control study was conducted with 387 participants, with a case-control ratio of 1:2. Incident cases of fall-related hip fracture in people aged 65 and over were recruited from six hospital sites in Brisbane, Australia, in 2003-04. Community-based controls, matched by age, sex and postcode, were recruited via electoral roll sampling. A questionnaire designed to assess lifestyle risk factors, identified as determinants of healthy ageing, was administered at face-to-face interviews. Results: behavioural factors which had a significant independent protective effect on the risk of hip fracture included never smoking [adjusted odds ratio (AOR): 0.33 (0.12-0.88)], moderate alcohol consumption in mid- and older age [AOR: 0.49 (0.25-0.95)], not losing weight between mid- and older age [AOR: 0.36 (0.20-0.65)], playing sport in older age [AOR: 0.49 (0.29-0.83)] and practising a greater number of preventive medical care behaviours [AOR: 0.54 (0.32-0.94)] and self-health behaviours [AOR: 0.56 (0.33-0.94)]. Conclusion: with universal exposures, clear associations and modifiable behavioural factors, this study has contributed evidence to reduce the major public health burden of fall-related hip fractures using readily implemented population-based healthy ageing strategies.

Relevance:

30.00%

Publisher:

Abstract:

This study evaluated the long-term effectiveness of the FRIENDS Program in reducing anxiety and depression in a sample of children from Grade 6 and Grade 9 in comparison to a control condition. Longitudinal data from Lock and Barrett's (2003) universal prevention trial are presented, extending the 12-month follow-up to 24- and 36-month follow-ups. Results of this study indicate that the intervention reductions in anxiety reported by Lock and Barrett were maintained for students in Grade 6, with the intervention group reporting significantly lower ratings of anxiety at long-term follow-up. A significant Time × Intervention Group × Gender effect on anxiety was found, with girls in the intervention group reporting significantly lower anxiety at 12-month and 24-month follow-up, but not at 36-month follow-up, in comparison to the control condition. Results demonstrated a prevention effect, with significantly fewer high-risk students at 36-month follow-up in the intervention condition than in the control condition. Results are discussed within the context of prevention research.