912 results for validation tests of PTO


Relevance:

100.00%

Publisher:

Abstract:

Introduction: The Health Belief Scale is a questionnaire used to assess a wide range of beliefs related to health. The objective of this study was to construct and culturally adapt the Health Belief Scale (HBS) to the Portuguese language and to test its reliability and validity. Methods: This new version was obtained with forward/backward translations, consensus panels and a pre-test, having been inspired by some of the items from "Canada's Health Promotion Survey" and the "European Health and Behaviour Survey", with the inclusion of new items about food-related beliefs. The Portuguese version of the Health Belief Scale and a form for the characteristics of the participants were applied to 849 Portuguese adolescents. Results: Reliability was good, with a Cronbach's alpha coefficient of 0.867 and an intraclass correlation coefficient (ICC) of 0.95. Corrected item-total coefficients ranged from 0.301 to 0.620 and weighted kappa coefficients ranged from 0.72 to 0.93 for the total scale items. We obtained a scale composed of 13 items divided into five factors (smoking and alcohol belief, food belief, sexual belief, physical and sporting belief, and social belief), which explain 57.97% of the total variance. Conclusions: The scale exhibited suitable psychometric properties in terms of internal consistency, reproducibility and construct validity. It can be used in various areas of research.
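
Since the headline reliability figure here is a Cronbach's alpha of 0.867, a short sketch of the standard alpha formula may help readers who want to reproduce the statistic on their own item data. This is a generic implementation, not the authors' code; the simulated 849 x 13 response matrix is purely illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative only: simulated responses of 849 adolescents to 13 items
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(849, 13))
print(f"alpha = {cronbach_alpha(scores):.3f}")
```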

Relevance:

100.00%

Publisher:

Abstract:

This research aimed to develop a statistical streamflow forecasting model for Marabá - PA, as well as to evaluate the atmospheric dynamic structure associated with the extremes of the hydrological regime of the Tocantins river basin. The hydrological multiple linear regression model used the series of fluviometric and pluviometric observations obtained from the ANA database. The validation tests of the statistical model, with a Nash coefficient above 0.9 and standard errors of 1.5% and 5% in the flood and dry periods, respectively, allow streamflow forecasts for Marabá to be generated with lead times of 2 to 4 (3 to 5) days for the flood (dry) period. Using the compositing technique, considering all years with streamflow records above/well above and below/well below normal, obtained with the percentile methodology, the regional characteristics of precipitation and the atmospheric dynamic structure were investigated for each month (November to April). The composites of the years with streamflow above/well above normal showed that precipitation over the basin was above normal in all months, with the large-scale patterns indicating the configuration associated with the La Niña phenomenon in the Pacific and cooling conditions in the South Atlantic; intensification of both the ascending zonal branch of the Walker cell and the ascending meridional branch of the Hadley cell; intensification of the Bolivian High positioned further east; and negative anomalies of outgoing longwave radiation (ROL) associated with the combined action of the South Atlantic Convergence Zone (ZCAS) and the Intertropical Convergence Zone (ZCIT). Conversely, the composites of the years with streamflow below/well below normal showed the predominance of below-normal precipitation over the whole basin, which was associated with warming conditions (El Niño) over the Pacific, a warm South Atlantic, weakened ascending motions in the Walker and Hadley cells, and the Bolivian High positioned further west, with positive ROL anomalies indicating inhibition of tropical convective activity. Additionally, a quantitative analysis of the socio-economic impacts on the main districts of the city of Marabá revealed that approximately 10 thousand people (5% of the population) are affected by the flooding of the Tocantins river, with flood operation costs above R$ 500,000.00 in the 2005 event.
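
For readers unfamiliar with the model class used here, the following is a minimal sketch of a multiple-linear-regression streamflow forecast scored with the Nash-Sutcliffe efficiency (the "Nash coefficient" above). The predictors (lagged discharge and rainfall) and the synthetic data are hypothetical; the actual model used the ANA fluviometric and pluviometric series.

```python
import numpy as np
from numpy.linalg import lstsq

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical daily series: basin rainfall driving discharge with lag
rng = np.random.default_rng(1)
n = 365
rain = rng.gamma(2.0, 5.0, n)
q = np.convolve(rain, [0.5, 0.3, 0.2], mode="same") + rng.normal(0, 1, n)

# Forecast discharge from discharge lagged 2 and 4 days plus lagged rainfall
X = np.column_stack([np.ones(n - 4), q[2:-2], q[:-4], rain[2:-2]])
y = q[4:]                                    # discharge to forecast
beta, *_ = lstsq(X, y, rcond=None)           # least-squares fit
print(f"NSE = {nash_sutcliffe(y, X @ beta):.3f}")
```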

Relevance:

100.00%

Publisher:

Abstract:

This doctoral thesis is framed within the field of reconfigurable embedded systems, wireless sensor networks for high-performance applications, and distributed computing. The document focuses on the study of processing alternatives for High-Performance Autonomous Distributed Systems (HPADS), as well as their evolution towards high-resolution processing. The study has been carried out both at the platform level and at the level of the processing architectures within the platform, with the objective of optimising aspects as relevant as the energy efficiency, computing performance and fault tolerance of the system. HPADS are feedback systems, normally composed of distributed elements, networked or not, with a certain adaptation capability and enough intelligence to carry out prognosis and/or self-assessment tasks. This class of systems usually forms part of more complex systems called Cyber-Physical Systems (CPSs). CPSs cover an enormous spectrum of applications, ranging from medical applications and manufacturing to aerospace applications, among many others. For the design of this type of system, aspects such as dependability, the definition of models of computation, and the use of methodologies and/or tools that help increase scalability and manage complexity are fundamental. The first part of this doctoral thesis focuses on the study of those platforms in the state of the art whose characteristics make them applicable in the field of CPSs, as well as on the proposal of a new high-performance platform design that better fits the new, more demanding requirements of new applications. This first part includes the description, implementation and validation of the proposed platform, together with conclusions on its usability and limitations. The main objectives for the design of the proposed platform are listed below: • Study the feasibility of using a RAM-based FPGA as the main processor of the platform in terms of energy consumption and computing performance. • Propose power management techniques for every stage of the working profile of the platform. • Propose the inclusion of Dynamic Partial Reconfiguration (DPR) of the FPGA, so that certain parts of the system can be changed at run time without interrupting the rest, and evaluate its applicability in the case of HPADS. The new applications and scenarios faced by CPSs impose new requirements on the bandwidth needed for data processing, acquisition and communication, in addition to a clear increase in the complexity of the algorithms employed. To meet these new requirements, platforms are migrating from traditional 8-bit single-processor systems to hybrid hardware-software systems that include several processors, or several processors and programmable logic.
Among these new architectures, FPGAs and Systems on Chip (SoCs) that include embedded processors and programmable logic provide solutions with very good results in terms of energy consumption, price, computing performance and flexibility. These results are even better when the applications have high computing requirements and the working conditions are very likely to change in real time. The platform proposed in this doctoral thesis is called HiReCookie. The architecture includes a RAM-based FPGA as its only processor, together with a design compatible with the wireless sensor network platform developed at the Centro de Electrónica Industrial of the Universidad Politécnica de Madrid (CEI-UPM), known as Cookies. This FPGA, a Spartan-6 LX150, was, at the start of this work, the best option in terms of power consumption and amount of integrated resources while also allowing dynamic partial reconfiguration. It is important to highlight that although its power figures are the lowest of this family of devices, the instantaneous power consumed is still very high for systems that have to work distributed, autonomously, and in most cases powered by batteries. For this reason, energy-saving strategies must be included in the design to increase the usability and lifetime of the platform. The first strategy implemented consists of dividing the platform into different power islands, so that only those elements that are strictly necessary remain powered while the rest can be completely switched off. In this way it is possible to combine different operating modes and thus greatly optimise energy consumption. Switching the FPGA off to save energy during idle periods implies losing its configuration, since the configuration memory is volatile. To reduce the impact on consumption and on the time required to fully reconfigure the platform once powered up, this work includes a technique for compressing the FPGA configuration file, achieving a reduction of the configuration time and hence of the energy consumed. Although several of the design requirements can be satisfied by the HiReCookie platform design, it is necessary to keep optimising parameters such as energy consumption, fault tolerance and processing capability. This is only possible by exploiting all the possibilities offered by the processing architecture inside the FPGA. Therefore, the second part of this doctoral thesis focuses on the design of a reconfigurable architecture called ARTICo3 (Arquitectura Reconfigurable para el Tratamiento Inteligente de Cómputo, Confiabilidad y Consumo de energía; a reconfigurable architecture for the intelligent management of computation, dependability and energy consumption) to improve these parameters through a dynamic use of resources. ARTICo3 is a bus-based processing architecture for RAM-based FPGAs, prepared to support the dynamic management of the internal resources of the FPGA at run time thanks to the inclusion of dynamic partial reconfiguration.
Thanks to this partial reconfiguration capability, it is possible to adapt the levels of computing performance, energy consumption or fault tolerance in response to the demands of the application, the environment, or internal metrics of the device, by adapting the number of resources assigned to each task. This second part of the thesis details the design of the architecture, its implementation on the HiReCookie platform as well as on another family of FPGAs, and its validation by means of different tests and demonstrations. The main objectives targeted by the architecture are the following: • Propose a methodology based on a multi-threaded approach, such as those proposed by CUDA (Compute Unified Device Architecture) or OpenCL, in which different kernels, or execution units, run on a variable number of hardware accelerators without requiring changes to the application code. • Propose a design and provide an architecture in which the working conditions change dynamically depending either on external parameters or on parameters indicating the state of the platform; these changes in the working point of the architecture are made possible by the dynamic partial reconfiguration of hardware accelerators at run time. • Exploit the possibilities of concurrent processing, even in a bus-based architecture, by optimising burst data transactions towards the accelerators. • Take advantage of the acceleration achieved by purely hardware modules to obtain better energy efficiency. • Be able to change the levels of hardware redundancy dynamically, according to the needs of the system, in real time and without changes to the application code. • Propose an abstraction layer between the application code and the dynamic use of the FPGA resources. FPGA design allows the use of hardware modules specifically created for a particular application, making it possible to obtain much higher performance than with general-purpose architectures. In addition, some FPGAs allow the dynamic partial reconfiguration of certain parts of their logic at run time, which gives the design great flexibility. FPGA manufacturers offer predefined architectures with the possibility of adding predesigned blocks to build systems on chip in a more or less direct way. However, the way these hardware modules are organised within the internal architecture, whether statically or dynamically, and the way information is exchanged between them, greatly influence the computing performance and energy efficiency of the system. Likewise, the ability to load hardware modules on demand makes it possible to add redundant blocks that raise the fault-tolerance level of the system. However, the complexity involved in designing dedicated hardware blocks should not be underestimated: designing a hardware block means designing not only the block itself, but also its interfaces and, in some cases, the software drivers to manage it. Moreover, as more blocks are added, the design space becomes more complex and its programming more difficult.
Although most manufacturers offer predefined interfaces, commercial IPs (Intellectual Property cores) and templates to help with system design, in order to exploit the real possibilities of the system it is necessary to build architectures on top of the established ones to facilitate the use of parallelism and redundancy, and to provide an environment that supports dynamic resource management. To provide this kind of support, ARTICo3 works within a solution space formed by three fundamental axes: computation, energy consumption and dependability. Each working point is thus obtained as a trade-off solution among these three parameters. By means of dynamic partial reconfiguration and an improvement in data transmission between main memory and the accelerators, a variable number of resources can be devoted over time to each task, which makes the internal resources of the FPGA virtually unlimited. This variation over time of the number of resources per task can be used either to increase the level of parallelism, and hence acceleration, or to increase redundancy, and therefore the fault-tolerance level. At the same time, using an optimal number of resources for a task improves energy consumption, since either the instantaneous power consumed or the processing time can be reduced. In order to keep complexity within reasonable limits, it is important that the changes made in the hardware are completely transparent to the application code. In this respect, different levels of transparency are included: • Scalability transparency: the resources used by a given task can be modified without any change to the application code. • Performance transparency: the system will increase its performance when the workload increases, without changes to the application code. • Replication transparency: multiple instances of the same module can be used either to add redundancy or to increase processing capability, all without changing the application code. • Location transparency: the physical position of the hardware modules is arbitrary as far as their addressing from the application code is concerned. • Failure transparency: if a hardware module fails, then thanks to redundancy the application code will directly obtain the correct result. • Concurrency transparency: whether a task is carried out by more or fewer blocks is transparent to the code that invokes it. This doctoral thesis therefore contributes along two different lines: first, with the design of the HiReCookie platform, and second, with the design of the ARTICo3 architecture. The main contributions of this thesis are summarised below. • Architecture of the HiReCookie platform, including: o Compatibility with the Cookies platform in order to increase its capabilities. o Division of the architecture into different power islands. o Implementation of the various low-power modes and wake-up policies of the node. o Creation of a compressed FPGA configuration file to reduce the time and energy consumption of the initial configuration.
• Design of the ARTICo3 reconfigurable architecture for RAM-based FPGAs, including: o A model of computation and execution modes inspired by the CUDA model but based on reconfigurable hardware, with a variable number of thread blocks per execution unit. o A structure to optimise burst data transactions, providing cascaded or parallel data to the different modules, including a majority voting process and reduction operations. o An abstraction layer between the main processor running the application code and the resources assigned to the different tasks. o An architecture for the reconfigurable hardware modules that keeps the system scalable, adding an interface for new functionality through simple access to an internal RAM memory. o Online characterisation of the tasks to provide information to a resource manager so as to improve operation in terms of energy and processing, also when operating across different fault-tolerance levels. The document is divided into two main parts comprising a total of five chapters. First, after motivating the need for new platforms to cover the new applications, the design of the HiReCookie platform is detailed, including its parts, the possibilities for reducing energy consumption, use cases of the platform, and design validation tests. The second part of the document describes the reconfigurable architecture, its implementation on several FPGAs, and validation tests in terms of computing performance and energy consumption, including how these aspects are affected by the chosen fault-tolerance level. The chapters of the document are as follows: Chapter 1 analyses the main objectives, motivation and theoretical background needed to follow the rest of the document. Chapter 2 focuses on the design of the HiReCookie platform and its possibilities for reducing energy consumption. Chapter 3 describes the ARTICo3 reconfigurable architecture. Chapter 4 focuses on the validation tests of the architecture, using the HiReCookie platform for most of the tests; an application example is shown to analyse the operation of the architecture. Chapter 5 concludes this doctoral thesis with the conclusions obtained, the original contributions of the work, its results, and future lines of research. ABSTRACT This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for high-performance autonomous distributed systems (HPADS) as well as their evolution towards High Performance Computing (HPC) systems. The approach of the study is focused on both platform and processor levels to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are considered feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as military, health care, manufacturing, aerospace, etc. For the design of HPADS, high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools to support scalability and complexity management are required.
The first part of the document studies the different possibilities at platform design level in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements. The main objectives targeted by this platform design are the following: • Study the feasibility of using SRAM-based FPGAs as the main processor of the platform in terms of energy consumption and performance for highly demanding applications. • Analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform. • Provide a solution with dynamic partial and wireless remote HW reconfiguration (DPR) to be able to change certain parts of the FPGA design at run time and on demand without interrupting the rest of the system. • Demonstrate the applicability of the platform in different test-bench applications. In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-critical systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth or in the complexity of the algorithms used are provoking the migration of platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instantaneous power consumption but higher energy efficiency. Within these solutions, FPGAs and Systems on Chip including FPGA fabric and dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required. The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit. The FPGA selected, the Xilinx Spartan-6 LX150, was at the beginning of this work the best choice in terms of amount of resources and power. Although its power levels are the lowest of this kind of device, they can still be very high for distributed systems that normally work powered by batteries. For that reason, it is necessary to include different energy-saving possibilities to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands so that only those parts of the system that are strictly needed are powered on, while the rest of the islands can be completely switched off. This allows a combination of different low-power modes to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not alive at power-up. Therefore, recovering the system from a switched-off state requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce time and energy during the initial configuration process. Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance.
This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the thesis details the design of the so-called ARTICo3 FPGA architecture to enhance the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable, bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs. The architecture provides support for dynamic resource management in real time. In this way, by using DPR, it is possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks. Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows: • Provide a method based on a multithread approach, such as the CUDA (Compute Unified Device Architecture) or OpenCL kernel execution models, where kernels are executed on a variable number of HW accelerators without requiring application code changes. • Provide an architecture to dynamically adapt working points according to either self-measured or external parameters in terms of energy consumption, fault tolerance and computing performance. Taking advantage of DPR capabilities, the architecture must provide support for a dynamic use of resources in real time. • Exploit concurrent processing capabilities in a standard bus-based system by optimizing data transactions to and from HW accelerators. • Measure the advantage of HW acceleration as a technique to boost performance, improve processing times and save energy by reducing active times in distributed embedded systems. • Dynamically change the levels of HW redundancy to adapt fault tolerance in real time. • Provide HW abstraction from SW application design. FPGAs offer the possibility of designing specific HW blocks for every required task to optimise performance, and some of them include DPR capabilities. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption. At the same time, fault tolerance and security techniques can also be included dynamically using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible. It does not only consist of the HW description, but also the design of drivers and interfaces with the rest of the system, while the design space becomes wider and more complex to define and program. Even though the tools provided by the majority of manufacturers already include predefined bus interfaces, commercial IPs, and templates to ease application prototyping, it is necessary to improve these capabilities. By adding new architectures on top of them, it is possible to take advantage of parallelization and HW redundancy while providing a framework to ease the use of dynamic resource management. ARTICo3 works within a solution space where working points change at run time in a 3D space defined by three different axes: Computation, Consumption, and Fault Tolerance. Therefore, every working point is found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of available resources for any application is virtually unlimited.
Taking advantage of DPR capabilities and a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically changing number of resources to a given task in order to either boost computing speed or add HW redundancy and a voting process to increase fault-tolerance levels. At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instant power or computing time. In order to keep complexity levels under certain limits, it is important that HW changes are transparent to the application code. Therefore, different levels of transparency are targeted by the system: • Scalability transparency: a task must be able to expand its resources without changing the system structure or application algorithms. • Performance transparency: the system must reconfigure itself as the load changes. • Replication transparency: multiple instances of the same task are loaded to increase reliability and performance. • Location transparency: resources are accessed with no knowledge of their location by the application code. • Failure transparency: a task must be completed despite a failure in some components. • Concurrency transparency: different tasks will work in a concurrent way transparent to the application code. Therefore, as can be seen, the Thesis contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below: • Architecture of the HiReCookie platform, including: o Compatibility of the processing layer for high-performance applications with the Cookies Wireless Sensor Network platform for fast prototyping and implementation. o A division of the architecture into power islands. o All the different low-power modes. o The creation of the partial-initial bitstream together with the wake-up policies of the node. • The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3, including: o A model of computation and execution modes inspired by CUDA but based on reconfigurable HW with a dynamic number of thread blocks per kernel. o A structure to optimise burst data transactions providing coalesced or parallel data to HW accelerators, a parallel voting process and a reduction operation. o The abstraction provided to the host processor with respect to the operation of the kernels in terms of the number of replicas, modes of operation, location in the reconfigurable area and addressing. o The architecture of the modules representing the thread blocks, making the system scalable by adding functional units with only an additional access to a BRAM port. o The online characterisation of the kernels to provide information to a scheduler or resource manager in terms of energy consumption and processing time when changing among different fault-tolerance levels, as well as whether a kernel is expected to work in the memory-bounded or computing-bounded regions. The document of the Thesis is divided into two main parts with a total of five chapters. First, after motivating the need for new platforms to cover new, more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications. Therefore, the second part describes the architecture inside the FPGA, called ARTICo3, proposed in this PhD Thesis.
The architecture and its implementation are tested in terms of energy consumption and computing performance, showing different possibilities to improve fault tolerance and how these impact energy and processing time. Chapter 1 presents the main goals of this PhD Thesis and the technology background required to follow the rest of the document. Chapter 2 details the design of the FPGA-based platform HiReCookie. Chapter 3 describes the ARTICo3 architecture. Chapter 4 is focused on the validation tests of the ARTICo3 architecture. An application for proof of concept is explained where typical kernels related to image processing and encryption algorithms are used, and further experimental analyses are performed using these kernels. Chapter 5 concludes the document with the conclusions drawn, comments on the contributions of the work, and some possible future lines of work.
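
The working-point idea at the heart of ARTICo3 (trading reconfigurable slots between parallelism and redundancy) can be illustrated with a toy model. The sketch below is not the ARTICo3 scheduler; the WorkingPoint class, slot counts and power figures are hypothetical, for illustration only.

```python
from dataclasses import dataclass

@dataclass
class WorkingPoint:
    """Toy model of an ARTICo3-style trade-off (illustrative numbers only)."""
    slots: int       # reconfigurable slots assigned to the kernel
    redundancy: int  # 1 = simplex, 2 = DMR (detection), 3 = TMR (masking)

    def throughput(self, per_slot: float = 1.0) -> float:
        # Replicated copies vote on the same data, so only independent
        # groups of `redundancy` slots add parallelism.
        return (self.slots // self.redundancy) * per_slot

    def power(self, per_slot: float = 0.2, base: float = 0.5) -> float:
        return base + self.slots * per_slot  # all active slots draw power

# Sweep candidate working points along the computation/consumption/
# fault-tolerance axes and print the resulting trade-off
for slots in (2, 4, 8):
    for red in (1, 3):
        wp = WorkingPoint(slots, red)
        print(wp, f"throughput={wp.throughput():.1f}",
              f"power={wp.power():.1f}W")
```

Replicating a kernel three ways buys fault masking at the cost of two-thirds of the attainable parallelism, which is exactly the Computation-Consumption-Fault Tolerance trade-off described above.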

Relevance:

100.00%

Publisher:

Abstract:

Four non-destructive tests for determining the length of fatigue cracks within the solder joints of a 2512 surface mount resistor are investigated. The sensitivity of the tests is obtained using finite element analysis with some experimental validation. Three of the tests are mechanically based and one is thermally based. The mechanical tests all operate by applying different loads to the PCB and monitoring the strain response at the top of the resistor. The thermal test operates by applying a heat source underneath the PCB and monitoring the temperature response at the top of the resistor. From the modelling work done, two of these tests have been shown to be sensitive to cracks. Some experimental results are presented, but further work is required to fully validate the simulation results.

Relevance:

100.00%

Publisher:

Abstract:

An atmospheric combustion apparatus was designed through several iterations for Bucknell University's combustion laboratory. The final design required extensive fine-tuning of the fuel and air systems and repeated tests to arrive at a satisfactory procedure for transferring from gaseous to liquid fuel operation. Measurements of exhaust emissions were obtained during tests of gaseous methane and liquid heptane operation in order to validate the functionality of the combustion apparatus, the fuel transition procedure, and the emissions analyzer systems. The emission concentrations of CO, CO2, NOx, O2, SO2, and unburned hydrocarbons from a multi-analyzer and an HFID analyzer were obtained for a range of equivalence ratios. The results verify the potential for future alternative fuel tests and illuminate necessary alterations for further liquid fuel studies.
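
Because the emissions were characterised over a range of equivalence ratios, a small sketch of how an equivalence ratio is computed from measured flows may be useful. The stoichiometric air-fuel ratios are standard approximate textbook values; the flow readings are hypothetical.

```python
def equivalence_ratio(m_fuel: float, m_air: float, afr_stoich: float) -> float:
    """phi = (F/A)_actual / (F/A)_stoichiometric."""
    return (m_fuel / m_air) * afr_stoich

# Approximate stoichiometric air-fuel mass ratios for the two fuels tested
AFR_METHANE = 17.2   # CH4
AFR_HEPTANE = 15.1   # C7H16

# Hypothetical flow readings in g/s, for illustration only
print(f"phi = {equivalence_ratio(1.0, 15.0, AFR_HEPTANE):.2f}")  # slightly rich
```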

Relevance:

100.00%

Publisher:

Abstract:

Recent research indicates that social identity theory offers an important lens to improve our understanding of founders as enterprising individuals, the venture creation process, and its outcomes. Yet further advances are hindered by the lack of a valid scale that could be used to measure founders' social identities - a problem that is particularly severe because social identity is a multidimensional construct that needs to be assessed properly so that organizational phenomena can be understood. Drawing on social identity theory and the systematic classification of founders' social identities (Darwinians, Communitarians, Missionaries) provided in Fauchart and Gruber (2011), this study develops and empirically validates a 12-item scale that allows scholars to capture the multidimensional nature of the social identities of entrepreneurs. Our validation tests are unusually comprehensive and solid, as we validate the developed scale not only in the Alpine region (where it was originally conceived), but also in 12 additional countries and the Anglo-American region. Scholars can use the scale to identify founders' social identities and to relate these identities to micro-level processes and outcomes in new firm creation. Scholars may also link founders' social identities to other levels of analysis such as industries (e.g., industry evolution) or whole economies (e.g., economic growth).

Relevance:

100.00%

Publisher:

Abstract:

Background: The proportion of older individuals in the driving population is predicted to increase in the next 50 years. This has important implications for driving safety, as abilities which are important for safe driving, such as vision (which accounts for the majority of the sensory input required for driving), processing ability and cognition, have been shown to decline with age. The current methods employed for screening older drivers upon re-licensure are also vision based. This study, which investigated social, behavioural and professional aspects involved with older drivers, aimed to determine: (i) if the current visual standards in place for testing upon re-licensure are effective in reducing the older driver fatality rate in Australia; (ii) if the recommended visual standards are actually implemented as part of the testing procedures by Australian optometrists; and (iii) if there are other non-standardised tests which may be better at predicting the on-road incident risk (including near misses and minor incidents) in older drivers than those tests recommended in the standards. Methods: For the first phase of the study, state-based age- and gender-stratified numbers of older driver fatalities for 2000-2003 were obtained from the Australian Transportation Safety Bureau database. Poisson regression analyses of fatality rates were considered by renewal frequency and jurisdiction (as separate models), adjusting for the possible confounding variables of age, gender and year. For the second phase, all practising optometrists in Australia were surveyed on the vision tests they conduct in consultations relating to driving and their knowledge of vision requirements for older drivers. Finally, for the third phase of the study, to investigate determinants of on-road incident risk, a stratified random sample of 600 Brisbane residents aged 60 years and older was selected and invited to participate using an introductory letter explaining the project requirements. In order to capture the number and type of road incidents which occurred for each participant over 12 months (including near misses and minor incidents), an important component of the prospective research study was the development and validation of a driving diary. The diary was a tool in which incidents could be logged at the time (or very close to the time) they occurred; thus, in comparison with relying on participant memory over time, recall bias of incident occurrence was minimised. Associations between all visual tests, cognition and scores obtained for non-standard functional tests with retrospective and prospective incident occurrence were investigated. Results: In the first phase, drivers aged 60-69 years had a 33% lower fatality risk (Rate Ratio [RR] = 0.75, 95% CI 0.32-1.77) in states with vision testing upon re-licensure compared with states with no vision testing upon re-licensure; however, because the CIs are wide, crossing 1.00, this result should be regarded with caution. Furthermore, overall fatality rates and fatality rates for those aged 70 years and older (RR = 1.17, CI 0.64-2.13) did not differ between states with and without licence renewal procedures, indicating no apparent benefit of vision testing legislation.
For the second phase of the study, nearly all optometrists measured visual acuity (VA) as part of a vision assessment for re-licensing; however, 20% of optometrists did not perform any visual field (VF) testing and only 20% routinely performed automated VF on older drivers, despite the standards for licensing advocating automated VF as part of the vision standard. This demonstrates the need for more effective communication between the policy makers and those responsible for carrying out the standards. It may also indicate that the overall higher driver fatality rate in jurisdictions with vision testing requirements arises because the tests recommended by the standards are only partially conducted by optometrists. Hence a standardised protocol for the screening of older drivers for re-licensure must be established across the nation. The opinions of Australian optometrists with regard to the responsibility of reporting older drivers who fail to meet the licensing standards highlighted the conflict between maintaining patient confidentiality and upholding public safety. Mandatory reporting requirements for those drivers who fail to reach the standards necessary for driving would minimise potential conflict between the patient and their practitioner, and help maintain patient trust and goodwill. The final phase of the PhD program investigated the efficacy of vision, functional and cognitive tests to discriminate between at-risk and safe older drivers. Nearly 80% of the participants experienced an incident of some form over the prospective 12 months, with the total incident rate being 4.65/10,000 km. Sixty-three percent reported having a near miss and 28% had a minor incident. The results from the prospective diary study indicate that the current vision screening tests (VA and VF) used for re-licensure do not accurately predict which older drivers are at increased odds of having an on-road incident. However, the variation in visual measurements of the cohort was narrow, which also affected the results seen with the visual function questionnaires; hence a larger cohort with greater variability should be considered for a future study. A slightly lower cognitive level (as measured with the Mini-Mental State Examination [MMSE]) showed an association with incident involvement, as did slower reaction time (RT); however, the Useful Field of View (UFOV) provided the most compelling results of the study. Cut-off values of UFOV processing (>23.3 ms), divided attention (>113 ms), selective attention (>258 ms) and overall score (moderate/high/very high risk) were effective in determining older drivers at increased odds of having any on-road incident, including minor incidents. Discussion: The results have shown that for the 60-69 year age group, there is a potential benefit in testing vision upon licence renewal. However, overall fatality rates and fatality rates for those aged 70 years and older indicated no benefit of vision testing legislation, suggesting a need for the inclusion of screening tests which better predict on-road incidents. Although VA is routinely performed by Australian optometrists on older drivers renewing their licence, VF is not. Therefore there is a need for a protocol to be developed and administered which would result in standardised methods conducted throughout the nation for the screening of older drivers upon re-licensure. Communication between the community, policy makers and those conducting the protocol should be maximised.
By implementing a standardised screening protocol which incorporates a level of mandatory reporting by the practitioner, the ethical dilemma of breaching patient confidentiality would also be resolved. The tests included in this screening protocol, however, cannot solely be ones which have been implemented in the past. In this investigation, RT, the MMSE and the UFOV were shown to be better determinants of on-road incidents in older drivers than VA and VF; however, as previously mentioned, there was a lack of variability in visual status within the cohort. Nevertheless, this investigation recommends that, subject to appropriate sensitivity and specificity being demonstrated in the future using a cohort with wider variation in vision, functional performance and cognition, these tests of cognition and information processing should be added to the current protocol for the screening of older drivers, which may be conducted at licensing centres across the nation.
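
The fatality-rate comparison in phase one is a Poisson regression with an exposure offset, where the rate ratio (RR) is the exponential of the fitted coefficient. A minimal sketch follows, using statsmodels and invented counts; the real analysis used the age- and gender-stratified fatality data from the Australian Transportation Safety Bureau.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical fatality counts by jurisdiction (illustrative data only)
df = pd.DataFrame({
    "deaths":      [12, 15, 9, 20, 14, 11],
    "drivers":     [90_000, 120_000, 70_000, 110_000, 95_000, 80_000],
    "vision_test": [1, 1, 1, 0, 0, 0],  # 1 = vision testing on re-licensure
})

X = sm.add_constant(df[["vision_test"]])
model = sm.GLM(df["deaths"], X, family=sm.families.Poisson(),
               offset=np.log(df["drivers"]))      # log-exposure offset
res = model.fit()
rr = np.exp(res.params["vision_test"])            # rate ratio
lo, hi = np.exp(res.conf_int().loc["vision_test"])
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```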

Relevance:

100.00%

Publisher:

Abstract:

Cold-formed steel members are extensively used in the building construction industry, especially in residential, commercial and industrial buildings. In recent times, fire safety has become important in structural design due to increased fire damage to properties and loss of lives. However, past research into the fire performance of cold-formed steel members has been limited, and was confined to compression members. Therefore a research project was undertaken to investigate the structural behaviour of compact cold-formed steel lipped channel beams subject to inelastic local buckling and yielding, and lateral-torsional buckling effects, under simulated fire conditions, and the associated section and member moment capacities. In the first phase of this research, an experimental study based on tensile coupon tests was undertaken to obtain the mechanical properties of elastic modulus and yield strength and the stress-strain relationship of cold-formed steels at uniform ambient and elevated temperatures up to 700°C. The mechanical properties deteriorated with increasing temperature and are likely to reduce the strength of cold-formed beams under fire conditions. Predictive equations were developed for yield strength and elastic modulus reduction factors, while a modification was proposed for the stress-strain model at elevated temperatures. These results were used in the numerical modelling phases investigating the section and member moment capacities. The second phase of this research involved the development and validation of two finite element models to simulate the behaviour of compact cold-formed steel lipped channel beams subject to local buckling and yielding, and lateral-torsional buckling effects. Both models were first validated for elastic buckling. Lateral-torsional buckling tests of compact lipped channel beams were conducted at ambient temperature in order to validate the finite element model in predicting the non-linear ultimate strength behaviour. The results from this experimental study did not agree well with those from the developed experimental finite element model due to some unavoidable problems with testing. However, the study highlighted the importance of the magnitude and direction of the initial geometric imperfection as well as the failure direction, and thus led to further enhancement of the finite element model. The finite element model for lateral-torsional buckling was then validated using the available experimental and numerical ultimate moment capacity results from past research. The third phase, based on the validated finite element models, included detailed parametric studies of the section and member moment capacities of compact lipped channel beams at ambient temperature, and provided the basis for similar studies at elevated temperatures. The results showed the existence of inelastic reserve capacity for compact cold-formed steel beams at ambient temperature. However, full plastic capacity was not achieved by the mono-symmetric cold-formed steel beams. Suitable recommendations were made in relation to the accuracy and suitability of current design rules for section moment capacity. Comparison of member capacity results from finite element analyses with current design rules showed that they do not give accurate predictions of lateral-torsional buckling capacities at ambient temperature, and hence new design rules were developed.
The fourth phase of this research investigated the section and member moment capacities of compact lipped channel beams at uniform elevated temperatures based on detailed parametric studies using the validated finite element models. The results showed the existence of inelastic reserve capacity at elevated temperatures. Suitable recommendations were made in relation to the accuracy and suitability of current design rules for section moment capacity in fire design codes and ambient temperature design codes, as well as those proposed by other researchers. The results showed that lateral-torsional buckling capacities depend on the ratio of the yield strength and elastic modulus reduction factors and on the level of non-linearity in the stress-strain curves at elevated temperatures, in addition to the temperature itself. Current design rules do not include the effects of the non-linear stress-strain relationship and therefore their predictions were found to be inaccurate. Therefore a new design rule that uses a non-linearity factor, defined as the ratio of the limit of proportionality to the yield stress at a given temperature, was developed for cold-formed steel beams subject to lateral-torsional buckling at elevated temperatures. This thesis presents the details and results of the experimental and numerical studies conducted in this research, including a comparison of results with predictions using available design rules. It also presents the recommendations made regarding the accuracy of current design rules, as well as the newly developed design rules for cold-formed steel beams both at ambient and elevated temperatures.
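
To make the role of the reduction factors concrete: the thesis develops predictive equations for yield strength and elastic modulus reduction factors, which are then applied roughly as sketched below. The tabulated factors and room-temperature properties in this sketch are placeholders, not the thesis equations.

```python
import numpy as np

# Hypothetical reduction factors vs temperature (°C); the thesis derives
# its own predictive equations -- these points are placeholders.
T       = np.array([20, 200, 400, 500, 600, 700])
k_yield = np.array([1.00, 0.95, 0.65, 0.45, 0.25, 0.10])
k_E     = np.array([1.00, 0.90, 0.75, 0.60, 0.40, 0.20])

def properties_at(temp_c: float, fy20: float = 450.0, e20: float = 200e3):
    """Interpolate yield strength (MPa) and elastic modulus (MPa) at temp_c."""
    fy = fy20 * np.interp(temp_c, T, k_yield)
    e = e20 * np.interp(temp_c, T, k_E)
    # The fy/E ratio is the quantity the abstract notes buckling capacity
    # depends on, alongside stress-strain non-linearity.
    return fy, e, fy / e

print(properties_at(500.0))
```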

Relevance:

100.00%

Publisher:

Abstract:

In recent years, problems resulting from unsustainable subdivision development have become significant in the Bangkok Metropolitan Region (BMR), Thailand. A number of government departments and agencies have tried to eliminate these problems by introducing rating tools to encourage higher sustainability levels of subdivision development in BMR, such as the Environmental Impact Assessment Monitoring Award (EIA-MA) and Thai's Rating for Energy and Environmental Sustainability of New construction and major renovation (TREES-NC). However, the EIA-MA includes neighbourhood designs in its assessment criteria but applies to large projects only, while TREES-NC focuses only on large-scale buildings such as condominiums and office buildings and is not specific to subdivision neighbourhood designs. Recently, a new rating tool named "Rating for Subdivision Neighbourhood Sustainability Design (RSNSD)" has been developed, but its validation is still required. This paper aims to validate the new rating tool for subdivision neighbourhood design in BMR. The RSNSD has been validated by applying it to eight case-study subdivisions. The results of the RSNSD, generated from data collected by surveying the subdivisions, are compared with the existing results from the EIA-MA. The selected cases include one "Excellent Award", two "Very Good Award", and five non-rated subdivision developments. This paper aims to establish the credibility of the RSNSD before introducing it to real subdivision development practice. The RSNSD could be useful for encouraging a higher level of sustainability in subdivision design, and thus preventing problems in further subdivision development in BMR.

Relevance:

100.00%

Publisher:

Abstract:

Validation is an important issue in the development and application of Bayesian Belief Network (BBN) models, especially when the outcome of the model cannot be directly observed. Despite this, few frameworks for validating BBNs have been proposed, and fewer have been applied to substantive real-world problems. In this paper we adopt the approach of Pitchforth and Mengersen (2013), which includes nine validation tests, each focusing on the structure, discretisation, parameterisation or behaviour of the BBNs included in the case study. We describe the process and results of implementing this validation framework on a model of a real airport terminal system, with particular reference to its effectiveness in producing a valid model that can be used and understood by operational decision makers. In applying the proposed validation framework we demonstrate the overall validity of the Inbound Passenger Facilitation Model, as well as the effectiveness of the validity framework itself.
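
One of the tests in the Pitchforth and Mengersen framework is behaviour validation: checking that the model's outputs move in the expected direction as inputs change. The sketch below shows the idea on a toy airport-style network; it assumes the pgmpy library is installed (class names follow recent pgmpy releases), and the structure and probability tables are invented, far simpler than the Inbound Passenger Facilitation Model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy structure: queue length depends on arrivals and staffed desks
model = BayesianNetwork([("Arrivals", "QueueLong"),
                         ("StaffedDesks", "QueueLong")])
cpd_a = TabularCPD("Arrivals", 2, [[0.7], [0.3]])        # 0 = low, 1 = high
cpd_s = TabularCPD("StaffedDesks", 2, [[0.5], [0.5]])    # 0 = few, 1 = many
cpd_q = TabularCPD("QueueLong", 2,
                   [[0.7, 0.9, 0.2, 0.6],    # P(QueueLong = no)
                    [0.3, 0.1, 0.8, 0.4]],   # P(QueueLong = yes)
                   evidence=["Arrivals", "StaffedDesks"],
                   evidence_card=[2, 2])
model.add_cpds(cpd_a, cpd_s, cpd_q)
assert model.check_model()

infer = VariableElimination(model)
low = infer.query(["QueueLong"], evidence={"Arrivals": 0}).values[1]
high = infer.query(["QueueLong"], evidence={"Arrivals": 1}).values[1]
assert high > low  # behaviour test: more arrivals -> longer queues
print(f"P(long queue): low arrivals {low:.2f}, high arrivals {high:.2f}")
```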

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study was to examine the reliability and validity of field tests for assessing physical function in mid-aged and young-old people (55-70 y). Tests were selected that required minimal space and equipment and could be implemented in multiple field settings, such as a general practitioner's office. Nineteen participants completed two field and one laboratory testing sessions. Intra-class correlations showed good reliability for the tests of upper body strength (lift and reach, R=.66), lower body strength (sit to stand, R=.80) and functional capacity (Canadian Step Test, R=.92), but not for leg power (single timed chair rise, R=.28). There was also good reliability for the balance test during 3 stances: parallel (94.7% agreement), semi-tandem (73.7%), and tandem (52.6%). Comparison of field test results with objective laboratory measures found good validity for the sit to stand (cf 1RM leg press, Pearson r=.68, p<.05) and for the step test (cf PWC140, r=-.60, p<.001), but not for the lift and reach (cf 1RM bench press, r=.43, p>.05), balance (r=-.13, -.18, .23) and rate of force development tests (r=-.28). It was concluded that the lower body strength and cardiovascular function tests are appropriate for use in field settings with mid-aged and young-old adults.
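
The test-retest reliabilities reported above are intraclass correlations across the two field sessions. As a reference for the computation, here is a generic ICC(2,1) (two-way random effects, absolute agreement) implementation; the simulated sit-to-stand data are illustrative, and since the paper does not state which ICC form was used, treat the choice of ICC(2,1) as an assumption.

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    x has shape (n_subjects, k_sessions)."""
    n, k = x.shape
    mean_rows = x.mean(axis=1)   # per-subject means
    mean_cols = x.mean(axis=0)   # per-session means
    grand = x.mean()
    msr = k * np.sum((mean_rows - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((mean_cols - grand) ** 2) / (k - 1)   # sessions MS
    sse = np.sum((x - mean_rows[:, None] - mean_cols[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical sit-to-stand scores from two field sessions (19 participants)
rng = np.random.default_rng(2)
true = rng.normal(12, 3, 19)
sessions = np.column_stack([true + rng.normal(0, 1.5, 19) for _ in range(2)])
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")
```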

Relevance:

100.00%

Publisher:

Abstract:

Multi-touch interfaces across a wide range of hardware platforms are becoming pervasive, due to the adoption of smartphones and tablets in both the consumer and corporate marketplace. This paper proposes a human-machine interface to interact with unmanned aerial systems, based on the philosophy of multi-touch, hardware-independent, high-level interaction with multiple systems simultaneously. Our approach incorporates emerging development methods for multi-touch interfaces on mobile platforms. A framework is defined for supporting multiple protocols. An open source solution is presented that demonstrates: an architecture supporting different communications hardware; an extensible approach for supporting multiple protocols; and the ability to monitor and interact with multiple UAVs from multiple clients simultaneously. Validation tests were conducted to assess the performance, scalability and impact on packet latency under different client configurations.
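
An "extensible approach for supporting multiple protocols" is essentially an adapter pattern between protocol-specific codecs and a protocol-agnostic ground station core. The sketch below illustrates that pattern only; the class names and the text-based frame format are hypothetical and unrelated to the published open-source solution.

```python
from abc import ABC, abstractmethod

class ProtocolAdapter(ABC):
    """One adapter per UAV protocol; the ground station core stays agnostic."""
    @abstractmethod
    def decode(self, frame: bytes) -> dict: ...
    @abstractmethod
    def encode(self, command: dict) -> bytes: ...

class MavlinkLikeAdapter(ProtocolAdapter):
    # Hypothetical frame layout for illustration: "id,lat,lon" as text
    def decode(self, frame: bytes) -> dict:
        uav_id, lat, lon = frame.decode().split(",")
        return {"uav": uav_id, "lat": float(lat), "lon": float(lon)}
    def encode(self, command: dict) -> bytes:
        return f"{command['uav']},{command['cmd']}".encode()

class GroundStation:
    def __init__(self) -> None:
        self.adapters: dict[str, ProtocolAdapter] = {}
    def register(self, name: str, adapter: ProtocolAdapter) -> None:
        self.adapters[name] = adapter        # new protocols plug in here
    def on_frame(self, name: str, frame: bytes) -> dict:
        return self.adapters[name].decode(frame)

gcs = GroundStation()
gcs.register("mavlink-like", MavlinkLikeAdapter())
print(gcs.on_frame("mavlink-like", b"uav1,-27.47,153.03"))
```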

Relevance:

100.00%

Publisher:

Abstract:

Integer ambiguity resolution is an indispensable procedure for all high-precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarized as follows: (1) The ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces. (2) The probability basis of these two popular tests is analysed for the first time: the difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability. (3) The limitations of the two tests are identified for the first time: both may underestimate the failure risk if the model is not strong enough or if the float ambiguities fall in particular regions. (4) Extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium-baseline kinematic model the difference test outperforms the ratio test; this superiority is independent of frequency number, observation noise and satellite geometry, but depends on the success rate and the failure rate tolerance. A smaller failure rate tolerance leads to a larger performance discrepancy.
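
For concreteness, both tests can be stated over the quadratic-form residuals q1 and q2 of the best and second-best integer candidates: the ratio test accepts the fixed solution when q2/q1 exceeds a threshold, the difference test when q2 - q1 does. The sketch below uses a brute-force candidate search and illustrative thresholds; production code would use the LAMBDA method for the search and fixed-failure-rate tables for the thresholds.

```python
import numpy as np

def ambiguity_tests(a_float: np.ndarray, Q: np.ndarray,
                    mu_ratio: float = 2.0, delta: float = 10.0) -> dict:
    """Ratio and difference tests on the two best integer candidates.

    q(z) = (a - z)^T Q^{-1} (a - z) is the squared residual norm of the
    float solution a against integer candidate z, in the metric of the
    ambiguity covariance Q. Thresholds here are illustrative only.
    """
    Qinv = np.linalg.inv(Q)
    # Toy search over the 2^n integer corners around the float solution;
    # real implementations use the LAMBDA decorrelation and search.
    base = np.floor(a_float).astype(int)
    cands = [base + np.array(d) for d in np.ndindex(*(2,) * len(a_float))]
    q = sorted(float((a_float - z) @ Qinv @ (a_float - z)) for z in cands)
    q1, q2 = q[0], q[1]
    return {"ratio_accept": q2 / q1 >= mu_ratio,   # overlap of ellipsoids
            "diff_accept": q2 - q1 >= delta}       # overlap of half-spaces

a_hat = np.array([3.1, -1.9, 5.4])        # hypothetical float ambiguities
Q = np.diag([0.01, 0.02, 0.015])          # hypothetical covariance
print(ambiguity_tests(a_hat, Q))
```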