949 results for Embedded processor
Abstract:
Healthcare organizations often benefit from information technologies as well as embedded decision support systems, which improve the quality of services and help prevent complications and adverse events. In Centro Materno Infantil do Norte (CMIN), the maternal and perinatal care unit of Centro Hospitalar do Porto (CHP), an intelligent pre-triage system is implemented that prioritizes patients in need of gynaecology and obstetrics care into two classes: urgent and consultation. The system is designed to avoid emergency-department problems such as incorrect triage outcomes and long triage waiting times. The current study aims to improve the triage system, and thereby optimize patient workflow through the emergency room, by predicting the waiting time between a patient's triage and their medical admission. For this purpose, data mining (DM) techniques are applied to selected information provided by the information technologies implemented in CMIN. The DM models achieved accuracy values of approximately 94% with the target discretized into five ranges, results that not only provide reliable prediction models but also identify the variables that directly drive triage waiting times.
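A minimal sketch of such a prediction experiment, assuming scikit-learn as the toolkit and hypothetical variable names and bin edges (the abstract specifies none of these):

```python
# Hedged sketch of the kind of DM experiment described; feature names,
# bin edges and the chosen learner are illustrative assumptions, not
# details taken from the CMIN study.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical export from the information technologies at CMIN.
df = pd.read_csv("triage_records.csv")

# Discretize the triage-to-admission waiting time (minutes) into five
# ranges, mirroring the study's five-range target.
bins = [0, 15, 30, 60, 120, float("inf")]  # assumed cut points
labels = ["<15", "15-30", "30-60", "60-120", ">120"]
df["wait_range"] = pd.cut(df["waiting_minutes"], bins=bins, labels=labels)

# Assumed predictor variables; one-hot encode any categorical ones.
X = pd.get_dummies(df[["age", "triage_class", "arrival_hour", "weekday"]])
y = df["wait_range"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=10).mean())

# Feature importances point at the variables that drive waiting times.
model.fit(X, y)
print(dict(zip(X.columns, model.feature_importances_)))
```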
Abstract:
Hospitals have multiple data sources, such as embedded systems, monitors and sensors. The amount of data available is increasing, and the information is used not only to care for the patient but also to support decision processes. Intelligent environments have been adopted in health care institutions for their ability to provide useful information to health professionals, whether to help establish a prognosis or to understand a patient's condition. From this concept arises this intelligent system to track patient condition (e.g. critical events) in health care. The system has the great advantage of being adaptable to the environment and to user needs. It focuses on identifying critical events in streaming data (e.g. vital signs and ventilation), which is particularly valuable for understanding the patient's condition. This work aims to demonstrate the process of creating an intelligent system capable of operating in a real environment using streaming data provided by ventilators and vital-signs monitors. Its development is important to the physician because it becomes possible to cross-reference multiple variables in real time, analyzing whether a value is critical and whether its variation is clinically important.
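A minimal sketch of a critical-event check on a data stream, with illustrative thresholds (the system's actual rules are not given in the abstract):

```python
# Hedged sketch of streaming critical-event detection, assuming simple
# threshold rules; the system's real clinical rules are not described
# in the abstract, and the limits below are illustrative only.
from dataclasses import dataclass

@dataclass
class Reading:
    variable: str   # e.g. "spo2", "heart_rate"
    value: float

# Illustrative limits; real clinical limits depend on the patient.
CRITICAL_LIMITS = {"spo2": (90.0, 100.0), "heart_rate": (50.0, 120.0)}

def is_critical(reading: Reading) -> bool:
    low, high = CRITICAL_LIMITS[reading.variable]
    return not (low <= reading.value <= high)

def monitor(stream):
    """Flag critical events on a stream of monitor/ventilator readings."""
    for reading in stream:
        if is_critical(reading):
            yield reading  # hand off to alerting / decision support

# Usage with a toy stream standing in for the live monitor feed:
events = monitor([Reading("spo2", 97.0), Reading("heart_rate", 135.0)])
for e in events:
    print("critical:", e)
```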
Abstract:
Master's dissertation in Language Sciences.
Abstract:
Doctoral thesis in Child Studies (specialization in Sociology of Childhood).
Abstract:
Master's dissertation in Portuguese as a Non-Native Language (PLNM): Portuguese as a Foreign Language (PLE) and Portuguese as a Second Language (PL2).
Abstract:
The study of the conditions that enhance teaching and learning in physical education needs to consider, more consistently and explicitly, the evidence of the mesosystem established between the ecologies of the subject department's collaborative work and those of its classes. Specifically, it calls for an understanding of how the integrative negotiation of the students' social system can be enhanced by the subject department. To reach this understanding, the models of professional learning communities and of classroom ecology were articulated through the bioecology of human development. An embedded longitudinal case-study design was conducted, triangulating methods, sources and data from a physical education subject department with collaborative quality. Simultaneously, two physical education class ecologies were observed, differentiated by the negotiation dispositions of their teachers, while the students' social-agenda profiles for those ecologies were detailed in parallel. The results highlight the interaction of Person-Context-Process-Time properties as mesosystemic conditions favouring the integration of the students' social system, namely: the characteristics of the department as a professional learning community focused on integration; the macrosystemic interaction and the intentional cyclical alternation between department and class; and the articulation between collaborative curriculum development and the sharing and production of knowledge in multi-year and annual curricular spirals. These conditions were reflected in the class ecologies as broadly integrative similarities of the social system in instruction and organization. However, particularities also emerged in each ecology, translated into a more consistent congruence between the social-agenda profiles found for integrative engagement, associated with the differences in instructional alignment in the teachers' negotiation. This study raises implications for research concerning the verification and deepening of the identified mesosystemic conditions. In parallel, the professional practice of teachers and teacher educators finds implications for improving professional collaborative quality in order to continuously promote better physical education teaching and learning experiences in the microsystems analysed here.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when those algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general-purpose use and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers make use of multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration; currently available GPUs can run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of new GPUs, which have come to be called General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds; however, they are harder to program than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. We look at identifying the algorithms that fit best on a given architecture, as well as at combining architectures so that they complement one another. In particular, we take into account the level of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
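As a small illustration of the data-dependence property the abstract singles out, the sketch below (illustrative workload only) contrasts iterations with no dependences, which spread across SMP cores, with a loop-carried dependence, which forces sequential execution:

```python
# Hedged illustration of data dependence as a parallelizability
# criterion; the workload is a placeholder, not taken from the work.
from multiprocessing import Pool

def heavy(x: int) -> int:
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    data = [200_000] * 8

    # No dependence between iterations: scales with core count.
    with Pool() as pool:
        independent = pool.map(heavy, data)

    # Each step needs the previous result: cannot be farmed out as-is.
    acc = 0
    for x in data:
        acc = (acc + heavy(x)) % 1_000_003

    print(sum(independent), acc)
```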
Abstract:
A composting Heat Extraction Unit (HEU) was designed to utilise waste heat from decaying organic matter for a variety of heating applications. The aim was to construct an insulated, small-scale, sealed container filled with organic matter. In this vessel, a process fluid within embedded pipes would absorb thermal energy from the hot compost and transport it to an external heat exchanger. Experiments were conducted on the constituent parts, and the final design comprised a 2046-litre container insulated with polyurethane foam and Kingspan, with two arrays of Qualpex piping embedded in the compost to extract heat. The thermal energy was used in horticultural trials by heating polytunnels with a radiator system during a winter/spring period. The compost-derived energy was compared with conventional and renewable energy in the form of an electric fan heater and a solar panel. The compost-derived energy was able to raise polytunnel temperatures to 2-3°C above the control, with the solar panel contributing no thermal energy during the winter trial and the electric heater proving the most effective, maintaining its preset temperature of 10°C. Plants cultivated as performance indicators showed no significant difference in growth rates between the heat sources. A follow-on experiment using special growing mats to distribute compost thermal energy directly under the plants (radish, cabbage, spinach and lettuce) displayed more successful growth patterns than the control. The compost HEU was also used for more traditional space-heating and hot-water applications. A test space was successfully heated over two trials with varying insulation levels: maximum internal temperature increases of 7°C and 13°C were recorded for building U-values of 1.6 and 0.53 W/m²K respectively. The HEU successfully heated a 60-litre hot water cylinder for 32 days, with a maximum water temperature increase of 36.5°C recorded. The total energy recovered from the 435 kg of compost within the HEU during the polytunnel growth trial was 76 kWh, i.e. about 3 kWh/day for the 25 days the HEU was active. With a mean coefficient of performance of 6.8 calculated for the HEU, the technology is energy efficient. The compost HEU developed here could therefore be a useful renewable energy technology, particularly for small-scale rural dwellers and growers with access to significant quantities of organic matter.
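For reference, the headline figures combine as follows; the COP definition below is the standard ratio of useful heat output to input energy (the input energy itself is not stated in the abstract):

```latex
\[
\bar{P} = \frac{E_{\text{recovered}}}{t} = \frac{76\ \text{kWh}}{25\ \text{days}} \approx 3\ \text{kWh/day},
\qquad
\mathrm{COP} = \frac{Q_{\text{useful}}}{E_{\text{input}}} = 6.8 .
\]
```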
Abstract:
The purpose of this study was to evaluate the determinism of the AS-Interface network and the three main families of control systems that may use it, namely PLC, PC and RTOS. During the course of this study the PROFIBUS and Ethernet field-level networks were also considered, in order to ensure that they would not introduce unacceptable latencies into the overall control system. This research demonstrated that an incorrectly configured Ethernet network introduces unacceptable latencies of variable duration into the control system, so care must be exercised if the determinism of a control system is not to be compromised. This study introduces a new concept of using statistics and process capability metrics, in the form of Cpk values, to specify how suitable a control system is for a given control task. The PLC systems tested demonstrated extremely deterministic responses, but when a large number of iterations were introduced in the user program, the mean control-system latency was far too great for an AS-I network; the PLC was therefore found to be unsuitable for an AS-I network if a large, complex user program is required. The PC systems tested were non-deterministic and had latencies of variable duration. These latencies became extremely exaggerated when a graphing ActiveX control was included in the control application. The PC systems also exhibited a non-normal frequency distribution of control-system latencies, and as such are unsuitable for implementation with an AS-I network. The RTOS system tested overcame the problems identified with the PLC systems and produced an extremely deterministic response, even when a large number of iterations were introduced in the user program. It is capable of providing a suitably deterministic control-system response, even when an extremely large, complex user program is required.
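The capability metric itself is standard; a minimal sketch of rating measured latencies with a Cpk value, using simulated data and illustrative specification limits (the study's actual limits are not given):

```python
# Hedged sketch: rating control-system latency with the Cpk process
# capability index, as the study proposes. Spec limits and the
# simulated latency distribution below are illustrative only.
import numpy as np

def cpk(samples: np.ndarray, lsl: float, usl: float) -> float:
    """Minimum distance of the sample mean to a specification limit,
    measured in units of three standard deviations."""
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

rng = np.random.default_rng(0)
latencies_ms = rng.normal(loc=4.0, scale=0.3, size=10_000)  # PLC-like response

# Suppose the AS-I cycle budget tolerates 0-5 ms of latency.
print(f"Cpk = {cpk(latencies_ms, lsl=0.0, usl=5.0):.2f}")
```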
Abstract:
Walking robot, legged robot, walking machine, walking vehicle, control system, force control, technical application, kinematics, distributed control, embedded system
Abstract:
Operating system, program families, aspect-oriented programming, AspectC++, embedded systems, PURE operating system family
Abstract:
The modern computer systems in use nowadays are mostly processor-dominant, which means that their memory is treated as a slave element with one major task: to serve the execution units' data requirements. This organization is based on the classical von Neumann computer model, proposed seven decades ago, in the 1950s. The model suffers from a substantial processor-memory bottleneck because of the huge disparity between processor and memory working speeds. To address this problem, in this paper we propose a novel architecture and organization of processors and computers that attempts to provide a stronger match between the processing and memory elements in the system. The proposed model utilizes a memory-centric architecture, wherein execution hardware is added to the memory code blocks, allowing them to perform instruction scheduling and execution, to manage data requests and responses, and to communicate directly with the data memory blocks without using registers. This organization allows concurrent execution of all threads, processes or program segments that fit in memory at a given time. We therefore describe several possibilities for organizing the proposed memory-centric system with multiple data blocks and merged logic-memory blocks, utilizing a high-speed interconnection switching network.
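Purely as an illustration of the organization described (the paper defines no concrete API, and every name below is hypothetical), a toy model in which logic-memory blocks execute concurrently and fetch operands from data blocks through a switching function rather than through registers:

```python
# Hedged toy model of a memory-centric organization: code blocks carry
# their own execution units (modeled as threads) and reach data blocks
# via an interconnect function. Names are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

class DataBlock:
    def __init__(self, values):
        self.values = dict(values)

class LogicMemoryBlock:
    """A code block with attached execution hardware."""
    def __init__(self, name, program):
        self.name, self.program = name, program

    def run(self, switch):
        # Fetch operands directly from data memory via the interconnect.
        return self.name, self.program(switch)

def make_switch(blocks):
    def switch(block_id, key):
        return blocks[block_id].values[key]
    return switch

data = {0: DataBlock({"a": 2, "b": 3}), 1: DataBlock({"a": 10, "b": 4})}
programs = [
    LogicMemoryBlock("sum", lambda sw: sw(0, "a") + sw(0, "b")),
    LogicMemoryBlock("prod", lambda sw: sw(1, "a") * sw(1, "b")),
]

# All resident program segments execute concurrently.
with ThreadPoolExecutor() as pool:
    for name, result in pool.map(lambda b: b.run(make_switch(data)), programs):
        print(name, result)
```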
Abstract:
A detailed description of the morphology of the digestive organs of Enteroctopus megalocyathus (Gould, 1852) and Loligo sanpaulensis Brakoniecki, 1984 is given. The mandibles, the crop diverticulum, a doubly coiled caecum, the loop of the medium intestine and the appendages of the digestive gland are first described for E. megalocyathus. The most outstanding finding in L. sanpaulensis is the location of the single posterior salivary gland, wholly embedded in the digestive gland.
Abstract:
Despite the huge increase in processor and interprocessor network performance, many computational problems remain unsolved due to the lack of critical resources such as sustained floating-point performance, memory bandwidth, etc. Examples of these problems are found in climate research, biology, astrophysics, high-energy physics (Monte Carlo simulations) and artificial intelligence, among other areas. For some of these problems, the computing resources of a single supercomputing facility can be one or two orders of magnitude short of the resources needed to solve them. Supercomputer centers face an increasing demand for processing performance, with the direct consequence of a growing number of processors and systems, resulting in more difficult administration of HPC resources and the need for more physical space, higher electrical power consumption and improved air conditioning, among other problems. Some of these problems cannot be easily solved, so grid computing, understood as a technology enabling the addition and consolidation of computing power, can help in solving large-scale supercomputing problems. In this document, we describe how two supercomputing facilities in Spain joined their resources to solve a problem of this kind. The objectives of this experience were, among others, to demonstrate that such cooperation can enable the solution of larger problems and to measure the efficiency that could be achieved. We show some preliminary results of this experience and to what extent these objectives were achieved.
Abstract:
The parameterized expectations algorithm (PEA) involves a long simulation and a nonlinear least squares (NLS) fit, both embedded in a loop. Both steps are natural candidates for parallelization. This note shows that parallelization can lead to important speedups for the PEA. I provide example code for a simple model that can serve as a template for parallelization of more interesting models, as well as a download link for an image of a bootable CD that allows creation of a cluster and execution of the example code in minutes, with no need to install any software.
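The note's actual example code is not reproduced here; below is a hedged sketch of the idea under stated assumptions: the law of motion and the NLS target are placeholders, and the long simulation is parallelized as several independent paths distributed over worker processes.

```python
# Hedged sketch of parallelizing the long-simulation step of the
# parameterized expectations algorithm (PEA). The model equations and
# helper names are placeholders, not the note's example code.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import least_squares

def simulate_path(args):
    """Simulate one independent path under the current beliefs beta."""
    beta, n, seed = args
    rng = np.random.default_rng(seed)
    shocks = rng.normal(size=n)
    k = np.empty(n)
    k[0] = 1.0
    for t in range(1, n):
        # Placeholder law of motion driven by the parameterized expectation.
        expectation = np.exp(beta[0] + beta[1] * np.log(k[t - 1]))
        k[t] = 0.9 * k[t - 1] + 0.1 / expectation + 0.01 * shocks[t]
    return k

def residuals(beta, k):
    # NLS step: fit the parameterized expectation to realized outcomes
    # (the target below is a placeholder for the model's true mapping).
    target = 1.0 / (0.9 * k[1:] + 1e-9)
    return np.exp(beta[0] + beta[1] * np.log(k[:-1])) - target

if __name__ == "__main__":
    beta = np.array([0.0, 0.0])
    with Pool() as pool:
        for _ in range(10):  # outer PEA loop
            paths = pool.map(simulate_path, [(beta, 25_000, s) for s in range(4)])
            k = np.concatenate(paths)
            beta = least_squares(residuals, beta, args=(k,)).x
    print("converged beliefs:", beta)
```

Distributing independent simulated paths over a worker pool is one simple way to parallelize the simulation step; the NLS fit could be parallelized analogously.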