936 results for "Architectures profondes"
Abstract:
Despite the strong influence of plant architecture on crop yield, most crop models either ignore it or deal with it in a very rudimentary way. This paper demonstrates the feasibility of linking a model that simulates the morphogenesis and resultant architecture of individual cotton plants with a crop model that simulates the effects of environmental factors on critical physiological processes and resulting yield in cotton. First the varietal parameters of the models were made concordant. Then routines were developed to allocate the flower buds produced each day by the crop model amongst the potential positions generated by the architectural model. This allocation is done according to a set of heuristic rules. The final weight of individual bolls and the shedding of buds and fruit caused by water, N, and C stresses are processed in a similar manner. Observations of the positions of harvestable fruits, both within and between plants, made under a variety of agronomic conditions that had resulted in a broad range of plant architectures were compared to those predicted by the model with the same environmental inputs. As illustrated by comparisons of plant maps, the linked models performed reasonably well, though performance of the fruiting point allocation and shedding algorithms could probably be improved by further analysis of the spatial relationships of retained fruit. (C) 2002 Elsevier Science Ltd. All rights reserved.
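The daily allocation of flower buds to architecturally generated positions could be sketched as follows. This is a minimal illustration only, assuming a simple "lowest main-stem node, innermost branch position first" priority; the paper's actual heuristic rules are not given in the abstract, and the function and tuple layout are hypothetical.

```python
# Illustrative sketch (NOT the authors' actual rules): allocate the daily
# count of flower buds produced by a crop model among the candidate
# positions generated by an architectural model.

def allocate_buds(n_buds, candidate_positions):
    """Assign n_buds among candidate positions.

    candidate_positions: list of (main_stem_node, branch_rank) tuples
    (hypothetical encoding). Preference: lower main-stem nodes first,
    then positions closer to the stem (smaller branch rank).
    Returns the positions chosen to bear a bud.
    """
    ranked = sorted(candidate_positions, key=lambda p: (p[0], p[1]))
    return ranked[:n_buds]

positions = [(5, 2), (3, 1), (4, 1), (3, 2), (6, 1)]
chosen = allocate_buds(3, positions)
# chosen == [(3, 1), (3, 2), (4, 1)]
```

Shedding caused by water, N, and C stresses could be modelled analogously, removing fruit from the least-preferred occupied positions first.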
Abstract:
Starting in 1998, the Secretaria da Fazenda do Estado de Pernambuco launched the Programa de Modernização Fazendária (Promofaz), which promoted several organizational changes, concentrating efforts on structuring projects, including investments in information technology (IT) planning. Among the IT projects, the design of the systems and technology architectures stood out. This work reviews the organizational processes of the tax-administration modernization, covering aspects of organizational culture, change, and learning, set against historical and theoretical references, with a stronger focus on the information technology area. Despite the difficulties, the computerization process has had strong impacts on the institution's cultural components. Without information technology as a tool, the new management models adopted by Sefaz would probably not have been implemented; it was observed, however, that technology alone does not work the miracle of change, and that a whole set of efforts and parallel work on other change factors is required.
Abstract:
Based on a previously developed mathematical model for the fuel consumption of a modular car, we discuss here the cross impacts of engineering scenarios versus flexibility in use for modular vehicle architectures, aimed at the CO2 emission reductions targeted by the European Union in 2009. A systems perspective is adopted in conceptualizing a modular vehicle architecture. From a theoretical viewpoint, we found the modular architecture of vehicles to be a potential design strategy for minimizing fuel inefficiencies and, thus, a strategy for design for environment.
Abstract:
The integration and composition of software systems requires a good architectural design phase to speed up communication between (remote) components. During the implementation phase, however, the code that coordinates such components often ends up mixed into the main business code. This leads to maintenance problems, raising the need for, on the one hand, separating the coordination code from the business code and, on the other hand, providing mechanisms for analysis and comprehension of the architectural decisions once made. In this context, our aim is to develop a domain-specific language, CoordL, to describe typical coordination patterns. From our point of view, coordination patterns are abstractions, in graph form, over the composition of coordination statements in the system code. These patterns allow us to identify, by means of pattern-based graph search strategies, the code responsible for coordinating the several components of a system. The recovery and separation of architectural decisions, for a better comprehension of the software, is the main purpose of this pattern language.
Abstract:
This paper proposes a wireless EEG acquisition platform based on an Open Multimedia Architecture Platform (OMAP) embedded system. A high-impedance active dry electrode was tested to improve the scalp-electrode interface. The sigma-delta ADS1298 analog-to-digital converter was used, and a kernel-space character driver was developed to manage the communication between the converter unit and the OMAP's ARM core. The acquired EEG signal data are processed by a user-space application, which accesses the driver's memory, saves the data to an SD card, and transmits them through a wireless TCP/IP socket to a PC. The electrodes were tested through the alpha wave replacement phenomenon. The experimental results showed the expected alpha rhythm (8-13 Hz) reactiveness to the eyes-opening task. The driver spends about 725 μs acquiring and storing the data samples. The application takes about 244 μs to get the data from the driver and 1.4 ms to save them to the SD card. A WiFi throughput of 12.8 Mbps was measured, which results in a transmission time of 5 ms for 512 kb of data. The embedded system draws about 200 mA with wireless off and 400 mA with it on. The system exhibits reliable performance in recording EEG signals and transmitting them wirelessly. Beyond microcontroller-based architectures, the proposed platform demonstrates that powerful ARM processors running embedded operating systems can be programmed with real-time constraints at the kernel level in order to control hardware, while maintaining their parallel processing abilities in high-level software applications.
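The user-space side described above reads sample blocks from the driver and ships them over a TCP/IP socket, which implies some binary framing of the multi-channel samples. A minimal sketch of such framing is shown below; the frame layout, function names, and channel count per frame are assumptions for illustration, not taken from the paper (the ADS1298 is an 8-channel converter).

```python
# Hypothetical framing sketch: pack one 8-channel EEG sample set into a
# network-byte-order binary frame for transmission over a TCP socket,
# and unpack it on the receiving PC. Layout is an assumption, not the
# paper's actual wire format.
import struct

N_CHANNELS = 8  # the ADS1298 provides 8 channels

def pack_frame(samples):
    """samples: list of N_CHANNELS signed 32-bit integers.
    Returns a big-endian (network byte order) binary frame."""
    return struct.pack(f"!{N_CHANNELS}i", *samples)

def unpack_frame(frame):
    """Inverse of pack_frame: recover the list of channel samples."""
    return list(struct.unpack(f"!{N_CHANNELS}i", frame))

frame = pack_frame([100, -42, 7, 0, 1, 2, 3, 4])
# unpack_frame(frame) == [100, -42, 7, 0, 1, 2, 3, 4]
```

A fixed binary layout like this keeps the per-frame overhead constant, which matters when the sender has the microsecond-level timing budget reported in the abstract.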
Abstract:
Several studies suggest that computer-mediated communication can decrease group effectiveness and reduce user satisfaction in terms of trust and comfort. Supported by an experiment in which the emotional or affective aspects of communication were tested with two architectures, the Direct Communication Architecture (DCA) and the Virtual Communication Architecture (VCA), this paper validates the thesis that, from the users' perspective, there is no opposition to the acceptance of virtual environments and interfaces for communication, and that these environments are able to cope with the reconfiguration dynamics requirements of virtual teams or client-server relations in a virtual enterprise operation.
Abstract:
Joining the efforts of academic and corporate teams, we developed an integration architecture, MULTIS, that enables corporate e-learning managers to use a Learning Management System (LMS) to manage educational activities in virtual worlds. This architecture was then implemented for the Formare LMS. In this paper we present the architecture and the concrete implementations developed for the Second Life Grid/OpenSimulator virtual world platforms. Current systems focus on activities managed by individual trainers rather than by groups of trainers with large numbers of trainees: they concentrate on providing the LMS with information about educational activities taking place in a virtual world and/or on accessing, from within the virtual world, some of the information stored in the LMS, while disregarding the streamlining of activity setup and data collection in multi-trainer contexts, among other administrative issues. This architecture aims to overcome the limitations of existing systems for the organizational management of corporate e-learning activities.
Abstract:
In this paper we present results on the optimization of device architectures for colour and imaging applications, using a device with a TCO/pinpi'n/TCO configuration. The effect of the applied voltage on colour selectivity is discussed. The results show that the spectral response curves exhibit rather good separation between the red, green, and blue basic colours. By combining the information obtained under positive and negative applied bias, a colour image is acquired without colour filters or a pixel architecture. A low-level image processing algorithm is used for colour image reconstruction.
Abstract:
Plain radiography still accounts for the vast majority of imaging studies performed in many clinical settings. Digital detectors are now prominent in many imaging facilities, and they are the main driving force towards filmless environments. There has been a working paradigm shift due to the functional separation of acquisition, visualization, and storage, with a deep impact on imaging workflows. Moreover, with direct digital detectors, images are made available almost immediately. Digital radiology is now completely integrated in Picture Archiving and Communication System (PACS) environments governed by the Digital Imaging and Communications in Medicine (DICOM) standard. In this chapter, a brief overview of PACS architectures and components is presented, together with a necessarily brief account of the DICOM standard. Special focus is given to the DICOM digital radiology objects and to how specific attributes may now be used to improve and enrich the metadata repository associated with image data. Regular scrutiny of the metadata repository may serve as a valuable tool for improved, cost-effective, and multidimensional quality control procedures.
Abstract:
Over time, the XML markup language has acquired considerable importance in application development, standards definition, and the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a wide range of applications, which makes it essential to choose the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language such as Java for XML processing, it becomes necessary to use effective mechanisms, e.g. APIs, that allow large documents to be read and processed appropriately. This paper presents a performance study of the main existing Java APIs that deal with XML documents, in order to identify the most suitable one for processing large XML files.
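The central trade-off such a study evaluates, tree-building parsers (DOM-style) versus streaming parsers (SAX/StAX-style), can be illustrated with Python's standard library, used here only as a language-neutral sketch of the two parsing models the Java APIs embody:

```python
# Tree vs streaming XML parsing, sketched with Python's stdlib.
import io
import xml.etree.ElementTree as ET

doc = b"<items><item id='1'/><item id='2'/><item id='3'/></items>"

# Tree (DOM-style) parsing: the whole document is materialized in memory,
# after which any part of it can be accessed at random.
root = ET.fromstring(doc)
tree_count = len(root.findall("item"))

# Streaming (SAX/StAX-style) parsing: elements are handled as events while
# the input is read, so memory use can stay bounded for huge files.
stream_count = 0
for event, elem in ET.iterparse(io.BytesIO(doc), events=("end",)):
    if elem.tag == "item":
        stream_count += 1
        elem.clear()  # discard the processed element to bound memory use

# tree_count == stream_count == 3
```

For large files the streaming model is usually the memory-efficient choice, at the cost of giving up random access, which is exactly the kind of trade-off a performance comparison of Java XML APIs has to weigh.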
Abstract:
Project work submitted for the degree of Master in Informatics and Computer Engineering.
Abstract:
This dissertation addresses the development of a search and rescue system based on multiple ground vehicles, using the LINCE vehicles of the Laboratório de Sistemas Autónomos. With the main purpose of giving the vehicles autonomy, possible operation scenarios were studied in order to determine the main functionalities required of the system. Methodologies for the analysis and characterization of multi-robot systems were also studied, based on the existing state of the art, and the conceptual architecture of the system and of the vehicles to be developed was designed. The preparation of the vehicles covered the study of possible sensing and actuation solutions, and the development of a hardware architecture capable of interconnecting all their peripherals. New sensors and actuators were fitted, and some of those sensors were developed in-house. For their interconnection and maintenance, new interface and control peripherals, as well as power management peripherals, were also developed. Finally, a mission manager capable of receiving mission specifications was adapted to run on the vehicles.
Abstract:
In this work I set out to build a real-time data acquisition system based on the parallel port. To achieve this goal, a literature survey on real-time operating systems was carried out, highlighting and illustrating the most important milestones of their evolution. This survey made clear why these systems have proliferated in spite of the costs involved, depending on the application, as well as the scientific and technological difficulties that researchers faced and successfully overcame. For Linux to behave as a real-time system, it must be configured and patched, for example with RTAI or ADEOS. Since there are several kinds of solutions for adding real-time characteristics to Linux, a study, supported by examples, was carried out on the kernel architectures most commonly used for that purpose. Real-time operating systems provide certain services, features, and restrictions that distinguish them from general-purpose operating systems. Given the objective of this work, and supported by examples, we carried out a short study describing, among other things, the operation of the scheduler and the concepts of latency and response time. We show that there are only two types of real-time systems: 'hard', with rigid timing constraints, and 'soft', which covers both firm and soft timing constraints. Tasks were classified according to the types of events that trigger them, highlighting their main characteristics. The real-time system chosen to build the parallel-port data acquisition system was RTAI/Linux. To better understand its behaviour, we studied the services and functions of RTAI.
Special attention was given to the inter-task and inter-process communication services (shared memory and FIFOs), to the scheduling services (types of schedulers and tasks), and to interrupt handling (the interrupt service routine, ISR). The study of these services guided the choices made regarding the communication method between tasks and processes, as well as the type of task to use (sporadic or periodic). Since in this work the physical communication medium between the external environment and the hardware is the parallel port, we also needed to understand how this interface works, namely the parallel port's configuration registers. It was thus possible to configure it at the hardware (BIOS) and software (kernel module) levels according to the objectives of this work, optimizing the use of the parallel port, in particular by increasing the number of bits available for reading data. In developing the hard real-time task, the considerations mentioned above were taken into account. A sporadic task was developed, since the intention was to read data from the parallel port only when necessary (on interrupt), that is, when data were available to read. We also developed an application to visualize the data collected via the parallel port. Communication between the task and the application is ensured through shared memory: provided data consistency is guaranteed, communication between Linux processes and the real-time (RTAI) tasks running at kernel level becomes very simple. To evaluate the performance of the developed system, a soft real-time task was created and its response times were compared with those of the hard real-time task.
The timing responses obtained with a logic analyzer, together with graphs built from these data, show and confirm the benefits of the real-time parallel-port data acquisition system using a hard real-time task.
Abstract:
A new high-performance architecture for the computation of all the DCT operations adopted in the H.264/AVC and HEVC standards is proposed in this paper. In contrast to other dedicated transform cores, the presented multi-standard transform architecture builds on a completely configurable, scalable, and unified structure, which is able to compute not only the forward and inverse 8×8 and 4×4 integer DCTs and the 4×4 and 2×2 Hadamard transforms defined in the H.264/AVC standard, but also the 4×4, 8×8, 16×16, and 32×32 integer transforms adopted in HEVC. Experimental results obtained using a Xilinx Virtex-7 FPGA demonstrate the superior performance and hardware efficiency of the proposed structure, which outperforms its most prominent related designs by at least 1.8 times. When integrated in a multi-core embedded system, this architecture allows the real-time computation of all the transforms mentioned above for resolutions as high as 8k Ultra High Definition Television (UHDTV) (7680×4320 @ 30 fps).
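For reference, the smallest of the transforms such a core computes, the 4×4 forward integer "core" DCT of H.264/AVC, is the matrix product Y = C·X·Cᵀ with small integer coefficients (the per-coefficient scaling stage that follows in the standard is omitted here). A plain software sketch:

```python
# 4x4 forward integer core transform of H.264/AVC: Y = C * X * C^T.
# The post-scaling stage of the standard is deliberately omitted.
C = [
    [1,  1,  1,  1],
    [2,  1, -1, -2],
    [1, -1, -1,  1],
    [1, -2,  2, -1],
]

def matmul(A, B):
    """Naive integer matrix product, sufficient for 4x4 blocks."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def forward_4x4(X):
    """Apply the core transform to a 4x4 block of integers."""
    Ct = [list(r) for r in zip(*C)]  # transpose of C
    return matmul(matmul(C, X), Ct)

# A constant block concentrates all its energy in the DC coefficient:
X = [[1] * 4 for _ in range(4)]
Y = forward_4x4(X)
# Y[0][0] == 16 and every other coefficient is 0
```

Because every coefficient of C is ±1 or ±2, hardware implementations like the one described need only adders and shifts, no multipliers, which is what makes a unified multi-standard structure feasible.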
Abstract:
Conference: IEEE 24th International Conference on Application-Specific Systems, Architectures and Processors (ASAP), Jun 05-07, 2013