880 results for Força e flexibilidade
Abstract:
This study aims to characterize the association between executive cognitive function and work ability in healthcare professionals (physicians and nurses) and education professionals (teachers). Executive cognitive function is defined as a set of higher-order cognitive processes (planning ability, abstract reasoning, cognitive flexibility and problem solving) that are decisive in the control and coordination of cognitive operations and fundamental to the organization and monitoring of human behaviour. The integrity of these functions is decisive for the adequate performance of everyday tasks, including those in organizational settings. Work ability is a strong predictor of job performance and is defined as the worker's self-assessment of his or her present and near-future well-being and of the capacity to carry out the job, given its demands and the health and psychological and cognitive resources available. To understand the relation between these two variables in physicians, nurses and teachers, we used a sample of 218 participants: 93 nurses, 100 secondary-school teachers and 25 physicians. To assess executive cognitive functions, namely cognitive flexibility and non-verbal abstract reasoning, we used the Halstead Category Test (HCT); to assess planning and problem-solving ability, the Tower of Hanoi (TH); and to measure work ability, the Work Ability Index. To control for variables that could influence this relation, we used the General Health Questionnaire (GHQ-12), a trait-anxiety scale, the Eysenck Personality Questionnaire, a job satisfaction scale and a dichotomous (Yes/No) question on shift work. The analysis of the results shows that impairments in executive cognitive functions may harm work ability. However, variables such as age, shift work, personality and mental health may moderate this relation. Finally, comparing physicians, nurses and teachers, physicians and nurses showed greater impairment of executive cognitive functions than teachers, but not of work ability. In conclusion, this work contributes to a better understanding of the role of executive functions in the workplace (particularly in healthcare and education), supporting the development and implementation of occupational health promotion programmes in organizational settings.
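As an illustration of what such a moderating effect means statistically, the Python sketch below fits a regression with an interaction term between an executive-function score and age on synthetic data; the variable names, the synthetic dataset and the choice of moderator are assumptions made for this example and do not reproduce the study's actual analysis.

```python
# Illustrative moderation analysis: does age moderate the relation between
# executive-function performance and work ability? All data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 218                                    # sample size reported in the abstract
df = pd.DataFrame({
    "hct_errors": rng.normal(40, 10, n),   # hypothetical Halstead Category Test errors
    "age": rng.normal(45, 9, n),
})
# Synthetic Work Ability Index with an interaction effect built in for illustration.
df["wai"] = (42 - 0.1 * df["hct_errors"]
             - 0.002 * df["hct_errors"] * (df["age"] - 45)
             + rng.normal(0, 2, n))

df["hct_c"] = df["hct_errors"] - df["hct_errors"].mean()   # center predictors
df["age_c"] = df["age"] - df["age"].mean()

# A significant hct_c:age_c coefficient would indicate that age moderates the
# executive function -> work ability relation.
model = smf.ols("wai ~ hct_c * age_c", data=df).fit()
print(model.summary())
```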
Abstract:
Flexible radio transmitters based on the Software-Defined Radio (SDR) concept are gaining increased research importance due to the unparalleled proliferation of new wireless standards operating at different frequencies, using dissimilar coding and modulation schemes, and targeted at different ends. In this new wireless communications paradigm, the physical layer of the radio transmitter must be able to support the simultaneous transmission of multi-band, multi-rate, multi-standard signals, which in practice is very hard or very inefficient to implement using conventional approaches. Nevertheless, the latest developments in this field include novel all-digital transmitter architectures where the radio datapath is digital from the baseband up to the RF stage. Such a concept is inherently highly flexible and represents an important step towards the development of SDR-based transmitters. However, implementing such a radio for a real-world communications scenario is a challenging task, and a few key limitations are still preventing a wider adoption of this concept. This thesis addresses some of these limitations by proposing and implementing innovative all-digital transmitter architectures with higher inherent flexibility and integration, while also improving important figures of merit such as coding efficiency, signal-to-noise ratio, usable bandwidth, and in-band and out-of-band noise. In the first part of this thesis, the concept of transmitting RF data using an entirely digital approach based on pulsed modulation is introduced. A comparison between several implementation technologies is also presented, showing that FPGAs provide an interesting compromise between performance, power efficiency and flexibility, thus making them an interesting choice as an enabling technology for pulse-based all-digital transmitters. Following this discussion, the fundamental concepts inherent to pulsed modulators, their key advantages, main limitations and typical enhancements suitable for all-digital transmitters are also presented. Recent advances regarding the two most common classes of pulse-modulated transmitters, namely RF-level and baseband-level modulators, are introduced, along with several examples of state-of-the-art architectures found in the literature. The core of this dissertation, containing the main developments achieved during this PhD work, is then presented and discussed. The first key contribution to the state of the art presented here is the development of a novel ΣΔ-based all-digital transmitter architecture capable of multi-band and multi-standard data transmission in a very flexible and integrated way, where the pulsed RF output operating in the microwave frequency range is generated inside a single FPGA device. A fundamental contribution regarding the simultaneous transmission of multiple RF signals is then introduced by presenting and describing novel all-digital transmitter architectures that take advantage of the multi-gigabit data serializers available on current high-end FPGAs to transmit multiple independent RF carriers in a time-interleaved manner. Further improvements to this design approach led to a two-stage up-conversion transmitter architecture enabling fine frequency tuning of concurrent multichannel multi-standard signals.
Finally, further improvements regarding two key limitations inherent to current all-digital transmitter approaches are addressed, namely the poor coding efficiency and the combined high quality factor and tunability requirements of the RF output filter. The design approach followed, based on polyphase multipath circuits, allowed the creation of a new FPGA-embedded agile transmitter architecture that significantly improves important figures of merit, such as coding efficiency and SNR, while maintaining the high flexibility required to support multichannel multimode data transmission.
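For illustration, a minimal Python sketch of a first-order digital ΣΔ modulator of the kind underlying such pulsed (two-level) transmitter architectures is shown below; the oversampling rate, test tone and loop order are illustrative assumptions, not the architectures actually implemented in the thesis.

```python
# Minimal first-order digital sigma-delta modulator: quantizes a [-1, 1] input
# to +/-1 pulses while shaping quantization noise away from the signal band.
import numpy as np

def sigma_delta_1st_order(x):
    """Return a 1-bit (+/-1) pulse train encoding the oversampled input x."""
    y = np.empty_like(x)
    acc = 0.0                                       # integrator state
    for n, sample in enumerate(x):
        acc += sample - (y[n - 1] if n else 0.0)    # error feedback into integrator
        y[n] = 1.0 if acc >= 0 else -1.0            # 1-bit quantizer
    return y

fs = 64.0e6                                   # oversampled rate (illustrative)
t = np.arange(4096) / fs
x = 0.5 * np.sin(2 * np.pi * 1.0e6 * t)       # baseband test tone
pulses = sigma_delta_1st_order(x)
# 'pulses' carries the tone in-band; the out-of-band quantization noise would be
# removed by the RF reconstruction filter discussed in the abstract.
```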
Abstract:
The constant evolution of the new technologies that support the way our devices connect, as well as the way we use different online capabilities and services, has created an unprecedented set of new challenges, motivating the development of a recent research area known as the Future Internet. In this new research area, new architectural aspects are being developed which, by restructuring the underlying core components that make up the Internet, evolve it in a way capable not only of meeting these new challenges but also of preparing it for the challenges of tomorrow. Key aspects within this set of challenges are heterogeneous network environments composed of different types of access networks, the ever-growing shift towards peer-to-peer (P2P) traffic as the most used type of traffic on the Internet, the orchestration of Internet of Things (IoT) scenarios that exploit Machine-to-Machine (M2M) interaction mechanisms, and the use of Information-Centric Networking (ICN) mechanisms. This thesis presents a new architecture capable of tackling these challenges simultaneously, evolving the connectivity procedures and the entities involved through the addition of a middleware layer that acts as an advanced control management mechanism. This control management mechanism brings high-level entities (such as services, applications, mobility management entities, routing operations, etc.) closer to the lower-layer components (for example, link layers, sensors and actuators), allowing a joint optimization of the underlying connectivity procedures. The results obtained highlight not only the flexibility of the mechanisms that make up the architecture, but also their ability to provide performance gains when compared with other special-purpose solutions, while supporting a wider range of scenarios and applications.
Abstract:
In this thesis, developed within the Doctoral Programme in Chemistry of the University of Aveiro, new synthetic receptors were developed, built from the tetraazacalix[2]arene[2]triazine macrocyclic platform or from the isophthalamide fragment. Both structural units were decorated with molecular recognition groups based on amide and/or urea groups, with the aim of acting as selective receptors for anions of biological or pharmacological relevance, including acetate, oxalate, malonate, succinate, glutarate, diglycolate, L- and D-NHBoc-alanine, (S)- and (R)-phenylpropanoate, (S,S)- and (R,R)-tartrate, fumarate, maleate, Cl-, HCO3-, H2PO4-, HSO4- and SO4^2-. Chapter 1 provides a literature review of recent developments in the synthesis, structural characterization and applications of functional receptors related to those developed in this thesis, with particular emphasis on those studied as anion receptors. In this field, while isophthalamide-derived receptors have been extensively studied over the last decades, the development of anion receptors inspired by heteracalix[2]arene[2]triazines is still taking its first steps. Chapter 2 presents the synthesis of four new macrocycles derived from tetraazacalix[2]arene[2]triazine, incorporating one or two L-alanine (A1, A2) or L-leucine (L1, L2) arms on the benzene rings and derivatized with amide groups as recognition units. In addition, two new azacalix[2]arene[2]triazines are presented, containing one (U1) or two (U2) arms with urea groups substituted with an (S)-methylbenzyl group. The macrocycles A2Me4 and U2Me4 were also prepared by methylation of the bridging nitrogen atoms of A2 and U2 and were subsequently used in association studies. The synthesized compounds were characterized by spectroscopic techniques, complemented by single-crystal X-ray diffraction in the case of U2Me4. Chapter 3 covers the molecular recognition studies between the macrocycles A2Me4 and U2Me4 and the anions derived from the aliphatic mono- and dicarboxylic acids, isomeric carboxylic acids (enantiomers and geometric isomers), amino acids and polyoxoanions listed above, except HCO3-. The association studies were carried out by 1H NMR titration techniques, with determination of the respective affinity constants. All the associations studied showed a 1:1 receptor:substrate stoichiometry, except for the associations formed between A2Me4 and U2Me4 and H2PO4- (1:2). The complexes A2Me4∙SO4^2- and U2Me4∙(H2PO4-)2 are the most stable, with association constants of 7.4 × 10^4 M^-1 and above 10^5 M^-2, respectively. Recognition of the dicarboxylates occurred through the two arms of the macrocycle, with the anions whose carboxylate groups are separated by longer aliphatic chains (glutarate and diglycolate) showing a better fit to the arms of A2Me4 and U2Me4. No enantioselective recognition of anions was observed. In contrast, the affinity constants for the associations with the anions of the cis (maleate) and trans (fumarate) isomers of but-2-enedioic acid, of 89 and 4920 M^-1 for A2Me4 and 481 and 4007 M^-1 for U2Me4, respectively, suggest that both receptors are selective for fumarate.
Chapter 4 describes the synthesis of nine acyclic receptors incorporating the isophthalamide unit (Iso-1 to Iso-9) and side arms bearing anion recognition groups. While receptor Iso-1 has only amide groups as recognition units, receptors Iso-2, Iso-3, Iso-5, Iso-6, Iso-7 and Iso-9 have amide and urea groups, and derivatives Iso-4 and Iso-8 have amide and sulfonylurea groups. In each of these compounds, the recognition groups are separated by an ethylene chain whose flexibility allows a better fit to the anions. The isophthalamide derivatives prepared were characterized by spectroscopic techniques. Chapter 5 presents the association studies, carried out by 1H NMR titration techniques, between Iso-1, Iso-2, Iso-4, Iso-6 and Iso-8 and the anions H2PO4-, HCO3-, Cl- and oxalate. Receptors Iso-1, Iso-2 and Iso-6 showed the highest affinity for the dianion, with Kass values of 6100, 7800 and 9800 M^-1 respectively, and the lowest for Cl- (17 < Kass < 19 M^-1). More stable associations were always formed with H2PO4- (294 < Kass < 427 M^-1) than with HCO3-, the strongest association with the latter being determined for Iso-2 (Kass = 95 M^-1). The Iso-4 and Iso-8 molecules underwent deprotonation of the sulfonylurea groups in the presence of all anions except Cl-. Chapter 6 presents the general conclusions, and Chapter 7 describes the experimental procedures as well as the spectroscopic data of the products obtained.
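For context, such affinity constants are conventionally obtained by fitting the 1H NMR titration curves to the standard 1:1 host-guest binding model sketched below (a generic formulation assumed here for illustration, not necessarily the exact fitting procedure used in the thesis):

```latex
% Standard 1:1 host (H) / guest (G) binding model commonly fitted to 1H NMR titration data.
K_{\mathrm{ass}} = \frac{[\mathrm{HG}]}{[\mathrm{H}][\mathrm{G}]}, \qquad
\delta_{\mathrm{obs}} = \delta_{\mathrm{H}}
  + \left(\delta_{\mathrm{HG}} - \delta_{\mathrm{H}}\right)\frac{[\mathrm{HG}]}{[\mathrm{H}]_0},

[\mathrm{HG}] = \tfrac{1}{2}\left( [\mathrm{H}]_0 + [\mathrm{G}]_0 + \tfrac{1}{K_{\mathrm{ass}}}
  - \sqrt{\left([\mathrm{H}]_0 + [\mathrm{G}]_0 + \tfrac{1}{K_{\mathrm{ass}}}\right)^2
  - 4\,[\mathrm{H}]_0[\mathrm{G}]_0} \right)
```

Here [H]0 and [G]0 are the total host and guest concentrations and δ_obs is the observed chemical shift under fast exchange; K_ass is obtained by non-linear fitting of δ_obs over the titration.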
Abstract:
In modern society, new devices, applications and technologies with sophisticated capabilities are converging in the same network infrastructure. Users are also increasingly demanding in their personal preferences and expectations, desiring Internet connectivity anytime and everywhere. These aspects have triggered many research efforts, since the current Internet is reaching a breaking point in trying to provide enough flexibility for users and profits for operators while dealing with the complex requirements raised by this recent evolution. Fully aligned with Future Internet research, many solutions have been proposed to enhance current Internet-based architectures and protocols so that they become context-aware, that is, dynamically adapted to changes in the information characterizing any network entity. In this sense, this Thesis proposes a new architecture that allows several networks with different characteristics to be created according to their context, on top of a single Wireless Mesh Network (WMN) whose infrastructure and protocols are very flexible and self-adaptable. More specifically, this Thesis models the context of users, which can span their security, cost and mobility preferences, their devices' capabilities or their services' quality requirements, in order to turn a WMN into a set of logical networks. Each logical network is configured to meet a set of user context needs (for instance, support for high mobility and low security). To implement this user-centric architecture, this Thesis uses network virtualization, which has often been advocated as a means to deploy independent network architectures and services towards the Future Internet while allowing dynamic resource management. In this way, network virtualization can allow a flexible and programmable configuration of a WMN, so that it can be shared by multiple logical networks (or virtual networks, VNs). Moreover, the high level of isolation introduced by network virtualization can be used to differentiate the protocols and mechanisms of each context-aware VN. This architecture raises several challenges in controlling and managing the VNs on demand, in response to user and WMN dynamics. In this context, we target the mechanisms to: (i) discover and select the VN to assign to a user; (ii) create, adapt and remove the VN topologies and routes. We also explore how the rate of variation of the user context requirements can be taken into account to improve the performance and reduce the complexity of VN control and management. Finally, due to the scalability limitations of centralized control solutions, we propose a mechanism to distribute the control functionalities across the architectural entities, which can cooperate to control and manage the VNs in a distributed way.
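As a purely hypothetical illustration of the VN discovery and selection step mentioned in (i), the Python sketch below matches a user's context against the profiles of available virtual networks; the class names, context fields and selection rule are assumptions made for this example, not the mechanism actually specified in the Thesis.

```python
# Toy context-to-VN matching: pick the cheapest virtual network whose offered
# mobility/security levels match the user's context and whose cost is acceptable.
from dataclasses import dataclass

@dataclass
class UserContext:
    mobility: str     # "low" | "high"
    security: str     # "low" | "high"
    max_cost: float   # highest relative cost the user accepts

@dataclass
class VirtualNetwork:
    name: str
    mobility: str     # mobility support level offered
    security: str     # security level offered
    cost: float       # relative cost of using this VN

def select_vn(user: UserContext, vns: list[VirtualNetwork]) -> VirtualNetwork | None:
    """Return the cheapest VN satisfying the user's context, or None if none does."""
    candidates = [vn for vn in vns
                  if vn.mobility == user.mobility
                  and vn.security == user.security
                  and vn.cost <= user.max_cost]
    return min(candidates, key=lambda vn: vn.cost, default=None)

vns = [VirtualNetwork("vn-mobile", "high", "low", 1.0),
       VirtualNetwork("vn-secure", "low", "high", 2.0)]
print(select_vn(UserContext("high", "low", 1.5), vns))   # -> vn-mobile
```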
Abstract:
The genetic code is not universal. Alterations to its standard form have been discovered in both prokaryotes and eukaryotes and have demolished the dogma of an immutable code. For instance, several Candida species translate the standard leucine CUG codon as serine. In the case of the human pathogen Candida albicans, a serine tRNA (tRNACAGSer) incorporates in vivo 97% serine and 3% leucine at CUG sites in proteins. Such ambiguity is flexible, and the level of leucine incorporation increases significantly in response to environmental stress. To elucidate the function of such ambiguity and clarify whether the identity of the CUG codon could be reverted from serine back to leucine, we developed a forced evolution strategy to increase leucine incorporation at CUGs and a fluorescent reporter system to monitor such incorporation in vivo. Leucine misincorporation increased from 3% up to nearly 100%, reverting CUG identity from serine back to leucine. Growth assays showed that increasing leucine incorporation produced impressive arrays of phenotypes of high adaptive potential. In particular, strains with high levels of leucine misincorporation exhibited novel phenotypes and high levels of tolerance to antifungals. Whole-genome re-sequencing revealed that increasing levels of leucine incorporation were associated with the accumulation of single nucleotide polymorphisms (SNPs) and loss of heterozygosity (LOH) in the higher misincorporating strains. SNPs accumulated preferentially in genes involved in cell adhesion, filamentous growth and biofilm formation, indicating that C. albicans uses its natural CUG ambiguity to increase genetic diversity in processes related to pathogenesis and drug resistance. The overall data provided evidence for unanticipated flexibility of the C. albicans genetic code and highlighted new roles of codon ambiguity in the evolution of genetic and phenotypic diversity.
Abstract:
Communities of practice (CoPs), as well as action research, have been pointed out in the literature as ways of promoting teachers' professional development and fostering the improvement of their teaching practices. However, empirical evidence regarding the teaching practices developed by teachers within such social configurations is scarce. This study seeks to help fill that gap by analysing an online CoP that involved science teachers and researchers in Science Education (SE) and was formed within the project "Investigação e práticas lectivas em Educação em Ciência: Dinâmicas de interacção" (IPEC), with the distinct focuses outlined below. The research followed a predominantly qualitative, descriptive, exploratory, single case study methodology, the case being the teaching practices developed by the members of the aforementioned CoP and the dynamics of interaction among them. As data collection techniques, we relied mainly on observation mediated by the online platform supporting the project (statistical data and the automatically recorded messages) and on document collection. As data analysis techniques, we relied mainly on content analysis and internal document analysis, triangulating data from different sources. Based on the Interconnected Model of Teacher Professional Growth, proposed by Clarke and Hollingsworth in 2002, and on analysis instruments resulting from the literature review, adapted to the defined focuses, the analysis of the selected CoP addressed: a) the external domain and the domain of curriculum development (CD) practices, i.e. its dynamics of interaction over two school years; b) the domain of consequences for teaching practices, regarding the science teaching strategies developed; c) evidence of their innovative character; and d) the CD principles operationalized through the curricular module designed, implemented, evaluated and disseminated by the CoP members. The results indicate that a) members' participation varied over the interaction period and that its dynamics fit an adaptation of the stages of CoP development proposed by Wenger and colleagues in 2002, with two action research cycles; b) the CoP consistently developed diversified teaching strategies, little explored by teachers and coherent with several recommendations in the literature; c) the teaching practices are innovative, of the "challenging" type, and included the involvement of teachers who taught at the schools of the CoP member teachers; and d) the CoP operationalized several CD principles recommended in the literature, namely flexibility and differentiation. The empirical results also made it possible to validate the dimensions of Clarke and Hollingsworth's model and to adapt it to the specificity of the case analysed. In view of the above, and while acknowledging the limitations of the study, namely those relating to the methodological options taken, it was possible to infer that the work carried out within this online CoP of teachers and researchers contributed to the innovation and improvement of SE teaching practices. The study also yields analysis instruments considered relevant, since they may be used in future research and may guide science teachers who wish to align their teaching practices with recommendations from SE research.
Finally, recommendations are presented regarding the involvement of teachers and researchers in online CoPs within SE, as well as regarding possibilities for future research, namely the validation of the instruments and recommendations presented in broader and more transversal contexts.
Abstract:
Wireless communication technologies have become widely adopted, appearing in heterogeneous applications ranging from tracking victims, responders and equipment in disaster scenarios to machine health monitoring in networked manufacturing systems. Very often, applications demand a strictly bounded timing response which, in distributed systems, is generally highly dependent on the performance of the underlying communication technology. These systems are said to have real-time timeliness requirements, since data communication must be conducted within predefined temporal bounds whose violation may compromise the correct behavior of the system and cause economic losses or endanger human lives. The potential adoption of wireless technologies for an increasingly broad range of application scenarios has made the operational requirements more complex and heterogeneous than they used to be for wired technologies. On par with this trend, there is an increasing demand for cost-effective distributed systems with improved deployment, maintenance and adaptation features. These systems tend to require operational flexibility, which can only be ensured if the underlying communication technology provides both time-triggered and event-triggered data transmission services while supporting on-line, on-the-fly parameter modification. Generally, wireless-enabled applications have deployment requirements that can only be addressed through the use of batteries and/or energy-harvesting mechanisms for power supply. These applications usually have stringent autonomy requirements and demand a small form factor, which hinders the use of large batteries. As the communication support may represent a significant part of the energy requirements of a station, the use of power-hungry technologies is not adequate. Hence, in such applications, low-range technologies have been widely adopted. In fact, although low-range technologies provide smaller data rates, they spend just a fraction of the energy of their higher-power counterparts. The timeliness requirements of data communications can, in general, be met by ensuring the availability of the medium for any station initiating a transmission. In controlled (closed) environments this can be guaranteed, as there is strict regulation of which stations are installed in the area and for which purpose. Nevertheless, in open environments this is hard to control, because no a priori knowledge is available of which stations and technologies may contend for the medium at any given instant. Hence, supporting wireless real-time communications in unmanaged scenarios is a highly challenging task. Wireless low-power technologies have been the focus of a large research effort, for example in the Wireless Sensor Network domain. Although bringing extended autonomy to battery-powered stations, such technologies are known to be negatively influenced by similar technologies contending for the medium and, especially, by technologies using higher-power transmissions over the same frequency bands. A frequency band that is becoming increasingly crowded with competing technologies is the 2.4 GHz Industrial, Scientific and Medical band, encompassing, for example, Bluetooth and ZigBee, two low-power communication standards which are the basis of several real-time protocols.
Although these technologies employ mechanisms to improve their coexistence, they are still vulnerable to transmissions from uncoordinated stations with similar technologies or to higher-power technologies such as Wi-Fi, which hinders the support of wireless dependable real-time communications in open environments. The Wireless Flexible Time-Triggered Protocol (WFTT) is a master/multi-slave protocol that builds on the flexibility and timeliness provided by the FTT paradigm and on the deterministic medium capture and maintenance provided by the bandjacking technique. This dissertation presents the WFTT protocol and argues that it supports wireless real-time communication services with high dependability requirements in open environments where multiple contention-based technologies may dispute the medium access. Besides, it claims that it is feasible to provide wireless communications that are both flexible and timely in open environments. The WFTT protocol was inspired by the FTT paradigm, from which higher-layer services such as admission control have been ported. After realizing that bandjacking was an effective technique to ensure medium access and maintenance in open environments crowded with contention-based communication technologies, it was recognized that the mechanism could be used to devise a wireless medium access protocol that could bring the features offered by the FTT paradigm to the wireless domain. The performance of the WFTT protocol is reported in this dissertation with a description of the implemented devices, the test-bed and a discussion of the obtained results.
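For readers unfamiliar with the FTT paradigm that WFTT builds on, the Python sketch below illustrates, under simplifying assumptions, the master-side logic of an elementary cycle: a trigger message announces which synchronous streams may transmit in the current cycle. The stream table, cycle length and message format are illustrative only and do not reflect the actual WFTT implementation.

```python
# Toy master-side loop of an FTT-style protocol: each elementary cycle (EC) starts
# with a trigger message listing the synchronous streams polled in that cycle.
import time

ELEMENTARY_CYCLE = 0.010      # 10 ms elementary cycle (illustrative)

# Synchronous requirements table: stream id -> period expressed in elementary cycles.
SRT = {1: 1, 2: 2, 3: 5}

def build_trigger_message(cycle: int) -> list[int]:
    """Streams whose period divides the current cycle number are polled in this EC."""
    return [sid for sid, period in SRT.items() if cycle % period == 0]

def master_loop(num_cycles: int) -> None:
    for cycle in range(num_cycles):
        start = time.monotonic()
        tm = build_trigger_message(cycle)
        print(f"EC {cycle}: trigger message polls streams {tm}")
        # ...synchronous window: polled slaves transmit; asynchronous window follows...
        time.sleep(max(0.0, ELEMENTARY_CYCLE - (time.monotonic() - start)))

master_loop(5)
```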
Abstract:
The development of computed tomography systems with energy-resolving detectors is a current challenge in medical physics and biomedical engineering. A computed tomography system of this kind provides complementary information relative to conventional systems, which can aid medical diagnosis and is therefore of great interest in medicine. The work described in this thesis concerns the development of a computed tomography system using micropattern gaseous detectors, which allow recording, simultaneously, the interaction position and the energy of each photon that interacts with the detector. This kind of detector has further advantages in terms of cost and operating characteristics when compared with solid-state detectors. Tomographic acquisitions were performed using a MicroHole & Strip Plate based detector, which allowed reconstructing cross-sectional images using energy windows, applying the energy weighting technique and performing multi-slice and three-dimensional reconstructions. By applying the energy weighting technique, the contrast-to-noise ratio was improved by 31% compared with the corresponding image obtained with current medical systems. A computed tomography prototype with the flexibility to change the detector was developed, making it possible to use different detectors based on Thick-COBRA. Several images acquired with these detectors are presented and demonstrate their applicability to X-ray imaging. When operating in Ne/CH4, the detector allowed a charge gain of 8 × 10^4, an energy resolution of 20% (full width at half maximum at 8 keV), a count rate of 1 × 10^6 Hz/mm2, very stable operation (gain fluctuations below 5%) and a spatial resolution of 1.2 mm for a photon energy of 3.6 keV. Operating the detector in pure Kr allowed increasing the detection efficiency and achieving a charge gain of 2 × 10^4, an energy resolution of 32% (full width at half maximum at 22 keV), a count rate of 1 × 10^5 Hz/mm2, very stable operation and a spatial resolution of 500 µm. The software already existing in the group was improved, and tools to correct geometric misalignments of the system were also developed. The reconstructions obtained after geometrical correction are free of the artefacts caused by those misalignments.
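As a rough illustration of the energy weighting technique mentioned above, the Python sketch below forms a projection value by weighting each energy bin by approximately E^-3 instead of counting all detected photons equally; the bin energies, counts and weighting exponent are illustrative assumptions, not the thesis's exact implementation.

```python
# Toy comparison of photon-counting vs. energy-weighted projection values for one
# detector pixel. Low-energy photons (higher contrast) get larger weights (~E^-3).
import numpy as np

energies_keV = np.array([10, 15, 20, 25, 30], dtype=float)   # energy-bin centres (synthetic)
counts = np.array([120, 340, 510, 280, 90], dtype=float)     # photons per bin (synthetic)

# Conventional photon-counting projection value: plain sum of counts.
counting_value = counts.sum()

# Energy-weighted projection value: weights proportional to E^-3, normalised.
weights = energies_keV ** -3
weights /= weights.sum()
weighted_value = np.sum(weights * counts)

print(f"counting: {counting_value:.1f}, energy-weighted: {weighted_value:.3f}")
```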
Abstract:
The main motivation for the work presented here began with previously conducted experiments with a programming concept then named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that it would minimize the full range of software and hardware needed to build a final, fully functional system. Initially, this work makes a comprehensive survey of the state of the art in the specific area of software and corresponding hardware of automotive tools and automotive ECUs. Problems arising from such software are identified, and it becomes clear that practically all of them stem directly or indirectly from the fact that we continue to make extensive use of extremely long and complex "tool chains". Similarly, in the hardware, it is argued that the problems stem from the extreme complexity and inter-dependency inside processor architectures. The conclusions are presented through an extensive list of "pitfalls", which are thoroughly enumerated, identified and characterized. Solutions to the various current issues are also proposed, along with ways to implement them. All of this final work is part of a proof-of-concept system called "ECU2010". The central element of this system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, with arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system in one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, enabling high flexibility and scalability while using exactly the same technology for the ECU, data logger and peripherals alike. Current systems follow a mostly evolutionary path, only allowing online calibration of parameters, but never the online alteration of their own automotive functionality algorithms. By contrast, the system developed and described in this thesis had the advantage of following a "clean-slate" approach, whereby everything could be rethought globally. In the end, out of all the system characteristics, "LIVE-Prototyping" is the most relevant feature, allowing the adjustment of automotive algorithms (e.g. injection, ignition, lambda control) 100% online, keeping the engine constantly running, without ever having to stop or reboot to make such changes. This consequently eliminates the "turnaround delay" typically present in current automotive systems, thereby enhancing their efficiency and handling.
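Purely as a hypothetical illustration of the "Macro" idea, the Python sketch below models such blocks as composable dataflow nodes that are evaluated on demand; the class name, block set and the toy injection-time example are assumptions made for this sketch and do not represent the actual ECU2010 design.

```python
# Toy dataflow model of "Macro"-style blocks: each block wraps one operation and a
# list of inputs (other blocks or constants) and is evaluated on demand.
from typing import Callable

class Macro:
    """A named operation with fixed inputs, evaluated recursively."""
    def __init__(self, name: str, fn: Callable[..., float], *inputs: "Macro | float"):
        self.name, self.fn, self.inputs = name, fn, inputs

    def evaluate(self) -> float:
        args = [i.evaluate() if isinstance(i, Macro) else i for i in self.inputs]
        return self.fn(*args)

# rpm and load sources feeding a toy stand-in for an injection-time calculation.
rpm = Macro("rpm", lambda: 3000.0)
load = Macro("load", lambda: 0.42)
inj_time = Macro("inj_time", lambda r, l: 0.001 * r * l, rpm, load)
print(inj_time.evaluate())
```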
Abstract:
Nowadays, communication environments are already characterized by a myriad of competing and complementary technologies that aim to provide a ubiquitous connectivity service. Next Generation Networks need to hide this heterogeneity by providing a new abstraction level, while simultaneously being aware of the underlying technologies in order to deliver richer service experiences to the end-user. Moreover, the increasing interest in group-based multimedia services, together with their ever-growing resource demands and network dynamics, has been boosting research towards more scalable and flexible network control approaches. The work developed in this Thesis enables such an abstraction and exploits the prevailing heterogeneity in favor of context-aware network management and adaptation. In this scope, we introduce a novel hierarchical control framework with self-management capabilities that enables the concept of Abstract Multiparty Trees (AMTs) to ease the control of multiparty content distribution throughout heterogeneous networks. A thorough evaluation of the proposed multiparty transport control framework was performed in the scope of this Thesis, assessing its benefits in terms of network selection, delivery tree reconfiguration and resource savings. Moreover, we developed an analytical study to highlight the scalability of the AMT concept as well as its flexibility in large-scale networks and group sizes. To prove the feasibility and easy deployment of the proposed control framework, we implemented a proof-of-concept demonstrator that comprises the main control procedures conceptually introduced. Its outcomes highlight the good performance of the multiparty content distribution tree control, including its local and global reconfiguration. In order to endow the AMT concept with the ability to guarantee the best service experience for the end-user, we integrated two additional QoE enhancement approaches into the control framework. The first employs the concept of Network Coding to improve the robustness of the multiparty content delivery, aiming at mitigating the impact of possible packet losses on the end-user's service perception. The second approach relies on a machine learning scheme to autonomously determine at each node the expected QoE towards a certain destination. This knowledge is then used by different QoE-aware network management schemes that, jointly, maximize the overall users' QoE. The performance and scalability of the control procedures developed, aided by the context- and QoE-aware mechanisms, show the advantages of the AMT concept and the proposed hierarchical control strategy for multiparty content distribution with enhanced service experience. Moreover, we also prove the feasibility of the solution in a practical environment and provide future research directions that benefit the evolved control framework and make it commercially feasible.
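As a toy illustration of how network coding can mask packet losses in such a multiparty delivery tree, the Python sketch below sends an XOR combination of two packets as redundancy so that a receiver missing one of them can rebuild it; the packet contents and the simple XOR code are illustrative assumptions, not the coding scheme actually evaluated in the Thesis.

```python
# Toy XOR network coding: besides p1 and p2, the source sends p1 XOR p2; losing any
# single packet of the three still lets the receiver recover both originals.
def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"multiparty"
p2 = b"delivery!!"              # same length as p1 for this toy example
coded = xor_packets(p1, p2)     # redundant coded packet sent alongside p1 and p2

# Suppose p2 is lost in transit; it can be rebuilt from p1 and the coded packet.
recovered_p2 = xor_packets(p1, coded)
assert recovered_p2 == p2
print(recovered_p2)
```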
Abstract:
Master's dissertation, Water Resources, 2007, Faculdade de Engenharia de Recursos Naturais, Universidade do Algarve
Abstract:
The high degree of fluidity of the traditional quatrain, which shares in the mutability and flexibility characteristic of all orally transmitted literature, makes it a complex, shifting space that resists rigid or definitive taxonomic frameworks. Starting from this premise, this article seeks to identify the main lines of instability of this poetic form and its main processes of construction, on the structural and formal levels. We sought to show that the quatrain contains a force of conflict arising from a dialectic of opening and closure, related to impulses both of condensation and fixation and of intensification or derivation of meanings.
Abstract:
Doctoral thesis, Electronics and Computer Engineering, Faculdade de Ciências e Tecnologia, Univ. do Algarve, 2003
Abstract:
Doctoral thesis, Economics (Information Economics), Faculdade de Economia, Univ. do Algarve, 2004