30 results for "emisión radiofónica"


Relevance:

10.00%

Publisher:

Abstract:

Death is perhaps the great taboo of contemporary Western society, a phenomenon that clearly resonates in the way survivors experience grief over the loss of a significant figure. In a society that keeps death at arm's length, the emotions arising from grief are concealed and repressed, with serious risks for the mental health of the bereaved. This conspiracy of silence also flows into educational contexts, where the values of youth, well-being, pleasure and happiness leave almost no room for suffering and death. This study therefore aims to contribute to understanding the effects of the grieving process on adolescent students, particularly on their school performance, and, at the same time, to analyse the kind of support the educational community provides to these students, focusing on the role of teachers in general and of class directors (diretores de turma) in particular. It also aims to put forward intervention strategies, to be implemented in schools, that foster an education for life, even in circumstances of death, and for the management of grief, promoting an inclusive pedagogical way of coping. Since grief is a process that affects the individual in every dimension that defines them, an educational paradigm grounded in complexity was assumed in this study as the only one capable of restoring the importance of a balanced management of affect in the teaching-learning process. In light of this paradigm, which assumes a principle of totality, that is, which promotes the development of the human being in their multidimensionality, we also assume that the supreme and ultimate mission of education is the construction of the ethical subject. And it is within the framework of an integral, humanist and ethical education, of responsibility for the Other, that the figure of the caring teacher takes shape: someone attentive to the emotional needs of their students.
Methodologically, this investigation followed a predominantly qualitative, interpretative and complexity-oriented approach, and the study was organised in three phases, with data collection running from September 2009 to September 2012: (i) an exploratory study, addressed to the Directors of the 61 secondary schools of the Porto District and based on an adapted questionnaire, aimed at establishing the importance and relevance of the main study and at gathering indicators to guide it; (ii) a collective case study involving three adolescent girls grieving the loss of their fathers. In the first part, to characterise the context (the school attended by the three students), documentary analysis was carried out, specifically of the Educational Project, together with interview surveys of several agents of the educational community: the Director, the Psychologist, the Coordinator of the Class Directors of secondary education and six Class Directors. In the second part, a holistic, in-depth approach was taken to the complexity inherent in each case, seeking to give "voice" to the unique way each student experienced, made sense of and faced her grief, again through interview surveys. To cross-check the perspectives of several informants and identify transversal, complementary or alternative meanings, the students' guardians (Encarregadas de Educação) and Class Directors were also interviewed; (iii) characterisation of teachers' conceptions of the topic under analysis and validation of intervention proposals to be implemented in schools, mobilising dimensions that emerged from Phases I and II of the investigation as well as from the theoretical framework underpinning the study. For data collection, a questionnaire was built and administered to teachers of the 2nd and 3rd cycles of basic education and of secondary education in the school cluster where the Phase II case study took place.
Taking a global reading of the results, and corroborating what is widely argued in the specialist literature, the study demonstrates the negative impact that grief has on school performance and reveals that the affective climate of the school environment is not conducive to the emotional expression of bereaved students, with an attitude of avoidance prevailing among classmates and teachers. This suggests that new paths need to be opened, on the one hand at the level of teacher training and, on the other, through the implementation in schools of a pedagogical intervention that educates for life without neglecting significant affective loss and the experiences of grief it entails. The study also reinforces the heightened responsibility of Class Directors in building a supportive ethos around grieving students, giving them an important role in articulating the various agents of the educational community. In short: beyond the need to rethink the profile of teacher competences, bringing it into line with the paradigm of the welcoming school, the study also highlights the urgency of validating in practice intervention proposals that are articulated, consistent and, above all, ethical.

Relevance:

10.00%

Publisher:

Abstract:

The first century, which blossomed into a Golden Age, would not end under the sign of the good Fortune inaugurated by the first Princeps. The age of Augustus would come to an end! Literature could not escape the fatum of an entire Empire and, after 69, together with the Magna Vrbs, awaited a time that would finally be capable of renewal. For the eighties of the first century, the Flavians and their achievements promised a new Aurea Aetas... Yet recovering the past proved impossible: then, as never before, the wealthy coveted the purple and the populace clamoured for panem et circenses. And the definitive change of the times found some of its greatest proofs in artistic production: patronage had condemned authors to abandonment! Gone were the circles of Maecenas, supporting Horaces and Virgils who could devote themselves exclusively to their art... Marcus Valerius Martialis was not only an author whose existence would suffer from the constraints this era imposed on poets, but also the one who would make his work the most faithful mirror of his time. Indeed, were it not for his work, we could not fully understand how a writer managed to survive those times and bring his work to light, and to a very special light indeed: Hic est quem legis ille, quem requiris, / toto notus in orbe Martialis (1.1.1-2)! To sing the new Empire and its daily life, where grandeur and baseness lived side by side, nothing could serve better than a rough auena, playful and biting... The epigram, not the epic, was the new voice of Rome! And Martial, raising his auena, applied all his mastery to celebrating his Rome and his fellow Roman citizens: hominem pagina nostra sapit (10.4.10). Have we lost a talented epic poet who devoted himself and his art to a minor genre, or have we gained a peerless singer who lived in perfect harmony with his time? Attaining the immortality once reserved for epic poets, Martial achieved his goal: si [...] / [...] fas est cineri me superesse meo (7.44.7-8).
And yet Martial's singular feat was to live up to his own words (angusta cantare licet uidearis auena, / dum tua multorum uincat auena tubas, 8.3.21-22), writing, in the form of epigrams, the first and perhaps the only epic of everyday life!

Relevance:

10.00%

Publisher:

Abstract:

Service integration from the perspective of citizens and businesses, and the need to guarantee certain properties of Public Administration such as versatility and competitiveness, place constraints on the design of service integration architectures. To integrate services in a way that accommodates the changing nature of Public Administration, workflows must be created dynamically. However, the dynamic creation of workflows raises security concerns, namely regarding the privacy of the results produced during a workflow's execution and the enforcement of policies controlling participation in the workflow by its various executors. In this work we present a set of principles and rules (an architecture) that enables the creation and execution of dynamic workflows, resolving the above issues through a security model. The architecture uses service composition to build complex services that may entail a dynamic workflow. It also adopts a paradigm of standard message exchange among the service providers involved in a dynamic workflow, and the proposed security model is closely tied to the set of messages defined in the architecture. In the course of this work, several service integration architectures and/or platforms were identified and analysed. The aim of this analysis was to identify the architectures that support the creation of dynamic workflows and, among these, those that employ mechanisms for result privacy and for controlling the participation of the executors of those workflows. The integration architecture we present is versatile and scalable, allows concurrent service provision among service providers, and supports the creation of dynamic workflows.
The architecture lets the entities executing the workflow decide on their own participation, on the participation of third parties (to whom they delegate services), and on whom they deliver results to. Participants are accredited by certification entities recognised by the other participants, and the credentials issued by these certification entities are the starting point for applying security policies within the architecture. To validate the proposed architecture, several use cases were identified that illustrate the need to build dynamic workflows to deliver complex services (not provided in full by a single entity). These use cases were implemented in a prototype of the architecture developed for this purpose. This experimentation showed that the architecture is suitable for delivering such services using dynamic workflows and that, during their execution, executors have adequate security mechanisms to control their own participation, the participation of third parties, and the privacy of the results produced.
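The credential-based participation control described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the thesis's API: the `Credential` type, the `can_participate` helper and the CA names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    holder: str      # service provider presenting the credential
    issued_by: str   # certification entity that issued it

def can_participate(cred: Credential, trusted_cas: set) -> bool:
    """A provider may join (or be delegated a step of) a dynamic workflow
    only if its credential comes from a certification entity recognised
    by the other participants."""
    return cred.issued_by in trusted_cas

trusted = {"CA-Gov", "CA-Health"}
print(can_participate(Credential("TaxService", "CA-Gov"), trusted))      # accepted
print(can_participate(Credential("UnknownSvc", "CA-Foreign"), trusted))  # rejected
```

The same check would run at each delegation point, which is how each executor retains control over third-party participation.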

Relevance:

10.00%

Publisher:

Abstract:

This work investigates the impact of chronic illness on children's psychological adjustment, taking into account different types of illness, their characteristics and parents' perceptions of them. It also examines how parents and health professionals perceive the importance attributed to play in a hospital context. The sample comprises 176 children, aged 3 to 10, distributed across four groups: children with asthma, children with cancer, children with uro-nephrological conditions, and children without illness. Data were collected in the outpatient waiting rooms of the Paediatrics department of Hospital Infante D. Pedro and of Medical Oncology at Hospital Pediátrico de Coimbra. The study used both quantitative and qualitative methods; the instruments were the Play Observation Scale (POS), selected items from the Revised Illness Perception Questionnaire (IPQ-R), the Strengths and Difficulties Questionnaire (SDQ), and semi-structured interviews. Psychological adjustment was assessed through questionnaires administered to parents but also through direct observation of the child's play, thus addressing one of the main gaps in this area: reliance on a single source of information and form of assessment. The analysis of the results showed that there is no linear relationship between children's psychological adjustment and the presence of a chronic illness, and that assessing a child's adjustment through direct observation of play does not always coincide with the parents' view of that adjustment. Both parents and health professionals also recognise numerous advantages in the use of play with chronically ill children.

Relevance:

10.00%

Publisher:

Abstract:

Quantum communications apply the fundamental laws of quantum physics to encode, transmit, store and process information. The most important and successful application is quantum key distribution (QKD). QKD systems rely on technologies capable of processing single photons. In this thesis we analyse the generation, transmission and detection of single and entangled photons in optical fibres. A single-photon source is proposed based on the classical four-wave mixing (FWM) process in optical fibres in a low-power regime. We implemented this source in the laboratory and developed a theoretical model capable of correctly describing the single-photon generation process. The model accounts for the role of fibre nonlinearities and for polarization effects in photon generation through FWM. We analyse the statistics of the photon source based on the classical FWM process in optical fibres and derive a theoretical model capable of describing them. We show that the statistics of the source evolve from thermal, in a low optical power regime, to Poissonian in a regime of moderate optical powers. We validate the theoretical model experimentally, using avalanche photodetectors, the maximum-likelihood estimation method and the expectation-maximization algorithm. We study the spontaneous FWM process as a conditional single-photon source, analysing its statistics in terms of the conditional second-order coherence function, taking into account Raman scattering in photon-pair generation and loss during photon propagation in a standard optical fibre. We identify appropriate regimes in which the source is nearly ideal. Photon-pair sources implemented in optical fibres provide a practical solution to the coupling problem that arises when photon pairs are generated outside the fibre.
We explore photon-pair generation through spontaneous FWM inside waveguides with third-order electric susceptibility. We describe photon-pair generation in media with a high absorption coefficient and identify optimal regimes for the coincidence-to-accidental counts ratio (CAR) and for the Clauser, Horne, Shimony and Holt (CHSH) inequality, in which the trade-off between waveguide loss and nonlinearity maximizes these parameters.
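The thermal-to-Poissonian transition mentioned above can be made concrete with the textbook second-order coherence at zero delay, g2(0) = <n(n-1)> / <n>^2: thermal (Bose-Einstein) light gives g2(0) = 2, Poissonian light gives g2(0) = 1. This is a standard-quantum-optics check, not the thesis's own model:

```python
import math

def g2_zero(pmf, nmax=100):
    """g2(0) = <n(n-1)> / <n>^2 for a photon-number distribution pmf(n)."""
    mean = sum(n * pmf(n) for n in range(nmax))
    fact2 = sum(n * (n - 1) * pmf(n) for n in range(nmax))
    return fact2 / mean ** 2

mu = 0.5  # mean photon number per pulse

def thermal(n):
    # Bose-Einstein statistics (low optical power regime)
    return mu ** n / (1 + mu) ** (n + 1)

def poisson(n):
    # Poissonian statistics (moderate optical power regime)
    return math.exp(-mu) * mu ** n / math.factorial(n)

print(round(g2_zero(thermal), 3))  # -> 2.0 (bunched light)
print(round(g2_zero(poisson), 3))  # -> 1.0 (coherent-like light)
```

Measuring where g2(0) sits between these two limits is one way the evolution of the source statistics with pump power can be tracked.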

Relevance:

10.00%

Publisher:

Abstract:

In modern society, new devices, applications and technologies with sophisticated capabilities are converging on the same network infrastructure. Users are also increasingly demanding in their personal preferences and expectations, desiring Internet connectivity anytime and everywhere. These aspects have triggered many research efforts, since the current Internet is reaching a breaking point in trying to provide enough flexibility for users and profits for operators while dealing with the complex requirements raised by this recent evolution. Fully aligned with future Internet research, many solutions have been proposed to enhance current Internet-based architectures and protocols so that they become context-aware, that is, dynamically adapted to changes in the information characterizing any network entity. In this sense, this Thesis proposes a new architecture that makes it possible to create several networks with different characteristics, according to their context, on top of a single Wireless Mesh Network (WMN) whose infrastructure and protocols are highly flexible and self-adaptable. More specifically, this Thesis models the context of users, which can span their security, cost and mobility preferences, their devices' capabilities, and their services' quality requirements, in order to turn a WMN into a set of logical networks. Each logical network is configured to meet a set of user context needs (for instance, support for high mobility and low security). To implement this user-centric architecture, the Thesis uses network virtualization, which has often been advocated as a means to deploy independent network architectures and services towards the future Internet while allowing dynamic resource management. In this way, network virtualization enables a flexible and programmable configuration of a WMN so that it can be shared by multiple logical networks (or virtual networks, VNs).
Moreover, the high level of isolation introduced by network virtualization can be used to differentiate the protocols and mechanisms of each context-aware VN. This architecture raises several challenges in controlling and managing the VNs on demand, in response to user and WMN dynamics. In this context, we target mechanisms to: (i) discover and select the VN to assign to a user; (ii) create, adapt and remove VN topologies and routes. We also explore how the rate of variation of the user context requirements can be taken into account to improve the performance and reduce the complexity of VN control and management. Finally, given the scalability limitations of centralized control solutions, we propose a mechanism to distribute the control functionalities across the architectural entities, which can cooperate to control and manage the VNs in a distributed way.
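The VN discovery-and-selection step (i) can be pictured as matching a user-context profile against the properties each VN was configured with. This is only an illustrative sketch: the dictionary keys, VN names and `select_vn` helper are assumptions, not the Thesis's mechanism.

```python
# Each VN advertises the context profile it was configured to serve.
vns = [
    {"name": "VN-A", "mobility": "high", "security": "low"},
    {"name": "VN-B", "mobility": "low",  "security": "high"},
]

def select_vn(user_ctx, vns):
    """Return the name of the first VN matching every requirement in the
    user context, or None (in which case the architecture would create
    or adapt a VN on demand)."""
    for vn in vns:
        if all(vn.get(key) == value for key, value in user_ctx.items()):
            return vn["name"]
    return None

print(select_vn({"mobility": "high", "security": "low"}, vns))   # VN-A
print(select_vn({"mobility": "high", "security": "high"}, vns))  # None
```

A real selection mechanism would also weigh partial matches and resource availability; the exact-match rule here only conveys the idea of turning user context into a VN assignment.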

Relevance:

10.00%

Publisher:

Abstract:

The performance of real-time networks is under continuous improvement as a result of several trends in the digital world. However, these trends not only bring improvements but also exacerbate a series of non-ideal aspects of real-time networks, such as communication latency, latency jitter and packet drop rate. This Thesis focuses on the communication errors that appear in such real-time networks from the point of view of automatic control. Specifically, it investigates the effects of packet drops in automatic control over fieldbuses, as well as architectures and optimal techniques for their compensation. Firstly, a new approach is proposed to address the problems that arise from such packet drops. This novel approach is based on the simultaneous transmission of several values in a single message. Such messages can travel from sensor to controller, in which case they comprise several past sensor readings, or from controller to actuator, in which case they comprise estimates of several future control values. A series of tests reveals the advantages of this approach. The approach is then extended to accommodate the techniques of contemporary optimal control. Unlike the first approach, which deliberately withholds certain messages in order to make more efficient use of network resources, in the second case the techniques are used to reduce the effects of packet losses. After these two approaches based on data aggregation, optimal control in packet-dropping fieldbuses is also studied, using generalized actuator output functions. This study ends with the development of a new optimal controller, as well as the identification of the function, among the generalized functions that dictate the actuator's behaviour in the absence of a new control message, that leads to optimal performance.
The Thesis also presents a different line of research, related to the output oscillations that take place as a consequence of using classic co-design techniques for networked control. The proposed algorithm aims to allow the execution of such classical co-design algorithms without causing an output oscillation that increases the value of the cost function; such increases may, under certain circumstances, negate the advantages of applying the classical co-design techniques. Yet another line of research investigated algorithms, more efficient than contemporary ones, for generating task execution sequences that guarantee that at least a given number of activated jobs will be executed out of every set composed of a predetermined number of contiguous activations. This algorithm may, in the future, be applied to generating message transmission patterns in the above-mentioned techniques for the efficient use of network resources. The proposed task generation algorithm improves on its predecessors in that it can schedule systems that its predecessor algorithms cannot. The Thesis also presents a mechanism for multi-path routing in wireless sensor networks that ensures no value is counted in duplicate, thereby improving the performance of wireless sensor networks and rendering them more suitable for control applications. As mentioned before, this Thesis centres on techniques for improving the performance of distributed control systems in which several elements are connected through a fieldbus that may be subject to packet drops. The first three approaches are directly related to this topic, with the first two approaching the problem from an architectural standpoint, whereas the third does so on more theoretical grounds.
The fourth approach ensures that the approaches found in the literature pursuing goals similar to those of this Thesis can do so without causing other problems that might invalidate the solutions in question. Finally, the Thesis presents an approach centred on the efficient generation of the transmission patterns used in the aforementioned techniques.
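The controller-to-actuator aggregation idea can be sketched as follows; the class and method names are illustrative, not taken from the Thesis. Each message carries estimates of several future control values, and the actuator consumes one per control period, holding the last value (one simple generalized output function) when drops exhaust the queue:

```python
class Actuator:
    """Toy actuator that buffers future control estimates from each message."""

    def __init__(self):
        self.future = []   # queued estimates of future control values
        self.last = 0.0    # value held when no estimate is available

    def receive(self, values):
        """A controller message arrived: replace the queue with fresh estimates."""
        self.future = list(values)

    def step(self):
        """One control period: consume the next estimate, or hold the last
        applied value if packet drops have exhausted the queue."""
        if self.future:
            self.last = self.future.pop(0)
        return self.last

act = Actuator()
act.receive([1.0, 0.8, 0.6])  # one message with three future control values
print(act.step())  # 1.0
# the next two controller messages are dropped: buffered estimates cover them
print(act.step())  # 0.8
print(act.step())  # 0.6
print(act.step())  # 0.6 (queue empty: hold-last-value behaviour)
```

The Thesis's optimal variants would replace the naive hold-last rule with the generalized output function shown to minimize the cost; this sketch only shows the architectural idea of surviving drops via aggregated messages.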

Relevance:

10.00%

Publisher:

Abstract:

The present work reports studies on new compounds obtained by combining polyoxoanions derived from the Keggin and Lindqvist structures with several cations. The studies focused first on the monolacunary Keggin polyoxoanions [PW11O39M(H2O)]n- (M = FeIII, MnIII and n = 4; M = CoII and n = 5) and their combination with the organic cation 1-butyl-3-methylimidazolium (Bmim+). The association of the Bmim+ cation with the polyoxoanion [PW11O39Fe(H2O)]4- made it possible to isolate, for the first time, both the monomeric anion and the dimeric [(PW11O39Fe)2O]10- anion with the same cation, using simple bench techniques and pH manipulation. Studies of the stability of these inorganic species in solution indicated that both are present in solution in equilibrium. However, the fact that the dimeric unit had not previously been isolated through simple bench methods led to the hypothesis that the cation plays a role in the selective precipitation of either the monomer or the dimer. Repetition of the same procedures with the polyoxoanions [SiW11O39Fe(H2O)]5- and [PW11O39M(H2O)]n- (M = FeIII, MnIII and n = 4; M = Co and n = 5) afforded only the corresponding monomeric compounds, (Bmim)5[SiW11O39FeIII(H2O)]·4H2O (3), (Bmim)5[PW11O39CoII(H2O)]·0.5H2O (4) and (Bmim)5[PW11O39MnIII(H2O)]·0.5H2O (5). Moreover, the combination of Bmim+ and the polyoxotungstate [PW11O39Co(H2O)]5- afforded two different crystal structures depending on the synthetic conditions: a Bmim+:POM ratio of 5:1 and the presence of K+ cations (due to the addition of KOH) led to the formula Na2K(Bmim)2[PW11.2O39Co0.8(H2O)]·7H2O (4a), whilst a Bmim+:POM ratio of 7:1 led to the formation of a crystal with the chemical formula Na2(Bmim)8[PW11O39Co(H2O)]2·3H2O (4b). Electrochemical studies were performed with carbon paste electrodes modified with BmimCl to investigate the influence of the Bmim+ cation on the performance of the electrodes.
Voltammetric measurements obtained from solutions containing the anions [PW11O39]7- and [SiW11O39]8- are presented. The results pointed to an improvement of the acquired voltammetric signal upon a slight addition of BmimCl (up to 2.5% w/w), especially in the studies concerning pH variation. Additional syntheses were carried out with the cations Omim+ and THTP+.

Relevance:

10.00%

Publisher:

Abstract:

Home automation (domotics) is an area of great interest with ample room for exploration, aiming at the automatic, autonomous management of household resources while providing greater comfort to users. Increasingly, economic and environmental benefits are also being brought into this concept, to help secure a sustainable future. Water heating (by electric means) is one of the largest contributors to a household's total energy consumption. This is the setting for the topic "low-complexity intelligent algorithms", born of a partnership between the Department of Electronics, Telecommunications and Informatics (DETI) of the University of Aveiro and Bosch Termotecnologia SA, which aims to develop algorithms that are "intelligent" in the sense of having some capacity for learning and autonomous operation. The algorithms must be adapted to 8-bit processing units so as to equip small domestic appliances, specifically electric water-heating tanks; part of the challenge therefore lies in the computational constraints of 8-bit microcontrollers. In the specific case of this work, the water temperature sensors in the tank were established as the only source of information external to the algorithms, together with user-defined parameters setting the maximum and minimum water temperature thresholds. On this basis, the algorithms developed rely on the hot-water consumption profile observed over each week to try to predict future water draws and act accordingly, bringing forward or postponing the heating of the tank's water. The goal is an advantageous balance between energy savings and user comfort (hot water), without any need for direct intervention by the end user.
The planned solution also includes the development of a simulator for observing, evaluating and comparing the performance of the developed algorithms.
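One low-complexity shape such a weekly-profile predictor could take is sketched below in Python for readability; the thesis targets 8-bit microcontrollers, and every name and threshold here is an assumption for illustration, not the algorithm actually developed. A saturating per-weekday, per-hour table of observed draws drives a simple pre-heat decision:

```python
HOURS, DAYS = 24, 7
# Draw counts per (weekday, hour); 8-bit-friendly saturating counters.
profile = [[0] * HOURS for _ in range(DAYS)]

def record_draw(day, hour):
    """Called whenever a hot-water draw is detected by the tank sensors."""
    if profile[day][hour] < 255:   # saturate at the 8-bit maximum
        profile[day][hour] += 1

def should_preheat(day, hour, threshold=3):
    """Bring heating forward if the next hour was historically busy;
    otherwise let the tank cool within the user-set temperature limits."""
    return profile[day][(hour + 1) % HOURS] >= threshold

for _ in range(4):          # e.g. four Mondays with a 7 a.m. shower observed
    record_draw(0, 7)
print(should_preheat(0, 6))   # pre-heat at 6 a.m. for the expected 7 a.m. draw
print(should_preheat(0, 12))  # no history at 1 p.m.: postpone heating
```

The 7x24 byte table (168 bytes) illustrates why this family of algorithms fits the memory budget of a small 8-bit device.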

Relevance:

10.00%

Publisher:

Abstract:

The main motivation for the work presented here began with earlier experiments with a programming concept then named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that it would minimize the full range of software and hardware needed to produce a final, fully functional system. Initially, this work makes a comprehensive survey of the state of the art in the specific area of software, and corresponding hardware, of automotive tools and automotive ECUs. Problems arising from such software are identified, and it becomes clear that practically all of them stem, directly or indirectly, from the continued comprehensive use of extremely long and complex "tool chains". Similarly, on the hardware side, it is argued that the problems stem from the extreme complexity and interdependency inside processor architectures. The conclusions are presented through an extensive list of "pitfalls", thoroughly enumerated, identified and characterized. Solutions are also proposed for the various current issues and for their implementation. All of this final work forms part of a proof-of-concept system called "ECU2010". The central element of this system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, with arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system in one simple visual interface.
Part of the presented result relies on a hardware platform fully adapted to the software, enabling high flexibility and scalability while using exactly the same technology for the ECU, data logger and peripherals alike. Current systems follow a mostly evolutionary path, allowing only the online calibration of parameters, never the online alteration of the automotive functionality algorithms themselves. By contrast, the system developed and described in this thesis had the advantage of following a clean-slate approach, whereby everything could be rethought globally. In the end, of all the system's characteristics, "LIVE-Prototyping" is the most relevant feature, allowing the adjustment of automotive algorithms (e.g. injection, ignition, lambda control, etc.) fully online, keeping the engine constantly running, without ever having to stop or reboot to make such changes. This eliminates the "turnaround delay" typically present in current automotive systems, thereby enhancing their efficiency and ease of handling.

Relevance:

10.00%

Publisher:

Abstract:

The mechanisms of secretory granule biogenesis and of the regulated secretion of digestive enzymes in pancreatic acinar cells are still not well understood. To shed light on these processes, which are of biological and clinical importance (e.g., in pancreatitis), a better molecular understanding of the components of the granule membrane, their functions and their interactions is required. The application of proteomics has contributed greatly to the identification of novel zymogen granule (ZG) proteins but has not yet been accompanied by a better characterization of their functions. In this study we aimed at a) the isolation and identification of novel membrane-associated ZG proteins; b) the characterization of the biochemical properties and function of the secretory lectin ZG16p, a membrane-associated protein; c) exploring the potential of ZG16p as a new tool to label the endo-lysosomal compartment. First, we performed a suborganellar proteomics approach combining protein analysis by 2D-PAGE with identification by mass spectrometry, which led to the identification of novel peripheral zymogen granule membrane (ZGM) proteins with proteoglycan-binding properties (e.g., chymase, PpiB). We then unveiled new molecular properties and (multiple) functions of the secretory lectin ZG16p, a unique mammalian lectin with glycan- and proteoglycan-binding properties. Here, by developing an enterokinase-digestion assay, I showed for the first time that ZG16p is highly protease resistant. In addition, I showed that ZG16p binds to a high-molecular-weight complex at the ZGM (which is also protease resistant) and forms highly stable dimers. In light of these findings, I suggest that ZG16p is a key component of a predicted submembranous granule matrix attached to the luminal side of the ZGM that fulfils important functions during the sorting and packaging of zymogens. Through dimer formation, ZG16p may act as a linker between the matrix and aggregated zymogens.
Furthermore, the protease resistance of ZG16p might be of even greater importance after secretion, since ZG16p is known to bind pathogenic fungi in the gut. I further investigated the role of the ZG16p binding motifs in its targeting to ZGs in AR42J cells, a pancreatic model system. Point mutations of the glycan- and proteoglycan-binding motifs did not inhibit the targeting of ZG16p to ZGs in AR42J cells. I also demonstrated that when ZG16p is present in the cytoplasm it interacts with and modulates the endo-lysosomal compartment. Since impaired autophagy due to lysosomal malfunction is known to be involved in the course of pancreatitis, a potential role of ZG16p in pancreatitis is discussed.

Relevância:

10.00%

Publicador:

Resumo:

The implications of forest fires for overland flow and soil erosion have been researched for several years, and it is widely known that fires enhance hydrological and geomorphological activity worldwide, including in Mediterranean areas. Soil burn severity has been widely used to describe the impacts of fire on soils and has been recognized as a decisive factor controlling post-fire erosion rates. However, there is no unique definition of the term, and the relationship between soil burn severity and the post-fire hydrological and erosion response has not yet been fully established. Few studies have assessed post-fire erosion over multiple years, and the authors are aware of none that assess runoff; studies concerning pre-fire management practices are equally scarce. Regarding soil erosion models, the Revised Universal Soil Loss Equation (RUSLE) and the revised Morgan–Morgan–Finney (MMF) model are well known, but little information is available on their suitability for predicting post-fire soil erosion in forest soils, and the lack of information is even more pronounced as regards post-fire rehabilitation treatments. The aim of this thesis was to perform extensive research on the post-fire hydrological and erosive response: to understand the effect of burn severity on ecosystems and its implications for post-fire hydrological and erosive responses worldwide; to test the effect of different pre-fire land management practices (unplowed, downslope plowed and contour plowed) and of time since fire on the post-fire hydrological and erosive response for the two most common land uses in Portugal (pine and eucalypt); to assess the performance of two widely known erosion models (RUSLE and revised MMF) in predicting soil erosion rates during the first year following two wildfires of distinct burn severity; and to apply these two models considering different post-fire rehabilitation treatments in an area severely affected by fire.
A further aim was to improve model estimations of post-fire runoff and erosion rates for two different land uses (pine and eucalypt) using the revised MMF, assessing these improvements by comparing estimations with measurements of runoff and erosion at two recently burned sites, as well as with their post-fire rehabilitation treatments. The model modifications involved: (1) focusing on intra-annual changes in parameters to incorporate seasonal differences in runoff and erosion; and (2) the inclusion of soil water repellency in runoff predictions. Additionally, these improvements were validated by applying the model to other pine and eucalypt sites in Central Portugal. The review and meta-analysis showed that fire occurrence had a significant effect on the hydrological and erosive response; however, this effect increased significantly with soil burn severity only for inter-rill erosion, not for runoff. The study also highlighted the inconsistencies between existing burn severity classifications and proposed an unambiguous classification. In the erosion plots under natural rainfall, the land use factor affected annual runoff, while land management significantly affected both annual runoff and erosion amounts. Time since fire had an important effect on erosion amounts among unplowed sites, whereas at eucalypt sites time affected both annual runoff and erosion amounts. At all studied sites runoff coefficients increased over the four years of monitoring, while the sediment concentration in the runoff decreased during the same period. Reasons for the divergence from the classic post-fire recovery model were also explored: short fire recurrence intervals and forest management practices are viewed as the main causes of the observed severe and continuing soil degradation. The revised MMF model produced reasonably accurate predictions, whereas the RUSLE clearly overestimated the observed erosion rates.
After the improvements, the revised model was able to predict first-year post-fire plot-scale runoff and erosion rates for both forest types. These predictions were improved both by the seasonal changes in the model parameters and by considering the effect of soil water repellency on runoff; the individual seasonal predictions were considered accurate, and the inclusion of soil water repellency also improved the model at this level. The revised MMF model proved capable of providing a simple set of criteria for management decisions about runoff and erosion mitigation measures in burned areas. The erosion predictions at the validation sites attested to the robustness of both the model and the calibration parameters, suggesting a potentially wider application.
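The RUSLE estimate discussed in this abstract is the product of five empirical factors, A = R · K · LS · C · P. A minimal sketch of that calculation is shown below; the factor values are purely illustrative and are not taken from the thesis.

```python
# Minimal sketch of the (Revised) Universal Soil Loss Equation:
# A = R * K * LS * C * P. All numeric values are hypothetical.

def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) as the product of the five RUSLE factors.

    R  -- rainfall erosivity (MJ mm / (ha h yr))
    K  -- soil erodibility (t ha h / (ha MJ mm))
    LS -- slope length and steepness factor (dimensionless)
    C  -- cover-management factor (dimensionless, much higher after fire)
    P  -- support-practice factor (dimensionless)
    """
    return R * K * LS * C * P

# Illustrative comparison: a severely burned slope (high C, bare soil)
# versus the same slope unburned (low C, full vegetation cover).
burned = rusle_soil_loss(R=1400.0, K=0.030, LS=4.0, C=0.20, P=1.0)
unburned = rusle_soil_loss(R=1400.0, K=0.030, LS=4.0, C=0.001, P=1.0)
print(f"burned: {burned:.1f} t/ha/yr, unburned: {unburned:.3f} t/ha/yr")
```

Because the cover-management factor C rises sharply when fire removes vegetation, the product is very sensitive to how C is set for burned conditions, which is one plausible contributor to the overestimation by RUSLE reported above.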

Relevância:

10.00%

Publicador:

Resumo:

According to the World Health Organization, around 8.2 million people die each year from cancer. Most patients do not undergo routine screening and, in most cases, symptoms only appear when the patient is already at an advanced stage of the disease, resulting in high cancer mortality. Currently, prostate cancer is the second leading cause of cancer death among males worldwide; in Portugal, it is the most frequently diagnosed type of cancer and the third most lethal. Given that there is no cure for the advanced stages of prostate cancer, the main strategy is early diagnosis, which increases the success rate of treatment. The prostate-specific antigen (PSA) is an important biomarker of prostate cancer that can be detected in biological fluids, including blood, urine and semen. However, the available commercial kits are designed for blood samples, and the analytical methods commonly used for PSA detection and quantification require specialized staff, specific equipment and extensive sample processing, resulting in an expensive process. Thus, the aim of this MSc thesis was the development of a simple, efficient and less expensive method for the extraction and concentration of PSA from urine samples using aqueous biphasic systems (ABS) composed of ionic liquids. Initially, the phase diagrams of a set of ABS composed of an organic salt and ionic liquids were determined; their ability to extract PSA was then ascertained. The results reveal that, in the tested systems, the prostate-specific antigen is completely extracted to the ionic-liquid-rich phase in a single step. Subsequently, the applicability of the investigated ABS to the concentration of PSA, from both aqueous solutions and urine samples, was addressed. The low concentration of this biomarker in urine (clinically significant below 150 ng/mL) usually hinders its detection by conventional analytical techniques.
The results showed that PSA can be extracted and concentrated up to 250-fold in a single step, so that it can be identified and quantified using less expensive techniques.
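The up-to-250-fold concentration reported above follows from the complete single-step partitioning: when all of the PSA ends up in the ionic-liquid-rich phase, the concentration factor is simply the ratio of the sample volume to that phase's volume. A minimal sketch of this arithmetic is given below; all volumes and PSA levels are hypothetical illustrations, not the thesis protocol.

```python
# Sketch of the concentration step in an ionic-liquid aqueous biphasic
# system (ABS), assuming complete extraction of the analyte into the
# ionic-liquid-rich phase. Volumes below are hypothetical.

def concentration_factor(sample_volume_mL, il_phase_volume_mL):
    """Fold-concentration achieved when the analyte is fully extracted
    into the ionic-liquid-rich phase."""
    return sample_volume_mL / il_phase_volume_mL

def concentrated_level(initial_ng_per_mL, factor):
    """Analyte level in the IL-rich phase after a single extraction step."""
    return initial_ng_per_mL * factor

# A 10 mL sample concentrated into a 0.04 mL IL-rich phase gives a 250x factor,
# so a dilute 2 ng/mL urine sample would read as roughly 500 ng/mL.
factor = concentration_factor(sample_volume_mL=10.0, il_phase_volume_mL=0.04)
print(factor, concentrated_level(2.0, factor))
```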

Relevância:

10.00%

Publicador:

Resumo:

The main objective of the present work is the study of a profitable process for the extraction and selective separation of lycopene and β-carotene, two compounds present in tomato, as well as its potential application to food industry wastes. The food industry is among those producing the largest amounts of waste, which is rich in high-value biomolecules of great economic interest; however, the conventional methods used to extract this kind of compounds are expensive, which limits their application at large scale. Lycopene and β-carotene are carotenoids with high commercial value, known for their antioxidant activity and benefits to human health. Their biggest source is tomato, one of the world's most consumed fruits, and hence one for which large quantities of waste are produced. This work focuses on the study of diverse solvents with high potential to extract carotenoids from tomato, as well as the search for solvents more environmentally benign than those currently used to extract lycopene and β-carotene from biomass. Additionally, special attention was paid to the design of a continuous process that would allow the fractionation of the compounds for further purification. Thus, the present work started with the extraction of both carotenoids using a wide range of solvents, namely organic solvents, conventional salts, ionic liquids, polymers and surfactants. At this stage, each solvent was evaluated with regard to its extraction capacity as well as its ability to penetrate the biomass. The results showed that an adequate selection of solvents may lead to the complete extraction of both carotenoids in one single step; acetone and tetrahydrofuran, in particular, were the most effective. However, the generally low penetration capacity of salts, ionic liquids, polymers and surfactants makes these solvents ineffective in the solid–liquid extraction process.
As the organic solvents, in particular tetrahydrofuran and acetone, showed the highest capacity to extract lycopene and β-carotene, the latter solvent was used in the development of the fractionation process, based on the strategic use of solvents. This step was successfully developed by manipulating the solubility of each compound in ethanol and n-hexane. The results confirmed the possibility of fractionating the target compounds by adding the solvents in the correct order: approximately 39 % of the β-carotene was dissolved in ethanol and about 64 % of the lycopene was dissolved in n-hexane, indicating their separation into two different solvents and demonstrating the selective character of the developed process even without any prior stage of optimization. This study revealed that the use of organic solvents leads to the selective extraction of lycopene and β-carotene, making it possible to reduce the numerous stages involved in conventional methods. In the end, it was possible to conceive a sustainable integrated process of high industrial relevance, although additional optimization studies are still needed.
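The reported split (about 39 % of the β-carotene into ethanol, about 64 % of the lycopene into n-hexane) can be expressed as a simple mass balance. The sketch below uses those reported percentages, but the starting masses are hypothetical.

```python
# Illustrative mass balance for the two-solvent fractionation step:
# ethanol takes up part of the beta-carotene, n-hexane part of the lycopene.
# Fractions are the ones reported in the abstract; masses are hypothetical.

def fractionate(mass_mg, fraction_to_solvent):
    """Split a carotenoid mass between the target solvent and the residue."""
    to_solvent = mass_mg * fraction_to_solvent
    return to_solvent, mass_mg - to_solvent

bcar_ethanol, bcar_residue = fractionate(10.0, 0.39)  # beta-carotene -> ethanol
lyc_hexane, lyc_residue = fractionate(10.0, 0.64)     # lycopene -> n-hexane
print(bcar_ethanol, bcar_residue, lyc_hexane, lyc_residue)
```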