Abstract:
We establish a sufficient condition for the preservation of finite products by the reflector of a variety of universal algebras into a subvariety, which is also a necessary condition when the subvariety is idempotent. This condition is then established in a more general context, where it characterises the reflections for which the property of being semi-left-exact and the stronger property of having stable units coincide. It is proved that simple and semi-left-exact reflections coincide in the context of varieties of universal algebras, and the classes of the factorisation system derived from the reflection are characterised. Results are established that help to characterise covering and stably-vertical morphisms in universal algebras and in the more general context already mentioned. The classes of separable, purely inseparable and normal morphisms are characterised. The study of Galois descent morphisms leads to sufficient conditions for their kernel pair to be preserved by the reflector.
Abstract:
Vitis vinifera L., the most widely cultivated fruit crop in the world, was the starting point for the development of this PhD thesis. The subject was explored following two current trends: i) the development of rapid, simple and highly sensitive methodologies with minimal sample handling; and ii) the valorisation of natural products as a source of compounds with potential health benefits. The target compounds under study were the volatile terpenoids (mono- and sesquiterpenoids) and C13 norisoprenoids, since they may have a biological impact, either from the sensory point of view, with regard to wine aroma, or through beneficial properties for human health. Two novel methodologies for the quantification of C13 norisoprenoids in wines were developed. The first, a rapid method, was based on headspace solid-phase microextraction combined with gas chromatography-quadrupole mass spectrometry operating in selected ion monitoring mode (HS-SPME/GC-qMS-SIM), using GC conditions that allowed a C13 norisoprenoid volatile signature to be obtained. It requires no sample pre-treatment, and the C13 norisoprenoid composition of the wine is evaluated from the chromatographic profile and specific m/z fragments, without complete chromatographic separation of its components. The second methodology, used as the reference method, was also based on HS-SPME/GC-qMS-SIM, with GC conditions allowing adequate chromatographic resolution of the wine components. For quantification purposes, external calibration curves were constructed with β-ionone, with regression coefficients (r2) of 0.9968 (RSD 12.51%) and 0.9940 (RSD 1.08%) for the rapid method and the reference method, respectively. Low detection limits (1.57 and 1.10 μg L-1) were observed. These methodologies were applied to seventeen white and red table wines. Two vitispirane isomers (158-1529 μg L-1) and 1,1,6-trimethyl-1,2-dihydronaphthalene (TDN) (6.42-39.45 μg L-1) were quantified. The data obtained for the vitispirane isomers and TDN using the two methods were highly correlated (r2 of 0.9756 and 0.9630, respectively). A rapid methodology for establishing the varietal volatile profile of Vitis vinifera L. cv. 'Fernão-Pires' (FP) white wines by headspace solid-phase microextraction combined with comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry (HS-SPME/GCxGC-TOFMS) was also developed. Monovarietal wines from different harvests, Appellations and producers were analysed. The study focused on the volatiles that seem to be significant for the varietal character, such as mono- and sesquiterpenic compounds and C13 norisoprenoids. Two-dimensional chromatographic spaces containing the varietal compounds, based on the m/z fragments 93, 121, 161, 175 and 204, were established as follows: 1tR = 255-575 s, 2tR = 0.424-1.840 s for monoterpenoids; 1tR = 555-685 s, 2tR = 0.528-0.856 s for C13 norisoprenoids; and 1tR = 695-950 s, 2tR = 0.520-0.960 s for sesquiterpenic compounds. For the three chemical groups under study, out of a total of 170 compounds, 45 were determined in all wines, allowing the "varietal volatile profile" of FP wine to be defined. Among these compounds, 15 were detected for the first time in FP wines. This study proposes an HS-SPME/GCxGC-TOFMS-based methodology, combined with classification-reference samples, for the rapid assessment of the varietal volatile profile of wines.
This approach is very useful for eliminating the majority of the non-terpenic and non-C13 norisoprenoid compounds, allowing the definition of a two-dimensional chromatographic space containing these compounds, simplifying the data compared with the original dataset, and reducing the time of analysis. The presence of sesquiterpenic compounds, to which several biological properties are attributed, in Vitis vinifera L.-related products prompted us to investigate the antioxidant, antiproliferative and hepatoprotective activities of some sesquiterpenic compounds. Firstly, the antiradical capacity of trans,trans-farnesol, cis-nerolidol, α-humulene and guaiazulene was evaluated using chemical (DPPH• and hydroxyl radicals) and biological (Caco-2 cells) models. Guaiazulene (IC50 = 0.73 mM) was the sesquiterpene with the highest scavenging capacity against DPPH•, while trans,trans-farnesol (IC50 = 1.81 mM) and cis-nerolidol (IC50 = 1.48 mM) were more active towards hydroxyl radicals. All compounds, with the exception of α-humulene, at non-cytotoxic levels (≤ 1 mM), were able to protect Caco-2 cells from oxidative stress induced by tert-butyl hydroperoxide. The compounds under study were also evaluated as antiproliferative agents. Guaiazulene and cis-nerolidol arrested the cell cycle in the S-phase more effectively than trans,trans-farnesol and α-humulene, the latter being almost inactive. The relative hepatoprotective effect of fifteen sesquiterpenic compounds with different chemical structures, commonly found in plants and plant-derived foods and beverages, was then assessed. Endogenous lipid peroxidation and lipid peroxidation induced with tert-butyl hydroperoxide were evaluated in liver homogenates from Wistar rats. With the exception of α-humulene, all the sesquiterpenic compounds under study (1 mM) were effective in reducing malonaldehyde levels in both endogenous and induced lipid peroxidation, by up to 35% and 70%, respectively. The 3D-QSAR models developed, relating hepatoprotective activity to molecular properties, showed a good fit (R2LOO > 0.819) and good predictive power (Q2 > 0.950 and SDEP < 2%). A network of effects associated with structural and chemical features of the sesquiterpenic compounds, such as shape, branching, symmetry and the presence of electronegative fragments, can modulate their observed hepatoprotective activity. In conclusion, this study allowed the development of rapid and in-depth methods for the assessment of varietal volatile compounds that might have a positive impact on sensory and health attributes related to Vitis vinifera L. These approaches can be extended to the analysis of other related food matrices, including grapes and musts, among others. In addition, the results of the in vitro assays open a promising perspective for the use of sesquiterpenic compounds with chemical structures similar to those studied in the present work as antioxidant, hepatoprotective and antiproliferative agents, which meets current challenges related to diseases of modern civilisation.
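To make the external-calibration quantification step described earlier in this abstract more concrete, the sketch below fits a linear calibration curve from β-ionone standards and back-calculates a sample concentration from its peak area. This is only an illustrative sketch: the concentrations, peak areas and function name are assumptions, not data or code from the thesis.

```python
import numpy as np

def fit_calibration(concs, areas):
    """Least-squares calibration line area = m*conc + b, with r^2 (external calibration)."""
    concs = np.asarray(concs, dtype=float)
    areas = np.asarray(areas, dtype=float)
    m, b = np.polyfit(concs, areas, 1)          # slope and intercept
    pred = m * concs + b
    ss_res = float(np.sum((areas - pred) ** 2))
    ss_tot = float(np.sum((areas - areas.mean()) ** 2))
    return m, b, 1.0 - ss_res / ss_tot

# Hypothetical beta-ionone standards (ug/L) and their measured peak areas
concs = [5, 25, 50, 100, 250]
areas = [1.1e4, 5.3e4, 1.05e5, 2.1e5, 5.2e5]
m, b, r2 = fit_calibration(concs, areas)

sample_area = 8.0e4            # peak area measured in a wine sample
conc = (sample_area - b) / m   # back-calculated concentration, ug/L
print(f"r2 = {r2:.4f}, estimated concentration = {conc:.1f} ug/L")
```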
Abstract:
The incidence of cardiovascular diseases (CVD) has been increasing according to European and global statistics. Thus, the development of new analytical devices, such as biosensors, for assessing the risk of CVD could become a valuable approach for the improvement of healthcare services. In recent years, nanotechnology has provided new materials with improved electronic properties, which make an important contribution to the transduction mechanism of biosensors. Therefore, in this thesis, biosensors based on field effect transistors with single-walled carbon nanotubes (NTFET) were developed for the detection of C-reactive protein (CRP) in clinical samples, namely blood serum and saliva from a group of control patients and a group of CVD risk patients. CRP is an acute-phase protein commonly regarded as the best-validated biomarker for the assessment of CVD. Single-walled carbon nanotubes (SWCNT) were applied as transduction components, and the immunoreaction (the interaction between the CRP antigen and antibodies specific to CRP) was used as the molecular recognition mechanism for the label-free detection of CRP. After the microfabrication of the field effect transistors (FET), the screening of the most important variables for the dispersion of SWCNT, the assembly of the NTFET and their application to standard CRP solutions, it was found that NTFET respond accurately to CRP both in saliva and in serum samples, since similar CRP levels were obtained with the NTFET and with the traditional methodology (the ELISA technique). Moreover, a strong correlation between salivary and serum CRP was found with the NTFET, which means that saliva, collected by non-invasive sampling, could be used as an alternative fluid to blood serum. It was also shown that NTFET could discriminate control patients from CVD risk patients, allowing the determination of a cut-off value for salivary CRP of 1900 ng L-1, which corresponds to the well-established cut-off of 3 mg L-1 for CRP in serum, an important finding for the possible establishment of a new range of CRP levels based on saliva. According to the data provided by the volunteer patients regarding their lipoprotein profile and lifestyle factors, it was concluded that the control and the CVD risk patients could be separated by taking into account the various risk factors established in the literature as strong contributors to the development of CVD, such as triglycerides, serum CRP, total cholesterol, LDL cholesterol, body mass index, Framingham risk score, hypertension, dyslipidemia and diabetes mellitus. Thus, this work may contribute to the understanding of the association between biomarker levels in serum and saliva samples and, above all, cost-effective, rapid, label-free and disposable NTFET based on non-invasive sampling were developed for the assessment of CVD risk, thus constituting a potential point-of-care technology.
Abstract:
Diffusion coefficients (D12) are fundamental properties in research and in industry, but the lack of experimental data and the absence of equations able to estimate them accurately and reliably in compressed or condensed phases are important limitations. The main objectives of this work comprise: i) the compilation of a large database of D12 values for gaseous, liquid and supercritical systems; ii) the development and validation of new models for diffusion coefficients at infinite dilution, applicable over wide ranges of temperature and density, for systems containing components that differ greatly in polarity, size and symmetry; iii) the assembly and testing of an experimental set-up for measuring diffusion coefficients in liquids and supercritical fluids. Regarding modelling, a new expression for diffusion coefficients of hard spheres at infinite dilution was developed and validated using molecular dynamics data (average absolute relative deviation, AARD = 4.44%). Binary diffusion coefficients of real systems were also studied. To this end, an extensive database of diffusivities of real systems in gases and dense solvents was compiled (622 binary systems, totalling 9407 experimental points and 358 molecules) and used to validate the new models developed in this thesis. A set of new models was proposed for the calculation of diffusion coefficients at infinite dilution using different approaches: i) two molecularly based models with one system-specific parameter, applicable to gaseous, liquid and supercritical systems, in which the nature of the solvent is limited to non-polar or weakly polar (global AARDs in the range 4.26-4.40%); ii) two two-parameter molecularly based models, applicable in all physical states, for any type of solute diluted in any solvent (non-polar, weakly polar and polar); both models yield global errors between 2.74% and 3.65%; iii) a one-parameter correlation, specific for diffusion coefficients in supercritical carbon dioxide (SC-CO2) and liquid water (AARD = 3.56%); iv) nine empirical and semi-empirical two-parameter correlations, depending only on temperature and/or solvent density and/or solvent viscosity; these models are very simple and show excellent results (AARDs between 2.78% and 4.44%) for liquid and supercritical systems; and v) two predictive equations for solute diffusivities in SC-CO2, both with global errors below 6.80%. Overall, it should be emphasised that the new models cover the wide variety of systems and molecules generally encountered, and the results obtained are consistently better than those achieved with the models and approaches found in the literature. In the case of the one- and two-parameter correlations, it was shown that these parameters can be fitted using a very small set of data and subsequently used to predict D12 values far from the original set of points. A new experimental set-up for measuring binary diffusion coefficients by chromatographic techniques was assembled and tested. The equipment, the experimental procedure and the analytical calculations required to obtain D12 values by the chromatographic peak broadening method were evaluated by measuring the diffusivities of toluene and acetone in SC-CO2.
Diffusion coefficients of eucalyptol in SC-CO2 were then measured in the ranges 202-252 bar and 313.15-333.15 K. The experimental results were analysed using correlations and predictive models for D12.
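For reference, the AARD quoted throughout this abstract is the average absolute relative deviation between calculated and experimental diffusivities; a standard form, assuming N experimental points, is:

$$\mathrm{AARD}\,(\%) \;=\; \frac{100}{N}\sum_{i=1}^{N}\left|\frac{D_{12,i}^{\mathrm{calc}}-D_{12,i}^{\mathrm{exp}}}{D_{12,i}^{\mathrm{exp}}}\right|$$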
Abstract:
This investigation focused on the development, testing and validation of methodologies for mercury fractionation and speciation in soils and sediments. After an exhaustive review of the literature, several methods were chosen and tested on well-characterised soil and sediment samples. Sequential extraction procedures that divide mercury into fractions according to their mobility and potential availability in the environment were investigated. The efficiency of different solvents for the fractionation of mercury was evaluated, as well as the adequacy of different analytical instruments for the quantification of mercury in the extracts. Kinetic experiments to establish the equilibrium time for mercury release from soil or sediment were also performed. It was found that, in the studied areas, only a very small percentage of mercury is present as mobile species, that mobility is associated with higher aluminium and manganese contents, and that high contents of organic matter and sulfur result in mercury tightly bound to the matrix. Sandy soils tend to release mercury faster than clayey soils; therefore, the texture of the soil or sediment has a strong influence on the mobility of mercury. It also became clear that analytical techniques for the quantification of mercury need to be further developed, with lower quantification limits, particularly for the quantification of mercury in the less concentrated fractions: water-soluble and exchangeable. Although the results provided a better understanding of the distribution of mercury in the sample, the complexity of the procedure limits its applicability and robustness. A proficiency-testing scheme targeting total mercury determination in soil, sediment, fish and human hair was organised in order to evaluate the consistency of results obtained by different laboratories applying their routine methods to the same test samples. Additionally, single extractions with 1 mol L-1 ammonium acetate solution, 0.1 mol L-1 HCl and 0.1 mol L-1 CaCl2, as well as extraction of the organometallic fraction, were proposed for soil; the latter was also suggested for sediment and fish. This study was important to update the knowledge on the analytical techniques being used for mercury quantification, the associated problems and sources of error, to improve and standardise mercury extraction techniques, and to implement effective strategies for quality control in mercury determination. A different, "non-chemical" method for mercury species identification, based on the thermo-desorption of the different mercury species, was developed, optimised and validated. Compared with conventional extraction procedures, this method has several advantages: it requires little to no sample treatment; a complete identification of the species present is obtained in less than two hours; mercury losses are almost negligible; it can be considered "clean", as no residues are produced; and worldwide comparison of results is easier and more reliable, an important step towards the validation of the method. Therefore, the main deliverables of this PhD thesis are improved knowledge on analytical procedures for the identification and quantification of mercury species in soils and sediments, as well as a better understanding of the factors controlling the behaviour of mercury in these matrices.
Abstract:
For e-government to truly exist, it is crucial to provide public information and documentation and to make access to it simple for citizens. A portion, not necessarily small, of these documents is unstructured and written in natural language, and consequently lies beyond what current search systems are generally able to handle effectively. The thesis of this work is that it is possible to improve access to these contents using systems that process natural language and create structured information, particularly if supported by semantics. In order to put this thesis to the test, the work was developed in three major phases: (1) design of a conceptual model integrating the creation of structured information and making it available to various actors, in line with the vision of e-government 2.0; (2) definition and development of a prototype instantiating the key modules of this conceptual model, including ontology-based information extraction supported by examples of relevant information, knowledge management, and access based on natural language; (3) assessment of the usability and acceptability of querying information as made possible by the prototype - and, in consequence, by the conceptual model - by users in a realistic scenario, which included a comparison with existing forms of access. In addition to this evaluation, and at a level more related to technology assessment than to the model, the performance of the subsystem responsible for information extraction was evaluated. The evaluation results show that the proposed model was perceived as more effective and useful than the alternatives. Together with the performance of the prototype in extracting information from documents, which is comparable to the state of the art, the results demonstrate the feasibility and advantages, with current technology, of using natural language processing and the integration of semantic information to improve access to unstructured content in natural language. The conceptual model and the prototype demonstrator are intended to contribute to the future existence of more sophisticated search systems that are also more suitable for e-government. To achieve transparency in governance, active citizenship and greater agility in the interaction with public administration, among other goals, citizens and businesses need quick and easy access to official information, even if it was originally created in natural language.
Abstract:
The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact that was also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the task of text mining that intends to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. Such an approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget. We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
Abstract:
The present work reports studies on new compounds obtained by combining polyoxoanions derived from the Keggin and Lindqvist structures with several cations. The studies first focused on the monolacunary Keggin polyoxoanions [PW11O39M(H2O)]n- (M = FeIII, MnIII and n = 4; M = CoII and n = 5) and their combination with the organic cation 1-butyl-3-methylimidazolium (Bmim+). The association of the Bmim+ cation with the polyoxoanion [PW11O39Fe(H2O)]4- allowed, for the first time, the isolation of both the monomeric anion and the dimeric [(PW11O39Fe)2O]10- anion with the same cation, using simple bench techniques and pH manipulation. Studies on the stability of these inorganic species in solution indicated that both species are present in solution in equilibrium. However, the inability, until now, to isolate the dimeric unit through simple bench methods led to the hypothesis that the cation plays a role in the selective precipitation of either the monomer or the dimer. Repetition of the same procedures with the polyoxoanions [SiW11O39Fe(H2O)]5- and [PW11O39M(H2O)]n- (M = FeIII, MnIII and n = 4; M = Co and n = 5) afforded only the corresponding monomeric compounds, (Bmim)5[SiW11O39FeIII(H2O)]·4H2O (3), (Bmim)5[PW11O39CoII(H2O)]·0.5H2O (4) and (Bmim)5[PW11O39MnIII(H2O)]·0.5H2O (5). Moreover, the combination of Bmim+ with the polyoxotungstate [PW11O39Co(H2O)]5- afforded two different crystal structures, depending on the synthetic conditions. Thus, a Bmim+:POM ratio of 5:1 and the presence of K+ cations (due to the addition of KOH) led to the formula Na2K(Bmim)2[PW11.2O39Co0.8(H2O)]·7H2O (4a), whilst a Bmim:POM ratio of 7:1 led to the formation of a crystal with the chemical formula Na2(Bmim)8[PW11O39Co(H2O)]2·3H2O (4b). Electrochemical studies were performed with carbon paste electrodes modified with BmimCl to investigate the influence of the Bmim+ cation on the performance of the electrodes. The voltammetric measurements obtained from solutions containing the anions [PW11O39]7- and [SiW11O39]8- are presented. The results pointed to an improvement of the acquired voltammetric signal upon a slight addition of BmimCl (up to 2.5% w/w), especially in the studies regarding pH variation. Additional syntheses were carried out with the cations Omim+ and THTP+.
Abstract:
This dissertation describes the synthesis and characterization of different phthalocyanine (Pc) derivatives, as well as some porphyrins (Pors), for supramolecular interaction with different carbon nanostructures, in order to evaluate their potential application in electronic nanodevices. The preparation and biological evaluation of interesting phthalocyanine conjugates for cancer photodynamic therapy (PDT) and for the photodynamic inactivation (PDI) of microorganisms are also reported. The phthalonitrile precursors were prepared from commercial phthalonitriles by nucleophilic substitution of the -NO2, -Cl or -F groups present in the phthalonitrile core by thiol or pyridyl units. After the synthesis of these phthalonitriles, the corresponding Pcs were prepared by cyclotetramerization at high temperatures, using a metallic salt as template. A second strategy involved the post-functionalization of hexadecafluorophthalocyaninato zinc(II) through the introduction of the appropriate mercaptopyridine or cyclodextrin units at the macrocycle periphery. The different compounds were structurally characterized by diverse spectroscopic techniques, namely 1H, 13C and 19F nuclear magnetic resonance spectroscopy (according to the elemental composition of each structure), absorption and emission spectroscopy, and mass spectrometry. For the specific photophysical studies, electrochemical characterization, femtosecond and Raman spectroscopy, and transmission electron and atomic force microscopy were also used. Emphasis was placed on the non-covalent derivatisation of carbon nanostructures, mainly single-walled carbon nanotubes (SWNT) and graphene nanosheets, with the prepared Pc conjugates, in order to study the photophysical properties of these supramolecular nanoassemblies. In addition, Por-RuPc arrays were assembled from pyridyl-Pors and ruthenium phthalocyanines (RuPcs) via coordination chemistry. The novel supramolecular assemblies showed interesting electron donor-acceptor interactions and might be considered attractive candidates for nanotechnological devices. On the other hand, the amphiphilic phthalocyanine-cyclodextrin (Pc-CD) conjugates were tested in biological trials to assess their ability to inhibit UMUC-3 human bladder cancer cells. The results obtained demonstrated that these photoactive conjugates are highly phototoxic against human bladder cancer cells and could be applied as promising PDT drugs.
Abstract:
The first studies attempting to evaluate the best times of day for teaching, so as to optimise school timetables, are already very old. The first to establish a systematic relationship between cognitive performance, chronobiology and sleep was Kleitman, who showed a parallel between the circadian rhythm of core body temperature and the time of day at which simple repetitive tasks were performed. After this first study, many others followed, but most only found rhythms in constant routine and forced desynchronisation protocols, which lack ecological validity. In addition, this type of study does not systematically manipulate the effect of the individual pattern of distribution of circadian parameters over the day-night cycle, referred to in the literature as chronotype. In view of this, the present study aims to evaluate the influence of chronotype on cognitive rhythms, using a normal routine (ecological) protocol in which the weekend effect is also manipulated. To test these premises, a sample of 16 university students was used. In a first phase they answered the Horne & Östberg Morningness-Eveningness Questionnaire, for chronotype characterisation, and subsequently, for 15-17 consecutive days, they wore tempatilumis devices (actigraphs) for the analysis of temperature and activity rhythms, carried iPads on which they performed several cognitive tasks throughout the day, and kept the Daily Record Manual, in which they answered the sleep and activity diary. Data analysis revealed the absence of rhythm expression in most cognitive parameters, preventing the verification of significant differences between morning-type and evening-type individuals in these parameters. This absence of rhythmic expression may be explained by the fact that the participants did not adhere, as desired and required, to the performance of the cognitive tasks, or by the fact that a normal routine protocol was used instead of constant routine and forced desynchronisation protocols, thereby leaving uncontrolled some variables that influence cognitive performance and may mask or even eliminate the rhythm. Even so, and despite these contingencies, circadian rhythms were observed in the self-assessment variables, even under the ecological paradigm. A time-of-day effect was also found in several parameters of objectively measured cognitive and motor tasks, as well as a decrease in the cognitive performance of evening types, compared with morning types, in the time window from 6 a.m. to 12 noon, which coincides with the highest concentration of class hours per day at the University where the study was carried out. Further studies will be needed to consolidate the influence of chronotype on cognitive rhythms, using the normal routine protocol to guarantee ecological validity, while ensuring a more active participation of the study subjects in the performance of the cognitive tasks.
Abstract:
Slender masonry structures are distributed all over the world and constitute a relevant part of the architectural and cultural heritage of humanity. Their protection against earthquakes is a topic of great concern among the scientific community. This concern mainly arises from the strong damage or complete loss suffered by this group of structures due to catastrophic events and from the need and interest to preserve them. Despite the great progress in technology and in the knowledge of seismology and earthquake engineering, the preservation of these brittle and massive structures still represents a major challenge. Based on the research developed in this work, a methodology is proposed for the seismic risk assessment of slender masonry structures. The proposed methodology was applied to the vulnerability assessment of Nepalese Pagoda temples, which follow very simple construction procedures and detailing with respect to seismic resistance requirements. The work is divided into three main parts. Firstly, the particular structural fragilities and building characteristics of the important UNESCO-classified Nepalese Pagoda temples that affect their seismic performance and dynamic properties are discussed. In the second part, the simplified method proposed for the seismic vulnerability assessment of slender masonry structures is presented. Finally, the methodology proposed in this work is applied to the study of Nepalese Pagoda temples, as well as to the assessment of the efficiency of seismic performance improvement solutions compatible with their original cultural and technological value.
Abstract:
Simply put, this is a thesis that links the territorial dimension to the formulation of public policies in the field of Services of General Interest, the expression currently used within the European Commission in place of the term Public Services. The starting point is that, particularly in the last two decades, these services have had to adapt to a changing world, both in terms of political trends and from the point of view of financial constraints. Decisions on the allocation and distribution of resources have therefore received growing attention in the field of public policy. However, decisions on the nature, scope and distribution of the resources to be provided are complex, involving not only technical criteria but also value judgements and the building of political consensus. This question is even more pressing in a context of, on the one hand, spending restraint, in which the pursuit of efficiency gains greater weight, and, on the other hand, rising citizen expectations, in which the idea of equity is valued. Given this context, it is natural that in several decision-making processes there is some tension between these two principles, raising the question of how much equity should be sacrificed in favour of efficiency, and vice versa. This research stems from these concerns. The underlying argument is that the principle of Territorial Cohesion, as a new development paradigm for the European territory and one of the most recent policy objectives of the Commission and of the member states, helps to weigh the equity/efficiency relationship in political decision-making processes on the provision of Services of General Interest. The guiding thread of the research focuses on health (in general) and on health care (in particular) as an example of a service that, given its importance in society, justifies special attention from public policies, but that has been the subject of political and academic debate and of structural reorganisation in an attempt to reduce the associated costs, with territorial repercussions. Added to this is the fact that little is known about the principles and criteria underlying policy decisions in the health field and about the role that territory plays in them. To understand whether and how the territorial dimension is considered in the formulation of health policies, and how the adoption of the principle of territorial cohesion in the formulation of public policies introduces another type of rationality into decision-making processes, an essentially qualitative methodological approach was chosen, based on i) semi-structured interviews conducted in person with key actors in the public decision-making sphere, ii) the analysis of the main programmatic instruments of health policy, and iii) the analysis of two case studies (the Baixo Vouga and Beira Interior Sul sub-regions). The results obtained make it possible, on the one hand, to understand, discuss and clarify decision-making processes in health, on the other, to justify the purpose of adopting the principle of Territorial Cohesion in policy formulation, and, finally, to put forward lines of future research on Services of General Interest and Territorial Cohesion.
Abstract:
The main motivation for the work presented here began with previously conducted experiments with a programming concept then named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that it would minimize the full range of software and hardware needed to produce a final and fully functional system. Initially, this work makes a comprehensive survey of the state of the art in the specific area of software, and corresponding hardware, of automotive tools and automotive ECUs. Problems arising from such software are identified, and it becomes clear that practically all of these problems stem, directly or indirectly, from the continued and extensive use of extremely long and complex "tool chains". Similarly, on the hardware side, it is argued that the problems stem from the extreme complexity and inter-dependency inside processor architectures. The conclusions are presented through an extensive list of "pitfalls", which are thoroughly enumerated, identified and characterized. Solutions to the various current issues are also proposed, together with an approach to their implementation. All of this final work is part of a "proof-of-concept" system called "ECU2010". The central element of this system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, with arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system in one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, enabling high flexibility and scalability, in addition to using exactly the same technology for the ECU, data logger and peripherals alike. Current systems follow a mostly evolutionary path, allowing only the online calibration of parameters, but never the online alteration of their own automotive functionality algorithms. By contrast, the system developed and described in this thesis had the advantage of following a "clean-slate" approach, whereby everything could be rethought globally. In the end, out of all the system characteristics, "LIVE-Prototyping" is the most relevant feature, allowing the adjustment of automotive algorithms (e.g. injection, ignition, lambda control, etc.) fully online, keeping the engine constantly running, without ever having to stop or reboot to make such changes. This consequently eliminates any "turnaround delay" typically present in current automotive systems, thereby enhancing the efficiency and handling of such systems.
Abstract:
Communication and cooperation between billions of neurons underlie the power of the brain. How do complex functions of the brain arise from its cellular constituents? How do groups of neurons self-organize into patterns of activity? These are crucial questions in neuroscience. In order to answer them, it is necessary to have a solid theoretical understanding of how single neurons communicate at the microscopic level, and how cooperative activity emerges. In this thesis we aim to understand how complex collective phenomena can arise in a simple model of neuronal networks. We use a model with balanced excitation and inhibition and a complex network architecture, and we develop analytical and numerical methods for describing its neuronal dynamics. We study how interaction between neurons generates various collective phenomena, such as the spontaneous appearance of network oscillations and seizures, and early warnings of these transitions in neuronal networks. Within our model, we show that phase transitions separate various dynamical regimes, and we investigate the corresponding bifurcations and critical phenomena. This permits us to suggest a qualitative explanation of the Berger effect, and to investigate phenomena such as avalanches, band-pass filtering, and stochastic resonance. The role of modular structure in the detection of weak signals is also discussed. Moreover, we find nonlinear excitations that can describe paroxysmal spikes observed in electroencephalograms from epileptic brains. This allows us to propose a method to predict epileptic seizures. Memory and learning are key functions of the brain. There is evidence that these processes result from dynamical changes in the structure of the brain. At the microscopic level, synaptic connections are plastic and are modified according to the dynamics of neurons. Thus, we generalize our cortical model to take synaptic plasticity into account, and we show that the repertoire of dynamical regimes becomes richer. In particular, we find mixed-mode oscillations and a chaotic regime in the neuronal network dynamics.
Abstract:
In this thesis, several formulations and different methods for solving the Weight-constrained Minimum Spanning Tree Problem (WMST) are addressed. This problem, with applications in the design of communication and telecommunication networks, is an NP-hard combinatorial optimisation problem. The WMST problem consists of determining, in a network with costs and weights associated with the edges, a minimum cost spanning tree whose total weight does not exceed a given specified limit. Several formulations for the problem are presented and compared. One of them is used to develop a cut-generation procedure based on separation, which proved very useful in obtaining solutions for the problem. With the purpose of strengthening the presented formulations, new classes of valid inequalities are introduced, adapted from the well-known cover inequalities, extended cover inequalities and lifted cover inequalities. The new inequalities incorporate information from two sets of solutions: the set of spanning trees and the knapsack set. Several heuristic separation algorithms are presented that allow the proposed valid inequalities to be used efficiently. Based on Lagrangian decomposition, simple but efficient algorithms are presented and compared, which can be used to compute lower and upper bounds on the optimal value of the WMST. Among them are two new algorithms: one based on the convexity of the Lagrangian function and another that makes use of the inclusion of valid inequalities. With the aim of obtaining approximate solutions to the WMST problem, heuristic methods are used to find a feasible integer solution. The heuristic methods presented are based on the Feasibility Pump and Local Branching strategies. Computational results using all the presented methods are reported. The results show that the different methods presented are quite efficient at finding solutions for the WMST problem.
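As a point of reference, the WMST described above can be stated compactly as follows; this is a generic sketch using standard notation (binary variables x_e selecting edges), not necessarily one of the specific formulations studied in the thesis:

$$\min_{x\in\{0,1\}^{|E|}} \; \sum_{e\in E} c_e\, x_e \quad \text{s.t.} \quad x \text{ induces a spanning tree of } G=(V,E), \qquad \sum_{e\in E} w_e\, x_e \le W$$

where c_e and w_e are the cost and weight of edge e, and W is the specified weight limit.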