Abstract:
Seismic ambient noise tomography is applied to central and southern Mozambique, located at the tip of the East African Rift (EAR). The deployment of the MOZART seismic network, with a total of 30 broad-band stations continuously recording for 26 months, allowed us to carry out the first tomographic study of the crust under this region, which until now had remained largely unexplored at this scale. From cross-correlations of coherent noise we obtained Rayleigh-wave group velocity dispersion curves for the period range 5–40 s. These dispersion relations were inverted to produce group velocity maps and 1-D shear wave velocity profiles at selected points. High group velocities are observed at all periods on the eastern edge of the Kaapvaal and Zimbabwe cratons, in agreement with the findings of previous studies. Further east, a pronounced slow anomaly is observed in central and southern Mozambique, where the rifting between southern Africa and Antarctica created a passive margin in the Mesozoic and further rifting is currently taking place as a result of the southward propagation of the EAR. In this study, we also addressed the still-debated question of the nature of the crust (continental versus oceanic) beneath the Mozambique Coastal Plains (MCP). Our data do not support previous suggestions that the MCP are floored by oceanic crust, since a shallow Moho could not be detected, and we discuss an alternative explanation for their ocean-like magnetic signature. Our velocity maps suggest that the crystalline basement of the Zimbabwe craton may extend further east, well into Mozambique, underneath the sediment cover, contrary to what is usually assumed, while further south the Kaapvaal craton passes into slow rifted crust at the Lebombo monocline, as expected. The sharp passage from fast to slow crust in the northern part of the study area coincides with the seismically active NNE-SSW Urema rift, while further south the Mazenga graben adopts a N-S direction parallel to the eastern limit of the Kaapvaal craton. We conclude that these two extensional structures herald the southward continuation of the EAR, and we infer a structural control by the transition between the two types of crust on the ongoing deformation.
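The processing chain summarized above (cross-correlating continuous noise between station pairs, then measuring and inverting dispersion) can be illustrated with a minimal numpy sketch of the first step; the one-bit normalization, window length, sampling rate and lag range below are illustrative assumptions, not the parameters used with the MOZART data.

```python
import numpy as np

def noise_cross_correlation(tr_a, tr_b, fs, max_lag_s):
    """Cross-correlate two one-bit-normalized noise windows.

    Stacking many such windows over months yields the empirical
    Green's function between a station pair (illustrative sketch).
    """
    a = np.sign(tr_a)                      # one-bit normalization suppresses
    b = np.sign(tr_b)                      # earthquakes and instrument glitches
    cc = np.correlate(a, b, mode="full")
    mid = len(cc) // 2                     # zero-lag index
    lag = int(max_lag_s * fs)
    return cc[mid - lag: mid + lag + 1]    # keep lags within +/- max_lag_s

# Stack hourly windows (all values below are assumptions for illustration)
fs = 1.0                                   # Hz after decimation
n = int(3600 * fs)                         # one-hour window
rng = np.random.default_rng(0)
stack = np.zeros(2 * int(600 * fs) + 1)    # +/- 600 s of lag
for _ in range(24):                        # one day shown; real stacks span months
    stack += noise_cross_correlation(rng.standard_normal(n),
                                     rng.standard_normal(n), fs, 600)
```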
Abstract:
In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines hosts one of the most important deep-water ports, with oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean to the southwest, towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by the maximum values of wave height, flow depth, drawback, inundation area, and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of the Horseshoe and Marques de Pombal faults (HSMPF) as the worst-case scenario, with wave heights of over 10 m reaching the coast approximately 22 min after the rupture. Considering maximum wave height and maximum flow depth, it dominates about 60 % of the impact area in the aggregate scenario. The HSMPF scenario inundates a total area of 3.5 km².
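A minimal numpy sketch of the aggregation step described above: the aggregate scenario keeps, cell by cell, the maximum over the single-scenario grids, and the share of the impact area dominated by one source can then be read off. Grid sizes, values, the scenario index assigned to HSMPF and the inundation threshold are all illustrative assumptions, not NSWING outputs.

```python
import numpy as np

# Six synthetic grids of maximum wave height (metres), stand-ins for
# the per-scenario simulation outputs on a common nested grid.
rng = np.random.default_rng(1)
scenarios = [rng.uniform(0.0, 12.0, size=(200, 200)) for _ in range(6)]

# Aggregate scenario: cell-by-cell maximum over all single scenarios
aggregate = np.maximum.reduce(scenarios)

# Contribution of one scenario (index 0 arbitrarily labelled HSMPF here):
# fraction of inundated cells where it attains the aggregate maximum
hsmpf = scenarios[0]                # placeholder index (assumed)
inundated = aggregate > 0.5         # illustrative flow-depth threshold (m)
share = np.mean(hsmpf[inundated] >= aggregate[inundated])
print(f"HSMPF share of aggregate impact area: {share:.0%}")
```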
Abstract:
Physical exercise is considered essential for maintaining good health. Gym users range in age from 8 to 80 years, thus including the groups most sensitive to indoor air pollution. Although specific legislation for gyms exists, namely for their conditions of operation, it is limited and does not cover indoor air quality (IAQ). The general objective of this study was to assess the IAQ of four gyms in the Porto metropolitan area. Sampling took place between 2 May and 20 June 2014 and, after characterizing the gyms, the following parameters were monitored: ultrafine particles (< 100 nm), airborne particulate matter in the PM1, PM2.5, PM4 and PM10 fractions, carbon dioxide, carbon monoxide, ozone, volatile organic compounds, formaldehyde, ambient temperature and relative humidity, 24 h/day, in rooms hosting different activities (bodybuilding/cardio-fitness room and group class room). The results of the physical and chemical measurements were compared with the protection thresholds and tolerance margins of Decreto-Lei n.º 118/2013 of 20 August, Portaria n.º 353-A/2013 of 4 December, and the statute regulating the construction, installation and operation of gyms. The pollutants with the highest exceedance levels were carbon dioxide, volatile organic compounds and PM2.5 particles. The exceedances are essentially due to overcrowding of the rooms, excessive physical activity and insufficient ventilation. The location of the gym is also an extremely important factor: it is recommended that gyms be sited where the influence of road traffic is low, and away from sources of possible interference from nearby activities, such as the restaurants found in shopping centres.
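A minimal sketch of the exceedance check described above, assuming a simple threshold-times-tolerance rule; the numeric limits below are placeholders for illustration, not the binding values of Portaria n.º 353-A/2013.

```python
# Placeholder protection thresholds (units noted per pollutant); the
# binding values and tolerance margins are those of Portaria n.º 353-A/2013.
LIMITS = {
    "PM2.5": 25.0,    # µg/m³ (placeholder)
    "PM10": 50.0,     # µg/m³ (placeholder)
    "TVOC": 600.0,    # µg/m³ (placeholder)
    "CO2": 2250.0,    # mg/m³, roughly 1250 ppm (placeholder)
}

def exceedances(means, tolerance=1.0):
    """Return pollutants whose measured mean exceeds threshold * tolerance."""
    return [p for p, v in means.items()
            if p in LIMITS and v > LIMITS[p] * tolerance]

# Example 24 h means from a hypothetical cardio-fitness room
print(exceedances({"PM2.5": 40.0, "TVOC": 900.0, "CO2": 3100.0}))
# -> ['PM2.5', 'TVOC', 'CO2']
```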
Abstract:
The existence of satellite images of the West Iberian Margin allowed a comparative study of images as a tool applied to structural geology. Interpretation of LANDSAT images of the Lusitanian Basin domain showed the existence of a previously undescribed WNW-ESE trending set of lineaments. These lineaments are persistent and only observable on small-scale images (e.g. approx. 1:200 000 and 1:500 000) with various radiometric characteristics. They are approximately 20 km long, trend 120°±15°, and cross-cut all other families of lineaments. The fact that these lineaments are perpendicular to the Quaternary thrusts of the Lower Tagus Valley, and that they show no offset across them, suggests that they resulted from the intersection of large tensile fractures with the earth's surface. It is proposed in this work that these lineaments formed on a crustal flexure tens of km long, associated with the Quaternary WNW-ESE oriented maximum compressive stress on the West Iberian Margin. The maximum compressive stress rotated anticlockwise from a NW-SE orientation to approximately WNW-ESE from Late Miocene to Quaternary times (RIBEIRO et al., 1996). Field inspection of the lineaments revealed zones of normal faulting and cataclasis, which are coincident with the lineaments and affect sediments of late Miocene to Quaternary age. These deformation structures show localized extension perpendicular to the lineaments, i.e. perpendicular to the maximum compressive direction, consistent with recent stress data along the West Portuguese Margin (CABRAL & RIBEIRO, 1989; RIBEIRO et al., 1996). Also, to a first approximation, the geographical distribution of these lineaments correlates well with earthquake epicentres and with the areas of largest Quaternary vertical movements within the inverted Lusitanian Basin (CABRAL, 1995).
Abstract:
Life-Cycle Civil Engineering – Biondini & Frangopol
Abstract:
A thesis submitted in fulfilment of the requirements for the Degree of Doctor of Philosophy in Sanitary Engineering in the Faculty of Sciences and Technology of the New University of Lisbon
Abstract:
This dissertation presents a study on forecasting the load diagram of substations of the Rede Elétrica Nacional (REN) using neural networks, with the aim of verifying the viability of the method for future studies. Electricity is nowadays an essential good and plays a fundamental role, both for the country's economy and for individual comfort and satisfaction. With the development of the electricity sector and the growing number of producers, load diagram forecasting becomes important, contributing to the efficiency of companies. The objective of this dissertation is to use artificial neural networks (ANN) to build a network capable of forecasting load diagrams, offering the possibility of reducing costs and waste and improving quality and reliability. Throughout the work, load data (in MW) obtained from REN for the Prelada substation are used, together with data such as temperature, humidity, wind and luminosity, among others. The data were prepared with the help of Excel. Neural networks were then trained in MATLAB, using the Neural Network Fitting Tool, with the aim of obtaining a network that provides the best results and subsequently using it to forecast new data. In the forecasting process, using real data from the Prelada and Ermesinde substations for March 2015, it is shown that ANN can produce credible forecasts, although not exact ones. Thus, with regard to load diagram forecasting, ANN are a good method to use, since they provide the interested party with a good forecast of consumption and of the behaviour of electrical loads. The results obtained in this study are at least satisfactory: the ANN achieve results close to the expected values, although not exactly equal, owing to the error margin inherent in the neural network's learning.
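As a rough Python analogue of the workflow described (the original used MATLAB's Neural Network Fitting Tool), the sketch below fits a small feed-forward regressor to synthetic load-plus-weather data; the feature set, network size and all values are assumptions for illustration, not the REN data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the substation dataset: hour of day plus
# weather covariates as predictors, load (MW) as the target.
rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.integers(0, 24, n),        # hour of day
    rng.normal(15, 5, n),          # temperature (°C)
    rng.uniform(40, 90, n),        # relative humidity (%)
    rng.uniform(0, 20, n),         # wind
    rng.uniform(0, 1, n),          # luminosity
])
y = 30 + 5 * np.sin(2 * np.pi * X[:, 0] / 24) - 0.3 * X[:, 1] \
    + rng.normal(0, 1, n)          # synthetic load in MW

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# One small hidden layer, a common default for a fitting-tool setup (assumed)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print(f"Held-out R^2: {model.score(X_te, y_te):.3f}")
```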
Abstract:
The general objective of this work is the analysis of performance in public administration using financial ratios (the case of the Municipality of Matosinhos). In this study we carry out an economic and financial analysis of the Municipality of Matosinhos, evaluating its performance over the period 2011 to 2014, and we also analyse some factors that influence the capital structure of the 12 large municipalities and their performance. Regarding the economic and financial analysis of the Municipality of Matosinhos, the results show that, in the short term, the municipality is in a favourable liquidity position, with a good safety margin, i.e. it is able to meet its short-term commitments. Over the four-year period, the Municipality of Matosinhos resorted less and less to borrowed capital to finance its assets, a positive trend for the balance of the municipal financial structure. To test for a relationship between capital structure (indebtedness) and performance (return on assets) and the factors that influence them, a non-parametric Spearman correlation analysis was performed using SPSS version 21. Contrary to the hypothesis formulated and to the conclusions of most previous studies, a negative relationship is found, at the 5% significance level, between the level of indebtedness and the size of the municipality. As for the relationship of indebtedness with asset composition and with return on assets, the results are not conclusive, showing no relationship between indebtedness and these factors. A positive correlation is found, at the 1% significance level, between return on assets and growth, i.e. municipalities with higher growth rates show a higher return on assets; this result confirms our hypothesis 4. However, regarding a positive association between the municipality's profitability and its size, the results showed no relationship between the variables.
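The non-parametric Spearman analysis described above (run in SPSS version 21) can be reproduced with scipy; the data below are synthetic stand-ins for the 12 municipalities, constructed only to show the mechanics of the test, not to reproduce the study's results.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative panel for 12 municipalities: size (log assets),
# indebtedness, asset growth and return on assets (ROA). Synthetic data
# with a built-in negative debt-size link, mirroring the finding above.
rng = np.random.default_rng(7)
size = rng.normal(20, 1, 12)
debt = 50 - 2.0 * size + rng.normal(0, 1, 12)
growth = rng.normal(0.03, 0.02, 12)
roa = 0.01 + 0.5 * growth + rng.normal(0, 0.005, 12)

for name, x, y in [("debt vs size", debt, size),
                   ("ROA vs growth", roa, growth)]:
    rho, p = spearmanr(x, y)      # Spearman rank correlation and p-value
    print(f"{name}: rho={rho:+.2f}, p={p:.3f}")
```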
Abstract:
Review of the early literature, as well as more recent results, shows that sulfonamides possess a distinct antimalarial activity. However, when given alone, their action is less marked and slower than that of the antimalarials commonly used in the treatment of the acute attack. Combinations with pyrimethamine provide better results, even in cases of pyrimethamine and chloroquine resistance. This warrants further investigation in an attempt to develop a therapeutic agent suitable for the treatment of such resistant cases. It may also be possible, with an appropriate combination of pyrimethamine and a sulfonamide, to achieve a satisfactory method of suppressive treatment in areas both with and without pyrimethamine resistance. Three main points must still be carefully studied: (1) the risk of the malaria parasite developing resistance to one or both components of the combination; (2) the risk of developing bacterial resistance to sulfonamides if these substances are used on a large scale in doses that are too low. It seems indeed that an antimalarial effect can be obtained with the sulfonamide plus pyrimethamine combination at sulfonamide doses below those usually employed in bacterial diseases. Since the range of ratios providing potentiation is rather large, only those sulfonamide:pyrimethamine ratios should be chosen for which an antibacterial sulfonamidemia is guaranteed; (3) it goes without saying that, although both pyrimethamine and modern sulfonamides, when given by themselves, have proved to possess a large margin of safety, long-term administration of their combination should be carefully studied from the point of view of possible side effects. Substantial evidence has already been produced to show that the long-acting sulfonamide Fanasil (Ro 4-4393), given once or once weekly, possesses marked schizonticidal activity against P. falciparum. Although its action is slower than that of the 4-aminoquinolines, it may be useful as a second-choice drug in semi-immune subjects for the therapy of falciparum malaria. Preliminary results show that, when combined with pyrimethamine, Fanasil is highly effective in suppressing fever and asexual parasitemia due to P. falciparum. Single doses of 1 g Fanasil together with 50 mg pyrimethamine seem to be adequate for the treatment of acute falciparum malaria in semi-immune patients. The onset of action of the combination is much more rapid than that of the single components. Weekly doses of 500 mg Fanasil and 25 mg pyrimethamine appear to provide satisfactory suppressive effects against P. falciparum, at least in East Africa. This combination is active on strains which do not respond satisfactorily to standard doses of pyrimethamine and/or chloroquine, and seems to have a satisfactory sporontocidal effect. Preliminary results indicate that Fanasil alone cannot be recommended for use against the other human malaria parasites; the combination with pyrimethamine appears to be much more effective. East African strains of P. malariae seem to respond better to the combination than do Malayan strains of P. vivax, but further trials are required before a definite assessment can be made. Fanasil by itself has no gametocytocidal or sporontocidal action but seems to potentiate the effect of pyrimethamine, at least on the sporogony of P. falciparum.
Abstract:
Pascoa and Seghir (2009) noticed that when collateralized promises become subject to utility penalties on default, Ponzi schemes may occur. However, equilibrium exists in some interesting cases. Under low penalties, equilibrium exists if the collateral does not yield utility (for example, when it is a productive asset or a security). Equilibrium also exists under more severe penalties and collateral utility gains, when the promise or the collateral are nominal assets and the margin requirements are endogenous: relative inflation rates and margin coefficients can make the income effects dominate the penalty effects. An equilibrium refinement avoids no-trade equilibria sustained by unduly pessimistic repayment beliefs. Our refinement differs from the one used by Dubey, Geanakoplos and Shubik (2005) in that it does not eliminate no-trade equilibria whose low delivery rates are consistent with the propensity to default of agents who are on the verge of selling.
Abstract:
This paper addresses the impact of payment systems on the rate of technology adoption. We present a model where technological shift is driven by demand uncertainty, increased patient benefit, financial variables, and the reimbursement system for providers. Two payment systems are studied: cost reimbursement and (two variants of) DRG. Depending on the system considered, adoption occurs either when patients' benefits are large enough or when the differential reimbursement across technologies offsets the cost of adoption. Cost reimbursement leads to higher adoption of the new technology if the rate of reimbursement is high relative to the margin between new and old technology reimbursement under DRG. Larger patient benefits favour more adoption under the cost reimbursement system, provided that adoption occurs initially under both payment systems.
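A stylized rendering of the adoption rule summarized above, in our own notation (the symbols and thresholds are assumptions for illustration, not the paper's):

```latex
% Stylized adoption rule (notation ours): \Delta B = patient-benefit gain,
% r_new - r_old = reimbursement differential across technologies under DRG,
% K = cost of adoption.
\[
  \text{adopt} \iff \Delta B \ge \bar{B}
  \quad\text{or}\quad
  r_{\text{new}} - r_{\text{old}} \ge K .
\]
```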
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
Directed Research Internship
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfil them. For a long time there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, that cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decide alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all the possible equilibria and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller party may also win, provided its relative size is not too small; more self-delusional voters in the minority party decrease this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006), and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
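Chapter 2's Cournot result lends itself to a small numerical sketch: in a linear symmetric duopoly with a per-firm thinking cost, one can check for which costs the profile (default quantity, best response to the default) is an equilibrium. Demand and cost parameters, the default quantity and the deviation structure below are illustrative assumptions, not the thesis's own calibration.

```python
def profit(q_i, q_j, a=10.0, c=2.0):
    """Cournot profit with inverse demand P = a - (q_i + q_j), unit cost c."""
    return max(a - q_i - q_j, 0.0) * q_i - c * q_i

def best_response(q_j, a=10.0, c=2.0):
    """Profit-maximizing quantity against an opponent producing q_j."""
    return max((a - c - q_j) / 2.0, 0.0)

def asymmetric_equilibrium(k, q_default):
    """Check whether (default, best response to default) is an equilibrium
    when thinking (best-responding) costs k and the default is free.
    Each firm's deviation is to switch mode, holding the rival fixed."""
    q1 = q_default                      # firm 1 does not think
    q2 = best_response(q1)              # firm 2 pays k and best responds
    thinker_ok = profit(q2, q1) - k >= profit(q_default, q1)
    defaulter_ok = profit(q1, q2) >= profit(best_response(q2), q2) - k
    return thinker_ok and defaulter_ok

# Sweep the thinking cost: the asymmetric profile survives only for
# intermediate k (here with a=10, c=2 and an assumed default quantity 3.0)
for k in [0.05, 0.1, 0.25, 0.5, 1.0]:
    print(f"k={k}: {asymmetric_equilibrium(k, q_default=3.0)}")
```

With these numbers the profile fails for very low k (the defaulting firm would rather pay to think) and for high k (the thinking firm would rather take the free default), and holds only in between, which is the mechanism behind asymmetric choices in a symmetric model.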