943 results for Weighted average power tests
Abstract:
The continental shelf adjacent to the Río de la Plata (RdlP) exhibits extremely complex hydrographic and ecological characteristics that are of great socioeconomic importance. Since the long-term environmental variations related to the atmospheric (wind fields), hydrologic (freshwater plume), and oceanographic (currents and fronts) regimes are little known, the aim of this study is to reconstruct the changes in the terrigenous input to the inner continental shelf during the late Holocene (associated with the RdlP sediment discharge) and to unravel the climatic forcing mechanisms behind them. To achieve this, we retrieved a 10 m long sediment core from the RdlP mud depocenter at 57 m water depth (GeoB 13813-4). The radiocarbon age control indicated an extremely high sedimentation rate of 0.8 cm per year, encompassing the past 1200 years (AD 750-2000). We used element ratios (Ti/Ca, Fe/Ca, Ti/Al, Fe/K) as regional proxies for the fluvial input signal, and variations in the relative abundance of salinity-indicative diatom groups (freshwater versus marine-brackish), to assess the variability in terrigenous freshwater and sediment discharges. Ti/Ca, Fe/Ca, Ti/Al, Fe/K and the freshwater diatom group showed their lowest values between AD 850 and 1300, while the highest values occurred between AD 1300 and 1850. The variations in the sedimentary record can be attributed to the Medieval Climate Anomaly (MCA) and the Little Ice Age (LIA), both of which had a significant impact on rainfall and wind patterns over the region. During the MCA, a weakening of the South American summer monsoon system (SAMS) and the South Atlantic Convergence Zone (SACZ) could explain the lowest element ratios (indicative of a lower terrigenous input) and a marine-dominated diatom record, both indicative of a reduced RdlP freshwater plume. In contrast, during the LIA, a strengthening of the SAMS and SACZ may have led to an expansion of the RdlP river plume far to the north, as indicated by higher element ratios and a marked freshwater diatom signal. Furthermore, a possible multidecadal oscillation since AD 1300, probably associated with the Atlantic Multidecadal Oscillation (AMO), reflects the variability in both the SAMS and SACZ systems.
Abstract:
The Precambrian basement beneath the Pechora Basin of northern Russia is known from deep (up to approximately 4.5 km) drill holes to be largely composed of Neoproterozoic successions, variously deformed and metamorphosed and intruded by magmatic suites of Vendian age. Presented here are new single-zircon Pb-evaporation (Kober method) ages from eight intrusions across the Izhma, Pechora and Bolshezemel'skaya Zones, all from below the Lower Ordovician (locally Middle Cambrian) unconformity. The majority of the intrusions (six) yield remarkably similar ages of 550-560 Ma, apparently dating a widespread pulse of late- to post-tectonic magmatism. An early Vendian granite (618 Ma) has been identified in the northeasternmost region (Bolshezemel'skaya Zone) and a Devonian granodiorite (380 Ma) in the Pechora Zone, where mid to late Palaeozoic magmatism has previously been reported. Evidence of inheritance in the zircon populations suggests the presence of Mesoproterozoic crust beneath the Neoproterozoic complexes.
Abstract:
Twenty-one core samples from DSDP/IPOD Leg 63 were analyzed for products of chlorophyll diagenesis. In addition to the tetrapyrrole pigments, perylene and carotenoid pigments were isolated and identified. The 16 core samples from the San Miguel Gap site (467) and the five from the Baja California borderland location (471) afforded a unique opportunity to examine tetrapyrrole diagenesis in clay-rich marine sediments that are very high in total organic matter. The chelation reaction, whereby free-base porphyrins give rise to metalloporphyrins (viz., nickel), is well documented within the downhole sequence of sediments from the San Miguel Gap (Site 467). Recognition of unique arrays of highly dealkylated copper and nickel ETIO-porphyrins, exhibiting nearly identical carbon-number homologies (viz., C-23 to C-30; mode = C-26), enabled subtraction of this component (thought to be derived from an allochthonous source) and thus permitted description of the actual in situ diagenesis of autochthonous chlorophyll derivatives.
Abstract:
Although a simple approach exists to resolve the circularity between value and the discount rate, the Adjusted Present Value proposed by Myers (1974), practitioners still rely on the traditional Weighted Average Cost of Capital (WACC) approach of weighting the cost of debt, Kd, and the cost of equity, Ke, and discounting the Free Cash Flow (FCF). We show how to resolve the circularity when calculating value with the FCF and the WACC. The solution leads to a known result when we assume Ku, the unlevered cost of equity, as the discount rate for the tax savings: the Capital Cash Flow (CCF) discounted at Ku. When assuming Kd as the discount rate for the tax savings, we find an expression for calculating value that does not involve circularity. We do this for a single period and for N periods.
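For a single period, the circularity and the CCF result can be sketched as follows (a minimal illustration in standard notation, with V the levered value, D debt, and T the tax rate; not reproduced from the paper itself):

    \[
    V_0 = \frac{FCF_1}{1 + WACC}, \qquad WACC = Ku - T\,Kd\,\frac{D_0}{V_0},
    \]

so the WACC requires V_0, which in turn requires the WACC. If the tax savings T\,Kd\,D_0 are assumed to be discounted at Ku, substituting the second expression into the first makes the circular term cancel:

    \[
    V_0 = \frac{FCF_1 + T\,Kd\,D_0}{1 + Ku} = \frac{CCF_1}{1 + Ku}.
    \]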
Abstract:
This document presents a valuation of the holding company InRetail Perú Corp. using the discounted cash flow method. InRetail Perú Corp. (hereinafter InRetail or the company) is a conglomerate of leading retail companies whose operations are concentrated in Peru across three business units: supermarkets, pharmacies, and shopping malls. Based on its products and services, the company is organized into two groups: (i) InRetail Consumer (supermarkets and pharmacies) and (ii) InRetail Real Estate Corp. (which manages the shopping malls). The valuation methodology is the free cash flow (FCF) method, commonly used to value companies expected to operate as going concerns. This method determines the value of the company by estimating the cash flows the company will generate in the future and then discounting them to present value at a rate appropriate to the risk of those flows. That rate is the weighted average cost of capital (WACC). The data used for the valuation are as of year-end 2014; various sources specialized in the retail business were consulted, and an expert in retail-sector valuations was interviewed. Two valuations were carried out: (i) a consolidated valuation, in which the company as a whole was valued using the FCF methodology, and (ii) an individual valuation, in which each business unit was valued using the multiples methodology. The FCF methodology was also considered for the individual valuation, but it was discarded because the financial information available for each business unit is limited. Finally, following the valuation analysis of InRetail Perú Corp., we recommend buying the stock, owing to the low penetration of formal retail in Peru, which is the main driver of this sector's growth potential; moreover, the company's leadership in its businesses should keep sales growing and make the company increasingly solid in a sector where competition is aggressive.
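The free cash flow method described above can be summarized by the standard DCF valuation formula (a textbook expression, not taken from the document):

    \[
    V_0 = \sum_{t=1}^{N} \frac{FCF_t}{(1 + WACC)^t} + \frac{TV_N}{(1 + WACC)^N},
    \]

where TV_N is a terminal value capturing the flows beyond the explicit forecast horizon.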
Abstract:
The growth in energy demand projected for the middle of the 21st century, based on demographic growth and rising consumption in developing countries, motivates the search for renewable energy sources with lower environmental impact, in line with international policy agreements. The supply of supplementary energy has therefore become vital to modern societies, and its extension offshore has become a recent concern from both energy and ecological standpoints. Several forms of energy conversion have been developed over the years, notably energy from thermal gradients. The Southern Brazilian Continental Shelf (SBCS) exhibits high spatial and temporal variability in its temperature fields, so an analysis is needed of the regions with the greatest energy potential with respect to the vertical temperature gradient. In this study, data from the OCCAM model were used, with a horizontal grid resolution of 0.25° and a vertical resolution of 66 levels distributed along a vertical coordinate system. Sea surface temperature (SST) images from the AVHRR (Advanced Very High Resolution Radiometer) sensor were used to validate the OCCAM model data. Analysis of the model's mean fields indicated the most viable energy site, owing to a mean thermal gradient pattern of approximately 0.17 °C along the vertical water column (545 m depth). Data from this location were applied to an ocean thermal energy conversion module under development at the Universidade Federal do Rio Grande (FURG). The study region proved to contain a site with excellent energy potential, where maximum energy production can reach 111.9 MW, associated with a dominant temporal variability pattern of 12 months. This energy site is most efficient during summer and autumn across the years, and its mean output for the whole period is 94.3 MW. In this study, two currents, the Brazil Current (BC) and the Coastal Countercurrent (CCC), carrying waters of tropical and subantarctic origin with continental inputs, respectively, correlate strongly with the thermal gradient values and with the significant energy conversion events. The energy site showed high stability with respect to seasonality and to the range of meteorological and oceanographic events, so it can qualify as a supplementary source for the country's energy matrix in the near future.
Abstract:
The continued growth of the world population increases demand and competition for energy, placing great strain on existing non-renewable energy sources. As a result, global policies for generating renewable, less polluting energy are being strengthened, alongside the development of new technologies. Several forms of energy conversion have been developed over the years, notably turbine-based current energy converters, which show high conversion capacity and are already in operation. The three-dimensional TELEMAC3D model was used to investigate the hydrodynamic processes. This model was coupled to an energy conversion module to analyze the most viable sites for energy conversion on the Southern Brazilian Continental Shelf. The study region proved to contain two regions with high potential for exploiting marine current energy; however, the most viable region for installing current converters is the northern region, bounded by the Farol da Conceição and Farol da Solidão lighthouses, which can reach a mean power of 10 kW/day and integrated values of 3.5 MW/year. A seasonality analysis showed that spring is the most energetic period in both study regions. The highest energy conversion intensities were estimated at a temporal variability of 16 days, showing high correlation with events associated with the passage of meteorological fronts over the region. The northern site, with barriers representing the shape of the converters, stands out by maintaining good conversion during events of optimal energy potential. This improvement is due to the intensification of the current field associated with the physical structure, which optimizes the site's efficiency. No significant differences were observed in the temporal variability pattern of the simulations studied, indicating that the presence of the barriers does not induce large changes in the temporal pattern of energy conversion at the time scales analyzed in this work. High energy generation events were related to the incidence of strong winds from the southern and northern quadrants, indicating that, given the shape and layout of the converters, southwesterly and northerly winds can favor optimal energy conversion events. The simulations of the conversion sites demonstrated high generation capacity, with four events of extreme energy generation. The northern site showed generation exceeding 59.39 GWh per year, equivalent to 0.22% of the energy consumption of the state of Rio Grande do Sul in 2010.
Abstract:
Current modulation and medium-access schemes, such as Wideband Code-Division Multiple Access (WCDMA) or Orthogonal Frequency-Division Multiple Access (OFDMA), which are optimized for efficient use of the electromagnetic spectrum and high transmission rates, produce signals with a high Peak-to-Average Power Ratio (PAPR) and strict linearity requirements. Traditional amplification architectures, i.e., those based on current-mode operation of the active device, cannot satisfy both requirements simultaneously. The Power Amplifier (PA) therefore incurs a significant degradation in energy efficiency in exchange for greater linearity, raising base-station operating costs for mobile telecommunications operators as well as the environmental impact. This work focuses on the Doherty architecture, the principal solution adopted to improve the linearity/efficiency trade-off in mobile communications base stations. To that end, the basic principles of radio-frequency amplifiers are presented, together with a theoretical analysis of the traditional two-way Doherty Power Amplifier (DhPA) and its variants. The study is complemented by the design and implementation of a class-AB driver PA and of a high-power DhPA, putting into practice the theory and design techniques studied throughout this work, together with the challenges of implementation with real high-power devices.
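For reference, the PAPR of a signal x(t) is conventionally defined as the ratio of its peak instantaneous power to its average power, often quoted in dB (the standard textbook definition, not a result of this thesis):

    \[
    \mathrm{PAPR} = \frac{\max_t |x(t)|^2}{\mathbb{E}\left[|x(t)|^2\right]}, \qquad \mathrm{PAPR}_{\mathrm{dB}} = 10 \log_{10} \mathrm{PAPR}.
    \]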
Abstract:
The financial crisis of 2007-2008 led to extraordinary government intervention in firms and markets. The scope and depth of government action rivaled that of the Great Depression. Many traded markets experienced dramatic declines in liquidity leading to the existence of conditions normally assumed to be promptly removed via the actions of profit seeking arbitrageurs. These extreme events motivate the three essays in this work. The first essay seeks and fails to find evidence of investor behavior consistent with the broad 'Too Big To Fail' policies enacted during the crisis by government agents. Only in limited circumstances, where government guarantees such as deposit insurance or U.S. Treasury lending lines already existed, did investors impart a premium to the debt security prices of firms under stress. The second essay introduces the Inflation Indexed Swap Basis (IIS Basis) in examining the large differences between cash and derivative markets based upon future U.S. inflation as measured by the Consumer Price Index (CPI). It reports the consistent positive value of this measure as well as the very large positive values it reached in the fourth quarter of 2008 after Lehman Brothers went bankrupt. It concludes that the IIS Basis continues to exist due to limitations in market liquidity and hedging alternatives. The third essay explores the methodology of performing debt based event studies utilizing credit default swaps (CDS). It provides practical implementation advice to researchers to address limited source data and/or small target firm sample size.
Abstract:
Passive sampling devices (PS) are widely used for pollutant monitoring in water, but estimation of measurement uncertainties by PS has seldom been undertaken. The aim of this work was to identify key parameters governing PS measurements of metals and their dispersion. We report the results of an in situ intercomparison exercise on diffusive gradient in thin films (DGT) in surface waters. Interlaboratory uncertainties of time-weighted average (TWA) concentrations were satisfactory (from 28% to 112%) given the number of participating laboratories (10) and ultra-trace metal concentrations involved. Data dispersion of TWA concentrations was mainly explained by uncertainties generated during DGT handling and analytical procedure steps. We highlight that DGT handling is critical for metals such as Cd, Cr and Zn, implying that DGT assembly/dismantling should be performed in very clean conditions. Using a unique dataset, we demonstrated that DGT markedly lowered the LOQ in comparison to spot sampling and stressed the need for accurate data calculation.
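For context, DGT converts the mass of metal accumulated on its binding resin into a TWA concentration via the standard DGT equation (the usual textbook form; the symbols are the conventional ones rather than notation from this paper):

    \[
    C_{\mathrm{TWA}} = \frac{M\,\Delta g}{D\,A\,t},
    \]

where M is the accumulated metal mass, \Delta g the diffusive layer thickness, D the metal's diffusion coefficient in the gel, A the exposure area, and t the deployment time.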
Abstract:
As anthropogenic activities push many ecosystems toward different functional regimes, the resilience of social-ecological systems has become a pressing issue. Local actors, involved in a wide variety of groups, ranging from local independent initiatives to large formal institutions, can act on these issues by collaborating on the development, promotion, or implementation of practices better aligned with what the environment can provide. From these repeated collaborations emerge complex networks, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems (SES) in which they participate. The topology of actor networks that favors the resilience of their SES is characterized by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing the homogenization of the network) and closer to their own interests; it must be well connected and easily synchronizable, to facilitate consensus, increase social capital, and enhance learning capacity; finally, it must be robust, so that the first two characteristics do not suffer from the voluntary withdrawal or the exclusion of certain actors. These characteristics, which are fairly intuitive both conceptually and in their mathematical application, are often used separately to analyze the structural qualities of empirical actor networks. However, some of them are inherently incompatible: for example, a network's modularity cannot increase at the same rate as its connectivity, and the latter cannot be improved while also improving robustness. This obstacle makes it difficult to build a global measure, because the degree to which an actor network helps improve the resilience of its SES cannot be the simple sum of the characteristics above, but is rather the result of a subtle trade-off among them. The work presented here aims (1) to explore the trade-offs among these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyze an empirical network in light of, among other things, these structural qualities. This thesis consists of an introduction and four chapters numbered 2 to 5. Chapter 2 is a literature review on SES resilience; it identifies a series of structural characteristics (and the network measures that correspond to them) linked to improved resilience in SES. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change contribute to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews produced a list of the actors involved in biodiversity co-management on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions among these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations.
Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded. The method proceeds in two steps: first, an optimization algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a trade-off between high levels of modularity, connectivity, and robustness. Second, an empirical network (such as that of the Eyre Peninsula) is compared with the archetypal network through a structural distance measure. The shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves on the simulated annealing algorithm used in Chapter 4. As is common for such algorithms, the simulated annealing used projected the dimensions of the multi-objective problem onto a single dimension (as a weighted average). While this technique gives very good individual results, it produces only one solution among the multitude of possible trade-offs between the different objectives. To better explore these trade-offs, we propose a multi-objective simulated annealing algorithm that, rather than optimizing a single solution, optimizes a multidimensional surface of solutions. This study, which focuses on the social part of social-ecological systems, improves our understanding of the actor structures that contribute to SES resilience. It shows that while some resilience-enhancing characteristics are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronizability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively measuring empirical actor networks, opening the way, for example, to case-study comparisons or to monitoring actor networks over time. In addition, this thesis includes a case study that sheds light on the importance of certain institutional groups for coordinating collaborations and knowledge exchange among actors with potentially divergent interests.
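A minimal sketch in Python of the weighted-average scalarization described above (single-solution simulated annealing, as Chapter 4 is described as using). The objective functions standing in for modularity, connectivity, and robustness are hypothetical placeholders, not the thesis's actual measures:

    import math
    import random

    def weighted_sum_sa(init, neighbor, objectives, weights,
                        t0=1.0, cooling=0.995, steps=20000):
        """Scalarized simulated annealing: the multi-objective problem is
        projected onto one dimension as a weighted average of the scores."""
        def score(x):
            return sum(w * f(x) for w, f in zip(weights, objectives))
        current, current_score, temp = init, score(init), t0
        best, best_score = current, current_score
        for _ in range(steps):
            candidate = neighbor(current)
            candidate_score = score(candidate)
            delta = candidate_score - current_score
            # Always accept improvements; accept worse moves with Boltzmann
            # probability so the search can escape local optima.
            if delta >= 0 or random.random() < math.exp(delta / temp):
                current, current_score = candidate, candidate_score
                if current_score > best_score:
                    best, best_score = current, current_score
            temp *= cooling  # geometric cooling schedule
        return best, best_score

    # Hypothetical usage: `g0` is some network representation, `mutate`
    # rewires one edge, and the three placeholder functions return the
    # structural scores to be traded off.
    # best, s = weighted_sum_sa(g0, mutate,
    #                           [modularity, connectivity, robustness],
    #                           weights=[1/3, 1/3, 1/3])

Because the weights fix one particular compromise in advance, each run yields a single point on the trade-off surface, which is exactly the limitation the multi-objective variant in Chapter 5 addresses.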
Abstract:
Introduction: Chromium is an essential trace mineral for carbohydrate and lipid metabolism, and it is currently prescribed to help control diabetes mellitus. Results of previous systematic reviews and meta-analyses of chromium supplementation and metabolic profiles in diabetes have been inconsistent. Aim: The objective of this meta-analysis was to assess the effects of chromium supplementation on metabolic profiles (glycaemic control and cholesterol) and its safety in type 2 diabetes mellitus. Methods: Literature searches in PubMed, Scopus and Web of Science were performed using related terms and keywords, restricted to randomized clinical trials published during 2000-2014. Results: Thirteen trials fulfilled the inclusion criteria and were included in this systematic review. Total doses of chromium supplementation and brewer's yeast ranged from 42 to 1,000 µg/day, and duration of supplementation ranged from 30 to 120 days. The analysis indicated a significant effect of chromium supplementation in diabetics on fasting plasma glucose, with a weighted average effect size of -29.26 mg/dL (p = 0.01, 95% CI = -52.4 to -6.09), and on total cholesterol, with a weighted average effect size of -6.7 mg/dL (p = 0.01, 95% CI = -11.88 to -1.53). Conclusions: The available evidence suggests favourable effects of chromium supplementation on glycaemic control in patients with diabetes. Chromium supplementation may additionally improve total cholesterol levels.
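Weighted average effect sizes of this kind are typically computed by inverse-variance weighting. A generic sketch follows (a textbook fixed-effect computation; the trial values in the example are made up for illustration, and the review's actual model may differ):

    import math

    def fixed_effect_meta(effects, ses):
        """Fixed-effect (inverse-variance) pooled effect size with 95% CI."""
        weights = [1.0 / se**2 for se in ses]               # weight = 1 / variance
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))           # SE of the pooled estimate
        ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
        return pooled, ci

    # Hypothetical example: three trials reporting mean change in fasting
    # plasma glucose (mg/dL) with their standard errors.
    print(fixed_effect_meta([-30.0, -25.0, -35.0], [10.0, 12.0, 9.0]))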
Abstract:
This paper reports on a low-frequency piezoelectric energy harvester that scavenges energy from a wire carrying an AC current. The harvester is described, fabricated and characterized. The device consists of a silicon cantilever with an integrated piezoelectric capacitor and a proof mass that incorporates a permanent magnet. When brought close to an AC current-carrying wire, the magnet couples to the AC magnetic field of the wire, causing the cantilever to vibrate and generate power. The measured average power dissipated across an optimal resistive load was 1.5 μW, obtained by exciting the device into mechanical resonance using the electromagnetic field from the 2 A source current. The measurements also reveal that the device has a nonlinear response due to a spring-hardening mechanism.
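As background, the average power delivered to a resistive load follows the standard circuit relation (not a detail reported by the paper):

    \[
    \bar{P} = \frac{V_{\mathrm{rms}}^2}{R_L},
    \]

so the reported 1.5 μW corresponds to the RMS voltage measured across whichever load resistance R_L maximized this quantity at resonance.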
Abstract:
Modern High-Performance Computing (HPC) systems are steadily increasing in size and complexity, driven by demand for larger simulations involving more complicated tasks and higher accuracy. Moreover, as a side effect of Dennard scaling approaching its ultimate power limit, software efficiency also plays an important role in increasing the overall performance of a computation. Tools that measure application performance in these increasingly complex environments provide insight into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as Intel's Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since many problems in heterogeneous fields can be represented as large linear systems, an optimized and scalable linear system solver can significantly decrease the time spent computing their solutions. One of the most widely used algorithms for solving large systems is Gaussian Elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK by profiling their execution during the solution of linear systems on the HPC architecture offered by CINECA. It also compares the energy and power values across different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
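For illustration, RAPL package energy can also be read directly from the Linux powercap interface, which is the kind of low-level counter that tools like PAPI wrap at a higher level. A minimal sketch (assumes an Intel CPU with the intel_rapl driver; reading energy_uj typically requires root, and counter wraparound is ignored here):

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter, microjoules

    def read_energy_uj(path=RAPL):
        with open(path) as f:
            return int(f.read())

    def measure(fn, *args, **kwargs):
        """Run fn and report the package energy and average power it used."""
        e0, t0 = read_energy_uj(), time.perf_counter()
        result = fn(*args, **kwargs)
        e1, t1 = read_energy_uj(), time.perf_counter()
        joules = (e1 - e0) / 1e6  # energy_uj counts microjoules
        print(f"energy: {joules:.3f} J, avg power: {joules / (t1 - t0):.3f} W")
        return result

    # Hypothetical usage, with solve_linear_system standing in for either solver:
    # measure(solve_linear_system, A, b)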