Abstract:
A stand-alone sea ice model is tuned and validated using satellite-derived, basinwide observations of sea ice thickness, extent, and velocity from the years 1993 to 2001. This is the first time that basin-scale measurements of sea ice thickness have been used for this purpose. The model is based on the CICE sea ice model code developed at the Los Alamos National Laboratory, with some minor modifications, and forcing consists of 40-yr ECMWF Re-Analysis (ERA-40) and Polar Exchange at the Sea Surface (POLES) data. Three parameters are varied in the tuning process: Ca, the air–ice drag coefficient; P*, the ice strength parameter; and α, the broadband albedo of cold bare ice, with the aim being to determine the subset of this three-dimensional parameter space that gives the best simultaneous agreement with observations with this forcing set. It is found that observations of sea ice extent and velocity alone are not sufficient to unambiguously tune the model, and that sea ice thickness measurements are necessary to locate a unique subset of parameter space in which simultaneous agreement is achieved with all three observational datasets.
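As a rough illustration of the tuning strategy this abstract describes, a brute-force sweep over the three-dimensional parameter space might look like the sketch below. The parameter ranges, the misfit function and the `run_model` interface are illustrative assumptions, not values or code from the paper.

```python
import itertools

def misfit(params, run_model, observations):
    """RMS disagreement between one model run and the three
    observational datasets (thickness, extent, velocity),
    each normalized by its observed magnitude."""
    sim = run_model(**params)  # assumed to return a dict of simulated fields
    total = 0.0
    for field in ("thickness", "extent", "velocity"):
        total += ((sim[field] - observations[field]) / observations[field]) ** 2
    return total ** 0.5

def tune(run_model, observations):
    # Illustrative sweep values; the paper's actual ranges differ.
    Ca_vals  = [0.5e-3, 1.0e-3, 1.5e-3]  # air-ice drag coefficient
    P_vals   = [15e3, 27.5e3, 40e3]      # ice strength parameter P*
    alb_vals = [0.57, 0.62, 0.67]        # cold bare-ice broadband albedo
    best = None
    for Ca, P, alb in itertools.product(Ca_vals, P_vals, alb_vals):
        params = {"Ca": Ca, "Pstar": P, "albedo": alb}
        m = misfit(params, run_model, observations)
        if best is None or m < best[0]:
            best = (m, params)
    return best  # (smallest misfit, parameter subset achieving it)
```

The paper's point that extent and velocity alone cannot pin down a unique parameter subset corresponds here to several parameter combinations sharing nearly identical misfit until the thickness term is included.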
Abstract:
The purpose of this paper is to explore how companies that hold carbon trading accounts under the European Union Emissions Trading Scheme (EU ETS) respond to climate change by using disclosures on carbon emissions as a means to generate legitimacy, compared to other companies. The study is based on disclosures made in annual reports and stand-alone sustainability reports of UK listed companies from 2001 to 2012. The study uses content analysis to capture both the quality and the volume of the carbon disclosures. The results show a significant increase in both the quality and the volume of carbon disclosures after the launch of the EU ETS. Companies with carbon trading accounts provide more detailed disclosures than those without an account. We also find that company size is positively correlated with disclosure, while the association with industry produces an inconclusive result.
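A minimal sketch of the kind of content analysis described above, assuming a hypothetical keyword-based coding scheme (the study's actual instrument is more elaborate):

```python
# Hypothetical disclosure categories and keywords; the study's
# actual coding scheme is an assumption here, not reproduced.
CATEGORIES = {
    "emission figures":  ["tonnes co2", "co2e", "ghg emissions"],
    "trading activity":  ["eu ets", "allowances", "carbon credits"],
    "reduction targets": ["reduction target", "net zero", "abatement"],
}

def score_disclosure(text):
    """Return (quality, volume): quality = number of distinct
    categories mentioned at least once, volume = total keyword hits."""
    text = text.lower()
    quality = 0
    volume = 0
    for keywords in CATEGORIES.values():
        hits = sum(text.count(k) for k in keywords)
        if hits:
            quality += 1
        volume += hits
    return quality, volume
```

Scoring each report this way yields the two panel variables (quality and volume) whose pre/post EU ETS difference the study tests.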
Abstract:
We present here a straightforward method for obtaining a quantitative indication of an individual academic's research output. Different versions, selections and options are presented to enable a user to easily calculate values both for stand-alone papers and for a person's overall collection of outputs. The procedure is particularly useful as a metric giving a quantitative indication of a person's research output over a time window. Examples are included to show how the method works in practice and how it compares to alternative techniques.
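The abstract does not reproduce the method's formula. As a hedged illustration of what a windowed research-output metric can look like, here is an h-index restricted to a time window; the h-index choice is an assumption for illustration, not the paper's procedure.

```python
def h_index(citations):
    """Classic h-index: the largest h such that h papers
    each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def windowed_output(papers, start, end):
    """Quantitative indication over a time window: the h-index of
    papers published within [start, end]. `papers` is a list of
    (year, citation_count) pairs."""
    window = [c for (year, c) in papers if start <= year <= end]
    return h_index(window)
```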
Abstract:
Global warming has attracted attention from all over the world and led to concern about carbon emissions. The Kyoto Protocol, the first major international regulatory emissions trading scheme, was introduced in 1997 and outlined strategies for reducing carbon emissions (Ratnatunga et al., 2011). With the increased interest in carbon reduction, the Protocol came into force in 2005, and 191 nations have now ratified it (UNFCCC, 2012). Under cap-and-trade schemes, each company has a carbon emission target; when a company's emissions exceed that target, it must either face fines or buy emission allowances from other companies. Thus, unlike most other social and environmental issues, carbon emissions can create costs for companies, both in introducing low-emission equipment and systems and in purchasing allowances when they emit more than their targets. Despite the importance of carbon emissions to companies, carbon emission reporting still operates in an unregulated environment, and companies are only required to disclose when the information is material either in value or in substance (Miller, 2005; Deegan and Rankin, 1997). Although there has been an increase in the volume of carbon emission disclosures in companies' financial reports and stand-alone social and environmental reports, intended to show concern for the environment and social responsibility (Peters and Romi, 2009), the motivations behind corporate carbon emission disclosures, and whether such disclosures affect corporate environmental reputation and financial performance, have yet to be explored. The problems with carbon emissions lie on both the financial and the non-financial side of corporate governance. On one hand, companies need to spend money on reducing emissions or on paying penalties when they emit more than allowed.
On the other hand, as the public is more interested in environmental issues than before, carbon emissions can also affect a company's image with regard to its environmental performance. The importance of the carbon emission issue is beginning to be recognized by companies across industries as one of the critical issues in supply chain management (Lee, 2011). In a study conducted by the Investor Responsibility Research Centre Institute for Corporate Responsibility (IRRCI), 80% of the companies analysed face carbon risks resulting from emissions in their supply chains, and over 80% found that the majority of their greenhouse gas (GHG) emissions come from electricity and other direct suppliers (Trucost, 2009). The review of the extant literature shows the growing importance of carbon emission issues and reveals a gap in the study of carbon reporting and disclosure, and in work linking corporate environmental reputation and corporate financial performance with carbon reporting (Lohmann, 2009a; Ratnatunga and Balachandran, 2009; Bebbington and Larrinaga-Gonzalez, 2008). This study focuses on investigating the current status of UK carbon emission disclosures, the determinants of corporate carbon disclosure, and the relationship between carbon emission disclosures and the environmental reputation and financial performance of UK listed companies from 2004 to 2012, and explores the explanatory power of classical disclosure theories.
Abstract:
The climate over the Arctic has undergone changes in recent decades. In order to evaluate the coupled response of the Arctic system to external and internal forcing, our study focuses on the estimation of regional climate variability and its dependence on large-scale atmospheric and regional ocean circulations. A global ocean–sea ice model with regionally high horizontal resolution is coupled to an atmospheric regional model and a global terrestrial hydrology model. This way of coupling divides the global ocean model setup into two different domains: one coupled, where the ocean and the atmosphere interact, and one uncoupled, where the ocean model is driven by prescribed atmospheric forcing and runs in a so-called stand-alone mode. Therefore, selecting a specific area for the regional atmosphere implies that the ocean–atmosphere system can develop ‘freely’ in that area, whereas for the rest of the global ocean, the circulation is driven by prescribed atmospheric forcing without any feedbacks. Five different coupled setups are chosen for ensemble simulations. The coupled domains were chosen to estimate the influences of the Subtropical Atlantic, Eurasian and North Pacific regions on northern North Atlantic and Arctic climate. Our simulations show that the regional coupled ocean–atmosphere model is sensitive to the choice of the modelled area. The different model configurations reproduce the mean climate and its variability differently. Only two out of five model setups were able to reproduce the Arctic climate as observed under recent climate conditions (ERA-40 Reanalysis). Evidence is found that the main source of uncertainty for Arctic climate variability and its predictability is the North Pacific. The prescription of North Pacific conditions in the regional model leads to significant correlation with observations, even if the whole North Atlantic is within the coupled model domain. 
However, the inclusion of the North Pacific area into the coupled system drastically changes the Arctic climate variability to a point where the Arctic Oscillation becomes an ‘internal mode’ of variability and correlations of year-to-year variability with observational data vanish. In line with previous studies, our simulations provide evidence that Arctic sea ice export is mainly due to ‘internal variability’ within the Arctic region. We conclude that the choice of model domains should be based on physical knowledge of the atmospheric and oceanic processes and not on ‘geographic’ reasons. This is particularly the case for areas like the Arctic, which has very complex feedbacks between components of the regional climate system.
Abstract:
Satellite-based (e.g., Synthetic Aperture Radar [SAR]) water level observations (WLOs) of the floodplain can be sequentially assimilated into a hydrodynamic model to decrease forecast uncertainty. This has the potential to keep the forecast on track, thus providing an Earth Observation (EO) based flood forecast system. However, the operational applicability of such a system for floods developing over river networks requires further testing. One of the promising techniques for assimilation in this field is the family of ensemble Kalman filters (EnKF). These filters use a limited-size ensemble representation of the forecast error covariance matrix. This representation tends to develop spurious correlations as the forecast-assimilation cycle proceeds, which is a further complication for dealing with floods in either urban areas or river junctions in rural environments. Here we evaluate the assimilation of WLOs obtained from a sequence of real SAR overpasses (the X-band COSMO-SkyMed constellation) in a case study. We show that a direct application of a global Ensemble Transform Kalman Filter (ETKF) suffers from filter divergence caused by spurious correlations. However, a spatially-based filter localization substantially moderates the development of spurious correlations in the forecast error covariance matrix, directly improving the forecast and also making it possible to further benefit from simultaneous online inflow error estimation and correction. Additionally, we propose and evaluate a novel along-network metric for filter localization, which is physically meaningful for the flood-over-a-network problem. Using this metric, we further evaluate the simultaneous estimation of channel friction and spatially-variable channel bathymetry, for which the filter seems able to converge to sensible values. Results also indicate that friction is a second-order effect in flood inundation models applied to gradually varied flow in large rivers. 
The study is not conclusive regarding whether in an operational situation the simultaneous estimation of friction and bathymetry helps the current forecast. Overall, the results indicate the feasibility of stand-alone EO-based operational flood forecasting.
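The spatially-based localization discussed above is commonly implemented as a Schur (elementwise) product of the ensemble covariance with a distance-dependent taper. The sketch below uses the standard Gaspari-Cohn fifth-order function; the along-network distance metric proposed in the paper would simply replace whatever distance matrix is passed in.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order taper; r = distance / localization radius.
    Equals 1 at r = 0 and vanishes for r >= 2."""
    r = abs(r)
    if r <= 1:
        return (((-0.25 * r + 0.5) * r + 0.625) * r - 5 / 3) * r ** 2 + 1
    if r <= 2:
        return ((((r / 12 - 0.5) * r + 0.625) * r + 5 / 3) * r - 5) * r \
               + 4 - 2 / (3 * r)
    return 0.0

def localize(cov, dist, radius):
    """Elementwise product of an ensemble covariance with a distance-based
    taper, suppressing spurious long-range correlations. `dist[i, j]` can
    be the along-network distance in the paper's metric; here it is any
    symmetric distance matrix."""
    taper = np.vectorize(lambda d: gaspari_cohn(d / radius))(dist)
    return cov * taper
```

Because the taper is exactly zero beyond twice the localization radius, observations can no longer spuriously update water levels on distant, hydraulically unconnected branches.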
Abstract:
Research in Bid Tender Forecasting Models (BTFM) has been in progress since the 1950s. None of the developed models has been an easy-to-use tool for bidding practitioners, because of the advanced mathematical apparatus and massive data inputs required. This scenario began to change in 2012 with the development of the Smartbid BTFM, a fairly simple model that presents a series of graphs enabling any project manager to study competitors using a relatively short historical tender dataset. However, despite the advantages of this new model, it is still necessary to study all the auction participants as an indivisible group; that is, the original BTFM was not devised for analyzing the behavior of a single bidding competitor or a subgroup of them. The present paper addresses that flaw and presents a stand-alone methodology for estimating future competitors' bidding behaviors separately.
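As a toy illustration of per-competitor bid modelling (a simplified Friedman-style estimate, not the Smartbid methodology itself), one can estimate the probability of underbidding a single competitor from that competitor's historical bid-to-estimate ratios:

```python
def win_probability(my_bid, competitor_ratios, cost_estimate):
    """Empirical probability that one competitor bids above `my_bid`,
    given their historical bid-to-estimate ratios. A simplified
    Friedman-style sketch, not the Smartbid BTFM."""
    competitor_bids = [r * cost_estimate for r in competitor_ratios]
    above = sum(1 for b in competitor_bids if b > my_bid)
    return above / len(competitor_bids)
```

Repeating this per competitor, instead of pooling all participants into one indivisible group, is the separation the paper is after; multiplying the individual probabilities (under an independence assumption) would give the chance of underbidding them all.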
Abstract:
This paper proposes a parallel hardware architecture for image feature detection based on the Scale Invariant Feature Transform (SIFT) algorithm, applied to the Simultaneous Localization And Mapping (SLAM) problem. The work also proposes specific hardware optimizations considered fundamental to embed such a robotic control system on a chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320 x 240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-oriented optimizations on performance, area and accuracy.
Abstract:
PV-Wind-Hybrid systems for stand-alone applications have the potential to be more cost efficient than PV-alone systems, since the two energy sources can, to some extent, compensate for each other's minima. The combination of solar and wind should be especially favorable for high-latitude locations such as Sweden, with a very uneven distribution of solar radiation over the year. In this article PV-Wind-Hybrid systems have been studied for 11 locations in Sweden. These systems supply the household electricity for single-family houses. The aim was to evaluate the system costs, the cost of energy generated by the PV-Wind-Hybrid systems, the effect of the load size, and to what extent the combination of these two energy sources can reduce the costs compared to a PV-alone system. The study has been performed with the simulation tool HOMER, developed by the National Renewable Energy Laboratory (NREL) for techno-economic feasibility studies of hybrid systems. The results from HOMER show that the net present cost (NPC) for a hybrid system designed for an annual load of 6000 kWh with a capacity shortage of 10% will vary between $48,000 and $87,000. Sizing the system for a load of 1800 kWh/year gives an NPC of $17,000 for the best and $33,000 for the worst location. PV-Wind-Hybrid systems are more cost effective than PV-alone systems for all locations; using a hybrid system reduces the NPC by 36% for Borlänge and by 64% for Lund. The cost per kWh of electricity varies between $1.4 for the worst location and $0.9 for the best location when a PV-Wind-Hybrid system is used.
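The link between a net present cost and a cost per kWh can be sketched with a capital recovery factor; the discount rate and project lifetime below are illustrative assumptions, not the HOMER settings used in the study.

```python
def capital_recovery_factor(rate, years):
    """CRF used to annualize a net present cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_cost(npc, annual_load_kwh, rate=0.06, years=25):
    """Cost of energy ($/kWh) implied by a net present cost, an annual
    served load, a discount rate and a project lifetime. The default
    rate and lifetime are illustrative assumptions."""
    annualized_cost = npc * capital_recovery_factor(rate, years)
    return annualized_cost / annual_load_kwh
```

With an assumed NPC of $48,000, a 6000 kWh annual load, a 6% discount rate and a 25-year lifetime, this gives roughly $0.63/kWh; the article's $0.9-1.4/kWh figures depend on the actual capacity shortage, discount rate and lifetime used in the HOMER runs.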
Abstract:
In recent years the number of bicycles with e-motors has increased steadily. Within the pedelec segment – bikes where an e-motor supports the pedaling – a special group of transportation bikes has developed. These bikes have storage boxes in addition to the basic parts of a bike. Due to the space available on top of those boxes, it is possible to install a PV system to generate electricity that could be used to recharge the battery of the pedelec. Such a system would allow grid-independent charging of the battery and could increase the range of motor support. The feasibility of such a PV system is investigated for a three-wheeled pedelec supplied by the company BABBOE NORDIC. The measured electricity generation of this mobile system is compared to the possible electricity generation of a stationary system. To measure the consumption of the pedelec, different tracks are covered, and the energy necessary to recharge the bike battery is measured using an energy logger. This recharge energy is used as an indirect measure of the electricity consumption. A PV prototype system is installed on the bike: a simple stand-alone PV system consisting of a PV panel, a charge controller with MPP tracker and a solar battery. This system has the task of generating as much electricity as possible. The produced PV current and voltage are measured and documented using a data logger, and the average PV power is then calculated. To compare the electricity produced by the on-bike system to that of a stationary system, the irradiance on the latter is measured simultaneously. Due to partial shading of the on-bike PV panel, caused by the driver and some other bike parts, the average power output while riding the bike is very low. It is too low to support the motor directly. 
With an installation similar to the PV prototype system, and provided the bike is always parked in a sunny spot, an on-bike system could generate enough electricity to at least partly recharge a bike battery during one day. The stationary PV system using the same PV panel could have produced between 1.25 and 8.1 times as much as the on-bike PV system. Even though the investigation was done for a very specific case, it can be concluded that an on-bike PV system using similar components is not feasible for recharging the battery of a pedelec in an appropriate manner. The biggest barrier is that partial shading of the PV panel, which can hardly be avoided during operation and parking, results in a significant reduction of the generated electricity. The installation of the on-bike PV system would also increase the weight of the whole bike and require space, reducing the storage capacity. To use solar energy for recharging a bike battery, an indirect approach gives better results: a stationary stand-alone PV system located in a sunny spot without shading and adjusted to use the maximum available solar energy. The battery of the bike is then charged using the corresponding charger and an inverter that provides AC power from the captured solar energy.
Abstract:
The aim of this study was to investigate electricity supply solutions for an educational center that is being built in Chonyonyo, Tanzania. Off-grid power generation solutions and further optimization possibilities were studied for the case. The study was done for Engineers Without Borders in Sweden, who are working with the Mavuno Project on the educational center. The school is set to start operating in 2015 with 40 girl students in the beginning. The educational center will help to improve gender equality by offering high-quality education in a safe environment for girls in a rural area. It is important for the system to be economically and environmentally sustainable. The area has great potential for photovoltaic power generation, so PV was considered as the primary power generation and a diesel generator as a reliable backup. The system size optimization was done with HOMER. For the simulations, HOMER required component data, weather data and load data. Common components were chosen with standard properties, the loads were based on load estimations from 2011, and the weather data was acquired from the NASA database. The system size optimization result for this base case was a system with 26 kW of PV, a 5.5 kW diesel generator, a 15 kW converter and 112 T-105 batteries. The initial cost of the system was 55 875 €, the total net present cost 92 121 € and the levelized cost of electricity 0.264 €/kWh. In addition, three optimization possibilities were studied. First, it was studied how the system should be designed, and how the system size would be affected, if the night loads (security lights) used DC, and whether the system could then be extended in blocks. It was found that the system size could be decreased because the inverter losses would be avoided, and extension in blocks was found to be possible. The second study was about inverter stacking, where multiple inverters can work as one unit. This type of connection allows only the required number of inverters to run while shutting down the excess ones, so the converter unit can run with higher efficiency and lower power consumption. In the future, with higher loads, the system could easily be extended by connecting more inverters either in parallel or in series, depending on what is needed. Multiple inverters would also offer higher reliability than one centralized inverter. The third study examined how the choice of location for centralized power generation affects the cable sizing for the system. It was found that centralized power generation should be located close to high loads in order to avoid long runs of thick cables, and future loads should also be considered when choosing the location. For the educational center, the potential locations for centralized power generation were found to be close to the school buildings and close to the dormitories.
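The cable-sizing conclusion (keep centralized generation close to the high loads) follows directly from the DC voltage-drop relation. In this sketch the copper resistivity and the 3% drop limit are illustrative assumptions, not values from the study.

```python
RHO_COPPER = 0.0175  # ohm * mm^2 / m, approximate resistivity of copper

def min_cross_section(length_m, current_a, system_v, max_drop=0.03):
    """Smallest conductor cross-section (mm^2) keeping the round-trip
    voltage drop under `max_drop` of the system voltage (DC two-wire run).
    V_drop = rho * (2 * L) * I / A  =>  A = rho * 2L * I / V_drop."""
    allowed_drop_v = max_drop * system_v
    return RHO_COPPER * 2 * length_m * current_a / allowed_drop_v
```

The required cross-section grows linearly with run length, so doubling the distance to a load doubles the cable size for the same drop limit; this is why generation belongs near the high loads.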
Abstract:
This research investigates the factors that lead Latin American non-financial firms to manage risks using derivatives. The main focus is on currency risk management. With this purpose, this thesis is divided into an introduction and two main chapters, which have been written as stand-alone papers. The first paper describes the results of a survey on derivatives usage and risk management answered by the CFOs of 74 Brazilian non-financial firms listed on the São Paulo Stock Exchange (BOVESPA). The main evidence found is: i) larger firms are more likely to use financial derivatives; ii) foreign exchange risk is the risk most managed with derivatives; iii) Brazilian managers are more concerned with legal and institutional aspects of using derivatives, such as the taxation and accounting treatment of these instruments, than with issues related to implementing and maintaining a risk management program using derivatives. The second paper studies the determinants of risk management with derivatives in four Latin American countries (Argentina, Brazil, Chile and Mexico). I investigate not only the decision of whether to use financial derivatives, but also the magnitude of risk management, measured by the notional value of outstanding derivatives contracts. This is the first study, to the best of my knowledge, to use derivatives holdings information in emerging markets. The multi-country setting allows the analysis of institutional and economic factors, such as foreign currency indebtedness, the high volatility of exchange rates, the instability of the political and institutional framework and the development of financial markets, which are issues of second-order importance in developed markets. The main contribution of the second paper is to the understanding of the relationship among currency derivatives usage, foreign debt and the sensitivity of operational earnings to currency fluctuations in Latin American countries. 
Unlike previous findings for US firms, my evidence shows that derivatives held by Latin American firms are capable of producing cash flows comparable to financial expenses and investments, showing that derivatives are key instruments in their risk management strategies. It is also the first work to show strong and robust evidence that firms that benefit from local currency devaluation (e.g. exporters) have a natural currency hedge for foreign debt that allows them to bear higher levels of debt in foreign currency. This implies that firms with this revenue-cost structure require lower levels of hedging with derivatives. The findings also provide evidence that large firms are more likely to use derivatives, but the magnitude of derivatives holdings seems to be unrelated to the size of the firm, consistent with findings for US firms.
Abstract:
In recent years the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has significantly increased. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with adequate attributes for professional use is proposed. The system works in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract their 3D positions and orientations. This work also includes comprehensive research on image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by using several one-camera tracking modules linked by a sensor fusion algorithm, in order to obtain a larger working range. A setup with two modules was built and tested, resulting in performance similar to the stand-alone configuration.
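The accuracy and precision tests mentioned above distinguish systematic offset from repeatability. A minimal sketch of the two statistics for a scalar measurement follows; the evaluation protocol here is an assumption for illustration, not the paper's testbed procedure.

```python
import statistics

def accuracy_and_precision(measured, ground_truth):
    """Accuracy: mean absolute deviation of the measurements from the
    ground truth (systematic error). Precision: sample standard
    deviation of the repeated measurements (repeatability)."""
    errors = [abs(m - ground_truth) for m in measured]
    return statistics.mean(errors), statistics.stdev(measured)
```

For a 3D tracker the same idea applies per axis, or to Euclidean distances from a reference position.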
Abstract:
This thesis examines the characteristics of the decision process in which creditors choose between the court-supervised reorganization or the liquidation of a firm in financial distress. The work is divided into four chapters. The second chapter systematically presents the theoretical framework and empirical evidence, highlighting important results from studies in the areas of corporate reorganization and bankruptcy. The chapter also presents three case studies to show the complexity of each case with respect to the concentration of claims, conflicts of interest among creditor classes, and the final decision on approving or rejecting the reorganization plan. The third chapter analyzes the determinants of delays in voting on the reorganization plan, proposing an empirical study of delays between 2005 and 2014. The results suggest that: (i) higher concentration of debt among creditor classes is associated with shorter delays; (ii) a larger number of banks voting on the reorganization plan is associated with longer delays; (iii) the average voting delay decreases when only one creditor class votes on the plan; (iv) labor and secured creditors delay the vote when the value of the assets securing the debt in case of liquidation is higher; (v) the average voting delay is longer when the debtor's industry performs worse, with the delay being requested by the unsecured and secured classes; and (vi) the proposed sale of assets is the main topic discussed in plan-voting meetings in the cases with longer voting delays. Finally, the fourth chapter presents evidence on creditors' votes and the probability of approval of the reorganization plan. 
The results suggest that: (i) labor creditors are inclined to approve the reorganization plan even when it is rejected by the other classes; (ii) plans with more heterogeneous payment proposals across the three creditor classes are less likely to be accepted; (iii) the chance of plan approval decreases when more unsecured creditors participate in the reorganization; and (iv) plans proposing the sale of assets are more likely to be approved. Finally, higher concentration of debt in the secured class reduces the chance of plan approval, while the opposite holds for the unsecured class.
Abstract:
The larviculture of most fish species faces the challenge of dependence on live food (LF) and the lack of formulated diets (FD) that fully meet the larvae's requirements. The low digestibility and nutritional quality of FDs are among the factors that explain the failure observed when larvae receive only FD. To evaluate the effect of combining FD with LF on the growth and survival of jundiá (Rhamdia quelen) larvae, compared with FD or LF alone, newly hatched larvae (5.57 mm; 1.41 mg) were initially stocked in twelve 10 L aquaria (100 larvae per aquarium). Four replicates were fed ad libitum with one of the three diets for 20 days (FD) or 48 days (LF or the FD + LF combination). Larvae fed only FD showed reduced growth and survival compared with those fed LF or the FD + LF combination. In addition, larvae in the FD + LF treatment reached a higher final weight (170 mg) than those fed LF alone (110 mg). The better performance of larvae fed FD + LF shows that most of the nutrients required by the larvae are supplied more adequately when both diets are offered together. Future work on larval nutrition will therefore contribute more to elucidating this topic by including comparisons with the combined use of FD + LF, rather than only testing in isolation new ingredients and protein sources commonly used in diets for juveniles and adults.