Abstract:
The variables involved in the equations that describe realistic synaptic dynamics always vary within a limited range. Their boundedness makes the synapses forgetful, not through the mere passage of time, but because new experiences overwrite old memories. The forgetting rate depends on how many synapses are modified by each new experience: many changes mean fast learning and fast forgetting, whereas few changes mean slow learning and long memory retention. Reducing the average number of modified synapses can extend the memory span, at the price of a reduced amount of information stored when a new experience is memorized. Any trick that slows down the learning process in a smart way can improve memory performance. We review some of the tricks that make it possible to elude fast forgetting (oblivion). They are based on stochastic selection of the synapses whose modifications are actually consolidated following each new experience: in practice, only a randomly selected, small fraction of the synapses eligible for an update are actually modified. This allows the system to acquire the amount of information necessary to retrieve the memory without compromising the retention of old experiences. The fraction of modified synapses can be further reduced, in a smart way, by changing synapses only when it is really necessary, i.e. when the post-synaptic neuron does not respond as desired. Finally, we show that such a stochastic selection emerges naturally from spike-driven synaptic dynamics that read noisy pre- and post-synaptic neural activities. These activities can in fact be generated by a chaotic system.
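A minimal sketch of the stochastic-selection rule just described (a binary perceptron-style learner; all names and parameter values are illustrative, not taken from the reviewed models): candidate changes are consolidated only with a small probability q, and only when the post-synaptic response is wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_update(w, x, target, q=0.05, lr=1.0):
    """Consolidate candidate synaptic changes with probability q,
    and only when the post-synaptic response is wrong."""
    y = 1 if w @ x > 0 else -1                   # post-synaptic response
    if y == target:                              # already responds as desired:
        return w                                 # change nothing
    chosen = rng.random(x.size) < q              # stochastic selection of synapses
    w[chosen] += lr * target * x[chosen]         # consolidate only the chosen few
    return w

# Usage: slow, selective learning of random binary patterns
w = np.zeros(100)
for _ in range(1000):
    x = rng.choice([-1.0, 1.0], size=100)
    w = stochastic_update(w, x, target=rng.choice([-1, 1]))
```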
Abstract:
Metals price risk management is a key issue of financial risk in metal markets because of the uncertainty in commodity price fluctuations, exchange rates, and interest rates, and because of the large price risk borne by both metals producers and consumers. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both metals producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and derivatives such as futures contracts, swaps, and options. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors, and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies; it also discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives. Here, the option pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare the theoretical values of the options with their observed market values. Predicting future trends in copper prices is essential for managing market price risk successfully, so the third part discusses econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part shows how LME copper prices can be explained by a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. The estimated simultaneous econometric model for the copper industry is:

$$Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, GDP_t^{1.7151}\, e^{0.0158\, IP_t}$$
$$Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{OIL(t)}^{-0.1559}\, USDI_t^{1.2432}\, LIBOR_{t-6}^{-0.0561}$$
$$Q_t^D = Q_t^S$$

which yields the reduced-form price equation

$$P_{t-1}^{CU} = e^{-2.5165}\, GDP_t^{2.1910}\, e^{0.0202\, IP_t}\, T_t^{-0.1799}\, P_{OIL(t)}^{0.1991}\, USDI_t^{-1.5881}\, LIBOR_{t-6}^{0.0717}$$

where $Q_t^D$ and $Q_t^S$ are world demand for and supply of copper at time $t$, respectively. $P_{t-1}$ is the lagged price of copper, which is the focus of the analysis in this part. $GDP_t$ is world gross domestic product at time $t$, representing aggregate economic activity. Industrial production should also be considered, so global industrial production growth, denoted $IP_t$, is included in the model. $T_t$ is a time variable, a useful proxy for technological change. The price of oil at time $t$, denoted $P_{OIL(t)}$, is a proxy for the cost of energy in producing copper. $USDI_t$ is the U.S. dollar index at time $t$, an important variable for explaining copper supply and copper prices. Finally, $LIBOR_{t-6}$ is the 1-year London Interbank Offered Rate, lagged six months. Although the model can be applied to other base metal industries, omitted exogenous variables, such as the price of a substitute or a combined substitute-price variable, have not been considered in this study. Based on this econometric model and a Monte Carlo simulation analysis, the probabilities that the monthly average copper price in 2006 and 2007 will exceed specific option strike prices are estimated. The final part evaluates risk management strategies, including option strategies, metal swaps, and simple options, in relation to the simulation results. Basic option strategies such as bull spreads, bear spreads, and butterfly spreads, created using both call and put options for 2006 and 2007, are evaluated. Each risk management strategy for 2006 and 2007 is then analyzed based on the day's data and the price prediction model. Applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
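A minimal sketch of the Monte Carlo analysis described above (all parameter values hypothetical; the thesis derives its price distribution from the econometric model, whereas a geometric Brownian motion with risk-neutral drift is assumed here): simulate monthly copper prices, then estimate both the probability that the yearly average exceeds a strike and the value of an average-price (Asian) call.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs ($/tonne): spot price, volatility, risk-free rate, strike
s0, sigma, r, strike = 6000.0, 0.30, 0.04, 6500.0
months, n_paths, dt = 12, 100_000, 1.0 / 12.0

# Simulate GBM monthly prices for each path
z = rng.standard_normal((n_paths, months))
increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(increments, axis=1))

avg_price = prices.mean(axis=1)                 # arithmetic monthly average
prob_above = (avg_price > strike).mean()        # P(average price > strike)
asian_call = np.exp(-r) * np.maximum(avg_price - strike, 0.0).mean()

print(f"P(average price > strike): {prob_above:.3f}")
print(f"Asian call value: {asian_call:.1f} $/tonne")
```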
Abstract:
A series of aluminum alloys containing additions of scandium, zirconium, and ytterbium were cast to evaluate the effect of partial ytterbium substitution for scandium on tensile behavior. Because of the high price of scandium, a crucible-melt interaction study was performed to ensure that no scandium was lost to graphite, alumina, magnesia, or zirconia crucibles after holding a liquid Al-Sc master alloy for 8 hours at 900 °C in an argon atmosphere. The alloys were subjected to an isochronal aging treatment and tested for conductivity and Vickers microhardness after each increment. For scandium-containing alloys, peak hardnesses of 520-790 MPa and peak tensile stresses of 138-234 MPa were observed after aging from 150-350 °C for 3 hours in increments of 50 °C; for alloys without scandium, peak hardnesses of 217-335 MPa and peak tensile stresses of 45-63 MPa were observed after a 3-hour, 150 °C aging treatment. The hardness and tensile strength of the ytterbium-containing alloy were found to be lower than in the alloy with no ytterbium substitution.
Abstract:
Algae are considered a promising source of biofuels for the future. However, the environmental impact of algae-based fuel shows high variability across previous LCA studies due to a lack of accurate data from researchers and industry. The National Alliance for Advanced Biofuels and Bioproducts (NAABB) project was designed to produce and evaluate new technologies that can be implemented by the algal biofuel industry and to establish overall process sustainability. The MTU research group within NAABB worked on the environmental sustainability part of the consortium with UOP-Honeywell and with the University of Arizona (Dr. Paul Blowers). Several life cycle analysis (LCA) models were developed within the GREET Model and SimaPro 7.3 software to quantitatively assess the environmental viability and sustainability of algal fuel processes. The baseline GREET harmonized algae life cycle was expanded and replicated in SimaPro; important differences in emission factors between the GREET/eGRID and SimaPro/Ecoinvent databases were compared, and adjustments were made to the SimaPro analyses. The results indicated that in most cases SimaPro assigns a higher emission penalty to inputs of electricity, chemicals, and other materials in the algae biofuels life cycle. A system-wide model of the algae life cycle was built, starting with preliminary data from the literature, progressing to detailed analyses based on inputs from all NAABB research areas, and finally investigating several important scenarios as variations to the baseline. Scenarios include conversion to jet fuel instead of biodiesel or renewable diesel, the infrastructure impacts of algae cultivation, co-product allocation methodology, and different uses of lipid-extracted algae (LEA). The infrastructure impact of algae cultivation is minimal compared to the overall life cycle. However, the scenarios investigating LEA use as animal feed, instead of internal recycling for energy use and nutrient recovery, reflect the high potential variability in LCA results: calculated life cycle GHG values for biofuel production scenarios where LEA is used as animal feed ranged from a 55% reduction to a 127% increase compared to the GREET baseline scenario, depending on the choice of feed meal. Different allocation methods also affect LCA results significantly. Four novel harvesting technologies and two extraction technologies described in the NAABB internal report were analyzed using SimaPro. The results indicated that the combination of acoustic extraction and acoustic harvesting is the most promising of all combinations for extracting algal oil. These scenario evaluations provide important insights for planning the future of an algae-based biofuel industry.
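As a simple illustration of why allocation choice matters (numbers hypothetical, not NAABB results): splitting a process's emissions between the fuel and a co-product such as LEA by mass versus by energy content yields different fuel-attributed GHG intensities.

```python
# Hypothetical process totals: emissions and two product streams
total_ghg_kg = 1000.0                              # kg CO2e per batch
fuel = {"mass_kg": 400.0, "energy_MJ": 16000.0}    # biofuel stream
lea = {"mass_kg": 600.0, "energy_MJ": 9000.0}      # lipid-extracted algae stream

def fuel_ghg_intensity(key):
    """Fuel-attributed GHG intensity under allocation by the given property."""
    fuel_share = fuel[key] / (fuel[key] + lea[key])
    return total_ghg_kg * fuel_share / fuel["energy_MJ"]   # kg CO2e per MJ fuel

print(f"Mass allocation:   {fuel_ghg_intensity('mass_kg'):.4f} kg CO2e/MJ")
print(f"Energy allocation: {fuel_ghg_intensity('energy_MJ'):.4f} kg CO2e/MJ")
```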
Abstract:
Since the advent of automobiles, alcohol has been considered a possible engine fuel [1,2]. With recent increased concern about the high price of crude oil due to fluctuating supply and demand, and about environmental issues, interest in alcohol-based fuels has increased [2,3]. However, using pure alcohols, or blends with conventional fuels in high percentages, requires changes to the engine and fuel system design [2]. This creates the need for simple and accurate combustion models for blends of conventional fuels and alcohols that can be used to develop parametric burn-rate and knock models for designing more efficient spark-ignited (SI) engines. To contribute to this understanding, numerical simulations were performed to obtain detailed characteristics of gasoline-ethanol blends with respect to laminar flame speed (LFS), autoignition, and flame-wall interactions. The one-dimensional premixed flame code CHEMKIN® was applied to simulate the burning velocity and autoignition characteristics using the freely propagating model and the closed homogeneous reactor model, respectively. Computational Fluid Dynamics (CFD) was used to obtain detailed flow, temperature, and species fields for flame-wall interactions. A semi-detailed, validated chemical kinetic model for a gasoline surrogate fuel developed by Andrae and Head [4] was used for the LFS and autoignition studies. For the quenching study, a skeletal chemical kinetic mechanism for the gasoline surrogate, with 50 species and 174 reactions, was used. The surrogate fuel was defined as a mixture of pure n-heptane, isooctane, and toluene. For the LFS study, the ethanol volume fraction was varied from 0 to 85%, initial pressure from 4 to 8 bar, initial temperature from 300 to 900 K, and dilution from 0 to 32%. For the autoignition study, the ethanol volume fraction was varied from 0 to 85%, initial pressure from 20 to 60 bar, initial temperature from 800 to 1200 K, and dilution from 0 to 32%, at equivalence ratios of 0.5, 1.0, and 1.5, representing the in-cylinder conditions of an SI engine. For the quenching study, three ethanol blends (E0, E25, and E85) were examined in detail at initial pressures of 8 atm and 17 atm, with an initial wall temperature of 400 K; quenching thicknesses and heat fluxes to the wall were computed. The laminar flame speed was found to increase with ethanol concentration and temperature but decrease with pressure and dilution. The autoignition time was found to increase with ethanol concentration at lower temperatures but to decrease marginally at higher temperatures; it also decreased with pressure and equivalence ratio but increased with dilution. The average quenching thickness decreased with increasing ethanol concentration in the blend. Heat flux to the wall increased with ethanol percentage and with higher initial pressures, whereas it decreased with increased dilution. Unburned hydrocarbon (UHC) and CO percentages were also found to decrease with ethanol concentration in the blend.
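The LFS portion of such a parametric study can be sketched with open-source tools. A minimal example using Cantera's freely propagating premixed flame model (not the CHEMKIN setup used in the study; the mechanism file `surrogate.yaml` and its species names are assumptions, and any mechanism containing the fuel species would do):

```python
import cantera as ct

# Hypothetical mechanism file containing the gasoline-surrogate + ethanol kinetics
gas = ct.Solution('surrogate.yaml')
gas.TP = 400.0, 4e5  # initial temperature [K] and pressure [Pa]
# E85-like blend of ethanol and isooctane (species names are mechanism-dependent)
gas.set_equivalence_ratio(1.0, fuel='C2H5OH:0.85, IC8H18:0.15',
                          oxidizer='O2:1.0, N2:3.76')

flame = ct.FreeFlame(gas, width=0.03)          # freely propagating flame domain
flame.set_refine_criteria(ratio=3, slope=0.06, curve=0.12)
flame.solve(loglevel=0, auto=True)

# The inlet velocity of the converged solution is the laminar flame speed
print(f"Laminar flame speed: {flame.velocity[0] * 100:.1f} cm/s")
```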
Abstract:
This thesis attempts to understand why people adopt or reject individual-use renewable energy technologies (IURET). I used factors from Everett Rogers' Diffusion of Innovations theory to understand how people's perceptions of the characteristics of a given IURET (such as price, compatibility, and complexity), the characteristics of the individual adopter (such as innovativeness and environmental awareness), and the communication network (interpersonal communication and mass media) can influence adoption. An online questionnaire was sent to 101 randomly selected Michigan households (using random digit dialing) to ask whether they had adopted at least one IURET and to assess the above-mentioned factors from Rogers' theory. Data analysis was then conducted in SPSS using chi-squared tests and binary logistic regression to determine the relationship between adoption behavior (the dependent variable) and the factors from Rogers' theory (the independent variables) while controlling for education. The results show that Rogers' factors of price and observability and the control variable of education were all significant in explaining adoption, but the other factors of Rogers' theory were not. For example, if individuals perceive the price of IURET to be reasonable, or if they observe their neighbors using these technologies, then they are more likely to adopt. These results indicate that, if we want to promote greater adoption of IURET, we should focus our efforts on making IURET more affordable through incentives and other mechanisms. Adopters could also be given some form of reward for providing free demonstrations of their IURET in use to their neighbors, to take advantage of observability effects.
Abstract:
Recently, the discussion about the optimal treatment for different subsets of patients suffering from coronary artery disease has re-emerged, mainly because of the uncertainty among doctors and patients regarding the phenomenon of unpredictable early and late stent thrombosis. Surgical revascularization using multiple arterial bypass grafts has repeatedly proven its superiority over percutaneous intervention techniques, especially in patients suffering from left main stem disease and three-vessel coronary disease. Several prospective randomized multicenter studies comparing early and mid-term results following PCI and CABG have been quite restrictive with respect to patient enrollment, with less than 5% of all patients treated during the same period being enrolled. Coronary artery bypass grafting allows the most complete revascularization in one session, because all target coronary vessels larger than 1 mm can be bypassed in their distal segments. Once the patient has been referred for surgery, surgeons have to consider the most complete arterial revascularization in order to decrease the long-term need for re-revascularization; for instance, the patency rate of the left internal thoracic artery grafted to the distal left anterior descending artery may be as high as 90-95% after 10 to 15 years. Early mortality following isolated CABG has been as low as 0.6 to 1% in the most recent period (reports from the University Hospital Berne and the University Hospital of Zurich). Beside these excellent results, the CABG option seems to be less expensive than PCI over time, since the need for additional PCI is rather high following initial PCI, and the price of stent devices is still very high, particularly in Switzerland. Patients, insurers, and experts in health care should be better and more honestly informed about the risks and costs of PCI and CABG procedures, as well as about the much higher rate of subsequent interventions following PCI. A team approach for all patients in whom both options could be offered seems mandatory to avoid giving patients unbalanced information. Looking at recent developments in transcatheter valve treatments, the revival of cardiological-cardiosurgical conferences seems to be a good option for optimizing cooperation between the two medical specialties: cardiology and cardiac surgery.
Abstract:
A feeding trial was conducted with 870-lb steers fed 137 days to evaluate replacing cracked corn with dry and wet distillers grains with solubles (DGS) as feed for finishing cattle. Dry DGS was evaluated at 16% of diet dry matter. Wet DGS (WDGS) was evaluated at 14.6%, 26.2%, and 37.5% of diet dry matter. Control diets were supplemented with urea or a combination of urea and soybean meal. Feeding 16% dry DGS or 14.6% wet DGS increased rate of gain and tended to increase carcass fatness. Increasing the amount of wet DGS in the diet decreased feed intake, reduced gain, and improved feed conversion. The calculated net energy for gain values for dry and wet DGS were 0.92 and 1.5 times the energy value of corn grain. Economic returns declined slightly as the percentage of wet DGS increased in the diet, but remained above those of the two diets without DGS. The benefits from feeding wet DGS averaged $25, $21, and $19 per head for steers fed 14.6%, 26.2%, and 37.5%, respectively, based on a formula price for wet DGS tied to the price of corn and including a charge for transportation of the wet feed.
Abstract:
An experiment was conducted using 95 Continental crossbred steers. The cattle were sorted by ultrasound 160 days before slaughter into a low backfat group (Low BF) and a higher backfat group (High BF). Half of the Low BF and half of the High BF steers were implanted, whereas the other halves were not. Data from the experiment were used in two hypothetical markets. One market was a high-yield beef program (HY) that did not allow the use of implants. The second market was a commodity beef program (CM) that allowed the use of implants. The cattle were priced as an unsorted group (ALL) and as two sorted groups (Low BF and High BF) within the HY (non-implanted) and CM (implanted) markets. The CM program had a base price of $1.05/lb hot carcass weight (HCW), with a $0.15/lb HCW discount for quality grade (QG) Select and a $0.20/lb HCW discount for yield grade (YG) 4. The HY program used a base price of $1.07/lb HCW, with premiums ($/lb HCW) paid for YG ≤ 0.9 ($0.15), 1.0-1.4 ($0.10), and 1.5-1.9 ($0.03). Carcasses were discounted ($/lb HCW) for YG 2.5-2.9 ($0.03), 3.0-3.9 ($0.15), and ≥ 4.0 ($0.35). This data set provides good evidence that the end point at which to sell a group of cattle depends on the particular market. Sorting had an economic advantage over ALL in the HY Low BF and the CM High BF groups. The HY High BF cattle should have been sold sooner because of the discounts received for increased YG, which was directly affected by increased BF. Furthermore, the CM Low BF group should have been fed longer to increase the number of carcasses grading Choice.
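A small sketch of the HY grid pricing described above (premiums and discounts exactly as listed; the function name is hypothetical, and no adjustment is listed for YG 2.0-2.4, so none is applied):

```python
def hy_price_per_lb(yield_grade, base=1.07):
    """HY program price in $/lb HCW after yield-grade premiums/discounts."""
    if yield_grade <= 0.9:
        adj = 0.15
    elif yield_grade <= 1.4:
        adj = 0.10
    elif yield_grade <= 1.9:
        adj = 0.03
    elif yield_grade < 2.5:
        adj = 0.00   # no adjustment listed for YG 2.0-2.4
    elif yield_grade <= 2.9:
        adj = -0.03
    elif yield_grade <= 3.9:
        adj = -0.15
    else:
        adj = -0.35
    return base + adj

# Usage: an 800-lb HCW carcass at YG 3.2 earns (1.07 - 0.15) * 800 = $736.00
print(f"${hy_price_per_lb(3.2) * 800:.2f}")
```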
Abstract:
This article addresses the issue of kriging-based optimization of stochastic simulators. Many of these simulators depend on factors that tune the level of precision of the response, the gain in accuracy coming at the price of computational time. The contribution of this work is two-fold: first, we propose a quantile-based criterion for the sequential design of experiments, in the fashion of the classical expected improvement criterion, which allows an elegant treatment of heterogeneous response precisions. Second, we present a procedure for allocating the computational time given to each measurement, allowing a better distribution of the computational effort and increased efficiency. Finally, the optimization method is applied to an original application in nuclear criticality safety. This article has supplementary material available online. The proposed criterion is available in the R package DiceOptim.
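A minimal stand-in for a quantile-based improvement criterion (not DiceOptim's exact formula; the Gaussian-process posterior mean and standard deviation at each candidate are assumed given): the classical expected improvement is computed against posterior quantiles instead of means, so that noisier, less precise measurements are penalized.

```python
import numpy as np
from scipy.stats import norm

def quantile_ei(mu, sd, best_q, beta=0.9):
    """Expected improvement measured against beta-quantiles (minimization).

    mu, sd: GP posterior mean and standard deviation at the candidate point.
    best_q: lowest beta-quantile among the designs evaluated so far.
    """
    q = mu + norm.ppf(beta) * sd            # beta-quantile of the prediction
    z = (best_q - q) / max(sd, 1e-12)
    return (best_q - q) * norm.cdf(z) + sd * norm.pdf(z)

# Usage with hypothetical posterior values at three candidate points
for mu, sd in [(1.2, 0.3), (0.9, 0.6), (1.0, 0.1)]:
    print(f"mu={mu}, sd={sd}: criterion = {quantile_ei(mu, sd, best_q=1.1):.4f}")
```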
Abstract:
In land systems, equitably managing trade-offs between planetary boundaries and human development needs represents a grand challenge for sustainability-oriented initiatives. Informing such initiatives requires knowledge about the nexus between land use, poverty, and environment. This paper presents results from Lao PDR, where we combined nationwide spatial data on land use types and the environmental state of landscapes with village-level poverty indicators. Our analysis reveals two general but contrasting trends. First, landscapes with paddy or permanent agriculture allow a greater number of people to live in less poverty but come at the price of a decrease in natural vegetation cover. Second, people practising extensive swidden agriculture and living in intact environments are often better off than people in degraded paddy or permanent agriculture. As poverty rates within different landscape types vary more than between landscape types, we cannot posit a general land use–poverty–environment nexus. However, the distinct spatial patterns and configurations of these rates point to other important factors at play. Drawing on ethnicity as a proximate factor for endogenous development potentials and on accessibility as a proximate factor for external influences, we explore these linkages further. Ethnicity is strongly related to poverty in all land use types, almost independently of accessibility, implying that social distance outweighs geographic or physical distance. In turn, accessibility, almost a precondition for poverty alleviation, is mainly beneficial to ethnic majority groups and people living in paddy or permanent agriculture; these groups are able to translate improved accessibility into poverty alleviation. Our results show that the concurrence of external influences with local, highly contextual development potentials is key to shaping outcomes of the land use–poverty–environment nexus. By addressing such leverage points, these findings can help guide more effective development interventions. At the same time, they point to the need in land change science to better integrate the understanding of place-based land indicators with process-based drivers of land use change.
Abstract:
We propose a way to incorporate non-tariff barriers (NTBs) into the four workhorse models of the modern trade literature within computable general equilibrium (CGE) models. CGE models feature intermediate linkages and thus allow us to study global value chains (GVCs). We show that the Ethier-Krugman monopolistic competition model, the Melitz firm heterogeneity model, and the Eaton-Kortum model can each be defined as an Armington model with generalized marginal costs, generalized trade costs, and a demand externality. As is known from the literature, in both the Ethier-Krugman model and the Melitz model generalized marginal costs are a function of the quantity of factor input bundles. In the Melitz model, generalized marginal costs are also a function of the price of the factor input bundles: lower factor prices raise the number of firms that can enter the market profitably (the extensive margin), reducing the generalized marginal costs of a representative firm. For the same reason, the Melitz model features a demand externality: in a larger market, more firms can enter. We implement the different models in a CGE setting with multiple sectors, intermediate linkages, non-homothetic preferences, and detailed data on trade costs. We find the largest welfare effects from trade cost reductions in the Melitz model. We also employ the Melitz model to mimic changes in NTBs with a fixed-cost character by analysing the effect of changes in fixed trade costs. While we work here with a model calibrated to the GTAP database, the methods developed can also be applied to CGE models based on the WIOD database.
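For reference, a sketch of the standard Armington building block being generalized (notation mine, not the paper's): with CES preferences across origins, the expenditure share of origin $i$ in destination $j$ is

$$\pi_{ij} = \frac{\left(c_i\, \tau_{ij}\right)^{1-\sigma}}{\sum_k \left(c_k\, \tau_{kj}\right)^{1-\sigma}}, \qquad \sigma > 1,$$

where $c_i$ is the (generalized) marginal cost in origin $i$ and $\tau_{ij} \ge 1$ is the (generalized) trade cost from $i$ to $j$. In the Melitz variant described above, $c_i$ falls as more firms enter profitably (the extensive margin), and a demand externality scales with market size.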
Abstract:
This chapter provides a detailed discussion of the evidence on housing and mortgage lending discrimination, as well as the potential impacts of such discrimination on minority outcomes such as homeownership and neighborhood environment. The paper begins by discussing conceptual issues surrounding empirical analyses of discrimination, including explanations for why discrimination takes place, definitions of the different forms of discrimination, and the appropriate interpretation of observed racial and ethnic differences in treatment or outcomes. Next, the paper reviews evidence on housing market discrimination, starting with evidence of segregation and price differences in the housing market, followed by direct evidence of discrimination by real estate agents in paired-testing studies. Finally, mortgage market discrimination and barriers in access to mortgage credit are discussed. This discussion begins with an assessment of the role credit barriers play in explaining racial and ethnic differences in homeownership, followed by discussions of analyses of underwriting and the price of credit based on administrative and private sector data sources, including analyses of the subprime market. The paper concludes that housing discrimination has declined, especially in the market for owner-occupied housing, and does not appear to play a large role in limiting the neighborhood choices of minority households or the concentration of minorities in central cities. On the other hand, the patterns of racial centralization and the lower homeownership rates of African-Americans appear to be related to each other, and lower minority homeownership rates are in part attributable to barriers in the market for mortgage credit. The paper presents considerable evidence of racial and ethnic differences in mortgage underwriting, as well as additional evidence suggesting these differences may be attributable to differential provision of coaching, assistance, and support by loan officers. So far, innovation in loan products, the shift towards risk-based pricing, and the growth of the subprime market have not mitigated the role credit barriers play in explaining racial and ethnic differences in homeownership. Further, the growth of the subprime lending industry appears to have segmented the mortgage market geographically, increasing the costs of relying on local neighborhood sources of mortgage credit and affecting the integrity of many low-income minority neighborhoods through increased foreclosure rates.
Abstract:
After the collapse of the centralized Soeharto regime, deforestation caused by over-logging accelerated. To tackle this problem, an IMF/World Bank-led forestry sector reform program adopted a market-friendly approach involving the resumption of round wood exports and the raising of the resource rent fee, with the aim of stopping rent accumulation by plywood companies, which had enjoyed a supply of round wood at privileged prices. The Indonesian government, for its part, decentralized the forest concession management system to give local governments and communities incentives to carry out sustainable forest management. However, neither policy reform worked effectively. The round wood export ban was reimposed, and the forest management system was centralized again with cooperation from a newly funded industry-led institution. Amid the confusion surrounding the policy reversal, the gap between the price of round wood in international and domestic markets failed to contract, although rent allocations to plywood industries were reduced during 1998-2003. The rents were not collected properly by the government but accumulated unexpectedly in the hands of players in the black market for round wood.
Abstract:
Introduction: The source and deployment of finance are central issues in economic development. Since 1966, when the Soeharto Administration was inaugurated, Indonesian economic development has relied on funds in the form of aid from international organizations and foreign countries. After the 1990s, a further abundant inflow of capital sustained rapid economic development; foreign funding was the basis of Indonesian economic growth. This paper describes the mechanism for allocating funds in the Indonesian economy, identifies the problems this mechanism generated in the Indonesian experience, and attempts to explain why the financial system collapsed in the wake of the Asian currency crisis of 1997.

History of the Indonesian financial system: The year 1966 saw the emergence of commercial banks in Indonesia. It can be said that before 1966 a financial system hardly existed, a fact commonly attributed to economic disruptions such as the consecutive fiscal deficits and hyperinflation under the Soekarno Administration. After 1966, with the inauguration of Soeharto, a regulatory system of financial legislation (e.g. central banking law and banking regulation) was introduced and implemented, and the banking sector that is the basis of the current financial system in Indonesia was built up. The Indonesian financial structure was significantly altered by the first financial reform of 1983. Between 1966 and 1982, the banking sector consisted of Bank Indonesia (the central bank) and the state-owned banks. There was also a system for distributing the abundant public revenue derived from the soaring oil price of the 1970s. This public finance distribution function, incorporated in the Indonesian financial system, changed after the successive financial reforms of 1983 and 1988, when there was a move away from the monopoly-market style dominated by state-owned banks (a system of public finance distribution that operated at the discretion of the government) towards a modern market mechanism.

The five phases of development: The Indonesian financial system developed in five phases between 1966 and the present. The first period (1966-72) was its formative period, the second (1973-82) its period of policy-based finance under soaring oil prices, the third (1983-91) its financial-reform period, the fourth (1992-97) its period of expansion, and the fifth (1998-) its period of financial restructuring. The first section of this paper summarizes the financial policies operative during each of these periods. The second section examines changes in the financial sector in response to these policies; the analysis shows that an important development of the financial sector occurred during the financial-reform period. The third section shifts the focus from the general financial sector to the performance of particular commercial banks: changes in commercial banks' lending and fund-raising behaviour after the 1990s are analysed by comparing several banking groups in terms of their ownership and time of foundation. The last section summarizes the foregoing analyses and examines the problems that remain in the Indonesian financial sector, which is still undergoing restructuring.