852 results for Multi-Equation Income Model


Relevance: 100.00%

Abstract:

One of the major challenges in measuring efficiency in terms of resources and outcomes is assessing how units evolve over time. Although Data Envelopment Analysis (DEA) has been applied to time-series datasets, DEA models, by construction, form the reference set for inefficient units (the lambda values) based on their distance from the efficient frontier, that is, in a spatial manner. When dealing with temporal datasets, however, the proximity in time between units should also be taken into account, since it reflects the structural resemblance among time periods of an evolving unit. In this paper, we propose a two-stage spatiotemporal DEA approach that captures both the spatial and the temporal dimension through a multi-objective programming model. In the first stage, DEA is solved iteratively, admitting for each unit only previous DMUs as peers in its reference set. In the second stage, the lambda values derived from the first stage are fed into a Multiobjective Mixed Integer Linear Programming model, which filters the peers in the reference set based on weights assigned to the spatial and temporal dimensions. The approach is demonstrated on a real-world example drawn from software development.
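As a rough illustration of the first-stage idea, the sketch below solves an input-oriented CCR DEA program for one unit while admitting only units from earlier (or the same) periods as peers. It is a minimal sketch under assumptions of mine (scipy's linprog, a standard CCR envelopment form, a unit allowed as its own peer), not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_previous_peers(X, Y, t):
    """Input-oriented CCR efficiency of unit t, allowing only units from
    periods 0..t as peers -- a toy stand-in for the first stage of the
    spatiotemporal approach described above."""
    K = t + 1                      # admissible peers: periods 0..t
    m, s = X.shape[1], Y.shape[1]  # numbers of inputs / outputs
    c = np.r_[1.0, np.zeros(K)]    # minimise theta
    # inputs:  sum_j lam_j * x_ij <= theta * x_it
    A_in = np.hstack([-X[t].reshape(m, 1), X[:K].T])
    # outputs: sum_j lam_j * y_rj >= y_rt
    A_out = np.hstack([np.zeros((s, 1)), -Y[:K].T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[t]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + K))
    return res.x[0], res.x[1:]     # efficiency theta, lambda values

# three periods of one evolving unit: two inputs, one output (made-up data)
X = np.array([[4.0, 2.0], [3.0, 2.0], [2.5, 1.5]])
Y = np.array([[1.0], [1.0], [1.0]])
print(dea_previous_peers(X, Y, 2))
```

The nonzero lambdas returned here are exactly what the second-stage MILP would filter by spatial and temporal weights.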

Relevance: 100.00%

Abstract:

The Finance research group carried out a wide range of analyses within project TÁMOP-4.2.1.B-09/1/KMR-2010-0005. We showed that increased leverage among economic actors at every level clearly raises systemic risk, since it raises each actor's probability of default. If leverage limits are introduced to different degrees and on different schedules across sectors and countries, the actors that adopt the limits later gain a clear competitive advantage. Examining capital allocation within financial institutions, we showed that the capital (risk) covering operations can always be divided among divisions in such a way that no participant has an interest in cancelling the arrangement. This allocation, however, cannot be made fair in every respect, so some business lines may suffer a competitive disadvantage if competitors burden the same activity less unfairly. We found that the regulation of private pension funds has a strong effect on the investment performance of the funds, and that these rules in turn bear on the long-term competitiveness of society. We also showed that, before the economic crisis, Hungarian banks were unable to assess their clients' risk-bearing capacity correctly; moreover, their commission-based income model gave them no incentive to do so. Several studies examined the competitiveness of Hungarian firms, analysing in detail how various taxes, exchange-rate risks and financing policies affect competitiveness. A separate project investigated how interest-rate volatility and the asset collateral attached to loans affect firm value. We highlighted the growing risk of non-payment and reviewed both the management strategies potentially available and those actually used. We also examined how the owners of listed companies exploit the tax-optimisation opportunities connected to dividend payments; based on market evidence, investors carry out such tax-avoiding trades for a substantial share of stocks. A further study addressed the role of intellectual capital at Hungarian companies, finding that firms handled the issue with considerably greater expertise in 2009 than five years earlier. Finally, we showed that ownership structure can substantially influence how firms construct their goal systems and how they view their intangible assets.

Relevance: 100.00%

Abstract:

3D geographic information systems (GIS) are data- and computation-intensive in nature. Internet users are usually equipped with low-end personal computers and network connections of limited bandwidth. Data reduction and performance optimization techniques are therefore of critical importance in quality of service (QoS) management for online 3D GIS. In this research, QoS management issues in distributed 3D GIS presentation were studied in order to develop 3D TerraFly, an interactive 3D GIS that supports high-quality online terrain visualization and navigation.

To tackle the QoS management challenges, a multi-resolution rendering model, adaptive level-of-detail (LOD) control and mesh simplification algorithms were proposed to effectively reduce terrain model complexity. The rendering model is adaptively decomposed into sub-regions of up to three detail levels according to viewing distance and other dynamic quality measurements. The mesh simplification algorithm was designed as a hybrid that combines edge straightening and quad-tree compression, reducing mesh complexity by removing geometrically redundant vertices. Its main advantage is that a grid mesh can be processed directly in parallel without triangulation overhead. Algorithms facilitating remote access and distributed processing of volumetric GIS data, such as data replication, directory service, request scheduling, and predictive data retrieval and caching, were also proposed.

A prototype of 3D TerraFly implemented in this research demonstrates the effectiveness of the proposed QoS management framework in handling interactive online 3D GIS. The system implementation details and future directions of this research are also addressed in this thesis.
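To make the quad-tree half of that hybrid concrete, here is a toy simplification pass: it collapses square blocks of a height grid into single quads wherever bilinear interpolation of the four corners reproduces the interior heights within a tolerance. It is a sketch under assumptions of mine (a regular 2^k+1 grid and a bilinear error test), not the thesis's actual algorithm.

```python
import numpy as np

def simplify(grid, x0, y0, size, tol, quads):
    """Collapse a (size+1)x(size+1) block of a height grid into one quad when
    bilinear interpolation of its four corners reproduces the interior
    heights within `tol`; otherwise recurse into four children."""
    z = grid[y0:y0 + size + 1, x0:x0 + size + 1]
    u = np.linspace(0.0, 1.0, size + 1)
    approx = (np.outer(1 - u, 1 - u) * z[0, 0] + np.outer(1 - u, u) * z[0, -1]
              + np.outer(u, 1 - u) * z[-1, 0] + np.outer(u, u) * z[-1, -1])
    if size == 1 or np.max(np.abs(z - approx)) <= tol:
        quads.append((x0, y0, size))    # interior vertices are redundant
        return
    h = size // 2
    for dy in (0, h):
        for dx in (0, h):
            simplify(grid, x0 + dx, y0 + dy, h, tol, quads)

rng = np.random.default_rng(0)
grid = rng.normal(0.0, 0.05, (17, 17)) + np.linspace(0, 1, 17)  # gentle slope plus noise
quads = []
simplify(grid, 0, 0, 16, tol=0.2, quads=quads)
print(len(quads), "quads instead of", 16 * 16, "unit cells")
```

Because each block is tested independently, the recursion parallelises naturally, which echoes the thesis's point about avoiding triangulation overhead.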

Relevance: 100.00%

Abstract:

The present study examined the relations among previously identified risk and protective variables associated with traumatic exposure and evaluated a model of resilience to traumatic events among Latino youth prior to traumatic exposure, using structural equation modeling. Model tests were pursued with Full Information Maximum Likelihood (FIML) methods as implemented in Mplus. The model evaluated the role of the following variables: (a) intervening life events; (b) child characteristics; (c) social support from significant others; and (d) children's coping. Data were collected from 181 Latino youth participants (M age = 9.22 years, SD = 1.38; 49.0% female). Data analyses revealed that children's perceived available social support and use of coping strategies predicted low state anxiety following exposure to cues of disaster. Life events and preexisting depression symptoms did not significantly predict social support and coping, whereas preexisting anxiety was a significant predictor of perceived social support. This study represents an important initial step towards establishing and empirically evaluating a resilience model. Implications for preparedness interventions and a framework for the etiology of resilient reactions to disaster exposure are discussed.
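The reported structural relations can be mimicked, very loosely, by a pair of least-squares path regressions on simulated data. This is a toy path-analysis stand-in (ordinary least squares on made-up numbers), not the study's FIML SEM in Mplus; all variable names and coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 181                                            # sample size matching the study
# simulated stand-ins for the observed variables (hypothetical data)
life_events = rng.normal(size=n)
pre_anxiety = rng.normal(size=n)
support = 0.4 * pre_anxiety + rng.normal(size=n)   # the path found significant
coping = rng.normal(size=n)
anxiety = -0.5 * support - 0.3 * coping + rng.normal(size=n)

def ols(y, *xs):
    """Least-squares path coefficients with an intercept."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]                                # drop the intercept

print("support <- events, pre-anxiety:", ols(support, life_events, pre_anxiety))
print("anxiety <- support, coping:   ", ols(anxiety, support, coping))
```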

Relevance: 100.00%

Abstract:

Léon Walras (1874) had already realized that his neo-classical general equilibrium model could not accommodate autonomous investment. Sen analysed the same issue in a simple one-sector macroeconomic model of a closed economy. He showed that fixing investment in a model built strictly on neo-classical assumptions would make the system overdetermined; one should therefore relax some neo-classical condition of competitive equilibrium. He analysed three non-neo-classical "closure options" that could make the model well determined in the case of fixed investment. Others later extended his list and showed that the closure dilemma arises in more complex computable general equilibrium (CGE) models as well, as does the choice of the adjustment mechanism assumed to bring about equilibrium at the macro level. By means of numerical models, it was also illustrated that the adopted closure rule can significantly affect the results of policy simulations based on a CGE model. Despite these warnings, the issue of macro closure is often neglected in policy simulations. It is therefore worth revisiting the issue, demonstrating its importance by further examples, and pointing out that the closure problem in CGE models extends well beyond the question of how to incorporate autonomous investment. Several closure rules are discussed in this paper and their diverse outcomes are illustrated by numerical models calibrated on statistical data. First, the analysis is done in a one-sector model similar to Sen's, but extended into a model of an open economy. Next, the same analyses are repeated using a fully fledged multisectoral CGE model calibrated on the same statistical data. Comparing the results obtained by the two models, it is shown that although, under the same closure option, they generate quite similar results in terms of the direction and, to a somewhat lesser extent, the magnitude of change in the main macro variables, the predictions of the multi-sectoral CGE model are clearly more realistic and balanced.
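The overdetermination and the alternative closures can be seen in a three-line toy version of the one-sector logic. The numbers below are illustrative assumptions of mine, not the paper's calibrated data: with saving S = sY and investment fixed above full-employment saving, something other than investment must give way.

```python
# Toy one-sector "closure" demo, loosely in the spirit of Sen's model
# (illustrative numbers, not calibrated to any dataset).
Y_full = 100.0   # full-employment output
s = 0.20         # aggregate saving rate
I_fixed = 24.0   # autonomous investment, exceeding full-employment saving s*Y_full

# Neoclassical closure: investment accommodates saving at full employment.
print("neoclassical: I =", s * Y_full, "at Y =", Y_full)

# Keynesian closure: output adjusts so that saving matches fixed investment.
print("keynesian:    Y =", I_fixed / s, "given I =", I_fixed)

# Kaldorian closure: the saving rate (income distribution) adjusts instead.
print("kaldorian:    s =", I_fixed / Y_full, "at Y =", Y_full)
```

Each closure drops a different equation (investment-saving identity, full employment, or fixed distribution), which is exactly the choice that multiplies in a full CGE model.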

Relevance: 100.00%

Abstract:

Resource management policies are frequently designed and planned to target the specific needs of particular sectors, without taking into account the interests of other sectors that share the same resources. In a climate of resource depletion, population growth, increasing energy demand and climate change awareness, it is of great importance to promote the assessment of intersectoral linkages and, by doing so, understand their effects and implications. This need is further heightened when the common use of resources is relevant not only at the national level but also where the resources themselves are distributed across different nations. This dissertation studies the energy systems of five south-eastern European countries that share the Sava River Basin (SRB), using a water-food(agriculture)-energy nexus approach. In the electricity generation sector, the use of water is essential for the integrity of the energy systems, as electricity production in the riparian countries relies on two major technologies dependent on water resources: hydro and thermal power plants. In 2012, for example, an average of 37% of the electricity production in the SRB countries was generated by hydropower and 61% by thermal power plants. In terms of existing installed capacity, the basin itself accommodates close to a tenth of all hydropower capacity while providing cooling water to 42% of the net thermal capacity currently in operation in the basin. This energy-oriented nexus study explores the dependence of the region's energy systems on the basin's water resources for the period between 2015 and 2030. To this end, a multi-country electricity model was developed to provide a quantitative basis for the analysis, using the open-source modelling tool OSeMOSYS. Three main areas are analysed: first, the impact of energy efficiency and renewable energy strategies on the electricity generation mix; secondly, the potential impacts of climate change under a moderate climate change projection scenario; and finally, deriving from the latter point, the cumulative impact of an increase in water demand for irrigation in the agriculture sector. Additionally, electricity trade dynamics are compared across the different scenarios under scrutiny, in an effort to investigate the implications of the aforementioned factors for the electricity markets in the region.
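The core mechanism, reduced water availability shifting generation from hydro to thermal plants, can be sketched as a tiny least-cost dispatch problem. This is an illustrative sketch with assumed capacities and costs (not the study's OSeMOSYS model or its data):

```python
from scipy.optimize import linprog

# Toy two-technology annual dispatch, loosely echoing a hydro/thermal mix
# (all numbers are illustrative assumptions, not from the dissertation).
demand = 100.0                                # TWh per year
cap = {"hydro": 40.0, "thermal": 80.0}        # maximum annual output, TWh
cost = {"hydro": 5.0, "thermal": 45.0}        # marginal cost, EUR/MWh

def dispatch(hydro_avail):
    """Least-cost dispatch given the fraction of hydro output available,
    a stand-in for a climate-driven water-availability scenario."""
    c = [cost["hydro"], cost["thermal"]]
    bounds = [(0, cap["hydro"] * hydro_avail), (0, cap["thermal"])]
    res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[demand], bounds=bounds)
    return dict(zip(("hydro", "thermal"), res.x))

print(dispatch(1.0))   # baseline water year
print(dispatch(0.7))   # drier year: thermal generation fills the gap
```

Adding irrigation withdrawals would tighten the hydro bound further, which is the cumulative effect the third scenario examines.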

Relevance: 100.00%

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements; numerical optimization schemes are applied that avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code.

In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of a 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The assimilation results were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation.

Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling a model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines that facilitate input and output. Apart from being simple, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The approach accommodates parallel computing simply by telling the control program to wait until all processes have ended before the DA procedure is invoked. One caveat is the overhead it introduces, since at every assimilation cycle both the model and the DA procedure have to be re-initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods.

The non-intrusive VEnKF has been applied to COHERENS, a multi-purpose hydrodynamic model, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km2 and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for seven days between May 16 and July 6, 2009; the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The VEnKF results were compared with measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, a good match could not be achieved. The use of multiple automatic stations with real-time data would be important to avoid the time sparsity problem; with DA, this would, for instance, help in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, together with this ensemble size limit, points towards the emerging area of Reduced Order Modeling (ROM), which saves computational resources by avoiding runs of the full model. Applying ROM within the non-intrusive DA approach might yield a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
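For flavour, here is a single analysis step of a stochastic ensemble Kalman filter followed by resampling of the ensemble around the analysis, which is the mechanism VEnKF uses against inbreeding. This is a generic EnKF-style sketch under my own simplifications (linear observation operator, stochastic perturbed observations), not the actual VEnKF algorithm:

```python
import numpy as np

def enkf_update_with_resampling(ensemble, y, H, R, rng):
    """One stochastic EnKF analysis step, then redraw the ensemble around
    the analysis mean so the spread is refreshed at every update."""
    n, N = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = A @ A.T / (N - 1)                            # sample covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.solve(S, np.eye(len(y))) # Kalman gain
    perturbed = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, N).T                    # perturbed observations
    analysis = ensemble + K @ (perturbed - H @ ensemble)
    # resample: new ensemble drawn around the analysis mean and covariance
    am = analysis.mean(axis=1, keepdims=True)
    Pa = (analysis - am) @ (analysis - am).T / (N - 1)
    L = np.linalg.cholesky(Pa + 1e-9 * np.eye(n))
    return am + L @ rng.normal(size=(n, N))

rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, (3, 50)) + np.array([[1.0], [2.0], [0.5]])
H = np.array([[1.0, 0.0, 0.0]])                      # observe first component
print(enkf_update_with_resampling(ens, np.array([1.3]), H,
                                  np.array([[0.05]]), rng).mean(axis=1))
```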

Relevance: 100.00%

Abstract:

Neural field models of firing-rate activity typically take the form of integral equations with space-dependent axonal delays. Under natural assumptions on the synaptic connectivity we show how one can derive an equivalent partial differential equation (PDE) model that properly treats the axonal delay terms of the integral formulation. Our analysis avoids the so-called long-wavelength approximation that has previously been used to formulate PDE models for neural activity in two spatial dimensions. Direct numerical simulations of this PDE model show instabilities of the homogeneous steady state that are in full agreement with a Turing instability analysis of the original integral model. We discuss the benefits of such a local model and its usefulness in modeling electrocortical activity. In particular we are able to treat "patchy" connections, whereby a homogeneous and isotropic system is modulated in a spatially periodic fashion. In this case the emergence of a "lattice-directed" traveling wave predicted by a linear instability analysis is confirmed by the numerical simulation of an appropriate set of coupled PDEs. (c) American Physical Society 2007.
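For orientation, a generic neural field equation of the class discussed, with space-dependent axonal delay, can be written as follows. This is a standard textbook form from the literature, not necessarily the paper's exact notation:

```latex
% Activity u at position r and time t; w is the synaptic footprint,
% f a firing-rate function, v the axonal conduction speed. The delay
% |r - r'|/v depends on distance, which is what the PDE reduction
% must treat without a long-wavelength (small-wavenumber) expansion.
u(\mathbf{r},t) \;=\; \int_{\mathbb{R}^2}
  w\!\left(\lvert \mathbf{r}-\mathbf{r}'\rvert\right)\,
  f\!\left( u\!\left(\mathbf{r}',\, t - \tfrac{\lvert \mathbf{r}-\mathbf{r}'\rvert}{v}\right) \right)
  \mathrm{d}\mathbf{r}'
```

For suitably chosen (e.g. exponentially decaying) footprints w, the Fourier transform of this nonlocal equation has a rational structure that can be inverted into a local PDE, which is the kind of equivalence the paper exploits.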

Relevance: 100.00%

Abstract:

Different types of base fluids, such as water, engine oil, kerosene, ethanol, methanol and ethylene glycol, are commonly used to increase the heat transfer performance in many engineering applications. These conventional heat transfer fluids, however, often have several limitations. A major one is that the thermal conductivity of each of these base fluids is very low, which results in a lower heat transfer rate in thermal engineering systems; this limitation also affects the performance of equipment used in heat transfer process industries. To overcome this drawback, researchers have over the years considered a new generation of heat transfer fluid, known as nanofluid, with higher thermal conductivity. This new generation of heat transfer fluid is a mixture of nanometre-sized particles and a base fluid. Research suggests that adding spherical or cylindrical uniform/non-uniform nanoparticles to a base fluid can remarkably increase the thermal conductivity of the nanofluid, and such augmentation could play a significant role in enhancing the heat transfer rate beyond that of the base fluid. Nanoparticle diameters used in nanofluids are usually taken to be less than or equal to 100 nm, and nanoparticle concentrations usually vary from 5% to 10%. Several studies report that small nanoparticle concentrations at a diameter of 100 nm can enhance the heat transfer rate significantly compared with base fluids, but it is not obvious what effect nanofluids containing nanoparticles smaller than 100 nm at different concentrations have on heat transfer performance. Moreover, the effect of static versus moving nanoparticles on the heat transfer of a nanofluid is not known either; the idea of moving nanoparticles brings in the effect of the Brownian motion of nanoparticles on heat transfer. The aim of this work is therefore to investigate the heat transfer performance of nanofluids using a combination of smaller nanoparticle sizes and different concentrations, taking the Brownian motion of nanoparticles into account. A horizontal pipe is considered as the physical system within which the above-mentioned nanofluid performance is investigated under transition-to-turbulent flow conditions. Three types of numerical model have been used: the single-phase model, the Eulerian-Eulerian multi-phase mixture model and the Eulerian-Lagrangian discrete phase model. The most commonly used is the single-phase model, which assumes that nanofluids behave like a conventional fluid; the other two models account for the interaction between solid and fluid particles. In the Eulerian-Eulerian multi-phase mixture model the fluid and solid phases together form a mixture, whereas in the Eulerian-Lagrangian discrete phase model the two phases, one solid and one fluid, are independent. In addition, Reynolds-Averaged Navier-Stokes (RANS) based Standard κ-ω and SST κ-ω transitional models have been used for the simulation of transitional flow, while RANS-based Standard κ-ϵ, Realizable κ-ϵ and RNG κ-ϵ turbulence models are used for the simulation of turbulent flow.

The hydrodynamic and thermal behaviour of transition-to-turbulent nanofluid flows through the horizontal pipe is studied under a uniform wall heat flux boundary condition, with temperature-dependent thermo-physical properties for both water and nanofluids. Numerical results characterising the velocity and temperature fields are presented in terms of velocity and temperature contours, turbulent kinetic energy contours, surface temperature, local and average Nusselt numbers, Darcy friction factor, thermal performance factor and total entropy generation. New correlations are also proposed for the calculation of the average Nusselt number for both the single- and multi-phase models. Results reveal that the combination of small nanoparticle sizes and higher nanoparticle concentrations, together with the Brownian motion of nanoparticles, yields higher heat transfer enhancement and a higher thermal performance factor than water. The literature suggests that nanofluid flow in an inclined pipe in the transition-to-turbulent regime has been ignored despite its significance in real-life applications; a dedicated investigation has therefore been carried out in this thesis to understand the heat transfer behaviour and performance of an inclined pipe under transition flow conditions. It is found that the heat transfer rate decreases as the pipe inclination angle increases, and that a horizontal pipe under forced convection achieves a higher heat transfer rate than an inclined pipe under mixed convection.
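As a baseline for the conductivity enhancement discussed above, Maxwell's classical effective-medium formula for a dilute suspension of spherical particles is easy to evaluate. This is a standard literature model shown for illustration, not the thesis's own correlation, and the property values below are typical figures I have assumed:

```python
def maxwell_k_eff(k_f, k_p, phi):
    """Maxwell effective thermal conductivity of a dilute suspension of
    spherical particles: k_f = base fluid, k_p = particle conductivity
    (W/m.K), phi = particle volume fraction."""
    num = k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)
    den = k_p + 2.0 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# water (~0.613 W/m.K) with Al2O3 nanoparticles (~40 W/m.K) at 4 vol%
print(maxwell_k_eff(0.613, 40.0, 0.04))   # modest enhancement over the base fluid
```

Static effective-medium formulas like this ignore Brownian motion, which is precisely the extra transport mechanism the thesis's moving-particle models attempt to capture.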

Relevance: 100.00%

Abstract:

Objectives: In contrast to other countries, surgery still represents the common invasive treatment for varicose veins in Germany. However, radiofrequency ablation, e.g. ClosureFast, is becoming increasingly popular in other countries due to potentially better results and reduced side effects. This treatment option may cause lower follow-up costs and is more convenient for patients, which could justify its introduction into the statutory benefits catalogue. We therefore aim to calculate the budget impact of a general reimbursement of ClosureFast in Germany. Methods: To assess the budget impact of including ClosureFast in the German statutory benefits catalogue, we developed a multi-cohort Markov model and compared the costs of a "World with ClosureFast" with those of a "World without ClosureFast" over a time horizon of five years. To address the uncertainty of the input parameters, we conducted three different types of sensitivity analysis (one-way, scenario, probabilistic). Results: In the base-case scenario, the introduction of the ClosureFast system for the treatment of varicose veins saves about €19.1 million over a time horizon of five years in Germany. However, the results scatter in the sensitivity analyses due to the limited evidence on some key input parameters. Conclusions: The results of the budget impact analysis indicate that a general reimbursement of ClosureFast has the potential to be cost-saving in the German Statutory Health Insurance.
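The "world with" versus "world without" comparison can be sketched in a few lines. All figures below (cohort size, treatment costs, uptake path) are hypothetical placeholders of mine, not the study's inputs, and the toy omits the Markov health states that drive follow-up costs in the real model:

```python
import numpy as np

# Toy budget-impact comparison over five annual cohorts (hypothetical data).
years = 5
cohort = 50_000                                      # newly treated patients/year
cost = {"surgery": 1800.0, "closurefast": 1500.0}    # EUR per treatment episode
uptake = np.array([0.05, 0.10, 0.15, 0.20, 0.25])    # assumed diffusion of ClosureFast

without = np.full(years, cohort * cost["surgery"])
with_cf = cohort * ((1 - uptake) * cost["surgery"] + uptake * cost["closurefast"])
print("budget impact over 5 years (EUR):", (with_cf - without).sum())  # negative = savings
```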

Relevance: 100.00%

Abstract:

In the Tropics, continental shelves governed by western boundary currents are considered to be among the least productive ocean margins in the world, unless eddy-induced shelf-edge upwelling becomes significant. The eastern Brazilian shelf in the Southwest Atlantic is one of these and, given the slight nutrient input from continental sources, is extremely oligotrophic. It is characterized by complex bathymetry with the presence of shallow banks and seamounts. In this work, a fully three-dimensional nonlinear primitive equation ocean model is used to demonstrate that the interaction of tidal currents with the bottom topography of the east Brazilian continental shelf is capable of producing local upwelling of South Atlantic Central Water, bringing nutrients up from deep waters to the surface layer. Such upper-layer enrichment is found to be significant in increasing local primary productivity. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

The model presented allows simulating the pesticide concentration in fruit trees and estimating the pesticide bioconcentration factor (BCF) in the fruits of woody species. It allows estimating the pesticide uptake by plants through the water transpiration stream, as well as the time at which the maximum pesticide concentration occurs in the fruits. The proposed equation relates the bioconcentration factor (BCF) to the following variables: plant water transpiration volume (Q), pesticide transpiration stream concentration factor (TSCF), pesticide stem-water partition coefficient (KWood,w), stem dry biomass (M) and pesticide dissipation rate in the soil-plant system (kEGS). The modeling started from, and was developed on top of, a previous model, the "Fruit Tree Model" (FTM) reported by Trapp and collaborators in 2003, to which the hypothesis was added that pesticide degradation in the soil follows first-order kinetics. Model fitness was evaluated through a sensitivity analysis of the pesticide BCF values in fruits with respect to the variability of the model input data.
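A simplified reading of that model class (transpiration-stream uptake from a first-order-decaying soil pool, first-order loss in the plant) has a closed-form solution that also yields the peak-concentration time. This is my own two-compartment sketch with hypothetical parameter values, not the paper's exact equation:

```python
import numpy as np

def plant_concentration(t, Cw0, Q, TSCF, M, k_soil, k_plant):
    """Pesticide concentration per kg stem dry mass for
        dC/dt = (Q * TSCF / M) * Cw0 * exp(-k_soil * t) - k_plant * C,  C(0) = 0.
    Requires k_plant != k_soil; units are left generic here."""
    a = Q * TSCF * Cw0 / M
    return a / (k_plant - k_soil) * (np.exp(-k_soil * t) - np.exp(-k_plant * t))

t = np.linspace(0.0, 120.0, 5)   # days
print(plant_concentration(t, Cw0=1.0, Q=100.0, TSCF=0.5, M=50.0,
                          k_soil=0.02, k_plant=0.10))
# peak time for this two-exponential form:
print(np.log(0.10 / 0.02) / (0.10 - 0.02), "days")
```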

Relevance: 100.00%

Abstract:

Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited by the existing challenges of applying these methods to neuroimaging data. In this study, first, a type of data leakage caused by a slice-level data split, introduced during training and validation of a 2D CNN, is surveyed, and a quantitative assessment of the resulting overestimation of model performance is presented. Second, an interpretable, leakage-free deep learning software package, written in Python with a wide range of options, has been developed to conduct both classification and regression analyses. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data, where the cognitive performance of 58 patients, measured by five neuropsychological tests, is predicted using a multi-input CNN model taking brain images and demographic data. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to be the effect of white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system is developed that aims (1) to classify Alzheimer's disease patients and healthy subjects, (2) to examine the neural correlates of the disease that cause cognitive decline in AD patients using CNN visualization tools, and (3) to highlight the potential of interpretability techniques to expose a biased deep learning model. Structural magnetic resonance imaging (MRI) data of 200 subjects was used by the proposed CNN model, which was trained using a transfer learning-based approach, producing a balanced accuracy of 71.6%. The visualization tools highlighted brain regions in the frontal and parietal lobes showing cerebral cortex atrophy.
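The slice-level leakage described above disappears when the split is done at the subject level, so that all slices from one subject land on the same side of the train/validation divide. A minimal sketch using scikit-learn's group-aware splitter, with toy shapes of my choosing (not the study's pipeline):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy data: 20 subjects, 30 2D slices each; labels are per subject.
n_subjects, slices_per_subject = 20, 30
X = np.random.rand(n_subjects * slices_per_subject, 64, 64)    # slice images
y = np.repeat(np.random.randint(0, 2, n_subjects), slices_per_subject)
groups = np.repeat(np.arange(n_subjects), slices_per_subject)  # subject IDs

# Subject-level split: no subject contributes slices to both sides.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=groups))
assert not set(groups[train_idx]) & set(groups[val_idx])       # leakage-free
```

A naive `train_test_split` over the slice axis would instead scatter each subject's nearly identical neighbouring slices across both sets, which is exactly the performance-inflating leakage the study quantifies.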

Relevance: 60.00%

Abstract:

A reversible linear master equation model is presented for pressure- and temperature-dependent bimolecular reactions proceeding via multiple long-lived intermediates. This kinetic treatment, which applies when the reactions are measured under pseudo-first-order conditions, facilitates accurate and efficient simulation of the time dependence of the populations of reactants, intermediate species and products. Detailed exploratory calculations have been carried out to demonstrate the capabilities of the approach, with applications to the bimolecular association reaction C3H6 + H ⇌ C3H7 and the bimolecular chemical activation reaction C2H2 + ¹CH2 → C3H3 + H. The efficiency of the method can be dramatically enhanced through use of a diffusion approximation to the master equation, and a methodology for exploiting the sparse structure of the resulting rate matrix is established.
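The sparse-matrix point can be illustrated with a toy linear master equation dp/dt = M p on a ladder of energy levels, with nearest-neighbour collisional transfer and a reactive loss channel at the top. The rate matrix is tridiagonal, so a sparse exponential propagator handles it cheaply. This is an illustrative sketch with made-up rates, not the paper's systems or its diffusion approximation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

n = 200
up, down = 0.4, 0.6                     # collisional up/down transfer rates
main = -(up + down) * np.ones(n)        # diagonal: total loss from each level
main[0] = -up                           # reflecting bottom level
main[-1] = -(down + 5.0)                # top level also reacts away (rate 5)
# columns sum to zero except the reactive top column -> probability leaks out
M = diags([down * np.ones(n - 1), main, up * np.ones(n - 1)], [1, 0, -1])

p0 = np.zeros(n)
p0[0] = 1.0                             # population starts in the lowest level
p_t = expm_multiply(M.tocsc(), p0, start=0.0, stop=50.0, num=6, endpoint=True)
print(p_t.sum(axis=1))                  # total population decays via reaction
```

Real applications replace this ladder with thousands of rovibrational grains per isomer, which is where exploiting sparsity becomes decisive.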