885 results for empirical shell model
Abstract:
Snow cover is an important control in mountain environments, and a shift of the snow-free period triggered by climate warming can strongly impact ecosystem dynamics. Changing snow patterns can have severe effects on alpine plant distribution and diversity. It thus becomes urgent to provide spatially explicit assessments of snow cover changes that can be incorporated into correlative or empirical species distribution models (SDMs). Here, we provide for the first time a comparison of two physically based snow distribution models (PREVAH and SnowModel) used to produce snow cover maps (SCMs) at a fine spatial resolution in a mountain landscape in Austria. SCMs were evaluated with SPOT-HRVIR images, and the two models' predictions of snow water equivalent were evaluated against ground measurements. Finally, the SCMs of the two models were compared under a climate warming scenario for the end of the century. The predictive performances of PREVAH and SnowModel were similar when validated with the SPOT images. However, the tendency to overestimate snow cover was slightly lower with SnowModel during the accumulation period, whereas it was lower with PREVAH during the melting period. The rate of true positives during the melting period was on average two times higher with SnowModel, with a lower overestimation of snow water equivalent. Our results support the use of SnowModel in SDMs because it better captures persisting snow patches at the end of the snow season, which is important when modelling the response of species to long-lasting snow cover and evaluating whether they might survive under climate change.
Abstract:
Thanks to free software and pedagogical guides, the use of Data Envelopment Analysis (DEA) has been further democratized in recent years. Nowadays, it is quite usual for practitioners and decision makers with little or no knowledge of operational research to run their own efficiency analysis. Within DEA, several alternative models allow for an environmental adjustment. Four alternative models, each user-friendly and easily accessible to practitioners and decision makers, are applied to empirical data on 90 primary schools in the State of Geneva, Switzerland. Results show that the majority of the alternative models deliver divergent results. From a political and managerial standpoint, these diverging results could lead to potentially ineffective decisions. As no consensus emerges on the best model to use, practitioners and decision makers may be tempted to select the model that is right for them, in other words, the model that best reflects their own preferences. Further studies should investigate how an appropriate multi-criteria decision analysis method could help decision makers select the right model.
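The kind of DEA run the abstract refers to can be sketched with a standard input-oriented CCR model solved as a linear programme. The sketch below uses `scipy.optimize.linprog` and a three-unit toy dataset; the data and the helper name `ccr_efficiency` are illustrative assumptions, not the Geneva school data or any of the four models compared in the paper.

```python
# Sketch of an input-oriented CCR DEA efficiency score.
# Toy data only -- not the 90-school Geneva dataset used in the study.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Efficiency of unit k: min theta s.t. sum_j lam_j * x_j <= theta * x_k,
    sum_j lam_j * y_j >= y_k, lam >= 0. Decision vars: [theta, lam_1..lam_n]."""
    n, m = X.shape          # n units, m inputs
    _, s = Y.shape          # s outputs
    c = np.r_[1.0, np.zeros(n)]                     # minimise theta
    # inputs:  sum_j lam_j x_j - theta * x_k <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # outputs: -sum_j lam_j y_j <= -y_k
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[2.0], [4.0], [3.0]])   # one input per unit
Y = np.array([[2.0], [2.0], [3.0]])   # one output per unit
print(round(ccr_efficiency(X, Y, 1), 3))  # 0.5: unit 1 needs only half its input
```

Running the same data through alternative DEA specifications (variable returns to scale, different environmental adjustments) is exactly where the divergent scores discussed in the abstract arise.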
Abstract:
Prediction of stock market valuation is of common interest to all market participants. A theoretically sound market valuation can be achieved by discounting the future earnings of equities to the present. Competing valuation models seek variables that explain equity market valuation and that could be used to predict it. In this paper we test the contemporaneous relationship between stock prices, forward-looking earnings and long-term government bond yields. We test this so-called Fed model in a long- and short-term time series analysis. To test the dynamics of the relationship, we use the cointegration framework. The data span four decades of varying market conditions between 1964 and 2007 in the United States. The empirical results of our analysis do not support the Fed model. We show that long-term government bonds do not play a statistically significant role in this relationship. The effect of the forward earnings yield on stock market prices is significant, and we therefore suggest using standard valuation ratios when trying to predict the future paths of equity prices. Changes in long-term government bond yields also have no significant short-term impact on stock prices.
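The cointegration framework mentioned above can be sketched in the Engle-Granger two-step spirit: regress one series on the other, then check whether the residual mean-reverts. The sketch below uses synthetic data; the names `prices` and `earn_yield` are illustrative assumptions, not the study's 1964-2007 US series, and the AR(1) check stands in for a proper unit-root test.

```python
# Engle-Granger-style sketch: regress prices on the forward earnings yield,
# then check whether the residual mean-reverts (synthetic data only).
import numpy as np

rng = np.random.default_rng(42)
n = 1000
earn_yield = np.cumsum(rng.normal(size=n))                 # I(1) driver
prices = 2.0 * earn_yield + rng.normal(scale=0.5, size=n)  # cointegrated with it

# Step 1: OLS of prices on the earnings yield (plus a constant).
X = np.column_stack([np.ones(n), earn_yield])
beta, *_ = np.linalg.lstsq(X, prices, rcond=None)
resid = prices - X @ beta

# Step 2: AR(1) coefficient of the residual; a value far below 1 suggests
# mean reversion, i.e. the two series share a common stochastic trend.
rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
print(round(beta[1], 2), round(rho, 2))
```

In the study's setting, the finding that bond yields add nothing would show up as an insignificant coefficient on the yield regressor in step 1.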
Abstract:
Electron scattering on unstable nuclei is planned in future facilities of the GSI and RIKEN upgrades. Motivated by this fact, we study theoretical predictions for elastic electron scattering in the N=82, N=50, and N=14 isotonic chains from very proton-deficient to very proton-rich isotones. We compute the scattering observables by performing Dirac partial-wave calculations. The charge density of the nucleus is obtained with a covariant nuclear mean-field model that accounts for the low-energy electromagnetic structure of the nucleon. For the discussion of the dependence of scattering observables at low-momentum transfer on the gross properties of the charge density, we fit Helm model distributions to the self-consistent mean-field densities. We find that the changes shown by the electric charge form factor along each isotonic chain are strongly correlated with the underlying proton shell structure of the isotones. We conclude that elastic electron scattering experiments on isotones can provide valuable information about the filling order and occupation of the single-particle levels of protons.
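The Helm model distributions mentioned above have a standard closed form for the charge form factor, which a short sketch can evaluate. The parameter values `R0` (diffraction radius) and `sigma` (surface smearing) below are illustrative assumptions, not the fitted values from the self-consistent mean-field densities.

```python
# Helm-model charge form factor F(q) = 3 j1(q R0)/(q R0) * exp(-(q sigma)^2 / 2),
# with j1 the spherical Bessel function of order one. R0 and sigma are
# illustrative values, not the paper's fitted parameters.
import math

def j1_spherical(x):
    return math.sin(x) / x**2 - math.cos(x) / x

def helm_form_factor(q, R0=5.5, sigma=0.9):    # q in fm^-1, R0 and sigma in fm
    if q == 0.0:
        return 1.0                              # F(0) = 1 by charge normalisation
    x = q * R0
    return 3.0 * j1_spherical(x) / x * math.exp(-(q * sigma) ** 2 / 2.0)

# The first diffraction minimum sits at the first zero of j1, x ~ 4.493,
# i.e. at momentum transfer q ~ 4.493 / R0; shifts of this minimum along an
# isotonic chain track changes in the proton distribution's size.
print(helm_form_factor(0.0))
print(abs(helm_form_factor(4.493 / 5.5)) < 1e-3)
```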
Abstract:
In the philosophical literature, self-deception is mainly approached through the analysis of paradoxes. Yet, it is agreed that self-deception is motivated by protection from distress. In this paper, we argue, with the help of findings from cognitive neuroscience and psychology, that self-deception is a type of affective coping. First, we criticize the main solutions to the paradoxes of self-deception. We then present a new approach to self-deception. Self-deception, we argue, involves three appraisals of the distressing evidence: (a) appraisal of the strength of evidence as uncertain, (b) low coping potential and (c) negative anticipation along the lines of Damasio's somatic marker hypothesis. At the same time, desire impacts the treatment of flattering evidence via dopamine. Our main proposal is that self-deception involves emotional mechanisms provoking a preference for immediate reward despite possible long-term negative repercussions. In the last part, we use this emotional model to revisit the philosophical paradoxes.
Abstract:
User retention is a major goal for higher education institutions running their teaching and learning programmes online. This is the first investigation into how the senses of presence and flow, together with perceptions about two central elements of the virtual education environment (didactic resource quality and instructor attitude), facilitate the user's intention to continue e-learning. We use data collected from a large sample survey of current users in a pure e-learning environment, along with objective data about their performance. The results provide support for the theoretical model. The paper further offers practical suggestions for institutions and instructors who aim to provide effective e-learning experiences.
Abstract:
We describe a model-based object recognition system which is part of an image interpretation system intended to assist autonomous vehicle navigation. The system is intended to operate in man-made environments. Behavior-based navigation of autonomous vehicles involves the recognition of navigable areas and potential obstacles. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach in which different low-level vision techniques extract a multitude of image descriptors which are then analyzed using a rule-based reasoning system to interpret the image content. This system has been implemented using CEES, the C++ embedded expert system shell developed at the Systems Engineering and Automatic Control Laboratory (University of Girona) as a specific rule-based problem-solving tool. It has been especially conceived to support cooperative expert systems and uses the object-oriented programming paradigm.
Abstract:
A rotating machine usually consists of a rotor and the bearings that support it. Non-idealities in these components may excite vibration of the rotating system. Uncontrolled vibrations may lead to excessive wear of the components of the rotating machine or reduce process quality. Vibrations may be harmful even when amplitudes are seemingly low, as is usually the case for the superharmonic vibration that takes place below the first critical speed of the rotating machine. Superharmonic vibration is excited when the rotational velocity of the machine is a fraction of the natural frequency of the system. In such a situation, part of the machine's rotational energy is transformed into vibration energy. The amount of vibration energy should be minimised in the design of rotating machines. Superharmonic vibration phenomena can be studied by analysing the coupled rotor-bearing system with a multibody simulation approach. This research focuses on the modelling of hydrodynamic journal bearings and of rotor-bearing systems supported by journal bearings. In particular, the non-idealities affecting the rotor-bearing system and their effect on the superharmonic vibration of the rotating system are analysed. A comparison of computationally efficient journal bearing models is carried out in order to validate one model for further development. The selected bearing model is improved to take the waviness of the shaft journal into account. The improved model is implemented and analysed in a multibody simulation code. A rotor-bearing system that consists of a flexible tube roll, two journal bearings and a supporting structure is analysed employing the multibody simulation technique. The modelled non-idealities are the shell thickness variation in the tube roll and the waviness of the shaft journal in the bearing assembly. Both modelled non-idealities may cause subharmonic resonance in the system.
In multibody simulation, the coupled effect of the non-idealities can be captured in the analysis. Additionally, one non-ideality is presented that does not itself excite vibrations but affects the response of the rotor-bearing system: the waviness of the bearing bushing, the non-rotating part of the bearing assembly. The modelled system is verified with measurements performed on a test rig. The waviness of the bearing bushing was not measured, and therefore its effect on the response was not verified. In conclusion, the selected modelling approach is an appropriate method for analysing the response of the rotor-bearing system. When the simulated results are compared to the measured ones, the overall agreement between the results is good.
Abstract:
We examine the scale invariants in the preparation of highly concentrated w/o emulsions at different scales and under varying conditions. The emulsions are characterized using rheological parameters, owing to their highly elastic behavior. We first construct and validate empirical models to describe the rheological properties. These models yield a reasonable prediction of the experimental data. We then build an empirical scale-up model to predict the preparation and composition conditions that have to be kept constant at each scale to prepare the same emulsion. For this purpose, three preparation scales with geometric similarity are used. The parameter N·D^α, as a function of the stirring rate N, the scale (D, impeller diameter) and the exponent α (calculated empirically from the regression of all the experiments at the three scales), is defined as the scale invariant that needs to be optimized once the dispersed phase of the emulsion, the surfactant concentration, and the dispersed phase addition time are set. As far as we know, no other study has obtained a scale invariant factor N·D^α for the preparation of highly concentrated emulsions prepared at three different scales, covering all three scales, different addition times and surfactant concentrations. The power-law exponent obtained seems to indicate that the scale-up criterion for this system is the power input per unit volume (P/V).
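The N·D^α invariant above can be put to work in a short numeric sketch: given the stirring rate at a small scale, solve for the rate at a larger scale that keeps the invariant constant. The value α = 2/3 used below is the exponent implied by a constant power-per-volume criterion (P ∝ N³D⁵ for turbulent stirring, V ∝ D³); the paper fits α empirically, and the rates and diameters here are illustrative assumptions.

```python
# Keeping N * D**alpha constant across scales: solve for the stirring rate at
# a larger impeller diameter. alpha = 2/3 corresponds to constant power input
# per unit volume (P/V ~ N^3 D^2); the paper's alpha is fitted, not assumed.

def scaled_stirring_rate(N1, D1, D2, alpha=2.0 / 3.0):
    """Stirring rate N2 at diameter D2 such that N1*D1**alpha == N2*D2**alpha."""
    return N1 * (D1 / D2) ** alpha

N1, D1, D2 = 700.0, 0.05, 0.20           # rpm, m, m (illustrative values)
N2 = scaled_stirring_rate(N1, D1, D2)
print(round(N2, 1))                      # slower stirring at the larger scale

# Consistency check: with alpha = 2/3, P/V ~ N^3 D^2 matches at both scales.
print(round(N1**3 * D1**2, 3), round(N2**3 * D2**2, 3))
```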
Abstract:
This paper examines the extent to which innovative Spanish firms pursue improvements in energy efficiency (EE) as an objective of innovation. The increase in energy consumption and its impact on greenhouse gas emissions justifies the greater attention being paid to energy efficiency, and especially to industrial EE. The ability of manufacturing companies to innovate and improve their EE has a substantial influence on attaining objectives regarding climate change mitigation. Despite the effort to design more efficient energy policies, the EE determinants in manufacturing firms have been little studied in the empirical literature. From an exhaustive sample of Spanish manufacturing firms and using a logit model, we examine the energy efficiency determinants for firms that have innovated. For the econometric analysis, we use panel data from the Community Innovation Survey for the period 2008-2011. Our empirical results underline the role of size among the characteristics of firms that facilitate energy efficiency innovation. Regarding company behaviour, firms that consider the reduction of environmental impacts to be an important objective of innovation and that have introduced organisational innovations are more likely to innovate with the objective of increasing energy efficiency.
Keywords: energy efficiency, corporate targets, innovation, Community Innovation Survey.
JEL Classification: Q40, Q55, O31
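The logit estimation described above can be sketched with a minimal maximum-likelihood fit. The sketch below uses synthetic data in which (log) firm size raises the probability of innovating for energy efficiency; the variable names and coefficients are illustrative assumptions, not estimates from the CIS 2008-2011 panel.

```python
# Minimal logit sketch: probability that a firm innovates for energy
# efficiency as a function of log size. Synthetic data; the coefficients
# are illustrative, not the study's estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
log_size = rng.normal(size=n)
true_beta = np.array([-0.5, 1.0])                  # intercept, size effect
X = np.column_stack([np.ones(n), log_size])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)              # 1 = innovates for EE

beta = np.zeros(2)
for _ in range(5000):                              # gradient ascent on the log-likelihood
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - mu) / n

print(np.round(beta, 2))                           # close to (-0.5, 1.0)
```

A positive fitted slope on `log_size` is the synthetic analogue of the size effect the abstract reports.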
Abstract:
Social, technological, and economic time series are divided by events which are usually assumed to be random, albeit with some hierarchical structure. It is well known that the interevent statistics observed in these contexts differs from the Poissonian profile by being long-tailed distributed with resting and active periods interwoven. Understanding mechanisms generating consistent statistics has therefore become a central issue. The approach we present is taken from the continuous-time random-walk formalism and represents an analytical alternative to models of nontrivial priority that have been recently proposed. Our analysis also goes one step further by looking at the multifractal structure of the interevent times of human decisions. We here analyze the intertransaction time intervals of several financial markets. We observe that empirical data describe a subtle multifractal behavior. Our model explains this structure by taking the pausing-time density in the form of a superstatistics where the integral kernel quantifies the heterogeneous nature of the executed tasks. A stretched exponential kernel provides a multifractal profile valid for a certain limited range. A suggested heuristic analytical profile is capable of covering a broader region.
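The heavy-tailed pausing-time densities discussed above can be illustrated by sampling interevent times whose survival function is a stretched exponential, the kernel form named in the abstract. The sketch below uses inverse-transform sampling; the parameters `tau` and `beta` are illustrative assumptions, not values fitted to the intertransaction data.

```python
# Sampling pausing times with a stretched-exponential survival function
# S(t) = exp(-(t/tau)**beta): invert S at a uniform draw u, giving
# t = tau * (-ln u)**(1/beta). Parameters are illustrative only.
import math
import random

def stretched_exp_time(u, tau=1.0, beta=0.5):
    """Inverse-transform sample: solve exp(-(t/tau)**beta) = u for t."""
    return tau * (-math.log(u)) ** (1.0 / beta)

random.seed(7)
samples = [stretched_exp_time(random.random()) for _ in range(100_000)]
# beta < 1 makes the tail heavy: the mean, tau * Gamma(1 + 1/beta) = Gamma(3) = 2
# here, sits far above the median, (ln 2)**2 ~ 0.48, so long resting periods
# interleave with bursts of activity, as in the empirical interevent statistics.
print(round(sum(samples) / len(samples), 1))
```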
Abstract:
The objectives of this Master's Thesis were to find out what kind of knowledge management strategy would best fit an IT organization that uses the ITIL (Information Technology Infrastructure Library) framework for IT Service Management, and to create a knowledge management process model to support the chosen strategy. The empirical material for this research was collected through qualitative semi-structured interviews at the case organization, Stora Enso Corporate IT. The results of the qualitative interviews indicate that a codification knowledge management strategy would fit the case organization best. The knowledge management process model was created based on earlier studies and the knowledge management literature. The model was evaluated in the interview research, and the results showed that the created process model is realistic and useful, and that it corresponds to a real-life phenomenon.
Abstract:
Fleurbaey and Maniquet have proposed the criteria of conditional equality and egalitarian equivalence to assess equity among individuals in an ordinal setting. Empirical applications are rare and only partially consistent with their framework. We propose a new empirical approach that relies on individual preferences, is consistent with the ordinal criteria and enables their comparison with the cardinal criteria. We estimate a utility function that incorporates heterogeneous individual preferences, obtain ordinal measures of well-being and apply conditional equality and egalitarian equivalence. We then propose two cardinal measures of well-being, comparable with the ordinal model, to compute Roemer's and Van de gaer's criteria. Finally, we compare the characteristics of the worst-off identified by each criterion. We apply this model to a sample of US micro data and find that about 18% of the worst-off are not common to all criteria.
Abstract:
This study explored ethnic identity among 331 emerging adults (144 mestizos and 187 indigenous) from the Intercultural University of Chiapas (México). Scholars suggest that ethnicity is much more salient for ethnic minority adolescents than for adolescents who are members of the ethnic majority. Our aim was to compare the results of the Multigroup Ethnic Identity Measure (MEIM) between the majority ethnic group and the minority group studied. Specifically, the following hypothesis was examined: adolescents who are members of the ethnic minority group (indigenous) will score significantly higher on ethnic identity than adolescents who are members of the ethnic majority group (mestizos). The results supported this hypothesis. We suggest that the effect of an intercultural educational model could explain these results.
Abstract:
The purpose of this study was to develop co-operation between business units of a company operating in the graphic industry. The development was done by searching for synergy opportunities between these business units. The final aim was to form a business model based on the co-operation of these business units. The literature review of this thesis examines synergies, and especially the process of searching for and implementing them. The concept of a business model and its components is also examined. The research was done using a qualitative research method. The main data acquisition method for the empirical part was theme interviews. The data was analyzed using thematisation and content analysis. The results of the study include seven identified possible synergies and a business model based on the co-operation of the business units. The synergy opportunities are evaluated and an implementation order for the synergies is suggested. The presented synergies create the basis for the proposed business model.