58 results for Univariate Analysis box-jenkins methodology
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Over the last 30 years, the proliferation of quantitative models for predicting corporate insolvency in the accounting and finance literature has aroused great interest among specialists and researchers in the field. What began as models built with a single objective has become a constant source of research. This paper formulates an insolvency prediction model by combining different quantitative variables extracted from the financial statements of a sample of companies for the years 1994-1997. A stepwise procedure is used to select and interpret the variables that contribute the most information. Once this first type of model has been formulated, an alternative to the previous variables is sought through the factor technique of principal component analysis. This technique is used to select variables, which are then subjected, together with the previous ratios, to univariate analysis. Finally, the resulting models are compared, and it is concluded that although the earlier literature reports better classification rates, the models obtained through principal component analysis should not be rejected, given the clarity with which they explain the causes that lead a company to insolvency.
Abstract:
In the 2009-2010 academic year, the new Pharmacy curriculum, designed according to the premises of the EHEA, was introduced at the University of Barcelona. As a result, and for the first time in the history of the UB Faculty of Pharmacy, a compulsory galenic subject is taught in the first year of the degree. This is a new challenge for the Teaching Innovation Group of Pharmaceutical Technology (GIDTF), since the subject Introduction to Galenic Pharmacy is delivered by a team of teachers to large groups of students at the start of their degree, through theoretical sessions of 1.5 h. Exceptionally, this academic year the subject is taught in the first semester and repeated in the second. This paper presents the face-to-face methodological approach designed for this subject, supported by virtual strategies such as a discussion forum, online resources, self-assessment questionnaires and tasks delivered through the Moodle platform of the UB Virtual Campus, since the teaching team considers it a priority to introduce students to this platform in their first year. Student satisfaction surveys were carried out and evaluated, as was academic performance. The analysis of the strengths and weaknesses of the methodology revealed positive evaluations as well as aspects that could be improved, for which appropriate corrective measures were established. Academic results were very satisfactory.
Abstract:
Black-box optimization problems (BBOP) are defined as optimization problems in which the objective function has no algebraic expression but is instead the output of a system (usually a computer program). This paper focuses on BBOPs that arise in the field of insurance, and more specifically in reinsurance problems. In this area, the complexity of the models and assumptions used to define the reinsurance rules and conditions produces hard black-box optimization problems, which must be solved in order to obtain the optimal output of the reinsurance. Traditional optimization approaches cannot be applied to BBOPs, so new computational paradigms must be used to solve these problems. In this paper we show the performance of two evolutionary techniques (Evolutionary Programming and Particle Swarm Optimization). We provide an analysis of three BBOPs in reinsurance, in which the evolutionary approaches exhibit excellent behaviour, finding the optimal solution at a fraction of the computational cost of inspection or enumeration methods.
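The paper's own implementations are not reproduced in the abstract; as a rough illustration of one of the two techniques it benchmarks, a minimal Particle Swarm Optimization loop that treats the objective purely as a black box might look like the sketch below. All parameter values (inertia `w`, acceleration constants `c1`/`c2`, swarm size, the quadratic toy objective) are arbitrary choices for the demo, not the authors' settings:

```python
import random

def pso(objective, bounds, n_particles=20, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    # Minimal Particle Swarm Optimization: the objective is queried only
    # through function evaluations, never through an algebraic form.
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # best position seen by each particle
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position seen by the swarm

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Keep the particle inside the search box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
# Toy stand-in for a reinsurance cost function (hypothetical, for the demo only).
best, best_val = pso(lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2,
                     bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

The same loop works unchanged for any objective that can only be evaluated, which is exactly the situation the paper describes for reinsurance models.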
Abstract:
Accomplishing high quality in final products is a challenge in the pharmaceutical industry that requires the control and supervision of all manufacturing steps. This requirement created the need for fast and accurate analytical methods. Near-infrared (NIR) spectroscopy, combined with chemometrics, fulfils this growing demand: the speed with which it provides relevant information and the versatility of its application to different types of samples make this combination one of the most appropriate techniques. This study focuses on the development of a calibration model able to determine the amount of API in industrial granulates using NIR spectroscopy, chemometrics and process spectra methodology.
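The chemometric machinery behind an industrial NIR calibration (PLS, process spectra) is beyond an abstract, but its core idea, regressing reference API content on spectral response, can be sketched with a univariate least-squares calibration. The absorbance values and API percentages below are invented numbers, not data from the study:

```python
def linear_fit(x, y):
    # Ordinary least squares for y = a + b * x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Made-up calibration set: NIR absorbance at one wavelength vs. reference API %.
absorbance = [0.10, 0.20, 0.30, 0.40, 0.50]
api_pct = [5.1, 9.9, 15.2, 19.8, 25.0]

a, b = linear_fit(absorbance, api_pct)
predicted = a + b * 0.35          # predicted API content of a new granulate sample
```

A real calibration would use full spectra and a multivariate method such as PLS, but the predict-from-calibration workflow is the same.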
Abstract:
The GS-distribution is a family of distributions that provides an accurate representation of any unimodal univariate continuous distribution. In this contribution we explore the utility of this family as a general model in survival analysis. We show that the survival function based on the GS-distribution is able to provide a model for univariate survival data and that appropriate estimates can be obtained. We develop some hypothesis tests that can be used for checking the underlying survival model and for comparing the survival of different groups.
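The GS-based survival model itself is not reproduced here, but the kind of model checking the authors describe is normally done against a non-parametric benchmark. A minimal Kaplan-Meier estimator, with invented survival times and censoring flags, can be sketched as:

```python
def kaplan_meier(times, events):
    # Non-parametric survival estimate S(t); events[i] = 1 for an observed
    # failure, 0 for a right-censored observation.
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    s = 1.0
    curve = []                        # (time, S(t)) after each failure time
    for t in sorted(set(times)):
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        at_risk -= sum(1 for tt, _ in pairs if tt == t)
    return curve

# Toy data: survival times with a censoring indicator (1 = event, 0 = censored).
curve = kaplan_meier([2, 3, 3, 5, 7], [1, 1, 0, 1, 0])
```

A fitted parametric survival function (GS-based or otherwise) would be plotted against this step function to judge the fit visually before running formal tests.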
Abstract:
Case-crossover is one of the most widely used designs for analyzing the health-related effects of air pollution; nevertheless, its application and methodology in this context had not previously been reviewed. Objective: We conducted a systematic review of case-crossover (CCO) designs used to study the relationship between air pollution and morbidity and mortality, from the standpoint of methodology and application. Data sources and extraction: A search was made of the MEDLINE and EMBASE databases. Reports were classified as methodologic or applied. From the latter, the following information was extracted: author, study location, year, type of population (general or patients), dependent variable(s), independent variable(s), type of CCO design, and whether effect modification was analyzed for variables at the individual level. Data synthesis: The review covered 105 reports that fulfilled the inclusion criteria. Of these, 24 addressed methodological aspects and the remainder involved the design's application. In the methodological reports, the designs that yielded the best results in simulation were symmetric bidirectional CCO and time-stratified CCO. Furthermore, we observed an increase over time in the use of certain CCO designs, mainly symmetric bidirectional and time-stratified CCO. The dependent variables most frequently analyzed were those relating to hospital morbidity; the pollutants most often studied were those linked to particulate matter. Among the CCO-application reports, 13.6% studied effect modification for variables at the individual level. Conclusions: The use of CCO designs has undergone considerable growth; the most widely used designs were those that yielded better results in simulation studies: symmetric bidirectional and time-stratified CCO. However, the advantages of CCO as a method of analysis of variables at the individual level are put to little use.
Abstract:
Does shareholder value orientation lead to shareholder value creation? This article proposes methods to quantify both shareholder value orientation and shareholder value creation. Through the application of these models it is possible to quantify both dimensions and to examine statistically to what extent shareholder value orientation explains shareholder value creation. The scoring model developed in this paper makes it possible to quantify the orientation of managers towards the objective of maximizing shareholder wealth. The method evaluates information that comes from the companies and scores value orientation on a scale from 0 to 10 points. Analytically, the variable value orientation is operationalized as the general attitude of managers toward the objective of value creation, investment policy and behavior, flexibility, and eight further value drivers. The value creation model works with market data such as stock prices and dividend payments. Both methods were applied to a sample of 38 blue-chip companies: 32 firms belonged to the share index IBEX 35 on July 1st, 1999; one company represents the "new economy", listed in the Spanish New Market as of July 1st, 2001; and 5 European multinational groups formed part of the EuroStoxx 50 index, also on July 1st, 2001. The research period comprised the financial years 1998, 1999, and 2000. A regression analysis showed that between 15.9% and 23.4% of shareholder value creation can be explained by shareholder value orientation.
Abstract:
Here we present an approach that allows the identification of the "key" productive sectors responsible for CO2 emissions. For this purpose, we develop an input-output methodology from a supply perspective. We focus on the impact of an increase in the value added of the different productive sectors on total CO2 emissions, and we identify the productive sectors responsible for the increase in CO2 emissions when the income of the economy increases. The approach shows the contribution of the various sectors to CO2 emissions from a production perspective and allows us to identify the sectors that deserve more consideration in mitigation policies. This analysis is complementary to input-output analysis from a demand perspective. The methodology is applied to the Spanish economy.
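The paper's exact formulation is not given in the abstract; as a hedged illustration of the supply-side idea, a two-sector Ghosh-type calculation (all flow values, the value-added shock, and the emission coefficients below are invented) can be sketched as:

```python
def inv2(m):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical two-sector economy: intermediate flows Z and gross outputs x.
Z = [[20.0, 30.0],
     [10.0, 40.0]]
x = [100.0, 150.0]

# Allocation (output) coefficients: b[i][j] = Z[i][j] / x[i].
B = [[Z[i][j] / x[i] for j in range(2)] for i in range(2)]

# Ghosh inverse G = (I - B)^{-1}.
I_minus_B = [[(1.0 if i == j else 0.0) - B[i][j] for j in range(2)] for i in range(2)]
G = inv2(I_minus_B)

# Output response to a value-added shock dv (row vector times G).
dv = [10.0, 0.0]
dx = [sum(dv[i] * G[i][j] for i in range(2)) for j in range(2)]

# Emissions impact, given CO2 emission coefficients per unit of output.
c = [0.5, 0.2]                    # hypothetical tonnes CO2 per unit of output
d_emissions = sum(dx[j] * c[j] for j in range(2))
```

Comparing `d_emissions` across shocks to different sectors is what identifies, in this supply-side sense, the "key" sectors for mitigation.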
Abstract:
The objective of this paper is to analyse the economic impacts of alternative water policies implemented in the Spanish production system. The methodology uses two versions of the input-output price model: a competitive formulation and a mark-up formulation. The input-output framework evaluates the impact of water policy measures on production prices, consumption prices, intermediate water demand and private welfare. Our results show that a tax on the water used by sectors considerably reduces the intermediate water demand, and increases the production and consumption prices. On the other hand, according to Jevons' paradox, an improvement in technical efficiency, which leads to a reduction in the water requirements of all sectors and an increase in water production, increases the amount of water consumed. The combination of a tax on water and improved technical efficiency takes the pressure off prices and significantly reduces intermediate water demand. JEL Classification: C67 ; D57 ; Q25. Keywords: Production prices; Consumption prices; Water uses; Water policy; Water taxation.
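The paper's price model is applied to the full Spanish production system; the mechanism behind the "tax raises prices" result can be illustrated with a two-sector cost-push sketch in which all coefficients and the tax rate are invented:

```python
def inv2(m):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical coefficients: sector 0 = water, sector 1 = rest of the economy.
A = [[0.05, 0.10],    # water delivered per unit of each sector's output
     [0.20, 0.30]]    # other intermediate inputs per unit of output
# Value added per unit of output, fixed so that baseline prices equal 1.
v = [1.0 - (A[0][j] + A[1][j]) for j in range(2)]

def prices(A, v):
    # Cost-push price model p = (I - A')^{-1} v, solved with the 2x2 inverse.
    M = [[(1.0 if i == j else 0.0) - A[j][i] for j in range(2)] for i in range(2)]
    Minv = inv2(M)
    return [sum(Minv[i][j] * v[j] for j in range(2)) for i in range(2)]

p_base = prices(A, v)             # [1.0, 1.0] by construction

tau = 0.20                        # ad valorem tax on intermediate water purchases
A_tax = [[A[0][j] * (1.0 + tau) for j in range(2)], A[1][:]]
p_tax = prices(A_tax, v)          # both production prices rise
```

The tax enters as a higher effective cost of water inputs, and the inverse propagates that cost increase through every sector's price, which is the qualitative result the abstract reports.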
Abstract:
The aim of the paper is to analyse the economic impact of alternative policies implemented on the energy activities of the Catalan production system. Specifically, we analyse the effects of a tax on intermediate energy uses, a reduction in the final production of energy, and a reduction in intermediate energy uses. The methodology involves two versions of the input-output price model: a competitive price formulation and a mark-up price formulation. The input-output price framework will make it possible to evaluate how the alternative measures modify production prices, consumption prices, private welfare, and intermediate energy uses. The empirical application is for the Catalan economy and uses economic data for the year 2001.
Abstract:
Research in business dynamics has been advancing rapidly in recent years, but the translation of the new knowledge into industrial policy design is slow. One striking aspect in the policy area is that although research and analysis do not identify a specific optimal rate of business creation and business exit, governments everywhere have adopted business start-up support programs under the implicit principle that the more, the better. The purpose of this article is to contribute to understanding the implications of the available research for policy design. Economic analysis has identified firm heterogeneity as the most salient characteristic of industrial dynamics, so better knowledge of the different types of entrepreneur, their behavior and their specific contributions to innovation and growth would enable us to see into the 'black box' of business dynamics and improve the design of appropriate public policies. The empirical analysis performed here shows that not all new businesses have the same impact on relevant economic variables, and that self-employment is of quite a different economic nature from that of firms with employees. It is argued that public programs should not promote indiscriminate entry but rather give priority to able entrants with survival capacities. Survival of entrants is positively related to their size at birth. Innovation and investment improve the likelihood of survival of new manufacturing start-ups. Investment in R&D increases the risk of failure in new firms, although it improves the competitiveness of incumbents.
Abstract:
Multiplier analysis based on the information contained in Leontief's inverse is undoubtedly part of the core of input-output methodology, and numerous applications and extensions have been developed that exploit its informational content. Nonetheless, there are some implicit theoretical assumptions whose implications have perhaps not been fully assessed. This is the case of the 'excess capacity' assumption. Under this assumption, resources are available as needed to adjust production to new equilibrium states. In real-world applications, however, new resources are scarce and costly; supply constraints kick in, and resource allocation needs to take them into account to properly assess the effect of government policies. Using a closed general equilibrium model that incorporates supply constraints, we perform some simple numerical exercises and derive a 'constrained' multiplier matrix that can be compared with the standard 'unrestricted' multiplier matrix. Results show that the effectiveness of expenditure policies hinges critically on whether or not supply constraints are considered.
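The paper derives its constrained multipliers from a full general equilibrium model, which is not reproducible from the abstract; a much cruder sketch still conveys why capacity limits shrink the standard multiplier: compute the unrestricted Leontief response, then truncate it at a hypothetical capacity slack. All numbers below are invented:

```python
def inv2(m):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical technical-coefficient matrix A (inputs per unit of output).
A = [[0.2, 0.3],
     [0.1, 0.4]]
I_minus_A = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(2)] for i in range(2)]
L = inv2(I_minus_A)               # Leontief inverse: the 'unrestricted' multiplier matrix

df = [5.0, 0.0]                   # final-demand injection into sector 0
dx = [sum(L[i][j] * df[j] for j in range(2)) for i in range(2)]

# Crude stand-in for a supply constraint: cap each sector's expansion at a
# hypothetical capacity slack (the paper instead re-solves a general
# equilibrium model with the constraints imposed).
slack = [10.0, 1.0]
dx_constrained = [min(dx[i], slack[i]) for i in range(2)]
```

The truncation ignores the reallocation effects a proper equilibrium model would capture, but it makes the abstract's point concrete: whenever a constraint binds, the realised impact falls short of what the unrestricted multiplier matrix predicts.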
Abstract:
This paper examines the proper use of dimensions and curve-fitting practices, elaborating on Georgescu-Roegen's economic methodology in relation to the three main concerns of his epistemological orientation. Section 2 introduces two critical issues concerning dimensions and curve-fitting practices in economics in view of Georgescu-Roegen's economic methodology. Section 3 deals with the logarithmic function (ln z) and shows that z must be a dimensionless pure number, otherwise the expression is nonsensical. Several unfortunate examples of this analytical error are presented, including macroeconomic data analysis conducted by a representative figure in the field. Section 4 deals with the standard Cobb-Douglas function: it is shown that no operational meaning can be obtained for capital or labor within the Cobb-Douglas function. Section 4 also deals with economists' "curve-fitting fetishism". Section 5 concludes the paper with several epistemological issues relating to dimensions and curve-fitting practices in economics.
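The point about ln z, that the argument must be a dimensionless pure number, can be made operational with a toy quantity class that tracks a single unit exponent. The class and the example magnitudes are purely illustrative, not from the paper:

```python
import math

class Quantity:
    # Toy dimensional bookkeeping: a value plus an integer exponent for one
    # base unit (enough to make the point about ln of a dimensioned number).
    def __init__(self, value, dim=0):
        self.value, self.dim = value, dim

    def __truediv__(self, other):
        # Dividing quantities subtracts unit exponents.
        return Quantity(self.value / other.value, self.dim - other.dim)

    def log(self):
        if self.dim != 0:
            raise ValueError("ln of a dimensioned quantity is meaningless")
        return math.log(self.value)

gdp = Quantity(1000.0, dim=1)      # a monetary magnitude (hypothetical units)
gdp_ref = Quantity(100.0, dim=1)   # a reference magnitude in the same units
log_ratio = (gdp / gdp_ref).log()  # fine: the ratio is dimensionless (ln 10)
```

Taking `gdp.log()` directly raises an error, which is exactly the analytical mistake Section 3 criticises: the logarithm is only defined once the quantity has been reduced to a pure number, e.g. by dividing by a reference magnitude.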
Abstract:
One feature of the modern nutrition transition is the growing consumption of animal proteins. The most common approach in the quantitative analysis of this change has been the study of averages of food consumption, but this kind of analysis is incomplete without knowledge of the number of consumers. Data about consumers are not usually published in historical statistics. This article introduces a methodological approach for reconstructing consumer populations. The methodology is based on some assumptions about the diffusion process of foodstuffs and on modeling consumption patterns with a log-normal distribution. The estimating process is illustrated with the specific case of milk consumption in Spain between 1925 and 1981. The results fit quite well with other available data and indirect sources, showing that this dietary change was a slow and late process. The reconstruction of consumer populations could shed new light on the study of nutrition transitions.
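The reconstruction idea can be sketched as: assume intake among consumers is log-normal, derive the mean intake per consumer, and divide recorded aggregate consumption by it. All parameter values below (median intake, dispersion, aggregate litres) are invented for illustration, not the paper's estimates for Spain:

```python
import math

def lognormal_cdf(x, mu, sigma):
    # CDF of a log-normal variable X, where ln X ~ Normal(mu, sigma).
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

# Invented parameters: intake among milk consumers ~ LogNormal(mu, sigma),
# in litres per person per year.
mu, sigma = math.log(60.0), 0.8                 # median 60 L/year among consumers
mean_intake = math.exp(mu + sigma ** 2 / 2.0)   # mean intake per consumer

total_consumption = 1.2e9                       # litres recorded in supply statistics
consumers = total_consumption / mean_intake     # implied consumer population

# Share of consumers above some intake threshold, e.g. 100 L/year.
share_over_100 = 1.0 - lognormal_cdf(100.0, mu, sigma)
```

Varying `mu` and `sigma` over time, subject to the diffusion assumptions the article describes, is what would turn a series of aggregate consumption figures into a series of consumer populations.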