871 results for Panel data probit model
Abstract:
During the first hours after release of petroleum at sea, crude oil hydrocarbons partition rapidly into air and water. However, limited information is available about very early evaporation and dissolution processes. We report on the composition of the oil slick during the first day after a permitted, unrestrained 4.3 m³ oil release conducted on the North Sea. Rapid mass transfers of volatile and soluble hydrocarbons were observed, with >50% of ≤C17 hydrocarbons disappearing within 25 h from this oil slick of <10 km² area and <10 μm thickness. For oil sheen, >50% losses of ≤C16 hydrocarbons were observed after 1 h. We developed a mass transfer model to describe the evolution of oil slick chemical composition and water column hydrocarbon concentrations. The model was parametrized based on environmental conditions and hydrocarbon partitioning properties estimated from comprehensive two-dimensional gas chromatography (GC×GC) retention data. The model correctly predicted the observed fractionation of petroleum hydrocarbons in the oil slick resulting from evaporation and dissolution. This is the first report on the broad-spectrum compositional changes in oil during the first day of a spill at the sea surface. Expected outcomes under other environmental conditions are discussed, as well as comparisons to other models.
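As an illustration of the mass-transfer idea, here is a minimal Python sketch assuming hypothetical first-order rate constants; the paper parametrizes these from environmental conditions and GC×GC-derived partitioning properties, so the numbers below are illustrative only.

```python
import numpy as np

# Hypothetical compounds with illustrative first-order rate constants (1/h)
# for evaporation (k_evap) and dissolution (k_diss); real values would come
# from partitioning properties estimated from GC×GC retention data.
compounds = {
    # name: (k_evap, k_diss)
    "n-C10": (0.30, 0.010),
    "toluene": (0.45, 0.080),
    "n-C17": (0.02, 0.001),
}

t = 25.0  # hours after release

for name, (k_evap, k_diss) in compounds.items():
    # Combined first-order loss: m(t) = m0 * exp(-(k_evap + k_diss) * t)
    frac_remaining = np.exp(-(k_evap + k_diss) * t)
    print(f"{name}: {100 * (1 - frac_remaining):.1f}% lost after {t:.0f} h")
```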
Abstract:
This paper analyses the differential impact of human capital, in terms of different levels of schooling, on regional productivity and convergence. The potential existence of geographical spillovers of human capital is also considered by applying spatial panel data techniques. The empirical analysis of Spanish provinces between 1980 and 2007 confirms the positive impact of human capital on regional productivity and convergence, but reveals no evidence of any positive geographical spillovers of human capital. In fact, in some specifications the spatial lag of tertiary schooling has a negative effect on the variables under consideration.
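A minimal sketch of the spatial-lag idea on a toy panel, assuming a hypothetical row-standardized contiguity matrix W; this pooled OLS stand-in is far simpler than the spatial panel estimators used in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_prov, n_years = 5, 10  # toy panel: 5 provinces, 10 years

# Row-standardized spatial weights matrix W (hypothetical contiguity)
W = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)

human_capital = rng.normal(size=(n_years, n_prov))
productivity = 0.5 * human_capital + rng.normal(scale=0.3, size=(n_years, n_prov))

# Spatial lag of human capital: weighted average of neighbours' values each year
spatial_lag = human_capital @ W.T

X = sm.add_constant(np.column_stack([human_capital.ravel(), spatial_lag.ravel()]))
y = productivity.ravel()
print(sm.OLS(y, X).fit().summary())
```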
Abstract:
The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series prior to assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and illustrated. A simulation study is carried out to explore the bias and precision of the estimators and compare them to an analytical procedure matching the data simulation model. The experimental conditions include two data generation models, several degrees of serial dependence, trend, level and/or slope change. The results suggest that the level and slope change estimates provided by the procedure are unbiased for all levels of serial dependence tested and trend is effectively controlled for. The efficiency of the slope change estimator is acceptable, whereas the variance of the level change estimator may be problematic for highly negatively autocorrelated data series.
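A minimal sketch of the general procedure on hypothetical AB-design data, using simple OLS for the baseline trend; the paper's exact estimators may differ.

```python
import numpy as np

# Toy AB single-case data: baseline (phase A) then intervention (phase B)
baseline = np.array([3.0, 3.4, 3.9, 4.1, 4.6])
treatment = np.array([7.0, 7.8, 8.9, 9.7, 10.8])

# 1) Estimate baseline trend by OLS on phase A ...
t_a = np.arange(len(baseline))
slope_a, intercept_a = np.polyfit(t_a, baseline, 1)

# 2) ... and remove it from the WHOLE series before assessing the treatment
t_all = np.arange(len(baseline) + len(treatment))
detrended = np.concatenate([baseline, treatment]) - (intercept_a + slope_a * t_all)

a, b = detrended[:len(baseline)], detrended[len(baseline):]

# 3) Slope change: remaining trend in the detrended treatment phase
slope_change = np.polyfit(np.arange(len(b)), b, 1)[0]

# 4) Level change: mean shift between detrended phases (one simple choice)
level_change = b.mean() - a.mean()

print(f"slope change = {slope_change:.2f}, level change = {level_change:.2f}")
```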
Abstract:
Past and current climate change has already induced drastic biological changes. We need projections of how future climate change will further impact biological systems. Modeling is one approach to forecast future ecological impacts, but requires data for model parameterization. As collecting new data is costly, an alternative is to use the increasingly available georeferenced species occurrence and natural history databases. Here, we illustrate the use of such databases to assess climate change impacts on mountain flora. We show that these data can be used effectively to derive dynamic impact scenarios, suggesting upward migration of many species and possible extinctions when no suitable habitat is available at higher elevations. Systematically georeferencing all existing natural history collections data in mountain regions could allow a larger assessment of climate change impact on mountain ecosystems in Europe and elsewhere.
Abstract:
This paper analyzes the profile of Spanish young innovative companies (YICs) and the determinants of innovation and imitation strategies. The results for an extensive sample of 2,221 Spanish firms studied during the period 2004–2010 show that YICs are found in all sectors, although they are more concentrated in high-tech sectors and, in particular, in knowledge-intensive services (KIS). Three of every four YICs are involved in KIS. Our results highlight that financial and knowledge barriers strongly affect the capacity of young, small firms to innovate and to become YICs, whereas market barriers are not obstacles to becoming a YIC. Public funding, in particular from the European Union, makes it easier for a new firm to become a YIC. In addition, YICs are more likely to innovate than mature firms, although they are more sensitive to sectoral and territorial factors. YICs make more dynamic use of innovation and imitation strategies when they operate in high-tech industries and are based in science parks located close to universities. Keywords: innovation strategies, public innovation policies, barriers to innovation, multinomial probit model. JEL Codes: D01, D22, L60, L80, O31
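For illustration, a minimal sketch of modelling the choice among no innovation, imitation, and innovation. Since statsmodels ships no multinomial probit, a multinomial logit is used as a stand-in, and all data and variable names are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500  # hypothetical firms

df = pd.DataFrame({
    "financial_barrier": rng.normal(size=n),
    "knowledge_barrier": rng.normal(size=n),
    "public_funding": rng.integers(0, 2, size=n),
})
# Strategy: 0 = none, 1 = imitation, 2 = innovation (synthetic outcome)
latent = 0.8 * df["public_funding"] - 0.5 * df["financial_barrier"]
df["strategy"] = pd.cut(latent + rng.normal(size=n),
                        [-np.inf, -0.5, 0.5, np.inf], labels=False)

X = sm.add_constant(df[["financial_barrier", "knowledge_barrier", "public_funding"]])
model = sm.MNLogit(df["strategy"], X).fit(disp=False)
print(model.summary())
```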
Abstract:
Risk maps summarizing landscape suitability of novel areas for invading species can be valuable tools for preventing species' invasions or controlling their spread, but methods employed for development of such maps remain variable and unstandardized. We discuss several considerations in development of such models, including types of distributional information that should be used, the nature of explanatory variables that should be incorporated, and caveats regarding model testing and evaluation. We highlight that, in the case of invasive species, such distributional predictions should aim to derive the best hypothesis of the potential distribution of the species by using (1) all distributional information available, including information from both the native range and other invaded regions; (2) predictors linked as directly as is feasible to the physiological requirements of the species; and (3) modelling procedures that carefully avoid overfitting to the training data. Finally, model testing and evaluation should focus on well-predicted presences, and less on efficient prediction of absences; a k-fold regional cross-validation test is discussed.
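A minimal sketch of a regional k-fold cross-validation with scikit-learn's GroupKFold on synthetic data: whole regions are held out at a time, and evaluation emphasizes sensitivity on presences rather than efficient prediction of absences.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 400

X = rng.normal(size=(n, 3))                         # hypothetical climate predictors
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)  # presence/absence
region = rng.integers(0, 4, size=n)                 # 4 spatial regions

# Hold out whole regions so evaluation is not inflated by spatial autocorrelation
for train, test in GroupKFold(n_splits=4).split(X, y, groups=region):
    clf = LogisticRegression().fit(X[train], y[train])
    # Sensitivity (recall on presences): focus on well-predicted presences
    sens = recall_score(y[test], clf.predict(X[test]))
    print(f"held-out region {region[test][0]}: sensitivity = {sens:.2f}")
```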
Abstract:
The purposes of this study were to determine the distribution and climatic patterns of current and future physic nut (Jatropha curcas) cultivation regions in Mexico, and to identify possible locations for establishing in vivo germplasm banks, using geographic information systems. Current climatic data were processed with the Floramap software to obtain distribution maps and climatic patterns of regions where wild physic nuts could be found. The DIVA-GIS software analyzed current climatic data (Worldclim model) and climatic data generated by the CCM3 model to identify current and future physic nut cultivation regions, respectively. The distribution map showed that physic nut was present in most of the tropical and subtropical areas of Mexico, corresponding to three agroclimatic regions. Climate types were Aw2, Aw1, and Bs1 for regions 1, 2, and 3, respectively. Nontoxic genotypes were associated with region 2, and toxic genotypes were associated with regions 1 and 3. According to the current and future cultivation regions identified, the most suitable locations for establishing in vivo germplasm collections were the coast of Michoacán and the Isthmus of Tehuantepec, located among the states of Veracruz, Oaxaca, and Chiapas.
Abstract:
The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower-level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e., row- or column-oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of deploying the framework on a public cloud provider, benchmark it against other popular solutions (which are not always the best for such ad hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated workshops on astronomical data analysis techniques.
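A minimal, Hadoop-free sketch of the map/reduce pattern behind hypercube generation: the mapper emits one cell key per record and the reducer sums counts per cell. Record fields and binning are hypothetical.

```python
from collections import Counter
from itertools import chain

# Hypothetical star records: (magnitude, parallax_mas)
records = [(12.3, 1.8), (14.1, 0.4), (12.9, 1.7), (14.8, 0.3), (13.2, 0.9)]

def mapper(record):
    """Emit ((mag_bin, parallax_bin), 1): one count per hypercube cell."""
    mag, plx = record
    yield (int(mag), round(plx, 0)), 1

def reducer(pairs):
    """Sum the counts for every cell key (what Hadoop does per key)."""
    cells = Counter()
    for key, count in pairs:
        cells[key] += count
    return cells

hypercube = reducer(chain.from_iterable(mapper(r) for r in records))
for cell, count in sorted(hypercube.items()):
    print(cell, count)
```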
Abstract:
This paper analyses the effect of R&D investment on firm growth. We use an extensive sample of Spanish manufacturing and service firms. The database comprises several waves of the Spanish Community Innovation Survey and covers the period 2004–2008. First, a probit model corrected for sample selection analyses the role of innovation in the probability of being a high-growth firm (HGF). Second, a quantile regression technique is applied to explore the determinants of firm growth. Our database shows that a small number of firms experience fast growth rates in terms of sales or employees. Our results reveal that R&D investments positively affect the probability of becoming an HGF. However, differences appear between manufacturing and service firms. Finally, when we study the impact of R&D investment on firm growth, quantile estimations show that internal R&D has a significant positive impact in the upper quantiles, while external R&D shows a significant positive impact up to the median. Keywords: High-growth firms, Firm growth, Innovation activity. JEL Classifications: L11, L25, L26, O30
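A minimal sketch of the quantile-regression stage on synthetic data with statsmodels' QuantReg; the selection-corrected probit stage is omitted, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1000  # hypothetical firms

df = pd.DataFrame({
    "internal_rd": rng.normal(size=n),
    "external_rd": rng.normal(size=n),
})
# Synthetic growth: internal R&D matters more in the right tail
df["growth"] = (0.2 * df["internal_rd"] + 0.1 * df["external_rd"]
                + rng.gumbel(scale=0.5, size=n) * (1 + 0.5 * df["internal_rd"].clip(0)))

# Estimate the R&D effect at different points of the growth distribution
for q in (0.25, 0.50, 0.75, 0.90):
    fit = smf.quantreg("growth ~ internal_rd + external_rd", df).fit(q=q)
    print(f"q={q}: internal={fit.params['internal_rd']:.3f}, "
          f"external={fit.params['external_rd']:.3f}")
```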
Abstract:
The production and use of false identity and travel documents in organized crime represent a serious and evolving threat. However, the present-day fight against this criminal problem is driven largely by a case-by-case perspective, which suffers from linkage blindness and limited analytical capacity. To help overcome these limitations, a process model was developed from a forensic perspective. It guides the systematic analysis and management of seized false documents to generate forensic intelligence that supports strategic and tactical decision-making in an intelligence-led policing approach. The model is articulated on a three-level architecture that aims to assist in detecting and following up on general trends, production methods, and links between cases or series. Analyses of a large dataset of counterfeit and forged identity and travel documents illustrate the model, its three levels, and their contribution. Examples show how the proposed approach helps detect emerging trends, evaluate the black market's degree of structure, uncover criminal networks, monitor the quality of false documents, and identify their weaknesses in order to guide the design of more secure travel and identity documents. The proposed process model is thought to have general application in forensic science and can readily be transposed to other fields of study.
Abstract:
The aim of this thesis is to identify, using linear regression analysis on panel data, the factors affecting the capital structures of Finnish listed companies in 1999–2004. Based on these factors, we infer which capital structure theory or theories these firms follow. Capital structure theories can be divided into two classes according to whether or not they posit an optimal capital structure. The trade-off theory and the related agency theory posit an optimal capital structure. In the trade-off theory, the capital structure is chosen by weighing the benefits and costs of debt. The agency theory is otherwise similar to the trade-off theory, but it additionally takes into account the agency costs of debt. The pecking order theory and the market timing theory do not posit an optimal capital structure. In the pecking order theory, financing is chosen according to a hierarchy (internal funds, debt, mezzanine financing, equity). In the market timing theory, firms choose the form of financing that is most advantageous to raise under prevailing market conditions. According to the empirical results, leverage depends positively on risk, collateral, and intangible assets. Leverage depends negatively on liquidity, stock returns, and profitability. Dividends have no effect on leverage. Among industries, industrial goods and services and basic materials carry higher leverage ratios than other industries. The results mainly support the pecking order theory and, to some extent, the market timing theory; the other theories receive only weak support.
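A minimal sketch of a leverage regression with firm fixed effects on a synthetic panel; variable names are hypothetical and the thesis' exact specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
firms, years = 30, 6  # toy panel, e.g. 1999-2004

df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "profitability": rng.normal(size=firms * years),
    "liquidity": rng.normal(size=firms * years),
    "collateral": rng.normal(size=firms * years),
})
df["leverage"] = (-0.4 * df["profitability"] - 0.2 * df["liquidity"]
                  + 0.3 * df["collateral"] + rng.normal(size=len(df)))

# Firm fixed effects via C(firm) dummies; a pecking-order-consistent result
# would be a negative profitability coefficient
fit = smf.ols("leverage ~ profitability + liquidity + collateral + C(firm)", df).fit()
print(fit.params[["profitability", "liquidity", "collateral"]])
```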
Abstract:
The aim of this thesis is to study the peso problem and devaluation expectations in the following Latin American countries: Argentina, Brazil, Costa Rica, Uruguay, and Venezuela. We also examine whether the peso problem can explain the irregular behavior of interest rates before an actual devaluation takes place. To make this possible, the market's expected probability of devaluation is computed for the countries under study. The expected probability of devaluation is calculated for the period from January 1996 to December 2006 using two different models. According to the interest rate differential model, the market's devaluation expectations can be derived from the interest rate differential between countries. Second, a probit model uses several macroeconomic factors as explanatory variables to estimate the expected probability of devaluation. We further examine how the evolution of individual macroeconomic variables affects the expected probability of devaluation. The empirical results show that the Latin American countries studied exhibited a peso problem between January 1996 and December 2006. According to the interest rate differential model, a peso problem was found in all the countries studied except Argentina. Correspondingly, the probit model indicated a peso problem in all the countries studied. The results also show that the irregular behavior of interest rates before an actual devaluation can be explained by the peso problem. The probit results further indicate that there is no uniform pattern in how macroeconomic variables affect the market's devaluation expectations in Latin America; rather, the effects appear to be country-specific.
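A minimal sketch of the probit stage on synthetic data: a devaluation indicator is regressed on macroeconomic covariates, and the fitted values are read as expected devaluation probabilities. Covariate names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 132  # e.g. monthly observations, January 1996 to December 2006

# Hypothetical macroeconomic covariates
reserves_growth = rng.normal(size=n)
overvaluation = rng.normal(size=n)
X = sm.add_constant(np.column_stack([reserves_growth, overvaluation]))

# Synthetic devaluation events driven by the covariates
latent = -1.0 - 0.8 * reserves_growth + 0.9 * overvaluation
devaluation = (latent + rng.normal(size=n) > 0).astype(int)

# Fitted values are the market's expected devaluation probabilities
probit = sm.Probit(devaluation, X).fit(disp=False)
print(probit.params)
print(probit.predict(X)[:5])
```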
Abstract:
The aim of this thesis is to determine which factors affect the yield spread between corporate and government bonds. According to structural credit risk pricing models, the determinants of credit risk are the firm's leverage, its volatility, and the risk-free interest rate. In particular, we examine how well these theoretical determinants explain yield spreads and whether other important explanatory factors exist. Credit default swap quotes are used to measure the spreads. The explanatory factors consist of both firm-specific and market-wide variables. Credit default swap and firm-specific data were collected for a total of 50 firms from euro area countries. The data consist of monthly observations from 1 January 2003 to 31 December 2006. The empirical results show that the determinants implied by structural models explain only a small part of the changes in spreads over time. On the other hand, these theoretical determinants explain the cross-sectional variation in spreads considerably better. Factors beyond the theoretical ones can explain a large part of the variation in spreads. The general risk premium in the bond market turned out to be a particularly important explanatory factor. The results indicate that credit risk pricing models need to be developed further so that they account for market-wide factors in addition to firm-specific ones.
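A minimal sketch regressing monthly CDS spread changes on the structural determinants plus a market-wide risk premium proxy, on synthetic data; all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
firms, months = 50, 48  # 50 firms, monthly data 2003-2006

df = pd.DataFrame({
    "d_leverage": rng.normal(size=firms * months),
    "d_volatility": rng.normal(size=firms * months),
    "d_riskfree": np.tile(rng.normal(size=months), firms),
    "bond_risk_premium": np.tile(rng.normal(size=months), firms),
})
df["d_spread"] = (0.3 * df["d_leverage"] + 0.4 * df["d_volatility"]
                  - 0.2 * df["d_riskfree"] + 0.6 * df["bond_risk_premium"]
                  + rng.normal(size=len(df)))

# Pooled OLS: structural determinants plus a market-wide factor
fit = smf.ols("d_spread ~ d_leverage + d_volatility + d_riskfree"
              " + bond_risk_premium", df).fit()
print(fit.summary())
```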