833 results for Multi-model inference
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach, alternative to geostatistics, for modelling the spatial distribution of petrophysical properties in complex reservoirs. The approach is based on semi-supervised learning, which handles both "labelled" observed data and "unlabelled" data, which have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and to describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. As a data-driven algorithm, semi-supervised SVR is designed to integrate various kinds of conditioning information and learn dependencies from it. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. Uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and the smoothness/variability of the spatial property distribution.
The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of the models with different combinations of unknown parameters and discusses sensitivity issues.
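The sampling loop behind such multiple history-matched models can be sketched with a minimal Metropolis algorithm. The toy forward model, the flat prior bounds, and the parameter names (corr_length, smoothness) below are illustrative assumptions, not the paper's actual reservoir simulator or parameterization:

```python
import math
import random

random.seed(42)

OBSERVED = 1.0  # hypothetical observed production response

def misfit(corr_length, smoothness):
    """Toy forward model; a real study would run a reservoir simulator here."""
    simulated = 0.5 * corr_length + 0.3 * smoothness
    return (simulated - OBSERVED) ** 2

def log_posterior(theta):
    corr_length, smoothness = theta
    if not (0.0 < corr_length < 5.0 and 0.0 < smoothness < 5.0):
        return -math.inf  # flat prior with hard bounds
    return -misfit(corr_length, smoothness) / (2 * 0.05 ** 2)  # Gaussian likelihood

def metropolis(n_steps, step=0.2):
    theta = [1.0, 1.0]
    lp = log_posterior(theta)
    samples = []
    for _ in range(n_steps):
        proposal = [t + random.gauss(0.0, step) for t in theta]
        lp_new = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio)
        if random.random() < math.exp(min(0.0, lp_new - lp)):
            theta, lp = proposal, lp_new
        samples.append(tuple(theta))
    return samples

# Each retained sample is one "history-matched" parameter set; the cloud of
# samples (after burn-in) approximates the posterior distribution.
posterior_samples = metropolis(5000)
burned = posterior_samples[1000:]
mean_corr = sum(s[0] for s in burned) / len(burned)
```

In practice each posterior sample would be propagated through the forward simulator to produce the probabilistic production forecasts and uncertainty envelopes described above.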
Abstract:
This article studies how product introduction decisions relate to profitability and uncertainty in the context of multi-product firms and product differentiation. These two features, common to many modern industries, have not received much attention in the literature compared with the classical problem of firm entry, even though the determinants of firm and product entry are quite different. The theoretical predictions about the sign of the impact of uncertainty on product entry are not conclusive. Therefore, an econometric model relating firms' product introduction decisions to profitability and profit uncertainty is proposed. Firms' estimated profits are obtained from a structural model of product demand and supply, and uncertainty is proxied by the variance of profits. The empirical analysis is carried out using data on the Spanish car industry for the period 1990-2000. The results show a positive relationship between product introduction and profitability, and a negative one with respect to profit variability. Interestingly, the degree of uncertainty appears to be a stronger driving force of entry than profitability, suggesting that the product proliferation process in the Spanish car market may have been mainly a consequence of lower uncertainty rather than the result of a more profitable market.
Abstract:
Theories on social capital and on social entrepreneurship have mainly highlighted the capacity of social capital to generate enterprises and to foster good relations between third sector organizations and the public sector. This paper considers social capital in a specific third sector enterprise: multi-stakeholder social cooperatives, seen at the same time as results, creators and incubators of social capital. In the particular enterprises that identify themselves as community social enterprises (SCEs), social capital, both organizational and relational, is fundamental: SCEs arise from, but also produce and disseminate, social capital. This paper focuses on the building of relational social capital and the refinement of helpful relations drawn from other arenas, where they were created and from where they are sometimes transferred to other settings in which their role is carried further (often in non-profit, horizontally and vertically arranged groups that share resources and relations). To represent this perspective, we use a qualitative system dynamics approach in which social capital is measured using proxies. The cooperation of volunteers, customers, community leaders and local third sector organizations is fundamental to establishing trust relations between local public authorities and cooperatives. These relations help the latter to maintain long-term contracts with local authorities as providers of social services and enable them to innovate in their services, by developing experiences and management models and maintaining an interchange with civil servants on these matters. The long-term relations and the organizational ties linking SCEs and public organizations help to create and to renew social capital.
Thus, multi-stakeholder cooperatives that originated via social capital developed in third sector organizations produce new social capital both within the cooperatives themselves and between different cooperatives (the entrepreneurial components of the third sector) and the public sector. In their entrepreneurial life, cooperatives have to counteract the "working drift," as a result of which only workers remain as members of the cooperative while other stakeholders leave the organization. Those who are not workers in the cooperative are (stake)holders with "weak ties," who are nevertheless fundamental in making a workers' cooperative an authentic multi-stakeholder social cooperative. To maintain multi-stakeholder governance and relations with the third sector and civil society, social cooperatives have to reinforce participation and dialogue with civil society through ongoing efforts to include people who bring social proposals. We represent these processes in a system dynamics model applied to local cooperatives, measuring the social capital created by the social cooperative through proxies, such as the number of volunteers and the strength of cooperation with public institutions. Using a reverse-engineering approach, we can identify the determinants of the creation of social capital and thereby support governance that creates social capital.
Abstract:
This work analyzes whether the relationship between risk and returns predicted by the Capital Asset Pricing Model (CAPM) holds in the Brazilian stock market. The analysis is based on discrete wavelet decomposition across different time scales. This technique makes it possible to analyze the relationship over different time horizons, from the short term (2 to 4 days) up to the long term (64 to 128 days). The results indicate a negative or null relationship between systematic risk and returns in Brazil from 2004 to 2007. Since the average excess return of the market portfolio over a risk-free asset during that period was positive, this relationship would be expected to be positive; that is, higher systematic risk should result in higher excess returns, which did not occur. Therefore, during that period, appropriate compensation for systematic risk was not observed in the Brazilian market. The scales that proved most significant for the risk-return relation were the first three, corresponding to short-term horizons. When each year is treated separately, thereby distinguishing positive from negative premiums, the risk-return relation predicted by the CAPM shows some relevance in certain years. However, this pattern did not persist throughout the years. Therefore, there is no evidence strong enough to confirm that asset pricing follows the model.
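The scale-by-scale analysis described above can be illustrated with a Haar discrete wavelet transform: detail coefficients at level j capture fluctuations at horizons of roughly 2^j observations, and a CAPM beta can be estimated separately at each scale. The synthetic returns and the choice of the Haar filter below are assumptions for illustration, not the paper's data or wavelet basis:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_details(x, levels):
    """Return the Haar DWT detail coefficients at each decomposition level."""
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))  # fluctuations at this scale
        approx = (even + odd) / np.sqrt(2)         # smoothed series for next level
    return details

# Synthetic daily excess returns with a known market beta of 1.5
n = 512
market = rng.normal(0.0, 1.0, n)
asset = 1.5 * market + rng.normal(0.0, 0.5, n)

# Scale-by-scale CAPM beta: cov(asset_j, market_j) / var(market_j)
betas = [
    float(np.cov(d_a, d_m)[0, 1] / np.var(d_m, ddof=1))
    for d_a, d_m in zip(haar_details(asset, 3), haar_details(market, 3))
]
```

With real data, comparing the estimated betas (and the associated risk-return regressions) across levels is what reveals which horizons carry the risk-return relation.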
Abstract:
The 2009-2010 Data Fusion Contest organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society focused on the detection of flooded areas using multi-temporal and multi-modal images. Both high-spatial-resolution optical and synthetic aperture radar data were provided. The goal was not only to identify the best algorithms (in terms of accuracy), but also to investigate the further improvement derived from decision fusion. This paper presents the four awarded algorithms and the conclusions of the contest, investigating both supervised and unsupervised methods and the use of multi-modal data for flood detection. Interestingly, a simple unsupervised change detection method provided accuracy similar to that of the supervised approaches, and a digital elevation model-based predictive method yielded a comparable projected change detection map without using post-event data.
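The kind of simple unsupervised change detection highlighted by the contest can be sketched as image differencing followed by a global threshold. The synthetic pre/post-event images and the mean-plus-two-sigma threshold below are illustrative assumptions, not any contestant's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two co-registered synthetic images: a "flooded" block darkens post-event.
pre = rng.normal(100.0, 5.0, (64, 64))
post = pre + rng.normal(0.0, 5.0, (64, 64))   # sensor/temporal noise
post[20:40, 20:40] -= 40.0                    # simulated flood signature

# Unsupervised change detection: absolute difference + global threshold.
diff = np.abs(post - pre)
threshold = diff.mean() + 2.0 * diff.std()
change_map = diff > threshold

# Evaluate against the known simulated flood extent.
flood_mask = np.zeros((64, 64), dtype=bool)
flood_mask[20:40, 20:40] = True
detection_rate = change_map[flood_mask].mean()
false_alarm_rate = change_map[~flood_mask].mean()
```

No training labels are used anywhere, which is what makes the method unsupervised; decision fusion would then combine several such maps.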
Abstract:
As a result of globalization and free trade agreements, international trade has grown enormously over the last few decades, inevitably putting more pressure on the environment. This has drawn the attention of both environmentalists and economists in response to the ever-growing concerns about climate change and the urgent need for international action to mitigate it. In this work we analyze the implications of international trade in terms of CO2 emissions between Spain and its main partners using a multi-regional input-output (MRIO) model. A fully integrated 13-region MRIO model is constructed to examine the pollution responsibility of Spain from both production and consumption perspectives. The empirical results show that Spain is a net importer of CO2 emissions, equivalent to 29% of its production-based emissions. Even though the leading partners by import value are countries such as Germany, France, Italy and Great Britain, the CO2 embodied in trade with China takes the largest share. This is mainly due to the importation of energy-intensive products from China, coupled with China's poor energy mix, which is dominated by coal-fired power plants. The largest portion (67%) of the global imported CO2 emissions is due to intermediate demand by production sectors. Products such as motor vehicles, chemicals, a variety of machinery and equipment, textile and leather products, and construction materials are the key imports that drive emissions through their production in the respective exporting countries. The construction sector, which peaked in 2005, is the activity most responsible for both domestic and imported emissions.
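The accounting behind an MRIO footprint rests on the Leontief inverse: total output x = (I - A)^-1 y, with emissions allocated to the regions where production occurs. A toy two-region, one-sector version (all numbers invented for illustration, not the paper's 13-region data) might look like:

```python
import numpy as np

# Toy MRIO: region 0 plays the role of "Spain", region 1 a trade partner.
# A[i, j] = input from region i required per unit of output in region j.
A = np.array([[0.10, 0.05],
              [0.20, 0.15]])
f = np.array([0.3, 0.9])   # CO2 intensity per unit output; partner is dirtier

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse

y_domestic = np.array([100.0, 0.0])  # final demand of region 0 only

x = L @ y_domestic                 # output required in every region
footprint = f * x                  # emissions by producing region
consumption_emissions = footprint.sum()  # consumption-based total for region 0
imported_emissions = footprint[1]        # emitted abroad to serve region 0's demand
```

Comparing `imported_emissions` with region 0's own production-based emissions is exactly the net-importer comparison the abstract reports for Spain.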
Abstract:
In this paper we present the theoretical and methodological foundations for the development of a multi-agent Selective Dissemination of Information (SDI) service model that applies Semantic Web technologies for specialized digital libraries. These technologies make it possible to achieve more efficient information management, improving agent–user communication processes and facilitating accurate access to relevant resources. Other tools used are fuzzy linguistic modelling techniques (which ease the interaction between users and system) and natural language processing (NLP) techniques for semiautomatic thesaurus generation. Also, RSS feeds are used as "current awareness bulletins" to generate personalized bibliographic alerts.
Abstract:
The recent availability of the chicken genome sequence poses the question of whether there are human protein-coding genes conserved in chicken that are currently not included in the human gene catalog. Here, we show, using comparative gene finding followed by experimental verification of exon pairs by RT–PCR, that the addition to the multi-exonic subset of this catalog could be as little as 0.2%, suggesting that we may be closing in on the human gene set. Our protocol, however, has two shortcomings: (i) the bioinformatic screening of the predicted genes, applied to filter out false positives, cannot handle intronless genes; and (ii) the experimental verification could fail to identify expression at a specific developmental time. This highlights the importance of developing methods that could provide a reliable estimate of the number of these two types of genes.
Abstract:
The paper presents a competence-based instructional design system and a way to personalize navigation through the course content. The navigation aid tool builds on the competence graph and the student model, which includes elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is best prepared to study. We use fuzzy set theory to deal with uncertainty. The marks of the assessment tests are transformed into linguistic terms and used to assign values to linguistic variables. For each competence, the level of difficulty and the level of knowledge of its prerequisites are calculated from the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence indicating its level of recommendation.
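The fuzzy inference step described above, from assessment marks through linguistic terms and IF-THEN rules to a crisp recommendation category, can be sketched as follows. The triangular membership functions, the rule base, and the 0-10 mark scale are illustrative assumptions, not the paper's actual design:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(mark):
    """Map a 0-10 assessment mark onto linguistic terms."""
    return {"low": tri(mark, -1, 0, 5),
            "medium": tri(mark, 2, 5, 8),
            "high": tri(mark, 5, 10, 11)}

def recommend(prereq_mark, difficulty_mark):
    """Fuzzy IF-THEN rules -> crisp category via max-min inference."""
    prereq = fuzzify(prereq_mark)          # how well prerequisites are known
    difficulty = fuzzify(difficulty_mark)  # how hard the competence appears
    rules = {
        "recommended": min(prereq["high"], difficulty["low"]),
        "neutral": max(prereq["medium"], difficulty["medium"]),
        "not_recommended": min(prereq["low"], difficulty["high"]),
    }
    # Crisp output: the category with the strongest rule activation.
    return max(rules, key=rules.get)

category = recommend(prereq_mark=9.0, difficulty_mark=2.0)
```

A student who has mastered the prerequisites (mark 9) of an easy-looking competence (mark 2) lands in the "recommended" category, which is the kind of suggestion the navigation graph would surface.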
Abstract:
Evaluating the possible benefits of the introduction of genetically modified (GM) crops must address the issue of consumer resistance as well as the complex regulation that has ensued. In the European Union (EU) this regulation envisions the “co-existence” of GM food with conventional and quality-enhanced products, mandates the labelling and traceability of GM products, and allows only a stringent adventitious presence of GM content in other products. All these elements are brought together within a partial equilibrium model of the EU agricultural food sector. The model comprises conventional, GM and organic food. Demand is modelled in a novel fashion, whereby organic and conventional products are treated as horizontally differentiated but GM products are vertically differentiated (weakly inferior) relative to conventional ones. Supply accounts explicitly for the land constraint at the sector level and for the need for additional resources to produce organic food. Model calibration and simulation allow insights into the qualitative and quantitative effects of the large-scale introduction of GM products in the EU market. We find that the introduction of GM food reduces overall EU welfare, mostly because of the associated need for costly segregation of non-GM products, but the producers of quality-enhanced products actually benefit.
Abstract:
Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study.
Location: World-wide.
Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years.
Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) in the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information, and (2) in the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node.
Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that lie outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
Abstract:
This article presents a formal model of policy decision-making in an institutional framework of separation of powers in which the main actors are pivotal political parties with voting discipline. The basic model previously developed from pivotal politics theory for the analysis of the United States lawmaking is here modified to account for policy outcomes and institutional performances in other presidential regimes, especially in Latin America. Legislators' party indiscipline at voting and multi-partism appear as favorable conditions to reduce the size of the equilibrium set containing collectively inefficient outcomes, while a two-party system with strong party discipline is most prone to produce 'gridlock', that is, stability of socially inefficient policies. The article provides a framework for analysis which can induce significant revisions of empirical data, especially regarding the effects of situations of (newly defined) unified and divided government, different decision rules, the number of parties and their discipline. These implications should be testable and may inspire future analytical and empirical work.
Abstract:
Mathematical methods combined with measurements of single-cell dynamics provide a means to reconstruct intracellular processes that are only partly or indirectly accessible experimentally. To obtain reliable reconstructions, the pooling of measurements from several cells of a clonal population is mandatory. However, cell-to-cell variability originating from diverse sources poses computational challenges for such process reconstruction. We introduce a scalable Bayesian inference framework that properly accounts for population heterogeneity. The method allows inference of inaccessible molecular states and kinetic parameters; computation of Bayes factors for model selection; and dissection of intrinsic, extrinsic and technical noise. We show how additional single-cell readouts such as morphological features can be included in the analysis. We use the method to reconstruct the expression dynamics of a gene under an inducible promoter in yeast from time-lapse microscopy data.
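The paper's hierarchical framework is far more general, but what "dissecting intrinsic and extrinsic noise" means can be illustrated with the classic dual-reporter decomposition, in which two identical reporters in the same cell share extrinsic fluctuations while their intrinsic fluctuations are independent. The simulation parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Dual-reporter thought experiment: each cell carries two identical reporters.
n_cells = 20000
extrinsic = rng.normal(0.0, 2.0, n_cells)              # shared cell-wide state
r1 = 10.0 + extrinsic + rng.normal(0.0, 1.0, n_cells)  # reporter 1
r2 = 10.0 + extrinsic + rng.normal(0.0, 1.0, n_cells)  # reporter 2

# Differences between the reporters isolate intrinsic noise (true variance 1.0);
# covariance across reporters isolates extrinsic noise (true variance 4.0).
intrinsic_var = 0.5 * np.mean((r1 - r2) ** 2)
extrinsic_var = np.cov(r1, r2)[0, 1]
```

The Bayesian framework described above generalizes this idea, additionally separating technical measurement noise and working from time-lapse trajectories rather than snapshots.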
Abstract:
This paper proposes a method to conduct inference in panel VAR models with cross-unit interdependencies and time variation in the coefficients. The approach can be used to obtain multi-unit forecasts and leading indicators and to conduct policy analysis in multi-unit setups. The framework of analysis is Bayesian, and MCMC methods are used to estimate the posterior distribution of the features of interest. The model is reparametrized to resemble an observable index model, and specification searches are discussed. As an example, we construct leading indicators for inflation and GDP growth in the Euro area using G-7 information.
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing the optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables is presented.