920 results for Sub-registry. Empirical Bayesian estimator. General equation. Balancing adjustment factor
Abstract:
The inverse controller is traditionally assumed to be a deterministic function. This paper presents a pedagogical methodology for estimating the stochastic model of the inverse controller. The proposed method is based on Bayes' theorem. Using Bayes' rule to obtain the stochastic model of the inverse controller allows the use of knowledge of uncertainty from both the inverse and the forward model in estimating the optimal control signal. The paper presents the methodology for general nonlinear systems. For illustration purposes, the proposed methodology is applied to linear Gaussian systems. © 2004 IEEE.
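The abstract does not give the estimator explicitly; as a minimal sketch of the underlying idea, assuming a probabilistic forward model p(y | u) of the plant and a prior p(u) over control signals, Bayes' rule yields the stochastic inverse model as a posterior over the control signal given the desired output y*:

$$
p(u \mid y^{*}) \;=\; \frac{p(y^{*} \mid u)\,p(u)}{\int p(y^{*} \mid u')\,p(u')\,du'},
\qquad
\hat{u} \;=\; \arg\max_{u}\, p(u \mid y^{*}).
$$

In the linear Gaussian case used for illustration, this posterior is itself Gaussian, so its mode (the optimal control signal) is available in closed form.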
Abstract:
CO vibrational spectra over catalytic nanoparticles under high coverages/pressures are discussed from a DFT perspective. Hybrid B3LYP and PBE DFT calculations of CO chemisorbed over Pd4 and Pd13 nanoclusters, and a 1.1 nm Pd38 nanoparticle, have been performed in order to simulate the corresponding coverage-dependent infrared (IR) absorption spectra, and hence provide a quantitative foundation for the interpretation of experimental IR spectra of CO over Pd nanocatalysts. B3LYP-simulated IR intensities are used to quantify site occupation numbers through comparison with experimental DRIFTS spectra, allowing an atomistic model of CO surface coverage to be created. DFT adsorption energetics at low CO coverage (θ → 0) suggest the CO binding strength follows the order hollow > bridge > linear, even for dispersion-corrected functionals, for sub-nanometre Pd nanoclusters. For a Pd38 nanoparticle, hollow- and bridge-bound CO are energetically similar (hollow ≈ bridge > atop). This ordering is known not to hold at the high coverages used experimentally, where atop CO has a much higher population than observed over Pd(111), as confirmed by our DRIFTS spectra for Pd nanoparticles supported on KIT-6 silica; site populations were therefore calculated through a comparison of DFT and spectroscopic data. At high CO coverage (θ = 1), all three adsorbed CO species co-exist on Pd38, and their interdiffusion is thermally feasible at STP. Under such high surface coverages, DFT predicts that bridge-bound CO chains are thermodynamically stable and isoenergetic to an entirely hollow-bound Pd/CO system. The Pd38 nanoparticle undergoes a linear (3.5%), isotropic expansion with increasing CO coverage, accompanied by 63 and 30 cm⁻¹ blue-shifts of hollow- and linear-bound CO respectively.
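The abstract describes quantifying site occupation numbers by comparing simulated per-molecule IR intensities with experimental DRIFTS band areas; a minimal sketch of that bookkeeping step is given below (all numerical values are placeholders, not results from the paper):

```python
# Hedged sketch: estimating relative CO site occupations from DRIFTS band
# areas and DFT-simulated per-molecule IR intensities.  The band areas and
# intensities below are placeholders, NOT values from the paper.

# Integrated experimental band areas (arbitrary units) per adsorption site.
band_area = {"atop": 1.0, "bridge": 2.4, "hollow": 1.8}                  # placeholder
# B3LYP-simulated IR intensity per adsorbed CO molecule (km/mol).
intensity_per_CO = {"atop": 2400.0, "bridge": 1500.0, "hollow": 900.0}   # placeholder

# The occupation number of a site is proportional to its band area divided
# by the per-molecule intensity of that site's C-O stretch.
raw = {site: band_area[site] / intensity_per_CO[site] for site in band_area}
total = sum(raw.values())
populations = {site: n / total for site, n in raw.items()}

for site, frac in populations.items():
    print(f"{site:>6}: {frac:.2%} of adsorbed CO")
```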
Abstract:
Purpose – This paper aims to apply the business-to-business (B2B) Service Brand Identity (SBI) scale to empirically assess the influence of service brand identity on brand performance for the first time. Design/methodology/approach – Based on data collected from 421 senior marketing executives, this paper applies the B2B SBI and structural equation modeling to fulfill the above purpose. Findings – Brand personality and human resource initiatives have a positive and significant influence on brand performance. Corporate visual identity, in addition to an employee and client focus, has an insignificant impact on performance. Consistent communications have a negative and significant influence on brand performance. Research limitations/implications – Data were only collected from executives in the UK. This research would benefit from replicative studies. Practical implications – This research empirically establishes the brand management activities that drive brand performance. Originality/value – This is the first empirical study to assess the influence service brand identity has on brand performance.
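Schematically, the structural part of such a model regresses brand performance (BP) on the brand-identity dimensions named above (the notation and grouping are assumed here, not taken from the paper):

$$
\mathrm{BP} \;=\; \gamma_1\,\mathrm{Personality} + \gamma_2\,\mathrm{HR} + \gamma_3\,\mathrm{CVI} + \gamma_4\,\mathrm{EmpClientFocus} + \gamma_5\,\mathrm{Comms} + \zeta,
$$

with the reported pattern being \(\gamma_1, \gamma_2 > 0\) and significant, \(\gamma_3, \gamma_4\) non-significant, and \(\gamma_5 < 0\) and significant.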
Abstract:
The real purpose of collecting big data is to identify causality in the hope that this will facilitate credible predictivity. But the search for causality can trap one into infinite regress, and thus one takes refuge in seeking associations between variables in data sets. Regrettably, the mere knowledge of associations does not enable predictivity. Associations need to be embedded within the framework of probability calculus to make coherent predictions. This is so because associations are a feature of probability models, and hence they do not exist outside the framework of a model. Measures of association, like correlation, regression, and mutual information, merely refute a preconceived model. Estimated measures of association do not lead to a probability model; a model is the product of pure thought. This paper discusses these and other fundamentals that are germane to seeking associations in particular, and machine learning in general. ACM Computing Classification System (1998): H.1.2, H.2.4, G.3.
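To make the point concrete: both correlation and mutual information are functionals of a joint probability model \(P_{X,Y}\), for example

$$
\rho(X,Y) = \frac{\operatorname{Cov}_P(X,Y)}{\sqrt{\operatorname{Var}_P(X)\,\operatorname{Var}_P(Y)}},
\qquad
I(X;Y) = \mathbb{E}_{P_{X,Y}}\!\left[\log \frac{dP_{X,Y}}{d(P_X \otimes P_Y)}\right],
$$

so neither is defined without first positing \(P\); a sample estimate can contradict, and hence refute, a hypothesized \(P\), but it does not by itself deliver the model needed for coherent prediction.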
Abstract:
2010 Mathematics Subject Classification: 62F12, 62M05, 62M09, 62M10, 60G42.
Abstract:
2010 Mathematics Subject Classification: 35R60, 60H15, 74H35.
Abstract:
With the development of social media tools such as Facebook and Twitter, mainstream media organizations, including newspapers and TV media, have played an active role in engaging with their audience and strengthening their influence on these recently emerged platforms. In this paper, we analyze the behavior of mainstream media on Twitter and study how they exert their influence to shape public opinion during the UK's 2010 General Election. We first propose an empirical measure to quantify mainstream media bias based on sentiment analysis and show that it correlates better with the actual political bias in the UK media than purely quantitative measures based on media coverage of the various political parties. We then compare the information diffusion patterns from different categories of sources. We found that while mainstream media is good at seeding prominent information cascades, its role in shaping public opinion is being challenged by journalists, since tweets from journalists are more likely to be retweeted, spread faster, and have a longer lifespan than tweets from mainstream media. Moreover, the political bias of the journalists is a good indicator of the actual election results. Copyright 2013 ACM.
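The abstract does not give the bias measure in closed form; a minimal sketch of a sentiment-based score of the kind described might average per-tweet sentiment by party (the function and variable names below are hypothetical):

```python
# Hedged sketch of a sentiment-weighted media-bias score (hypothetical
# names and data; not the paper's actual formula).
from collections import defaultdict

def bias_scores(tweets):
    """tweets: iterable of (party, sentiment) pairs, sentiment in [-1, 1]."""
    total, count = defaultdict(float), defaultdict(int)
    for party, sentiment in tweets:
        total[party] += sentiment
        count[party] += 1
    # Mean sentiment per party; a coverage-only measure would use count alone.
    return {party: total[party] / count[party] for party in count}

sample = [("Conservative", 0.4), ("Labour", -0.2), ("Labour", 0.1),
          ("LibDem", 0.3), ("Conservative", 0.6)]
print(bias_scores(sample))
```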
Abstract:
2000 Mathematics Subject Classification: 47H04, 65K10.
Abstract:
The theory and experimental applications of optical Airy beams have been under active development in recent years. Airy beams are characterised by very special properties: they are non-diffracting and propagate along parabolic trajectories. Among the striking applications of optical Airy beams are optical micro-manipulation implemented as the transport of small particles along the parabolic trajectory, Airy-Bessel linear light bullets, electron acceleration by Airy beams, and plasmonic energy routing. Detailed analyses of the mathematical aspects, as well as the physical interpretation, of electromagnetic Airy beams have been carried out by considering the wave as a function of the spatial coordinates only, related by a parabolic dependence between the transverse and longitudinal coordinates; the time dependence is assumed to be harmonic. Only a few papers consider a more general temporal dependence in which such a relationship exists between the temporal and spatial variables. This relationship is derived mostly by applying the Fourier transform to the expressions obtained for harmonic time dependence, or by Fourier synthesis using a specific modulated spectrum near some central frequency. Spatio-temporal Airy pulses in the form of contour integrals have been analysed near the caustic, and numerical solution of the nonlinear paraxial equation in the time domain shows soliton shedding from the Airy pulse in a Kerr medium. In this paper, explicitly time-dependent solutions of the electromagnetic problem in the form of spatio-temporal pulses are derived in the paraxial approximation through the Green's function for the paraxial equation. It is shown that a Gaussian and an Airy pulse can be obtained by applying the Green's function to an appropriate source current. We emphasize that the processes in the time domain are directional, which leads to unexpected conclusions, especially for the paraxial approximation.
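For reference, in the purely harmonic (time-independent) setting the normalized paraxial equation and its non-diffracting Airy solution read as follows, with \(s\) and \(\xi\) the standard dimensionless transverse and propagation coordinates (this is textbook background, not the paper's time-domain result):

$$
i\,\frac{\partial \phi}{\partial \xi} + \frac{1}{2}\,\frac{\partial^{2}\phi}{\partial s^{2}} = 0,
\qquad
\phi(s,\xi) = \operatorname{Ai}\!\left(s - \frac{\xi^{2}}{4}\right)
\exp\!\left[i\left(\frac{s\,\xi}{2} - \frac{\xi^{3}}{12}\right)\right],
$$

whose intensity maximum follows the parabola \(s = \xi^{2}/4\); the paper replaces this harmonic ansatz by explicitly time-dependent pulses constructed from the paraxial Green's function.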
Abstract:
Storyline detection from news articles aims at summarizing events described under a certain news topic and revealing how those events evolve over time. It is a difficult task because it requires first the detection of events from news articles published in different time periods and then the construction of storylines by linking events into coherent news stories. Moreover, each storyline has different hierarchical structures which are dependent across epochs. Existing approaches often ignore the dependency of hierarchical structures in storyline generation. In this paper, we propose an unsupervised Bayesian model, called dynamic storyline detection model, to extract structured representations and evolution patterns of storylines. The proposed model is evaluated on a large scale news corpus. Experimental results show that our proposed model outperforms several baseline approaches.
Abstract:
In this study, using representative cross-sectional samples from 25 countries that reflect conditions in the mid-2000s, we examine, on the basis of the Duncan–Hoffman model, the extent to which our database reproduces the most important empirical conclusions of the literature on the wage returns to educational mismatch; in addition, using the statistical tests proposed by Hartog and Oosterbeek, we analyse what the estimation results imply about the validity of Mincer's human-capital model and Thurow's job-competition model. Our results, based on Heckman's estimator correcting for selection bias, largely confirm the most important empirical regularities described in the literature, while the statistical tests reject the empirical validity of both the human-capital and the job-competition models for the majority of countries. / === / Using the Duncan–Hoffman model, the paper estimates returns to educational mismatch using comparable micro data for 25 European countries. The aim is to gauge the extent to which the main empirical regularities shown in other papers on the subject are confirmed by this data base. Based on tests proposed by Hartog and Oosterbeek, the author also considers whether the observed empirical patterns accord with the Mincerian basic human-capital model and Thurow's job-competition model. Heckman's sample-selection estimator shows the returns to be fairly consistent with those found in the literature; the job-competition model and the Mincerian human-capital model can be rejected for most countries.
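The Duncan–Hoffman specification referred to above is conventionally written as (standard notation, not quoted from the paper):

$$
\ln w_i = \alpha + \beta_r S_i^{r} + \beta_o S_i^{o} + \beta_u S_i^{u} + X_i'\gamma + \varepsilon_i,
$$

where \(S^{r}\) is the schooling required by the job and \(S^{o}\), \(S^{u}\) are years of over- and under-education, with \(X\) collecting controls. In the Hartog–Oosterbeek tests, the human-capital model implies \(\beta_r = \beta_o = -\beta_u\) (only attained schooling matters), while the job-competition model implies \(\beta_o = \beta_u = 0\) (only required schooling is rewarded).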
Abstract:
Risk, in the statistical sense, cannot be measured directly; it is a latent concept, just like economic development, organisation, or intelligence. What do these have in common? Risk, too, is a complex concept: it encompasses several measurable factors, and although we measure many of them, we do not even assume that we obtain an exact result. In this approach, the analyst knows from the outset that his or her knowledge is incomplete. Following Bélyácz [2011], this can also be put as follows: "Statisticians know that there is something they do not know." / === / From a statistical point of view, risk, like economic development, is a latent concept. Typically there is no single number that can explicitly estimate or project risk. Variance is used as a proxy in finance to measure risk; other professions use other concepts of risk. Underwriting is the most important step in the insurance business for analysing exposure. Actuaries evaluate the average claim size and the probability of a claim to calculate risk. Bayesian credibility can be used to calculate the insurance premium by combining observed claim frequencies with empirical knowledge used as a prior. Different types of risk can be classified in a risk matrix to separate out insurable risk; only this category can be analysed by multivariate statistical methods, which are based on statistical data. Sample size and the frequency of events are relevant not only in insurance, but in pension and investment decisions as well.
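The Bayesian credibility premium mentioned above is typically written, in the standard Bühlmann form assumed here, as a weighted average of the risk's own claims experience and the collective (prior) mean:

$$
P = Z\,\bar{X} + (1 - Z)\,\mu,
\qquad
Z = \frac{n}{n + k},
\quad
k = \frac{\mathbb{E}[\sigma^{2}(\Theta)]}{\operatorname{Var}[\mu(\Theta)]},
$$

where \(\bar{X}\) is the observed mean claim over \(n\) periods and \(\mu\) is the prior mean; the credibility weight \(Z\) grows with the volume of data, echoing the point about sample size and event frequency.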
Abstract:
This paper reviews the expected effects of the current financial crisis and subsequent recession on the rural landscape, in particular the agri-food sector, in Europe and Central Asia (ECA), on the basis of the structure of the rural economy and of the different organisations and institutions involved. Empirical evidence suggests that the crisis has hit the ECA region the hardest. Agriculture contributes about 9% to gross domestic product (GDP) for the ECA region as a whole, with 16% of the population employed in the agricultural sector. As far as the impact of the financial crisis on the agri-food sector is concerned, there are a few interconnected issues: (1) a reduction in income-elastic food demand and a decline in commodity prices, (2) the loss of employment and earnings of rural people working in urban centres, which also implies costly labour reallocation, (3) rising rural poverty, originating mainly from a lack of opportunities in the non-farm sector and a sizable decline in international remittances, (4) the tightening of agricultural credit markets, and (5) the collapse of sectoral government support programs and social safety-net measures in many countries. The paper reveals how the crisis hit farming and the broader agri-business sector differently, both in general and across the ECA sub-regions.
Abstract:
In recent decades, spillover has become a highly influential concept which has led to the initiation of new theoretical and methodological approaches that are designed to understand how people attempt to reconcile their work and private lives. The very notion of spillover presupposes that these spheres are connected, since the people who move between them bring certain ‘less visible’ content with them such as cognitive or affective mental constructs, skills, behaviors, etc. This paper attempts to create fresh insight into the different areas, themes and methodologies related to how spillover has been addressed over the last ten years. Four main categories are discussed based on the 76 academic articles that were selected: (1) general spillover research, (2) job flexibility and spillover, (3) individual coping strategies, and (4) the spillover effect on the different genders. The final section of the paper provides a tentative synthesis of the main conclusions and findings from the examined papers.
Abstract:
This dissertation examines the consequences of Electronic Data Interchange (EDI) use on interorganizational relations (IR) in the retail industry. EDI is a type of interorganizational information system that facilitates the exchange of business documents in structured, machine-processable form. The research model links EDI use and three IR dimensions: structural, behavioral, and outcome. Based on relevant literature from organizational theory and marketing channels, fourteen hypotheses were proposed for the relationships among EDI use and the three IR dimensions.

Data were collected through self-administered questionnaires from key informants in 97 retail companies (19% response rate). The hypotheses were tested using multiple regression analysis. The analysis supports the following hypotheses: (a) EDI use is positively related to information intensity and formalization, (b) formalization is positively related to cooperation, (c) information intensity is positively related to cooperation, (d) conflict is negatively related to performance and satisfaction, (e) cooperation is positively related to performance, and (f) performance is positively related to satisfaction. The results support the general premise of the model that the relationship between EDI use and satisfaction among channel members has to be viewed within an interorganizational context.

Research on EDI is still in a nascent stage. By identifying and testing relevant interorganizational variables, this study offers insights for practitioners managing boundary-spanning activities in organizations using or planning to use EDI. Further, the thesis provides avenues for future research aimed at understanding the consequences of this interorganizational information technology.
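As an illustration of the kind of regression test reported above, hypotheses (b) and (c) amount to regressing cooperation on formalization and information intensity; a hedged sketch follows (the file name and column names are hypothetical, not from the dissertation):

```python
# Hedged sketch of one of the multiple-regression tests described above,
# e.g. hypotheses (b)-(c): cooperation regressed on formalization and
# information intensity.  The CSV file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("edi_survey.csv")   # hypothetical key-informant survey data

model = smf.ols("cooperation ~ formalization + information_intensity", data=df)
results = model.fit()
print(results.summary())             # positive, significant coefficients would
                                     # support hypotheses (b) and (c)
```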