900 results for Data Structures, Cryptology and Information Theory


Relevance:

100.00%

Publisher:

Abstract:

The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to the choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain the choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
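The comparison the abstract describes can be sketched numerically. The sketch below is not the paper's mortgage model: it fits a probit by maximum likelihood on synthetic data, then imposes a sign restriction on the slope by discarding draws from the asymptotic normal approximation that violate it, a crude stand-in for the truncated uniform prior. The data-generating process and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic binary-choice data (illustrative, not the mortgage data):
# an intercept plus one explanatory variable with a positive true effect.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_log_lik(beta):
    p = norm.cdf(X @ beta)            # choice probability Phi(x'beta)
    p = np.clip(p, 1e-10, 1 - 1e-10)  # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# 1) Maximum likelihood estimate.
ml = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
beta_ml = ml.x

# 2) Crude sign-restricted "posterior": draw from the asymptotic normal
#    approximation and keep only draws with a positive slope, mimicking
#    a uniform prior truncated to the sign-restricted region.
draws = rng.multivariate_normal(beta_ml, ml.hess_inv, size=5000)
restricted = draws[draws[:, 1] > 0]
beta_restricted = restricted.mean(axis=0)
```

When the unrestricted estimate already satisfies the sign restriction, the two answers are close; the restriction matters most when the data are weakly informative about the coefficient's sign.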

Relevance:

100.00%

Publisher:

Abstract:

To mark the two-year anniversary of The Marmot Review ('Fair Society, Healthy Lives'), on 15 February the UCL Institute of Health Equity published new data on key health inequalities indicators at local authority level in England.

Main findings: Life expectancy has historically been one of the main indicators of health inequalities. The Marmot Indicators from this year's charts show the average life expectancy for each local authority and the level of inequality within each authority area:

- While overall life expectancy at birth in England increased by 0.3 years for both men and women between 2007-9 and 2008-10, inequalities in life expectancy between neighbourhoods increased by 0.1 years for men and showed no change for women.
- Among the 150 upper-tier local authorities in England, life expectancy improved in the majority of cases (133 areas saw improvements for men and 125 saw improvements for women). However, inequalities also increased in the majority of areas (104 for men and 92 for women).
- The largest increase in inequality in life expectancy was in West Berkshire for men (2.0 years) and in Middlesbrough for women (2 years). The largest decreases in inequality were in Kensington and Chelsea for both men and women (1.9 and 1.1 years respectively).

To find out more, please read: the press release, including key figures and main findings; a blog by Michael Marmot about the data and its implications; press coverage of the data in national and local newspapers and websites; and a PowerPoint presentation on the key findings.

Relevance:

100.00%

Publisher:

Abstract:

Can we reconcile the predictions of the altruism model of the family with the evidence on inter vivos transfers in the US? This paper expands the altruism model by introducing effort of the child and by relaxing the assumption of perfect information of the parent about the labor market opportunities of the child. First, I solve and simulate a model of altruism under imperfect information. Second, I use cross-sectional data to test a prediction of the model: are parental transfers especially responsive to the income variations of children who are very attached to the labor market? The results suggest that imperfect information accounts for several patterns of intergenerational transfers in the US.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present view-dependent information-theoretic quality measures for pixel sampling and scene discretization in flatland. The measures are based on a definition of the mutual information of a line and have a purely geometrical basis. Several algorithms exploiting them are presented and compare well with an existing one based on depth differences.
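The measures described build on Shannon's mutual information. Their line-based geometric definition is specific to the paper, but the underlying quantity can be sketched for discrete distributions; the sketch below is a generic illustration of that quantity, not the paper's measure.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0                        # skip zero cells (0 log 0 = 0)
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px * py)[mask])))

independent = np.outer([0.5, 0.5], [0.25, 0.75])  # joint = product of marginals
deterministic = np.eye(2) / 2                     # Y fully determined by X
```

Independent variables carry zero mutual information; a deterministic one-to-one relation carries the full entropy of X (here one bit).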

Relevance:

100.00%

Publisher:

Abstract:

The objective of this master’s thesis was twofold: first, to examine the concept of customer value and its drivers, and second, to identify information use practices. The first part of the study represents explorative research carried out by examining a case company’s customer satisfaction data, which was used to identify sales and technical customer service related value drivers at a detailed attribute level. This was followed by an examination of whether these attributes had been commented on in a positive or a negative light, and of the reasons why the case company had received higher or lower ratings than its competitor. As a result, a classification of different sales and technical customer service related attributes was created. The results indicated that the case company had performed well, but that the results varied at the company’s business segment level. The case company’s staff, service and the benefits of a long-lasting relationship came up in a positive light, whereas attitude, flexibility and reaction time came up in a negative light. The reasons for a higher or lower score in comparison to the competitor varied. The results indicated that a customer’s satisfaction with the company’s performance did not always mean that the company was outperforming the competition.

The second part of the study focused on customer satisfaction information use from the viewpoints of information access, dissemination and reaction. The study was conducted by running an internal survey among the case company’s staff. The results showed that information use practices varied across the company, and that some units or teams had taken a more proactive approach to information use than others.

Relevance:

100.00%

Publisher:

Abstract:

Quantum information theory studies the fundamental limits that the laws of physics impose on data-processing tasks such as compression and the transmission of data over a noisy channel. This thesis presents general techniques for solving several fundamental problems of quantum information theory within a single framework. The central theorem of this thesis establishes the existence of a protocol for transmitting quantum data that the receiver already partially knows, using a single use of a noisy quantum channel. Several central theorems of quantum information theory follow from this theorem as immediate corollaries. The subsequent chapters use this theorem to prove the existence of new protocols for two other types of quantum channels: quantum broadcast channels and quantum channels with side information available at the transmitter. These protocols also deal with the transmission of quantum data partially known to the receiver using a single use of the channel, and they yield as corollaries asymptotic versions with and without auxiliary entanglement. The asymptotic versions with auxiliary entanglement can, in both cases, be regarded as quantum versions of the best known coding theorems for the classical versions of these problems. The final chapter deals with a purely quantum phenomenon called locking: it is possible to encode a classical message in a quantum state such that, by removing a subsystem of size logarithmic in its total size, one can ensure that no measurement has significant correlation with the message. The message is thus "locked" by a key of logarithmic size.
This thesis presents the first locking protocol whose success criterion is that the trace distance between the joint distribution of the message and the measurement outcome and the product of their marginals be sufficiently small.
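The locking criterion compares a joint distribution with the product of its marginals in trace distance. For classical (diagonal) distributions this reduces to total variation distance, which can be sketched directly; the computation below is an illustrative classical analogue, not the thesis's protocol.

```python
import numpy as np

def trace_distance(p, q):
    """Trace (total variation) distance 0.5 * sum |p - q| between
    two probability tables of the same shape."""
    return 0.5 * float(np.abs(np.asarray(p, float) - np.asarray(q, float)).sum())

def correlation_with_message(joint):
    """Locking-style success measure: trace distance between the joint
    distribution p(message, outcome) and the product of its marginals.
    A small value means the measurement outcome is nearly
    uncorrelated with the message."""
    joint = np.asarray(joint, dtype=float)
    product = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    return trace_distance(joint, product)

no_info = np.outer([0.5, 0.5], [0.5, 0.5])  # outcome independent of message
full_info = np.eye(2) / 2                   # outcome reveals the message
```

A successful locking scheme drives this distance close to zero for every measurement once the logarithmic-size key subsystem is removed.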

Relevance:

100.00%

Publisher:

Abstract:

The paper investigates which of Shannon’s measures (entropy, conditional entropy, mutual information) is the right one for the task of quantifying information flow in a programming language. We examine earlier relevant contributions from Denning, McLean and Gray and we propose and motivate a specific quantitative definition of information flow. We prove results relating equivalence relations, interference of program variables, independence of random variables and the flow of confidential information. Finally, we show how, in our setting, Shannon’s Perfect Secrecy theorem provides a sufficient condition to determine whether a program leaks confidential information.
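For a deterministic program with a uniformly distributed secret input, the mutual information between secret and observable output equals the entropy of the output, which gives a concrete way to quantify flow. The sketch below is in the spirit of the paper's definitions; the example programs are hypothetical.

```python
from collections import Counter
from math import log2

def leakage_bits(program, secrets):
    """I(secret; output) in bits, assuming a uniform prior over
    `secrets` and a deterministic `program`.  In that case
    I(S; O) = H(O) = -sum_o p(o) log2 p(o)."""
    n = len(secrets)
    counts = Counter(program(s) for s in secrets)
    return -sum((c / n) * log2(c / n) for c in counts.values())

secrets = list(range(16))       # a 4-bit secret, uniformly distributed
constant = lambda s: 0          # reveals nothing about the secret
low_bits = lambda s: s & 0b11   # reveals the two low bits
identity = lambda s: s          # reveals the whole secret
```

The constant program leaks 0 bits, the low-bits program 2 bits, and the identity program all 4 bits; zero leakage is exactly the Perfect Secrecy condition the paper invokes.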

Relevance:

100.00%

Publisher:

Abstract:

Current scientific applications produce large amounts of data. The processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have considered techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. By knowing these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
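The behavior-prediction step can be sketched with a small autoregressive model fitted to an access series. The paper selects among modeling techniques per series according to its properties, so this fixed-order least-squares fit is only an illustrative stand-in, and all names are assumptions.

```python
import numpy as np

def predict_next(series, order=2):
    """Predict the next value of an access series by least-squares
    autoregression on `order` lagged values plus an intercept."""
    series = np.asarray(series, dtype=float)
    # Rows of `order` consecutive past values ...
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]                      # ... each predicting the next value.
    design = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(np.r_[series[-order:], 1.0] @ coef)

# A linearly growing block-access pattern: the storage layer could use
# the prediction to prefetch the next block before it is requested.
accesses = [2, 4, 6, 8, 10]
```

For this strictly linear pattern the model extrapolates exactly (the next predicted access is 12); real access series would first be classified, as the paper describes, to pick a model that matches their properties.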