67 results for Data Structures, Cryptology and Information Theory
Abstract:
The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
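As a sketch of how such an evaluation might be automated, the code below computes the classic Halstead measures (volume, difficulty, effort) from operator and operand counts for a query. The tokenization and the sample query are hypothetical, and the abstract does not specify which three Halstead metrics were used.

```python
import math

def halstead_metrics(operators, operands):
    """Basic Halstead metrics from token lists.

    operators/operands: lists of tokens in order of occurrence.
    """
    n1, n2 = len(set(operators)), len(set(operands))  # distinct counts
    N1, N2 = len(operators), len(operands)            # total counts
    n = n1 + n2                                       # vocabulary
    N = N1 + N2                                       # program length
    volume = N * math.log2(n)                         # V = N log2(n)
    difficulty = (n1 / 2) * (N2 / n2)                 # D = (n1/2)(N2/n2)
    effort = difficulty * volume                      # E = D * V
    return {"volume": volume, "difficulty": difficulty, "effort": effort}

# Hypothetical tokenization of: SELECT name FROM emp WHERE dept = 10
m = halstead_metrics(
    operators=["SELECT", "FROM", "WHERE", "="],
    operands=["name", "emp", "dept", "10"],
)
```

A weighted average of such per-query complexities over a representative query sample would then rank the candidate schemas.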
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
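As a minimal sketch of the sampling-theory side of this comparison, the code below fits a probit model by maximum likelihood on simulated data; the data-generating coefficients are illustrative, not the mortgage-choice data from the paper. A sign-restricted Bayesian analysis would instead truncate the prior on each coefficient to its known sign.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])   # intercept + one regressor
beta_true = np.array([0.5, -1.0])      # illustrative coefficients
# Probit data-generating process: y = 1 if X @ beta + e > 0, e ~ N(0, 1)
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    """Negative probit log-likelihood: P(y=1 | x) = Phi(x'beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)  # guard log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
beta_hat = res.x                       # ML estimates
```

Choice probabilities and elasticities then follow from `norm.cdf(X @ beta_hat)` and its derivatives.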
Abstract:
In the context of a hostile funding environment, universities are increasingly asked to justify their output in narrowly defined economic terms, and this can be difficult in Humanities or Arts faculties where productivity is rarely reducible to a simple financial indicator. This can lead to a number of immediate consequences that I have no need to rehearse here, but can also result in some interesting tensions within the academic community itself. First is that which has become known as the ‘Science Wars’: the increasingly acrimonious exchanges between scientists and scientific academics and cultural critics or theorists about who has the right to describe the world. Much has already been said, and much remains to be said, about this issue, but it is not my intention to discuss it here. Rather, I will look at a second area of contestation: the incorporation of scientific theory into literary or cultural criticism. Much of this work comes from a genuine commitment to interdisciplinarity, and an appreciation of the insights that a fresh perspective can bring to a familiar object. However, some can be seen as cynical attempts to lend literary studies the sort of empirical legitimacy enjoyed by the sciences. In particular, I want to look at a number of critics who have applied information theory to the literary work. In this paper, I will examine several instances of this sort of criticism, and then, through an analysis of a novel by American author Richard Powers, Three Farmers on Their Way to a Dance, show how this sort of criticism merely reduces, rather than enriches, the meaningful analysis of a complex literary text.
Abstract:
In this paper, we develop a theory for diffusion and flow of pure sub-critical adsorbates in microporous activated carbon over a wide range of pressure, from very low pressure up to high pressures where capillary condensation occurs. This theory does not require any fitting parameter; the only information needed for the prediction is the complete pore size distribution of the activated carbon. Various interesting behaviors of permeability versus loading are observed, such as a maximum in permeability at high loading (occurring at about 0.8-0.9 relative pressure). The theory is tested against diffusion and flow of benzene through a commercial activated carbon, and the agreement is found to be very good, particularly given that the model has no fitting parameters. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
This study explored the impact of downsizing on levels of uncertainty, coworker and management trust, and communicative effectiveness in a health care organization downsizing during a 2-year period from 660 staff to 350 staff members. Self-report data were obtained from employees who were staying (survivors), from employees who were being laid off (victims), and from employees with and without managerial responsibilities. Results indicated that downsizing had a similar impact on the amount of trust that survivors and victims had for management. However, victims reported feeling lower levels of trust toward their colleagues compared with survivors. Contrary to expectations, survivors and victims reported similar perceptions of job and organizational uncertainty and similar levels of information received about changes. Employees with no management responsibilities and middle managers both reported lower scores than did senior managers on all aspects of information received. Implications for practice and the management of the communication process are discussed.
Abstract:
The aim of this report is to describe the use of WinBUGS for two datasets that arise from typical population pharmacokinetic studies. The first dataset relates to gentamicin concentration-time data that arose as part of routine clinical care of 55 neonates. The second dataset incorporated data from 96 patients receiving enoxaparin. Both datasets were originally analyzed by using NONMEM. In the first instance, although NONMEM provided reasonable estimates of the fixed-effects parameters, it was unable to provide satisfactory estimates of the between-subject variance. In the second instance, the use of NONMEM resulted in the development of a successful model, albeit with limited available information on the between-subject variability of the pharmacokinetic parameters. WinBUGS was used to develop a model for both of these datasets. Model comparison for the enoxaparin dataset was performed by using the posterior distribution of the log-likelihood and a posterior predictive check. The use of WinBUGS supported the same structural models tried in NONMEM. For the gentamicin dataset a one-compartment model with intravenous infusion was developed, and the population parameters, including the full between-subject variance-covariance matrix, were available. Analysis of the enoxaparin dataset supported a two-compartment model as superior to the one-compartment model, based on the posterior predictive check. Again, the full between-subject variance-covariance matrix parameters were available. Fully Bayesian approaches using MCMC methods, via WinBUGS, can offer added value for analysis of population pharmacokinetic data.
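For readers unfamiliar with the structural models mentioned, the sketch below evaluates the analytic concentration-time curve for a one-compartment model with zero-order intravenous infusion, the structure used for the gentamicin dataset; the parameter values in the usage line are hypothetical, not estimates from the paper.

```python
import math

def conc_one_compartment_infusion(t, rate, duration, CL, V):
    """Concentration at time t for a one-compartment model with
    zero-order IV infusion.

    rate: infusion rate (mg/h), duration: infusion length (h),
    CL: clearance (L/h), V: volume of distribution (L).
    """
    k = CL / V                                  # elimination rate constant
    if t <= duration:                           # during the infusion
        return (rate / CL) * (1 - math.exp(-k * t))
    # after the infusion: first-order decay from the end-of-infusion level
    c_end = (rate / CL) * (1 - math.exp(-k * duration))
    return c_end * math.exp(-k * (t - duration))

# Hypothetical neonate-sized parameters: 10 mg/h over 1 h, CL=5 L/h, V=20 L
c_peak = conc_one_compartment_infusion(1.0, 10, 1, 5, 20)
```

A population analysis would place between-subject random effects on CL and V; WinBUGS estimates their full variance-covariance matrix via MCMC.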
Abstract:
Networked information and communication technologies are rapidly advancing the capacities of governments to target and separately manage specific sub-populations, groups and individuals. Targeting uses data profiling to calculate the differential probabilities of outcomes associated with various personal characteristics. This knowledge is used to classify and sort people for differentiated levels of treatment. Targeting is often used to direct government resources efficiently and effectively to the most disadvantaged. Although it has many benefits, targeting raises several policy and ethical issues. This paper discusses these issues and the policy responses governments may take to maximise the benefits of targeting while ameliorating its negative aspects.
Abstract:
The Systems Theory Framework was developed to produce a metatheoretical framework through which the contribution of all theories to our understanding of career behaviour could be recognised. In addition it emphasises the individual as the site for the integration of theory and practice. Its utility has become more broadly acknowledged through its application to a range of cultural groups and settings, qualitative assessment processes, career counselling, and multicultural career counselling. For these reasons, the STF is a very valuable addition to the field of career theory. In viewing the field of career theory as a system, open to changes and developments from within itself and through constantly interrelating with other systems, the STF and this book are adding to the pattern of knowledge and relationships within the career field. The contents of this book will be integrated within the field as representative of a shift in understanding existing relationships within and between theories. In the same way, each reader will integrate the contents of the book within their existing views about the current state of career theory and within their current theory-practice relationship. This book should be required reading for anyone involved in career theory. It is also highly suitable as a text for an advanced career counselling or theory course.
Abstract:
In order to examine whether different populations show the same pattern of onset in the Southern Hemisphere, we examined the age-at-first-admission distribution for schizophrenia based on mental health registers from Australia and Brazil. Data on age-at-first-admission for individuals with schizophrenia were extracted from two name-linked registers: (1) the Queensland Mental Health Statistics System, Australia (N=7651, F=3293, M=4358), and (2) a psychiatric hospital register in Pelotas, Brazil (N=4428, F=2220, M=2208). Age distributions were derived for males and females for both datasets. The general population structure for both countries was also obtained. There were significantly more males in the Queensland dataset (χ² = 56.9, df = 3, p < 0.0001). Both dataset distributions were skewed to the right. Onset rose steeply after puberty to reach a modal age group of 20-29 for men and women, with a more gradual tail toward the older age groups. In Queensland, 68% of women with schizophrenia had their first admissions after age 30, while the proportion from Brazil was 58%. Compared to the Australian dataset, the Brazilian dataset had a slightly greater proportion of first admissions under the age of 30 and a slightly smaller proportion over the age of 60 years. This reflects the underlying age distributions of the two populations. This study confirms the wide age range and gender differences in age-at-first-admission distributions for schizophrenia and identified a significant difference in the gender ratio between the two datasets. Given widely differing health services, cultural practices, ethnic variability, and the different underlying population distributions, the age-at-first-admission distributions in Queensland and Brazil showed more similarities than differences. Acknowledgments: The Stanley Foundation supported this project.
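The gender-ratio comparison can be illustrated with a standard chi-square test of independence on the male/female counts given in the abstract. This is only a sketch: the reported df of 3 suggests the original test involved more than two groups, so this 2x2 version is an approximation, not a reproduction of the paper's analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Male/female first-admission counts from the two registers
counts = np.array([
    [4358, 3293],   # Queensland: M, F
    [2208, 2220],   # Pelotas:    M, F
])
chi2, p, dof, expected = chi2_contingency(counts)
# A very small p-value indicates the gender ratio differs between registers
```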
Abstract:
OBJECTIVE: To describe variation in all cause and selected cause-specific mortality rates across Australia. METHODS: Mortality and population data for 1997 were obtained from the Australian Bureau of Statistics. All cause and selected cause-specific mortality rates were calculated and directly standardised to the 1997 Australian population in 5-year age groups. Selected major causes of death included cancer, coronary artery disease, cerebrovascular disease, diabetes, accidents and suicide. Rates are reported by statistical division, and State and Territory. RESULTS: All cause age-standardised mortality was 6.98 per 1000 in 1997 and this varied 2-fold from a low in the statistical division of Pilbara, Western Australia (5.78, 95% confidence interval 5.06-6.56), to a high in Northern Territory-excluding Darwin (11.30, 10.67-11.98). Similar mortality variation (all p<0.0001) exists for cancer (1.01-2.23 per 1000) and coronary artery disease (0.99-2.23 per 1000), the two biggest killers. Larger variation (all p<0.0001) exists for cerebrovascular disease (0.7-11.8 per 10,000), diabetes (0.7-6.9 per 10,000), accidents (1.7-7.2 per 10,000) and suicide (0.6-3.8 per 10,000). Less marked variation was observed when analysed by State and Territory, but the Northern Territory consistently has the highest age-standardised mortality rates. CONCLUSIONS: Analysed by statistical division, substantial mortality gradients exist across Australia, suggesting an inequitable distribution of the determinants of health. Further research is required to better understand this heterogeneity.
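Direct standardization, as used in this study, weights each area's age-specific rates by a fixed standard population's age distribution. A minimal sketch with hypothetical counts (not the 1997 Australian data):

```python
def direct_standardized_rate(deaths, person_years, std_pop):
    """Directly age-standardized rate: age-specific rates weighted by
    the standard population's share in each age group."""
    assert len(deaths) == len(person_years) == len(std_pop)
    total_std = sum(std_pop)
    rate = 0.0
    for d, py, w in zip(deaths, person_years, std_pop):
        rate += (d / py) * (w / total_std)  # rate x standard weight
    return rate

# Hypothetical three age groups (young, middle, old)
r = direct_standardized_rate(
    deaths=[10, 50, 200],
    person_years=[100_000, 80_000, 40_000],
    std_pop=[30_000, 50_000, 20_000],
)
```

Because every area is weighted by the same standard age structure, the resulting rates are comparable across statistical divisions regardless of local age distributions.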