129 results for Two Approaches

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Few studies have evaluated the reliability of lifetime sun exposure estimated from inquiring about the number of hours people spent outdoors in a given period on a typical weekday or weekend day (the time-based approach). Some investigations have suggested that women have a particularly difficult task in estimating time outdoors in adulthood due to their family and occupational roles. We hypothesized that people might gain additional memory cues and estimate lifetime hours spent outdoors more reliably if asked about time spent outdoors according to specific activities (an activity-based approach). Using self-administered, mailed questionnaires, test-retest responses to time-based and to activity-based approaches were evaluated in 124 volunteer radiologic technologist participants from the United States: 64 females and 60 males 48 to 80 years of age. Intraclass correlation coefficients (ICC) were used to evaluate the test-retest reliability of average number of hours spent outdoors in the summer estimated for each approach. We tested the differences between the two ICCs, corresponding to each approach, using a t test with the variance of the difference estimated by the jackknife method. During childhood and adolescence, the two approaches gave similar ICCs for average numbers of hours spent outdoors in the summer. By contrast, compared with the time-based approach, the activity-based approach showed significantly higher ICCs during adult ages (0.69 versus 0.43, P = 0.003) and over the lifetime (0.69 versus 0.52, P = 0.05); the higher ICCs for the activity-based questionnaire were primarily derived from the results for females. Research is needed to further improve the activity-based questionnaire approach for long-term sun exposure assessment. (Cancer Epidemiol Biomarkers Prev 2009;18(2):464–71)
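The reliability comparison described in this abstract can be sketched in code. The snippet below is an illustrative reconstruction, not the authors' actual analysis: it assumes an ICC(3,1) variant (two-way, consistency, single measurement) for test-retest data with two administrations per subject, and a leave-one-subject-out jackknife for the variance of the difference between the two ICCs.

```python
import numpy as np

def icc_consistency(scores):
    """ICC(3,1): two-way model, consistency, single measurement.
    scores: (n_subjects, k_administrations) array of test-retest values."""
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    admin_means = scores.mean(axis=0)
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_admin = n * ((admin_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_admin
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

def jackknife_icc_diff(time_based, activity_based):
    """Difference of two ICCs and its t statistic, with the variance of the
    difference estimated by a leave-one-subject-out jackknife."""
    n = time_based.shape[0]
    full = icc_consistency(activity_based) - icc_consistency(time_based)
    pseudo = []
    for i in range(n):
        keep = np.arange(n) != i
        d_i = (icc_consistency(activity_based[keep])
               - icc_consistency(time_based[keep]))
        pseudo.append(n * full - (n - 1) * d_i)  # jackknife pseudo-values
    pseudo = np.array(pseudo)
    var = pseudo.var(ddof=1) / n
    return full, full / np.sqrt(var)
```

The t statistic would then be compared against a t distribution with n - 1 degrees of freedom, as in the abstract's comparison of the two approaches.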

Relevance: 100.00%

Abstract:

Objective: Because studies of crowding in long-term care settings are lacking, the authors sought to: (1) generate initial estimates of crowding in nursing homes and assisted living facilities; and (2) evaluate two operational approaches to its measurement.

Background: Reactions to density and proximity are complex. Greater density intensifies people's reaction to a situation in the direction (positive or negative) in which they would react if the situation occurred under less dense conditions. People with dementia are especially reactive to the environment.

Methods: Using a cross-sectional correlational design in nursing homes and assisted living facilities involving 185 participants, multiple observations (N = 6,455) of crowding and other environmental variables were made. Crowding, location, and sound were measured three times per observation; ambiance was measured once. Data analyses consisted of descriptive statistics, t-tests, and one-way analysis of variance.

Results: Crowding estimates were higher for nursing homes and in dining and activity rooms. Crowding also varied across settings and locations by time of day. Overall, the interaction of location and time affected crowding significantly (N = 5,559, df [47, 511], F = 105.69, p < .0001); effects were greater within location-by-hour than between location-by-hour, but the effect explained slightly less variance in Long-Term Care Crowding Index (LTC-CI) estimates (47.41%) than location alone. Crowding had small, direct, and highly significant correlations with sound and with the engaging subscale for ambiance; a similar, though inverse, correlation was seen with the soothing subscale for ambiance.

Conclusions: Crowding fluctuates in step with routine activities such as meals in long-term care settings. Furthermore, a relationship between crowding and other physical characteristics of the environment was found. The LTC-CI is likely to be more sensitive than simple people counts when seeking to evaluate the effects of crowding on the behavior of elders, particularly those with dementia, in long-term care settings.

Relevance: 70.00%

Abstract:

This thesis investigates the phenomenon of self-harm as a form of political protest using two different, but complementary, methods of inquiry: a theoretical research project and a novel. Through these two approaches to the same research problem, I examine how we can re-position the body that self-harms in political protest from weapon to voice, and in doing so find a path towards ethical and equitable dialogue between marginalised and mainstream communities. The theoretical, or academic, portion of the thesis examines self-harm as protest, positing these acts as a form of tactical self-harm and acknowledging its emergence as a voice for the otherwise silenced in the public sphere. Through the use of phenomenology and feminist theory, I examine the body as a site for political agency, the circumstances which surround the use of the body for protest, and the reaction to tactical self-harm by the individual and the state. Using Bakhtin’s concepts of dialogism and the dialogic space, I propose that by ‘hearing’ the body engaged in tactical self-harm we come closer to entering into an ethical dialogue with the otherwise silenced in our communities (locally, nationally and globally). The novel, Imperfect Offerings, explores these ideas in a fictional world, and allows me to put faces, names and lives to those who are compelled to harm their bodies to be heard. Also using Bakhtin’s framework, I encourage a dialogue between the critical and creative parts of the thesis, challenging the traditional paradigm of creative PhD projects as creative work and exegesis.

Relevance: 70.00%

Abstract:

Objective: The study aimed to examine the difference in response rates between opt-out and opt-in participant recruitment in a population-based study of heavy-vehicle drivers involved in a police-attended crash. Methods: Two approaches to subject recruitment were implemented in two different states over a 14-week period, and the response rates for the two approaches (opt-out versus opt-in recruitment) were compared. Results: Based on the eligible and contactable drivers, the response rates were 54% for the opt-out group and 16% for the opt-in group. Conclusions and Implications: The opt-in recruitment strategy (which was a consequence of one jurisdiction’s interpretation of the national Privacy Act at the time) resulted in an insufficient and potentially biased sample for the purposes of conducting research into risk factors for heavy-vehicle crashes. Australia’s national Privacy Act 1988 has had a long history of inconsistent practices by state and territory government departments and ethical review committees. These inconsistencies can have profound effects on the validity of research, as shown by the significantly different response rates reported in this study. It is hoped that a more unified interpretation of the Privacy Act across the states and territories, as proposed under the soon-to-be released Australian Privacy Principles, will reduce the recruitment challenges outlined in this study.
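The headline comparison (54% versus 16%) can be checked with a standard two-proportion z-test. The denominators below are hypothetical, since the abstract does not report the numbers of eligible and contactable drivers in each group; only the test itself is standard.

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for equality of two proportions, pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p
```

With hypothetical denominators of 200 drivers per arm, a 54% versus 16% split yields a z statistic near 8, so the difference reported in the abstract would be significant at any conventional level.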

Relevance: 60.00%

Abstract:

In this article I outline and demonstrate a synthesis of the methods developed by Lemke (1998) and Martin (2000) for analyzing evaluations in English. I demonstrate the synthesis using examples from a 1.3-million-word technology policy corpus drawn from institutions at the local, state, national, and supranational levels. Lemke's (1998) critical model is organized around the broad 'evaluative dimensions' that are deployed to evaluate propositions and proposals in English. Martin's (2000) model is organized, with a more overtly systemic-functional orientation, around the concept of 'encoded feeling'. In applying both of these models at different times, whilst recognizing their individual usefulness and complementarity, I found specific limitations that led me to work towards a synthesis of the two approaches. I also argue for the need to consider genre, media, and institutional aspects more explicitly when claiming intertextual and heteroglossic relations as the basis for inferred evaluations. A basic assertion made in this article is that the perceived Desirability of a process, person, circumstance, or thing is identical to its 'value'. But the Desirability of anything is a socially, and thus historically, conditioned attribution that requires significant amounts of institutional inculcation of other 'types' of value: appropriateness, importance, beauty, power, and so on. I therefore propose a method informed by critical discourse analysis (CDA) that sees evaluation as happening on at least four interdependent levels of abstraction.

Relevance: 60.00%

Abstract:

This document provides a review of international and national practices in investment decision support tools in road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in Road Asset Management. The complexity of the applications shows significant differences in international practices, and there is continuing discussion amongst practitioners and researchers regarding which is more appropriate for supporting decision-making. It is suggested that the two approaches should be regarded as complementary rather than competitive. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues in its application deserve attention. First, there is a need to reach a degree of commonality in the treatment of social and environmental externalities, which may be achieved by aggregating best practices. At different decision-making levels, the detail with which the externalities are considered should differ. It is intended to develop a generic framework to coordinate the range of existing practices. The standard framework will also help reduce double counting, which appears in some current practices.
Caution should also be applied to the methods of determining the value of social and environmental externalities. A number of methods, such as market price, resource costs and Willingness to Pay, were found in the review. The use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practice, owing to the lack of information and credible models. It may be appropriate to consider these externalities in qualitative form within a Multiple Criteria Analysis. Consensus has been reached on considering noise and air pollution in international practice; Australian practice, however, has generally omitted these externalities. Equity is an important consideration in Road Asset Management, whether between regions or between social groups defined by income, age, gender, disability, and so on. In current practice there is no well-developed quantitative measure for equity issues, and more research is needed on this topic. Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for the choice of modelling methods and the treatment of the various externalities. The result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practice, some favour methods able to prioritise alternatives, such as Goal Programming, the Goal Achievement Matrix and the Analytic Hierarchy Process; others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analyses, yet the processes of assessing weights and scores have been criticised as highly arbitrary and subjective. It is essential that the process be as transparent as possible.
Obtaining weights and scores by consulting local communities is common practice, but is likely to result in bias towards local interests. An interactive approach has the advantage of helping decision-makers elaborate their preferences; however, the computational burden may cause decision-makers to lose interest during the solution process of a large-scale problem, such as a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities. Distorted valuations can occur where variables measured in physical units are converted to scales. For example, if decibels of noise are converted to a scale of -4 to +4 by a linear transformation, the difference between 3 and 4 may represent a far greater increase in discomfort than the increase from 0 to 1. It is therefore suggested that different weights be assigned to individual scores. Owing to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analyses; the situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have been given scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
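The scale-conversion issue flagged above is easy to reproduce in code. The sketch below shows the kind of linear physical-unit-to-score mapping and additive weighted scoring the report describes; the criteria values and weights used in the check are illustrative assumptions.

```python
def linear_scale(x, lo, hi, smin=-4.0, smax=4.0):
    """Linear map from a physical measurement (e.g. decibels) to a bounded
    score. Note: equal physical increments get equal score increments,
    which is exactly the distortion the review warns about."""
    return smin + (x - lo) * (smax - smin) / (hi - lo)

def weighted_score(scores, weights):
    """Simple additive multi-criteria score; weights are assumed to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))
```

Assigning a different weight to each score band, as the report suggests, amounts to replacing the single weight per criterion with a function of the score itself.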

Relevance: 60.00%

Abstract:

This paper compares the performance of two different optimisation techniques for solving inverse problems; the first uses the Hierarchical Asynchronous Parallel Evolutionary Algorithms software (HAPEA) and the second is implemented with a game strategy named Nash-EA. The HAPEA software is based on a hierarchical topology and asynchronous parallel computation. The Nash-EA methodology is introduced as a distributed virtual game and consists of splitting the wing design variables, the aerofoil sections, among players, each optimising its own strategy. The HAPEA and Nash-EA methodologies are applied to a single-objective aerodynamic reconstruction of the ONERA M6 wing. Numerical results from the two approaches are compared in terms of the quality of the model and computational expense, and demonstrate the superiority of the distributed Nash-EA methodology in a parallel environment for a similar design quality.
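The Nash-EA idea of splitting the design variables among players, each optimising only its own block while the others stay fixed, can be sketched as an alternating best-response loop. This is a toy reconstruction on a generic objective, not the HAPEA or Nash-EA implementation; the random local search, step size and iteration counts are all assumptions.

```python
import random

def nash_optimise(f, x, groups, iters=50, inner=20, step=0.1, seed=0):
    """Alternating best-response search: each 'player' owns one group of
    variable indices and perturbs only those, accepting improvements,
    while the other players' variables stay fixed."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(iters):
        for group in groups:            # one player's turn per group
            for _ in range(inner):      # crude local search for this player
                cand = x[:]
                for i in group:
                    cand[i] += rng.uniform(-step, step)
                if f(cand) < f(x):      # accept only improving moves
                    x = cand
    return x
```

When no player can improve unilaterally, the loop has reached (an approximation of) a Nash equilibrium of the design game.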

Relevance: 60.00%

Abstract:

Association rule mining is a technique widely used when querying databases, especially transactional ones, to obtain useful associations or correlations among sets of items. Much work has been done focusing on efficiency, effectiveness and redundancy. There has also been a focus on the quality of rules from single-level datasets, with many interestingness measures proposed. However, with multi-level datasets now common, there is a lack of interestingness measures developed for multi-level and cross-level rules. Single-level measures do not take into account the hierarchy found in a multi-level dataset. This leaves the Support-Confidence approach, which does not consider the hierarchy and has other drawbacks, as one of the few measures available. In this paper we propose two approaches to measuring multi-level association rules to help evaluate their interestingness. These measures of diversity and peculiarity can be used to help identify those rules from multi-level datasets that are potentially useful.
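The abstract does not define its diversity and peculiarity measures, so the sketch below is only a hypothetical illustration in the same spirit: it scores a rule by the average hierarchical distance between its items, with hierarchy positions encoded as dash-separated paths (an assumed encoding, not the paper's).

```python
def hier_distance(a, b):
    """Distance between items encoded as hierarchy paths, e.g. '1-2-3':
    the deeper path length minus the length of the common prefix."""
    pa, pb = a.split('-'), b.split('-')
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return max(len(pa), len(pb)) - common

def rule_diversity(antecedent, consequent):
    """Average pairwise hierarchical distance across a rule's items; a
    hypothetical diversity-style measure, not the paper's definition."""
    items = antecedent + consequent
    pairs = [(i, j) for i in range(len(items)) for j in range(i + 1, len(items))]
    if not pairs:
        return 0.0
    return sum(hier_distance(items[i], items[j]) for i, j in pairs) / len(pairs)
```

Under this sketch, a rule whose items come from distant branches of the concept hierarchy scores higher than one whose items share a parent category.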

Relevance: 60.00%

Abstract:

The increasing diversity of the Internet has created a vast number of multilingual resources on the Web. A huge number of these documents are written in languages other than English. Consequently, the demand for searching in non-English languages is growing exponentially, and it is desirable that a search engine can search for information over collections of documents in other languages. This research investigates techniques for developing high-quality Chinese information retrieval systems. A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no space or boundary between words. This feature makes Chinese information retrieval more difficult, since a retrieved document containing the query term as a sequence of Chinese characters may not be truly relevant to the query: that sequence of characters may not form a valid Chinese word in that document. On the other hand, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose two approaches to deal with these problems. In the first approach, we propose a hybrid Chinese information retrieval model that incorporates word-based techniques into the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on relevancy to the query, calculated by combining character-based and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant if only Chinese segmentation is incorporated into the traditional character-based approach.
In the second approach, we propose a novel query expansion method which applies text mining techniques to find the most relevant words with which to extend the query. Unlike most existing query expansion methods, which generally select highly frequent indexing terms from the retrieved documents to expand the query, our approach utilizes text mining techniques to find patterns in the retrieved documents that correlate highly with the query term, and then uses the relevant words in those patterns to expand the original query. This research project develops and implements a Chinese information retrieval system for evaluating the proposed approaches. There are two stages in the experiments. The first stage is to investigate whether high-accuracy segmentation can improve Chinese information retrieval. In the second stage, the text-mining-based query expansion approach is implemented, and a further experiment compares its performance with that of the standard Rocchio approach. The NTCIR5 Chinese collections are used in the experiments. The experimental results show that by incorporating the text-mining-based query expansion into the hybrid model, significant improvement is achieved in both precision and recall.
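The standard Rocchio baseline mentioned above has a well-known closed form: the query vector moves toward the centroid of the relevant documents and away from the centroid of the non-relevant ones. A minimal sketch follows, with the usual alpha/beta/gamma weights chosen as assumptions.

```python
from collections import Counter

def rocchio_expand(query_vec, relevant_docs, nonrelevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio re-weighting over term->weight dicts: scale the original
    query by alpha, add beta times the relevant-document centroid, and
    subtract gamma times the non-relevant centroid."""
    new = Counter()
    for term, w in query_vec.items():
        new[term] += alpha * w
    for doc in relevant_docs:
        for term, w in doc.items():
            new[term] += beta * w / len(relevant_docs)
    for doc in nonrelevant_docs:
        for term, w in doc.items():
            new[term] -= gamma * w / len(nonrelevant_docs)
    # keep only positively weighted terms in the expanded query
    return {t: w for t, w in new.items() if w > 0}
```

The text-mining approach described in the abstract differs in how candidate terms are chosen (patterns correlated with the query term rather than raw frequency), but the final expanded query can be combined with the original in the same vector form.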

Relevance: 60.00%

Abstract:

Background: Reducing rates of healthcare acquired infection has been identified by the Australian Commission on Safety and Quality in Health Care as a national priority. One of the goals is the prevention of central venous catheter-related bloodstream infection (CR-BSI). At least 3,500 cases of CR-BSI occur annually in Australian hospitals, resulting in unnecessary deaths and costs to the healthcare system between $25.7 and $95.3 million. Two approaches to preventing these infections have been proposed: use of antimicrobial catheters (A-CVCs); or a catheter care and management ‘bundle’. Given finite healthcare budgets, decisions about the optimal infection control policy require consideration of the effectiveness and value for money of each approach. Objectives: The aim of this research is to use a rational economic framework to inform efficient infection control policy relating to the prevention of CR-BSI in the intensive care unit. It addresses three questions relating to decision-making in this area: 1. Is additional investment in activities aimed at preventing CR-BSI an efficient use of healthcare resources? 2. What is the optimal infection control strategy from amongst the two major approaches that have been proposed to prevent CR-BSI? 3. What uncertainty is there in this decision and can a research agenda to improve decision-making in this area be identified? Methods: A decision analytic model-based economic evaluation was undertaken to identify an efficient approach to preventing CR-BSI in Queensland Health intensive care units. A Markov model was developed in conjunction with a panel of clinical experts which described the epidemiology and prognosis of CR-BSI. The model was parameterised using data systematically identified from the published literature and extracted from routine databases. The quality of data used in the model and its validity to clinical experts and sensitivity to modelling assumptions was assessed. 
Two separate economic evaluations were conducted. The first evaluation compared all commercially available A-CVCs alongside uncoated catheters to identify which was cost-effective for routine use. The uncertainty in this decision was estimated along with the value of collecting further information to inform the decision. The second evaluation compared the use of A-CVCs to a catheter care bundle. We were unable to estimate the cost of the bundle because it is unclear what the full resource requirements are for its implementation, and what the value of these would be in an Australian context. As such, we undertook a threshold analysis to identify the cost and effectiveness thresholds at which a hypothetical bundle would dominate the use of A-CVCs under various clinical scenarios. Results: In the first evaluation of A-CVCs, the findings from the baseline analysis, in which uncertainty is not considered, show that the use of any of the four A-CVCs will result in health gains accompanied by cost-savings. The MR catheters dominate the baseline analysis, generating 1.64 QALYs and cost-savings of $130,289 per 1,000 catheters. With uncertainty, and based on current information, the MR catheters remain the optimal decision and return the highest average net monetary benefits ($948 per catheter) relative to all other catheter types. This conclusion was robust to all scenarios tested; however, the probability of error in this conclusion is high, 62% in the baseline scenario. Using a value of $40,000 per QALY, the expected value of perfect information associated with this decision is $7.3 million. An analysis of the expected value of perfect information for individual parameters suggests that it may be worthwhile for future research to focus on providing better estimates of the mortality attributable to CR-BSI and the effectiveness of both SPC and CH/SSD (int/ext) catheters.
In the second evaluation of the catheter care bundle relative to A-CVCs, the results which do not consider uncertainty indicate that a bundle must achieve a relative risk of CR-BSI of at least 0.45 to be cost-effective relative to MR catheters. If the bundle can reduce rates of infection from 2.5% to effectively zero, it is cost-effective relative to MR catheters if national implementation costs are less than $2.6 million ($56,610 per ICU). If the bundle can achieve a relative risk of 0.34 (comparable to that reported in the literature) it is cost-effective, relative to MR catheters, if costs over an 18 month period are below $613,795 nationally ($13,343 per ICU). Once uncertainty in the decision is considered, the cost threshold for the bundle increases to $2.2 million. Therefore, if each of the 46 Level III ICUs could implement an 18 month catheter care bundle for less than $47,826 each, this approach would be cost effective relative to A-CVCs. However, the uncertainty is substantial and the probability of error in concluding that the bundle is the cost-effective approach at a cost of $2.2 million is 89%. Conclusions: This work highlights that infection control to prevent CR-BSI is an efficient use of healthcare resources in the Australian context. If there is no further investment in infection control, an opportunity cost is incurred, which is the potential for a more efficient healthcare system. Minocycline/rifampicin catheters are the optimal choice of antimicrobial catheter for routine use in Australian Level III ICUs, however, if a catheter care bundle implemented in Australia was as effective as those used in the large studies in the United States it would be preferred over the catheters if it was able to be implemented for less than $47,826 per Level III ICU. Uncertainty is very high in this decision and arises from multiple sources. 
There are likely greater costs to this uncertainty for A-CVCs, which may carry hidden costs, than there are for a catheter care bundle, which is more likely to provide indirect benefits to clinical practice and patient safety. Research into the mortality attributable to CR-BSI, the effectiveness of SPC and CH/SSD (int/ext) catheters and the cost and effectiveness of a catheter care bundle in Australia should be prioritised to reduce uncertainty in this decision. This thesis provides the economic evidence to inform one area of infection control, but there are many other infection control decisions for which information about the cost-effectiveness of competing interventions does not exist. This work highlights some of the challenges and benefits to generating and using economic evidence for infection control decision-making and provides support for commissioning more research into the cost-effectiveness of infection control.
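The decision-analytic structure described here can be sketched as a Markov cohort model. Everything numeric in the example below is an illustrative assumption rather than the thesis's parameters, except the $40,000-per-QALY threshold and the 1.64 QALYs / $130,289 savings per 1,000 catheters quoted in the abstract, which are reused in the usage check.

```python
import numpy as np

def cohort_run(transition, costs, qalys, cycles=30):
    """Run a Markov cohort model: the whole cohort starts in state 0 and
    expected cost and QALYs are accumulated each cycle (no discounting,
    for brevity). `transition` is a row-stochastic matrix."""
    dist = np.zeros(len(costs))
    dist[0] = 1.0
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        total_cost += dist @ costs
        total_qaly += dist @ qalys
        dist = dist @ transition
    return total_cost, total_qaly

def net_monetary_benefit(d_qaly, d_cost, wtp=40_000):
    """Incremental net monetary benefit at a willingness-to-pay per QALY."""
    return wtp * d_qaly - d_cost
```

For instance, at $40,000 per QALY the abstract's 1.64 QALYs gained and $130,289 saved per 1,000 catheters give an incremental NMB of $195,889 per 1,000 catheters, which is why the MR catheters dominate.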

Relevance: 60.00%

Abstract:

Background: Patient education aimed at reducing stroke has had mixed effects, raising questions about how to achieve optimal benefit. Because past evaluations have typically lacked an appropriate theoretical base, the design of past research may have missed important effects.

Method: This study used a social cognitive framework to identify variables that might change in response to education. A mixed design was used to evaluate two approaches to an intervention, both of which included education. Fifty seniors completed a measure of stroke knowledge and beliefs twice: before and after an intervention that was either standard (educational brochure plus activities that were not about stroke) or enhanced (educational brochure plus activities designed to enhance beliefs about stroke). Outcome measures were health beliefs, intention to exercise to reduce stroke, and stroke knowledge.

Results: Selected beliefs changed significantly over time but not differentially across conditions. Beliefs that changed were (a) perceived susceptibility to stroke and (b) perceived benefit of exercise to reduce risk. Benefit beliefs, in particular, were strongly and positively associated with intention to exercise.

Conclusion: Findings suggest that basic approaches to patient education may influence health beliefs. More effective stroke prevention programs may result from continued consideration of the role of health beliefs in such programs.

Relevance: 60.00%

Abstract:

Asset management in local government is an emerging discipline and over the past decade has become a crucial aspect of a more efficient and effective organisation. One crucial feature of public asset management is performance measurement of public real estate. Such measurement looks critically at an important component of public wealth and seeks to apply a standard of economic efficiency and effective organisational management, especially under global financial crisis conditions. This paper aims to identify the effects of the global economic crisis and proposes alternative solutions for local governments to soften the impact of the crisis on their organisations. This study found that the most suitable way for local government in Indonesia to address the global economic crisis is the application of performance measurement in its asset management. Thus, it is important to develop a performance measurement system for the local government asset management process. This study draws its suggestions from published documents and the literature. The paper also discusses the elements of public real estate performance measurement. The measurement of performance has become an essential component of the strategic thinking of asset owners and managers. Without a formal measurement system for performance, it is difficult to plan, control and improve a local government real estate management system. A close look at best practices in the public sector reveals that in most cases these practices were transferred from private sector real estate management under the direction of real estate experts retained by government. One of the most significant advances in government property performance measurement resulted from the recognition that the methodology used by private sector, non-real-estate corporations for managing their real property offered a valuable prototype for local governments.
In general, two approaches are most frequently used to measure the performance of public organisations: subjective and objective measures. Finally, the findings from this study provide useful input for local government policy makers, scholars and asset management practitioners seeking to establish a public real estate performance measurement system, making local governments more efficient and effective in managing their assets as well as increasing public service quality in order to soften the impact of the global financial crisis.

Relevance: 60.00%

Abstract:

One of the main aims in artificial intelligence systems is to develop robust and efficient optimisation methods for Multi-Objective (MO) and Multidisciplinary Design Optimisation (MDO) problems. This paper investigates two different optimisation techniques for multi-objective design optimisation problems. The first is the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). The second combines the concepts of Nash equilibrium and Pareto optimality with Multi-Objective Evolutionary Algorithms (MOEAs), and is denoted Hybrid-Game. Numerical results from the two approaches are compared in terms of the quality of the model and computational expense. The benefit of using the distributed hybrid game methodology for multi-objective design problems is demonstrated.
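The core of NSGA-II's non-dominated sorting is the Pareto-dominance test. A minimal sketch (minimisation convention assumed) of extracting the first non-dominated front:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the first non-dominated front, as in NSGA-II's sorting step."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

Full NSGA-II repeats this to rank the remaining fronts and then applies crowding-distance selection within each front; the Hybrid-Game variant additionally lets Nash players optimise separate variable blocks.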

Relevance: 60.00%

Abstract:

This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are ‘hybrid’ in nature, in that they are a composition of components whose individual properties may be easily described, while the performance of the model or algorithm as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context. The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described and the model is compared to the Normal case based on goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature. The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches are considered: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions.
For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis-adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as a set of examples of hybrid algorithms. The statistical literature shows how statistical efficiency is often the only criterion for an efficient algorithm. In this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual components contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by combining these components in a single algorithm. The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance-sampling-based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampler, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. Additionally, the exponential growth of the asymptotic variance with the dimension is demonstrated, and we illustrate that the optimal covariance matrix for the importance function can be estimated in a special case.
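The instability of importance sampling in high dimensions, noted above, can be demonstrated numerically: with a fixed mismatch between target and proposal, the effective sample size (ESS) fraction decays roughly exponentially in the dimension. The Gaussian target/proposal pair and the mismatch `sigma` below are illustrative assumptions, not the thesis's setting.

```python
import numpy as np

def ess_fraction(dim, sigma=1.5, n=20_000, seed=1):
    """Effective sample size fraction for importance sampling a standard
    normal target in `dim` dimensions with an N(0, sigma^2 I) proposal."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, size=(n, dim))
    # log importance weight: log N(x; 0, I) - log N(x; 0, sigma^2 I)
    logw = (-0.5 * (x ** 2).sum(axis=1)
            + 0.5 * ((x / sigma) ** 2).sum(axis=1)
            + dim * np.log(sigma))
    w = np.exp(logw - logw.max())        # stabilise before exponentiating
    return (w.sum() ** 2) / (n * (w ** 2).sum())
```

For this pair the per-dimension second moment of the weights exceeds one, so the ESS fraction shrinks geometrically as `dim` grows, which is the qualitative behaviour the thesis analyses for the PMC algorithm.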

Relevance: 60.00%

Abstract:

In this paper, we present a ∑GIi/D/1/∞ queue with heterogeneous input/output slot times. This queueing model can be regarded as an extension of the ordinary GI/D/1/∞ model. For this ∑GIi/D/1/∞ queue, we assume that several input streams arrive at the system according to different slot times; in other words, there are different slot times for different input/output processes in the queueing model. The queueing model can therefore be used for an ATM multiplexer with heterogeneous input/output link capacities. Several cases of the queueing model are discussed to reflect different relationships among the input/output link capacities of an ATM multiplexer. In the queueing analysis, two approaches, the Markov model and the probability generating function technique, are adopted to develop the queue length distributions observed at different epochs. This model is particularly useful in the performance analysis of ATM multiplexers with heterogeneous input/output link capacities.
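A discrete-event simulation of such a slotted queue is a useful sanity check against analytical results. The sketch below assumes Bernoulli arrivals on each input stream's slot boundary and one deterministic departure per output slot; the slot times and arrival probabilities are illustrative assumptions, and the simulation is not a substitute for the paper's Markov/PGF analysis.

```python
import random

def simulate_queue(input_slots, output_slot, p_arrival, horizon=100_000, seed=0):
    """Slotted simulation of a sum-of-GI/D/1 queue: stream i deposits one
    packet with probability p_arrival[i] at each of its own slot
    boundaries; one packet departs at each output slot boundary."""
    rng = random.Random(seed)
    q = 0
    lengths = []
    for t in range(1, horizon + 1):
        for slot, p in zip(input_slots, p_arrival):
            if t % slot == 0 and rng.random() < p:
                q += 1                    # arrival on this stream's slot
        if t % output_slot == 0:
            lengths.append(q)             # queue length at a departure epoch
            if q > 0:
                q -= 1                    # deterministic single departure
    return lengths
```

The empirical distribution of `lengths` approximates the queue length distribution observed at departure epochs, one of the epochs the paper's analysis targets.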