961 results for continuous variables
Abstract:
This paper proposes strategies to reduce the number of variables and the combinatorial search space of the multistage transmission expansion planning (TEP) problem. The concept of the binary numeral system (BNS) is used to reduce the number of binary and continuous variables related to the candidate transmission lines and the network constraints connected with them. The construction phase of the greedy randomized adaptive search procedure (GRASP-CP), together with additional constraints obtained from power-flow equilibrium in an electric power system, is employed to further reduce the search space. The multistage TEP problem is modeled as a mixed-binary linear programming problem and solved using a commercial solver in low computational time. Results for one test system and two real systems are presented to show the efficiency of the proposed solution technique. © 1969-2012 IEEE.
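The BNS idea can be illustrated with a toy encoding: instead of one binary variable per candidate line in a corridor, the number of lines built is represented with a logarithmic number of binaries. The function names below are illustrative assumptions, not the paper's notation:

```python
import math

def bns_encoding_size(n_candidates: int) -> int:
    """Number of binary variables needed to represent any count in
    0..n_candidates of parallel lines in one corridor under a binary
    numeral system (BNS) encoding (toy illustration)."""
    return math.ceil(math.log2(n_candidates + 1))

def decode(bits):
    """Recover the number of lines built from its little-endian binary
    encoding, e.g. [1, 1, 0] -> 1 + 2 = 3 lines."""
    return sum(b << i for i, b in enumerate(bits))
```

With 7 candidate lines per corridor, 3 binaries suffice instead of 7, which is the kind of variable reduction the abstract describes.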
Abstract:
Ethnic violence appears to be the major source of violence in the world. Ethnic hostilities are potentially all-pervasive because most countries in the world are multi-ethnic. Public health's focus on violence documents its increasing role in this issue. The present study is based on a secondary analysis of a dataset of responses by 272 individuals from four ethnic groups (Anglo, African, Mexican, and Vietnamese Americans) who answered questions regarding variables related to ethnic violence from a general questionnaire distributed to ethnically diverse purposive, nonprobability, self-selected groups of individuals in Houston, Texas, in 1993. One goal was psychometric: learning about issues in the analysis of datasets with modest numbers, comparison of two approaches to dealing with missing observations not missing at random (conducting the analysis on two datasets), transformation analysis of continuous variables for logistic regression, and logistic regression diagnostics. Regarding the psychometric goal, it was concluded that measurement model analysis was not possible with a relatively small dataset with nonnormal variables, such as Likert-scaled variables; therefore, exploratory factor analysis was used. The two approaches to dealing with missing values produced comparable findings. Transformation analysis suggested that the continuous variables were in the correct scale, and diagnostics suggested that the model fit was adequate. The substantive portion of the analysis included the testing of four hypotheses. Hypothesis One proposed that attitudes/efficacy regarding alternative approaches to resolving grievances from the general questionnaire represented underlying factors: nonpunitive social norms and strategies for addressing grievances (using the political system, organizing protests, using the system to punish offenders, and personal mediation).
Evidence was found to support all but one factor, nonpunitive social norms. Hypothesis Two proposed that the factor variables and the other independent variables (jail, grievance, male, young, and membership in a particular ethnic group) were associated with (non)violence. Jail, grievance, and not using the political system to address grievances were associated with a greater likelihood of intergroup violence. No evidence was found to support Hypotheses Three and Four, which proposed that grievance and ethnic group membership would interact with other variables (e.g., age, gender) to produce variant levels of subgroup (non)violence. The generalizability of the results of this study is constrained by the purposive, self-selected nature of the sample and the small sample size (n = 272). Suggestions for future research include incorporating other possible variables or factors predictive of intergroup violence in models of the kind tested here, and developing and evaluating interventions that promote electoral and nonelectoral political participation as means of reducing interethnic conflict.
Abstract:
The Einstein-Podolsky-Rosen paradox and quantum entanglement are at the heart of quantum mechanics. Here we show that single-pass traveling-wave second-harmonic generation can be used to demonstrate both entanglement and the paradox with continuous variables that are analogous to the position and momentum of the original proposal.
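The continuous-variable entanglement involved can be sketched with a standard analytic example. For a two-mode squeezed state with squeezing parameter r (vacuum quadrature variance normalized to 1/2), the Duan-type sum V(x1-x2)+V(p1+p2) falls below the separability bound of 2 for any r > 0. This is an illustrative textbook model and normalization convention, not the paper's second-harmonic-generation calculation:

```python
import math

def epr_sum(r: float) -> float:
    """Duan-type inseparability sum V(x1 - x2) + V(p1 + p2) for a
    two-mode squeezed state with squeezing parameter r, in units where
    the vacuum quadrature variance is 1/2. Separable states satisfy
    sum >= 2; any value below 2 witnesses entanglement."""
    return 2.0 * math.exp(-2.0 * r)
```

At r = 0 (no squeezing) the sum equals the bound; any finite squeezing pushes it below 2, signalling the position-momentum-like correlations the abstract mentions.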
Abstract:
Continuous variables are among the major data types collected by survey organizations. The data may be incomplete, so that the collectors need to fill in the missing values, or they may contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values over cells defined by different features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.
The first method is for limiting the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing the disclosure risks of releasing such synthetic magnitude microdata. An illustration using a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
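The fixed-marginal guarantee rests on a standard fact: independent Poisson counts conditioned on their sum follow a multinomial distribution with probabilities proportional to the rates. The toy sampler below uses that fact to draw synthetic counts that sum exactly to the original total; it is a sketch of the idea, not the thesis's mixture model:

```python
import random

def synthesize_fixed_total(total: int, rates):
    """Draw synthetic non-negative integer counts summing exactly to
    `total`: Poisson counts conditioned on their sum are multinomial
    with cell probabilities proportional to the Poisson rates."""
    s = sum(rates)
    counts = [0] * len(rates)
    for _ in range(total):          # allocate each unit to one cell
        u, acc = random.random() * s, 0.0
        for i, r in enumerate(rates):
            acc += r
            if u <= acc:
                counts[i] += 1
                break
    return counts
```

Whatever rates are used, the synthetic marginal matches the published total by construction, which is the disclosure-limitation property the abstract emphasizes.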
The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence. However, the new method separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., largely missing) ones. The sub-model structure of the focused variables is more complex than that of the non-focused ones. At the same time, their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the side of the focused ones can help improve estimation accuracy, which is examined in several simulation studies. The method is then applied to data from the American Community Survey.
Abstract:
The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans that last a fraction of a second, such as those dealing with lightning strikes on transmission lines, while at the other end of the scale there are models addressing time horizons of ten or twenty years; these usually involve long-range planning issues. This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long-term planning model that recognises the regional differences that exist in energy demand and that are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme that employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model.
The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until the lower and upper bounds converge. A range of numerical experiments is conducted, and these experiments are included in the discussion. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of the additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems, and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
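The decomposition iteration the thesis describes follows a generic pattern: a master problem proposes a trial integer configuration and a lower bound, a subproblem evaluates the configuration and returns its cost plus dual/cut information, and the loop stops when the bounds converge. The skeleton below is a sketch of that pattern; the function names and cut representation are illustrative assumptions, not the thesis's exact model:

```python
def decomposition_loop(solve_master, solve_subproblem, tol=1e-6, max_iter=100):
    """Generic resource-decomposition loop: the master proposes a trial
    configuration (integer variables) and a lower bound; the subproblem
    evaluates it (continuous variables) and returns its value plus cut
    data derived from the dual vector. Iterate until bounds converge."""
    lb, ub, cuts = float("-inf"), float("inf"), []
    for _ in range(max_iter):
        config, lb = solve_master(cuts)        # trial configuration + lower bound
        value, cut = solve_subproblem(config)  # evaluate it, recover cut data
        ub = min(ub, value)                    # best (cheapest) trial seen so far
        if ub - lb <= tol:
            break
        cuts.append(cut)
    return lb, ub
```

In the thesis's setting, `solve_master` would be the mixed integer master and `solve_subproblem` the smooth nonlinear operating model whose dual vector generates the next cut.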
Abstract:
Monitoring foodservice satisfaction is a risk management strategy for malnutrition in the acute care sector, as low satisfaction may be associated with poor intake. This study aimed to investigate the relationship between age and foodservice satisfaction in the private acute care setting. Patient satisfaction was assessed using a validated tool, the Acute Care Hospital Foodservice Patient Satisfaction Questionnaire, for data collected 2008–2010 (n = 779) at a private hospital in Brisbane. Age was grouped into three categories: ≤50 years, 51–70 years and >70 years. Fisher's exact test assessed independence of categorical responses and age group; ANOVA or the Kruskal–Wallis test was used for continuous variables. Dichotomised responses were analysed using logistic regression and odds ratios (95% confidence interval, p < 0.05). Overall foodservice satisfaction (5-point scale) was high (≥4 out of 5) and was independent of age group (p = 0.377). There was an increasing trend with age in mean satisfaction scores for individual dimensions of foodservice: food quality (p < 0.001), meal service quality (p < 0.001), staff service issues (p < 0.001) and physical environment (p < 0.001). A preference for being able to choose different sized meals (59.8% >70 years vs 40.6% ≤50 years; p < 0.001) and the response to 'the foods are just the right temperature' (55.3% >70 years vs 35.9% ≤50 years; p < 0.001) were dependent on age. For the food quality dimension, based on dichotomised responses (satisfied or not), the odds of satisfaction were higher for >70 years (OR = 5.0, 95% CI: 1.8–13.8; <50 years referent). These results suggest that dimensions of foodservice satisfaction are associated with age and can assist foodservices in meeting the varying generational expectations of clients.
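The dichotomised-response analysis reported above reduces, for each dimension, to an odds ratio on a 2x2 table of satisfied/not satisfied by age group. A minimal sketch with a Wald-type confidence interval; the counts in the test are illustrative, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = satisfied / not satisfied in the group of interest;
    c, d = satisfied / not satisfied in the referent group."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An OR of 5.0 with CI 1.8–13.8, as reported for the >70 years group, means the interval excludes 1 and the association is statistically significant at the stated level.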
Abstract:
Objective: Effective management of multi-resistant organisms is an important issue for hospitals both in Australia and overseas. This study investigates the utility of Bayesian Network (BN) analysis for examining relationships between risk factors and colonization with Vancomycin-Resistant Enterococcus (VRE). Design: Bayesian Network analysis was performed using infection control data collected over a period of 36 months (2008-2010). Setting: Princess Alexandra Hospital (PAH), Brisbane. Outcome of interest: Number of new VRE isolates. Methods: A BN is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). A BN enables multiple interacting agents to be studied simultaneously. The initial BN model was constructed based on the infectious disease physician's expert knowledge and the current literature. Continuous variables were dichotomised using the third-quartile values of the year 2008 data. The BN was used to examine the probabilistic relationships between VRE isolates and risk factors, and to establish which factors were associated with an increased probability of a high number of VRE isolates. Software: Netica (version 4.16). Results: Preliminary analysis revealed that VRE transmission and VRE prevalence were the most influential factors in predicting a high number of VRE isolates. Interestingly, several factors (hand hygiene and cleaning) known through the literature to be associated with VRE prevalence did not appear to be as influential as expected in this BN model. Conclusions: This preliminary work has shown that Bayesian Network analysis is a useful tool for examining clinical infection prevention issues, where there is often a web of factors that influence outcomes. This BN model can be restructured easily, enabling various combinations of agents to be studied.
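The dichotomisation step described in the Methods can be sketched directly. The exact quantile method the study used is not specified, so `statistics.quantiles` with its default (exclusive) method is an assumption:

```python
import statistics

def dichotomise_at_q3(values):
    """Dichotomise a continuous variable at its third quartile, as in
    the abstract's preprocessing of the 2008 baseline data: 1 if the
    value exceeds Q3, else 0. Returns the flags and the cutpoint."""
    q3 = statistics.quantiles(values, n=4)[2]  # third quartile
    return [int(v > q3) for v in values], q3
```

Each continuous risk factor then enters the BN as a binary "high/not high" node, which is what makes the DAG's conditional probability tables tractable.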
Abstract:
OBJECTIVE: The objective of this study was to describe the distribution of conjunctival ultraviolet autofluorescence (UVAF) in an adult population. METHODS: We conducted a cross-sectional, population-based study in the genetic isolate of Norfolk Island, South Pacific Ocean. In all, 641 people, aged 15 to 89 years, were recruited. UVAF and standard (control) photographs were taken of the nasal and temporal interpalpebral regions bilaterally. Differences between the groups for non-normally distributed continuous variables were assessed using the Wilcoxon-Mann-Whitney rank-sum test. Trends across categories were assessed using Cuzick's non-parametric test for trend or Kendall's rank correlation τ. RESULTS: Conjunctival UVAF is a non-normally distributed trait with a positively skewed distribution. The median amount of conjunctival UVAF per person (sum of four measurements: right nasal/temporal and left nasal/temporal) was 28.2 mm² (interquartile range 14.5-48.2). There was an inverse, linear relationship between UVAF and advancing age (P<0.001). Males had a higher sum of UVAF compared with females (34.4 mm² vs 23.2 mm², P<0.0001). There were no statistically significant differences in area of UVAF between right and left eyes or between nasal and temporal regions. CONCLUSION: We have provided the first quantifiable estimates of conjunctival UVAF in an adult population. Further data are required to provide information about the natural history of UVAF and to characterise other potential disease associations with UVAF. UVR protective strategies should be emphasised at an early age to prevent the long-term adverse effects on health associated with excess UVR.
Abstract:
Background: Little is known about the health effects of worksite wellness programs on police department staff. Objective: To examine 1-2 year changes in the health profiles of participants in the Queensland Police Service's wellness program. Methods: Participants underwent yearly physical assessments. Health profile data collected during assessments from 2008 to 2012 were included in the analysis. Data Analysis: Repeated-measures ANOVA was used for continuous outcome variables, the related-samples Wilcoxon signed-rank test for non-normally distributed continuous variables, and McNemar's test for binary variables. Results: Significant changes in physical measures included decreases in waist circumference and percent body fat, and increases in cardiorespiratory fitness and flexibility (p<0.01). Changes in serum cholesterol, haemoglobin, total cholesterol ratios, HDL, LDL and triglyceride levels were also significant (p<0.01). Conclusion: Participants' health profiles mostly improved between cycles, although most changes were not clinically significant. As this evaluation used a single-group pre-test post-test design, it provides initial indications that wellness programs can benefit staff in police departments.
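The related-samples Wilcoxon signed-rank statistic applied above can be computed by hand for small examples: drop zero differences, rank the remaining differences by absolute value, and sum the ranks of the positive ones. This minimal version ignores tied absolute differences, so it is a sketch rather than a full implementation:

```python
def signed_rank_statistic(before, after):
    """Wilcoxon signed-rank statistic W+ for paired pre/post
    measurements (zero differences dropped; no tie correction)."""
    diffs = [a - b for b, a in zip(before, after) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = {i: r + 1 for r, i in enumerate(order)}   # rank 1 = smallest |diff|
    return sum(ranks[i] for i, d in enumerate(diffs) if d > 0)
```

A W+ far from its expected value under no change (half the total rank sum) is what drives the significant pre/post differences the abstract reports.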
Abstract:
Objective: The objective of this study was to evaluate weight-related risk perception in early pregnancy and to compare this perception between women commencing pregnancy at a healthy weight and those commencing overweight. Study design: Pregnant women (n=664) aged 29±5 (mean±s.d.) years were recruited from a metropolitan teaching hospital in Australia. A self-administered questionnaire was completed at around 16 weeks of gestation. Height measured at baseline and self-reported pre-pregnancy weight were used to calculate body mass index. A cross-sectional analysis was conducted. Differences between groups were assessed using chi-squared tests for categorical variables and t-tests or Mann–Whitney U tests for continuous variables, depending on distribution. Results: Excess gestational weight gain (GWG) during pregnancy was perceived as more important than pre-pregnancy weight in leading to health problems for women or their child. Personal risk perception for complications was low for all women, although overweight women had slightly higher scores than healthy-weight women (2.4±1.0 vs 2.9±1.0; P<0.001). All women perceived their risk of complications to be below that of an average pregnant woman. Conclusion: Women should be informed of the risk associated with their pre-pregnancy weight (in the case of maternal overweight) and with excess GWG. If efforts to raise risk awareness are to result in preventative action, this information needs to be accompanied by advice and appropriate support on how to reduce risk.
Abstract:
I agree with Costanza and Finkelstein (2015) that it is futile to further invest in the study of generational differences in the work context due to a lack of appropriate theory and methods. The key problem with the generations concept is that splitting continuous variables such as age or time into a few discrete units involves arbitrary cutoffs and atheoretical groupings of individuals (e.g., stating that all people born between the early 1960s and early 1980s belong to Generation X). As noted by methodologists, this procedure leads to a loss of information about individuals and reduced statistical power (MacCallum, Zhang, Preacher, & Rucker, 2002). Due to these conceptual and methodological limitations, I regard it as very difficult if not impossible to develop a “comprehensive theory of generations” (Costanza & Finkelstein, p. 20) and to rigorously examine generational differences at work in empirical studies.
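The information-loss point attributed to MacCallum et al. can be demonstrated with a small simulation: splitting a continuous predictor into discrete groups (here a median split, the simplest case) attenuates its correlation with an outcome. A stdlib-only sketch:

```python
import math
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient (stdlib only)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

random.seed(1)
x = [random.gauss(0, 1) for _ in range(2000)]          # continuous predictor (e.g., age)
y = [xi + random.gauss(0, 1) for xi in x]              # outcome correlated with it
median = statistics.median(x)
x_split = [1.0 if xi > median else 0.0 for xi in x]    # arbitrary two-group split
r_continuous = pearson(x, y)
r_split = pearson(x_split, y)                          # attenuated correlation
```

The dichotomised predictor reliably shows a weaker correlation with the outcome than the original continuous one, which is exactly the loss of information and statistical power the commentary invokes against arbitrary generational cutoffs.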
Abstract:
This paper presents the design and implementation of a learning controller for Automatic Generation Control (AGC) in power systems based on a reinforcement learning (RL) framework. In contrast to the recent RL scheme for AGC proposed by us, the present method permits handling power system variables such as the Area Control Error (ACE) and deviations from scheduled frequency and tie-line flows as continuous variables. (In the earlier scheme, these variables had to be quantized into finitely many levels.) The optimal control law is arrived at in the RL framework by making use of a Q-learning strategy. Since the state variables are continuous, we propose the use of Radial Basis Function (RBF) neural networks to compute the Q-values for a given input state. Since in this application we cannot provide training data appropriate for the standard supervised learning framework, a reinforcement learning algorithm is employed to train the RBF network. We also employ a novel exploration strategy, based on a Learning Automata algorithm, for generating training samples during Q-learning. The proposed scheme, in addition to being simple to implement, inherits all the attractive features of an RL scheme, such as model-independent design, flexibility in control objective specification, and robustness. Two implementations of the proposed approach are presented. Through simulation studies, the attractiveness of this approach is demonstrated.
Abstract:
The aim of this technical report is to present some detailed explanations to help understand and use Message Passing Interface (MPI) parallel programming for solving several mixed integer optimization problems. We have developed a C++ experimental code that uses the IBM ILOG CPLEX optimizer within the COmputational INfrastructure for Operations Research (COIN-OR) and MPI parallel computing to solve the optimization models under UNIX-like systems. The computational experience illustrates how we can solve 44 optimization problems that are asymmetric with respect to the number of integer and continuous variables and the number of constraints. We also report a comparison of the speedup and efficiency of several strategies implemented for the available numbers of threads.
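Speedup and efficiency, the two metrics the comparison reports, have standard definitions that fit in a few lines (a sketch of the metrics only, not the report's C++ code):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Parallel speedup: serial wall time over parallel wall time."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, threads: int) -> float:
    """Parallel efficiency: speedup per thread; 1.0 is ideal scaling."""
    return speedup(t_serial, t_parallel) / threads
```

For example, a run that drops from 100 s serial to 25 s on 8 threads has speedup 4 but efficiency only 0.5, which is the kind of trade-off such thread-count comparisons expose.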
Abstract:
This paper presents a hybrid heuristic, triangle evolution (TE), for global optimization. It is a real-coded evolutionary algorithm. As in differential evolution (DE), TE targets each individual in the current population and attempts to replace it by a new, better individual. However, the way of generating new individuals is different. TE generates new individuals in a Nelder-Mead way, while the simplices used in TE are 1- or 2-dimensional. The proposed algorithm is very easy to use and efficient for global optimization problems with continuous variables. Moreover, it requires only one (explicit) control parameter. Numerical results show that the new algorithm is comparable with DE for low-dimensional problems but outperforms DE for high-dimensional problems.
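As a reference point for the comparison with DE, the classic DE/rand/1 mutation that TE departs from can be sketched as follows; this is a minimal illustration of the baseline operator, not the paper's TE simplex operator:

```python
import random

def de_rand_1_mutation(population, F=0.8):
    """Classic DE/rand/1 mutant vector: v = a + F * (b - c), where a, b, c
    are three distinct individuals drawn from the current population and
    F is the differential weight (one of DE's control parameters)."""
    a, b, c = random.sample(population, 3)
    return [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
```

TE replaces this difference-vector recipe with reflections and contractions of small (1- or 2-dimensional) Nelder-Mead simplices, which is also how it gets by with a single explicit control parameter instead of DE's several.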