881 results for Regression-based decomposition.
Abstract:
The examination of Workplace Aggression as a global construct has gained considerable attention over the past few years as organizations work to better understand and address the occurrence and consequences of this challenging construct. The purpose of this dissertation is to build on previous efforts to validate the appropriateness and usefulness of a global conceptualization of the workplace aggression construct. The dissertation is organized into two parts: Part 1 used a confirmatory factor analysis (CFA) approach to assess the existence of workplace aggression as a global construct; Part 2 used a series of correlational analyses to examine the relationship between a selection of commonly experienced individual strain-based outcomes and the global construct conceptualization assessed in Part 1. Participants were a diverse sample of 219 working individuals from Amazon’s Mechanical Turk participant pool. Results of Part 1 did not support a one-factor global conceptualization of the workplace aggression construct. However, support was found for a higher-order five-factor model, suggesting that workplace aggression may be conceptualized as an overarching construct composed of separate workplace aggression constructs. Results of Part 2 supported the relationships between an existing global workplace aggression conceptualization and a series of strain-based outcomes. Additional post-hoc correlational analyses showed that individual factors such as emotional intelligence and personality are related to the experience of workplace aggression. Further, moderated regression analyses demonstrated that individuals experiencing high levels of workplace aggression reported higher job satisfaction when they felt strongly that the aggressive act was highly visible and, similarly, when they felt that there was a clear intent to cause harm. Overall, the findings of this dissertation support the need to simplify the current state of workplace aggression measurement. Future research should continue to examine workplace aggression in an effort to shed additional light on the structure and usefulness of this complex construct.
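As an illustration of the moderated regression approach mentioned in this abstract, the sketch below fits a model with an interaction between aggression exposure and perceived visibility of the act. The variable names and data file are hypothetical, not the dissertation's actual measures.

```python
# Minimal sketch of a moderated regression, assuming a survey export with
# hypothetical columns: aggression, visibility, job_satisfaction.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical data file

# Center the predictors so the interaction term is interpretable.
for col in ["aggression", "visibility"]:
    df[col + "_c"] = df[col] - df[col].mean()

# Job satisfaction regressed on aggression, the moderator, and their product.
model = smf.ols("job_satisfaction ~ aggression_c * visibility_c", data=df).fit()
print(model.summary())  # a significant interaction term indicates moderation
```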
Abstract:
Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads and the interference among competing workloads make it difficult to understand a VM’s resource demands for meeting its Quality of Service (QoS) targets. Second, the dynamics of the applications and the system also make it difficult to maintain the desired QoS target as the environment changes. Third, the transparency of virtualization presents a hurdle for the guest-layer application and the host-layer VM scheduler to cooperate on improving application QoS and system efficiency. This dissertation proposes to address the above challenges through fuzzy-modeling- and control-theory-based VM resource management. First, a fuzzy-logic-based nonlinear modeling approach is proposed to accurately capture a VM’s complex demands for multiple types of resources, automatically and online, based on the observed workload and resource usage. Second, to enable fast adaptation in resource management, the fuzzy modeling approach is integrated with a predictive-control-based controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach that can quickly track applications’ QoS targets and optimize resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box resource management solutions, a cross-layer optimization approach is proposed to enable cooperation between a VM’s host and guest layers and further improve application QoS and resource usage efficiency. The proposed approaches are prototyped on a Xen-based virtualized system and evaluated with representative benchmarks including TPC-H, RUBiS, and TerraFly. The results demonstrate that the fuzzy-modeling-based approach improves the accuracy of resource prediction by up to 31.4% compared to conventional regression approaches. The FMPC approach substantially outperforms the traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system, and it is able to manage dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when resources are contended by dynamic workloads.
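The abstract does not give the fuzzy model's exact form; the sketch below only illustrates the general flavor of a fuzzy-rule-based resource-demand predictor (a hand-rolled, zero-order Takagi-Sugeno style model). All membership breakpoints, rule outputs, and variable names are hypothetical placeholders.

```python
# Illustrative sketch only: a tiny fuzzy-rule model mapping observed request
# rate to predicted CPU demand, in the spirit of fuzzy-logic-based VM modeling.

def tri(x, a, b, c):
    """Triangular membership with peak at b; zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_cpu_demand(req_rate):
    # Fuzzy sets over request rate (req/s); breakpoints are made-up values.
    low  = max(0.0, min(1.0, (300.0 - req_rate) / 300.0))   # left shoulder
    med  = tri(req_rate, 100.0, 400.0, 700.0)
    high = max(0.0, min(1.0, (req_rate - 500.0) / 500.0))   # right shoulder
    # Rule consequents: CPU share (% of one core) for each fuzzy set.
    rules = [(low, 15.0), (med, 45.0), (high, 90.0)]
    num = sum(mu * cpu for mu, cpu in rules)
    den = sum(mu for mu, _ in rules) or 1.0
    return num / den  # weighted-average defuzzification

print(predict_cpu_demand(250))  # blends the "low" and "medium" rules
```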
Abstract:
To achieve the goal of sustainable development, the building energy system was evaluated from the points of view of both the first and second laws of thermodynamics. The relationship between exergy destruction and sustainable development was discussed first, followed by descriptions of the resource abundance model, the life cycle analysis model, and the economic investment effectiveness model. By combining the foregoing models, a new sustainability index was proposed. Several green building case studies in the U.S. and China were presented. The influences of building function, geographic location, climate pattern, the regional energy structure, and the future technology improvement potential of renewable energy were discussed. Life cycle analyses of the building envelope, HVAC system, and on-site renewable energy system were compared from energy, exergy, environmental, and economic perspectives. It was found that climate pattern had a dramatic influence on the life cycle investment effectiveness of the building envelope. The energy performance of the building HVAC system was much better than its exergy performance; to further increase exergy efficiency, renewable energy rather than fossil fuel should be used as the primary energy source. A regression model of building life cycle cost and exergy consumption was set up. The optimal building insulation level could be determined either by cost minimization or by exergy consumption minimization; the exergy approach called for a higher insulation level than the cost approach. The influence of energy price on the system selection strategy was discussed. Two photovoltaic (PV) systems, stand-alone and grid-tied, were compared by the life cycle assessment method, and the superiority of the latter was clear. The analysis also showed that over its life span PV technology was less attractive economically because electricity prices in the U.S. and China did not fully reflect the associated environmental burden. However, if future energy price surges and PV system cost reductions are considered, the technology could be very promising for sustainable buildings.
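To make the cost-versus-exergy insulation trade-off concrete, the sketch below runs a one-dimensional search over insulation thickness under a toy building model. Every coefficient (prices, climate factor, embodied exergy) is a hypothetical placeholder, chosen only to show how the two objectives can favor different insulation levels.

```python
# Illustrative sketch: pick an insulation thickness by minimizing either
# life-cycle cost or life-cycle exergy consumption (all numbers hypothetical).
import numpy as np

t = np.linspace(0.02, 0.60, 300)        # insulation thickness, m
U = 1.0 / (0.5 + t / 0.04)              # wall U-value, W/(m^2*K)
annual_heat = 88.0 * U                  # kWh/(m^2*yr) for an assumed climate
life = 30                               # years of operation

cost   = 200.0 * t + life * 0.12 * annual_heat   # $/m^2: material + heating bills
exergy = 500.0 * t + life * 1.00 * annual_heat   # kWh_ex/m^2: embodied + operational

print(f"cost-optimal thickness:   {t[np.argmin(cost)]:.2f} m")
print(f"exergy-optimal thickness: {t[np.argmin(exergy)]:.2f} m")  # thicker
```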
Abstract:
In longitudinal data analysis, our primary interest is in the regression parameters for the marginal expectations of the longitudinal responses; the longitudinal correlation parameters are of secondary interest. The joint likelihood function for longitudinal data is challenging, particularly for correlated discrete outcome data. Marginal modeling approaches such as generalized estimating equations (GEEs) have received much attention in the context of longitudinal regression. These methods are based on estimates of the first two moments of the data and a working correlation structure, with confidence regions and hypothesis tests based on asymptotic normality. The methods are sensitive to misspecification of the variance function and the working correlation structure: under such misspecification, the estimates can be inefficient and inconsistent, and inference may give incorrect results. To overcome this problem, we propose an empirical likelihood (EL) procedure based on a set of estimating equations for the parameter of interest and discuss its characteristics and asymptotic properties. We also provide an algorithm based on EL principles for estimating the regression parameters and constructing a confidence region for the parameter of interest. We extend our approach to variable selection for high-dimensional longitudinal data with many covariates. In this situation it is necessary to identify a submodel that adequately represents the data, since including redundant variables may reduce the accuracy and efficiency of inference. We propose a penalized empirical likelihood (PEL) variable selection procedure based on GEEs, in which variable selection and estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties, and present an algorithm for optimizing the PEL. Simulation studies show that when the model assumptions are correct, our method performs as well as existing methods, and when the model is misspecified, it has clear advantages. We have applied the method to two case examples.
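For context, the sketch below shows only the standard GEE baseline that the abstract contrasts against, fitted with statsmodels; it is not the proposed empirical likelihood procedure, and the column names and data file are hypothetical.

```python
# Baseline GEE fit for correlated longitudinal responses (marginal model with
# a working correlation structure). Assumes a long-format DataFrame with
# hypothetical columns: id (subject), y (binary outcome), x1, x2, time.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("longitudinal.csv")  # hypothetical data file

model = smf.gee(
    "y ~ x1 + x2 + time",
    groups="id",                              # clusters = subjects
    data=df,
    family=sm.families.Binomial(),            # marginal mean model for binary y
    cov_struct=sm.cov_struct.Exchangeable(),  # working correlation structure
)
result = model.fit()
print(result.summary())  # sandwich (robust) standard errors by default
```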
Abstract:
Product quality planning is a fundamental part of quality assurance in manufacturing. It comprises the distribution of quality aims over each phase of product development and the deployment of quality operations and resources to accomplish these aims. This paper proposes a quality planning methodology based on risk assessment, in which the planning tasks of product development are translated into evaluations of risk priorities. First, a comprehensive model for quality planning is developed to address the deficiencies of traditional quality function deployment (QFD) based quality planning. Second, a novel failure knowledge base (FKB) based method is discussed. Then a mathematical method and algorithm for risk assessment is presented for target decomposition, measure selection, and sequence optimization. Finally, the proposed methodology is implemented in a web-based prototype software system, QQ-Planning, to solve the problem of quality planning regarding the distribution of quality targets and the deployment of quality resources, such that the product requirements are satisfied and the enterprise resources are highly utilized.
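The abstract does not specify the paper's risk assessment mathematics, so the sketch below only illustrates the general idea of ranking planning tasks by a risk priority score, using a generic FMEA-style risk priority number; the task names and scores are invented.

```python
# Generic FMEA-style risk prioritization of planning tasks (illustration only;
# not the paper's actual risk model). Each task gets severity (S), occurrence
# (O), and detection (D) scores on 1-10; RPN = S * O * D ranks where quality
# measures and resources should be deployed first.
tasks = [
    # (task name, severity, occurrence, detection) -- invented example scores
    ("housing dimensional tolerance", 7, 4, 5),
    ("surface coating adhesion",      5, 6, 3),
    ("connector insertion force",     8, 3, 6),
]

ranked = sorted(tasks, key=lambda t: t[1] * t[2] * t[3], reverse=True)
for name, s, o, d in ranked:
    print(f"{name:32s} RPN = {s * o * d}")
```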
Abstract:
Over the last three decades, there has been a precipitous rise in curiosity regarding the clinical use of mindfulness meditation for the self-management of a broad range of chronic health conditions. Despite the ever-growing body of evidence supporting the use of mindfulness-based therapies for both medical and psychological concerns, data on the active ingredients of these mind-body interventions are relatively scarce. Regular engagement in formal mindfulness practice is considered by many to be requisite for generating therapeutic change; however, previous investigations of at-home practice in MBIs have produced mixed results. The equivocal nature of these findings has been attributed to significant methodological limitations, including the lack of standardized, systematic practice monitoring tools, and a singular focus on practice time, with little attention paid to the nature and quality of one’s practice. The present study used a prospective, observational design to assess the effects of home-based practice on dispositional mindfulness, self-compassion, and psychological functioning in twenty-eight people enrolled in an MBSR or MBCT program. To address some of the aforementioned limitations, the present study collected detailed weekly accounts of participants’ home-based practice engagement, including information about practice time (i.e., frequency and duration), exercise type, perceived effort and barriers to participation, and practice quality. Hierarchical multiple regression was used to examine the relative contribution of practice time and practice quality on treatment outcomes, and to explore possible predictors of adherence to at-home practice recommendations. As anticipated, practice quality and perceived effort improved with time; however, rather unexpectedly, practice quality was not a significant predictor of treatment-related improvements in psychological health. Home practice engagement, however, was predictive of change in dispositional mindfulness, in the expected direction. Results of our secondary analyses demonstrated that employment status was predictive of home practice engagement, with those who were unemployed completing more at-home practice on average. Mindfulness self-efficacy at baseline and previous experience with meditation or other contemplative practices were independently predictive of mean practice quality. The results of this study suggest that home practice helps generate meaningful change in dispositional mindfulness, which is purportedly a key mechanism of action in mindfulness-based interventions.
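To show the structure of the hierarchical (sequential) multiple regression described above, the sketch below enters practice time in a first block and practice quality in a second block, then tests the increment in explained variance. The column names and data file are hypothetical.

```python
# Sketch of a two-step hierarchical regression: step 1 enters practice time,
# step 2 adds practice quality; the R^2 change is tested with a nested F-test.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("practice_outcomes.csv")  # hypothetical weekly-log dataset

step1 = smf.ols("outcome_change ~ practice_minutes", data=df).fit()
step2 = smf.ols("outcome_change ~ practice_minutes + practice_quality", data=df).fit()

r2_change = step2.rsquared - step1.rsquared
f_stat, p_value, df_diff = step2.compare_f_test(step1)  # nested-model F-test
print(f"R^2 change = {r2_change:.3f}, F = {f_stat:.2f}, p = {p_value:.3f}")
```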
Abstract:
Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted by undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we propose two machine learning based methods to address the spectral distortion issue and to improve material decomposition. The first approach models the distortions using an artificial neural network (ANN) and compensates for them in a statistical reconstruction. The second approach directly corrects for the distortion in the projections. Both techniques can be applied as a calibration process in which the neural network is trained on data from 3D-printed phantoms to learn the distortion model or the correction model of the spectral distortion. This replaces the synchrotron measurements required by conventional techniques to derive the distortion model parametrically, which can be costly and time consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
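The sketch below illustrates only the projection-domain correction idea: learn a mapping from distorted multi-energy-bin measurements to corrected ones using phantom scans of known composition. The network architecture, array shapes, and file names are assumptions, not the authors' actual setup.

```python
# Illustrative projection-domain spectral-distortion correction: train a small
# neural network on calibration phantom data, then apply it to object scans.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Calibration data from 3D-printed phantoms (hypothetical files):
#   X: measured (distorted, noisy) counts per energy bin, shape (n_rays, n_bins)
#   Y: corresponding ideal counts computed from the known phantom composition
X = np.load("phantom_measured_bins.npy")
Y = np.load("phantom_ideal_bins.npy")

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, Y)  # learn the spectral distortion correction

# Apply the learned correction to an object scan before reconstruction.
proj = np.load("object_projections_bins.npy")        # (n_rays, n_bins)
proj_corrected = net.predict(proj)
np.save("object_projections_corrected.npy", proj_corrected)
```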
Abstract:
Previously developed models for predicting absolute risk of invasive epithelial ovarian cancer have included a limited number of risk factors and have had low discriminatory power (area under the receiver operating characteristic curve (AUC) < 0.60). Because of this, we developed and internally validated a relative risk prediction model that incorporates 17 established epidemiologic risk factors and 17 genome-wide significant single nucleotide polymorphisms (SNPs) using data from 11 case-control studies in the United States (5,793 cases; 9,512 controls) from the Ovarian Cancer Association Consortium (data accrued from 1992 to 2010). We developed a hierarchical logistic regression model for predicting case-control status that included imputation of missing data. We randomly divided the data into an 80% training sample and used the remaining 20% for model evaluation. The AUC for the full model was 0.664. A reduced model without SNPs performed similarly (AUC = 0.649). Both models performed better than a baseline model that included age and study site only (AUC = 0.563). The best predictive power was obtained in the full model among women younger than 50 years of age (AUC = 0.714); however, the addition of SNPs increased the AUC the most for women older than 50 years of age (AUC = 0.638 vs. 0.616). Adapting this improved model to estimate absolute risk and evaluating it in prospective data sets is warranted.
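The sketch below mirrors the evaluation pattern described above: fit a logistic model for case-control status on an 80% training split and report the AUC on the held-out 20%. The feature names and data file are hypothetical, and a plain logistic regression stands in for the authors' hierarchical model with imputation.

```python
# Train/test split evaluation of a logistic risk model with held-out AUC.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ovarian_risk_factors.csv")  # hypothetical analysis dataset
X = df.drop(columns=["case_status"])          # epidemiologic factors (+ SNPs)
y = df["case_status"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC = {auc:.3f}")
```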
Abstract:
This material is based upon work supported by the National Science Foundation through the Florida Coastal Everglades Long-Term Ecological Research program under Cooperative Agreements #DBI-0620409 and #DEB-9910514. This image is made available for non-commercial or educational use only.
Abstract:
Peat and net carbon accumulation rates in two sub-arctic peat plateaus of west-central Canada have been studied through geochemical analyses and accelerator mass spectrometry (AMS) radiocarbon dating. The peatland sites started to develop around 6600-5900 cal. yr BP and the peat plateau stages are characterized by Sphagnum fuscum peat alternating with rootlet layers. The long-term peat and net carbon accumulation rates for both profiles are 0.30-0.31 mm/yr and 12.5-12.7 g C/m²/yr, respectively. These values reflect very slow peat accumulation (0.04-0.09 mm/yr) and net carbon accumulation (3.7-5.2 g C/m²/yr) in the top rootlet layers. Extensive AMS radiocarbon dating of one profile shows that accumulation rates are variable depending on peat plateau stage. Peat accumulation rates are up to six times higher and net carbon accumulation rates up to four times higher in S. fuscum than in rootlet stages. Local fires represented by charcoal remains in some of the rootlet layers result in very low accumulation rates. High C/N ratios throughout most of the peat profiles suggest low degrees of decomposition due to stable permafrost conditions. Hence, original peat accretion has remained largely unaltered, except in the initial stages of peatland development when permafrost was not yet present.
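For readers unfamiliar with how such long-term rates are derived, the sketch below shows the basic arithmetic: depth over basal age for peat accumulation, and depth times bulk density times carbon fraction over age for carbon accumulation. The input values are hypothetical, chosen only to yield rates of the order reported above.

```python
# Deriving long-term accumulation rates from a dated peat core (illustration;
# the input values are hypothetical, not the studied cores' data).
depth_cm     = 200.0    # peat depth above the dated basal layer, cm
basal_age_yr = 6500.0   # calibrated basal age, cal. yr BP
bulk_density = 0.09     # g dry peat per cm^3
carbon_frac  = 0.45     # g C per g dry peat

peat_rate_mm_per_yr = depth_cm * 10.0 / basal_age_yr
# carbon per unit area: depth (cm) * density (g/cm^3) * C fraction, per m^2
carbon_rate_gC_m2_yr = depth_cm * bulk_density * carbon_frac * 1e4 / basal_age_yr

print(f"peat accumulation:   {peat_rate_mm_per_yr:.2f} mm/yr")        # ~0.31
print(f"carbon accumulation: {carbon_rate_gC_m2_yr:.1f} g C/m^2/yr")  # ~12.5
```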
Abstract:
A scenario-based two-stage stochastic programming model for gas production network planning under uncertainty is usually a large-scale nonconvex mixed-integer nonlinear program (MINLP), which can be efficiently solved to global optimality with nonconvex generalized Benders decomposition (NGBD). This paper is concerned with the parallelization of NGBD to exploit multiple available computing resources. Three parallelization strategies are proposed: naive scenario parallelization, adaptive scenario parallelization, and adaptive scenario and bounding parallelization. A case study of two industrial natural gas production network planning problems shows that, while NGBD without parallelization is already faster than a state-of-the-art global optimization solver by an order of magnitude, parallelization can improve efficiency by several times on computers with multicore processors. The adaptive scenario and bounding parallelization achieves the best overall performance among the three proposed strategies.
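Since the scenario subproblems inside a decomposition iteration are independent, the simplest (naive) scenario parallelization amounts to dispatching them to worker processes. The sketch below shows that dispatch pattern only; the subproblem solver is a placeholder stub, not an actual MINLP solve or the paper's implementation.

```python
# Sketch of naive scenario parallelization for a Benders-type decomposition:
# scenario subproblems within one iteration are solved concurrently.
from multiprocessing import Pool

def solve_scenario_subproblem(args):
    scenario_id, first_stage_decision = args
    # ... build and solve this scenario's subproblem here (placeholder) ...
    objective_value, benders_cut = 0.0, None
    return scenario_id, objective_value, benders_cut

def parallel_lower_bounding(scenarios, first_stage_decision, n_workers=8):
    jobs = [(s, first_stage_decision) for s in scenarios]
    with Pool(processes=n_workers) as pool:
        results = pool.map(solve_scenario_subproblem, jobs)   # one task per scenario
    total_obj = sum(obj for _, obj, _ in results)             # aggregate bound
    cuts = [cut for _, _, cut in results if cut is not None]  # cuts for master
    return total_obj, cuts
```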