820 results for "Reliability and safeties"


Relevance: 90.00%

Publisher:

Abstract:

Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of ever more renewable energy resources and the emergence of electric vehicles. In particular, the large-scale adoption of residential and commercial solar photovoltaic (PV) plants is upending the traditionally slowly varying, unidirectional power flows of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices offer an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for future power distribution networks with high penetration of distributed inverter-based renewable generators.

Proposed solutions to power flow control problems in the literature range from fully centralized to fully local. This thesis focuses on the two ends of this spectrum. In the first half (chapters 2 and 3), we seek optimal solutions to voltage control problems assuming a centralized architecture with complete information. These solutions are particularly important for understanding the overall system behaviour and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven both reliable and computationally efficient in dealing with the non-convexity of the power flow equations in radial and weakly meshed distribution networks. We then apply the results to the fast-timescale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
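To give a flavour of this approach, here is a minimal sketch (not the dissertation's full two-step relaxation) of the branch flow model on a single line, with the nonconvex current equation relaxed to a second-order cone and solved with the cvxpy modelling package; the impedance and load values are illustrative placeholders.

```python
# Minimal SOC relaxation of a DistFlow-style branch flow model on one
# line feeding one load; all numbers are illustrative placeholders.
import cvxpy as cp

r, x = 0.01, 0.01   # line resistance/reactance (p.u.)
p, q = 0.10, 0.05   # active/reactive load at the receiving bus (p.u.)
v0 = 1.0            # squared substation voltage magnitude (fixed)

P = cp.Variable()                 # sending-end active power flow
Q = cp.Variable()                 # sending-end reactive power flow
l = cp.Variable(nonneg=True)      # squared line current |I|^2
v1 = cp.Variable()                # squared voltage at the load bus

constraints = [
    P - r * l == p,   # active power balance at the load bus
    Q - x * l == q,   # reactive power balance at the load bus
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,  # voltage drop
    # nonconvex l = (P^2 + Q^2)/v0 relaxed to >=, written as an SOC:
    cp.norm(cp.vstack([2 * P, 2 * Q, l - v0])) <= l + v0,
    0.9**2 <= v1, v1 <= 1.1**2,   # voltage magnitude limits
]
prob = cp.Problem(cp.Minimize(r * l), constraints)  # minimize line loss
prob.solve()
print(f"P = {P.value:.4f} p.u., |V1| = {v1.value**0.5:.4f} p.u.")
```

Because the objective penalizes the line loss r·l, the relaxed inequality is tight at the optimum here, which is the intuition behind exactness results for such relaxations on radial networks.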

The second half (chapters 4 and 5), however, is dedicated to the study of local control approaches, as these are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse- and forward-engineering approach to study the recently proposed piecewise linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and to contribute a rigorous theoretical basis for future work.
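For context on the local schemes studied here, the snippet below sketches a generic droop-style piecewise linear volt/var curve: the inverter injects vars at low local voltage, absorbs at high voltage, and does nothing inside a deadband. The breakpoints are illustrative placeholders, not the curves analysed in the dissertation.

```python
import numpy as np

V_PTS = np.array([0.95, 0.98, 1.02, 1.05])  # voltage breakpoints (p.u.)
Q_PTS = np.array([0.44, 0.00, 0.00, -0.44]) # var setpoints (p.u. of rating)

def volt_var(v):
    """Reactive power setpoint for a measured local voltage v (p.u.)."""
    return np.interp(v, V_PTS, Q_PTS)  # linear between breakpoints, flat outside

print(volt_var(1.04))  # high local voltage -> inverter absorbs vars
```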

Relevance: 90.00%

Publisher:

Abstract:

Data fusion can be defined as the process of combining data or information to estimate the state of an entity. It is a multidisciplinary field with several benefits, such as enhancing confidence, improving reliability, and reducing the ambiguity of measurements used to estimate the state of entities in engineering systems; it can also enhance the completeness of fused data that may be required for estimating the state of engineering systems. Data fusion has been applied in fields such as robotics, automation, and intelligent systems. This paper reviews recent applications of data fusion in civil engineering and presents some of the potential benefits of using data fusion in that domain.
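As a minimal illustration of the estimation benefit the review describes, the sketch below fuses two independent measurements of the same quantity by inverse-variance weighting; the sensor readings are invented for the example, and real civil engineering pipelines would typically use richer schemes (e.g. Kalman filtering).

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent estimates of one
    quantity; the fused variance never exceeds the smallest input variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.asarray(estimates, dtype=float)
    return np.sum(w * est) / np.sum(w), 1.0 / np.sum(w)

# e.g. mid-span deflection of a girder measured by two independent sensors
x, var = fuse([12.4, 11.9], [0.25, 0.16])
print(f"fused estimate {x:.2f} mm, variance {var:.3f} mm^2")
```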

Relevance: 90.00%

Publisher:

Abstract:

Level II reliability theory provides an approximate method whereby the reliability of a complex engineering structure with multiple strength and loading variables may be estimated. This technique has previously been applied to both civil and offshore structures with considerable success. The aim of the present work is to assess the applicability of the method to aircraft structures, and to this end landing gear design is considered in detail. It is found that the technique yields useful information regarding structural reliability and, further, enables the critical design parameters to be identified.
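In its simplest form, with independent normally distributed strength R and load S and limit state g = R − S, the Level II (first-order second-moment) method reduces to a reliability index β = (μ_R − μ_S)/√(σ_R² + σ_S²) and failure probability Φ(−β). The sketch below uses illustrative numbers, not the landing gear data of the paper.

```python
from scipy.stats import norm

mu_R, sigma_R = 520.0, 40.0  # strength mean/std (e.g. kN); illustrative
mu_S, sigma_S = 350.0, 55.0  # load-effect mean/std; illustrative

beta = (mu_R - mu_S) / (sigma_R**2 + sigma_S**2) ** 0.5  # reliability index
p_f = norm.cdf(-beta)                                    # failure probability
print(f"beta = {beta:.2f}, P_f = {p_f:.2e}")
```

The sensitivity of β to each mean and standard deviation is what lets the method flag the critical design parameters.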

Relevance: 90.00%

Publisher:

Abstract:

A fiber coated with a novel sol-gel (5,11,17,23-tetra-tert-butyl-25,27-dihydroxy-26,28-diglycidyloxycalix[4]arene/hydroxy-terminated silicone oil; diglycidyloxy-C[4]/OH-TSO) was prepared for headspace solid-phase microextraction (HS-SPME) combined with gas chromatography (GC) and electron capture detection (ECD), and applied to the determination of nine chlorobenzenes in soil matrices. Owing to the improved fiber preparation, which increases the percentage of calixarene in the coating, the new calixarene fiber exhibits very high extraction selectivity and sensitivity towards chlorine-substituted compounds. The parameters affecting extraction efficiency were optimized to maximize sensitivity in the chlorobenzene analysis. Interferences from soils with different characteristics were investigated, and the amount extracted was strongly influenced by the matrix; a standard addition protocol was therefore performed on the real soil samples. The linear detection ranges for the chlorobenzenes tested covered three orders of magnitude, with correlation coefficients > 0.9976 and relative standard deviations (RSD) < 8%. The detection limits were at sub-ng/g (soil) levels, about an order of magnitude lower than those given by the commercial poly(dimethylsiloxane) (PDMS) coating for most of the compounds. Recoveries ranged from 64 to 109.6% for each analyte in the real kaleyard soil matrix at different concentration levels across the linear range, confirming the reliability and feasibility of the HS-SPME/GC-ECD approach with the diglycidyloxy-C[4]/OH-TSO-coated fiber for the ultratrace analysis of chlorobenzenes in complex matrices.
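As a reminder of how the standard addition protocol used above recovers a native concentration, the sketch below fits a line through the responses of spiked aliquots and reads the original concentration off the x-intercept; all numbers are invented for illustration.

```python
import numpy as np

added = np.array([0.0, 2.0, 4.0, 8.0])       # analyte spiked (ng/g)
signal = np.array([0.84, 1.52, 2.19, 3.55])  # detector response (arb. units)

slope, intercept = np.polyfit(added, signal, 1)  # linear calibration
c0 = intercept / slope   # |x-intercept| = native concentration in the sample
print(f"native concentration ~ {c0:.2f} ng/g")
```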

Relevance: 90.00%

Publisher:

Abstract:

Using modern techniques of isotope dilution, high-resolution gas chromatography/high-resolution mass spectrometry and multiple ion detection, an effective cleanup, qualitative, and quantitative method was developed for the analysis of polychlorinated dibenzo-p-dioxins/furans (PCDD/Fs) and polychlorinated biphenyls. Based on the chromatographic relative retentions of the PCDD/Fs, software was developed for automatic peak recognition of all isomers from tetra- to octachlorinated PCDD/Fs, ensuring good reliability and accuracy of the analytical data.
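A minimal sketch of the peak-recognition idea: match each observed peak to the isomer whose expected relative retention time (RRT) lies within a small tolerance window. The congener names are real, but the RRT values and tolerance here are placeholders, not the actual library of the software described.

```python
LIBRARY = {  # isomer -> expected RRT (illustrative placeholder values)
    "2,3,7,8-TCDD": 1.000,
    "1,2,3,7,8-PeCDD": 1.182,
    "OCDD": 1.573,
}
TOL = 0.005  # relative tolerance window (placeholder)

def identify(rrt):
    """Return the library isomer whose expected RRT matches, if any."""
    for name, ref in LIBRARY.items():
        if abs(rrt - ref) <= TOL * ref:
            return name
    return None

print(identify(1.181))  # -> "1,2,3,7,8-PeCDD"
```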

Relevance: 90.00%

Publisher:

Abstract:

The chemical index of alteration has been widely used for palaeoclimate reconstruction. However, the mechanisms and environmental factors controlling the chemical index of alteration of sediments are not yet fully understood. In this study, autocorrelations of the chemical index of alteration in nine sedimentary profiles, from both land and sea and spanning different geological times, are discussed. The sediments of these profiles have different origins (dust, fluvial or ocean sediments) and come from various climatic settings and sedimentary environments. Autocorrelations of the chemical index of alteration series are ubiquitously evident in all profiles. It is suggested here that these autocorrelations may be caused by post-depositional changes such as persistent weathering and diagenesis. As a result, the chemical index of alteration may not reflect climatic conditions at the time of sediment deposition. This study strongly recommends confirming the reliability and veracity of the chemical index of alteration before it is adopted to evaluate the weathering degree of parent rocks and to reconstruct past climate. Significant autocorrelations were specifically observed in loess profiles, suggesting that the existing understanding of loess deposition in terms of climatic conditions requires re-examination, and that previous reconstructions of rapid climate changes (for example, on centennial to millennial scales) should be treated with caution.
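For concreteness, the chemical index of alteration is conventionally computed from molar oxide proportions as CIA = 100·Al2O3/(Al2O3 + CaO* + Na2O + K2O), where CaO* is the silicate-bound calcium; the sketch below pairs that formula with a lag-1 autocorrelation check of a downcore CIA series. The profile values are illustrative, not data from the nine profiles studied.

```python
import numpy as np

def cia(al2o3, cao_star, na2o, k2o):
    """Chemical index of alteration from molar oxide proportions."""
    return 100.0 * al2o3 / (al2o3 + cao_star + na2o + k2o)

def lag1_autocorr(series):
    """Lag-1 autocorrelation of a depth-ordered series."""
    s = np.asarray(series, dtype=float)
    s = s - s.mean()
    return np.sum(s[:-1] * s[1:]) / np.sum(s * s)

profile = [68.2, 69.0, 69.5, 70.4, 71.1, 70.8, 71.9, 72.5]  # CIA downcore
print(f"lag-1 autocorrelation = {lag1_autocorr(profile):.2f}")
```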

Relevance: 90.00%

Publisher:

Abstract:

The article considers the arguments that have been made in defence of social media screening, as well as issues that arise and may effectively erode the reliability and utility of such data for employers. First, the authors consider the legal frameworks and guidelines that exist in the UK and the USA, and the ethical concerns that arise when employers access and use social networking content for employment purposes. Second, several arguments in favour of the use of social networking content are made, each of which is considered from several angles, including concerns about impression management, bias and discrimination, and data protection and security. Ultimately, the current state of knowledge does not provide a definite answer as to whether information from social networks is helpful in recruitment and selection.

Relevance: 90.00%

Publisher:

Abstract:

This study investigates the psychology of immersion and the immersive response of individuals in relation to video and computer games. Initially, an exhaustive review of the literature is presented, covering research into games, player demographics, personality and identity. Play in traditional psychology is also reviewed, as are previous attempts to define and measure immersion. An online qualitative study was carried out (N=38), and the data were analysed using content analysis. A definition of immersion emerged, along with a classification of two separate types of immersion: vicarious immersion and visceral immersion. A survey study (N=217) verified the discrete nature of these categories and rejected the null hypothesis that there was no difference between individuals' interpretations of vicarious and visceral immersion. The primary aim of this research was to create a quantitative instrument that measures the immersive response as experienced by the player in a single game session. The IMX Questionnaire was developed using data from the initial qualitative study and the quantitative survey. Exploratory factor analysis was carried out on data from 300 participants for IMX Version 1, and confirmatory factor analysis was conducted on data from 380 participants for IMX Version 2. IMX Version 3 was developed from the results of these analyses and was found to have high internal consistency reliability and validity.
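As an illustration of the factor-analytic step, the sketch below fits a two-factor model with varimax rotation to synthetic questionnaire data using scikit-learn; it is a stand-in for, and far simpler than, the EFA/CFA pipeline applied to the IMX data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
F = rng.normal(size=(300, 2))  # two latent "immersion" factors
loadings = np.array([[0.8, 0.0], [0.7, 0.0], [0.75, 0.0], [0.8, 0.0],
                     [0.0, 0.8], [0.0, 0.7], [0.0, 0.75], [0.0, 0.8]])
X = F @ loadings.T + 0.4 * rng.normal(size=(300, 8))  # eight items

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))  # recovered item loadings
```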

Relevance: 90.00%

Publisher:

Abstract:

The measurement of users' attitudes towards and confidence with using the Internet is an important yet poorly researched topic. Previous research has encountered issues that serve to obfuscate rather than clarify, including a lack of distinction between the terms 'attitude' and 'self-efficacy', the absence of a theoretical framework for measuring each concept, and failure to follow well-established techniques for the measurement of both attitude and self-efficacy. Thus, the primary aim of this research was to develop two statistically reliable scales which independently measure attitudes towards the Internet and Internet self-efficacy. This research addressed the outlined issues by applying appropriate theoretical frameworks to each of the constructs under investigation. First, the well-known three-component (affect, behaviour, cognition) model of attitudes was applied to previous Internet attitude statements. The scale was distributed to four large samples of participants. Exploratory factor analyses revealed four underlying factors in the scale: Internet Affect, Internet Exhilaration, Social Benefit of the Internet and Internet Detriment. The final scale contains 21 items, demonstrates excellent reliability and achieved excellent model fit in the confirmatory factor analysis. Second, Bandura's (1997) model of self-efficacy was followed to develop a reliable measure of Internet self-efficacy. Data collected as part of this research suggest that there are ten main activities which individuals carry out on the Internet. Preliminary analyses suggested that self-efficacy is confounded with previous experience; thus, individuals were invited to indicate how frequently they performed the listed Internet tasks in addition to rating their feelings of self-efficacy for each task. The scale was distributed to a sample of 841 participants. Results from the analyses suggest that the more frequently an individual performs an activity on the Internet, the higher their self-efficacy score for that activity. This suggests that frequency of use ought to be taken into account in order to obtain a 'true' self-efficacy score for the individual. Thus, a formula was devised to incorporate participants' previous experience of Internet tasks into their Internet self-efficacy scores, and this formula was then used to obtain an overall Internet self-efficacy score for each participant. Following the development of both scales, gender and age differences in Internet attitude and Internet self-efficacy scores were explored. The analyses indicated no gender differences in either measure. However, age group differences were identified for both attitudes and self-efficacy: individuals aged 25-34 years achieved the highest scores on both measures, and scores tended to decrease with age, with older participants scoring lower on both measures than younger participants. It was also found that the more exposure individuals had to the Internet, the higher their Internet attitude and Internet self-efficacy scores. Examination of the relationship between attitude and self-efficacy found a significant positive correlation between the two measures, suggesting that the constructs are related. Implications of these findings and directions for future research are outlined in detail in the Discussion section of this thesis.
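The thesis's weighting formula is not reproduced in this abstract. Purely as a hypothetical illustration of the idea, the sketch below discounts self-efficacy ratings for rarely performed tasks using a frequency-weighted mean; it should not be read as the formula actually devised in the thesis.

```python
import numpy as np

# Hypothetical example, not the thesis's formula: weight each task's
# self-efficacy rating by how often the respondent performs it.
efficacy = np.array([9, 8, 7, 3, 2], dtype=float)  # ratings (1-10 scale)
freq = np.array([5, 5, 4, 1, 1], dtype=float)      # frequency (1-5 scale)

adjusted = np.sum(efficacy * freq) / np.sum(freq)  # frequency-weighted mean
print(f"raw mean = {efficacy.mean():.2f}, adjusted = {adjusted:.2f}")
```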

Relevance: 90.00%

Publisher:

Abstract:

Background: Spirituality is fundamental to all human beings, existing within a person and developing until death. This research sought to operationalise spirituality in a sample of individuals with chronic illness. A review of the conceptual literature identified three dimensions of spirituality: connectedness, transcendence, and meaning in life. A review of the empirical literature identified one instrument that measures the three dimensions together; however, recent appraisals of this instrument highlighted issues with item formulation and limited evidence of reliability and validity. Aim: The aim of this research was to develop a theoretically grounded instrument to measure spirituality, the Spirituality Instrument-27 (SpI-27). A secondary aim was to psychometrically evaluate this instrument in a sample of individuals with chronic illness (n=249). Methods: A two-phase design was adopted. Phase one consisted of the development of the SpI-27, with items generated from a concept analysis, a literature review, and an instrument appraisal. The second phase established the psychometric properties of the instrument and included a qualitative descriptive design to establish content validity, a pilot study to evaluate the mode of administration, and a descriptive correlational design to assess the instrument's reliability and validity. Data were analysed using SPSS (Version 18). Results: Exploratory factor analysis yielded a final five-factor solution with 27 items. The five factors were labelled: Connectedness with Others, Self-Transcendence, Self-Cognisance, Conservationism, and Connectedness with a Higher Power. Cronbach's alpha coefficients ranged from 0.823 to 0.911 for the five factors and reached 0.904 for the overall scale, indicating high internal consistency. Paired-sample t-tests, intra-class correlations, and weighted kappa values supported the temporal stability of the instrument over two weeks. A significant positive correlation was found between the SpI-27 and the Spirituality Index of Well-Being, providing evidence of convergent validity. Conclusion: This research addresses the call for a theoretically grounded instrument to measure spirituality.
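For reference, the internal consistency statistic reported above can be computed directly from item-level data; the sketch below implements the standard Cronbach's alpha formula on simulated scores (the data are synthetic, not the study's).

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total);
    items is an (n_respondents, k_items) array of scores."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                # one underlying trait
data = latent + 0.6 * rng.normal(size=(200, 5))   # five correlated items
print(f"alpha = {cronbach_alpha(data):.2f}")
```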

Relevance: 90.00%

Publisher:

Abstract:

Shame has been shown to predict sexual HIV transmission risk behavior, medication non-adherence, symptomatic HIV or AIDS, and symptoms of depression and PTSD. However, there remains a dearth of tools to measure the specific constructs of HIV-related and sexual abuse-related shame. To address this gap, we present a 31-item measure that assesses HIV- and sexual abuse-related shame and the impact of shame on HIV-related health behaviors. A diverse sample of 271 HIV-positive men and women who were sexually abused as children completed the HIV and Abuse Related Shame Inventory (HARSI), among other measures. An exploratory factor analysis supported the retention of three factors, explaining 56.7% of the sample variance. These internally consistent factors showed good test-retest reliability and sound convergent and divergent validity against eight well-established HIV-specific and general psychosocial criterion measures. Unlike stigma or discrimination, shame is potentially alterable through individually focused interventions, making its measurement clinically meaningful.

Relevance: 90.00%

Publisher:

Abstract:

Abstract: New product design challenges related to customer needs, product usage and operating environments face companies when they expand their product offerings to new markets. Some of the main challenges are the lack of quantifiable information, product experience and field data. Designing reliable products under such challenges requires flexible reliability assessment processes that can capture the variables and parameters affecting overall product reliability and allow different design scenarios to be assessed. These challenges also suggest that a mechanistic (Physics of Failure, PoF) reliability approach is a suitable framework for reliability assessment, since mechanistic reliability recognizes the primary factors affecting design reliability. This research views the designed entity as a "system of components required to deliver specific operations" and addresses the above challenges by, first, developing a design synthesis that allows descriptive operations/system-component relationships to be realized; second, developing mathematical damage models that evaluate each component's time-to-failure (TTF) distribution given (1) the descriptive design model, (2) customer usage knowledge and (3) design material properties; and last, developing a procedure that integrates the components' damage models to assess the mechanical system's reliability over time. Analytical and numerical simulation models were developed to capture the relationships between operations and components, the mathematical damage models, and the assessment of system reliability. The process was able to influence the design form during the conceptual design phase by providing stress targets that meet each component's reliability target, and to numerically assess the reliability of a system from the components' mechanistic TTF distributions while informing component design during the embodiment phase. The process was used to assess the reliability of an internal combustion engine manifold during the design phase; the results were compared with field reliability data and found to be conservative. The research focused on mechanical systems affected by independent mechanical failure mechanisms that are influenced by the design process; the influence of assembly and manufacturing stresses and defects is outside its scope.
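A minimal sketch of the final integration step, under the abstract's assumption of independent failure mechanisms: treat the system as a series arrangement and multiply the component survival functions. The Weibull TTF parameters below are assumed for illustration rather than derived from damage models.

```python
import numpy as np
from scipy.stats import weibull_min

components = [   # (Weibull shape k, scale lambda in hours); illustrative
    (2.1, 12_000.0),   # e.g. a fatigue-dominated component
    (1.4, 30_000.0),   # e.g. a wear-dominated component
    (3.0, 18_000.0),
]

t = np.linspace(0.0, 10_000.0, 5)   # evaluation times (hours)
R_sys = np.ones_like(t)
for k, lam in components:
    R_sys *= weibull_min.sf(t, k, scale=lam)  # series system: product rule

for ti, ri in zip(t, R_sys):
    print(f"t = {ti:7.0f} h  R_sys = {ri:.4f}")
```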

Relevance: 90.00%

Publisher:

Abstract:

The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluation and prediction is still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex, microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. In the present contribution, the reliability and accuracy of recent nuclear theories are discussed for most of the relevant quantities needed to estimate reaction cross sections and beta-decay rates, namely nuclear masses, nuclear level densities, gamma-ray strength, fission properties and beta-strength functions. It is shown that mean-field models can nowadays be tuned to the same level of accuracy as phenomenological models, renormalized on experimental data if needed, and can therefore replace the phenomenological inputs in the prediction of nuclear data. While fundamental nuclear physicists keep improving state-of-the-art models, e.g. within the shell model or ab initio approaches, nuclear applications could use their most recent results as quantitative constraints or guides to improve predictions in energy or mass domains that will remain experimentally inaccessible.

Relevance: 90.00%

Publisher:

Abstract:

Products manufactured by the electronics sector are having a major impact in telecommunications, transportation, space applications, biomedical applications, consumer products, intelligent hand-held devices and, of course, the computer. Demands from end-users for greater product functionality, the adoption of environmentally friendly materials, and further miniaturization continually pose challenges to electronics companies. In the context of electronic product design and manufacture, virtual prototyping software tools are allowing companies to dramatically reduce the number of physical prototypes and design iterations required in product development, and hence to reduce costs and time to market. This paper details the trends in these technologies and provides an example of their use for flip-chip assembly technology.