898 results for LARGE SYSTEMS


Relevance:

40.00%

Publisher:

Abstract:

This study proceeds from a central interest in the importance of systematically evaluating operational large-scale integrated information systems (IS) in organisations. The study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2009). The track espouses programmatic research founded on the principles of incrementalism, tenacity, holism and generalisability, pursued through replication and extension research strategies. Track efforts have yielded the bicameral IS-Impact measurement model: the ‘impact’ half includes the Organisational-Impact and Individual-Impact dimensions, and the ‘quality’ half includes the System-Quality and Information-Quality dimensions. Akin to Gregor’s (2006) analytic theory, the IS-Impact model is conceptualised as a formative, multidimensional index and is defined as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (Gable et al., 2008, p. 381). The study adopts the IS-Impact model (Gable et al., 2008) as its core theory base. Prior work within the IS-Impact track has been consciously constrained to financial IS for their homogeneity. This study adopts a context-extension strategy (Berthon et al., 2002) with the aim "to further validate and extend the IS-Impact measurement model in a new context - i.e. a different IS - Human Resources (HR)". The overarching research question is: "How can the impacts of large-scale integrated HR applications be effectively and efficiently benchmarked?" This managerial question (Cooper & Emory, 1995) decomposes into two more specific research questions in the new HR context. (RQ1): "Is the IS-Impact model complete?"
(RQ2): "Is the IS-Impact model valid as a 1st-order formative, 2nd-order formative multidimensional construct?" The study adhered to the two-phase approach of Gable et al. (2008) to hypothesise and validate a measurement model. The initial ‘exploratory phase’ employed a zero-base qualitative approach to re-instantiating the IS-Impact model in the HR context. The subsequent ‘confirmatory phase’ sought to validate the resultant hypothesised measurement model against newly gathered quantitative data. The unit of analysis for the study is the application ‘ALESCO’, an integrated large-scale HR application implemented at Queensland University of Technology (QUT), a large Australian university (with approximately 40,000 students and 5,000 staff). Target respondents of both study phases were ALESCO key-user-groups: strategic users, management users, operational users and technical users, who directly use ALESCO or its outputs. An open-ended, qualitative survey was employed in the exploratory phase, with the objective of exploring the completeness and applicability of the IS-Impact model’s dimensions and measures in the new context, and of conceptualising any resultant model changes to be operationalised in the confirmatory phase. Responses from 134 ALESCO users to the main survey question, "What do you consider have been the impacts of the ALESCO (HR) system in your division/department since its implementation?", were decomposed into 425 ‘impact citations’. Citation mapping using a deductive (top-down) content analysis approach instantiated all dimensions and measures of the IS-Impact model, evidencing its content validity in the new context. Seeking to probe additional (perhaps negative) impacts, the survey included the further open question "In your opinion, what can be done better to improve the ALESCO (HR) system?"
Responses to this question decomposed into a further 107 citations which in the main did not map to IS-Impact, but rather coalesced around the concept of IS-Support. Drawing deductively from relevant literature, and working inductively from the unmapped citations, the new ‘IS-Support’ construct, comprising the four formative dimensions (i) training, (ii) documentation, (iii) assistance, and (iv) authorisation (each having reflective measures), was defined as: "a measure at a point in time, of the support, the [HR] information system key-user groups receive to increase their capabilities in utilising the system." Thus, a further goal of the study became validation of the IS-Support construct, suggesting the research question (RQ3): "Is IS-Support valid as a 1st-order reflective, 2nd-order formative multidimensional construct?" With the aim of validating IS-Impact within its nomological net (identification through structural relations), as in prior work, Satisfaction was hypothesised as its immediate consequence. The IS-Support construct, having derived from a question intended to probe IS-Impacts, was likewise hypothesised as an antecedent of Satisfaction, suggesting the research question (RQ4): "What is the relative contribution of IS-Impact and IS-Support to Satisfaction?" To test the above research questions, IS-Impact, IS-Support and Satisfaction were operationalised in a quantitative survey instrument. Partial least squares (PLS) structural equation modelling employing 221 valid responses largely evidenced the validity of the commencing IS-Impact model in the HR context. IS-Support too was validated as operationalised (including 11 reflective measures of its 4 formative dimensions). IS-Support alone explained 36% of Satisfaction; IS-Impact alone explained 70%; in combination the two explained 71%, with virtually all influence of IS-Support subsumed by IS-Impact.
Key study contributions to research include: (1) validation of IS-Impact in the HR context, (2) validation of a newly conceptualised IS-Support construct as an important antecedent of Satisfaction, and (3) demonstration of the redundancy of IS-Support when IS-Impact is gauged. The study also makes valuable contributions to practice, the research track and the sponsoring organisation.
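The reported pattern (IS-Support alone explaining 36% of Satisfaction, IS-Impact alone 70%, and both together only 71%) is an instance of one predictor's explanatory power being subsumed by a correlated predictor. A minimal sketch with entirely synthetic data (not the study's data; all coefficients below are made-up assumptions) illustrates the effect:

```python
# Toy illustration of variance-explained subsumption: when "support" is highly
# correlated with "impact", adding it to a model that already contains
# "impact" barely raises R^2. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 221  # same sample size as the study, but the data here are simulated

impact = rng.normal(size=n)
support = 0.7 * impact + 0.3 * rng.normal(size=n)      # correlated with impact
satisfaction = 0.9 * impact + 0.1 * rng.normal(size=n)  # driven mainly by impact

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_support = r_squared(support[:, None], satisfaction)
r2_impact = r_squared(impact[:, None], satisfaction)
r2_both = r_squared(np.column_stack([impact, support]), satisfaction)
# r2_both barely exceeds r2_impact: support adds little once impact is present
```

This is plain OLS rather than the study's PLS structural equation modelling, but the subsumption logic is the same.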

Relevance:

40.00%

Publisher:

Abstract:

This paper provides a new general approach for defining coherent generators in power systems based on coherency in low-frequency inter-area modes. Instead of a single fault, the disturbance is considered to be distributed throughout the network by applying random load changes, a random-walk representation of real loads, and coherent generators are obtained by spectral analysis of the generators' velocity variations. To find the coherent areas and their borders in interconnected networks, non-generating buses are assigned to each group of coherent generators using similar coherency-detection techniques. The method is evaluated on two test systems, and coherent generators and areas are obtained for different operating points, yielding a grouping that remains valid across a range of realistic operating points of the system.
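The core grouping idea, identifying generators that swing together at the dominant inter-area mode, can be sketched as follows. This is a simplified illustration on synthetic speed signals (the sampling rate, mode frequency, and four-generator setup are assumptions, not the paper's test systems):

```python
# Mode-based coherency grouping sketch: generators are grouped by the phase of
# their speed-deviation spectrum at the dominant low-frequency (inter-area)
# mode. Signals are synthetic: two generators swing against the other two.
import numpy as np

rng = np.random.default_rng(1)
fs = 50.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)
f_mode = 0.6                     # assumed inter-area mode frequency, Hz

# Four generators: 0 and 1 swing together, 2 and 3 swing against them.
signs = np.array([+1, +1, -1, -1])
speeds = np.array([s * np.sin(2 * np.pi * f_mode * t)
                   + 0.05 * rng.normal(size=t.size) for s in signs])

spectra = np.fft.rfft(speeds, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmax(np.abs(spectra[0]))        # bin of the dominant mode

# Coherent generators share the phase of the dominant-mode component.
phase = np.angle(spectra[:, k])
groups = (np.cos(phase - phase[0]) > 0).astype(int)  # 1 = coherent with gen 0
```

Assigning non-generating buses to these groups would apply the same phase comparison to bus-voltage oscillation signals.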

Relevance:

40.00%

Publisher:

Abstract:

A minimax filter is derived to estimate the state of a system, using observations corrupted by colored noise, when large uncertainties in the plant dynamics and process noise are present.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a method of designing a minimax filter in the presence of large plant uncertainties and constraints on the mean squared values of the estimates. The minimax filtering problem is reformulated in the framework of a deterministic optimal control problem, and the method of solution invokes the matrix Minimum Principle. The constrained linear filter and its relation to singular control problems are illustrated. For the class of problems considered here, it is shown that the filter can be constrained separately after carrying out the minimaximization. Numerical examples are presented to illustrate the results.

Relevance:

40.00%

Publisher:

Abstract:

This paper deals with low maximum-likelihood (ML)-decoding complexity, full-rate and full-diversity space-time block codes (STBCs), which also offer large coding gain, for the 2 transmit antenna, 2 receive antenna (2 x 2) and the 4 transmit antenna, 2 receive antenna (4 x 2) MIMO systems. Presently, the best known STBC for the 2 x 2 system is the Golden code and that for the 4 x 2 system is the DjABBA code. Following the approach of Biglieri, Hong, and Viterbo, a new STBC is presented in this paper for the 2 x 2 system. This code matches the Golden code in performance and ML-decoding complexity for square QAM constellations, while it has lower ML-decoding complexity with the same performance for non-rectangular QAM constellations. This code is also shown to be information-lossless and diversity-multiplexing gain (DMG) tradeoff optimal. The design procedure is then extended to the 4 x 2 system, and a code which outperforms the DjABBA code for QAM constellations with lower ML-decoding complexity is presented. So far, the Golden code has been reported to have an ML-decoding complexity of the order of M^4 for square QAM of size M. In this paper, a scheme that reduces its ML-decoding complexity to M^2√M is presented.

Relevance:

40.00%

Publisher:

Abstract:

In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large multiple-input multiple-output (MIMO) systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme at low complexities. The fact that we could show such good results for large STBCs like 16 x 16 and 32 x 32 STBCs from Cyclic Division Algebras (CDA) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads meant for pilot-based training for channel estimation and turbo coding) establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at several tens of bps/Hz spectral efficiencies can become practical, enabling interesting high data rate wireless applications.
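The likelihood ascent search principle behind such detectors can be sketched in a stripped-down form: start from a matched-filter solution and greedily flip whichever symbol most reduces the ML cost until no single flip helps. This is a one-symbol-update BPSK sketch under a real-valued channel model, not the paper's multistage M-LAS for STBC matrices:

```python
# One-symbol-update likelihood ascent search (LAS) sketch for a real-valued
# BPSK MIMO model y = H x + n. Greedy descent on ||y - H x||^2 over symbol
# flips; terminates at a vector no single flip can improve (a local ML point).
import numpy as np

def las_detect(H, y, x0):
    """Repeatedly flip the single symbol that most reduces ||y - Hx||^2."""
    x = x0.copy()
    col_norms2 = np.sum(H**2, axis=0)        # ||h_i||^2 for each column
    while True:
        r = y - H @ x
        # Cost change from flipping x[i] -> -x[i] (quadratic expansion):
        # delta_i = 4 x_i (h_i^T r) + 4 ||h_i||^2
        delta = 4 * x * (H.T @ r) + 4 * col_norms2
        i = np.argmin(delta)
        if delta[i] >= 0:                    # no flip lowers the cost
            return x
        x[i] = -x[i]

rng = np.random.default_rng(2)
n = 16
H = rng.normal(size=(n, n)) / np.sqrt(n)
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.01 * rng.normal(size=n)

x0 = np.sign(H.T @ y)                        # matched-filter initial vector
x0[x0 == 0] = 1.0
x_hat = las_detect(H, y, x0)
```

Because the cost strictly decreases with every flip over a finite symbol space, the loop is guaranteed to terminate.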

Relevance:

40.00%

Publisher:

Abstract:

In this paper, we propose a training-based channel estimation scheme for large non-orthogonal space-time block coded (STBC) MIMO systems. The proposed scheme employs a block transmission strategy where an Nt x Nt pilot matrix is sent (for training purposes) followed by several Nt x Nt square data STBC matrices, where Nt is the number of transmit antennas. At the receiver, we iterate between channel estimation (using an MMSE estimator) and detection (using a low-complexity likelihood ascent search (LAS) detector) until convergence or for a fixed number of iterations. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed scheme at low complexities. The fact that we could show such good results for large STBCs (e.g., a 16 x 16 STBC from cyclic division algebras) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads meant for pilot-based channel estimation and turbo coding) establishes the effectiveness of the proposed scheme.
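The iterate-between-estimation-and-detection loop can be sketched as follows, assuming a real-valued BPSK model, an identity pilot block, and a crude pseudo-inverse sign detector standing in for the LAS detector; the dimensions and noise level are illustrative choices, not the paper's:

```python
# Sketch of pilot-based MMSE channel estimation iterated with detection.
# Real-valued BPSK stand-in; the paper uses complex STBC blocks and LAS.
import numpy as np

rng = np.random.default_rng(3)
nt, T = 4, 400                                  # tx antennas, data symbols
H = rng.normal(size=(nt, nt))                   # unknown channel
sigma = 0.1

Xp = np.eye(nt)                                 # orthogonal pilot block
Xd = rng.choice([-1.0, 1.0], size=(nt, T))      # data blocks
Yp = H @ Xp + sigma * rng.normal(size=(nt, nt))
Yd = H @ Xd + sigma * rng.normal(size=(nt, T))

# MMSE-style channel estimate from the pilot block alone.
H_hat = Yp @ Xp.T @ np.linalg.inv(Xp @ Xp.T + sigma**2 * np.eye(nt))

for _ in range(3):                              # fixed number of iterations
    Xd_hat = np.sign(np.linalg.pinv(H_hat) @ Yd)  # detector stand-in
    # Re-estimate the channel treating detected data as extra training.
    X = np.hstack([Xp, Xd_hat])
    Y = np.hstack([Yp, Yd])
    H_hat = Y @ X.T @ np.linalg.inv(X @ X.T + sigma**2 * np.eye(nt))

ber = np.mean(Xd_hat != Xd)
```

The re-estimation step is what lets the receiver amortise a single small pilot block over many data blocks, which is where the scheme's low training overhead comes from.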

Relevance:

40.00%

Publisher:

Abstract:

We present a low-complexity algorithm based on reactive tabu search (RTS) for near maximum-likelihood (ML) detection in large-MIMO systems. The conventional RTS algorithm achieves near-ML performance for 4-QAM in large-MIMO systems, but its performance for higher-order QAM is far from ML performance. Here, we propose a random-restart RTS (R3TS) algorithm which achieves significantly better bit error rate (BER) performance than the conventional RTS algorithm for higher-order QAM. The key idea is to run multiple tabu searches, each starting from a random initial vector, and to choose the best among the resulting solution vectors. A criterion to limit the number of searches is also proposed. Computer simulations show that the R3TS algorithm achieves almost ML performance in a 16 x 16 V-BLAST MIMO system with 16-QAM and 64-QAM at significantly lower complexity than the sphere decoder. Also, in a 32 x 32 V-BLAST MIMO system, R3TS performs within 1.6 dB of the ML lower bound for 16-QAM (128 bps/Hz) and within 2.4 dB for 64-QAM (192 bps/Hz) at a BER of 10^-3.
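The random-restart idea can be sketched with a toy single-flip tabu search over BPSK vectors; the tabu tenure, restart count, and iteration budget below are illustrative choices, and the paper's reactive tenure adaptation and higher-order QAM neighbourhoods are omitted:

```python
# Random-restart tabu search sketch for the MIMO ML cost ||y - H x||^2 over
# BPSK vectors: several tabu searches from random starts, keep the best.
import numpy as np

def cost(H, y, x):
    r = y - H @ x
    return r @ r

def tabu_search(H, y, x0, iters=50, tenure=3):
    x, best, best_cost = x0.copy(), x0.copy(), cost(H, y, x0)
    tabu = {}                       # index -> iteration until which it is tabu
    for it in range(iters):
        # Evaluate all single-symbol flips; a tabu flip is still allowed if it
        # beats the best cost found so far (aspiration criterion).
        cand = []
        for i in range(len(x)):
            x[i] = -x[i]
            c = cost(H, y, x)
            x[i] = -x[i]
            if tabu.get(i, -1) < it or c < best_cost:
                cand.append((c, i))
        if not cand:
            break
        c, i = min(cand)
        x[i] = -x[i]                # take best admissible move (may be uphill)
        tabu[i] = it + tenure       # forbid re-flipping i for `tenure` steps
        if c < best_cost:
            best, best_cost = x.copy(), c
    return best, best_cost

def r3ts(H, y, restarts=5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    runs = [tabu_search(H, y, rng.choice([-1.0, 1.0], size=H.shape[1]))
            for _ in range(restarts)]
    return min(runs, key=lambda r: r[1])[0]

rng = np.random.default_rng(4)
n = 8
H = rng.normal(size=(n, n)) / np.sqrt(n)
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true                      # noise-free for the illustration
x_hat = r3ts(H, y, rng=rng)
```

Allowing uphill moves while tracking the best-so-far vector is what lets each search escape the local minima that trap a purely greedy descent.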

Relevance:

40.00%

Publisher:

Abstract:

Large MIMO systems with tens of antennas in each communication terminal, using full-rate non-orthogonal space-time block codes (STBCs) from Cyclic Division Algebras (CDA), can achieve the benefits of both transmit diversity and high spectral efficiency. Maximum-likelihood (ML) or near-ML decoding of these large-sized STBCs at low complexity, however, has been a challenge. In this paper, we establish that near-ML decoding of these large STBCs is possible at practically affordable low complexities. We show that the likelihood ascent search (LAS) detector, reported earlier by us for V-BLAST, is able to achieve near-ML uncoded BER performance in decoding a 32 x 32 STBC from CDA, which employs 32 transmit antennas and sends 32^2 = 1024 complex data symbols in 32 time slots in one STBC matrix (i.e., 32 data symbols sent per channel use). In terms of coded BER, with a 16 x 16 STBC, rate-3/4 turbo code and 4-QAM (i.e., 24 bps/Hz), the LAS detector performs within about 4 dB of the theoretical MIMO capacity. Our results further show that, with LAS detection, information-lossless (ILL) STBCs perform almost as well as full-diversity ILL (FD-ILL) STBCs. Such low-complexity detectors can potentially enable implementation of the high-spectral-efficiency large MIMO systems that could be considered in wireless standards.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a method for minimizing the sum of the squares of voltage deviations by a least-squares minimization technique, thus improving the voltage profile in a given system by adjusting control variables such as the tap positions of transformers, reactive power injections of VAR sources and generator excitations. The control variables and dependent variables are related by a matrix J whose elements are computed as a sensitivity matrix. Linear programming is used to calculate voltage increments that minimize transmission losses. The active and reactive power optimization sub-problems are solved separately, taking advantage of the loose coupling between the two problems. The proposed algorithm is applied to the IEEE 14- and 30-bus systems and numerical results are presented. The method is computationally fast and promises to be suitable for implementation in real-time dispatch centres.
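The least-squares step described above can be sketched as follows. The sensitivity matrix J and the voltage deviations below are made-up numbers for a 3-load-bus, 2-control example, not data from the IEEE test systems:

```python
# Least-squares voltage correction sketch: given a sensitivity matrix J
# relating control adjustments du to load-bus voltage changes (dv = J du),
# choose du to best cancel the observed voltage deviations.
import numpy as np

# 3 load-bus voltage deviations (p.u.), 2 controls (e.g., a tap, a VAR source)
J = np.array([[0.8, 0.2],
              [0.5, 0.6],
              [0.1, 0.9]])
dv_dev = np.array([0.04, -0.02, 0.03])      # deviations from nominal voltage

# Least-squares control move that best cancels the deviations: J du ≈ -dv_dev
du, *_ = np.linalg.lstsq(J, -dv_dev, rcond=None)

# Respect control limits by clipping (a crude stand-in for constrained LS).
du = np.clip(du, -0.05, 0.05)

residual = dv_dev + J @ du                  # deviations remaining after move
```

In the full method this solve would be embedded in an iteration, with J recomputed from the power-flow Jacobian as the operating point moves.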

Relevance:

40.00%

Publisher:

Abstract:

Present-day power systems are growing in size and complexity of operation, with interconnections to neighboring systems, the introduction of large generating units, EHV 400/765 kV AC transmission systems, HVDC systems and more sophisticated control devices such as FACTS. Planning and operational studies require suitable modeling of all components in the power system as increasing numbers of HVDC systems and FACTS devices of different types are incorporated. This paper presents reactive power optimization with three objectives: minimizing the sum of the squares of the voltage deviations (Ve) of the load buses, minimizing the sum of the squares of the voltage stability L-indices of the load buses (ΣL2), and minimizing the system real power loss (Ploss). The proposed methods have been tested on a typical sample system. Results for the Indian 96-bus equivalent system, including an HVDC terminal and a UPFC, under normal and contingency conditions are presented.
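The voltage-stability L-index used as an objective above can be sketched in its standard Kessel–Glavitsch form, L_j = |1 - Σ_i F_ji V_i / V_j| with F = -inv(Y_LL) Y_LG. The 3-bus admittance data and voltage solution below are made up for illustration, not the Indian 96-bus system:

```python
# L-index sketch: partition the bus admittance matrix into generator (G) and
# load (L) blocks, form F = -inv(Y_LL) @ Y_LG, and evaluate one L value per
# load bus. Values near 0 indicate a stable voltage; near 1, collapse.
import numpy as np

# Bus 0: generator; buses 1, 2: loads. Line admittances in p.u. (assumed).
y01, y02, y12 = 2 - 6j, 1 - 3j, 1.5 - 4.5j
Y = np.array([[y01 + y02, -y01, -y02],
              [-y01, y01 + y12, -y12],
              [-y02, -y12, y02 + y12]])

gen, load = [0], [1, 2]
Y_LL = Y[np.ix_(load, load)]
Y_LG = Y[np.ix_(load, gen)]
F = -np.linalg.solve(Y_LL, Y_LG)

# Assumed complex voltage solution from a converged power flow.
V = np.array([1.0 + 0j, 0.97 - 0.02j, 0.95 - 0.03j])
L = np.abs(1 - (F @ V[gen]) / V[load])    # one L-index per load bus
```

An optimization such as the one described above would then adjust the reactive-power controls to push the largest of these L values (or their sum of squares) down.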