970 results for Structural Complexity
Abstract:
The literature abounds with descriptions of failures in high-profile projects, and a range of initiatives has been generated to enhance project management practice (e.g., Morris, 2006). Estimating from our own research, there are scores of other project failures that go unrecorded. Many of these failures can be explained using existing project management theory: poor risk management, inaccurate estimating, cultures of optimism dominating decision making, stakeholder mismanagement, inadequate timeframes, and so on. Nevertheless, in spite of extensive discussion and analysis of failures and attention to the presumed causes of failure, projects continue to fail in unexpected ways. In the 1990s, three U.S. state departments of motor vehicles (DMV) cancelled major projects due to time and cost overruns and inability to meet project goals (IT-Cortex, 2010). The California DMV failed to revitalize their drivers' license and registration application process after spending $45 million. The Oregon DMV cancelled their five-year, $50 million project to automate their manual, paper-based operation after three years, when the estimates grew to $123 million, its duration stretched to eight years or more, and the prototype was a complete failure. In 1997, the Washington state DMV cancelled their license application mitigation project because it would have been too big and obsolete by the time it was estimated to be finished. There are countless similar examples of projects that have been abandoned or that have not delivered the requirements.
Abstract:
Column elements at a given level in a building are subjected to loads from different tributary areas. Consequently, differential axial deformation occurs among these elements. The adverse effects of differential axial deformation increase with building height and geometric complexity. Vibrating wire, electronic strain, and external mechanical strain gauges are used to measure the axial deformations so that adequate provisions can be made to mitigate these adverse effects. These gauges must be deployed in or on the elements during construction in order to acquire the necessary measurements continuously, which makes their use inconvenient and uneconomical. This highlights the need for a method to quantify axial deformation using ambient measurements. This paper proposes a comprehensive vibration-based method. The unique capabilities of the proposed method are demonstrated through an illustrative example.
Abstract:
Differential axial deformation between the column elements and the shear wall elements of cores increases with building height and geometric complexity. Its adverse effects reduce building performance and lifetime serviceability. Quantifying axial deformations using measurements from vibrating wire, external mechanical, and electronic strain gauges, so that adequate provisions can be made to mitigate these adverse effects, is a well-established approach. However, these gauges must be installed in or on the elements to acquire continuous measurements, which makes their use uneconomical and inconvenient. This motivates the development of an alternative method to quantify axial deformations. This paper proposes an innovative method based on modal parameters to quantify the axial deformations of shear wall elements in the cores of buildings. The capabilities of the method are presented through an illustrative example.
Abstract:
The molecular structure of the mineral archerite ((K,NH4)H2PO4) has been determined and compared with that of biphosphammite ((NH4,K)H2PO4). Raman spectroscopy and infrared spectroscopy have been used to characterise these 'cave' minerals. Both minerals originated from the Murra-el-elevyn Cave, Eucla, Western Australia. The minerals are formed by the reaction of the chemicals in bat guano with calcite substrates. Raman and infrared bands are assigned to H2PO4⁻, OH, and NH stretching vibrations. The Raman band at 981 cm⁻¹ is assigned to the HOP stretching vibration. Bands in the 1200 to 1800 cm⁻¹ region are associated with NH4⁺ bending modes. The molecular structures of the two minerals appear to be very similar, and it is therefore concluded that the two minerals are identical.
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
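The complexity term in the abstract's bound, A³√((log n)/m), can be evaluated numerically to see its qualitative behaviour: it shrinks as the number of training patterns m grows and depends on the weight bound A rather than on the number of weights. The sketch below is only an illustration of that term (constants and the log A, log m factors the abstract ignores are omitted); the function name is our own.

```python
import math

def weight_bound_term(A, n, m):
    """Complexity term A^3 * sqrt(log(n) / m) from the stated bound,
    ignoring constants and the log A, log m factors.

    A: bound on the sum of weight magnitudes per unit
    n: input dimension
    m: number of training patterns
    """
    return A ** 3 * math.sqrt(math.log(n) / m)

# The term shrinks as the number of training patterns m grows...
assert weight_bound_term(A=2.0, n=100, m=100_000) < weight_bound_term(A=2.0, n=100, m=1_000)

# ...and is controlled by the weight bound A, not the network size:
# doubling A multiplies the term by 8, however many weights there are.
assert math.isclose(weight_bound_term(4.0, 100, 1_000),
                    8 * weight_bound_term(2.0, 100, 1_000))
```

This is consistent with the abstract's point that heuristics such as weight decay, which keep A small, directly reduce the bound even when the parameter count is large.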
Abstract:
We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose, compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example that shows that a sharp bound cannot be universally recovered from empirical data.
Abstract:
In fault detection and diagnostics, limitations imposed by the sensor network architecture are among the main challenges in evaluating a system's health status. The design of the sensor network architecture is usually not based solely on diagnostic purposes; other factors, such as controls, financial constraints, and practical limitations, are also involved. As a result, it is quite common to have one sensor (or one set of sensors) monitoring the behaviour of two or more components. This can significantly increase the complexity of diagnostic problems. In this paper a systematic approach is presented to deal with such complexities. It is shown how the problem can be formulated as a Bayesian-network-based diagnostic mechanism with latent variables. The developed approach is also applied to the problem of fault diagnosis in HVAC systems, an application area with considerable modeling and measurement constraints.
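The core difficulty the abstract describes, one sensor covering several components, can be sketched as a tiny Bayesian network in which the component health states are latent variables and the shared sensor is the only observed node. The example below is a hypothetical illustration, not the paper's model: the component names, prior fault rates, and noisy-OR-style sensor likelihood are all assumed values, and the posterior is computed by exact enumeration over the latent states.

```python
from itertools import product

# Assumed prior fault probabilities for two components monitored
# by the same sensor (hypothetical values, not from the paper).
p_fault = {"comp_a": 0.05, "comp_b": 0.10}

def p_alarm(a_faulty, b_faulty):
    """P(shared sensor raises an alarm | component states), a noisy-OR
    style likelihood with an assumed leak and per-component strengths."""
    p_no_alarm = 0.99              # leak term: rare false alarms
    if a_faulty:
        p_no_alarm *= 0.2          # a comp_a fault usually trips the sensor
    if b_faulty:
        p_no_alarm *= 0.4          # a comp_b fault trips it less reliably
    return 1.0 - p_no_alarm

def posterior_given_alarm():
    """P(comp_a faulty | alarm) and P(comp_b faulty | alarm), obtained by
    enumerating the four joint states of the latent component variables."""
    joint = {}
    for a, b in product([False, True], repeat=2):
        prior = ((p_fault["comp_a"] if a else 1 - p_fault["comp_a"]) *
                 (p_fault["comp_b"] if b else 1 - p_fault["comp_b"]))
        joint[(a, b)] = prior * p_alarm(a, b)
    z = sum(joint.values())
    post_a = sum(v for (a, _), v in joint.items() if a) / z
    post_b = sum(v for (_, b), v in joint.items() if b) / z
    return post_a, post_b

post_a, post_b = posterior_given_alarm()
# Both posteriors exceed the priors: a single shared alarm implicates
# both components at once, so the diagnosis stays ambiguous until
# further evidence disambiguates them.
assert post_a > p_fault["comp_a"] and post_b > p_fault["comp_b"]
```

The point of the sketch is the structural one made in the abstract: because the two health states are latent behind one observation, the alarm alone cannot separate them, which is what makes shared-sensor architectures harder to diagnose.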