981 results for Output-only Modal Analysis


Relevance: 40.00%

Abstract:

This paper presents two case studies that suggest, in different but complementary ways, that the critical tool of frame analysis (Entman, 2002) has a place not only in the analytical environments of critical media research and media studies classes, where it is commonly found, but also in the media-production-oriented environments of skills-based journalism training and even the newsroom. The expectations and constraints of both of these latter environments, however, necessitate forms of frame analysis that are quick and simple. While commercial pressures mean newsrooms and skills-based journalism-training environments are likely to allow only an oversimplified approach to frame analysis, we argue that even a simple understanding and analysis at the production end could help to shift framing in ways that not only improve the quality and depth of Australasian newspapers' news coverage, but also increase reader satisfaction with media output.

Relevance: 40.00%

Abstract:

Conventional bioimpedance spectrometers measure resistance and reactance over a range of frequencies and, by application of a mathematical model for an equivalent circuit (the Cole model), estimate resistance at zero and infinite frequencies. Fitting of the experimental data to the model is accomplished by iterative, nonlinear curve fitting. An alternative fitting method is described that uses only the magnitude of the measured impedances at four selected frequencies. The two methods showed excellent agreement when compared using data obtained from measurements both of equivalent circuits and of humans. These results suggest that operational equivalence to a technically complex, frequency-scanning, phase-sensitive bioimpedance spectroscopy (BIS) analyser could be achieved with a simple four-frequency, impedance-only analyser.
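
As a rough illustration of the four-frequency idea, the sketch below fits the magnitude of a Cole impedance model to |Z| measured at four selected frequencies to recover R0 and Rinf; the parameter values, starting guesses and bounds are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: estimate R0 and Rinf from impedance magnitudes at four
# frequencies, assuming a Cole model  Z(f) = Rinf + (R0 - Rinf) / (1 + (j f/fc)^alpha).
# Parameter values and the fitting strategy are illustrative, not the paper's algorithm.
import numpy as np
from scipy.optimize import least_squares

def cole_magnitude(f, r0, rinf, fc, alpha):
    """Magnitude of the Cole impedance at frequency f (Hz)."""
    z = rinf + (r0 - rinf) / (1.0 + (1j * f / fc) ** alpha)
    return np.abs(z)

def fit_four_frequency(freqs, zmag, x0=(600.0, 400.0, 50e3, 0.7)):
    """Fit (R0, Rinf, fc, alpha) to |Z| measured at a few (e.g. four) frequencies."""
    def residuals(p):
        r0, rinf, fc, alpha = p
        return cole_magnitude(freqs, r0, rinf, fc, alpha) - zmag
    sol = least_squares(residuals, x0,
                        bounds=([0, 0, 1e3, 0.3], [2e3, 2e3, 1e6, 1.0]))
    return sol.x  # estimated (R0, Rinf, fc, alpha)

# Synthetic example at four selected frequencies in the kHz-MHz range
freqs = np.array([5e3, 50e3, 100e3, 500e3])
zmag = cole_magnitude(freqs, 550.0, 380.0, 40e3, 0.72)
print(fit_four_frequency(freqs, zmag))
```

In practice the four frequencies would presumably be chosen to bracket the characteristic frequency of the tissue so that the zero- and infinite-frequency resistances are well constrained.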

Relevance: 40.00%

Abstract:

Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required. WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly; neither of them provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern associations between and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and overgeneration, minimised by rule reformulation and by restricting monosyllabic output. Rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Multiple rules applicable to an input suffix need their precedence established. The resistance of prefixations to segmentation has been addressed by identifying linking vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with morphological rules into a hybrid model, fed only with empirical data, collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than the wordnet component of the model because the lexicon provides the optimal clustering of word senses. Both the links and the analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
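
To make the character-substitution idea concrete, here is a minimal sketch of a suffix rule applied by substitution rather than by segmentation, with a lexical validity check; the rule set and toy lexicon are hypothetical, not the thesis's resources.

```python
# Illustrative sketch of a character-substitution morphological rule with a lexical
# validity requirement. Rules, lexicon and helper names are made up for illustration.
from dataclasses import dataclass

@dataclass
class SuffixRule:
    source: str   # suffix on the derived word, e.g. "ation"
    target: str   # replacement yielding the base word, e.g. "ate"

def apply_rule(word: str, rule: SuffixRule, lexicon: set[str]) -> str | None:
    """Return the proposed base word if the rule applies and the result is attested."""
    if not word.endswith(rule.source):
        return None
    candidate = word[: -len(rule.source)] + rule.target
    # Lexical validity requirement: reject candidates not found in the lexicon,
    # which limits overgeneration without simple suffix stripping.
    return candidate if candidate in lexicon else None

lexicon = {"derive", "derivation", "create", "creation", "relate", "relation"}
rules = [SuffixRule("ation", "ate"), SuffixRule("ation", "e")]
for word in ("creation", "derivation"):
    for rule in rules:
        base = apply_rule(word, rule, lexicon)
        if base:
            print(f"{word} -> {base}  (rule -{rule.source} => -{rule.target})")
```

The two competing rules for "-ation" also illustrate why multiple rules applicable to the same input suffix need their precedence established.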

Relevance: 40.00%

Abstract:

The main advantage of Data Envelopment Analysis (DEA) is that it does not require any a priori weights for inputs and outputs: each DMU is allowed to evaluate its efficiency with the input and output weights that are most favorable to it. It can be argued, however, that if DMUs are experiencing similar circumstances, then the pricing of inputs and outputs should apply uniformly across all DMUs. Using different weights for different DMUs makes their efficiencies incomparable and makes it impossible to rank them on the same basis. This is a significant drawback of DEA, for which the literature offers many solutions, including the use of a common set of weights (CSW). Moreover, conventional DEA methods require accurate measurement of both inputs and outputs, whereas crisp input and output data may not always be available in real-world applications. This paper develops a new model for the calculation of a CSW in fuzzy environments using fuzzy DEA. A numerical example is then used to show the validity and efficacy of the proposed model and to compare its results with previous models available in the literature.
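
For context, the sketch below implements the standard (crisp) input-oriented CCR multiplier model with scipy, making explicit that each DMU selects its own most favorable weights; the toy data and the small lower bound on the weights are illustrative, and the paper's fuzzy CSW model is not reproduced here.

```python
# Standard (crisp) CCR multiplier model: each DMU o maximises its own weighted output
# subject to a normalised weighted input and to no DMU exceeding efficiency 1.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o, eps=1e-6):
    """Input-oriented CCR efficiency of DMU o. X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: v (m input weights) followed by u (s output weights).
    c = np.concatenate([np.zeros(m), -Y[o]])           # maximise u . y_o
    A_ub = np.hstack([-X, Y])                          # u . y_j - v . x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[o], np.zeros(s)])[None]   # v . x_o = 1 (normalisation)
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(eps, None)] * (m + s), method="highs")
    return -res.fun, res.x[:m], res.x[m:]              # efficiency score, v, u

X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 3.0]])     # inputs of three hypothetical DMUs
Y = np.array([[1.0], [1.0], [1.0]])                    # a single unit output each
for o in range(len(X)):
    eff, v, u = ccr_efficiency(X, Y, o)
    print(f"DMU {o}: efficiency {eff:.3f} with its own weights v={v.round(4)}, u={u.round(4)}")
```

A common-set-of-weights approach would instead impose a single (v, u) pair on all DMUs, so that the resulting efficiencies are directly comparable and rankable on the same basis.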

Relevance: 40.00%

Abstract:

Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach and the possibility approach. We discuss each classification scheme and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper appears to be the only review and complete source of references on fuzzy DEA. © 2011 Elsevier B.V. All rights reserved.
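
As a small illustration of the α-level idea referred to above, the sketch below computes α-cuts of a triangular fuzzy datum; under this approach a fuzzy DEA model is typically solved as a pair of crisp (lower/upper bound) problems at each α level. The numbers are illustrative only.

```python
# Alpha-cut of a triangular fuzzy number (l, m, u): the crisp interval of values whose
# membership is at least alpha. A fuzzy DEA model in the alpha-level approach is then
# reduced to crisp interval programs, one pair per alpha level.
def alpha_cut(l: float, m: float, u: float, alpha: float) -> tuple[float, float]:
    """Return the [lower, upper] alpha-cut of the triangular fuzzy number (l, m, u)."""
    return l + alpha * (m - l), u - alpha * (u - m)

# A fuzzy observed input of roughly 3 (somewhere between 2 and 5) at a few alpha levels:
for a in (0.0, 0.5, 1.0):
    print(a, alpha_cut(2.0, 3.0, 5.0, a))
```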

Relevance: 40.00%

Abstract:

An optical in-fiber modal-interferometer-based volume strain sensor for earthquake prediction is proposed and experimentally demonstrated. The sensing element is formed by wrapping a multimode-singlemode-multimode fiber structure onto a polyurethane hollow column. Owing to the modal interference between the excited guided modes in the fiber, a strong interference pattern can be observed in the transmission spectrum. Theoretical analysis verifies that the resonant wavelength shifts with the square root of the volume strain variation caused by the deformation of the column. A sensitivity greater than 3.93 pm/με over a volume strain range of 0 to 1300 με is also demonstrated experimentally. Taking into account the response to bidirectional changes of volume strain and the sluggish character of the employed sensing material, the sensing system exhibits good repeatability and stability. © 2001-2012 IEEE.

Relevance: 40.00%

Abstract:

In the nonparametric framework of Data Envelopment Analysis, the statistical properties of its estimators have been investigated and only asymptotic results are available. For DEA estimators, results of practical use have been proved only for the case of one input and one output. However, in real-world problems the production process is usually well described by many variables. In this paper a machine learning approach to variable aggregation based on Canonical Correlation Analysis is presented. This approach is applied to efficiency estimation of all the farms on Terceira Island in the Azorean archipelago.
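
A hedged sketch of this aggregation idea is given below: CCA collapses several inputs and several outputs into one canonical input and one canonical output, after which the simple one-input/one-output ratio-based DEA estimator applies. The synthetic data and the shift used to keep the aggregates positive are illustrative choices, not the paper's procedure.

```python
# Variable aggregation via Canonical Correlation Analysis, followed by the
# one-input/one-output constant-returns-to-scale DEA ratio estimator.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(50, 3))                                  # 50 DMUs, 3 inputs
Y = 0.5 * X @ rng.uniform(0.5, 1.5, (3, 2)) + rng.normal(0, 0.5, (50, 2)) # 2 related outputs

cca = CCA(n_components=1)
x_agg, y_agg = cca.fit_transform(X, Y)        # first canonical variates, shape (50, 1)

# Shift to strictly positive values so the output/input ratio is meaningful for DEA.
x_agg = x_agg.ravel() - x_agg.min() + 1.0
y_agg = y_agg.ravel() - y_agg.min() + 1.0

ratio = y_agg / x_agg
efficiency = ratio / ratio.max()              # one-input/one-output CRS DEA estimator
print(efficiency.round(3)[:10])
```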

Relevance: 40.00%

Abstract:

AMS subject classification: 90B60, 90B50, 90A80.

Relevance: 40.00%

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
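
The following numerical sketch illustrates only the central linear-algebra step described above: Hankel matrices built from measured floor accelerations are stacked row-wise and principal components are extracted from the SVD of the stacked matrix. The synthetic two-channel record and the window length are assumptions; the minimum-curvature interpolation and force-recovery steps are not reproduced.

```python
# Simplified sketch: Hankel matrices of measured floor accelerations, stacked row-wise,
# with principal components extracted by singular value decomposition.
import numpy as np

def hankel(signal: np.ndarray, rows: int) -> np.ndarray:
    """Hankel matrix whose columns are overlapping windows of the signal."""
    cols = signal.size - rows + 1
    return np.column_stack([signal[i:i + rows] for i in range(cols)])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 2000)
# Two measured floor accelerations sharing two underlying harmonic components plus noise.
components = np.column_stack([np.sin(2 * np.pi * 1.2 * t), np.sin(2 * np.pi * 3.5 * t)])
acc = components @ np.array([[1.0, 0.4], [0.6, -0.8]]).T \
      + 0.05 * rng.standard_normal((t.size, 2))

rows = 200
stacked = np.vstack([hankel(acc[:, k], rows) for k in range(acc.shape[1])])  # row-wise assembly
U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
print("leading singular values:", s[:4].round(2))
principal_components = Vt[:2]   # dominant time components recovered from the measurements
```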

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extraction and interpolation of the shear wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model for the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from the OpenSees simulations against the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.
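
As a small illustration of how a non-uniform shear wave velocity profile can be turned into layer-wise elastic soil properties for such a model, the sketch below interpolates an assumed Vs profile and converts it to shear and Young's moduli; the profile values, density and Poisson's ratio are hypothetical, and the actual OpenSees model construction is not shown.

```python
# Hypothetical sketch: interpolate a shear wave velocity (Vs) profile with depth and
# convert it to elastic constants for plane-strain soil elements, using
# G = rho * Vs^2 and E = 2 * G * (1 + nu). All numbers are illustrative, not data
# from the Canterbury Geotechnical Database.
import numpy as np

depth_data = np.array([0.0, 5.0, 10.0, 20.0, 40.0])       # m, assumed measurement depths
vs_data = np.array([150.0, 180.0, 220.0, 300.0, 400.0])   # m/s, assumed Vs values
rho, nu = 1800.0, 0.3                                      # kg/m^3 and Poisson's ratio (assumed)

layer_mid_depths = np.arange(1.0, 40.0, 2.0)               # element mid-depths of a 2 m mesh
vs = np.interp(layer_mid_depths, depth_data, vs_data)      # interpolated Vs per layer
G = rho * vs ** 2                                          # shear modulus, Pa
E = 2.0 * G * (1.0 + nu)                                   # Young's modulus for elastic elements
for z, e in zip(layer_mid_depths[:3], E[:3]):
    print(f"depth {z:4.1f} m: E = {e / 1e6:6.1f} MPa")
```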

Relevance: 40.00%

Abstract:

Incumbent telecommunication lasers emitting at 1.5 µm are fabricated on InP substrates and consist of multiple strained quantum well layers of the ternary alloy InGaAs, with barriers of InGaAsP or InGaAlAs. These lasers have been seen to exhibit very strong temperature dependence of the threshold current. This strong temperature dependence leads to a situation where external cooling equipment is required to stabilise the optical output power of these lasers. This results in a significant increase in the energy bill associated with telecommunications, as well as a large increase in equipment budgets. If the exponential growth trend of end user bandwidth demand associated with the internet continues, these inefficient lasers could see the telecommunications industry become the dominant consumer of world energy. For this reason there is strong interest in developing new, much more efficient telecommunication lasers. One avenue being investigated is the development of quantum dot lasers on InP. The confinement experienced in these low dimensional structures leads to a strong perturbation of the density of states at the band edge, and has been predicted to result in reduced temperature dependence of the threshold current in these devices. The growth of these structures is difficult due to the large lattice mismatch between InP and InAs; however, recently quantum dots elongated in one dimension, known as quantum dashes, have been demonstrated. Chapter 4 of this thesis provides an experimental analysis of one of these quantum dash lasers emitting at 1.5 µm along with a numerical investigation of threshold dynamics present in this device. Another avenue being explored to increase the efficiency of telecommunications lasers is bandstructure engineering of GaAs-based materials to emit at 1.5 µm. The cause of the strong temperature sensitivity in InP-based quantum well structures has been shown to be CHSH Auger recombination. Calculations have shown and experiments have verified that the addition of bismuth to GaAs strongly reduces the bandgap and increases the spin orbit splitting energy of the alloy GaAs1−xBix. This leads to a bandstructure condition at x = 10 % where not only is 1.5 µm emission achieved on GaAs-based material, but also the bandstructure of the material can naturally suppress the costly CHSH Auger recombination which plagues InP-based quantum-well-based material. It has been predicted that telecommunications lasers based on this material system should operate in the absence of external cooling equipment and offer electrical and optical benefits over the incumbent lasers. Chapters 5, 6, and 7 provide a first analysis of several aspects of this material system relevant to the development of high bismuth content telecommunication lasers.

Relevance: 40.00%

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time in identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case for software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging, which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences that are shared between a number of failing test cases for the same reason resemble the faulty execution path, and hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach, and integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
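
As a simplified illustration of the common-subsequence step, the sketch below folds a pairwise longest-common-subsequence over the sequence covers of several failing test cases to obtain a candidate faulty path; this greedy reduction and the made-up block names stand in for the thesis's optimized algorithm and real coverage data.

```python
# Candidate faulty execution path as a common subsequence of failing-test sequence covers.
# Folding a pairwise LCS over the set is a simple heuristic stand-in for the optimized
# algorithm described above; the covers below are invented for illustration.
from functools import reduce

def lcs(a: list[str], b: list[str]) -> list[str]:
    """Classic dynamic-programming longest common subsequence of two sequences."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            dp[i][j] = dp[i + 1][j + 1] + 1 if a[i] == b[j] else max(dp[i + 1][j], dp[i][j + 1])
    out, i, j = [], 0, 0
    while i < n and j < m:
        if a[i] == b[j]:
            out.append(a[i])
            i, j = i + 1, j + 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return out

failing_covers = [
    ["init", "parse", "validate", "alloc", "copy", "free"],
    ["init", "parse", "alloc", "copy", "log", "free"],
    ["init", "alloc", "copy", "free"],
]
suspect = reduce(lcs, failing_covers)
print("candidate faulty subsequence:", suspect)   # ['init', 'alloc', 'copy', 'free']
```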