144 results for Polyharmonic distortion modeling. X-parameters. Test-Bench. Planar structures. PHD


Relevance: 30.00%

Abstract:

The accuracy of data derived from linked-segment models depends on how well the system has been represented. Previous investigations describing the gait of persons with partial foot amputation did not account for the unique anthropometry of the residuum or the inclusion of a prosthesis and footwear in the model and, as such, are likely to have underestimated the magnitude of the peak joint moments and powers. This investigation determined the effect of inaccuracies in the anthropometric input data on the kinetics of gait. Toward this end, a geometric model was developed and validated to estimate body segment parameters of various intact and partial feet. These data were then incorporated into customized linked-segment models, and the kinetic data were compared with those obtained from conventional models. Results indicate that accurate modeling increased the magnitude of the peak hip and knee joint moments and powers during terminal swing. Conventional inverse dynamic models are sufficiently accurate for research questions relating to stance phase. More accurate models that account for the anthropometry of the residuum, prosthesis, and footwear better reflect the work of the hip extensors and knee flexors to decelerate the limb during terminal swing phase.
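
A sense of how the anthropometric inputs propagate into the joint kinetics can be had from the Newton-Euler equations of a single segment in a linked-segment model. The sketch below is a generic 2D illustration under assumed segment values; the function and parameter names are hypothetical and are not taken from the study's model.

```python
import numpy as np

def joint_moment_2d(m, I, r_com, a_com, alpha, r_dist, F_dist, M_dist, g=9.81):
    """Net proximal joint force and moment for one 2D segment (generic inverse dynamics).

    m, I           : segment mass (kg) and moment of inertia about its COM (kg m^2)
                     -- the body segment parameters a geometric model supplies
    r_com          : vector from the proximal joint to the segment COM (m)
    a_com          : linear acceleration of the COM (m/s^2)
    alpha          : angular acceleration of the segment (rad/s^2)
    r_dist         : vector from the proximal joint to the distal joint (m)
    F_dist, M_dist : force and moment applied at the distal end (N, N m)
    """
    cross = lambda r, f: r[0] * f[1] - r[1] * f[0]          # scalar 2D cross product
    weight = np.array([0.0, -m * g])
    # Newton: F_prox + F_dist + weight = m * a_com
    F_prox = m * a_com - F_dist - weight
    # Euler about the COM: M_prox + M_dist + moments of the joint forces = I * alpha
    M_prox = (I * alpha - M_dist
              - cross(-r_com, F_prox)            # proximal force acts at -r_com from the COM
              - cross(r_dist - r_com, F_dist))   # distal force acts at r_dist - r_com from the COM
    return F_prox, M_prox

# Illustrative swing-phase numbers only: a heavier, differently distributed
# residuum-plus-prosthesis segment changes m, I and r_com, and hence M_prox.
F, M = joint_moment_2d(m=1.2, I=0.011,
                       r_com=np.array([0.06, -0.03]),
                       a_com=np.array([1.5, -0.8]), alpha=4.0,
                       r_dist=np.array([0.15, -0.05]),
                       F_dist=np.zeros(2), M_dist=0.0)
```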

Relevance: 30.00%

Abstract:

Total deposition of petrol, diesel and environmental tobacco smoke (ETS) aerosols in the human respiratory tract for nasal breathing conditions was computed for 14 nonsmoking volunteers, considering the specific anatomical and respiratory parameters of each volunteer and the specific size distribution for each inhalation experiment. Theoretical predictions were 34.6% for petrol, 24.0% for diesel, and 18.5% for ETS particles. Compared to the experimental results, predicted deposition values were consistently smaller than the measured data (41.4% for petrol, 29.6% for diesel, and 36.2% for ETS particles). The apparent discrepancy between experimental data on total deposition and modeling results may be reconciled by accounting for the non-spherical shape of the test aerosols through diameter-dependent dynamic shape factors, which capture the differences between mobility-equivalent and volume-equivalent or thermodynamic diameters. While the application of dynamic shape factors is able to explain the observed differences for petrol and diesel particles, additional mechanisms may be required for ETS particle deposition, such as size reduction upon inspiration by evaporation of volatile compounds and/or condensation-induced restructuring and, possibly, electrical charge effects.
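
For reference, the mobility-equivalent and volume-equivalent diameters are linked through the dynamic shape factor and the Cunningham slip correction. The sketch below is the generic textbook relation, not the deposition model used in the study; the shape factor value and particle size are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

MFP = 0.0664e-6  # mean free path of air near room conditions (m)

def cunningham(d):
    """Cunningham slip correction factor for particle diameter d (m)."""
    kn = 2.0 * MFP / d
    return 1.0 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))

def volume_equiv_diameter(d_mobility, chi):
    """Solve d_m / Cc(d_m) = chi * d_ve / Cc(d_ve) for the volume-equivalent diameter."""
    lhs = d_mobility / cunningham(d_mobility)
    f = lambda d_ve: chi * d_ve / cunningham(d_ve) - lhs
    return brentq(f, 1e-9, d_mobility)

# Illustrative: a 200 nm mobility-equivalent particle with dynamic shape factor 1.3
print(volume_equiv_diameter(200e-9, chi=1.3))
```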

Relevance: 30.00%

Abstract:

High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data, for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions traditionally used for modeling corneal surfaces may not correctly represent a given corneal surface in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the RMS surface error and the point spread function cross-correlation. The parameters of the approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
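
The rational-function idea can be illustrated by modeling the surface as a ratio of two low-order Zernike expansions and fitting it with a Levenberg-Marquardt least-squares solver. The sketch below uses only the first few Zernike terms and synthetic data; it is an assumed, simplified form, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def zernike_basis(rho, theta):
    """First few Zernike polynomials evaluated at polar coordinates (rho, theta)."""
    return np.stack([
        np.ones_like(rho),                           # piston
        2.0 * rho * np.cos(theta),                   # tilt x
        2.0 * rho * np.sin(theta),                   # tilt y
        np.sqrt(3.0) * (2.0 * rho**2 - 1.0),         # defocus
        np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta), # astigmatism
    ], axis=-1)

def rational_surface(params, Z):
    """Ratio of two Zernike expansions; the denominator's constant term is fixed to 1."""
    n = Z.shape[-1]
    num = Z @ params[:n]
    den = 1.0 + Z[:, 1:] @ params[n:]
    return num / den

def fit_surface(rho, theta, elevation):
    Z = zernike_basis(rho, theta)
    n = Z.shape[-1]
    residual = lambda p: rational_surface(p, Z) - elevation
    p0 = np.zeros(2 * n - 1)
    return least_squares(residual, p0, method="lm")   # Levenberg-Marquardt

# Hypothetical measurement grid and elevation data (for illustration only)
rho = np.random.rand(500)
theta = 2.0 * np.pi * np.random.rand(500)
elevation = 1e-3 * rho**2 + 1e-4 * np.random.randn(500)
fit = fit_surface(rho, theta, elevation)
```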

Relevance: 30.00%

Abstract:

The scalable video coding (SVC) extension of the H.264/AVC standard enables adaptive and flexible delivery for multiple devices and various network conditions. Only a few works have addressed the influence of the different scalability parameters (frame rate, spatial resolution, and SNR) on user-perceived quality (UPQ), and only within a limited scope. In this paper, we conducted a subjective quality assessment experiment on video sequences encoded with H.264/SVC to gain a better understanding of the correlation between video content and UPQ at all scalable layers, and of the impact of the rate-distortion method and the different scalabilities on bitrate and UPQ. Findings from this experiment will contribute to a user-centered design of adaptive delivery of scalable video streams.

Relevance: 30.00%

Abstract:

Cardiovascular assist devices are tested in mock circulation loops (MCLs) prior to animal and clinical testing. These MCLs rely on characteristics such as pneumatic parameters to create pressure and flow, and pipe dimensions to replicate the resistance, compliance and fluid inertia of the natural cardiovascular system. A mathematical simulation was developed in SIMULINK to simulate an existing MCL. Model validation was achieved by applying the physical MCL characteristics to the simulation and comparing the resulting pressure traces. These characteristics were subsequently altered to improve, and thus predict, the performance of a more accurate physical system. The simulation successfully reproduced the behaviour of the physical mock circulation loop and proved to be a useful tool in the development of improved cardiovascular device test rigs.
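
Mock circulation loops of this kind are typically described with lumped resistance-compliance-inertance (Windkessel-type) elements. The sketch below integrates one such segment driven by a pulsatile source; the structure and the parameter values are generic textbook assumptions, not the authors' SIMULINK model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped parameters (mmHg, mL, s units)
R_IN = 0.05   # proximal resistance (mmHg s/mL)
L = 0.02      # fluid inertance (mmHg s^2/mL)
C = 1.5       # compliance (mL/mmHg)
R_OUT = 1.0   # peripheral resistance (mmHg s/mL)

def p_source(t):
    """Pulsatile driving pressure, roughly ventricular in shape (illustrative)."""
    return 100.0 * max(np.sin(2.0 * np.pi * t), 0.0) ** 2 + 5.0

def rhs(t, y):
    q, p_c = y                                 # flow through the inertance, compliance pressure
    dq = (p_source(t) - R_IN * q - p_c) / L    # momentum balance over R_IN and L
    dp_c = (q - p_c / R_OUT) / C               # net inflow charges the compliance chamber
    return [dq, dp_c]

# Integrate five seconds of loop operation from an assumed initial state
sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 80.0], max_step=1e-3)
```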

Relevance: 30.00%

Abstract:

Purpose: In 1970, Enright observed a distortion of perceived driving speed, induced by monocular application of a neutral density (ND) filter. If a driver looks out of the right side of a vehicle with a filter over the right eye, the driver perceives a reduction of the vehicle’s apparent velocity, while applying an ND filter over the left eye increases the vehicle’s apparent velocity. The purpose of the current study was to provide the first empirical measurements of the Enright phenomenon. Methods: Ten experienced drivers were tested and drove an automatic sedan on a closed road circuit. Filters (0.9 ND) were placed over the left, right or both eyes during a driving run, in addition to a control condition with no filters in place. Subjects were asked to look out of the right side of the car and adjust their driving speed to either 40 km/h or 60 km/h. Results: Without a filter or with both eyes filtered, subjects showed good estimation of speed when asked to travel at 60 km/h but travelled a mean of 12 to 14 km/h faster than the requested 40 km/h. Subjects travelled faster than these baselines by a mean of 7 to 9 km/h (p < 0.001) with the filter over their right eye, and 3 to 5 km/h slower with the filter over their left eye (p < 0.05). Conclusions: The Enright phenomenon causes significant and measurable distortions of perceived driving speed under real-world driving conditions.

Relevance: 30.00%

Abstract:

Over recent years, many scholars have studied the conceptual modeling of information systems based on a theory of ontological expressiveness. This theory offers four constructs that describe properties of modeling grammars in the form of ontological deficiencies, and their implications for the development and use of conceptual modeling in IS practice. In this paper we report on the development of a valid and reliable instrument for measuring the perceptions that individuals have of the ontological deficiencies of conceptual modeling grammars. We describe a multi-stage approach for instrument development that incorporates feedback from expert and user panels. We also report on a field test of the instrument with 590 modeling practitioners. We further study how different levels of modeling experience influence user perceptions of ontological deficiencies of modeling grammars. We provide implications for practice and future research.

Relevance: 30.00%

Abstract:

Adiabatic compression testing of components in gaseous oxygen is a test method that is utilized worldwide and is commonly required to qualify a component for ignition tolerance under its intended service. This testing is required by many industry standards organizations and government agencies; however, a thorough evaluation of the test parameters and test system influences on the thermal energy produced during the test has not yet been performed. This paper presents a background for adiabatic compression testing and discusses an approach to estimating potential differences in the thermal profiles produced by different test laboratories. A “Thermal Profile Test Fixture” (TPTF) is described that is capable of measuring and characterizing the thermal energy for a typical pressure shock by any test system. The test systems at Wendell Hull & Associates, Inc. (WHA) in the USA and at the BAM Federal Institute for Materials Research and Testing in Germany are compared in this manner, and some of the data obtained are presented. The paper also introduces a new way of comparing the test method to idealized processes to perform system-by-system comparisons. Thus, the paper introduces an “Idealized Severity Index” (ISI) of the thermal energy to characterize a rapid pressure surge. From the TPTF data a “Test Severity Index” (TSI) can also be calculated, so that the thermal energies developed by different test systems can be compared to each other and to the ISI for the equivalent isentropic process. Finally, a “Service Severity Index” (SSI) is introduced to characterize the thermal energy of actual service conditions. This paper is the second in a series of publications planned on the subject of adiabatic compression testing.
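
The idealized reference process behind such a severity comparison is isentropic compression of the gas column ahead of the component. The sketch below is only the textbook ideal-gas relation that underlies that comparison, with illustrative numbers; it is not the paper's ISI or TSI definition.

```python
GAMMA_O2 = 1.40  # ratio of specific heats for oxygen near room temperature

def isentropic_final_temperature(t_initial_k, p_initial, p_final, gamma=GAMMA_O2):
    """Ideal-gas isentropic compression: T_f = T_i * (P_f / P_i)**((gamma - 1) / gamma)."""
    return t_initial_k * (p_final / p_initial) ** ((gamma - 1.0) / gamma)

# Illustrative only: a 1 bar -> 250 bar pressure shock starting at 20 degrees C
print(isentropic_final_temperature(293.15, 1.0, 250.0))  # roughly 1400 K
```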

Relevance: 30.00%

Abstract:

Process modeling is a complex organizational task that requires many iterations and communication between the business analysts and the domain specialists involved. The challenge of process modeling is exacerbated when the modeling has to be performed in a cross-organizational, distributed environment. Some systems have been developed to support collaborative process modeling, all of which use traditional 2D interfaces. We present an environment for collaborative process modeling using 3D virtual environment technology. We make use of avatar instantiations of user ego centres to allow for the spatial embodiment of the user with reference to the process model. We describe an innovative prototype collaborative process modeling approach, implemented as a modeling environment in Second Life. This approach leverages the use of virtual environments to provide user context for editing and collaborative exercises. We present a positive preliminary report on a case study in which a test group modelled a business process using the system in Second Life.

Relevance: 30.00%

Abstract:

In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including another variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another to test for an additional transition in the STCC-GARCH framework. In addition, other specification tests, with the aim of aiding the model-building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to stock and bond futures data, we find that the correlation pattern between them changed dramatically around the turn of the century. The model is also applied to a selection of world stock indices, and we find evidence of an increasing degree of integration in the capital markets.
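
For orientation, the correlation structure of these models takes the following general form (a sketch of the standard STCC/DSTCC parameterization; the notation is assumed and may differ from the paper's):

```latex
% STCC-GARCH: correlations move between two constant states R_1, R_2
% via a logistic transition function of a single variable s_t.
R_t = \bigl(1 - G(s_t;\gamma,c)\bigr) R_1 + G(s_t;\gamma,c)\, R_2,
\qquad
G(s;\gamma,c) = \bigl(1 + e^{-\gamma (s - c)}\bigr)^{-1}.

% DSTCC-GARCH: a second transition variable s_{2t} lets the correlations
% vary smoothly between four extreme states R_{11}, R_{21}, R_{12}, R_{22}.
R_t = (1 - G_2)\bigl[(1 - G_1) R_{11} + G_1 R_{21}\bigr]
    + G_2\bigl[(1 - G_1) R_{12} + G_1 R_{22}\bigr],
\qquad G_i = G(s_{it};\gamma_i,c_i).
```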

Relevance: 30.00%

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized.

The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison.

Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology.

The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form.

Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
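
To make the product-code (split) VQ idea concrete, each spectral parameter vector is divided into sub-vectors and each sub-vector is encoded against its own small codebook, so the transmitted frame is just the concatenated codebook indices. The sketch below uses random codebooks and an arbitrary 10-dimensional split purely for illustration; it is not the thesis's trained PCVQ system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Split a 10-dimensional spectral vector into three sub-vectors (3 + 3 + 4 dimensions)
SPLITS = [(0, 3), (3, 6), (6, 10)]
BITS = [8, 8, 8]                     # 8 bits per sub-vector index -> 24 bits per frame

# Illustrative random codebooks; in practice these are trained from speech data
codebooks = [rng.standard_normal((2 ** b, hi - lo))
             for (lo, hi), b in zip(SPLITS, BITS)]

def encode(x):
    """Return one codebook index per sub-vector (product-code / split VQ)."""
    indices = []
    for (lo, hi), cb in zip(SPLITS, codebooks):
        d = np.sum((cb - x[lo:hi]) ** 2, axis=1)   # squared error to every codeword
        indices.append(int(np.argmin(d)))
    return indices

def decode(indices):
    """Reassemble the quantized spectral vector from the sub-vector codewords."""
    return np.concatenate([cb[i] for cb, i in zip(codebooks, indices)])

x = rng.standard_normal(10)          # stand-in for one frame of spectral parameters
idx = encode(x)
x_hat = decode(idx)
total_bits = sum(BITS)               # nominal 24 bits per frame before lossless coding
```

Lossless coding of the index streams, as investigated in the thesis, then exploits remaining statistical structure in those indices to push the per-frame budget below the nominal 24 bits.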

Relevance: 30.00%

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of bone mineral measurements.

A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written.

The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool.

A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low-angle scatter flux for high atomic number scatterers.

To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy-dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques.

Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements.

The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:

1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements,
2. demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique,
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral,
4. and provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.

The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
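
In its simplest analytical form, the dual-energy decomposition described above amounts to solving a 2x2 linear system that relates the log-attenuations measured at two beam energies to the areal densities of two basis materials. The sketch below uses illustrative mass attenuation coefficients and simulated intensities, not values or code from the thesis.

```python
import numpy as np

# Illustrative mass attenuation coefficients (cm^2/g) at two effective energies
#              low E   high E
MU = np.array([[0.35,   0.18],    # bone mineral
               [0.21,   0.16]])   # soft tissue

def dual_energy_decompose(I_low, I_high, I0_low, I0_high):
    """Solve for areal densities (g/cm^2) of the two basis materials.

    At each energy: ln(I0 / I) = mu_bone * t_bone + mu_soft * t_soft
    """
    b = np.array([np.log(I0_low / I_low), np.log(I0_high / I_high)])
    A = MU.T                          # rows: energies, columns: materials
    return np.linalg.solve(A, b)      # [t_bone, t_soft]

# Forward-simulate a transmission measurement and invert it (sanity check of the sketch)
t_true = np.array([1.2, 18.0])        # g/cm^2 of bone mineral and soft tissue
I_low, I_high = np.exp(-MU.T @ t_true)
print(dual_energy_decompose(I_low, I_high, 1.0, 1.0))   # recovers ~[1.2, 18.0]
```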

Relevance: 30.00%

Abstract:

The Lane Change Test (LCT) is one of a growing number of methods developed to quantify the driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions, and further refinement of the LCT's metrics, will be essential for increasing its utility as an evaluation tool.
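
As background on one of the measures discussed, the LCT mean deviation compares the driven lateral path with a normative reference path. The sketch below is a simplified form with synthetic data; the actual metric also prescribes the reference path shape and the averaging procedure, which are not reproduced here.

```python
import numpy as np

def lct_mean_deviation(y_driven, y_reference):
    """Mean absolute lateral deviation (m) between the driven and reference paths,
    sampled at common longitudinal positions (simplified form of the metric)."""
    return float(np.mean(np.abs(np.asarray(y_driven) - np.asarray(y_reference))))

# Illustrative: lateral positions sampled every metre over one lane change
y_ref = np.concatenate([np.zeros(50), np.full(50, 3.75)])              # reference path
y_drv = y_ref + 0.3 * np.random.default_rng(1).standard_normal(100)    # noisy driven path
print(lct_mean_deviation(y_drv, y_ref))
```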

Relevance: 30.00%

Abstract:

This paper presents a new winding configuration method for planar magnetic elements with more than two layers. It has been demonstrated by 3D finite element analysis and mathematical modeling that the suggested configuration reduces the equivalent capacitive coupling of the planar inductor.
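
For background, the winding arrangement matters because the equivalent capacitance referred to the inductor terminals is an energy-weighted sum of the layer-to-layer capacitances, so changing which layers face each other changes the voltage difference each parallel-plate-like capacitance sees. The sketch below is that generic energy-based estimate under an assumed linear voltage distribution; the geometry, layer orders and values are illustrative and are not taken from the paper's FEM model.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def layer_capacitance(area_m2, gap_m, eps_r):
    """Idealized parallel-plate capacitance between two adjacent planar layers."""
    return EPS0 * eps_r * area_m2 / gap_m

def equivalent_capacitance(layer_voltages, c_layer, v_total):
    """Energy-based equivalent terminal capacitance:
    C_eq = 2 * sum(0.5 * C * dV^2) / V_total^2 over adjacent layer pairs."""
    dv = np.diff(layer_voltages)
    energy = 0.5 * c_layer * np.sum(dv ** 2)
    return 2.0 * energy / v_total ** 2

# Illustrative geometry for one layer pair
c = layer_capacitance(area_m2=4e-4, gap_m=0.2e-3, eps_r=4.4)

# Normalized layer potentials (0..1) for two hypothetical stacking orders of the same
# layers; the equivalent capacitance differs because the per-gap dV differs.
arrangement_a = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
arrangement_b = np.array([0.0, 0.5, 0.25, 0.75, 1.0])
print(equivalent_capacitance(arrangement_a, c, 1.0),
      equivalent_capacitance(arrangement_b, c, 1.0))
```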