946 results for finite difference time-domain analysis


Relevance:

100.00%

Publisher:

Abstract:

In this paper we use sensor-annotated abstraction hierarchies (Reising & Sanderson, 1996, 2002a,b) to show that unless appropriately instrumented, configural displays designed according to the principles of ecological interface design (EID) might be vulnerable to misinterpretation when sensors become unreliable or are unavailable. Building on foundations established in Reising and Sanderson (2002a), we use a pasteurization process control example to show how sensor-annotated AHs help the analyst determine the impact of different instrumentation engineering policies on a configural display that is part of an ecological interface. Our analyses suggest that configural displays showing higher-order properties of a system are especially vulnerable under some conservative instrumentation configurations. However, sensor-annotated AHs can be used to indicate where corrective instrumentation might be placed. We argue that if EID is to be effectively employed in the design of displays for complex systems, then the information needs of the human operator need to be considered while instrumentation requirements are being formulated. Rasmussen's abstraction hierarchy, and particularly its extension to the analysis of information captured by sensors and derived from sensors, may therefore be a useful adjunct to upstream instrumentation design. (C) 2002 Elsevier Science Ltd. All rights reserved.
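The propagation idea in this abstract can be sketched as a toy dependency check: a higher-order property in a sensor-annotated hierarchy is only displayable with confidence if every lower-level value supporting it is backed by a working sensor. The node names and dictionary encoding below are our own illustrative assumptions, not the paper's notation.

```python
# A toy sketch (our own construction, not the paper's formalism) of how a
# sensor-annotated abstraction hierarchy lets an analyst propagate sensor
# status: a derived, higher-order property is only trustworthy if every
# lower-level value it depends on is backed by a working sensor.
hierarchy = {
    # node: list of lower-level nodes it is derived from ([] = direct sensor)
    "inflow_rate": [],
    "outflow_rate": [],
    "mass_balance": ["inflow_rate", "outflow_rate"],  # higher-order property
}
sensor_ok = {"inflow_rate": True, "outflow_rate": False}  # sensor states

def trustworthy(node):
    """A node is trustworthy if its sensor works (leaf) or all inputs are."""
    deps = hierarchy[node]
    if not deps:
        return sensor_ok[node]
    return all(trustworthy(d) for d in deps)

print(trustworthy("mass_balance"))  # False: one supporting sensor is down
```

With this annotation the analyst can see at a glance which configural display elements degrade when a given sensor fails, and where corrective instrumentation would restore trust in a higher-order reading.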

Relevance:

100.00%

Publisher:

Abstract:

In this paper we establish a foundation for understanding the instrumentation needs of complex dynamic systems if ecological interface design (EID)-based interfaces are to be robust in the face of instrumentation failures. EID-based interfaces often include configural displays which reveal the higher-order properties of complex systems. However, concerns have been expressed that such displays might be misleading when instrumentation is unreliable or unavailable. Rasmussen's abstraction hierarchy (AH) formalism can be extended to include representations of sensors near the functions or properties about which they provide information, resulting in what we call a sensor-annotated abstraction hierarchy. Sensor-annotated AHs help the analyst determine the impact of different instrumentation engineering policies on higher-order system information by showing how the data provided from individual sensors propagates within and across levels of abstraction in the AH. The use of sensor-annotated AHs with a configural display is illustrated with a simple water reservoir example. We argue that if EID is to be effectively employed in the design of interfaces for complex systems, then the information needs of the human operator need to be considered at the earliest stages of system development while instrumentation requirements are being formulated. In this way, Rasmussen's AH promotes a formative approach to instrumentation engineering. (C) 2002 Elsevier Science Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

We consider discrete two-point boundary value problems of the form D^2 y_{k+1} = f(kh, y_k, Dy_k), for k = 1,...,n-1, (0,0) = G((y_0, y_n); (Dy_1, Dy_n)), where Dy_k = (y_k - y_{k-1})/h and h = 1/n. This arises as a finite difference approximation to y'' = f(x, y, y'), x in [0,1], (0,0) = G((y(0), y(1)); (y'(0), y'(1))). We assume that f and G = (g_0, g_1) are continuous and fully nonlinear, that there exist pairs of strict lower and strict upper solutions for the continuous problem, and that f and G satisfy additional assumptions that are known to yield a priori bounds on, and to guarantee the existence of, solutions of the continuous problem. Under these assumptions we show that there are at least three distinct solutions of the discrete approximation which approximate solutions to the continuous problem as the grid size, h, goes to 0. (C) 2003 Elsevier Science Ltd. All rights reserved.
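As a quick sanity check on the discrete operators in this abstract, the sketch below applies the backward difference Dy_k = (y_k - y_{k-1})/h and the resulting second difference D^2 to y(x) = sin(x), confirming the usual O(h^2) truncation error against y'' = -sin(x). The helper names are ours, not the paper's.

```python
import math

# Discrete operators as defined in the abstract: Dy_k = (y_k - y_{k-1}) / h,
# and D^2 applied one index up gives the standard second difference.
n = 100
h = 1.0 / n
y = [math.sin(k * h) for k in range(n + 1)]

def D(y, k, h):
    """Backward difference Dy_k = (y_k - y_{k-1}) / h."""
    return (y[k] - y[k - 1]) / h

def D2(y, k, h):
    """D^2 y_{k+1} = (Dy_{k+1} - Dy_k) / h = (y_{k+1} - 2 y_k + y_{k-1}) / h^2."""
    return (D(y, k + 1, h) - D(y, k, h)) / h

# At interior points D^2 should be close to y'' = -sin(x).
err = max(abs(D2(y, k, h) + math.sin(k * h)) for k in range(1, n))
print(err)  # O(h^2) truncation error
```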

Relevance:

100.00%

Publisher:

Abstract:

Time motion analysis is extensively used to assess the demands of team sports. At present there is only limited information on the reliability of measurements using this analysis tool. The aim of this study was to establish the reliability of an individual observer's time motion analysis of rugby union. Ten elite level rugby players were individually tracked in Southern Hemisphere Super 12 matches using a digital video camera. The video footage was subsequently analysed by a single researcher on two occasions one month apart. The test-retest reliability was quantified as the typical error of measurement (TEM) and rated as either good (<5% TEM), moderate (5-10% TEM) or poor (>10% TEM). The total time spent in the individual movements of walking, jogging, striding, sprinting, static exertion and being stationary had moderate to poor reliability (5.8-11.1% TEM). The frequency of individual movements had good to poor reliability (4.3-13.6% TEM), while the mean duration of individual movements had moderate reliability (7.1-9.3% TEM). For the individual observer in the present investigation, time motion analysis was shown to be moderately reliable as an evaluation tool for examining the movement patterns of players in competitive rugby. These reliability values should be considered when assessing the movement patterns of rugby players within competition.
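The typical error of measurement used here can be computed directly from paired test-retest observations. The sketch below uses hypothetical data, not the study's; the variable names are our own.

```python
import math

# A minimal sketch of the typical error of measurement (TEM) used to rate
# test-retest reliability. Data values are hypothetical, not from the study.
trial1 = [52.0, 48.5, 61.2, 55.3, 49.9]  # e.g. seconds spent sprinting, test
trial2 = [50.5, 49.8, 59.0, 57.1, 48.2]  # same players, retest

diffs = [a - b for a, b in zip(trial1, trial2)]
mean_d = sum(diffs) / len(diffs)
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))

tem = sd_d / math.sqrt(2)  # typical error = SD of differences / sqrt(2)
grand_mean = (sum(trial1) + sum(trial2)) / (2 * len(trial1))
tem_pct = 100.0 * tem / grand_mean

# Common rating bands: good < 5%, moderate 5-10%, poor > 10% TEM.
rating = "good" if tem_pct < 5 else "moderate" if tem_pct <= 10 else "poor"
print(round(tem_pct, 1), rating)
```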

Relevance:

100.00%

Publisher:

Abstract:

P-NET is a fieldbus industrial communication standard, which uses a Virtual Token Passing MAC mechanism. In this paper we establish pre-run-time schedulability conditions for supporting real-time traffic with P-NET. Essentially, we provide formulae to evaluate the minimum message deadline, ensuring the transmission of real-time messages within a maximum time bound.
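The flavour of such a pre-run-time check can be sketched with a generic token-passing bound: in the worst case a station waits for every other station to use its turn before it may transmit. These are generic token-bus bounds with made-up timings, not P-NET's actual formulae or parameters.

```python
# A hedged, generic sketch of the kind of pre-run-time check the paper
# formalizes for virtual token passing: a message deadline must cover a
# full worst-case token cycle plus the message's own transmission. The
# numbers and the bound itself are illustrative, not P-NET's.
n_stations = 5
c_max = 2.0e-3   # s, longest message any station may send per token visit
tau = 0.1e-3     # s, per-station token pass overhead

worst_case_response = n_stations * (c_max + tau) + c_max

def schedulable(deadline):
    """A real-time message is accepted only if its deadline covers the bound."""
    return deadline >= worst_case_response

print(worst_case_response, schedulable(0.015), schedulable(0.010))
```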

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose the use of least-squares based methods for obtaining digital rational approximations (IIR filters) to fractional-order integrators and differentiators of the type s^α, α ∈ R. Adoption of the Padé, Prony and Shanks techniques is suggested. These techniques are usually applied in the signal modeling of deterministic signals. These methods yield suboptimal solutions to the problem, requiring only the solution of a set of linear equations. The results reveal that the least-squares approach gives similar or superior approximations in comparison with other widely used methods. Their effectiveness is illustrated, both in the time and frequency domains, as well as in the fractional differintegration of some standard time-domain functions.
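The signal-modeling idea can be sketched at the smallest useful size: fit an order-(1,1) rational (IIR) model, Padé-style, to the leading impulse-response terms of a discrete fractional differentiator. At this order the linear equations solve by hand; the helper names and the Euler (backward) discretization choice are our assumptions, not necessarily the paper's setup.

```python
# Pade-style order-(1,1) rational fit to the impulse response of a discrete
# half-order differentiator. Illustrative sketch; names are ours.
alpha, T = 0.5, 0.01  # half-order differentiator, sampling period T

# Impulse response of the Euler (backward) approximation
#   s^alpha ~ ((1 - z^{-1}) / T)^alpha = T^{-alpha} * sum_k c_k z^{-k},
# with c_k the binomial series coefficients of (1 - x)^alpha.
c = [1.0]
for k in range(1, 4):
    c.append(c[-1] * (k - 1 - alpha) / k)  # c_k = c_{k-1} * (k-1-alpha)/k
h = [ck / T ** alpha for ck in c]

# Match H(z) = (b0 + b1 z^{-1}) / (1 + a1 z^{-1}) to h0, h1, h2 via the
# linear equations h0 = b0, h1 = b1 - a1*h0, h2 = -a1*h1.
a1 = -h[2] / h[1]
b0 = h[0]
b1 = h[1] + a1 * h[0]

# The fitted IIR model reproduces the matched impulse-response terms.
hm = [b0, b1 - a1 * b0, -a1 * (b1 - a1 * b0)]
print(hm, h[:3])
```

Prony and Shanks generalize this to higher orders and to least-squares (rather than exact) matching of a longer impulse response, which is where the overdetermined linear systems mentioned in the abstract arise.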

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: Using P-wave signal-averaged electrocardiography, we assessed the patterns of atrial electrical activation in patients with idiopathic atrial fibrillation as compared with patterns in patients with atrial fibrillation associated with structural heart disease. METHODS: Eighty patients with recurrent paroxysmal atrial fibrillation were divided into 3 groups as follows: group I - 40 patients with atrial fibrillation associated with non-rheumatic heart disease; group II - 25 patients with rheumatic atrial fibrillation; and group III - 15 patients with idiopathic atrial fibrillation. All patients underwent P-wave signal-averaged electrocardiography for frequency-domain analysis using spectrotemporal mapping and statistical techniques for detecting and quantifying intraatrial conduction disturbances. RESULTS: We observed an important fragmentation in atrial electrical conduction in 27% of the patients in group I, 64% of the patients in group II, and 67% of the patients in group III (p=0.003). CONCLUSION: Idiopathic atrial fibrillation has important intraatrial conduction disturbances. These alterations are similar to those observed in individuals with rheumatic atrial fibrillation, suggesting the existence of some degree of structural involvement of the atrial myocardium that cannot be detected with conventional electrocardiography and echocardiography.

Relevance:

100.00%

Publisher:

Abstract:

The paper examines the empirical relationship between trade openness and economic growth of India for the period 1970-2010. Trade openness is a multi-dimensional concept, and hence measures of both trade barriers and trade volumes have been used as proxies for openness. The estimation results from the vector autoregression (VAR) method suggest that growth in trade volumes accelerates economic growth in the case of India. We do not find any evidence from our analysis that trade barriers lower growth.

Relevance:

100.00%

Publisher:

Abstract:

We present two new stabilized high-resolution numerical methods for the convection–diffusion–reaction (CDR) and the Helmholtz equations respectively. The work embarks upon a priori analysis of some consistency recovery procedures for some stabilization methods belonging to the Petrov–Galerkin framework. It was found that the use of some standard practices (e.g. M-Matrices theory) for the design of essentially non-oscillatory numerical methods is not feasible when consistency recovery methods are employed. Hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications on the formulation of the maximum principle, M-Matrices theory, monotonicity and total variation diminishing (TVD) finite volume schemes. Like earlier methods in this line, the current method may be viewed as upwinding plus a discontinuity-capturing operator. Finally, some remarks are made on the extension of the HRPG method to multidimensions.

Next, we present a new numerical scheme for the Helmholtz equation resulting in quasi-exact solutions. The focus is on the approximation of the solution to the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid. The only a priori requirement is to provide a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme. The scheme consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analyses in 1D and 2D illustrate the quasi-exact properties of this scheme. Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
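The stencil-averaging idea for the Helmholtz equation can be checked with a short 1D dispersion computation: for a plane wave u_j = exp(i*kappa*j*h), each stencil for u'' + k^2 u = 0 fixes cos(kappa*h), and the average of the FEM and FD mass weights largely cancels their opposite-signed phase errors. The function below is our own sketch of that analysis, not the paper's code.

```python
import math

# 1D dispersion sketch: compare finite differences, linear Galerkin FEM,
# and the averaged stencil for u'' + k^2 u = 0. A plane wave gives
#   2(cos(kappa*h) - 1)/h^2 + k^2 * (w_diag + 2*w_off*cos(kappa*h)) = 0,
# where (w_diag, w_off) are the mass weights of the stencil.
def numerical_wavenumber(kh, w_diag, w_off):
    """Return kappa*h for a given exact k*h and stencil mass weights."""
    s = (1.0 - w_diag * kh ** 2 / 2.0) / (1.0 + w_off * kh ** 2)
    return math.acos(s)

kh = 0.5  # roughly 12-13 grid points per wavelength
fd  = numerical_wavenumber(kh, 1.0, 0.0)               # finite differences
fem = numerical_wavenumber(kh, 4.0 / 6.0, 1.0 / 6.0)   # linear Galerkin FEM
avg = numerical_wavenumber(kh, 5.0 / 6.0, 1.0 / 12.0)  # average of the two

errors = [abs(x - kh) for x in (fd, fem, avg)]
print(errors)  # the averaged stencil has by far the smallest phase error
```

The FD and FEM phase errors have opposite signs at leading order, which is why their average is so much closer to the exact wavenumber.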

Relevance:

100.00%

Publisher:

Abstract:

A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analysing compositional time series consists in the application of an initial transform to break the positive and unit sum constraints, followed by the analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling as well as the quality of the forecasts.
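Two of the initial transforms discussed here can be sketched directly for a single composition summing to one; the helper names are ours. The isometric log-ratio transform is omitted since it additionally requires choosing an orthonormal basis of the simplex.

```python
import math

# Minimal sketch of the additive and centred log-ratio transforms applied
# to one observation x of a compositional series (components sum to 1).
def alr(x):
    """Additive log-ratio: log(x_i / x_D) for i < D (unconstrained values)."""
    return [math.log(xi / x[-1]) for xi in x[:-1]]

def clr(x):
    """Centred log-ratio: log(x_i / g(x)) with g the geometric mean."""
    g = math.exp(sum(math.log(xi) for xi in x) / len(x))
    return [math.log(xi / g) for xi in x]

def alr_inverse(y):
    """Back-transform alr coordinates to a composition on the simplex."""
    e = [math.exp(v) for v in y] + [1.0]
    total = sum(e)
    return [v / total for v in e]

x = [0.2, 0.3, 0.5]          # one observation of a compositional series
y = alr(x)                   # unconstrained coordinates, ready for ARIMA
print(clr(x))                # clr coordinates sum to zero
print(alr_inverse(y))        # recovers the original composition
```

Applying `alr` to each time point yields an unconstrained multivariate series to which standard ARIMA machinery applies; forecasts are then back-transformed with `alr_inverse`.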

Relevance:

100.00%

Publisher:

Abstract:

Summary: Calibration of TDR measurements for measuring the moisture content of cultivated peat soils.

Relevance:

100.00%

Publisher:

Abstract:

Background and objective: Cefepime was one of the most used broad-spectrum antibiotics in Swiss public acute care hospitals. The drug was withdrawn from the market in January 2007 and then replaced by a generic in October 2007. The goal of the study was to evaluate changes in the use of broad-spectrum antibiotics after the withdrawal of the cefepime original product. Design: A generalized regression-based interrupted time series model incorporating autocorrelated errors assessed how much the withdrawal changed the monthly use of other broad-spectrum antibiotics (ceftazidime, imipenem/cilastatin, meropenem, piperacillin/tazobactam) in defined daily doses (DDD)/100 bed-days from January 2004 to December 2008 [1, 2]. Setting: 10 Swiss public acute care hospitals (7 with <200 beds, 3 with 200-500 beds). Nine hospitals (group A) had a shortage of cefepime and one hospital had no shortage thanks to importation of cefepime from abroad. Main outcome measures: Underlying trend of use before the withdrawal, and changes in the level and in the trend of use after the withdrawal. Results: Before the withdrawal, the average estimated underlying trend (coefficient b1) for cefepime was decreasing by -0.047 (95% CI -0.086, -0.009) DDD/100 bed-days per month and was significant in three hospitals (group A, P < 0.01). Cefepime withdrawal was associated with a significant increase in the level of use (b2) of piperacillin/tazobactam and imipenem/cilastatin in, respectively, one and five hospitals from group A. After the withdrawal, the average estimated trend (b3) was greatest for piperacillin/tazobactam (+0.043 DDD/100 bed-days per month; 95% CI -0.001, 0.089) and was significant in four hospitals from group A (P < 0.05). The hospital without drug shortage showed no significant change in the trend and the level of use. The hypothesis of seasonality was rejected in all hospitals.
Conclusions: The decreased use of cefepime already observed before its withdrawal from the market could be explained by pre-existing difficulty in drug supply. The withdrawal of cefepime resulted in a change in level for piperacillin/tazobactam and imipenem/cilastatin. Moreover, an increase in trend was found for piperacillin/tazobactam thereafter. As these changes generally occur at the price of lower bacterial susceptibility, a manufacturers' commitment to avoid shortages in the supply of their products would be important. As a perspective, we will measure the impact of the changes on the costs and susceptibility rates of these antibiotics.
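The segmented regression underlying such an interrupted time series design can be sketched with four coefficients: baseline level, baseline trend, level change at the interruption, and trend change after it. The sketch below fits by plain OLS on synthetic data (not the Swiss hospital data) and ignores the autocorrelated-error correction the study used.

```python
# Segmented (interrupted time series) regression sketch:
#   use = b0 + b1*t + b2*level + b3*trend_after,
# where "level" flags months after the withdrawal and "trend_after"
# counts months since it. Synthetic data; plain OLS, no AR errors.

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

t0 = 12  # month index of the withdrawal
# Synthetic DDD/100 bed-days: flat before, jump of +2 and slope +0.3 after.
y = [5.0 + (2.0 + 0.3 * (t - t0) if t >= t0 else 0.0) for t in range(24)]
X = [[1.0, t, 1.0 if t >= t0 else 0.0, (t - t0) if t >= t0 else 0.0]
     for t in range(24)]

# Ordinary least squares via the normal equations X'X beta = X'y.
XtX = [[sum(r[i] * r[j] for r in X) for j in range(4)] for i in range(4)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(4)]
b0, b1, b2, b3 = solve(XtX, Xty)
print(b2, b3)  # recovers the level change (~2.0) and trend change (~0.3)
```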

Relevance:

100.00%

Publisher:

Abstract:

Wastewater application to soil is an alternative for fertilization and water reuse. However, particular care must be taken with this practice, since successive wastewater applications can cause soil salinization. Time-domain reflectometry (TDR) allows for the simultaneous and continuous monitoring of both soil water content and apparent electrical conductivity, and thus for the indirect measurement of the electrical conductivity of the soil solution. This study aimed to evaluate the suitability of TDR for the indirect determination of the electrical conductivity (ECse) of the saturated soil extract by using an empirical equation for the TDR apparatus Trase 6050X1. Disturbed soil samples saturated with swine wastewater were used, at soil proportions of 0, 0.45, 0.90, 1.80, 2.70, and 3.60 m³ m-3. The probes were equipped with three handmade 0.20 m long rods. The fit of the empirical model that associated the TDR measured values of electrical conductivity (EC TDR) to ECse was excellent, indicating this approach as suitable for the determination of the electrical conductivity of the soil solution.
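An empirical calibration of this kind amounts to regressing the extract conductivity on the TDR reading. The sketch below fits a straight line by least squares; the linear form and every number are illustrative assumptions, not the paper's fitted equation for the Trase 6050X1.

```python
# Minimal sketch of fitting an empirical calibration relating TDR-measured
# bulk electrical conductivity (EC_TDR) to the saturation-extract value
# (ECse). Hypothetical data and a hypothetical linear model form.
ec_tdr = [0.4, 0.9, 1.6, 2.4, 3.1]   # dS/m, hypothetical TDR readings
ec_se  = [1.1, 2.2, 3.9, 5.8, 7.4]   # dS/m, hypothetical extract values

n = len(ec_tdr)
mx = sum(ec_tdr) / n
my = sum(ec_se) / n
slope = sum((x - mx) * (y - my) for x, y in zip(ec_tdr, ec_se)) / \
        sum((x - mx) ** 2 for x in ec_tdr)
intercept = my - slope * mx

# Coefficient of determination as a measure of fit quality.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(ec_tdr, ec_se))
ss_tot = sum((y - my) ** 2 for y in ec_se)
r2 = 1.0 - ss_res / ss_tot
print(slope, intercept, r2)
```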

Relevance:

100.00%

Publisher:

Abstract:

Due to the difficulty of estimating water percolation in unsaturated soils, the purpose of this study was to estimate water percolation based on time-domain reflectometry (TDR). In two drainage lysimeters with different soil textures, TDR probes were installed, forming a water monitoring system consisting of different numbers of probes. The soils were saturated and covered with plastic to prevent evaporation. Tests of internal drainage were carried out using a TDR 100 unit with constant dielectric readings (every 15 min). To test the consistency of TDR-estimated percolation levels in comparison with the observed leachate levels in the drainage lysimeters, the combined null hypothesis was tested at 5 % probability. A higher number of probes in the water monitoring system resulted in an approximation of the percolation levels estimated from TDR-based moisture data to the levels measured by lysimeters. The number of probes required for water monitoring to estimate water percolation by TDR depends on the soil physical properties. For sandy clay soils, three batteries with four probes installed at depths of 0.20, 0.40, 0.60, and 0.80 m, at distances of 0.20, 0.40 and 0.60 m from the center of the lysimeters, were sufficient to estimate percolation levels equivalent to those observed. In the sandy loam soils, the observed and predicted percolation levels were not equivalent even when using four batteries with four probes each, at depths of 0.20, 0.40, 0.60, and 0.80 m.
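The water-balance reasoning behind the TDR estimate can be sketched simply: with evaporation prevented, the water percolated between two readings equals the drop in stored water, i.e. the sum over monitored layers of moisture change times layer thickness. The numbers below are illustrative, not the study's data.

```python
# Water-balance sketch: percolation between two TDR readings equals the
# decrease in profile water storage when evaporation is prevented.
depths = [0.20, 0.20, 0.20, 0.20]        # m, layer thickness per probe depth
theta_t1 = [0.34, 0.33, 0.32, 0.31]      # m3/m3, moisture at first reading
theta_t2 = [0.31, 0.31, 0.30, 0.30]      # m3/m3, moisture 15 min later

storage_drop = sum((a - b) * d for a, b, d in zip(theta_t1, theta_t2, depths))
percolation_mm = storage_drop * 1000.0   # m of water -> mm
print(percolation_mm)
```

More probes refine the moisture profile between depths, which is why the estimate converges to the lysimeter-measured leachate as probe density increases.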

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new method to analyze time-invariant linear networks allowing the existence of inconsistent initial conditions. The method is based on the use of distributions and state equations. Any time-invariant linear network can be analyzed, and the network can involve any kind of pure or controlled sources. The energy transfers that occur at t = 0 are also determined, and the concept of connection energy is introduced. The algorithms are easily implemented in a computer program.
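The classic two-capacitor circuit illustrates the kind of "connection energy" at t = 0 the abstract refers to: closing an ideal switch between capacitors charged to different voltages is an inconsistent initial condition, and an impulsive current redistributes charge instantaneously. The component values below are illustrative; this is textbook circuit theory, not the paper's distributional formulation.

```python
# Two-capacitor example of an inconsistent initial condition: at t = 0 an
# ideal switch connects C1 (charged to v1) and C2 (charged to v2). Charge
# is conserved through the impulse, so both end at the same voltage, and
# the energy difference is transferred at t = 0 ("connection energy").
C1, v1 = 1e-6, 10.0   # farads, volts before the connection
C2, v2 = 1e-6, 0.0

v_final = (C1 * v1 + C2 * v2) / (C1 + C2)  # charge conservation

e_before = 0.5 * C1 * v1 ** 2 + 0.5 * C2 * v2 ** 2
e_after = 0.5 * (C1 + C2) * v_final ** 2
connection_energy = e_before - e_after  # energy transferred at t = 0
print(v_final, connection_energy)
```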