943 results for fixed point method
Abstract:
A new control algorithm using a parallel braking resistor (BR) and a serial fault current limiter (FCL) for power system transient stability enhancement is presented in this paper. The proposed control algorithm can prevent transient instability during the first swing by immediately dissipating the transient energy gained during the fault period. It can also reduce generator oscillation time and efficiently drive the system back to the post-fault equilibrium. The algorithm is based on a new system-energy-function method for choosing the optimal switching point: the parallel BR and serial FCL resistor are switched at the calculated optimal point to obtain the best control result. This allows optimal dissipation of the transient energy caused by the disturbance, so that the system returns to equilibrium in minimum time. Case studies are given to verify the efficiency and effectiveness of the new control algorithm.
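The abstract does not give the paper's own multi-machine energy function; as an illustration of the general idea, the classical transient energy function for a single machine against an infinite bus (generic symbols, not the paper's notation) is

```latex
% Classical single-machine-infinite-bus transient energy function
% (illustration only; the paper's system energy function is not given here):
V(\delta,\omega) = \tfrac{1}{2} M \omega^{2}
  - P_m (\delta - \delta_s)
  - P_{\max}\,(\cos\delta - \cos\delta_s)
```

where M is the machine inertia, δ_s the stable post-fault equilibrium angle, P_m the mechanical power and P_max the peak electrical power transfer. Switching the BR in when the kinetic term ½Mω² is largest removes the most transient energy per switching operation, which is the intuition behind choosing an optimal switching point.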
Abstract:
Over 60% of the recurrent budget of the Ministry of Health (MoH) in Angola is spent on the operation of fixed health care facilities (health centres plus hospitals). However, to date, no study has investigated how efficiently those resources are used to produce health services. The objectives of this study were therefore to assess the technical efficiency of public municipal hospitals in Angola; to assess changes in productivity over time, with a view to analyzing changes in efficiency and technology; and to demonstrate how the results can be used in pursuit of the public health objective of promoting efficiency in the use of health resources. The analysis was based on 3-year panel data from all 28 public municipal hospitals in Angola. Data Envelopment Analysis (DEA), a non-parametric linear programming approach, was employed to assess technical and scale efficiency, and productivity change over time was measured using the Malmquist index. The results show that, on average, productivity of municipal hospitals in Angola increased by 4.5% over the period 2000-2002; this growth was due to improvements in efficiency rather than innovation. © 2008 Springer Science+Business Media, LLC.
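DEA scores each decision-making unit (here, a hospital) by solving one linear program per unit. The sketch below implements the standard input-oriented CCR envelopment model on hypothetical data with scipy; the study's actual inputs, outputs, orientation and the Malmquist decomposition are not reproduced here.

```python
# Minimal input-oriented CCR DEA sketch (hypothetical data, not the study's model).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: inputs, shape (m_inputs, n_dmus); Y: outputs, shape (s_outputs, n_dmus)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j x_ij <= theta * x_i,j0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # Outputs: sum_j lambda_j y_rj >= y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    return res.fun  # efficiency score in (0, 1]

# Hypothetical example: 2 inputs (beds, staff), 1 output (admissions), 4 hospitals
X = np.array([[20., 30., 40., 25.], [50., 60., 90., 55.]])
Y = np.array([[100., 130., 160., 120.]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```

A score of 1 marks a hospital on the efficient frontier; smaller values give the proportional input contraction needed to reach it.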
Abstract:
Having a fixed differential-group-delay (DGD) term b′ in the coarse-step method results in a repetitive pattern in the autocorrelation function (ACF). We solve this problem by inserting a varying DGD term at each integration step. Furthermore, we compute the range of values needed for b′ and simulate the phenomenon of polarisation mode dispersion for different statistical distributions of b′. Through our simulation results, we systematically compare the modified coarse-step method against the analytical model. © 2006 Elsevier B.V. All rights reserved.
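As a sketch of the modified coarse-step idea, the fiber below is modeled as a concatenation of birefringent sections with random polarization rotations, with the per-section DGD drawn from a Gaussian distribution rather than held fixed; the instantaneous DGD is then read off by Jones-matrix eigenanalysis at two closely spaced frequencies. Section count, mean DGD and spread are illustrative assumptions, not the paper's values.

```python
# Coarse-step PMD sketch with Gaussian-distributed per-section DGD (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def section(omega, tau, theta):
    """One coarse step: random polarization rotation, then a birefringent delay tau."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]], dtype=complex)
    B = np.diag([np.exp(1j * omega * tau / 2), np.exp(-1j * omega * tau / 2)])
    return B @ R

def fiber_jones(omega, taus, thetas):
    J = np.eye(2, dtype=complex)
    for tau, th in zip(taus, thetas):
        J = section(omega, tau, th) @ J
    return J

n_sections, mean_dgd, spread = 80, 0.5e-12, 0.1e-12   # illustrative values (s)
taus = rng.normal(mean_dgd, spread, n_sections)        # varying DGD per step
thetas = rng.uniform(0, 2 * np.pi, n_sections)

# Jones-matrix eigenanalysis: DGD from two closely spaced optical frequencies
omega, domega = 1.2e15, 1e9
J1 = fiber_jones(omega, taus, thetas)
J2 = fiber_jones(omega + domega, taus, thetas)
rho = np.linalg.eigvals(J2 @ np.linalg.inv(J1))
dgd = abs(np.angle(rho[0] / rho[1])) / domega
print(f"instantaneous DGD ~ {dgd * 1e12:.3f} ps")
```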
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
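As a rough illustration of Picard iteration with precision grown between sweeps, the sketch below uses mpmath as the arbitrary-precision floating-point library and a trapezium rule for the integral operator. It is plain floating point with an ad hoc precision schedule, an assumption for illustration rather than the paper's validated solver or its statically optimised precision growth.

```python
# Picard iteration sketch with growing working precision (illustrative only).
import mpmath as mp

def picard(f, y0, t_end, n_iters=8, n_nodes=64):
    y0 = mp.mpf(y0)
    ts = [mp.mpf(t_end) * k / n_nodes for k in range(n_nodes + 1)]
    ys = [y0] * (n_nodes + 1)                  # initial guess: constant y0
    for it in range(n_iters):
        mp.mp.dps = 15 + 10 * it               # grow working precision per sweep
        acc = mp.mpf(0)
        new = [y0]
        for k in range(n_nodes):
            # y_{n+1}(t) = y0 + integral_0^t f(s, y_n(s)) ds  (trapezium rule)
            h = ts[k + 1] - ts[k]
            acc += h * (f(ts[k], ys[k]) + f(ts[k + 1], ys[k + 1])) / 2
            new.append(y0 + acc)
        ys = new
    return ts, ys

ts, ys = picard(lambda t, y: y, 1, 1)          # y' = y, y(0) = 1
print(ys[-1], mp.e)                            # y(1) should approach e
```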
Abstract:
Having a fixed differential group delay (DGD) term in the coarse-step method results in a periodic pattern in the autocorrelation function. This effect is mitigated by inserting a varying DGD term at each integration step, drawn from a Gaussian distribution. Simulation results are given to illustrate the phenomenon and provide some evidence about its statistical nature.
Abstract:
This thesis presents new methodology and algorithms for analysing and measuring the hand tremor and fatigue of surgeons while performing surgery. This will assist them in deriving useful information about their fatigue levels and make them aware of changes in their tool-point accuracy. The thesis proposes that the muscular changes of surgeons occurring through a day of operating can be monitored using Electromyography (EMG) signals. Multi-channel EMG signals are measured at different muscles in the upper arm of surgeons. The dependence between EMG signals was examined to test the hypothesis that the signals are coupled with and dependent on each other. The results demonstrated that EMG signals collected from different channels while mimicking an operating posture are independent; consequently, single-channel fatigue analysis was performed. For measuring hand tremor, a new method for determining the maximum tremor amplitude using Principal Component Analysis (PCA) and a new technique for detrending acceleration signals using the Empirical Mode Decomposition algorithm were introduced. This tremor determination method is more representative for surgeons, and it is suggested as an alternative fatigue measure. It was combined with the complexity analysis method and applied to surgically captured data to determine whether operating has an effect on a surgeon's fatigue and tremor levels. It was found that surgical tremor and fatigue develop throughout a day of operating and that this could be determined based solely on their initial values. Finally, several Nonlinear AutoRegressive with eXogenous inputs (NARX) neural networks were evaluated. The results suggest that it is possible to monitor surgeon tremor variations during surgery from their EMG fatigue measurements.
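As an illustration of the PCA-based tremor-amplitude idea, the sketch below projects synthetic triaxial accelerometer data onto its first principal component and reads off the maximum amplitude along that dominant axis. The synthetic signal, the crude mean-removal detrend (standing in for the thesis's EMD detrending) and all parameters are assumptions for illustration.

```python
# PCA-based maximum tremor amplitude sketch on synthetic triaxial data.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 2500)                       # 5 s at 500 Hz (illustrative)
axis = np.array([0.6, 0.7, 0.4]); axis /= np.linalg.norm(axis)
tremor = 0.02 * np.sin(2 * np.pi * 9 * t)         # ~9 Hz physiological tremor
acc = np.outer(tremor, axis) + 0.003 * rng.standard_normal((t.size, 3))

X = acc - acc.mean(axis=0)                        # centre (crude detrend)
_, s, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
principal = X @ Vt[0]                             # projection onto first PC
print("max tremor amplitude ~", np.max(np.abs(principal)))
```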
Abstract:
A new instrument and method are described that allow the hydraulic conductivity of highly permeable porous materials, such as gravels in constructed wetlands, to be determined in the field. The instrument consists of a Mariotte siphon and a submersible permeameter cell with manometer take-off tubes, recreating in situ the constant-head permeameter test typically used with excavated samples. It allows permeability to be measured at different depths and positions over the wetland. Repeatability obtained at fixed positions was good (normalised standard deviation of 1–4%), and results obtained for a highly homogeneous silica sand compared well when the sand was retested in a laboratory permeameter (0.32 mm/s and 0.31 mm/s respectively). Practical results have an associated uncertainty of ±30% because of the combined effect of natural variation in gravel core profiles and disruption of interstitial clogging during insertion of the tube into the gravel. This error is small, however, compared to the orders-of-magnitude spatial variations detected. The technique was used to survey the hydraulic conductivity profile of two constructed wetlands in the UK, aged 1 and 15 years respectively. Measured values were high (up to 900 mm/s) and varied by three orders of magnitude, reflecting the immaturity of the wetland. Detailed profiling of the younger system suggested the existence of preferential flow paths at a depth of 200 mm, corresponding to the transition between more coarse and less coarse gravel layers (6–12 mm and 3–6 mm respectively), and transverse drift towards the outlet.
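The constant-head test recreated by the instrument infers conductivity from Darcy's law; in the standard form (generic symbols, not necessarily the paper's notation), the hydraulic conductivity K follows from the measured flow rate Q, the cell cross-sectional area A, and the head loss Δh across a manometer spacing L:

```latex
% Constant-head permeameter, standard Darcy form (generic notation):
K = \frac{Q\,L}{A\,\Delta h}
```

For example (hypothetical numbers), Q = 50 cm³/s through a 100 cm² cell with Δh = 2 cm over L = 10 cm gives K = 50·10/(100·2) = 2.5 cm/s = 25 mm/s, of the same order as values reported in the survey.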
Abstract:
Terahertz optical asymmetric demultiplexers (TOADs) use a semiconductor optical amplifier in an interferometer to create an all-optical switch, and have potential uses in many optical networking applications. Here we experimentally demonstrate and compare a novel and simple method of dramatically increasing the extinction ratio of the device, using a symmetrical configuration in place of the 'traditional' configuration. The new configuration is designed to suppress the occurrence of self-switching in the device, thus allowing signal pulses to be used at higher power levels. Using the proposed configuration, an increase in extinction ratio of 10 dB has been measured at the transmitted port, whilst benefiting from improved input signal power handling capability.
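For reference, the extinction ratio quoted in decibels is the usual ratio of transmitted powers between the switched and unswitched states (standard definition, not specific to this device):

```latex
% Extinction ratio in decibels (standard definition):
ER_{\mathrm{dB}} = 10 \log_{10} \frac{P_{\mathrm{on}}}{P_{\mathrm{off}}}
```

so the reported 10 dB improvement corresponds to a tenfold increase in the on/off power ratio.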
Abstract:
This paper investigates the random channel access mechanism specified in the IEEE 802.16 standard for uplink traffic in a Point-to-MultiPoint (PMP) network architecture. An analytical model is proposed to study the impact of the channel access parameters, bandwidth configuration and piggyback policy on performance. The impacts of the physical burst profile and non-saturated network traffic are also taken into account in the model. Simulations validate the proposed analytical model. It is observed that bandwidth utilization can be improved if the bandwidth for random channel access is properly configured according to the channel access parameters, piggyback policy and network traffic.
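The sketch below is a toy Monte Carlo of contention-based requests, not the paper's analytical model: each station picks one of k request opportunities uniformly at random, and an opportunity succeeds only if exactly one station picks it. All parameters are illustrative assumptions.

```python
# Toy simulation of contention-based bandwidth requests (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

def success_rate(n_stations, n_slots, trials=10000):
    """Fraction of request slots carrying exactly one (successful) request."""
    picks = rng.integers(0, n_slots, size=(trials, n_stations))
    ok = [(np.bincount(row, minlength=n_slots) == 1).sum() for row in picks]
    return np.mean(ok) / n_slots

for k in (4, 8, 16, 32):
    print(k, round(success_rate(16, k), 3))
```

Sweeping k illustrates the utilization trade-off the abstract refers to: too few slots cause collisions, while too many leave slots idle.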
Abstract:
The IEEE 802.16 standard specifies two contention-based bandwidth request schemes working with the OFDM physical layer specification in the point-to-multipoint (PMP) architecture: the mandatory scheme used in region-full and the optional scheme used in region-focused. This letter presents a unified analytical model to study the bandwidth efficiency and channel access delay performance of the two schemes. The impacts of access parameters, available bandwidth and subchannelization are taken into account. The model is validated by simulations. The mandatory scheme is observed to perform comparably to the optional one when subchannelization is active for both schemes.
Abstract:
We present recent results on experimental micro-fabrication and numerical modeling of advanced photonic devices by means of direct writing with a femtosecond laser. A transverse inscription geometry was routinely used to inscribe and modify photonic devices based on waveguiding structures. Typically, standard commercially available fibers were used as a template with a pre-fabricated waveguide. Using direct, point-by-point inscription by an infrared femtosecond laser, a range of fiber-based photonic devices was fabricated, including Fiber Bragg Gratings (FBG) and Long Period Gratings (LPG). Waveguides with a core of a couple of microns, periodic structures, and couplers have also been fabricated in planar geometry using the same method.
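In point-by-point inscription each femtosecond pulse writes one refractive-index feature, so the pulse-to-pulse pitch Λ directly sets the FBG resonance through the standard Bragg condition (generic form; the paper's inscription parameters are not given in the abstract):

```latex
% First-order Bragg condition for a grating of period \Lambda:
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda
```

For example, with an effective index n_eff ≈ 1.447 in standard fiber, a first-order grating at λ_B = 1550 nm requires Λ ≈ 536 nm.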
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the one-dimensional parabolic inverse Cauchy–Stefan problem, where boundary data and the initial condition are to be determined from Cauchy data prescribed on a given moving interface. In [B.T. Johansson, D. Lesnic, and T. Reeve, A method of fundamental solutions for the one-dimensional inverse Stefan problem, Appl. Math. Model. 35 (2011), pp. 4367–4378], the inverse Stefan problem was considered, where only the boundary data are to be reconstructed on the fixed boundary. We extend the MFS proposed in Johansson et al. (2011) and show that the initial condition can also be simultaneously recovered, i.e. the MFS is appropriate for the inverse Cauchy–Stefan problem. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate results can be obtained efficiently with small computational cost.
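The sketch below illustrates the MFS mechanics on a simpler fixed-domain 1-D heat-equation Cauchy problem: heat kernels with sources placed outside the space-time domain are fitted to value and flux data on one boundary, and the fitted expansion is then evaluated on the other. Source placement, the Tikhonov parameter and the synthetic test solution u = e^{x+t} are assumptions for illustration; the paper itself treats a moving (Stefan) interface.

```python
# MFS sketch for a 1-D heat-equation Cauchy problem (illustrative configuration).
import numpy as np

def F(x, t, xi, tau):
    """Fundamental solution of u_t = u_xx (zero for t <= tau) and its x-derivative."""
    dt = t - tau
    with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
        val = np.exp(-(x - xi) ** 2 / (4 * dt)) / np.sqrt(4 * np.pi * dt)
        dval = -(x - xi) / (2 * dt) * val
    return np.where(dt > 0, val, 0.0), np.where(dt > 0, dval, 0.0)

# Source points placed outside the space-time domain [0,1] x [0,1]
src_x = np.r_[np.full(20, -0.5), np.full(20, 1.5)]
src_t = np.tile(np.linspace(-0.5, 1.0, 20), 2)

# Cauchy data (value and flux) on x = 1 from the exact solution u = e^{x+t}
tm = np.linspace(0.05, 1.0, 40)[:, None]
A_u, A_q = F(1.0, tm, src_x[None, :], src_t[None, :])
A = np.vstack([A_u, A_q])
b = np.r_[np.exp(1.0 + tm[:, 0]), np.exp(1.0 + tm[:, 0])]   # here u_x = u

lam = 1e-10                                     # Tikhonov regularisation
c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Reconstruct the (unknown) boundary data on x = 0 and compare with the truth
B_u, _ = F(0.0, tm, src_x[None, :], src_t[None, :])
print("max error on x = 0:", np.max(np.abs(B_u @ c - np.exp(tm[:, 0]))))
```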
Abstract:
Objectives: To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Design: Mixed method evaluation involving five substudies, before and after design. Setting: NHS hospitals in the United Kingdom. Participants: Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention: The SPI1 was a compound (multicomponent) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5-point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about the legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration - monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items) - there was little net difference between control and SPI1 hospitals, except in relation to the quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission, recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even when there was more room for improvement (such as in the quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals.
Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and in one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues, nor on other measures of generic organisational strengthening.
Abstract:
We extend a meshless method of fundamental solutions, recently proposed by the authors for the one-dimensional two-phase inverse linear Stefan problem, to the nonlinear case. In this latter situation the free surface is also considered unknown, which is more realistic from a practical point of view. Building on the earlier work, the solution is approximated in each phase by a linear combination of fundamental solutions to the heat equation. The implementation and analysis are more complicated in the present situation, since one needs to deal with a nonlinear minimization problem to identify the free surface. Furthermore, the inverse problem is ill-posed, since small errors in the input measured data can cause large deviations in the desired solution. Therefore, regularization needs to be incorporated into the objective function which is minimized in order to obtain a stable solution. Numerical results are presented and discussed. © 2014 IMACS.
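A generic form of such a regularised objective (notation illustrative, not taken from the paper) penalises the data misfit of the MFS approximation, which depends on the MFS coefficients c and the free-surface parameters s, with Tikhonov terms:

```latex
% Generic Tikhonov-regularised least-squares objective (illustrative notation):
J(\mathbf{c},\mathbf{s}) =
  \left\| u(\cdot;\mathbf{c},\mathbf{s}) - u^{\mathrm{meas}} \right\|_2^{2}
  + \lambda_1 \|\mathbf{c}\|_2^{2} + \lambda_2 \|\mathbf{s}\|_2^{2}
```

where the regularisation parameters λ1, λ2 > 0 trade data fit against stability under noise in the measured data.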