967 results for Average Method
Abstract:
In the English literature, facial approximation methods have commonly been classified into three types: Russian, American, or Combination. These categorizations are based on the protocols used, for example, whether methods use average soft-tissue depths (American methods) or require face muscle construction (Russian methods). However, literature searches outside the usual realm of English publications reveal key papers demonstrating that the Russian category above has been founded on distorted views. In reality, Russian methods are based on limited face muscle construction, with heavy reliance on modified average soft-tissue depths. A closer inspection of the American method also reveals inconsistencies with the recognized classification scheme. This investigation thus demonstrates that all major methods of facial approximation depend on both face anatomy and average soft-tissue depths, rendering common classification schemes redundant. The best way forward appears to be for practitioners to describe the methods they use (including the weight each method gives to average soft-tissue depths and to deep face tissue construction) without placing them in a categorical group or giving them an ambiguous name. This position may need to be reviewed in the future in light of new research results and paradigms.
Abstract:
We present a theoretical method for a direct evaluation of the average error exponent in Gallager error-correcting codes using methods of statistical physics. Results for the binary symmetric channel (BSC) are presented for codes of both finite and infinite connectivity.
Abstract:
We present a theoretical method for a direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel are presented for codes of both finite and infinite connectivity.
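The two abstracts above concern error exponents of codes on the BSC. For orientation, here is a minimal sketch of Gallager's classical random-coding exponent for the BSC, a textbook baseline rather than the statistical-physics evaluation the papers describe; the crossover probability and rates are illustrative values:

```python
import numpy as np

def random_coding_exponent(R, p, grid=10001):
    """Gallager's random-coding exponent E_r(R) = max_{0<=rho<=1} [E0(rho) - rho*R]
    for a BSC with crossover probability p; rate R in bits per channel use."""
    rho = np.linspace(0.0, 1.0, grid)
    a = 1.0 / (1.0 + rho)
    # E0(rho) for the BSC with a uniform input distribution
    E0 = rho - (1.0 + rho) * np.log2(p ** a + (1.0 - p) ** a)
    return float(np.max(E0 - rho * R))

p = 0.05                                            # illustrative crossover probability
C = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)   # BSC capacity, 1 - H(p)
for R in (0.2, 0.4, 0.6):
    print(f"R = {R:.1f}: E_r = {random_coding_exponent(R, p):.4f} (capacity = {C:.3f})")
```

The exponent is positive for all rates below capacity and decays to zero as R approaches C, which is the behaviour the statistical-physics treatments recover for specific code ensembles.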
Abstract:
Over 60% of the recurrent budget of the Ministry of Health (MoH) in Angola is spent on the operations of fixed health care facilities (health centres plus hospitals). However, to date, no study has attempted to investigate how efficiently those resources are used to produce health services. The objectives of this study were therefore to assess the technical efficiency of public municipal hospitals in Angola; to assess changes in productivity over time, with a view to analyzing changes in efficiency and technology; and to demonstrate how the results can be used in pursuit of the public health objective of promoting efficiency in the use of health resources. The analysis was based on 3-year panel data from all 28 public municipal hospitals in Angola. Data Envelopment Analysis (DEA), a non-parametric linear programming approach, was employed to assess technical and scale efficiency, and productivity change over time was measured using the Malmquist index. The results show that, on average, the productivity of municipal hospitals in Angola increased by 4.5% over the period 2000-2002, and that this growth was due to improvements in efficiency rather than innovation. © 2008 Springer Science+Business Media, LLC.
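As a concrete illustration of the DEA step, below is a minimal sketch of the input-oriented, constant-returns-to-scale (CCR) envelopment linear program for one decision-making unit, with entirely hypothetical hospital inputs and outputs (the study's actual data and Malmquist computation are not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.
    X: (m, n) input matrix, Y: (s, n) output matrix for n units."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta; variables = [theta, lambdas]
    A_in = np.hstack([-X[:, [k]], X])            # sum_j lam_j * x_ij <= theta * x_ik
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # sum_j lam_j * y_rj >= y_rk
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, k]])
    return res.fun                               # theta <= 1; theta = 1 means efficient

# hypothetical data: 2 inputs (beds, staff), 1 output (patient-days), 4 hospitals
X = np.array([[20., 30., 40., 25.],
              [50., 60., 90., 55.]])
Y = np.array([[400., 540., 600., 500.]])
for k in range(4):
    print(f"hospital {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```

The Malmquist index used in the study is then built from ratios of such distance functions evaluated across the two periods, with its geometric-mean decomposition separating efficiency change from technical change.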
Abstract:
To carry out stability and voltage regulation studies on more-electric aircraft systems, in which multi-pulse, rectifier-fed motor-drive equipment predominates, average dynamic models of the rectifier converters are required. Existing methods are difficult to apply to anything other than single converters with a low pulse number. An efficient, compact method is therefore presented for deriving the approximate, linear, average model of 6- and 12-pulse rectifiers, based on the assumption of a small overlap angle. The models are validated against detailed simulations and laboratory prototypes.
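For orientation, the classical steady-state average-value relations for a 6-pulse diode bridge under the small-overlap assumption are sketched below. These are textbook relations, not the paper's derived linear dynamic model, and all parameter values are hypothetical:

```python
import numpy as np

# hypothetical operating point for an aircraft-style supply
V_ll = 115.0 * np.sqrt(3)   # line-to-line rms supply voltage, V
f = 400.0                   # supply frequency, Hz
L_c = 50e-6                 # commutation (source) inductance per phase, H
I_d = 40.0                  # average DC-link current, A

w = 2 * np.pi * f
V_d0 = 3 * np.sqrt(2) / np.pi * V_ll        # ideal no-load average output voltage
dV_comm = 3 * w * L_c / np.pi * I_d         # average commutation (overlap) voltage drop
V_d = V_d0 - dV_comm

# overlap angle mu for a diode bridge (zero firing angle)
mu = np.arccos(1 - 2 * w * L_c * I_d / (np.sqrt(2) * V_ll))

print(f"V_d0 = {V_d0:.1f} V, commutation drop = {dV_comm:.1f} V, V_d = {V_d:.1f} V")
print(f"overlap angle = {np.degrees(mu):.2f} deg "
      f"({'small-overlap assumption holds' if mu < np.radians(20) else 'assumption questionable'})")
```

The commutation drop behaves like a fictitious series resistance of value 3*w*L_c/pi, which is why a compact linear average model is attainable when the overlap angle stays small.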
Abstract:
Objectives: To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Design: Mixed method evaluation involving five substudies, before and after design. Setting: NHS hospitals in the United Kingdom. Participants: Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention: The SPI1 was a compound (multicomponent) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about the legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration - monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items) - there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission, recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even where there was more room for improvement (such as in quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals.
Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and in one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues, nor on other measures of generic organisational strengthening.
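The "difference in difference" odds ratio quoted above has a simple closed form. A minimal sketch using the unadjusted respiratory-rate proportions given in the abstract follows; note the published 2.1 was risk adjusted, so the raw figure computed here differs:

```python
def odds(p):
    """Convert a proportion into odds."""
    return p / (1 - p)

# respiratory-rate recording, second six hours after admission (from the abstract)
ctrl_pre, ctrl_post = 0.40, 0.69   # control hospitals, before and after
spi_pre, spi_post = 0.37, 0.78     # SPI1 hospitals, before and after

or_ctrl = odds(ctrl_post) / odds(ctrl_pre)   # within-group change, controls
or_spi = odds(spi_post) / odds(spi_pre)      # within-group change, SPI1
did_or = or_spi / or_ctrl                    # difference-in-difference odds ratio

print(f"controls OR = {or_ctrl:.2f}, SPI1 OR = {or_spi:.2f}, DiD OR = {did_or:.2f}")
# unadjusted DiD OR is about 1.8; the reported 2.1 (99% CI 1.0 to 4.3) is risk adjusted
```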
Abstract:
The relative distribution of rare-earth ions R3+ (Dy3+ or Ho3+) in the phosphate glass RAl0.30P3.05O9.62 was measured by employing the method of isomorphic substitution in neutron diffraction and, by taking the role of Al into explicit account, a self-consistent model of the glass structure was developed. The glass network is found to be made from corner-sharing PO4 tetrahedra in which there are, on average, 2.32(9) terminal oxygen atoms, OT, at 1.50(1) Å and 1.68(9) bridging oxygen atoms, OB, at 1.60(1) Å. The network-modifying R3+ ions bind to an average of 6.7(1) OT and are distributed such that 7.9(7) R–R nearest neighbours reside at 5.62(6) Å. The Al3+ ion also has a network-modifying role in which it helps to strengthen the glass through the formation of OT–Al–OT linkages. The connectivity of the R-centred coordination polyhedra in (M2O3)x(P2O5)1−x glasses, where M3+ denotes a network-modifying cation (R3+ or Al3+), is quantified in terms of a parameter fs. Methods for reducing the clustering of rare-earth ions in these materials are then discussed, based on a reduction of fs via the replacement of R3+ by Al3+ at fixed total modifier content or via a change of x to increase the number of OT available per network-modifying M3+ cation.
Abstract:
Neutron diffraction was used to measure the total structure factors for several rare-earth ion R3+ (La3+ or Ce3+) phosphate glasses with composition close to RAl0.35P3.24O10.12. By assuming isomorphic structures, difference function methods were employed to separate, essentially, those correlations involving R3+ from the remainder. A self-consistent model of the glass structure was thereby developed in which the Al correlations were taken into explicit account. The glass network was found to be made from interlinked PO4 tetrahedra having 2.2(1) terminal oxygen atoms, OT, at 1.51(1) Å, and 1.8(1) bridging oxygen atoms, OB, at 1.60(1) Å. Rare-earth cations bonded to an average of 7.5(2) OT nearest neighbors in a broad and asymmetric distribution. The Al3+ ion acted as a network modifier and formed OT-Al-OT linkages that helped strengthen the glass. The connectivity of the R-centered coordination polyhedra was quantified in terms of a parameter fs and used to develop a model for the dependence on composition of the Al-OT coordination number in R-Al-P-O glasses. By using recent 27Al nuclear-magnetic-resonance data, it was shown that this connectivity decreases monotonically with increasing Al content. The chemical durability of the glasses appeared to be at a maximum when the connectivity of the R-centered coordination polyhedra was at a minimum. The relation of fs to the glass transition temperature, Tg, was discussed.
Abstract:
In this paper, a constructive method for building data structures that solve an array maintenance problem is offered. These data structures are defined in terms of a previously defined family of digraphs that represent solutions to this problem. A prototype of the method, written in Haskell, is also presented.
Abstract:
A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
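scikit-learn has no built-in RVM, but an RVM-style sparse Bayesian regression with predictive intervals can be sketched by placing an ARD prior over radial-basis kernel features anchored at the training samples, which is essentially what the RVM does. The line intensities and concentrations below are random stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# hypothetical stand-in: rows = samples, cols = selected analytical line intensities
X_train = rng.uniform(0, 1, size=(23, 6))
y_train = X_train @ np.array([2.0, 0.5, 0.0, 1.5, 0.0, 0.3]) + rng.normal(0, 0.05, 23)
X_test = rng.uniform(0, 1, size=(5, 6))

# RVM-style model: ARD prior over RBF kernel features centred on the training set
K_train = rbf_kernel(X_train, X_train, gamma=1.0)
model = ARDRegression().fit(K_train, y_train)

# predictions carry a standard deviation, i.e. a probabilistic confidence interval
K_test = rbf_kernel(X_test, X_train, gamma=1.0)
mean, std = model.predict(K_test, return_std=True)
for m, s in zip(mean, std):
    print(f"predicted concentration: {m:.3f} +/- {1.96 * s:.3f} (approx. 95% interval)")
```

The ARD prior drives most kernel weights to zero, leaving a sparse set of "relevance" samples, which mirrors the sparsity property the abstract attributes to the RVM.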
Abstract:
Data fluctuation across multiple measurements in Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or by spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information from spectral data within the normal distribution is retained in the regression model, while information from outliers is suppressed or removed. Copper concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better overall performance in model robustness and convergence speed than four known weighting functions.
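A minimal sketch of the WLS-SVM re-weighting loop that the proposed RLS-SVM builds on: solve the LS-SVM linear system, weight samples by their standardized residuals, and re-solve. The segmented weighting below is a simple Hampel-style stand-in, not the paper's improved function, and the data are simulated:

```python
import numpy as np

def rbf(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def wls_svm_fit(X, y, gamma=10.0, sigma=1.0, n_reweight=2, c1=2.5, c2=3.0):
    """Weighted LS-SVM with Hampel-style segmented weighting of residuals."""
    n = len(y)
    K = rbf(X, X, sigma)
    v = np.ones(n)                                   # sample weights, start uniform
    for _ in range(n_reweight + 1):
        A = np.zeros((n + 1, n + 1))                 # LS-SVM dual linear system
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.diag(1.0 / (gamma * v))
        sol = np.linalg.solve(A, np.r_[0.0, y])
        b, alpha = sol[0], sol[1:]
        e = alpha / (gamma * v)                      # LS-SVM residuals
        s = 1.483 * np.median(np.abs(e - np.median(e))) + 1e-12  # robust MAD scale
        r = np.abs(e / s)
        v = np.where(r <= c1, 1.0,                   # segmented (Hampel-style) weights
            np.where(r <= c2, (c2 - r) / (c2 - c1), 1e-8))
    return alpha, b

# hypothetical spectra: intensities -> concentration, with two gross outliers
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (16, 4))
y = X @ np.array([1.0, 2.0, 0.5, 1.5]) + rng.normal(0, 0.02, 16)
y[3] += 1.0; y[9] -= 0.8                             # simulated shot-to-shot outliers
alpha, b = wls_svm_fit(X, y)
print("bias term:", round(b, 3))
```

The paper's contribution sits in the choice of the weighting segments and the residual statistics; the solve-weight-resolve skeleton shown here is the common WLS-SVM machinery.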
Reductions of peak-to-average power ratio and optical beat interference in cost-effective OFDMA-PONs
Abstract:
The peak-to-average power ratio (PAPR) and optical beat interference (OBI) effects are examined thoroughly in orthogonal frequency-division multiple access (OFDMA) passive optical networks (PONs) at signal bit rates up to ∼20 Gb/s per channel using cost-effective intensity modulation and direct detection (IM/DD). Single-channel OOFDM and upstream multichannel OFDM-PONs are investigated for up to six users. A number of techniques for mitigating the PAPR and OBI effects are presented and evaluated, including adaptive-loading algorithms such as bit/power-loading, clipping for PAPR reduction, and thermal detuning (TD) for OBI suppression. It is shown that the bit-loading algorithm is a very efficient PAPR reduction technique, reducing the PAPR by about 1.2 dB over 100 km of transmission. It is also revealed that the optimum method for suppressing the OBI is TD combined with bit-loading. For a targeted BER of 1 × 10^-3, the minimum allowed channel spacing is 11 GHz when employing six users. © 2013 Springer Science+Business Media New York.
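For reference, the PAPR of a discrete OFDM symbol is max|x[n]|^2 over mean|x[n]|^2. A minimal sketch of measuring it, together with the simple amplitude clipping mentioned above, follows; the subcarrier count and clipping ratio are hypothetical choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                        # number of subcarriers (hypothetical)
# QPSK symbols on all subcarriers
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)             # time-domain OFDM symbol

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

# amplitude clipping at a clipping ratio CR above the rms level
CR = 1.4
rms = np.sqrt(np.mean(np.abs(x) ** 2))
a = np.abs(x)
x_clip = np.where(a > CR * rms, CR * rms * x / np.maximum(a, 1e-12), x)

print(f"PAPR before clipping: {papr_db(x):.2f} dB")
print(f"PAPR after clipping:  {papr_db(x_clip):.2f} dB")
```

Clipping caps the PAPR near 20*log10(CR) dB at the cost of in-band distortion and spectral regrowth, which is why the paper weighs it against distortion-free alternatives such as bit/power-loading.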
Abstract:
A new measure called "implicit rating" is introduced, which might serve as a component of an early warning system. The proposed methodology relies on aggregating the experts' knowledge hidden in the transactional data of the interbank market for unsecured loans. Banks simultaneously assess each other's creditworthiness, and this assessment is reflected in partner limits and in interest rates. In the Hungarian interbank market, the overall trading volume and the average interest rate did not show any negative trend before the crisis of 2008; however, the average implicit partner limit started to decrease several months earlier, and hence it might serve as a stress indicator.
Abstract:
As congestion management strategies begin to put more emphasis on person trips than vehicle trips, the need for vehicle occupancy data has become more critical. The traditional methods of collecting these data include the roadside windshield method and the carousel method. These methods are labor-intensive and expensive. An alternative to these traditional methods is to make use of the vehicle occupancy information in traffic accident records. This method is cost effective and may provide better spatial and temporal coverage than the traditional methods. However, this method is subject to potential biases resulting from under- and over-involvement of certain population sectors and certain types of accidents in traffic accident records. In this dissertation, three such potential biases, i.e., accident severity, driver’s age, and driver’s gender, were investigated and the corresponding bias factors were developed as needed. The results show that although multi-occupant vehicles are involved in higher percentages of severe accidents than are single-occupant vehicles, multi-occupant vehicles in the whole accident vehicle population were not overrepresented in the accident database. On the other hand, a significant difference was found between the distributions of the ages and genders of drivers involved in accidents and those of the general driving population. An information system that incorporates adjustments for the potential biases was developed to estimate the average vehicle occupancies (AVOs) for different types of roadways on the Florida state roadway system. A reasonableness check of the results from the system shows AVO estimates that are highly consistent with expectations. In addition, comparisons of AVOs from accident data with the field estimates show that the two data sources produce relatively consistent results. While accident records can be used to obtain the historical AVO trends and field data can be used to estimate the current AVOs, no known methods have been developed to project future AVOs. Four regression models for the purpose of predicting weekday AVOs on different levels of geographic areas and roadway types were developed as part of this dissertation. The models show that such socioeconomic factors as income, vehicle ownership, and employment have a significant impact on AVOs.
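A minimal sketch of the bias-adjustment idea described above: reweight accident-involved vehicles so that the driver age/gender mix matches the general driving population before averaging occupancy. The strata, shares, and occupancies below are hypothetical illustrations, not the dissertation's Florida estimates:

```python
# each stratum: (share among accident-involved drivers,
#                share in the general driving population,
#                mean occupants per vehicle in accident records)
strata = {
    "young male":   (0.28, 0.18, 1.45),
    "young female": (0.16, 0.14, 1.60),
    "adult male":   (0.30, 0.36, 1.30),
    "adult female": (0.26, 0.32, 1.55),
}

# naive AVO: average occupancy weighted by the (biased) accident shares
naive = sum(acc * occ for acc, _, occ in strata.values())

# bias factor = population share / accident share, applied as a per-stratum weight
adj_num = sum(occ * acc * (pop / acc) for acc, pop, occ in strata.values())
adj_den = sum(acc * (pop / acc) for acc, pop, _ in strata.values())
adjusted = adj_num / adj_den

print(f"naive AVO:    {naive:.3f}")
print(f"adjusted AVO: {adjusted:.3f}")
```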
Abstract:
Annual Average Daily Traffic (AADT) is a critical input to many transportation analyses. By definition, AADT is the average 24-hour volume at a highway location over a full year. Traditionally, AADT is estimated using a mix of permanent and temporary traffic counts. Because field collection of traffic counts is expensive, it is usually done only for the major roads, leaving most local roads without any AADT information. However, AADTs are needed for local roads in many applications. For example, AADTs are used by state Departments of Transportation (DOTs) to calculate the crash rates of all local roads in order to identify the top five percent of hazardous locations for annual reporting to the U.S. DOT. This dissertation develops a new method for estimating AADTs for local roads using travel demand modeling. A major component of the new method is a parcel-level trip generation model that estimates the trips generated by each parcel. The model uses tax parcel data together with the trip generation rates and equations provided by the ITE Trip Generation Report. The generated trips are then distributed to existing traffic count sites using a parcel-level gravity-based trip distribution model. The all-or-nothing assignment method is then used to assign the trips onto the roadway network to estimate the final AADTs. The entire process was implemented in the Cube demand modeling system, with extensive spatial data processing in ArcGIS. To evaluate the performance of the new method, data from several study areas in Broward County, Florida, were used. The estimated AADTs were compared with those from two existing methods, using actual traffic counts as the ground truth. The results show that the new method performs better than both existing methods. One limitation of the new method is that it relies on Cube, which limits the number of zones to 32,000; accordingly, a study area exceeding this limit must be partitioned into smaller areas. Because AADT estimates for roads near the boundary areas were found to be less accurate, further research could examine the best way to partition a study area to minimize this impact.
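A minimal sketch of the gravity-model distribution step described above: parcel-generated productions are allocated to count sites in proportion to attraction times a distance-decay friction factor. All values are hypothetical; the dissertation's actual pipeline ran in Cube with ArcGIS preprocessing:

```python
import numpy as np

P = np.array([120., 80., 200.])        # trips produced by 3 parcels (hypothetical)
A = np.array([1.0, 2.5, 1.5, 0.8])     # attraction weights of 4 traffic count sites
D = np.array([[1.0, 2.0, 3.5, 5.0],    # parcel-to-site travel distances
              [2.5, 1.2, 2.0, 4.0],
              [4.0, 3.0, 1.5, 2.0]])

beta = 0.8
F = np.exp(-beta * D)                  # exponential distance-decay friction factor

# gravity model: T_ij = P_i * A_j * F_ij / sum_k (A_k * F_ik)
T = P[:, None] * (A * F) / (A * F).sum(axis=1, keepdims=True)

site_volumes = T.sum(axis=0)           # accumulated trips arriving at each count site
print(np.round(T, 1))
print("estimated volumes at count sites:", np.round(site_volumes, 1))
```

In the full method these distributed trips are then loaded onto the network with all-or-nothing assignment, so every origin-destination flow follows its single shortest path to produce the link-level AADT estimates.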