967 results for "Average method"
Abstract:
BACKGROUND AND OBJECTIVES: Nerve blocks using local anesthetics are widely used. High volumes are usually injected, which may predispose patients to associated adverse events. The introduction of ultrasound guidance facilitates volume reduction, but the minimal effective volume is unknown. In this study, we estimated the 50% effective dose (ED50) and 95% effective dose (ED95) volume of 1% mepivacaine relative to the cross-sectional area of the nerve for an adequate sensory block. METHODS: To reduce the number of healthy volunteers, we used a volume-reduction protocol based on the up-and-down procedure according to the Dixon average method. The ulnar nerve was scanned at the proximal forearm, and the cross-sectional area was measured by ultrasound. In the first volunteer, a volume of 0.4 mL/mm² of nerve cross-sectional area was injected under ultrasound guidance in close proximity to and around the nerve using a multiple-injection technique. The volume in the next volunteer was reduced by 0.04 mL/mm² in case of complete blockade and augmented by the same amount in case of incomplete sensory blockade within 20 mins. After 3 up-and-down cycles, ED50 and ED95 were estimated. Volunteers and physicians performing the block were blinded to the volume used. RESULTS: A total of 17 volunteers were investigated. The ED50 volume was 0.08 mL/mm² (SD, 0.01 mL/mm²), and the ED95 volume was 0.11 mL/mm² (SD, 0.03 mL/mm²). The mean cross-sectional area of the nerves was 6.2 mm² (SD, 1.0 mm²). CONCLUSIONS: Based on the ultrasound-measured cross-sectional area and using ultrasound guidance, a mean volume of 0.7 mL represents the ED95 dose of 1% mepivacaine to block the ulnar nerve at the proximal forearm.
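The dose-adjustment logic of the up-and-down procedure is easy to make concrete. The sketch below, in Python, uses the step size (0.04 mL/mm²), starting dose (0.4 mL/mm²) and sample size reported above; the simulated volunteer response and the reversal-based ED50 estimate are illustrative stand-ins, not the study's actual Dixon-type calculation.

```python
import statistics

STEP = 0.04   # mL/mm^2 adjustment between consecutive volunteers (from the abstract)
START = 0.40  # mL/mm^2 dose given to the first volunteer (from the abstract)

def up_and_down(block_successful, n_volunteers=17, start=START, step=STEP):
    """Run an up-and-down sequence.

    block_successful(dose) -> bool stands in for the clinical assessment
    of a complete sensory block at 20 minutes.
    """
    doses, outcomes = [], []
    dose = start
    for _ in range(n_volunteers):
        ok = block_successful(dose)
        doses.append(dose)
        outcomes.append(ok)
        # decrease the dose after a complete block, increase it after a failure
        dose = dose - step if ok else dose + step
    return doses, outcomes

def crude_ed50(doses, outcomes):
    """Crude ED50 estimate: mean of the doses at which reversals occurred."""
    reversals = [doses[i] for i in range(1, len(outcomes)) if outcomes[i] != outcomes[i - 1]]
    return statistics.mean(reversals) if reversals else None

if __name__ == "__main__":
    import random
    random.seed(1)
    # hypothetical volunteer response: success becomes likely above ~0.08 mL/mm^2
    responder = lambda d: random.random() < min(1.0, max(0.0, (d - 0.04) / 0.08))
    doses, outcomes = up_and_down(responder)
    print("tested doses:", [round(d, 2) for d in doses])
    print("approximate ED50:", crude_ed50(doses, outcomes))
```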
Abstract:
The wavelet packet transform decomposes a signal into a set of bases for time–frequency analysis. This decomposition creates an opportunity for implementing distributed data mining, where features are extracted from different wavelet packet bases and serve as feature vectors for applications. This paper presents a novel approach for integrated machine fault diagnosis based on localised wavelet packet bases of vibration signals. The best basis is first determined according to its classification capability. Data mining is then applied to extract features, and local decisions are drawn using Bayesian inference. A final conclusion is reached using a weighted average method in data fusion. A case study on rolling element bearing diagnosis shows that this approach can greatly improve the accuracy of diagnosis.
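As a minimal sketch of the fusion step only: assume each wavelet packet basis has already produced local class probabilities (e.g. from Bayesian inference) together with a reliability weight such as its validation accuracy; the numbers and class labels below are purely illustrative.

```python
import numpy as np

def fuse_local_decisions(local_posteriors, weights):
    """Weighted-average fusion of local decisions.

    local_posteriors: (n_bases, n_classes) array, one row of class
                      probabilities per wavelet packet basis.
    weights:          (n_bases,) reliability of each basis, e.g. its
                      classification accuracy on validation data.
    Returns the fused class probabilities and the winning class index.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalise the weights
    fused = w @ np.asarray(local_posteriors)  # weighted average over bases
    return fused, int(np.argmax(fused))

# toy example: three bases voting over {normal, inner-race fault, outer-race fault}
posteriors = [[0.7, 0.2, 0.1],
              [0.4, 0.5, 0.1],
              [0.6, 0.3, 0.1]]
weights = [0.9, 0.6, 0.8]                     # hypothetical per-basis accuracies
probs, label = fuse_local_decisions(posteriors, weights)
print(probs, label)
```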
Abstract:
Rating systems are used by many websites, which allow customers to rate available items according to their own experience. Subsequently, reputation models are used to aggregate the available ratings in order to generate reputation scores for items. A problem with current reputation models is that they focus on enhancing accuracy for sparse datasets without considering their performance on dense datasets. In this paper, we propose a novel reputation model to generate more accurate reputation scores for items using any dataset, whether dense or sparse. Our proposed model is described as a weighted average method, where the weights are generated using the normal distribution. Experiments show promising results for the proposed model over state-of-the-art ones on sparse and dense datasets.
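The abstract does not specify how the normal distribution enters the weighting, so the sketch below assumes one plausible reading: each rating is weighted by a Gaussian density centred on the mean rating, so outlying ratings contribute less than typical ones. Treat it as a hypothetical illustration, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

def reputation_score(ratings):
    """Weighted-average reputation score with normal-distribution weights.

    Assumption (not from the paper): each rating is weighted by the normal
    density centred on the mean rating, so outliers are down-weighted.
    """
    r = np.asarray(ratings, dtype=float)
    mu = r.mean()
    sigma = r.std(ddof=0) or 1.0              # guard against zero spread
    w = norm.pdf(r, loc=mu, scale=sigma)
    return float(np.average(r, weights=w))

print(reputation_score([5, 5, 4, 5, 1]))      # the single 1-star rating counts less
print(float(np.mean([5, 5, 4, 5, 1])))        # plain arithmetic mean, for comparison
```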
Abstract:
One of the most important dynamic properties required in the design of machine foundations is the stiffness, or spring constant, of the supporting soil. For a layered soil system, the stiffness obtained from an idealization of the underlying soils as springs in series gives the same value regardless of the location and extent of the individual soil layers with respect to the base of the foundation. This paper aims to demonstrate the importance of the relative positioning of the soil layers and of their thickness beneath the foundation. A simple, approximate procedure called the weighted average method is proposed to obtain the equivalent stiffness of a layered soil system given the individual stiffness values of the layers, their relative position with respect to the foundation base, and their thicknesses. The theoretically estimated values from the weighted average method are compared with those obtained by conducting field vibration tests using a square footing over different two- and three-layered systems, and the agreement is found to be very good. The tests were conducted over a range of static and dynamic loads using three different materials. The results are also compared with the existing methods available in the literature.
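A short sketch contrasting the two ideas. The springs-in-series formula is the order-independent baseline described above; the depth-discounted weighting in the second function is only a hypothetical placeholder, since the abstract does not give the paper's actual weighting scheme.

```python
import math

def series_springs(stiffnesses):
    """Classical springs-in-series idealisation: 1/k_eq = sum(1/k_i).
    Note that the result does not depend on the order of the layers."""
    return 1.0 / sum(1.0 / k for k in stiffnesses)

def weighted_average_stiffness(stiffnesses, thicknesses, depths, decay=0.5):
    """Hypothetical weighted-average estimate (illustration only).

    Each layer contributes in proportion to its thickness and is discounted
    with depth below the footing, so layers closer to the base matter more.
    The exponential discount rate `decay` is an assumption, not the paper's scheme.
    """
    weights = [t * math.exp(-decay * d) for t, d in zip(thicknesses, depths)]
    return sum(w * k for w, k in zip(weights, stiffnesses)) / sum(weights)

# two-layer example (kN/mm): stiff over soft vs. soft over stiff
print(series_springs([200.0, 50.0]))                                      # order-independent
print(weighted_average_stiffness([200.0, 50.0], [1.0, 1.0], [0.5, 1.5]))  # stiff layer on top
print(weighted_average_stiffness([50.0, 200.0], [1.0, 1.0], [0.5, 1.5]))  # soft layer on top
```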
Abstract:
The interaction of arbitrarily distributed penny-shaped cracks in three-dimensional solids is analyzed in this paper. Using oblate spheroidal coordinates and displacement functions, an analytic method is developed in which the opening and sliding displacements on each crack surface are taken as the basic unknown functions. These unknown functions are expanded in series of Legendre polynomials with unknown coefficients. Based on a superposition technique, a set of governing equations for the unknown coefficients is formulated from the traction-free conditions on each crack surface. The boundary collocation procedure and the average method for crack-surface tractions are used to solve the governing equations. The solution can be obtained even for closely spaced cracks. Numerical examples are given for several crack problems. Comparing the present results with existing results shows that the present method provides a direct and efficient approach to three-dimensional solids containing multiple cracks.
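The expand-then-collocate idea can be illustrated in one dimension: represent an unknown function by a truncated Legendre series and determine the coefficients by enforcing a prescribed condition at discrete collocation points. The Python sketch below is only a schematic analogue of this step, with a hypothetical target profile, not the paper's three-dimensional crack formulation.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Unknown function u(x) expanded as sum_n c_n P_n(x) on [-1, 1].
# As a stand-in for the traction-free conditions, we simply require the
# series to match a prescribed right-hand side f(x) at collocation points.
f = lambda x: np.sqrt(np.clip(1.0 - x**2, 0.0, None))   # hypothetical target profile

deg = 8                                                  # truncation order of the series
x_col = np.cos(np.linspace(0.0, np.pi, 40))              # collocation points in [-1, 1]
A = L.legvander(x_col, deg)                              # A[j, n] = P_n(x_col[j])
coeffs, *_ = np.linalg.lstsq(A, f(x_col), rcond=None)    # solve for the coefficients

# check the reconstruction at a few points
x_test = np.array([-0.9, 0.0, 0.5])
print(L.legval(x_test, coeffs))
print(f(x_test))
```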
Abstract:
A constitutive model, based on an (n + 1)-phase mixture of the Mori-Tanaka average theory, has been developed for stress-induced martensitic transformation and reorientation in single crystalline shape memory alloys. Volume fractions of different martensite lattice correspondence variants are chosen as internal variables to describe microstructural evolution. Macroscopic Gibbs free energy for the phase transformation is derived with thermodynamics principles and the ensemble average method of micro-mechanics. The critical condition and the evolution equation are proposed for both the phase transition and reorientation. This model can also simulate interior hysteresis loops during loading/unloading by switching the critical driving forces when an opposite transition takes place.
Abstract:
The study was conducted on 238 households on the Bangladesh Agricultural University campus and its adjoining areas in Mymensingh. The households were divided into four groups based on their per capita income. Monthly expenditure on fish, income elasticity of demand, and marginal propensity to consume were calculated. The 'weighted average' method was used to study the level of preference for fish by sex and age group and the frequency of its purchase. The per capita monthly expenditure on fish across all households was found to be Tk. 178.83. Consumption increased considerably across the income groups, rising from Tk. 63.95 in the lowest income group to Tk. 249.11 in the highest. Based on income elasticity, the proportion of income spent on fish was found to be greater than the proportional increase in income for the lower-middle and upper-middle income groups. However, the percentage of expenditure decreased from 8.15 in the lowest to 5.49 in the highest income group. Female members between 20 and 40 yrs had the highest preference for fish in general, followed by male members above 40 yrs. Children (0 to 8 yrs), on the other hand, had the least preference for fish. Sing and Magur (catfishes) were the most preferred fish species for every age and sex group. Rui, a carp, was the single most purchased fish, while the introduced exotic fishes were the least bought. Freshness was found to be the most important factor, followed by appearance and taste perception, in positively affecting fish purchases.
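For reference, the income elasticity referred to above is the usual ratio of proportional changes between income groups (a standard definition, not a formula taken from the paper):

```latex
\eta_I = \frac{\Delta Q / Q}{\Delta I / I}, \qquad
\eta_I > 1 \;\Rightarrow\; \text{fish expenditure grows proportionally faster than income.}
```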
Abstract:
This work demonstrates an example of the importance of an adequate method for sub-sampling model results when comparing them with in situ measurements. A test of model skill was performed by employing a point-to-point method to compare a multi-decadal hindcast against a sparse, unevenly distributed historic in situ dataset. The point-to-point method masked out all hindcast cells that did not have a corresponding in situ measurement, in order to match each in situ measurement against its most similar cell from the model. The application of the point-to-point method showed that the model was successful at reproducing the inter-annual variability of the in situ datasets. However, this success was not apparent when the measurements were aggregated into regional averages. Time series, data density and target diagrams were employed to illustrate the impact of switching from the regional average method to the point-to-point method. The comparison based on regional averages gave significantly different and sometimes contradictory results that could lead to erroneous conclusions about the model performance. Moreover, the point-to-point technique is a more appropriate way to exploit sparse, unevenly distributed in situ data while compensating for the variability of its sampling. We therefore recommend that researchers take into account the limitations of the in situ datasets and process the model output to resemble the data as closely as possible.
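A minimal sketch of the matching step, assuming a regular model grid and nearest-cell matching in time and space; the grid, coordinates and measurements below are hypothetical.

```python
import numpy as np

def point_to_point(model, lats, lons, times, obs):
    """Match each in situ record to its most similar model cell.

    model: array of shape (n_time, n_lat, n_lon) from the hindcast.
    lats, lons, times: 1-D coordinate axes of the model grid.
    obs: iterable of (time, lat, lon, value) in situ measurements.
    Returns paired (model_value, obs_value) arrays; model cells without a
    matching measurement are effectively masked out of the comparison.
    """
    pairs = []
    for t, la, lo, val in obs:
        it = int(np.argmin(np.abs(times - t)))    # nearest model time step
        ila = int(np.argmin(np.abs(lats - la)))   # nearest latitude row
        ilo = int(np.argmin(np.abs(lons - lo)))   # nearest longitude column
        pairs.append((model[it, ila, ilo], val))
    m, o = np.array(pairs).T
    return m, o

# toy hindcast and three scattered measurements (all values hypothetical)
rng = np.random.default_rng(0)
model = rng.normal(size=(12, 10, 20))
lats, lons, times = np.linspace(-5, 5, 10), np.linspace(0, 19, 20), np.arange(12)
obs = [(2, -1.2, 3.7, 0.1), (5, 4.4, 10.2, -0.3), (11, 0.3, 18.9, 0.8)]
m, o = point_to_point(model, lats, lons, times, obs)
print("bias:", float(np.mean(m - o)), "rmse:", float(np.sqrt(np.mean((m - o) ** 2))))
```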
Abstract:
Atrial fibrillation (AF) is an arrhythmia affecting the atria. In AF, atrial contraction is rapid and irregular. Ventricular filling becomes incomplete, which reduces cardiac output. AF can cause palpitations, fainting, chest pain, or heart failure. It also increases the risk of stroke. Coronary artery bypass grafting is a surgical procedure performed to restore blood flow in cases of severe coronary artery disease. Between 10% and 65% of patients who have never experienced AF develop it, most often on the second or third postoperative day. AF is particularly frequent after mitral valve surgery, occurring in about 64% of patients. The onset of postoperative AF is associated with increased morbidity and with longer and more costly hospital stays. The mechanisms responsible for postoperative AF are not well understood. Identifying patients at high risk of AF after coronary bypass surgery would be useful for its prevention. The present project is based on the analysis of cardiac electrograms recorded in patients after aortocoronary bypass surgery. The first objective of the research is to investigate whether the recordings display typical changes before the onset of AF. The second objective is to identify predictive factors that can single out the patients who will develop AF. The recordings were made by Dr Pierre Pagé's team on 137 patients treated by coronary bypass surgery. Three unipolar electrodes were sutured onto the atrial epicardium to record continuously during the first 4 postoperative days. The first task was to develop an algorithm to detect and distinguish atrial and ventricular activations on each channel, and to combine the activations from the three channels belonging to the same cardiac event. The algorithm was developed and optimized on a first set of markers, and its performance was evaluated on a second set. Validation software was developed to prepare these two sets and to correct the detections on all the recordings that were later used in the analyses. It was complemented by tools to form, label, and validate normal sinus beats, premature atrial and ventricular activations (PAA, PVA), and arrhythmia episodes. The preoperative clinical data were then analyzed to establish the preoperative risk of AF. Age, serum creatinine level, and a diagnosis of myocardial infarction proved to be the most important predictive factors. Although the level of preoperative risk could to some extent predict who would develop AF, it was not correlated with the time of onset of postoperative AF. For all patients who had at least one AF episode lasting 10 minutes or more, the two hours preceding the first sustained AF were analyzed. This first sustained AF was always triggered by a PAA, most often originating in the left atrium. However, during the two pre-AF hours, the distribution of PAAs, and of the fraction of them originating in the left atrium, was broad and inhomogeneous across patients.
The number of PAAs, the duration of transient arrhythmias, the sinus heart rate, and the low-frequency portion of heart rate variability (LF portion) showed significant changes in the last hour before the onset of AF. The final step was to compare patients with and without sustained AF to find factors that could discriminate between the two groups. Five types of logistic regression models were compared. They had similar sensitivity, specificity, and receiver operating characteristic curves, and all predicted patients without AF very poorly. A sliding-average method was proposed to improve the discrimination, especially for patients without AF. Two models were retained, selected on criteria of robustness, accuracy, and applicability. Around 70% of patients without AF and 75% of patients with AF were correctly identified in the last hour before AF. The PAA rate, the fraction of PAAs initiated in the left atrium, the pNN50, the atrioventricular conduction time, and the correlation between the latter and the heart rate were the predictive variables common to these two models.
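A minimal sketch of a sliding-average step of this kind, assuming per-window probabilities from a logistic regression model have already been computed; the window length, threshold and values are arbitrary illustrations, not the thesis's actual parameters.

```python
import numpy as np

def sliding_average(probabilities, window=5):
    """Smooth per-window AF-risk probabilities with a trailing moving average
    before thresholding, as a stand-in for the sliding-average step described
    above (window length and threshold here are arbitrary)."""
    p = np.asarray(probabilities, dtype=float)
    kernel = np.ones(window) / window
    # 'valid' keeps only windows fully covered by the kernel
    return np.convolve(p, kernel, mode="valid")

# hypothetical minute-by-minute probabilities from a logistic regression model
raw = np.array([0.35, 0.8, 0.4, 0.9, 0.45, 0.85, 0.5, 0.9, 0.95, 0.9])
smoothed = sliding_average(raw, window=3)
print((raw > 0.7).astype(int))        # noisy per-minute decisions
print((smoothed > 0.7).astype(int))   # steadier decisions after smoothing
```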
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
In this thesis, we study the application of spectral representations to the solution of problems in seismic exploration, the synthesis of fractal surfaces, and the identification of correlations between one-dimensional signals. We apply a new approach, called Wavelet Coherency, to the study of stratigraphic correlation in well log signals, as an attempt to identify layers from the same geological formation, showing that the representation in wavelet space, with the introduction of the scale domain, can facilitate the process of comparing patterns in geophysical signals. We introduce a new model for the generation of anisotropic fractional Brownian surfaces based on the curvelet transform, a new multiscale tool that can be seen as a generalization of the wavelet transform to include the direction component in multidimensional spaces. We tested our model with a modified version of the Directional Average Method (DAM) to evaluate the anisotropy of fractional Brownian surfaces. We also used the directional behavior of the curvelets to attack an important problem in seismic exploration: the attenuation of ground roll, present in seismograms as a result of surface Rayleigh waves. The techniques employed are effective, leading to sparse representations of the signals and, consequently, to good resolution.
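For context, the classical isotropic construction of a fractional Brownian surface by Fourier spectral synthesis is sketched below; the thesis's curvelet-based model is specifically anisotropic, so this is only a simpler point of reference, not the method of the thesis.

```python
import numpy as np

def fbm_surface(n=256, hurst=0.7, seed=0):
    """Isotropic fractional Brownian surface by Fourier filtering:
    white noise is shaped so that the power spectrum falls off as
    f^-(2H+2), the standard spectral-synthesis construction."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = np.inf                          # suppress the zero-frequency term
    amplitude = f ** (-(hurst + 1.0))         # |F| ~ f^-(H+1)  =>  PSD ~ f^-(2H+2)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    surface = np.fft.ifft2(amplitude * phase).real
    return (surface - surface.mean()) / surface.std()

z = fbm_surface()
print(z.shape, round(float(z.std()), 3))
```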
Abstract:
Peng was the first to work with DFA (Detrended Fluctuation Analysis), a technique capable of detecting long-range autocorrelation in non-stationary time series. In this study, DFA is used to obtain the Hurst exponent (H) of the neutron porosity logs of 52 oil wells in the Namorado Field, located in the Campos Basin, Brazil. The purpose is to determine whether the Hurst exponent can be used to characterize the spatial distribution of the wells, that is, whether wells with similar values of H are also spatially close together. Both a hierarchical clustering method and a non-hierarchical one (the k-means method) were used, and the two were compared to see which provides the better result. A neighborhood index parameter was then computed to check whether the groups produced by the k-means method exhibit genuine spatial patterns or could have arisen at random: high values of the index indicate that the data are aggregated, while low values indicate that the data are scattered (no spatial correlation). A Monte Carlo test showed that randomly combined data yield index values below the empirical one, so the H values obtained from the 52 wells are indeed geographically grouped. Comparing standard well-log curves with the k-means results confirms that the approach is effective for correlating wells by their spatial distribution.
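A minimal sketch of first-order DFA as described by Peng, which is the exponent-estimation step used above; the test signals are synthetic, not well-log data.

```python
import numpy as np

def dfa(signal, box_sizes=(8, 16, 32, 64, 128)):
    """Detrended Fluctuation Analysis (order-1 detrending).

    Integrate the mean-removed series, split it into boxes, remove a linear
    trend in each box, and regress log F(n) on log n; the slope plays the
    role of the Hurst-type exponent H used above."""
    x = np.asarray(signal, dtype=float)
    y = np.cumsum(x - x.mean())               # integrated profile
    fluctuations = []
    for n in box_sizes:
        n_boxes = len(y) // n
        f2 = []
        for b in range(n_boxes):
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(box_sizes), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(42)
print(round(dfa(rng.normal(size=4096)), 2))             # white noise: exponent ~ 0.5
print(round(dfa(np.cumsum(rng.normal(size=4096))), 2))  # random walk: exponent ~ 1.5
```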
Abstract:
OBJECTIVE: To build an artificial neural network to help the managers of university restaurants forecast the number of daily meals. METHODS: The study was developed from a survey of eight variables that influence the number of daily meals served in the university restaurant. The backpropagation training algorithm was used. The results obtained with the network were compared with the observed series and with estimates obtained by a simple arithmetic mean. RESULTS: The proposed network tracks the numerous variations that occur in the daily number of meals at the university restaurant. On 73% of the days analyzed, the artificial neural network method had a higher hit rate than the simple arithmetic mean method. CONCLUSION: The artificial neural network proved more suitable for forecasting the number of meals than the simple-mean methodology or than deciding the number of meals subjectively, without scientific criteria.
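A minimal sketch of this kind of comparison, assuming synthetic stand-in data for the eight explanatory variables and using scikit-learn's backpropagation-trained MLPRegressor against a simple arithmetic-mean baseline; none of the numbers reflect the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# synthetic stand-in for the study's data: 8 explanatory variables
# (day of week, holiday flag, enrolment, weather, ...) and daily meal counts
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
meals = 1500 + 300 * X[:, 0] - 200 * X[:, 1] + rng.normal(scale=80, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, meals, random_state=0)

# backpropagation-trained network (architecture here is arbitrary)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)

# simple arithmetic mean of the training series, the baseline used in the study
baseline = np.full_like(y_te, y_tr.mean())

mae_net = np.mean(np.abs(net.predict(X_te) - y_te))
mae_mean = np.mean(np.abs(baseline - y_te))
print(f"MLP MAE: {mae_net:.1f} meals  vs  simple-mean MAE: {mae_mean:.1f} meals")
```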
Abstract:
Graduate Program in Mathematics - IBILCE