75 results for "Electromyographic fatigue threshold"
Abstract:
The relevance of rising healthcare costs is a central topic for supplementary health companies in Brazil. In 2011, these expenses consumed more than 80% of monthly health insurance revenue in Brazil. When administrative costs are also considered, the companies operating in this market work, on average, at the threshold between profit and loss. This paper presents the results of an investigation into the healthcare costs of a health plan company in Brazil, based on the KDD process and exploratory Data Mining. A variety of results is presented, such as data summarization, providing compact descriptions of the data and revealing common features and intrinsic observations. Among the key findings, it was observed that a small portion of the population is responsible for most of the resources devoted to health care.
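As a rough illustration of the kind of data summarization mentioned above, the Python sketch below (with hypothetical column names and figures, not the study's data or code) measures how concentrated spending is by finding the smallest fraction of members that accounts for 80% of total healthcare cost.

import numpy as np
import pandas as pd

# Hypothetical claims table: one row per member with their annual healthcare cost.
# Column names and values are illustrative only, not taken from the study.
claims = pd.DataFrame({
    "member_id": range(1, 11),
    "annual_cost": [120.0, 90.0, 15000.0, 300.0, 80.0, 45000.0, 60.0, 200.0, 110.0, 95.0],
})

# Sort members from most to least expensive and accumulate their share of the total.
sorted_costs = np.sort(claims["annual_cost"].to_numpy())[::-1]
cost_share = np.cumsum(sorted_costs) / sorted_costs.sum()
member_share = np.arange(1, len(sorted_costs) + 1) / len(sorted_costs)

# Smallest fraction of members that concentrates at least 80% of total cost.
i = np.searchsorted(cost_share, 0.80)
print(f"{member_share[i]:.0%} of members account for 80% of total spending")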
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. Several approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method uses only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link the auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in choosing the correct threshold. Several analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed and a new one is proposed, based on the concept of negentropy. Besides grouping the points of a set into classes, a method is proposed for statistically modeling the classes, aiming to obtain an expression for the probability that a point belongs to each class. Experiments with several values of Na and dt are carried out on test sets, and the results are analyzed to study the robustness of the method and to propose heuristics for choosing the correct threshold. Aspects of information theory applied to the calculation of the divergences are also explored, in particular the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also has an appendix presenting real applications of the proposed method.
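A minimal Python sketch of the linkage idea, under assumed details: k-means plays the role of the vector quantizer and plain Euclidean distance stands in for the dissimilarity/divergence measures discussed above.

import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

def link_auxiliary_clusters(X, n_aux, d_t, seed=0):
    # Step 1: vector quantization into n_aux auxiliary clusters (k-means here).
    centroids, aux_labels = kmeans2(X.astype(float), n_aux, minit="++", seed=seed)
    # Step 2: link auxiliary clusters whose centroids are closer than d_t.
    close = cdist(centroids, centroids) < d_t
    n_classes, class_of_aux = connected_components(csr_matrix(close), directed=False)
    # Step 3: every data point inherits the class of its auxiliary cluster.
    return n_classes, class_of_aux[aux_labels]

# Two well-separated blobs: with d_t below the gap between them, the number of
# classes (2) is found automatically, as described in the text.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))])
n_classes, labels = link_auxiliary_clusters(X, n_aux=10, d_t=3.0)
print(n_classes)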
Abstract:
Image compression consists in representing an image with a small amount of data without losing visual quality. Data compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing and storage time. Many applications depend on image data compression: medical images, satellite images, sensors, etc. In this work a new method for compressing color images is proposed. The method is based on a measure of the information carried by each band. The technique is called Self-Adaptive Compression (SAC), and each band of the image is compressed with a different threshold in order to better preserve information. SAC applies strong compression to highly redundant bands, that is, bands carrying less information, and mild compression to bands carrying more information. Two image transforms are used: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step converts the data into uncorrelated bands using PCA; the DCT is then applied to each band. Data loss occurs when a threshold discards coefficients. This threshold is computed from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. For image reconstruction, the inverse DCT and inverse PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of MSE (Mean Squared Error). Tests showed that SAC achieves better quality at strong compression rates, with two advantages: (a) being adaptive, it is sensitive to the image type, that is, it presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, so little human intervention is required.
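A compact Python/NumPy sketch of the PCA + DCT + per-band thresholding pipeline described above; the exact rule that combines the PCA result and the user parameter into each band's threshold is an assumption made for illustration, not the thesis' formula.

import numpy as np
from scipy.fft import dctn, idctn

def sac_sketch(img, user_rate=0.9):
    # Minimal sketch of the Self-Adaptive Compression idea; `img` is H x W x 3.
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)

    # Step 1: PCA decorrelates the three colour bands.
    eigval, eigvec = np.linalg.eigh(np.cov((pixels - mean).T))  # ascending eigenvalues
    bands = ((pixels - mean) @ eigvec).reshape(h, w, 3)

    rec = np.empty_like(bands)
    for b in range(3):
        # Step 2: DCT of each decorrelated band.
        coeffs = dctn(bands[:, :, b], norm="ortho")
        # Step 3 (assumed rule): low-information bands (small variance share)
        # get a higher threshold, so more of their coefficients are discarded.
        share = eigval[b] / eigval.sum()
        thr = np.quantile(np.abs(coeffs), user_rate * (1.0 - share))
        coeffs[np.abs(coeffs) < thr] = 0.0
        rec[:, :, b] = idctn(coeffs, norm="ortho")

    # Step 4: inverse PCA back to RGB.
    return (rec.reshape(-1, 3) @ eigvec.T + mean).reshape(h, w, 3)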
Abstract:
Static and cyclic tests are commonly used to assess materials in structures. Cyclic tests assess the fatigue behavior of the material, yielding S-N curves that are then used to construct constant-life diagrams. However, when constructed from a small number of S-N curves, these diagrams underestimate or overestimate the actual behavior of the composite, so more tests are needed to obtain accurate results. A way of reducing costs is therefore the statistical analysis of the fatigue behavior. The aim of this research was to evaluate the probabilistic fatigue behavior of composite materials. The research was conducted in three parts. The first part consists of associating the Weibull probability equation with the equations commonly used to model the S-N curves of composite materials, namely the exponential equation, the power law and their generalizations. In the second part, the results obtained with the equation that best represents the probabilistic S-N curves were used to train a modular network at the 5% failure level. In the third part, a comparative study was carried out between the results obtained with the piecewise nonlinear model (PNL) and those of a modular network (MN) architecture in the analysis of fatigue behavior. A database of ten materials obtained from the literature was used to assess the generalization ability of the modular network as well as its robustness. From the results, it was found that the generalized probabilistic power law best represents the probabilistic fatigue behavior of the composites and that, although the MN was not robust when trained at the 5% failure level, for mean values the MN produced more accurate results than the PNL model.
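One standard way to attach probability levels to a power-law S-N curve (shown only to illustrate the approach, not as the thesis' exact generalization) is to let the fatigue life N at stress amplitude S follow a two-parameter Weibull distribution whose scale obeys the power law; each failure probability p then defines its own S-N curve, and p = 0.05 reproduces the 5% failure level mentioned above.

\begin{align}
  \text{power-law scale (S-N) curve:} \quad & N_0(S) = A\,S^{-b} \\
  \text{Weibull reliability at stress } S: \quad & P(N > n \mid S) = \exp\!\left[-\left(\frac{n}{N_0(S)}\right)^{\alpha}\right] \\
  \text{life at failure probability } p: \quad & N_p(S) = A\,S^{-b}\,\bigl[-\ln(1-p)\bigr]^{1/\alpha}
\end{align}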
Abstract:
In recent years there has been significant growth in technologies that modify implant surfaces, reducing healing time and allowing their successful use in areas of low bone density. One of the most widely used techniques is plasma nitriding, applied with excellent results to titanium and its alloys, most frequently in the manufacture of hip, ankle and shoulder implants. However, its use in dental implants is very limited due to the high process temperatures (between 700 °C and 800 °C), which cause distortions in these geometrically complex, high-precision components. The aim of the present study is to assess the osseointegration and mechanical strength of grade II titanium samples nitrided in a hollow cathode discharge configuration. Moreover, new formulations are proposed to determine the optimum structural topology of the dental implant under study, in order to perfect its shape and make it efficient, competitive and well defined. In the nitriding process, the samples were treated at a temperature of 450 °C and a pressure of 150 Pa for 1 hour. This condition was selected because it yielded the best wettability results in previous studies in which different pressure, temperature and time conditions were systematized. The samples were characterized by X-ray diffraction, scanning electron microscopy, roughness, microhardness and wettability. Biomechanical fatigue tests were then conducted. Finally, a formulation using the three-dimensional structural topology optimization method was proposed, in conjunction with an h-adaptive refinement process. The results showed that plasma nitriding using the hollow cathode discharge technique changed the surface texture of the test specimens, increasing surface roughness, wettability and microhardness compared to the untreated sample. In the biomechanical fatigue test, the treated implant showed no failures after five million cycles at a maximum fatigue load of 84.46 N. The topological optimization process produced well-defined optimized layouts of the dental implant, with a clear distribution of material and defined edges.
Abstract:
The competitiveness of trade, driven by the greater availability of products with lower quality and cost, has created a new reality of industrial production with small tolerances. Deviations in production cannot be ruled out; uncertainties can occur statistically. Consumers worldwide, including in Brazil, are supported by consumer protection codes in lawsuits over poor product quality. An automobile is composed of various systems and thousands of constituent parts, which increases the likelihood of failure. The dynamic and safety systems are critical with respect to the consequences of possible failures. The investigation of a failure gives us the possibility of learning and contributing to various improvements. Our main purpose in this work is to develop a systematic, specific methodology for investigating the root cause of the failure of an axle end of the front suspension of an automobile, and to perform comparative analyses between data from the fractured part and the design information. Our research was based on a failure of an automotive suspension system involved in a judicial case, which resulted in property and personal damages. In investigations concerning the analysis of mechanical failures, knowledge of materials engineering plays a crucial role, since it enables the application of materials characterization techniques, relating the technical attributes required of a given part to the structure of its manufacturing material, thus providing a greater scientific contribution to the work. The specific methodology developed follows its own flowchart. In the early phase, the data in the records and information on the parties involved were collected. The following laboratory analyses were performed: macrography of the fracture; micrography of the initial and final fracture regions with SEM (Scanning Electron Microscope); phase analysis with optical microscopy; Brinell hardness and Vickers microhardness measurements; quantitative and qualitative chemical analysis using X-ray fluorescence and optical spectroscopy for carbon; and a qualitative study of the state of stress. Field data were also collected. In the analyses, the values obtained from the fractured and stock parts were compared with the design values. After the investigation, it was concluded that: the developed methodology systematized the investigation and enabled cross-checking of data, thus minimizing the probability of diagnostic error; the morphology of the fracture indicates failure by the fatigue mechanism at a geometrically propitious location, a stress concentrator; the part was subjected to low stresses, as shown by the sectional area of the final fracture; the manufacturing material of the fractured part has low ductility; the component fractured earlier than recommended by the manufacturer; the percentages of C, Si, Mn and Cr in the fractured part differ from the design values; the upper-limit hardness value of the fractured part is higher than that of the design; and there is no manufacturing uniformity between the stock and fractured parts. This work will contribute to optimizing the conduct of actions in mechanical engineering judicial expert investigations.
Abstract:
Among the main challenges in industrial beer production is supplying the market at the lowest cost and with high quality, in order to meet the expectations of customers and consumers. The fermentation stage represents approximately 70% of the total time necessary for beer production, requiring strict process controls so that it does not become a bottleneck. This stage is responsible for the formation of a series of by-products which make up the aroma/bouquet of the beer; some of these by-products, if produced in larger quantities, confer unpleasant taste and odor on the final product. Among the by-products formed during fermentation, total vicinal diketones are the main component, since they are limiting for transferring the product to the subsequent steps, besides having a low perception threshold for the consumer and imparting undesirable taste and odor. Given the variable quality of the main raw materials and the process controls required during fermentation, developing alternative forms of beer production without impacting total fermentation time or final product quality is a great challenge for breweries. In this work, a prior acidification of the yeast slurry was carried out with food-grade phosphoric acid, reducing the yeast pH from about 5.30 to 2.20 and changing its behavior from flocculent to powdery during fermentation. A six-fold increase was observed in the number of yeast cells in suspension in the second fermentation stage compared with fermentations using yeast without prior acidification. By changing two input variables, the temperature curve and cell multiplication, with the goal of minimizing the maximum diketone values detected in the fermenter tank, a reduction of the diacetyl peak was obtained, which contributed to reducing the fermentation time and the total process time. Several experiments were performed with these process changes in order to verify their influence on the total fermentation time and on the total vicinal diketone concentration at the end of fermentation. The best production result reached a total fermentation time of 151 hours and a total vicinal diketone concentration of 0.08 ppm. The mass of yeast in suspension in the second phase of fermentation increased from 2.45 × 10⁶ to 16.38 × 10⁶ cells/mL, a fact that is key to greater efficiency in reducing the total vicinal diketones in the medium. This confirms that prior yeast acidification, together with the control of temperature and yeast cell multiplication during fermentation, enhances diketone reduction and consequently reduces the total fermentation time while keeping the diketone concentration below the expected value (max. 0.10 ppm).
Abstract:
Expanded Bed Adsorption (EBA) is an integrative process that combines concepts of chromatography and fluidization of solids. The many parameters involved and their synergistic effects complicate the optimization of the process. Fortunately, some mathematical tools have been developed to guide the investigation of the EBA system. In this work the application of experimental design, phenomenological modeling and artificial neural networks (ANN) to understanding chitosanase adsorption on the ion exchange resin Streamline® DEAE was investigated. The strain Paenibacillus ehimensis NRRL B-23118 was used for chitosanase production. EBA experiments were carried out using a column of 2.6 cm inner diameter and 30.0 cm height coupled to a peristaltic pump; at the bottom of the column there was a 3.0 cm high distributor of glass beads. Residence time distribution (RTD) assays revealed a high degree of mixing; however, the Richardson-Zaki coefficients showed that the column was on the threshold of stability. Isotherm models fitted the adsorption equilibrium data in the presence of lyotropic salts. The results of the experimental design indicated that ionic strength and superficial velocity are important for the recovery and purity of chitosanases. The molecular masses of the two chitosanases were approximately 23 kDa and 52 kDa, as estimated by SDS-PAGE. The phenomenological modeling aimed to describe the operations in batch and column chromatography. The simulations were performed in Microsoft Visual Studio. The kinetic rate constant model fitted the kinetic curves efficiently under initial enzyme activities of 0.232, 0.142 and 0.079 UA/mL. The simulated breakthrough curves showed some differences from the experimental data, especially regarding the slope. Sensitivity tests of the model with respect to superficial velocity, axial dispersion and initial concentration showed agreement with the literature. The neural network was built in MATLAB with the Neural Network Toolbox. Cross-validation was used to improve the generalization ability. The ANN parameters were tuned, resulting in the configurations 6-6 (enzyme activity) and 9-6 (total protein), with the tansig transfer function and the Levenberg-Marquardt training algorithm. The neural network simulations, including all steps of the cycle, showed good agreement with the experimental data, with a correlation coefficient of approximately 0.974. The effects of the input variables on the profiles of the loading, washing and elution stages were consistent with the literature.
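A rough sketch of a network of this kind in Python/scikit-learn, with synthetic placeholder data; the original work used MATLAB's Neural Network Toolbox with the tansig transfer function and Levenberg-Marquardt training, which scikit-learn does not provide, so L-BFGS is used below as a stand-in.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Synthetic placeholder data: process variables of a chromatography run as
# inputs (columns are illustrative, e.g. superficial velocity, ionic strength,
# time) and outlet enzyme activity as the target.
rng = np.random.default_rng(0)
X = rng.random((60, 3))
y = 0.2 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=60)

# Two hidden layers of 6 neurons with tanh activation, loosely mirroring the
# "6-6 / tansig" configuration above; L-BFGS replaces Levenberg-Marquardt here.
net = MLPRegressor(hidden_layer_sizes=(6, 6), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)

# Cross-validation, as in the text, to check the generalization ability.
print(cross_val_score(net, X, y, cv=5, scoring="r2").mean())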
Abstract:
In this thesis we study some problems related to petroleum reservoirs using methods and concepts of Statistical Physics. The thesis can be divided in two parts. The first presents a study of the percolation problem on a random multifractal support, motivated by its potential application in modelling oil reservoirs. We developed a heterogeneous and anisotropic grid that follows a random multifractal distribution of its sites. We then determine the percolation threshold for this grid, the fractal dimension of the percolating cluster and the critical exponents β and ν. In the second part, we propose an alternative systematic approach for modelling and simulating oil reservoirs. We introduce a statistical model based on a stochastic formulation of Darcy's Law. In this model, the distribution of permeabilities is locally equivalent to the basic model of bond percolation.
Abstract:
The complex behavior of a wide variety of phenomena that are of interest to physicists, chemists, and engineers has been quantitatively characterized by using the ideas of fractal and multifractal distributions, which correspond in a unique way to the geometrical shape and dynamical properties of the systems under study. In this thesis we present the space of fractals and the Hausdorff-Besicovitch, box-counting and scaling methods to calculate the fractal dimension of a set. We also investigate percolation phenomena in multifractal objects that are built in a simple way. The central object of our analysis is a multifractal object that we call Qmf. In these objects the multifractality comes directly from the geometric tiling. We identify some differences between percolation in the proposed multifractals and in a regular lattice. There are basically two sources of these differences. The first is related to the coordination number, c, which changes along the multifractal. The second comes from the way the weight of each cell in the multifractal affects the percolation cluster. We use many samples of finite-size lattices and draw the histogram of percolating lattices against the site occupation probability p. Depending on a parameter ρ characterizing the multifractal and on the lattice size L, the histogram can have two peaks. We observe that the occupation probability at the percolation threshold, pc, for the multifractal is lower than that for the square lattice. We compute the fractal dimension of the percolating cluster and the critical exponent β. Despite the topological differences, we find that percolation in a multifractal support is in the same universality class as standard percolation. The area and the number of neighbors of the blocks of Qmf show non-trivial behavior. A general view of the object Qmf shows an anisotropy. The value of pc is a function of ρ, which is related to this anisotropy. We investigate the relation between pc and the average number of neighbors of the blocks, as well as the anisotropy of Qmf. In this thesis we likewise study the distribution of shortest paths in percolation systems at the percolation threshold in two dimensions (2D). We study paths from one given point to multiple other points. In oil recovery terminology, the given single point can be mapped to an injection well (injector) and the multiple other points to production wells (producers). In the standard case of one injection well and one production well separated by Euclidean distance r, the distribution of shortest path lengths l, P(l|r), shows a power-law behavior with exponent g_l = 2.14 in 2D. Here we analyze the situation of one injector and an array A of producers. Symmetric arrays of producers lead to one peak in the distribution P(l|A), the probability that the shortest path between the injector and any of the producers is l, while asymmetric configurations lead to several peaks in the distribution. We analyze configurations in which the injector is outside and inside the set of producers. The peak in P(l|A) for symmetric arrays decays faster than in the standard case. For very long paths all the studied arrays exhibit power-law behavior with exponent g ≅ g_l.
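For reference, the histogram-of-percolating-samples procedure mentioned above can be sketched in a few lines of Python for the ordinary square lattice (not the multifractal support Qmf); for large L the estimated pc should approach the known value of about 0.5927.

import numpy as np
from scipy.ndimage import label

def spans(p, L, rng):
    # True if a site configuration at occupation probability p has a cluster
    # (4-connected) joining the top and bottom rows of an L x L lattice.
    occupied = rng.random((L, L)) < p
    labels, _ = label(occupied)
    top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
    return bool(top & bottom)

# Fraction of percolating samples versus occupation probability p, as in the
# text, but for the plain square lattice rather than the multifractal Qmf.
rng = np.random.default_rng(0)
L, samples = 64, 200
ps = np.linspace(0.50, 0.70, 21)
frac = [np.mean([spans(p, L, rng) for _ in range(samples)]) for p in ps]
print(ps[np.searchsorted(frac, 0.5)])  # rough finite-size estimate of pc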
Abstract:
We study magnetic interface roughness in F/AF bilayers. Two kinds of roughness were considered. The first consists of isolated defects that divide the substrate into two regions, each with one AF sub-lattice. The interface exchange coupling is considered uniform and changes abruptly at the defect line, favoring Néel wall nucleation. Our results show how the threshold thickness for the reorientation of the magnetization in the ferromagnetic film depends on the interface field. Angular profiles show the relaxation of the magnetization from a Néel wall at the interface to the reoriented state at the surface. An external magnetic field perpendicular to the easy axis of the substrate favors the reoriented state. Depending on the intensity of an external magnetic field parallel to the easy axis of the AF, the magnetization profile at the surface can be parallel or perpendicular to the field direction. The second kind of roughness consists of periodically distributed defects. The shape of the hysteresis curves, the exchange bias and the coercivity were characterized as functions of the interface field intensity and the roughness pattern. Our results show that dipolar effects decrease the exchange bias and the coercivity.
Abstract:
Patellofemoral pain syndrome (PFPS) is defined as anterior or retropatellar pain and has a multifactorial etiology, with patellar malalignment being the most accepted hypothesis. However, factors proximal to the knee, such as weakness of the hip muscles, have been shown to contribute to the onset of the syndrome. Purpose: To evaluate whether a relation exists between hip muscle performance and the development of PFPS. Methods: Thirty women took part in this study, divided into two groups: a control group (fifteen asymptomatic subjects) and an experimental group (fifteen subjects diagnosed with PFPS). Muscle performance was evaluated with an isokinetic dynamometer, measuring peak torque (PT), PT normalized to body weight, time to PT and the agonist/antagonist ratio. The electromyographic activity of the gluteus medius was also analyzed. The data were analyzed with the unpaired t test at a significance level of 5%. Results: There was no significant difference in the PT of the hip abductors (p = 0.46) or lateral rotators (p = 0.17) between groups. There was also no significant difference in the PT values normalized to body weight for these muscle groups (p = 0.10 and p = 0.11, respectively). No significant differences were found between the groups in the amplitude of the signal (p = 0.05) or the onset of the gluteus medius (p = 0.25). Conclusion: Under the experimental conditions used, the study did not demonstrate a relation between hip muscle performance and the development of PFPS.
Abstract:
Introduction: The ability to walk is impaired in obese individuals by anthropometric factors (BMI and height), musculoskeletal pain and level of inactivity. Little is known about the influence of body adiposity and the acute response of the cardiovascular system during the 6-minute walk test (6MWT). Objective: To evaluate the effect of anthropometric measures (BMI and waist-to-hip ratio, WHR), cardiac effort and inactivity on the walking capacity of the morbidly obese. Materials and Methods: A total of 36 morbidly obese patients (36.23 ± 11.82 years old, BMI 49.16 kg/m²) were recruited from the outpatient department for obesity treatment and bariatric surgery of the Onofre Lopes University Hospital and assessed for anthropometric measures of obesity (BMI and WHR), pulmonary function, habitual physical activity pattern (Baecke Questionnaire) and walking capacity (6MWT). The following measurements were recorded: heart rate (HR), breathing frequency (BF), peripheral oxygen saturation, perceived exertion, systemic arterial pressure and double product (DP), as well as the average speed developed and the total distance walked. The data were analyzed by gender and pattern of body adiposity, measuring the minute-by-minute behavior during walking. Pearson and Spearman correlation coefficients were calculated, and stepwise multiple regression examined the predictors of walking capacity. All analyses were performed in Statistica 6.0. Results: 20 obese patients had abdominal adiposity (WHR = 1.01); waist circumference was 135.8 cm in women (25) and 139.8 cm in men (10). The patients walked 412.43 m by the end of the 6MWT, with no differences between gender or adiposity groups. The total distance walked was explained by BMI (45%), HR in the sixth minute (43%), the Baecke score (24%) and fatigue (-23%). 88.6% of the obese patients (31) performed the test above 60% of maximal HR, and peak HR was reached at the fifth minute of the 6MWT. Systemic arterial pressure and DP rose after walking, but with no differences between gender or adiposity groups. Conclusion: Walking in the obese was not influenced by gender or by the pattern of body adiposity. The final distance walked is attributed to excess body weight, cardiac stress, the perceived effort required by physical activity and the level of sedentarism of the obese. Within one minute of walking, the obese patients reached a cardiovascular training intensity range.