29 results for Prostatectomy : Methods
Abstract:
The manufacture of prostheses for lower-limb amputees (transfemoral and transtibial) requires the preparation of a socket with an appropriate, custom fit to the profile of each patient. The traditional process, mainly in public hospitals in Brazil, begins with the completion of a form in which the types of equipment, accessories, measurements, levels of amputation, etc. are identified. Currently, this work is carried out manually, using a common measuring tape and a wooden caliper to take the measurements of the stump, which characterizes a very rudimentary process and leaves a high degree of uncertainty in the geometry of the final product. To address this problem, it was necessary to act in two simultaneous and correlated directions. First, an integrated 3D CAD viewing tool for transfemoral and transtibial prostheses, called OrtoCAD I, was developed. At the same time, it was necessary to design and build a mechanical reading device (a sort of simplified three-dimensional scanner) able to obtain, automatically and accurately, the geometric information of either the stump or the healthy leg. The methodology includes the application of reverse engineering concepts to computationally generate the representation of the stump and/or the mirror image of the healthy limb. The materials used in the manufacture of prostheses do not always obey technical-scientific criteria because, while they may meet strength requirements, they often bring serious problems, mainly due to excess weight, causing the user various disorders stemming from the lack of conformity. That problem was addressed with the creation of a hybrid composite material for the manufacture of prosthesis sockets. Using the mechanical reader and OrtoCAD, the new composite material, which combines the mechanical properties of strength and stiffness with important parameters such as low weight and low cost, can be defined in its best configuration. In addition, it allows a reduction in the number of steps in the current manufacturing process, or even makes feasible the use of new industrial processes to obtain the prostheses. In this sense, the hybridization of the composite through the combination of natural and synthetic fibers can be a viable solution to the challenges described above.
Abstract:
In this work, cobalt ferrite (CoFe2O4) was synthesized and characterized by two methods, combined EDTA/citrate complexation and hydrothermal synthesis, investigating the influence of the synthesis conditions on phase formation and on crystallite size. The powders were characterized mainly by X-ray diffraction. In specific cases, scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDS), X-ray fluorescence (XRF) and nitrogen adsorption-desorption isotherms (BET method) were also used. The study of crystallite size was based on the interpretation of the X-ray diffractograms obtained, with sizes estimated by the Halder-Wagner, Scherrer and Langford methods. An experimental design was carried out in order to help quantify the influence of the synthesis conditions on the response variables. The synthesis parameters evaluated in this study were: pH of the reaction medium (8, 9 and 10), calcination temperature (combined EDTA/citrate complexation method: 600 °C, 800 °C and 1000 °C), synthesis temperature (hydrothermal method: 120 °C, 140 °C and 160 °C), calcination time (combined EDTA/citrate complexation method: 2, 4 and 6 hours) and synthesis time (hydrothermal method: 6, 15 and 24 hours). The hydrothermal method made it possible to produce high-purity mesoporous powders with an average crystallite size of up to 7 nm and a surface area of 113.44 m²/g, in the form of pellets with irregular morphology. The combined EDTA/citrate complexation method produced mesoporous powders with higher purity, crystallite sizes of up to 22 nm and a surface area of 27.95 m²/g, in the form of pellets with a regular, plate-like morphology. The experimental design showed that, for the hydrothermal method, all the studied parameters (pH, temperature and time) have a significant effect on crystallite size, while for the combined EDTA/citrate complexation method only temperature and time were significant.
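For reference, the classical Scherrer relation, one ingredient of the crystallite-size estimates named above (the Halder-Wagner and Langford treatments refine this basic idea); the notation is the conventional one, assumed here since the abstract does not display the formula:

```latex
% Scherrer relation: mean crystallite size D from XRD peak broadening
D = \frac{K \lambda}{\beta \cos\theta}
% D: mean crystallite size; K: shape factor (typically ~0.9);
% \lambda: X-ray wavelength; \beta: peak width (FWHM, in radians);
% \theta: Bragg angle of the reflection
```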
Abstract:
In this work we have elaborated a spline-based method for the solution of initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta, and avoids root calculations in the linear time-invariant case. The method is then applied to a central problem of control theory, namely the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We have implemented an efficient algorithm which uses exclusively matrix-vector operations. The working interval (up to the settling time) was determined through a calculation of the least stable mode using a modified power method. Several variants of the method have been compared by simulation. For general linear problems with a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, we have indications that the proposed method is competitive for equations of sufficiently high order.
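The abstract does not specify how the power method was modified; the sketch below shows one plausible reading, an inverse power iteration that finds the eigenvalue of the system matrix closest to the origin, which for a stable system is the least stable (slowest) mode governing the settling time. The function name and tolerances are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def least_stable_mode(A, tol=1e-10, max_iter=10_000):
    """Estimate the eigenvalue of A closest to the origin by inverse
    power iteration -- a minimal sketch; the thesis's exact 'modified
    power method' is not described in the abstract."""
    A_inv = np.linalg.inv(A)   # small systems; an LU solve would scale better
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    mu = 0.0
    for _ in range(max_iter):
        w = A_inv @ v                    # power step on A^{-1}
        mu_new = v @ w                   # Rayleigh-quotient estimate for A^{-1}
        v = w / np.linalg.norm(w)
        if abs(mu_new - mu) < tol * max(1.0, abs(mu_new)):
            break
        mu = mu_new
    return 1.0 / mu                      # eigenvalue of A closest to zero

# Example: y'' + 3y' + 2y = 0 has modes -1 and -2; the least stable is -1,
# and the settling time scales roughly like 4/|Re(lambda)|.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(least_stable_mode(A))              # approximately -1.0
```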
Abstract:
Oral and facial bone defects can compromise the appearance, psychosocial well-being and stomatognathic function of patients. Over the years, several strategies for bone defect regeneration have arisen to treat these pathologies, among them the use of frozen and irradiated bone allografts. The handling of bone grafts is not yet standardized, and several osteotomy alternatives can be observed. The present work evaluated under the microscope the bone fragments obtained from different osteotomy and irrigation methods applied to ring and block allografts irradiated and frozen at -80° in a rabbit model. The study is experimental, in vitro, and its sample was an adult male New Zealand rabbit. The animal was sacrificed to obtain long bones, which were frozen at -80° and irradiated with cobalt-60. The long bones were then sectioned into 24 bone pieces, divided into 4 groups: in G1 (n=06), osteotomy was performed with a No. 6 bur, forming rings 5 mm thick, using a high-speed handpiece with manual irrigation; in G2 (n=06), osteotomy was performed with a No. 6 bur, forming rings 5 mm thick, using a surgical motor at 1500 rpm with manual irrigation; in GA (n=06), osteotomy was performed with a trephine using manual irrigation with saline; and in GB (n=06), osteotomy was performed with a trephine using saline from the peristaltic pumps of the surgical motor. Five bone pieces from each group were prepared for analysis under light microscopy (LM) and one for scanning electron microscopy (SEM). In the SEM analysis, the surface of the edges, the presence of microcracks and the smear layer were evaluated. Analyzing the osteotomy techniques under SEM, an increased presence of microcracks was observed when cutting at high speed, and an increased presence of areas covered by smear layer when cutting with the implant motor. In the SEM analysis of irrigation, it was observed that the presence of microcracks does not depend on the type of irrigation, and that with manual irrigation there was greater discrepancy between the cutting lines. The descriptive analysis of the osteotomy and irrigation process under LM showed bony margins with a clearly altered tissue layer, composed of blackened tissue of charred appearance near the cortical bone; on the edges of the bone pieces, bone fragments displaced during the cut and bone irregularities were observed. After analyzing the results, we can conclude that there was greater regularity of the bone cut using the high-speed handpiece than using the implant motor; that the cut with the trephine using saline irrigated from the peristaltic pumps of the surgical motor showed greater homogeneity than with manual irrigation; and that charred tissue was found in all bone samples obtained, with no statistically significant difference in the proportion of carbonization between the two analyzed techniques.
Abstract:
We analyzed the quality of raw milk from eight dairy farms in Rio Grande do Norte stored in cooling tanks, in order to evaluate methods for determining somatic cell counts (SCC). The Somaticell® kit and a portable Direct Cell Counter (DCC) were compared with each other and with the MilkoScan™ FT+ (FOSS, Denmark), which uses Fourier-transform infrared (FTIR) spectroscopy. The counting data were converted to somatic cell scores (log-transformed somatic cell counts) and analyzed with the SAS® statistical package (Statistical Analysis System; SAS Institute, 1998). Comparison of means and correlation of somatic cell scores were conducted using Pearson's correlation coefficient and the Tukey test at 1%. No significant difference was observed in the comparison of means. The correlations between somatic cell scores were significant: 0.907 and 0.876 between the MilkoScan™ FT+ and, respectively, the Somaticell® kit and the Direct Cell Counter (DCC), and 0.943 between the Somaticell® kit and the Direct Cell Counter (DCC). The methods can be recommended for monitoring the quality of raw milk kept in cooling tanks at the production unit.
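A minimal sketch of the score comparison described above: log-transform raw counts into somatic cell scores, then correlate two methods. The counts below are illustrative stand-ins, not the study's data, and the choice of log base is an assumption (the abstract does not state it).

```python
import numpy as np
from scipy import stats

# Hypothetical SCC readings (cells/mL) from two methods on the same samples;
# illustrative values only, not the study's measurements.
scc_milkoscan = np.array([180_000, 250_000, 410_000, 520_000, 90_000])
scc_dcc       = np.array([170_000, 240_000, 430_000, 500_000, 110_000])

# Somatic cell score: log transform stabilizes the variance before comparison.
score_milkoscan = np.log10(scc_milkoscan)
score_dcc       = np.log10(scc_dcc)

r, p = stats.pearsonr(score_milkoscan, score_dcc)
print(f"Pearson r = {r:.3f} (p = {p:.4f})")
```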
Abstract:
In this work, the quantitative analysis of glucose, triglycerides and cholesterol (total and HDL) in both rat and human blood plasma was performed without any kind of sample pretreatment, using near-infrared (NIR) spectroscopy combined with multivariate methods. For this purpose, different techniques and algorithms used to pre-process data, to select variables and to build multivariate regression models were compared with each other, such as partial least squares regression (PLS), nonlinear regression by artificial neural networks (ANN), interval partial least squares regression (iPLS), the genetic algorithm (GA) and the successive projections algorithm (SPA), among others. For the determinations in rat blood plasma samples, the variable selection algorithms showed satisfactory results both for the correlation coefficients (R²) and for the root mean square error of prediction (RMSEP) for the three analytes, especially for triglycerides and HDL cholesterol. The RMSEP values for glucose, triglycerides and HDL cholesterol obtained with the best PLS model were 6.08, 16.07 and 2.03 mg dL-1, respectively. For the determinations in human blood plasma, in contrast, the predictions obtained by the PLS models were unsatisfactory, showing a nonlinear tendency and the presence of bias. ANN regression was then applied as an alternative to PLS, given its ability to model data from nonlinear systems. The root mean square errors of monitoring (RMSEM) for glucose, triglycerides and total cholesterol with the best ANN models were 13.20, 10.31 and 12.35 mg dL-1, respectively. Statistical tests (F and t) suggest that NIR spectroscopy combined with multivariate regression methods (PLS and ANN) is capable of quantifying these analytes (glucose, triglycerides and cholesterol) even when they are present in highly complex biological fluids such as blood plasma.
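A minimal sketch of the PLS-plus-RMSEP workflow described above, using scikit-learn. The synthetic spectra, analyte values and number of latent variables are assumptions standing in for the real NIR data and model selection.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR spectra (samples x wavelengths) and one analyte;
# illustrative only -- the thesis used measured plasma spectra.
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 500))
y = X[:, :10].sum(axis=1) + 0.1 * rng.standard_normal(100)

X_cal, X_test, y_cal, y_test = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=5)   # number of latent variables (assumed)
pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_test).ravel()

# Root mean square error of prediction (RMSEP) on the external test set
rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
print(f"RMSEP = {rmsep:.3f}")
```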
Abstract:
Soil contamination by pesticides is an environmental problem that needs to be monitored and avoided. However, the lack of fast, accurate and low-cost analytical methods for detecting residual pesticides in complex matrices such as soil is a problem that remains unresolved, and it needs to be solved before we are able to assess the quality of environmental samples. The intensive use of pesticides has increased since the 1960s because of the dependence on their use, causing biological imbalances and promoting resistance and the recurrence of high populations of pests and pathogens (resurgence). This has contributed to the appearance of new pests that were previously under natural control. Developing analytical methods able to quantify pesticide residues in complex environments is still a challenge for many laboratories. The integration of two analytical methods, one ecotoxicological and one chemical, demonstrates the potential for the environmental analysis of methamidophos. The aim of this study was to evaluate an ecotoxicological method as an analytical "screening" for methamidophos in soil and to confirm the concentration of the analyte in the samples by a chemical method (LC-MS/MS). In this work two soils were tested, one clayey and one sandy; for both, the sorption kinetics of methamidophos followed a pseudo-second-order model. The clay soil showed higher adsorption of methamidophos and followed the Freundlich model, while the sandy soil followed the Langmuir model. The LC-MS/MS chemical method was validated satisfactorily, with adequate linearity, range, precision, accuracy and sensitivity. In the chronic ecotoxicological tests with C. dubia, the NOEC was 4.93 and 3.24 ng L-1 of methamidophos for the elutriate assays of the sandy and clay soils, respectively. The ecotoxicological method was more sensitive than LC-MS/MS for the detection of methamidophos in the clayey and sandy soils. However, by decreasing the concentration of the analytical standard of methamidophos and adjusting the validation conditions, the chemical method achieves a limit of quantification (LOQ) in ng L-1, consistent with the levels of the ecotoxicological test. The methods described can be used as an analytical tool for methamidophos in soil, with the ecotoxicological analysis serving as a "screening" step and LC-MS/MS as the confirmatory analysis of the analyte, fulfilling the objectives of this work.
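For reference, the standard forms of the three models named above, in their conventional notation (the abstract itself does not list the symbols or fitted parameter values):

```latex
% Pseudo-second-order sorption kinetics (both soils)
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
% Freundlich isotherm (fitted for the clayey soil)
q_e = K_F \, C_e^{1/n}
% Langmuir isotherm (fitted for the sandy soil)
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
% q_t, q_e: sorbed amount at time t and at equilibrium; C_e: equilibrium
% concentration; k_2, K_F, n, K_L, q_max: fitted parameters
```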
Abstract:
The use of intelligent agents in multi-classifier systems appeared in order to turn the centralized decision process of a multi-classifier system into a distributed, flexible and incremental one. Based on this, the NeurAge (Neural Agents) system (Abreu et al. 2004) was proposed. This system has shown performance superior to some combination-centered methods (Abreu, Canuto, and Santana 2005). Negotiation is important for the performance of a multi-agent system, but most negotiations are defined informally. One way to formalize the negotiation process is to use an ontology. In the context of classification tasks, an ontology provides a way to formalize the concepts and the rules that govern the relations between these concepts. This work aims at using ontologies to produce a formal description of the negotiation methods of a multi-agent system for classification tasks, more specifically the NeurAge system. Through ontologies, we intend to make the NeurAge system more formal and open, allowing new agents to join the system during the negotiation. To this end, the NeurAge system will be studied with respect to its functioning, focusing mainly on the negotiation methods it uses. After that, some negotiation ontologies found in the literature will be studied, and those chosen for this work will be adapted to the negotiation methods used in NeurAge.
Abstract:
The objective of research in artificial intelligence is to enable the computer to execute functions that are performed by humans using knowledge and reasoning. This work was developed in the area of machine learning, the branch of artificial intelligence concerned with the design and development of algorithms and techniques that make computational learning possible. The objective of this work is to analyze a feature selection method for ensemble systems. The proposed method falls into the filter approach to feature selection; it uses variance and the Spearman correlation to rank the features, and reward and punishment strategies to measure the importance of each feature for the identification of the classes (a minimal sketch of the ranking step is given after this abstract). For each ensemble, several different configurations were used, varying from hybrid (homogeneous) to non-hybrid (heterogeneous) ensemble structures. They were submitted to five combination methods (voting, sum, weighted sum, multilayer perceptron and naïve Bayes), applied to six distinct databases (real and artificial). The classifiers applied during the experiments were k-nearest neighbors, multilayer perceptron, naïve Bayes and decision tree. Finally, the performance of the ensembles was analyzed comparatively, using no feature selection method, using a filter-approach (original) feature selection method, and using the proposed method. For this comparison a statistical test was applied, which demonstrated that there was a significant improvement in the precision of the ensembles.
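The sketch below illustrates the two ranking criteria named in the abstract, feature variance and Spearman correlation with the class. The combination rule (simple rank averaging) is an assumption for illustration; the thesis's reward/punishment strategy is not specified in the abstract.

```python
import numpy as np
from scipy import stats

def rank_features(X, y):
    """Filter-style feature ranking by variance and Spearman correlation.

    A minimal sketch: rank features by each criterion, then average the
    ranks. The averaging step is an illustrative assumption."""
    variances = X.var(axis=0)
    spearman = np.array([abs(stats.spearmanr(X[:, j], y)[0])
                         for j in range(X.shape[1])])
    # Rank positions under each criterion (0 = best, i.e. largest value).
    rank_var = np.argsort(np.argsort(-variances))
    rank_spr = np.argsort(np.argsort(-spearman))
    combined = (rank_var + rank_spr) / 2.0
    return np.argsort(combined)          # feature indices, best first

# Toy example with 5 features, where feature 0 tracks the class label.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = rng.standard_normal((200, 5))
X[:, 0] += 2.0 * y
print(rank_features(X, y))               # feature 0 should rank first
```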
Abstract:
The activity of requirements engineering is seen in agile methods as a bureaucratic activity that makes the process less agile. However, the lack of documentation in agile development environments is identified as one of the main challenges of the methodology. Thus, there is a contradiction between what the agile methodology claims and the result observed in real environments. For example, in agile methods user stories are widely used to describe requirements. However, this way of describing requirements is still not enough, because a user story is too narrow an artifact to represent and detail the requirements. Activities such as verifying the software context and the dependencies between stories are also limited when only this artifact is used. In the field of requirements engineering there are goal-oriented approaches that bring benefits to requirements documentation, including completeness of requirements, analysis of alternatives and support for the rationale of requirements. Among these approaches, the i* modeling technique stands out, providing a graphical view of the actors involved in the system and their dependencies. This work proposes an additional resource that aims to reduce this lack of documentation in agile methods. The objective is to provide a graphical view of the software requirements and their relationships through i* models, thus enriching the requirements in agile methods. To this end, we propose a set of heuristics to map requirements expressed as user stories into i* models. These models can then be used as a form of documentation in the agile environment, because, by mapping to i* models, the requirements can be viewed more broadly, with their proper relationships, according to the business environment they serve.
Abstract:
Data clustering is a technique applied in various fields, such as data mining, image processing and pattern recognition. Clustering algorithms split a data set into clusters such that elements within the same cluster have a high degree of similarity, while elements belonging to different clusters have a high degree of dissimilarity. The Fuzzy C-Means (FCM) algorithm is the fuzzy clustering algorithm most used and discussed in the literature. The performance of FCM is strongly affected by the selection of the initial cluster centers; therefore, the choice of a good set of initial centers is very important for the performance of the algorithm. However, in FCM the choice of initial centers is made randomly, which makes it difficult to find a good set. This work proposes three new methods to obtain initial cluster centers deterministically for the FCM algorithm, which can also be used in variants of FCM; here, these initialization methods were also applied to the ckMeans variant. With the proposed methods, we intend to obtain a set of initial centers close to the real cluster centers. With these new initialization approaches, we aim to reduce the number of iterations these algorithms need to converge and their processing time, without affecting the quality of the clustering, or even improving it in some cases. Accordingly, cluster validation indices were used to measure the quality of the clusters obtained by the FCM and ckMeans algorithms modified with the proposed initialization methods when applied to various data sets.
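A minimal sketch of the standard FCM iteration starting from a given set of initial centers, the step whose sensitivity to initialization motivates the proposed methods. The deterministic initialization shown in the toy run (data extremes) is an illustrative assumption; the thesis's three methods are not detailed in the abstract.

```python
import numpy as np

def fcm(X, centers, m=2.0, max_iter=100, tol=1e-5):
    """Standard Fuzzy C-Means updates from given initial centers."""
    C = centers.copy()
    for _ in range(max_iter):
        # Distances from every point to every center (eps avoids div by zero)
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)),
                         axis=2)
        # Center update: mean of the data weighted by u^m
        um = u ** m
        C_new = (um.T @ X) / um.sum(axis=0)[:, None]
        if np.linalg.norm(C_new - C) < tol:
            break
        C = C_new
    return C, u

# Toy run: two Gaussian blobs, centers initialized deterministically at the
# data extremes (one simple deterministic choice, for illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
init = np.array([X.min(axis=0), X.max(axis=0)])
centers, _ = fcm(X, init)
print(centers)
```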
Abstract:
The monitoring of earth dams makes use of visual inspection and instrumentation to identify and characterize the deterioration that compromises the security of earth dams and associated structures. Visual inspection is subjective and can lead to misinterpretation or omission of important information, and some problems are detected too late. Instrumentation is efficient, but certain technical or operational issues can impose restrictions. Thereby, visual inspection and instrumentation together can still leave a lack of information. Geophysics offers consolidated, low-cost methods that are non-invasive and non-destructive. They have strong potential and can be used to assist instrumentation. In cases where visual inspection and instrumentation do not provide all the necessary information, geophysical methods can provide more complete and relevant information. In order to test these ideas, geophysical acquisitions were performed using ground-penetrating radar (GPR), electrical resistivity, seismic refraction and refraction microtremor (ReMi) on the dike of the dam at Sant Llorenç de Montgai, located in the province of Lleida, 145 km from Barcelona, Catalonia. The results confirmed that each of the geophysical methods used responded satisfactorily to the conditions of the earth dike, the anomalies present and the geological features found, such as alluvium and carbonate and evaporite rocks. It was also confirmed that these methods, when used in an integrated manner, are able to reduce the ambiguities of the individual interpretations. They enable improved imaging of the interior of dikes and of major geological features, allowing inspection of the embankment and its foundation. Consequently, the results obtained in this study demonstrated that these geophysical methods are sufficiently effective for inspecting earth dams and are an important complement to the instrumentation and visual inspection used for dam security.
Abstract:
The main objective of this work is to find mathematical models, based on linear parametric estimation techniques, for the problem of calculating the gas flow rate in oil wells. In particular, we focus on obtaining flow models for wells that produce by the plunger-lift technique, in which case there are high peaks in the flow values that hinder their direct measurement by instruments. For this purpose, we developed estimators based on recursive least squares and carried out an analysis of statistical measures such as the autocorrelation, the cross-correlation, the variogram and the cumulative periodogram, all computed recursively as data are obtained in real time from the plant in operation; the values obtained for these measures tell us how accurate the current model is and how it can be changed to better fit the measured values. The models were tested in a pilot plant which emulates the gas production process in oil wells.
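A minimal textbook sketch of the recursive least squares estimator family named above, with a forgetting factor so the model can track changes online. The regressor structure and toy system are illustrative assumptions; the thesis's model structure is not given in the abstract.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least squares update with forgetting factor lam.

    theta: parameter vector, P: covariance matrix,
    phi: regressor vector, y: new measurement."""
    k = P @ phi / (lam + phi @ P @ phi)          # gain vector
    err = y - phi @ theta                        # one-step prediction error
    theta = theta + k * err                      # parameter update
    P = (P - np.outer(k, phi @ P)) / lam         # covariance update
    return theta, P

# Toy example: identify y_t = 2.0*u_t - 0.5*u_{t-1} from streaming data.
rng = np.random.default_rng(3)
u = rng.standard_normal(500)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for t in range(1, 500):
    phi = np.array([u[t], u[t - 1]])
    y = 2.0 * u[t] - 0.5 * u[t - 1] + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
print(theta)    # approximately [2.0, -0.5]
```

The recursively computed residual diagnostics mentioned in the abstract (autocorrelation, cumulative periodogram, etc.) would be updated from `err` at each step to judge model adequacy.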
Abstract:
The objective of this work is to study the asymptotic behavior of Pearson's (1900) statistic, which is the theoretical apparatus behind the well-known chi-squared test, also usually denoted the χ² test. We first study the behavior of the distribution of Pearson's (1900) chi-squared statistic in a sample {X1, X2, ..., Xn} as n → ∞ with pi = pi0 for all n. We then detail the arguments used in Billingsley (1960), which establish the convergence in distribution of a statistic, similar to Pearson's, based on a sample from a stationary, ergodic Markov chain with finite state space S.
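For reference, the statistic in question and its classical limit, written in standard notation (assumed here, since the abstract does not display the formula):

```latex
% Pearson's (1900) chi-squared statistic for k categories with
% hypothesized probabilities p_i^0 and observed counts N_i in a
% sample of size n, and its limiting distribution:
X^2 = \sum_{i=1}^{k} \frac{(N_i - n p_i^0)^2}{n p_i^0}
      \xrightarrow[n \to \infty]{d} \chi^2_{k-1}
```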