14 results for Electrical engineering
in the Repositório Institucional da Universidade Tecnológica Federal do Paraná (RIUT)
Abstract:
This thesis investigates the interaction of acoustic waves and fiber Bragg gratings (FBGs) in standard and suspended-core fibers (SCFs), evaluating the influence of the fiber, grating and modulator design on the modulation efficiency, bandwidth and frequency. Initially, the frequency response and the resonant acoustic modes of a low-frequency acousto-optic modulator (f < 1.2 MHz) are numerically investigated using the finite element method. The interaction of longitudinal acoustic waves and FBGs in SCFs is then also numerically investigated. The fiber geometric parameters are varied, and the strain and grating properties are simulated by means of the finite element method and the transfer matrix method. The study indicates that the air holes composing the SCF significantly reduce the amount of silica in the fiber cross section, increasing the acousto-optic interaction in the core. Experimental modulation of the reflectivity of FBGs inscribed in two distinct SCFs provides evidence of this increased interaction. In addition, a method to acoustically induce a dynamic phase shift in a chirped FBG employing an optimized modulator design is demonstrated. Afterwards, a combination of this modulator and an FBG inscribed in a three-air-hole SCF is applied to mode-lock an ytterbium-doped fiber laser. To improve the modulator design for future applications, two other distinct devices are investigated to increase the acousto-optic interaction, bandwidth and frequency (f > 10 MHz). A high reflectivity modulation is achieved for a modulator based on a tapered fiber. Moreover, an increased modulated bandwidth (320 pm) is obtained for a modulator based on the interaction of a radial long-period grating (RLPG) and an FBG inscribed in a standard fiber. In summary, the results show a considerable reduction of the grating/fiber length and the modulator size, indicating possibilities for compact and faster acousto-optic fiber devices.
Additionally, the increased interaction efficiency, modulated bandwidth and frequency can be useful to shorten the pulse width of future all-fiber mode-locked lasers, as well as for other photonic devices which require the control of light in optical fibers by electrically tunable acoustic waves.
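The transfer matrix method used above to simulate the grating properties can be sketched in a few lines. The sketch below is a generic coupled-mode model of a uniform FBG, not the thesis code; all parameter values (n_eff, kappa, length, number of sections) are illustrative assumptions.

```python
import numpy as np

def fbg_reflectivity(wavelengths, lam_bragg=1550e-9, n_eff=1.45,
                     kappa=200.0, length=0.01, n_sections=50):
    """Reflection spectrum of a uniform FBG via the transfer matrix method.

    kappa: coupling coefficient (1/m); length: grating length (m).
    The grating is split into n_sections uniform pieces whose 2x2
    field-transfer matrices are multiplied together, so the same code
    extends naturally to non-uniform (e.g. chirped) gratings.
    """
    wavelengths = np.asarray(wavelengths, float)
    dz = length / n_sections
    R = np.empty_like(wavelengths)
    for i, lam in enumerate(wavelengths):
        sigma = 2 * np.pi * n_eff * (1 / lam - 1 / lam_bragg)  # detuning
        gamma = np.sqrt(kappa**2 - sigma**2 + 0j)
        F = np.eye(2, dtype=complex)
        for _ in range(n_sections):
            c, s = np.cosh(gamma * dz), np.sinh(gamma * dz)
            M = np.array([[c - 1j * sigma / gamma * s, -1j * kappa / gamma * s],
                          [1j * kappa / gamma * s,  c + 1j * sigma / gamma * s]])
            F = M @ F
        R[i] = abs(F[1, 0] / F[0, 0]) ** 2  # |r|^2
    return R
```

At the Bragg wavelength this reduces to the textbook result R = tanh²(κL), which is a convenient sanity check for the cascade.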
Abstract:
An ideal biomaterial for dental implants must have very high biocompatibility, meaning that the material should not provoke any serious adverse tissue response. In addition, the metal alloys used must have high fatigue resistance, due to the masticatory forces, and good corrosion resistance. These properties are obtained by using alpha and beta stabilizers such as Al, V, Ni, Fe, Cr, Cu and Zn. Commercially pure titanium (TiCP) is often used for manufacturing dental and orthopedic implants. However, other alloys are sometimes employed, and it is consequently essential to investigate whether the chemical elements present in those alloys could be harmful to health. The present work investigated TiCP metal alloys used for dental implant manufacturing and evaluated the presence of stabilizing elements against the existing limits and standards for such materials. The EDXRF technique was used for alloy characterization and identification of the stabilizing elements. This method allows qualitative and quantitative analysis of the materials using the spectra of the characteristic X-rays emitted by the elements present in the metal samples. The experimental setup was based on two X-ray tubes (AMPTEK Mini-X model with Ag and Au targets), an X-123SDD detector (AMPTEK) and a 0.5 mm Cu collimator, developed to match the sample characteristics. A complementary setup was composed of an X-ray tube with a Mo target, a 0.65 mm collimator and an XFlash (SDD) detector - ARTAX 200 (BRUKER). Elemental characterization by energy dispersive spectroscopy (EDS) was also applied, based on an EVO® Scanning Electron Microscope (SEM, Zeiss); this method was additionally used to evaluate the surface microstructure of the samples. The percentage of Ti obtained in the elemental characterization was between 93.35 ± 0.17% and 95.34 ± 0.19%.
These values are below the reference range of 98.635% to 99.5% for TiCP established by ASM International. The presence of the elements Al and V in all samples also supports the conclusion that these are not TiCP implants. The values for Al vary between 3.7 ± 2.0% and 6.3 ± 1.3%, and for V between 0.112 ± 0.048% and 0.26 ± 0.09%. According to the American Society for Testing and Materials (ASTM), these elements should not be present in TiCP, and according to the National Institute of Standards and Technology (NIST), the presence of Al should be <0.01% and of V should be 0.009 ± 0.001%. The results showed that the implant materials are not TiCP but were manufactured using a Ti-Al-V alloy, which also contained Fe, Ni, Cu and Zn. The quantitative analysis and elemental characterization show that the best accuracy and precision were reached with the Au-target X-ray tube and the 0.5 mm collimator. The EDS technique confirmed the EDXRF results for the Ti-Al-V alloy. Evaluating the surface microstructure of the implants by SEM, it was possible to infer that ten of the thirteen studied samples are contemporary, with rough surfaces, and three have machined surfaces.
Abstract:
Communication in vehicular ad hoc networks (VANETs) is commonly divided into two scenarios, namely vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I). Aiming at establishing communication that is secure against eavesdroppers, recent works have proposed the exchange of secret keys based on the variation in received signal strength (RSS). However, the performance of such schemes depends on the channel variation rate, being more appropriate for scenarios where the channel varies rapidly, as is usually the case in V2V communication. In V2I communication, the channel commonly undergoes slow fading. In this work we propose the use of multiple antennas in order to artificially generate a fast fading channel, so that the extraction of secret keys from the RSS becomes feasible in a V2I scenario. Numerical analysis shows that the proposed model can outperform, in terms of secret bit extraction rate, a frequency hopping-based method proposed in the literature.
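A minimal sketch of the kind of RSS-based key extraction discussed above, assuming a simple level-crossing quantizer with a guard band; the thresholds and the index-exchange reconciliation step are illustrative, not the scheme proposed in the work.

```python
import numpy as np

def rss_to_bits(rss, guard=0.2):
    """Quantize an RSS trace into secret bits (hypothetical scheme).

    Samples above mean + guard*std map to 1, samples below
    mean - guard*std map to 0; samples inside the guard band are
    dropped to reduce bit mismatches between the two nodes.
    Returns (bits, kept_indices) so the nodes can reconcile indices.
    """
    rss = np.asarray(rss, float)
    mu, sd = rss.mean(), rss.std()
    hi, lo = mu + guard * sd, mu - guard * sd
    keep = np.where((rss > hi) | (rss < lo))[0]   # outside guard band
    bits = (rss[keep] > hi).astype(int)
    return bits, keep
```

Each node runs this on its own RSS trace and the two then keep only the intersection of their `keep` index sets; in slow-fading V2I channels the trace barely crosses the thresholds, which is why the abstract's multi-antenna scheme artificially speeds up the fading.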
Abstract:
This study presents a proposal for speed servomechanisms using induction motors without mechanical sensors (sensorless control). Techniques for estimating the rotor speed are proposed and compared, analyzing their performance under different speed and load conditions. To select the control technique, an analysis of the technical literature on the main control and speed estimation methods is first performed, covering their characteristics and limitations. The proposed technique for the sensorless speed servo with an induction motor uses indirect field-oriented control (IFOC), composed of four proportional-integral (PI) controllers: a rotor flux controller, a speed controller and current controllers for the direct and quadrature axes. As the main focus of the work is the speed control loop, the recursive least squares (RLS) algorithm was implemented in Matlab for the identification of mechanical parameters such as the moment of inertia and the friction coefficient. Thus, the gains of the outer speed loop controller can be self-adjusted to compensate for any changes in the mechanical parameters. The following speed estimation techniques are analyzed: MRAS based on rotor fluxes, MRAS based on the counter-EMF, MRAS based on instantaneous reactive power, slip estimation, phase-locked loop (PLL) and sliding mode. A sliding-mode speed estimation scheme, in which the rotor flux observer structure is modified, is also proposed. To evaluate the techniques, theoretical analyses are performed in the Matlab simulation environment and on an experimental electrical machine drive platform. The DSP TMS320F28069 was used for the experimental implementation of the speed estimation techniques and for checking their performance over a wide speed range, including load insertion. From this analysis, closed-loop sensorless speed control with the IFOC structure is implemented. The results demonstrated the real possibility of replacing mechanical sensors by the proposed and analyzed estimation techniques.
Among these, the PLL-based estimator demonstrated the best performance under various conditions, while the sliding-mode technique showed good estimation capacity in steady state and robustness to parameter variations.
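The RLS identification of the moment of inertia and friction coefficient described above can be sketched as follows, assuming a first-order Euler discretization of the mechanical model J·dω/dt + B·ω = T; the discretization, forgetting factor and initial covariance are illustrative choices, not necessarily those of the thesis.

```python
import numpy as np

def rls_identify(omega, torque, Ts, lam=0.99):
    """Recursive least squares fit of omega[k] = a*omega[k-1] + b*T[k-1].

    With an Euler discretization of J*domega/dt + B*omega = T, the
    mechanical parameters follow from the fitted (a, b):
        J = Ts / b,   B = (1 - a) * J / Ts.
    """
    theta = np.zeros(2)              # parameter estimate [a, b]
    P = np.eye(2) * 1e4              # covariance, large initial uncertainty
    for k in range(1, len(omega)):
        phi = np.array([omega[k - 1], torque[k - 1]])  # regressor
        K = P @ phi / (lam + phi @ P @ phi)            # RLS gain
        theta = theta + K * (omega[k] - phi @ theta)   # innovation update
        P = (P - np.outer(K, phi @ P)) / lam           # covariance update
    a, b = theta
    J = Ts / b
    B = (1.0 - a) * J / Ts
    return J, B
```

The forgetting factor lam < 1 is what lets the speed-loop gains track slow changes in J and B during operation, as the abstract describes.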
Measurements of the concentration of radon emanating from Portland cement, gypsum and phosphogypsum mortars
Abstract:
Portland cement, a very common construction material, contains natural gypsum in its composition. To decrease manufacturing costs, the cement industry is replacing the gypsum in its composition with small quantities of phosphogypsum, a residue generated by the production of fertilizers which consists essentially of calcium sulfate dihydrate and some impurities, such as fluoride, metals in general, and radionuclides. Currently, tons of phosphogypsum are stored in the open air near the fertilizer industries, causing contamination of the environment. The 226Ra present in these materials produces the gas 222Rn when it undergoes radioactive decay. This radioactive gas, when inhaled together with its decay products deposited in the lungs, produces exposure to radiation and can be a potential cause of lung cancer. Thus, the objective of this study was to measure the concentration levels of 222Rn from cylindrical samples of Portland cement, gypsum and phosphogypsum mortar from the state of Paraná, as well as to characterize the materials and estimate the radon concentration in a hypothetical dwelling with walls covered by such materials. The experimental setup for the 222Rn activity measurements was based on the AlphaGUARD detector (Saphymo GmbH). The qualitative and quantitative analysis was performed by gamma spectrometry and EDXRF with Au- and Ag-target tubes (AMPTEK) and a Mo-target tube (ARTAX), and mechanical testing with X-ray equipment (Gilardoni) and a mechanical press (EMIC). The average values of radon activity from the studied materials in the air of the containers were 854 ± 23 Bq/m3, 60.0 ± 7.2 Bq/m3 and 52.9 ± 5.4 Bq/m3 for Portland cement, gypsum and phosphogypsum mortar, respectively. These results, extrapolated to the volume of a hypothetical dwelling of 36 m3 with the walls covered by such materials, were 3366 ± 91 Bq/m3, 237 ± 28 Bq/m3 and 208 ± 21 Bq/m3 for Portland cement, gypsum and phosphogypsum mortar, respectively.
Considering the limit of 300 Bq/m3 established by the ICRP, it can be concluded that the use of Portland cement plaster in dwellings is not safe and requires some specific mitigation procedure. Using the results of gamma spectrometry, the radium equivalent activity concentrations (Raeq) for Portland cement, gypsum and phosphogypsum mortar were calculated as 78.2 ± 0.9 Bq/kg, 58.2 ± 0.9 Bq/kg and 68.2 ± 0.9 Bq/kg, respectively. All values of radium equivalent activity concentration for the studied samples are below the maximum level of 370 Bq/kg. The qualitative and quantitative analysis of the EDXRF spectra obtained from the studied mortar samples allowed the evaluation and quantification of the elements that constitute the materials, such as Ca, S and Fe, among others.
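The radium equivalent activity quoted above is conventionally computed as a weighted sum of the specific activities of 226Ra, 232Th and 40K; the abstract does not state the formula, so the sketch below uses the standard UNSCEAR-style weighting as a reference.

```python
def radium_equivalent(c_ra, c_th, c_k):
    """Radium equivalent activity (Bq/kg) from the standard weighting

        Raeq = C_Ra + 1.43*C_Th + 0.077*C_K,

    which assumes 370 Bq/kg of 226Ra, 259 Bq/kg of 232Th and
    4810 Bq/kg of 40K produce the same gamma dose rate.
    Inputs are specific activities in Bq/kg.
    """
    return c_ra + 1.43 * c_th + 0.077 * c_k
```

The 370 Bq/kg ceiling mentioned in the abstract is exactly the Raeq of a material containing the reference 226Ra activity alone.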
Abstract:
The knowledge-intensive character of software production and its rising demand suggest the need to establish mechanisms to properly manage the knowledge involved, in order to meet the requirements of deadlines, costs and quality. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. Specifically for software development, capitalization enables easier access, minimizes the loss of knowledge, reduces the learning curve, and helps avoid repeated errors and rework. Thus, this thesis presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition and updating of knowledge, in order to use it in the execution of new tasks. The method was proposed on the basis of a set of methodological procedures: literature review, systematic review and analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study, conducted on a real case, and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.
Abstract:
Humans have a high ability to extract information from visual data acquired by sight. Through a learning process, which starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features, such as edges, shapes and textures, and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe the physical and behavioral characteristics of other people, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow high-level information to be extracted from images in the form of soft biometrics. This problem is approached in two ways, using unsupervised and supervised learning methods. The first seeks to group images by automatically learning feature extraction, using convolution techniques, evolutionary computing and clustering; in this approach the employed images contain faces and people. The second approach employs convolutional neural networks, which have the ability to operate on raw images, learning both the feature extraction and the classification processes. Here, images are classified according to gender and clothing, the latter divided into the upper and lower parts of the human body. The first approach, when tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for persons versus non-persons. The second, tested using images and videos, obtained an accuracy of about 70% for gender, 80% for the upper clothes and 90% for the lower clothes.
The results of these case studies show that the proposed methods are promising, allowing automatic high-level annotation of image content. This opens possibilities for the development of applications in diverse areas, such as content-based image and video retrieval and automatic video surveillance, reducing human effort in the tasks of manual annotation and monitoring.
Abstract:
In this research work, a new routing protocol for opportunistic networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it uses a hybrid system based on a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its search based on individuals and their learning adaptation. PSONET uses the Particle Swarm Optimization technique to drive the network traffic through a good subset of message forwarders. PSONET analyzes the network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better decisions about routing messages. The PSONET protocol is compared with the Epidemic and PROPHET protocols in three different mobility scenarios: a mobility model based on activities, which simulates the everyday life of people in their work, leisure and rest activities; a mobility model based on a community of people, which simulates a group of people in their communities who occasionally contact other people, who may or may not be part of their community, to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, based on the restriction map, move to this destination using the shortest path. The simulation results, obtained through the ONE simulator, show that in the scenarios with the community-based mobility model and with the random mobility model, the PSONET protocol achieves a higher message delivery rate and lower message replication compared with the Epidemic and PROPHET protocols.
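For reference, the Particle Swarm Optimization heuristic at the core of PSONET can be sketched in a few lines. This is the canonical PSO on a generic objective, with illustrative inertia and acceleration constants; the protocol's actual fitness function over forwarder subsets is not reproduced here.

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization: each particle tracks its
    personal best and the swarm's global best, and updates its velocity
    with inertia (w) plus cognitive (c1) and social (c2) attraction."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

In PSONET the "position" encodes routing decisions rather than a point in Euclidean space, but the personal-best/global-best learning dynamic is the same.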
Abstract:
This work presents an application of optical fiber sensors based on Bragg gratings integrated into a transtibial prosthesis tube manufactured with a polymeric composite system of epoxy resin reinforced with glass fiber. The main objective of this study is to characterize the sensors as applied to the gait cycle and to changes in the center of gravity of a transtibial amputee, through the analysis of the deformation and strength of the transtibial prosthesis tube. For this investigation, a tube of the composite material described above was produced using the resin transfer molding (RTM) method, with four optical sensors. The prosthesis in which the original tube was replaced is classified as endoskeletal, and has a vacuum fitting, an aluminum connector tube and a cushioning carbon fiber foot. The volunteer for the tests was a 41-year-old man, 1.65 meters tall, weighing 72 kilograms and left-handed. His amputation occurred due to trauma (the surgical section is at the medial level and was made below the knee of the left lower limb). He has been a transtibial prosthesis user for two years and eight months. The characterization of the optical sensors and the analysis of the mechanical deformation and resistance of the tube were carried out over the gait cycle and the variation of the center of gravity of the body through the following tests: standing up, supporting the leg without the prosthesis, supporting the leg with the prosthesis, walking forward and walking backward. Besides the characterization of the optical sensors during the gait cycle and the variation of the center of gravity of a transtibial amputee, the results also showed a high degree of integration of the sensors in the composite and a high mechanical strength of the material.
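The strain read-out of Bragg grating sensors like those above follows the standard FBG strain relation; a minimal sketch (pe = 0.22 is the typical effective photo-elastic coefficient for silica fiber, an assumed value, and temperature cross-sensitivity is ignored):

```python
def strain_from_shift(d_lambda, lam_bragg=1550e-9, pe=0.22):
    """Axial strain from an FBG Bragg-wavelength shift.

    Uses dLambda / lambda_B = (1 - pe) * strain, solved for strain.
    d_lambda and lam_bragg in meters; returns strain in microstrain.
    """
    return d_lambda / (lam_bragg * (1.0 - pe)) * 1e6
```

This reproduces the usual rule of thumb of roughly 1.2 pm of wavelength shift per microstrain at 1550 nm, which is how tube deformation during the gait tests maps back to mechanical strain.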
Abstract:
The purpose of this work is to demonstrate and assess a simple algorithm for automatic estimation of the most salient region in an image, with possible applications in computer vision. The algorithm exploits the connection between color dissimilarities in the image and the image's most salient region, and avoids using image priors. Pixel dissimilarity is an informal function of the distance between a specific pixel's color and the colors of the other pixels in an image. We examine the relation between pixel color dissimilarity and salient region detection on the MSRA1K image dataset. We propose a simple algorithm for salient region detection through random pixel color dissimilarity, defining dissimilarity as the accumulated distance between each pixel and a sample of n other random pixels in the CIELAB color space. An important result is that the random dissimilarity between each pixel and just one other pixel (n = 1) is enough to create adequate saliency maps when combined with a median filter, with competitive average performance compared with other related methods in the saliency detection research field. The assessment was performed by means of precision-recall curves. This idea is inspired by the human attention mechanism, which is able to choose a few specific regions to focus on, a biological system that the computer vision community aims to emulate. We also review some of the history of this topic of selective attention.
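The dissimilarity operator described above lends itself to a few lines of code. The sketch below follows the accumulation of random-pixel color distances, but works directly on raw channels instead of CIELAB and omits the median filtering, to stay self-contained; both simplifications change the numbers, not the idea.

```python
import numpy as np

def random_dissimilarity_saliency(img, n=1, seed=0):
    """Saliency by random pixel dissimilarity.

    For each pixel, accumulate the Euclidean color distance to n
    randomly chosen pixels and average. Rare colors (salient regions)
    usually get compared against the dominant background color, so
    they accumulate large distances.
    """
    rng = np.random.default_rng(seed)
    h, w, c = img.shape
    flat = img.reshape(-1, c).astype(float)
    sal = np.zeros(h * w)
    for _ in range(n):
        idx = rng.integers(0, h * w, size=h * w)      # one random partner per pixel
        sal += np.linalg.norm(flat - flat[idx], axis=1)
    return (sal / n).reshape(h, w)
```

With n = 1 the map is noisy, which is exactly why the work pairs it with a median filter before thresholding.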
Abstract:
Spasticity is a common disorder in people who have suffered upper motor neuron injuries. The involvement may occur at different levels. The Modified Ashworth Scale (MAS) is the most commonly used method to grade the level of involvement, but it corresponds to a subjective evaluation. Mechanomyography (MMG) is an objective technique that quantifies the muscle vibration during contraction and stretching events, and may therefore assess the level of spasticity accurately. This study aimed to investigate the correlation between the spasticity levels determined by the MAS and the MMG signal in spastic and non-spastic muscles. In the experimental protocol, we evaluated 34 limbs of 22 volunteers of both genders, with a mean age of 39.91 ± 13.77 years. We evaluated the levels of spasticity by the MAS in the flexor and extensor muscle groups of the knee and/or elbow, where one muscle group was the agonist and the other the antagonist. The MMG signals were recorded simultaneously with the MAS assessment. We used custom MMG equipment, configured on the LabVIEW platform, to register and record the signals. Using MatLab, the MMG signals were processed in the time domain (median energy) and in the spectral domain (median frequency) for the three motion axes: X (transversal), Y (longitudinal) and Z (perpendicular). For bandwidth delimitation, we used a 3rd-order Butterworth filter acting in the range of 5-50 Hz. Statistical tests such as Spearman's correlation coefficient, the Kruskal-Wallis test and the linear correlation test were applied. In the time domain, the Kruskal-Wallis test showed differences in the median energy (MMGME) between MAS groups. The linear correlation test showed a high linear correlation between the MAS and the MMGME for the agonist muscle group as well as for the antagonist group. The largest linear correlation occurred between the MAS and the MMGME for the Z axis of the agonist muscle group (R2 = 0.9557) and the lowest correlation occurred on the X axis for the antagonist muscle group (R2 = 0.8862).
The Spearman correlation test also confirmed a high correlation for all axes in the time domain analysis. In the spectral domain, the analysis showed an increase in the median frequency (MMGMF) at higher MAS levels. The highest correlation coefficient between the MAS and the MMGMF signal occurred on the Z axis for the agonist muscle group (R2 = 0.4883), and the lowest value occurred on the Y axis for the antagonist group (R2 = 0.1657). By means of the Spearman correlation test, the highest correlation occurred on the Y axis of the agonist group (0.6951; p < 0.001) and the lowest value on the X axis of the antagonist group (0.3592; p < 0.001). We conclude that there was a significantly high correlation between the MMGME and the MAS in both muscle groups. A significant correlation also occurred between the MMGMF and the MAS, although moderate for the agonist group and low for the antagonist group. Thus, the MMGME proved to be the more appropriate descriptor to correlate with the degree of spasticity defined by the MAS.
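The spectral-domain descriptor used above, the median frequency, can be computed as below. This is the generic definition (the frequency splitting the power spectrum into two halves of equal energy), assuming the 5-50 Hz Butterworth band-pass filtering has already been applied to the signal.

```python
import numpy as np

def median_frequency(signal, fs):
    """Median (spectral) frequency of a real signal.

    Computes the power spectrum via the real FFT and returns the
    frequency at which the cumulative spectral energy first reaches
    half of the total energy.
    """
    spec = np.abs(np.fft.rfft(signal)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cum = np.cumsum(spec)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]
```

The time-domain median energy descriptor (MMGME) is computed analogously on the cumulative energy of the squared samples rather than of the spectrum.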
Abstract:
The analysis of fluid behavior in multiphase flows is very relevant to guarantee system safety. The use of equipment to describe such behavior is subject to factors such as high investment and specialized labor requirements. The application of image processing techniques to flow analysis can be a good alternative; however, very little research has been developed on the subject. In this context, this study aims at developing a new approach to image segmentation based on the Level Set method, connecting active contours and prior knowledge. To do so, a shape model of the target object is trained and defined through a point distribution model, and this model is later inserted as one of the extension velocity functions for the evolution of the curve at the zero level of the level set method. The proposed approach creates a framework consisting of three energy terms and an extension velocity function, λLg(φ) + νAg(φ) + μP(φ) + θf. The first three terms of the equation are the same ones introduced in (LI; XU; FOX, 2005), and the last term, θf, is based on the representation of object shape proposed in this work. Two variations of the method are used: one restricted (Restricted Level Set - RLS) and the other without restriction (Free Level Set - FLS). The first is used for the segmentation of images containing targets with little variation in shape and pose. The second is used to correctly identify the shape of the bubbles in gas-liquid two-phase flows. The efficiency and robustness of the RLS and FLS approaches are demonstrated on images of gas-liquid two-phase flows and on the HTZ image dataset (FERRARI et al., 2009). The results confirm the good performance of the proposed algorithms (RLS and FLS) and indicate that the approach may be used as an efficient method to validate and/or calibrate the various existing meters for two-phase flow properties, as well as in other image segmentation problems.
Abstract:
One of the challenges proposed to biomedical engineers by researchers in neuroscience is brain-machine interaction. The nervous system communicates through electrochemical signals, and implantable circuits interpret them and make decisions in order to interact with the biological environment. It is well known that Parkinson's disease is related to a deficit of dopamine (DA). Different methods have been employed to control the dopamine concentration, such as magnetic or electrical stimulators or drugs. In this work, the neurotransmitter concentration was controlled automatically, something not currently done. To that end, four systems were designed and developed: deep brain stimulation (DBS), transcranial magnetic stimulation (TMS), infusion pump control (IPC) for drug delivery, and fast-scan cyclic voltammetry (FSCV), the sensing circuit that detects the varying concentrations of neurotransmitters such as dopamine caused by these stimulations. Software was also developed for displaying and analyzing data synchronously with the current events in the experiments. The flexibility of the system is such that the infusion pumps and the DBS or TMS can be used alone or combined with other stimulation techniques, such as lights, sounds, etc. The developed system allows the DA concentration to be controlled automatically. The resolution of the system is around 0.4 µmol/L, with the concentration correction time adjustable between 1 and 90 seconds. The system allows controlling DA concentrations between 1 and 10 µmol/L, with an error of about ±0.8 µmol/L. Although designed to control the DA concentration, the system can be used to control the concentration of other substances. As future work, it is proposed to continue the closed-loop development with FSCV and DBS (or TMS, or infusion) using parkinsonian animal models.
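To make the closed-loop idea concrete, here is a toy simulation of concentration regulation. The plant model, gains and clearance rate are entirely hypothetical; the sketch only illustrates the sense of the loop (sensor reading, here standing in for FSCV, feeding a controller that commands an infusion rate), not the thesis hardware or its numbers.

```python
def simulate_da_control(target, steps=200, dt=1.0, k_clear=0.05, kp=0.5):
    """Toy closed loop for a neurotransmitter concentration (illustrative).

    The 'sensor' reads concentration c; a proportional controller
    commands an infusion rate u >= 0 (a pump cannot remove substance);
    first-order clearance removes it at rate k_clear * c.
    Returns the concentration history.
    """
    c, history = 0.0, []
    for _ in range(steps):
        error = target - c              # measured deviation from setpoint
        u = max(0.0, kp * error)        # non-negative infusion command
        c += dt * (u - k_clear * c)     # first-order plant update
        history.append(c)
    return history
```

A purely proportional loop like this settles with a steady-state offset below the setpoint; the real system's adjustable correction time and error band suggest a more elaborate control law, which the abstract does not detail.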
Abstract:
This work presents the modeling and FPGA implementation of digital compensation systems for TIADC mismatches. The development of the whole work follows a top-down methodology. Following this methodology, a behavioral model of a two-channel TIADC and its respective offset, gain and clock-skew mismatches was developed in Simulink. In addition, behavioral models of the digital mismatch compensation systems were developed. For clock-skew mismatch compensation, fractional delay filters were used, more specifically the efficient Farrow structure. Defining which filter design methodology and which Farrow structure to use required the study of various design methods presented in the literature. The models of the digital compensation systems were converted to VHDL for FPGA implementation and validation. This validation was carried out using the FPGA-in-the-loop test methodology. The results obtained with the TIADC mismatch compensators show the high performance gain provided by these structures. Beyond this result, this work illustrates the potential of the design, implementation and FPGA test methodologies employed.
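The Farrow structure referred to above is easiest to see in its first-order form, which reduces to linear interpolation. Practical clock-skew compensators use higher-order sub-filters (e.g. cubic Lagrange), so the sketch below is an illustrative minimum, not the thesis design.

```python
import numpy as np

def farrow_fractional_delay(x, mu):
    """First-order Farrow structure: delay a signal by mu samples
    (0 <= mu < 1) using fixed sub-filters whose outputs are combined
    as a polynomial in mu. Because mu appears only in the final
    combination, it can be changed at run time without redesigning
    the filter - the property that makes the Farrow form attractive
    for clock-skew compensation.
    """
    x = np.asarray(x, float)
    x_prev = np.concatenate(([x[0]], x[:-1]))  # one-sample-delayed branch
    v0 = x                                     # sub-filter 0 output
    v1 = x_prev - x                            # sub-filter 1 output
    return v0 + mu * v1                        # Horner combination in mu
```

On a ramp input the output is the input shifted by exactly mu samples, which is a convenient check; a hardware version maps each sub-filter to a fixed-coefficient FIR and the mu-combination to a short multiply-accumulate chain.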