Abstract:
The objective of this study was to analyze electrolyte and acid-base disturbances in high-performance athletes during the 2014 World Karate Championship hosted by the WKO (World Karate Organization). Nineteen male athletes (age 34 ± 8 years) were analyzed, all black belts with over 5 years of experience in the sport. Capillary blood samples from the digital pulp of the finger were collected at three stages: at rest, 5 minutes after and 10 minutes after fighting (kumite). The samples were analyzed with a GEM Premier 3000 blood gas analyzer, using the parameters pH, Na+, K+, Ca2+, lactate and HCO3−. The values related to acid-base disturbance showed statistically significant differences (p < 0.05) at most of the collection points. Lactate levels were 2.77 ± 0.97 mmol/L at rest, 6.57 ± 2.1 mmol/L 5 minutes after and 4.06 ± 1.55 mmol/L 10 minutes after combat. The samples collected for the electrolyte markers showed no statistically significant differences (p > 0.05). Based on the data collected, we conjecture that the sport can be characterized as high-intensity exercise with a predominance of the glycolytic system, and that the analysis of acid-base disturbance is an efficient method to assist in the control of training loads.
Abstract:
The general aim of this study was to propose a spatial dose map as an auxiliary tool for assessing the need for optimization of the workplace in nuclear medicine services. As specific aims, we assessed the workers' individual dosimetry, analyzed the facilities of the nuclear medicine services, and evaluated environmental exposure rates. The research is characterized as a case study of an exploratory and explanatory nature, conducted in three nuclear medicine services, all located in the northwest of Paraná State. Results indicated that the evaluated dose rates and workers' dosimetry, in all areas of the surveyed services, are within the annual dose limits; however, some exceeded the limits recommended in standard CNEN-NN-3.01 (2014). It was concluded that the spatial dose map is an important tool for nuclear medicine services because it facilitates the visualization of the areas with the highest concentration of radiation, helps in the constant review of protection measures and resources, and aids in identifying failures and shortcomings, providing the means to correct any issues and prevent their recurrence. The spatial dose map is also useful for regular inspection, to evaluate whether the radiation protection objectives are being met.
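The core of a spatial dose map, as described above, is a grid of dose-rate readings per location from which hotspots can be flagged. A minimal sketch follows; the grid size, readings and limit are illustrative placeholders, not data from the study.

```python
# Minimal sketch of a spatial dose map: a grid of ambient dose-rate
# readings (µSv/h) per room cell, flagging cells above a chosen limit.
# All numbers here are illustrative placeholders, not data from the study.

def flag_hotspots(dose_map, limit):
    """Return (row, col) coordinates of cells whose dose rate exceeds limit."""
    return [(r, c)
            for r, row in enumerate(dose_map)
            for c, rate in enumerate(row)
            if rate > limit]

# One room mapped on a coarse 3x4 grid (µSv/h):
room = [
    [0.2, 0.3, 0.5, 0.4],
    [0.3, 1.8, 2.4, 0.6],   # higher readings near a hypothetical hot bench
    [0.2, 0.4, 0.7, 0.3],
]

hotspots = flag_hotspots(room, limit=1.0)
print(hotspots)  # -> [(1, 1), (1, 2)]
```

Plotting such a grid as a heat map gives exactly the "visualization of areas with highest concentration of radiation" the abstract refers to.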
Abstract:
Accompanied by a teaching manual: "O emprego de aspectos sociocientíficos no ensino de química" (the use of socio-scientific aspects in chemistry teaching).
Abstract:
The Brazilian agricultural research agency has, over the years, contributed to solving social problems and to generating new knowledge, incorporating new advances and seeking the country's technological independence through the transfer of the knowledge and technology generated. The process of transferring knowledge and technology, however, has represented a major challenge for public institutions. Embrapa is the largest and main Brazilian agricultural research company, with a staff of 9,790 employees, of whom 2,440 are researchers, and an annual budget of R$ 2.52 billion. It operates through 46 decentralized research units and coordinates the National Agricultural Research System (SNPA). Considering that technology transfer is the consecration of the effort and resources spent on the generation of knowledge and the validation of research, this work aims to assess the performance of Embrapa Swine and Poultry along the broiler production chain and to propose a technology transfer model for this chain that can be used by Public Research Institutions (IPPs). This study is justified by the importance of agricultural research for the country and by the importance of the institution addressed. The methodology used was a case study with a qualitative approach, documentary and bibliographic research, and interviews using semi-structured questionnaires. The survey was conducted in three stages. In the first stage, a diagnosis was made of the Technology Transfer (TT) process and of the contribution of Embrapa Swine and Poultry to the broiler supply chain. This stage used bibliographic and documentary research and semi-structured interviews with agroindustrial broiler agents, researchers at Embrapa Swine and Poultry, technology transfer professionals from Embrapa and Embrapa Swine and Poultry, and technology transfer managers and researchers from the Agricultural Research Service (ARS).
In the second stage, a model was developed for Embrapa's poultry technology transfer process. In this phase, documentary and bibliographic research and analysis of the information obtained in the interviews were carried out. The third phase validated the proposed model with the various sectors of the broiler production chain. The data show that, although Embrapa Swine and Poultry develops technologies for the broiler production chain, the rate of adoption of these technologies by the chain is very low. It was also diagnosed that there is a gap between the institution and the various links of the chain. An observatory mechanism was proposed to bring Embrapa Swine and Poultry and the agents of the broiler chain closer together, in order to identify and discuss research priorities. The proposed model seeks to improve the interaction between the institution and the chain, so as to identify the chain's real research demands and to pursue the joint development of solutions for these demands. The proposed TT model was approved by a large majority (96.77%) of the interviewed agents working in the various links of the chain, as well as by representatives (92%) of the entities linked to this chain. The acceptance of the proposed model demonstrates the chain's willingness to approach Embrapa Swine and Poultry and to seek joint solutions to existing problems.
Abstract:
This work investigates the historical narratives that the graphic designer Alexandre Wollner assembled about the development of his own profession in Brazil, focusing on the ways in which his discourse establishes relations among design (with greater emphasis on graphic design) and the visual arts, industrial development and notions of technology. First, the theoretical framework sought dialogue with design historians, with Mikhail Bakhtin, especially his concepts of "ideology" and "discourse", and with Pierre Bourdieu's theory of field autonomy as applied to artistic practice. Next, the relation between Wollner's own trajectory and Brazilian industrial development is presented, and, finally, three of his historical texts are studied, written at different moments (1964; 1983; 1998), in which the author sought to point out the most remarkable origins, events and names. Throughout the work, the importance of Wollner's contact with the modernist European ideologies of abstract and rationalist matrix found at the Hochschule für Gestaltung Ulm (HfG Ulm), the German design school in the city of Ulm, in the 1950s, is emphasized. This modernist discourse understood the practice of design as a method of scientific character, thus differing from the more artistic professional practices then current in some productive sectors. Wollner sought to apply these ideals in his professional practice, the foundation of the São Paulo office forminform, in 1958, being one of the first expressions of this posture, and in his academic practice, helping to found the Escola Superior de Desenho Industrial (ESDI), in Rio de Janeiro, in 1963. These modernist ideals accompanied moments of Brazilian industrial development during the government of Juscelino Kubitschek (1956–1961) and the "Economic Miracle" of the military government (1968–1973).
Wollner argued for the development of national design as a technological and productive differential that would help the growth of national industry, based on Ulm's concept of the project model. It is argued that Wollner's professional and intellectual path, in his effort to think a history of Brazilian design through the choice of pioneers in the area, was founded on an "ideal model" of design, leaving aside the modernist experiences of the 1950s. This posture would indicate a search for validation of his own profession, which was beginning to become more evident in Brazilian productive circles, aiming at the creation of a differentiated space in comparison with pre-established practices, usually linked to the graphic artists of the time.
Abstract:
The knowledge-intensive character of software production and its rising demand suggest the need to establish mechanisms to properly manage the knowledge involved, in order to meet requirements of deadline, cost and quality. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. Specifically for software development, capitalization enables easier access to knowledge, minimizes its loss, reduces the learning curve, and avoids repeated errors and rework. This thesis presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition and updating of knowledge, so that it can be used in the execution of new tasks. The method was proposed from a set of methodological procedures: literature review, systematic review and analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study, conducted on a real case, and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.
Abstract:
Humans have a great ability to extract information from visual data acquired by sight. Through a learning process, which starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe the physical and behavioral characteristics of other people, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow the extraction of high-level information from images in the form of soft biometrics. The problem is approached in two ways: unsupervised and supervised learning. The first seeks to group images through automatic feature extraction learning, combining convolution techniques, evolutionary computation and clustering; in this approach the images employed contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and the classification processes; here, images are classified according to gender and clothing, divided into upper and lower parts of the human body. The first approach, when tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested using images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothes and 90% for lower-body clothes.
The results of these case studies show that the proposed methods are promising, allowing automatic high-level annotation of images. This opens possibilities for the development of applications in diverse areas such as content-based image and video retrieval and automatic video surveillance, reducing the human effort in manual annotation and monitoring tasks.
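The convolutional pipeline mentioned above can be illustrated in miniature: one convolution, one nonlinearity (ReLU) and one pooling stage applied to a tiny grayscale image. The 3x3 edge kernel below is a hypothetical hand-written filter, not one learned in the thesis; in a real network such kernels are learned from data.

```python
# Toy sketch of the convolution -> ReLU -> max-pooling stage that
# convolutional networks apply to raw images (grayscale here).

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a 2D list image with a 2D list kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def relu(fmap):
    return [[max(0.0, x) for x in row] for row in fmap]

def max_pool2(fmap):
    """2x2 max pooling with stride 2."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# Hypothetical vertical-edge kernel: responds to dark-to-bright transitions.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# A 6x6 image: dark left half, bright right half -> strong response at the edge.
image = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
features = max_pool2(relu(conv2d(image, edge_kernel)))
print(features)  # -> [[27, 27], [27, 27]]
```

A real classifier stacks many such stages, with learned kernels, before a final fully connected layer produces the gender or clothing label.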
Abstract:
In this research work, a new routing protocol for opportunistic networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it uses a hybrid system built around a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its individual-based search and learning adaptation. PSONET uses the PSO technique to drive network traffic through a good subset of message forwarders. PSONET analyzes network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better message-routing decisions. The PSONET protocol is compared with the Epidemic and PROPHET protocols in three different mobility scenarios: an activity-based mobility model, which simulates the everyday life of people in their work, leisure and rest activities; a community-based mobility model, which simulates groups of people in their communities who occasionally contact people who may or may not be part of their community in order to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, based on a restriction map, move to this destination using the shortest path. The simulation results, obtained with the ONE simulator, show that in the community-based and random mobility scenarios the PSONET protocol achieves a higher message delivery rate and lower message replication than the Epidemic and PROPHET protocols.
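The PSO loop that PSONET builds on can be sketched briefly: particles track personal and global bests and update their velocities toward both. The sketch below minimizes the standard sphere function; the routing-specific fitness and discrete encoding used by PSONET are not reproduced here, and all constants are generic textbook choices.

```python
# Minimal Particle Swarm Optimization sketch (continuous, global-best topology).
import random

def pso(fitness, dim=2, particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [list(x) for x in xs]                 # personal bests
    gbest = min(pbest, key=fitness)               # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if fitness(xs[i]) < fitness(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = min(pbest, key=fitness)
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)  # converges close to the origin with these settings
```

In PSONET the fitness would instead score candidate forwarder subsets by the observed connectivity conditions, but the velocity/position update above is the shared core.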
Abstract:
Doctoral thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Civil e Ambiental, 2015.
Abstract:
The purpose of this work is to demonstrate and assess a simple algorithm for automatic estimation of the most salient region in an image, with possible applications in computer vision. The algorithm exploits the connection between color dissimilarities in the image and the image's most salient region, and avoids using image priors. Pixel dissimilarity is an informal function of the distance between a specific pixel's color and the colors of other pixels in the image. We examine the relation between pixel color dissimilarity and salient region detection on the MSRA1K image dataset, and propose a simple algorithm for salient region detection through random pixel color dissimilarity. We define dissimilarity by accumulating the distance, in the CIELAB color space, between each pixel and a sample of n other random pixels. An important result is that random dissimilarity between each pixel and just one other pixel (n = 1) is enough to create adequate saliency maps when combined with a median filter, with average performance competitive with other related methods in the saliency detection field. The assessment was performed by means of precision-recall curves. The idea is inspired by the human attention mechanism, which is able to choose a few specific regions to focus on, a biological system that the computer vision community aims to emulate. We also review some of the history of research on selective attention.
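The procedure above is simple enough to sketch directly: each pixel's raw saliency is its color distance to n randomly chosen pixels (n = 1 below, as in the reported result), smoothed with a 3x3 median filter. For brevity, plain Euclidean distance on RGB triples stands in for the CIELAB distance the thesis uses, and the test image is a synthetic toy.

```python
# Sketch of random-dissimilarity saliency with a 3x3 median filter.
import random

def saliency(img, n=1, seed=0):
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Raw map: accumulated distance to n random pixels per pixel.
    raw = [[sum(dist(img[i][j], img[rng.randrange(h)][rng.randrange(w)])
                for _ in range(n))
            for j in range(w)] for i in range(h)]
    # 3x3 median filter (borders kept as-is for simplicity).
    out = [row[:] for row in raw]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            block = sorted(raw[u][v]
                           for u in (i - 1, i, i + 1)
                           for v in (j - 1, j, j + 1))
            out[i][j] = block[4]
    return out

# Mostly dark image with a bright 2x2 patch as a toy salient region.
img = [[(0, 0, 0)] * 6 for _ in range(6)]
for i in (2, 3):
    for j in (2, 3):
        img[i][j] = (255, 255, 255)
smap = saliency(img)
```

With larger images and a larger salient region, the median filter suppresses the isolated high responses that the single random comparison produces in the background.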
Abstract:
Rapid population growth is the great driver of the development of the construction industry and of the increased demand for drinking water, resulting in a gradual increase in the generation of solid waste. This work was therefore carried out with the aim of recycling industrial and municipal wastes by incorporating them into materials for civil construction. A composite produced from water treatment sludge and marble polishing mud, using lime production waste as a binder, was evaluated for its mechanical performance and morphological structure. The raw materials were characterized with respect to chemical composition, mineralogy, morphology, particle size and moisture content. With the characterized materials, nine compositions were developed, varying the content of water treatment sludge between 25 and 50%, marble polishing mud between 35 and 50% and lime production waste between 10 and 30%. The composites were subjected to tests of mechanical strength, water absorption, chemical and mineralogical composition and morphology. The developed materials presented maximum strength values of 4.65 MPa on the 3rd day of hydration, 6.36 MPa on the 7th day, 6.74 MPa on the 14th day, 5.98 MPa on the 28th day, 8.52 MPa on the 60th day, 11.75 MPa on the 90th day and 12.06 MPa on the 180th day. Water absorption values after 28 days of hydration ranged from 16.27% to 26.32% and, after 90 days, from 13.57% to 23.56%.
Abstract:
The textile industry generates a large volume of effluent with high organic loading, whose intense color arises from residual dyes. Due to the environmental implications caused by this category of contaminant, there is a permanent search for methods to remove these compounds from industrial wastewater. Adsorption is one of the most efficient alternatives for such sequestration/remediation, particularly with inexpensive materials such as agricultural residues (e.g., sugarcane bagasse) and cotton dust waste (CDW) from weaving, in their natural or chemically modified forms. The inclusion of quaternary amino (DEAE+) and carboxymethyl (CM−) groups in the CDW cellulosic structure generates ion exchange capacity in this formerly inert matrix and, consequently, consolidates its ability for electrovalent adsorption of residual textile dyes. The obtained ionic matrices were evaluated for pHpzc and for retention efficiency for various textile dyes under different experimental conditions, such as initial concentration, temperature and contact time, in order to determine the kinetic and thermodynamic parameters of batch adsorption and thus understand how the process occurs, as interpreted from the respective isotherms. A change in the pHpzc was observed for CM−-CDW (6.07) and DEAE+-CDW (9.66) compared to native CDW (6.46), confirming changes in the total surface charge. The ionized matrices were effective in removing all evaluated pure or residual textile dyes under the various experimental conditions tested. The kinetic data were best fitted by a pseudo-second-order model, and an intraparticle diffusion model suggested that the process takes place in more than one step. The time required for the system to reach equilibrium varied with the initial dye concentration, being shorter in dilute solutions. The Langmuir isotherm model gave the best fit to the experimental data.
The maximum adsorption capacity varied for each tested dye and is closely related to the adsorbent/adsorbate interaction and to the chemical structure of the dye. A few dyes showed a linear variation of the equilibrium constant Ka with the inversion of temperature, which may influence their thermodynamic behavior. Dyes that could be evaluated, such as BR 18:1 and AzL, showed features of an endothermic adsorption process (positive ΔH°), while the dye VmL presented characteristics of an exothermic process (negative ΔH°). ΔG° values suggested that adsorption occurred spontaneously, except for the BY 28 dye, and the ΔH° values indicated that adsorption occurred by chemisorption. The reduction of 31 to 51% in the biodegradability of the matrices after dye adsorption means that they must go through a cleaning process before being discarded or recycled, and the regeneration test indicated that the matrices can be reused up to five times without loss of performance. The DEAE+-CDW matrix was efficient in removing color from a real textile effluent, reaching a 93% decrease in UV-visible spectral area when applied at a proportion of 15 g of ion exchange matrix per liter of colored wastewater, even in the presence of 50 g L−1 of mordant salts in the wastewater. The range of colored matter removed by the synthesized matrices varied from 40.27 to 98.65 mg g−1 of ionized matrix, depending on the particular chemical structure of each dye.
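The pseudo-second-order kinetic fit mentioned above is commonly done through the linearized form t/q_t = 1/(k2·qe²) + t/qe: plotting t/q_t against t gives a straight line whose slope yields qe and whose intercept yields k2. A sketch with synthetic data (qe = 50 mg/g, k2 = 0.002 g/(mg·min) are invented values, not results from the thesis):

```python
# Fit the linearized pseudo-second-order model by ordinary least squares.

def fit_pseudo_second_order(t, qt):
    y = [ti / qi for ti, qi in zip(t, qt)]       # t/qt values
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    slope = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
             / sum((ti - mt) ** 2 for ti in t))
    intercept = my - slope * mt
    qe = 1.0 / slope                             # slope = 1/qe
    k2 = slope ** 2 / intercept                  # intercept = 1/(k2*qe^2)
    return qe, k2

# Synthetic uptake data generated from the model itself:
qe_true, k2_true = 50.0, 0.002
times = [5, 10, 20, 40, 60, 90, 120]             # minutes
qt = [qe_true ** 2 * k2_true * ti / (1 + qe_true * k2_true * ti)
      for ti in times]

qe_fit, k2_fit = fit_pseudo_second_order(times, qt)
```

With real data the quality of the linear fit (R²) is what justifies calling the process pseudo-second order.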
Abstract:
The analysis of fluid behavior in multiphase flow is very relevant to guaranteeing system safety. The use of equipment to describe such behavior is subject to factors such as high investment and the need for specialized labor. The application of image processing techniques to flow analysis can be a good alternative; however, very little research has been developed on the subject. This study therefore aims at developing a new approach to image segmentation, based on the Level Set method, that connects active contours and prior knowledge. To do so, a shape model of the target object is trained and defined through a point distribution model, and this model is later inserted as one of the extension velocity functions for the evolution of the curve at the zero level of the level set method. The proposed approach creates a framework consisting of three energy terms and an extension velocity function, λLg(φ) + νAg(φ) + μP(φ) + θf. The first three terms of the equation are the same ones introduced in (LI; XU; FOX, 2005), and the last term, θf, is based on the representation of object shape proposed in this work. Two variations of the method are used: one restricted (Restricted Level Set, RLS) and one without restriction (Free Level Set, FLS). The first is used to segment images containing targets with little variation in shape and pose; the second is used to correctly identify the shape of the bubbles in gas-liquid two-phase flows. The efficiency and robustness of the RLS and FLS approaches are demonstrated on images of gas-liquid two-phase flows and on the HTZ image dataset (FERRARI et al., 2009). The results confirm the good performance of the proposed algorithms (RLS and FLS) and indicate that the approach may be used as an efficient method to validate and/or calibrate the various existing meters for two-phase flow properties, as well as in other image segmentation problems.
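For reference, the three energy terms from Li et al. (2005) are usually written as below; this is the standard formulation from that paper, with the shape term θf being the thesis's own addition, whose exact definition is not reproduced here.

```latex
% g is an edge-indicator function, H the Heaviside function and
% \delta the Dirac delta; \mu P(\phi) penalizes deviation of \phi
% from a signed distance function.
\mathcal{E}(\phi) = \lambda L_g(\phi) + \nu A_g(\phi) + \mu P(\phi) + \theta f,
\qquad
P(\phi) = \int_\Omega \tfrac{1}{2}\bigl(|\nabla\phi| - 1\bigr)^2\,dx,
\quad
L_g(\phi) = \int_\Omega g\,\delta(\phi)\,|\nabla\phi|\,dx,
\quad
A_g(\phi) = \int_\Omega g\,H(-\phi)\,dx.
```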
Abstract:
One of the challenges posed to biomedical engineers by researchers in neuroscience is brain-machine interaction. The nervous system communicates through electrochemical signals, and implantable circuits make decisions in order to interact with the biological environment. It is well known that Parkinson's disease is related to a deficit of dopamine (DA). Different methods have been employed to modulate dopamine concentration, such as magnetic or electrical stimulators or drugs, but automatic control of neurotransmitter concentration is not currently employed; this work set out to achieve it. To do so, four systems were designed and developed: deep brain stimulation (DBS), transcranial magnetic stimulation (TMS), infusion pump control (IPC) for drug delivery, and fast-scan cyclic voltammetry (FSCV), the sensing circuits that detect the varying concentrations of neurotransmitters such as dopamine caused by these stimulations. Software was also developed for data display and analysis in synchrony with the current events of the experiments. The use of infusion pumps is flexible enough that DBS or TMS can be used alone or combined with other stimulation techniques such as lights, sounds, etc. The developed system automatically controls the concentration of DA. Its resolution is around 0.4 µmol/L, with a concentration correction time adjustable between 1 and 90 seconds. The system controls DA concentrations between 1 and 10 µmol/L, with an error of about ±0.8 µmol/L. Although designed to control DA concentration, the system can be used to control the concentration of other substances. It is proposed to continue the closed-loop development with FSCV and DBS (or TMS, or infusion) using parkinsonian animal models.
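The closed-loop idea can be caricatured in a few lines: a sensed concentration (here standing in for an FSCV reading) drives an infusion command through a feedback controller. Everything below is hypothetical, a PI controller against a first-order clearance model; the thesis does not state its control law or plant parameters, so the constants are invented for illustration only.

```python
# Hypothetical closed-loop concentration control sketch (PI controller,
# first-order clearance plant). All constants are illustrative.

def simulate(setpoint, steps=600, dt=1.0,
             k_p=0.2, k_i=0.01, k_clear=0.05):
    c = 0.0        # current concentration (µmol/L)
    integ = 0.0    # integral of the error
    for _ in range(steps):
        error = setpoint - c
        integ += error * dt
        infusion = max(0.0, k_p * error + k_i * integ)  # pump cannot withdraw
        c += dt * (infusion - k_clear * c)              # simple mass balance
    return c

final = simulate(5.0)   # target 5 µmol/L; settles near the setpoint
```

The integral term removes the steady-state offset a purely proportional controller would leave against the constant clearance.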
Abstract:
In this work, a platform for conditioning, digitizing, displaying and recording EMG signals was developed; after acquisition, the signals can be analyzed with signal processing techniques. The platform consists of two modules which acquire electromyography (EMG) signals through surface electrodes, limit the frequency band of interest, filter out power grid interference, and digitize the signals with the analog-to-digital converter of the module's microcontroller. The data are then sent to the computer over a USB interface using the HID specification, displayed in real time in graphical form, and stored in files. As processing resources, the platform implements the absolute value of the signal, the effective (RMS) value, Fourier analysis, a digital IIR filter and an adaptive filter. Initial tests of the platform were performed with signals from the lower and upper limbs, with the aim of comparing EMG signal laterality. The open platform is intended for educational activities and academic research, allowing the addition of other processing methods that a researcher may want to evaluate, or other required analyses.
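Of the processing resources listed, the windowed RMS is the simplest to sketch; it is the standard way to summarize EMG amplitude over time. The window length and toy signal below are illustrative, not taken from the platform.

```python
# Windowed RMS of a sampled signal, over non-overlapping windows.
import math

def windowed_rms(signal, window):
    """RMS over consecutive non-overlapping windows of `window` samples."""
    return [math.sqrt(sum(x * x for x in signal[i:i + window]) / window)
            for i in range(0, len(signal) - window + 1, window)]

# A burst of activity between two quiet stretches shows up as a higher RMS.
sig = [0.0] * 8 + [1.0, -1.0] * 4 + [0.0] * 8
print(windowed_rms(sig, 8))  # -> [0.0, 1.0, 0.0]
```

On real EMG the same computation, after band-limiting and mains-interference filtering, yields the amplitude envelope used to compare limb laterality.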