999 results for numeric method
Abstract:
The main aim of this work is to determine the Specific Absorption Rate (SAR) in human head tissues exposed to radiation from 900 and 1800 MHz sources, since these are the typical frequencies of current mobile communication systems. To determine the SAR, the FDTD (Finite-Difference Time-Domain) method was used, a numerical time-domain method derived from Maxwell's equations in differential form. To this end, a two-dimensional computational model of the human head was implemented with cells of the smallest possible size, respecting the limits of the available computational processing. It was possible to verify the very good efficiency of the FDTD method in solving this type of problem.
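For readers unfamiliar with the method, the core of an FDTD scheme is a leapfrog update of interleaved electric and magnetic field grids. The sketch below shows a minimal 1D free-space version with a hard sinusoidal source; it is an illustration only, not the 2D head model of the paper, and the grid size, Courant factor and source parameters are arbitrary choices:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300, source_pos=100):
    """Minimal 1D FDTD (Yee) leapfrog update in free space.

    Illustrative sketch only: the paper uses a 2D human-head model
    with tissue-dependent material parameters at 900/1800 MHz.
    """
    ez = np.zeros(n_cells)   # electric field samples
    hy = np.zeros(n_cells)   # magnetic field samples, staggered half a cell
    for t in range(n_steps):
        # update H from the spatial derivative of E (Courant factor 0.5)
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # update E from the spatial derivative of H
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # hard sinusoidal source at an interior cell
        ez[source_pos] = np.sin(0.1 * t)
    return ez
```

In a SAR study the fields from such a simulation are post-processed, e.g. via SAR = σ|E|²/ρ for tissue conductivity σ and density ρ; that step is omitted here.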
Abstract:
This study proposes a new concept for upscaling local information on failure surfaces derived from geophysical data, in order to develop the spatial information and quickly estimate the magnitude and intensity of a landslide. A new vision of seismic interpretation on landslides is also demonstrated by taking into account basic geomorphic information with a numerical method based on the Sloping Local Base Level (SLBL). The SLBL is a generalization of the base level defined in geomorphology applied to landslides, and allows the calculation of the potential geometry of the landslide failure surface. This approach was applied to a large-scale landslide formed mainly in gypsum and situated in a former glacial valley along the Rhone within the Western European Alps. Previous studies identified the existence of two sliding surfaces that may continue below the level of the valley. In this study, seismic refraction-reflection surveys were carried out to verify the existence of these failure surfaces. The analysis of the seismic data provides a four-layer model in which three velocity layers (<1000 m/s, 1500 m/s and 3000 m/s) are interpreted as the mobilized mass at different levels of weathering and compaction. The highest-velocity layer (>4000 m/s), with a maximum depth of ~58 m, is interpreted as the stable anhydrite bedrock. Two failure surfaces were interpreted from the seismic surveys: an upper failure surface and a much deeper one (respectively 25 and 50 m deep). The depth of the upper failure surface deduced from geophysics is slightly different from the results obtained using the SLBL, and the depth of the deeper failure surface calculated with the SLBL method is underestimated in comparison with the geophysical interpretations. Optimal results were therefore obtained by including the seismic data in the SLBL calculations according to the geomorphic limits of the landslide (maximal volume of mobilized mass = 7.5 × 10^6 m^3).
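The SLBL idea, iteratively lowering each surface cell inside the landslide limits to the mean of its neighbours minus a tolerance whenever that value lies below the current surface, can be sketched as follows. This is a simplified 4-neighbour variant on a regular grid; the tolerance, iteration count and mask are illustrative assumptions:

```python
import numpy as np

def slbl(dem, tolerance, mask, n_iter=500):
    """Sloping Local Base Level sketch: inside `mask`, lower each cell to
    the mean of its 4 neighbours minus `tolerance` whenever that lies
    below the current surface. Returns the potential failure surface."""
    z = dem.astype(float).copy()
    for _ in range(n_iter):
        mean4 = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                        np.roll(z, 1, 1) + np.roll(z, -1, 1))
        candidate = mean4 - tolerance
        update = mask & (candidate < z)   # only ever deepen, only inside mask
        z[update] = candidate[update]
    return z
```

The mobilized volume then follows as `(dem - z).sum() * cell_area`. Note that `np.roll` wraps at the grid edges, so the mask must stay away from the boundary in this sketch.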
Abstract:
In this article, a single-phase transmission line, whose parameters are considered both frequency-independent and frequency-dependent, is represented by state variables. Based on previous analyses, a reasonable number of π circuits and of blocks composed of a resistor and an inductor in parallel is used to reduce numerical oscillations. The influence of increasing the number of parallel RL blocks on the obtained results is analyzed. The parallel RL blocks are used to include the influence of frequency on the longitudinal parameters of the transmission line. This simple model has been used by undergraduate students to simulate traveling-wave phenomena in transmission lines. For the model without frequency influence, a representation of the corona effect is included. Some simulations are carried out considering the corona effect, and they are compared with results without this phenomenon.
Abstract:
In this article, a single-phase transmission line, whose parameters are considered both frequency-independent and frequency-dependent, is represented by state variables. The reasonable number of π circuits, and of blocks composed of a resistor and an inductor in parallel, needed to reduce numerical oscillations is analyzed. The numerical routine is simulated with and without the effect of frequency on the longitudinal parameters. Initially, state variables and π circuits are used to represent the transmission line, composing a linear system that is solved by numerical routines based on the trapezoidal rule. The effect of frequency on the line is synthesized by resistors and inductors in parallel, and this representation is analyzed in detail. Transmission lines, and the influence of frequency on these lines, are described through the state variables.
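The modeling chain described in the two abstracts above (a cascade of π circuits written as a state-variable system and integrated with the trapezoidal rule) can be sketched as follows. The per-section R, L, C values, the section count, and the DC step excitation are illustrative assumptions, and the parallel RL blocks that synthesize the frequency dependence are omitted for brevity:

```python
import numpy as np

def simulate_line(n_pi=50, R=0.05, L=1e-6, C=1e-11,
                  dt=1e-9, steps=2000, v_source=1.0):
    """Line as a cascade of n_pi π sections (series R-L, shunt C lumped at
    each node, open receiving end). States x = [i_0, v_0, i_1, v_1, ...];
    x' = A x + B u is integrated with the trapezoidal rule."""
    n = 2 * n_pi
    A = np.zeros((n, n))
    B = np.zeros(n)
    for k in range(n_pi):
        i, v = 2 * k, 2 * k + 1        # section current, output-node voltage
        A[i, i] = -R / L               # L di/dt = v_prev - v - R i
        A[i, v] = -1.0 / L
        if k == 0:
            B[i] = 1.0 / L             # source drives the first inductor
        else:
            A[i, 2 * k - 1] = 1.0 / L
        A[v, i] = 1.0 / C              # C dv/dt = i_in - i_out
        if k < n_pi - 1:
            A[v, 2 * (k + 1)] = -1.0 / C
    # trapezoidal rule: (I - dt/2 A) x_{t+1} = (I + dt/2 A) x_t + dt B u
    I = np.eye(n)
    M1 = np.linalg.inv(I - dt / 2 * A) @ (I + dt / 2 * A)
    M2 = np.linalg.inv(I - dt / 2 * A) @ (dt * B)
    x = np.zeros(n)
    for _ in range(steps):
        x = M1 @ x + M2 * v_source     # step response to a DC source
    return x[-1]                       # receiving-end voltage
```

Because the trapezoidal rule is A-stable and the network is passive, the response stays bounded; the lightly damped oscillations it produces are exactly the numerical/traveling-wave behavior the papers study.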
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Regardless of the existence of highly sophisticated techniques and ever-increasing computing capabilities, the problems associated with robots interacting with unstructured environments remain an open challenge in robotics. Despite great advances in autonomous robotic systems, there are some situations in which a person in the loop is still necessary. Examples include tasks in nuclear fusion environments, space missions, underwater operations and robotic surgery. This need arises because current technologies cannot reliably and autonomously perform every kind of task. This thesis presents methods for robot teleoperation spanning different levels of abstraction, ranging from supervisory control, in which an operator gives high-level instructions in the form of actions, to bilateral control, where the commands take the form of low-level control signals. First, an approach to perform supervisory teleoperation of humanoid robots is presented. The goal is to control ground robots capable of executing complex tasks in search-and-rescue environments using limited communication links. This proposal incorporates autonomous behaviors that the operator can use to perform navigation and manipulation tasks while covering large areas of remote environments designed for human access. The experimental results demonstrate the effectiveness of the proposed methods. Second, the use of cost-effective devices for guided telemanipulation is investigated. An application involving a bimanual humanoid robot and a motion-capture suit based on inertial sensors is presented. In this application, the adaptation capabilities introduced by the human factor, and how they can compensate for the lack of high-precision robotic systems, are studied.
This work is the result of a collaboration between researchers from the Biorobotics Laboratory at Harvard University and the Centre for Automation and Robotics UPM-CSIC. Third, a new haptic controller combining rate and position control is presented. This hybrid bilateral controller copes with the problems related to teleoperating a slave robot with a large workspace using a small haptic device as master. Wide work areas can be covered by automatically switching between rate and position control modes. This haptic controller is ideal for master-slave systems with dissimilar kinematics, where commands are transmitted in the task space of the remote environment. The method is validated for dexterous telemanipulation of objects with an industrial robot. Finally, two contributions in the field of robotic manipulation are introduced. On the one hand, a new inverse kinematics algorithm, called the iterative kinematic decoupling method, is presented. This method was developed to solve the inverse kinematics problem of a type of six-degree-of-freedom robot for which a closed-form solution is not available. The effectiveness of the method is compared with conventional numerical methods. In addition, a robust grasp taxonomy has been designed that allows different robotic hands to be controlled using a gesture-based correspondence between the workspaces of the human hand and the robotic hand. The human hand gesture is identified by reading the relative movements of the user's index, thumb and middle fingers during the early stages of grasping. ABSTRACT Regardless of the availability of highly sophisticated techniques and ever-increasing computing capabilities, the problems associated with robots interacting with unstructured environments remain an open challenge.
Despite great advances in autonomous robotics, there are some situations where a human-in-the-loop is still required, such as nuclear, space, subsea and robotic surgery operations. This is because current technologies cannot reliably perform all kinds of tasks autonomously. This thesis presents methods for robot teleoperation at different levels of abstraction, ranging from supervisory control, where the operator gives high-level task actions, to bilateral teleoperation, where the commands take the form of low-level control inputs. These strategies contribute to improving current human-robot interfaces, especially in the case of slave robots deployed in large workspaces. First, an approach to perform supervisory teleoperation of humanoid robots is presented. The goal is to control ground robots capable of executing complex tasks in disaster relief environments under constrained communication links. This proposal incorporates autonomous behaviors that the operator can use to perform navigation and manipulation tasks, allowing large human-engineered areas of the remote environment to be covered. The experimental results demonstrate the efficiency of the proposed methods. Second, the use of cost-effective devices for guided telemanipulation is investigated. A case study involving a bimanual humanoid robot and an Inertial Measurement Unit (IMU) Motion Capture (MoCap) suit is introduced. Herein, it is corroborated how the adaptation capabilities offered by the human-in-the-loop factor can compensate for the lack of high-precision robotic systems. This work is the result of a collaboration between researchers from the Harvard Biorobotics Laboratory and the Centre for Automation and Robotics UPM-CSIC. Thirdly, a new haptic rate-position controller is presented. This hybrid bilateral controller copes with the problems related to the teleoperation of a slave robot with a large workspace using a small haptic device as master.
Large workspaces can be covered by automatically switching between rate and position control modes. This haptic controller is ideal for coupling kinematically dissimilar master-slave systems where the commands are transmitted in the task space of the remote environment. The method is validated by performing dexterous telemanipulation of objects with a robotic manipulator. Finally, two contributions to robotic manipulation are introduced. First, a new algorithm, the Iterative Kinematic Decoupling method, is presented. It is a numerical method developed to solve the Inverse Kinematics (IK) problem of a type of six-DoF robotic arm for which a closed-form solution is not available. The effectiveness of this IK method is compared against conventional numerical methods. Second, a robust grasp mapping has been conceived. It allows controlling a wide range of different robotic hands using a gesture-based correspondence between the human hand space and the robotic hand space. The human hand gesture is identified by reading the relative movements of the index, thumb and middle fingers of the user during the early stages of grasping.
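The thesis compares its Iterative Kinematic Decoupling method against conventional numerical IK solvers. As an illustration of what such a conventional baseline looks like (not the thesis method itself), here is a damped least-squares IK sketch for a planar two-link arm; the link lengths, damping factor and iteration budget are illustrative assumptions:

```python
import numpy as np

def ik_dls(target, theta, l1=1.0, l2=1.0, damping=0.05, iters=200):
    """Damped least-squares inverse kinematics for a planar 2-link arm:
    a conventional numerical IK method, shown for comparison only."""
    theta = np.asarray(theta, float).copy()
    for _ in range(iters):
        # forward kinematics of the end effector
        x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
        y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
        e = np.asarray(target, float) - np.array([x, y])
        # analytic Jacobian of (x, y) with respect to the joint angles
        J = np.array([
            [-l1 * np.sin(theta[0]) - l2 * np.sin(theta[0] + theta[1]),
             -l2 * np.sin(theta[0] + theta[1])],
            [l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1]),
             l2 * np.cos(theta[0] + theta[1])],
        ])
        # damped update: dtheta = J^T (J J^T + lambda^2 I)^-1 e
        theta += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
    return theta
```

The damping term keeps the update well-behaved near singular configurations, at the cost of slightly slower convergence than an undamped Newton step.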
Abstract:
The dispersion of pollutants in the environment is an issue of great interest, as it directly affects air quality, mainly in large cities. Experimental and numerical tools have been used to predict the behavior of pollutant species dispersing in the atmosphere. Software was developed based on the control-volume finite element method in order to obtain two-dimensional simulations of the Navier-Stokes equations and heat or mass transport in regions with obstacles, varying the position of the pollutant source. Numerical results for several applications were obtained and, whenever possible, compared with literature results, showing satisfactory agreement. Copyright (C) 2010 John Wiley & Sons, Ltd.
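As a much simpler stand-in for the paper's control-volume finite element solver, the sketch below advances a 2D scalar advection-diffusion equation (the mass-transport part of the problem) with an explicit upwind finite-volume scheme on a uniform periodic grid; the grid, coefficients and obstacle mask are illustrative assumptions, and the coupled Navier-Stokes flow is not solved here:

```python
import numpy as np

def advect_diffuse(c, u, v, D, dx, dt, steps, solid):
    """Explicit upwind finite-volume update for
    dc/dt + u dc/dx + v dc/dy = D * laplacian(c)
    on a uniform periodic grid; `solid` cells (obstacles) are zeroed."""
    c = c.copy()
    for _ in range(steps):
        # first-order upwind gradients
        cx = np.where(u > 0, c - np.roll(c, 1, 1), np.roll(c, -1, 1) - c) / dx
        cy = np.where(v > 0, c - np.roll(c, 1, 0), np.roll(c, -1, 0) - c) / dx
        # 5-point Laplacian for diffusion
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
        c = c + dt * (D * lap - u * cx - v * cy)
        c[solid] = 0.0
    return c
```

With no obstacles and zero velocity the scheme reduces to pure diffusion, which conserves total mass on the periodic grid (a useful sanity check); stability requires dt ≤ dx²/(4D) for the explicit diffusion step.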
Abstract:
The importance of mechanical aspects related to cell activity and its environment is becoming more evident due to their influence on stem cell differentiation and on the development of diseases such as atherosclerosis. Mechanical tension homeostasis is related to normal tissue behavior, and its lack may be related to the formation of cancer, which shows a higher mechanical tension. Due to the complexity of cellular activity, the application of simplified models may elucidate which factors are really essential and which have only a marginal effect. The development of a systematic method to reconstruct the elements involved in the perception of mechanical aspects by the cell may substantially accelerate the validation of these models. This work proposes the development of a routine capable of reconstructing the topology of focal adhesions and of the actomyosin portion of the cytoskeleton from the displacement field generated by the cell on a flexible substrate. Another way to think of this problem is to develop an algorithm to reconstruct the forces applied by the cell from measurements of the substrate displacement, which characterizes an inverse problem. For this kind of problem, the Topology Optimization Method (TOM) is suitable for finding a solution. TOM consists of the iterative application of an optimization method and an analysis method to obtain an optimal distribution of material in a fixed domain. One way to experimentally obtain the substrate displacement is through Traction Force Microscopy (TFM), which also provides the forces applied by the cell. Along with systematically generating the distributions of focal adhesions and actomyosin for the validation of simplified models, the algorithm also represents a complementary and more phenomenological approach to TFM. As a first approximation, the actin fibers and the flexible substrate are represented by a two-dimensional linear Finite Element Method model.
Actin contraction is modeled as an initial stress of the FEM elements. Focal adhesions connecting actin and substrate are represented by springs. The algorithm was applied to data obtained from experiments on cytoskeletal prestress and micropatterning, comparing the numerical results to the experimental ones.
Abstract:
Background: The Western Ontario and McMaster Universities (WOMAC) Osteoarthritis Index is a previously described self-administered questionnaire covering three domains: pain, stiffness and function. It has been validated in patients with osteoarthritis (OA) of the hip or knee in a paper-based format. Aim: To validate the WOMAC 3.0 using a numerical rating scale in a computerized touch-screen format allowing immediate evaluation of the questionnaire. In the computerized version, cartoons as well as written and audio instructions were included in order to facilitate application. Methods: Fifty patients, demographically balanced, with radiographically proven primary hip or knee OA completed the classical paper version and the new computerized WOMAC version. Subjects were randomized to either the paper format or the computerized format first to balance possible order effects. Results: The intra-class correlation coefficients for pain, stiffness and function were 0.915, 0.745 and 0.940, respectively. The Spearman correlation coefficients for pain, stiffness and function were 0.88, 0.77 and 0.87, respectively. Conclusion: These data indicate that the computerized WOMAC OA Index 3.0 is comparable to the paper WOMAC in all three dimensions. The computerized version would allow physicians to obtain an immediate result and, if a previous exam is present, a direct comparison with it. (C) 2002 OsteoArthritis Research Society International. Published by Elsevier Science Ltd. All rights reserved.
Abstract:
This thesis was produced for the Technology Marketing unit at the Nokia Research Center. Technology marketing was a new function at the Nokia Research Center and needed an established framework with the capacity to take multiple aspects into account when measuring team performance. Technology marketing functions had existed in other parts of Nokia, yet no single method had been agreed upon for measuring their performance. The purpose of this study was to develop a performance measurement system for Nokia Research Center Technology Marketing. The target was for Nokia Research Center Technology Marketing to have a framework of separate metrics, including benchmarking of starting levels and target values for future planning (numeric values were kept confidential within the company). As a result of this research, the Balanced Scorecard model of Kaplan and Norton was chosen as the performance measurement system for Nokia Research Center Technology Marketing. This research selected the indicators that were utilized in the chosen performance measurement system. Furthermore, the performance measurement system was defined to guide the Head of Marketing in managing the Nokia Research Center Technology Marketing team. During the research process, the team mission, vision, strategy and critical success factors were outlined.
Abstract:
This paper presents an analysis of the numerical conditioning of the Hessian of the Lagrangian in the modified barrier function Lagrangian method (MBFL) and in the primal-dual logarithmic barrier method (PDLB), as obtained in the process of solving an optimal power flow (OPF) problem. The analysis is carried out as a comparative study based on the singular value decomposition (SVD) of those matrices. In the MBFL method, the inequality constraints are treated by a combination of the modified barrier and PDLB approaches. The inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter. The first-order necessary conditions of the Lagrangian function are solved by Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the limits of the inequality constraints to be reached. The IEEE 14-, 162- and 300-bus systems were used in the comparative analysis. ©2007 IEEE.
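The comparative SVD study described above rests on the spectral condition number of the Hessian, i.e. the ratio of its largest to smallest singular value. A minimal sketch of that computation (not the authors' OPF code) is:

```python
import numpy as np

def condition_number(H):
    """Spectral condition number of a matrix via SVD: the ratio of the
    largest to the smallest singular value. Large values indicate the
    ill-conditioning the paper compares between MBFL and PDLB Hessians."""
    s = np.linalg.svd(H, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]
```

A well-conditioned matrix such as the identity has condition number 1; the barrier parameter driving auxiliary variables toward zero is what inflates this ratio in interior-point OPF Hessians.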
Abstract:
This study aimed at developing radiographic techniques for the early detection of dyschondroplastic lesions in the tibia of broilers. The experiment was carried out at the facilities of UNIFOR/MG, Formiga, and UNIFENAS, Alfenas, with 420 one-day-old male Cobb broilers. At 20 days of age, all birds were radiographed and identified with an alphanumeric metal ring on the right leg. At 40 days of age, 42 broilers previously selected as a function of bone mineral density and lesion thickness scores were again radiographed and scored, and then sacrificed. Their right tibiae were removed for gross and histological examination of the growth plate. The results showed that the radiographic techniques were correlated with gross and histological examination and that there were no significant differences among techniques (P>0.05). It was concluded that the use of radiographic examination to identify tibial dyschondroplasia in broilers precludes the use of bone mineral density to diagnose this condition. The non-parametric chi-square test at a 5% significance level was used to analyze the results.
Abstract:
A system for screening of nutritional risk is described. It is based on the concept that nutritional support is indicated in patients who are severely ill with increased nutritional requirements, who are severely undernourished, or who have certain degrees of severity of disease in combination with certain degrees of undernutrition. Degrees of severity of disease and of undernutrition were defined as absent, mild, moderate or severe from data sets in a selected number of randomized controlled trials (RCTs) and converted to a numeric score. After completion, the screening system was validated against all published RCTs known to us of nutritional support versus spontaneous intake, to investigate whether the screening system could distinguish between trials with a positive outcome and trials with no effect on outcome.
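The grading scheme described, absent/mild/moderate/severe mapped to a numeric score and combined across the two components, can be sketched as follows; the 0-3 coding and the decision threshold used here are illustrative assumptions, not values taken from the abstract:

```python
GRADES = {"absent": 0, "mild": 1, "moderate": 2, "severe": 3}

def nutritional_risk_score(undernutrition, disease_severity, threshold=3):
    """Sketch of the screening concept: grade each component 0-3 and sum.
    The threshold for indicating nutritional support is an assumption
    made for illustration only."""
    total = GRADES[undernutrition] + GRADES[disease_severity]
    return total, total >= threshold
```

For example, moderate undernutrition combined with mild disease severity yields a total score of 3 and would flag the patient under this illustrative threshold.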
Abstract:
Pesticide application has been described by many researchers as a very inefficient process. In some cases, there are reports that only 0.02% of the applied product is used for effective control of the problem. The main factor influencing pesticide application is the droplet size formed at the spray nozzles. Many parameters affect droplet dynamics, such as wind, temperature and relative humidity. Small droplets are biologically more active, but they are affected by evaporation and drift. On the other hand, large droplets do not promote a good distribution of the product on the target. Given the risk of contaminating non-target areas and the high costs involved in applications, knowledge of the droplet size is therefore of fundamental importance in application technology. When sophisticated technology for droplet analysis is unavailable, it is common to use artificial targets such as water-sensitive paper to sample droplets. In field sampling, water-sensitive papers are placed on the plots where the product will be applied. When droplets impinge on the paper, its yellow surface is stained dark blue, making their recognition easy. Droplets collected on these papers have a range of sizes, so determining the droplet size distribution gives the mass distribution of the material and thus the efficiency of the application. The stains produced by the droplets show a spread factor proportional to their respective initial sizes. One methodology for analyzing the droplets is counting and measuring them under a microscope. A Porton N-G12 graticule, which shows equally spaced class intervals in a geometric progression of √2, is coupled to the microscope lens. The droplet size parameters most frequently used are the Volume Median Diameter (VMD) and the Number Median Diameter (NMD).
At the VMD, a representative droplet sample is divided into two parts of equal volume, such that one part contains droplets smaller than the VMD and the other contains droplets larger than the VMD. The same process is used to obtain the NMD, which divides the sample into two equal parts with respect to the number of droplets. The ratio between VMD and NMD allows the evaluation of droplet uniformity. After that, graphs of the accumulated probability of droplet volume and number are plotted on log-scale paper (accumulated probability versus the median diameter of each size class). The graph provides the median diameter at the x-axis point corresponding to the 50% value on the y-axis. This whole process is very slow and subject to operator error. Therefore, to reduce the difficulty involved in measuring droplets, a numerical model was developed, implemented in an easy and accessible computational language, which yields approximate VMD and NMD values with good precision. The inputs to this model are the frequencies of the droplet sizes collected on the water-sensitive paper, observed through the Porton N-G12 graticule fitted to the microscope. With these data, the accumulated distributions of droplet volume and number are evaluated. The graphs obtained by plotting these distributions allow the VMD and NMD to be obtained by linear interpolation, since in the middle of the distributions the curves are approximately linear. These values are essential for evaluating droplet uniformity and for estimating the volume deposited on the observed paper from the density (droplets/cm2). This methodology for estimating droplet volume was developed under Project 11.0.94.224 of CNPMA/EMBRAPA.
Observed data from aerial herbicide spraying samples, collected by the Project in the county of Pelotas/RS, were used to compare values obtained by the manual graphic method with those obtained by the model. The model reproduced, with great precision, the VMD and NMD values for each sampled collector, allowing the quantity of deposited product and, consequently, the quantity lost by drift to be estimated. The graphs of VMD and NMD variability showed that the number of droplets reaching the collectors had a small dispersion, while the deposited volume showed a large range of variation, probably because of the strong action of air turbulence on droplet distribution, emphasizing the need for a deeper study to verify this influence on drift.
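The interpolation procedure described, reading the 50% point off the cumulative number and volume distributions, can be sketched as follows; the class diameters and counts in the usage example are invented for illustration, not data from the project:

```python
import numpy as np

def vmd_nmd(diameters, counts):
    """Volume and Number Median Diameters by linear interpolation of the
    cumulative distributions (the manual log-paper procedure automated).

    diameters: class mid-point diameters in ascending order
    counts:    number of droplets observed in each class
    """
    diameters = np.asarray(diameters, float)
    counts = np.asarray(counts, float)
    volumes = counts * diameters ** 3      # relative volume per class (~ d^3)

    def median(weights):
        cum = np.cumsum(weights) / weights.sum()
        return np.interp(0.5, cum, diameters)  # read off the 50% point

    return median(volumes), median(counts)     # (VMD, NMD)
```

Because volume weights larger droplets by the cube of the diameter, the VMD always lies at or above the NMD; their ratio is the uniformity measure the abstract mentions.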