956 results for Physics, Applied
Abstract:
The FENE-CR model is investigated through a numerical algorithm to simulate the time-dependent moving free surface flow produced by a jet impinging on a flat surface. The objective is to demonstrate that by increasing the extensibility parameter L, the numerical solutions converge to the solutions obtained with the Oldroyd-B model. The governing equations are solved by an established free surface flow solver based on the finite difference and marker-and-cell methods. Numerical predictions of the extensional viscosity obtained with several values of the parameter L are presented. The results show that if the extensibility parameter L is sufficiently large then the extensional viscosities obtained with the FENE-CR model approximate the corresponding Oldroyd-B viscosity. Moreover, the flow from a jet impinging on a flat surface is simulated with various values of the extensibility parameter L and the fluid flow visualizations display convergence to the Oldroyd-B jet flow results.
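The convergence claim can be made concrete through the finite-extensibility factor of the FENE-CR model, which multiplies the polymer stress and tends to 1 (the Oldroyd-B limit) as L grows. A minimal numeric sketch, with illustrative values and not the paper's flow solver:

```python
import numpy as np

def fene_cr_factor(trace_A: float, L: float) -> float:
    """FENE-CR finite-extensibility factor f = 1 / (1 - tr(A)/L^2).

    As L -> infinity, f -> 1 and the constitutive model reduces to
    Oldroyd-B, which carries no finite-extensibility correction.
    """
    return 1.0 / (1.0 - trace_A / L**2)

# Fixed conformation-tensor trace; increasing L drives f toward 1.
trace_A = 10.0
factors = {L: fene_cr_factor(trace_A, L) for L in (5.0, 10.0, 50.0, 100.0)}
```

With `trace_A = 10`, f drops from about 1.67 at L = 5 to within 0.1% of 1 at L = 100, mirroring the reported convergence of the FENE-CR extensional viscosity to the Oldroyd-B value.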
Abstract:
A robotic control design that accounts for the inherent nonlinearities of the robot-motor configuration is developed, including the interactions between the robot and the joint motor drive mechanism. The proposed control combines two strategies: a feedforward control to keep the system at the desired coordinate, and a feedback control to drive the system to that coordinate. The feedback control is obtained using the State-Dependent Riccati Equation (SDRE). Two cases are considered for link positioning. Case I: only the motor voltage is used for position control; Case II: both the motor voltage and the torque between the links are used. Simulation results, including parametric uncertainties in the control, show the feasibility of the proposed control for the considered system.
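The SDRE feedback step can be sketched for a single pendulum-like link: factor the nonlinear dynamics into a state-dependent linear form, solve the Riccati equation at the current state, and apply the resulting gain. The simplified model and its parameters below are assumptions for illustration, not the paper's robot-motor configuration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-link parameters (not from the paper).
g_over_l, damping, inertia = 9.81, 0.5, 1.0

def sdre_gain(x, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]])):
    """One SDRE step: factor the nonlinear dynamics as xdot = A(x) x + B u,
    solve the state-dependent Riccati equation at the current state, and
    return the feedback gain K(x), with control law u = -K(x) x."""
    theta = x[0]
    # sin(theta) = (sin(theta)/theta) * theta gives an exact SDC factorization;
    # np.sinc(theta/pi) = sin(theta)/theta, well-defined at theta = 0.
    s = np.sinc(theta / np.pi)
    A = np.array([[0.0, 1.0],
                  [-g_over_l * s, -damping]])
    B = np.array([[0.0], [1.0 / inertia]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

K = sdre_gain(np.array([0.8, 0.0]))  # gain recomputed at each state
```

Unlike LQR, the gain is state-dependent, so this solve is repeated along the trajectory; the per-step problem is nevertheless a standard continuous-time algebraic Riccati equation.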
Abstract:
We report here part of a research project developed by the Science Education Research Group, titled "Teachers' Pedagogical Practices and Formative Processes in Science and Mathematics Education", whose main goal is the development of coordinated research that can generate a set of subsidies for reflection on the processes of teacher training in Science and Mathematics Education. One of the objectives was to develop continuing education activities with Physics teachers, using the History and Philosophy of Science as the guiding thread of the discussions and the focus of teaching experiences carried out by them in the classroom. Based on data collected through a survey among local Science, Physics, Chemistry, Biology and Mathematics teachers in Bauru, a city in São Paulo State, we developed a continuing education proposal titled "The History and Philosophy of Science in the Physics Teachers' Pedagogical Practice", comprising 40 hours of lessons. We followed the performance of five teachers who participated in the activities during the first semester of 2008 and were teaching Physics at high school level. They designed proposals for short courses, taking into consideration aspects of the History and Philosophy of Science and students' alternative conceptions. The short courses were applied in real classroom situations and accompanied by reflection meetings. This is a qualitative study, and the treatment of the data collected was based on content analysis, following Bardin [1].
Abstract:
This work proposes the development and study of a novel technique for the generation of fractal descriptors used in texture analysis. The descriptors are obtained from a multiscale transform applied to the Fourier technique of fractal dimension calculation. The power spectrum of the Fourier transform of the image is plotted against frequency on a log-log scale, and a multiscale transform is applied to this curve. The resulting values are taken as the fractal descriptors of the image. The proposal is validated by using the descriptors to classify a dataset of texture images whose true classes are known in advance. The classification precision is compared with that of other fractal descriptors known in the literature. The results confirm the efficiency of the proposed method. (C) 2012 Elsevier B.V. All rights reserved.
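The underlying Fourier step — fitting the log-log decay of the radially averaged power spectrum — can be sketched as follows. The synthetic test surface, the helper name, and the convention D = (8 - beta)/2 for images viewed as surfaces are illustrative assumptions, not the paper's multiscale descriptors:

```python
import numpy as np

def fourier_power_slope(img):
    """Slope of log power spectrum vs log frequency (radially averaged),
    for a square image. For a fractal surface P(f) ~ f^(-beta); one common
    convention then takes the fractal dimension as D = (8 - beta) / 2."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(y - n // 2, x - n // 2).astype(int)
    # Radial average of the power spectrum over rings of integer radius.
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, n // 2)                 # skip the DC component
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:n // 2]), 1)
    return slope

# Synthesize a surface with known spectral exponent beta via random phases.
rng = np.random.default_rng(0)
n, beta = 256, 3.0
f = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
f[0, 0] = 1.0                                    # avoid division by zero at DC
spectrum = f ** (-beta / 2.0) * np.exp(2j * np.pi * rng.random((n, n)))
surface = np.real(np.fft.ifft2(spectrum))
slope = fourier_power_slope(surface)             # should be close to -beta
```

Recovering a slope near -3 from the synthetic surface confirms the fitting step; the paper's contribution is the multiscale transform applied on top of this log-log curve.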
Abstract:
This is a research paper in which we discuss "active learning" in the light of Cultural-Historical Activity Theory (CHAT), a powerful framework for analyzing human activity, including teaching and learning processes and the relations between education and wider human dimensions such as politics, development, and emancipation. This framework has its origin in Vygotsky's work in psychology, grounded in a Marxist perspective, but is nowadays an interdisciplinary field encompassing, for example, History, Anthropology, Psychology, and Education.
Abstract:
Doppler broadening spectroscopy of the large Ge crystal of an HPGe detector was performed using positrons from pair production of 6.13 MeV γ-rays from the 19F(p,αγ)16O reaction. Two HPGe detectors facing opposite sides of the Ge crystal acting as target provided both coincidence and singles spectra. Changes in the shape of the annihilation peak were observed when the high voltage applied to the target detector was switched on or off, amounting to somewhat less than 20% when the areas of equivalent energy intervals in the corresponding normalized spectra are compared.
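The quoted comparison — areas of equivalent energy intervals in normalized spectra — can be sketched with toy Gaussian peaks; the window placement and peak widths below are illustrative, not the measured HPGe data:

```python
import numpy as np

def window_area_change(spec_on, spec_off, lo, hi):
    """Compare two annihilation-peak spectra over the same energy interval
    [lo, hi) after normalizing each spectrum to unit total area; returns
    the relative change of the windowed area (HV on vs HV off)."""
    a = spec_on / spec_on.sum()
    b = spec_off / spec_off.sum()
    return (a[lo:hi].sum() - b[lo:hi].sum()) / b[lo:hi].sum()

# Toy peaks: a narrower peak concentrates more area in a central window.
ch = np.arange(512)
peak = lambda width: np.exp(-0.5 * ((ch - 256) / width) ** 2)
change = window_area_change(peak(10.0), peak(12.0), 246, 266)
```

Normalizing before comparing areas makes the statistic sensitive to peak shape rather than to overall count rate, which is the point of the quoted <20% figure.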
Abstract:
There are few studies using cluster models for nuclei in the intermediate and heavy mass regions. The alpha-cluster model is based on the interaction between an alpha particle and a nucleus chosen as a core. This model can be applied to nuclear systems where alpha-cluster stability is expected, and its application to light and intermediate-mass nuclei has given consistent descriptions of experimental data.
Abstract:
A sample-scanning confocal optical microscope (SCOM) was designed and constructed in order to perform local measurements of fluorescence, light scattering and Raman scattering. This instrument allows time-resolved fluorescence, Raman scattering and light scattering to be measured from the same diffraction-limited spot. Fluorescence from single molecules and light scattering from metallic nanoparticles can be studied. First, the electric field distribution in the focus of the SCOM was modelled. This enables the design of illumination modes for different purposes, such as the determination of the three-dimensional orientation of single chromophores. Second, a method for the calculation of the de-excitation rates of a chromophore was presented. This permits different detection schemes and experimental geometries to be compared in order to optimize the collection of fluorescence photons. Both methods were combined to calculate the SCOM fluorescence signal of a chromophore in a general layered system. The fluorescence excitation and emission of single molecules through a thin gold film was investigated experimentally and modelled. It was demonstrated that, due to the mediation of surface plasmons, single-molecule fluorescence near a thin gold film can be excited and detected with an epi-illumination scheme through the film. Single-molecule fluorescence as close as 15 nm to the gold film was studied in this manner. The fluorescence dynamics (fluorescence blinking and excited-state lifetime) of single molecules was studied in the presence and in the absence of a nearby gold film in order to investigate the influence of the metal on the electronic transition rates. The trace-histogram and autocorrelation methods for the analysis of single-molecule fluorescence blinking were presented and compared via the analysis of Monte-Carlo simulated data. The nearby gold influences the total decay rate in agreement with theory.
The presence of the gold had no influence on the intersystem crossing (ISC) rate from the excited state to the triplet, but increased the transition rate from the triplet to the singlet ground state by a factor of 2. The photoluminescence blinking of Zn0.42Cd0.58Se quantum dots (QDs) on glass and ITO substrates was investigated experimentally as a function of the excitation power (P) and modelled via Monte-Carlo simulations. At low P, the probability of a given on- or off-time was observed to follow a negative power law with exponent near 1.6. As P increased, the on-time fraction decreased on both substrates, whereas the off-times did not change. A weak residual memory effect was observed between consecutive on-times and between consecutive off-times, but not between an on-time and the adjacent off-time. All of this suggests the presence of two independent mechanisms governing the lifetimes of the on- and off-states. The simulated data showed Poisson-distributed off- and on-intensities, demonstrating that the observed non-Poissonian on-intensity distribution of the QDs is not a product of the underlying power-law probability, and that the blinking of QDs occurs between a non-emitting off-state and a distribution of emitting on-states with different intensities. All the experimentally observed photo-induced effects could be accounted for by introducing a characteristic lifetime tPI of the on-state in the simulations. The QDs on glass presented a tPI proportional to P^-1, suggesting a one-photon process. Light scattering images and spectra of colloidal and C-shaped gold nanoparticles were acquired. The minimum size of a metallic scatterer detectable with the SCOM lies around 20 nm.
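The power-law blinking statistics described above can be reproduced with a Monte-Carlo sketch: draw independent on- and off-times from p(t) proportional to t^(-alpha) with alpha = 1.6, then recover the exponent by maximum likelihood. This is a generic sketch of the simulation technique, not the thesis code:

```python
import numpy as np

def sample_power_law(n, alpha, t_min, rng):
    """Draw n waiting times from p(t) ~ t^(-alpha) for t >= t_min,
    by inverse-transform sampling of the CDF F(t) = 1 - (t/t_min)^(1-alpha)."""
    u = rng.random(n)
    return t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

rng = np.random.default_rng(1)
alpha, t_min, n = 1.6, 1.0, 50_000

# Independent on- and off-time distributions, as the data suggest.
on_times = sample_power_law(n, alpha, t_min, rng)
off_times = sample_power_law(n, alpha, t_min, rng)

# Maximum-likelihood (Hill) estimate of the exponent from the on-times.
alpha_hat = 1.0 + n / np.log(on_times / t_min).sum()
```

The MLE is far more robust than fitting a slope to a log-binned histogram of heavy-tailed waiting times, which is why it is a natural check on simulated blinking traces.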
Abstract:
A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated as well as heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first order approximations of vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and can deal well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radii distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel whereas the detailed modelling of PSC events is beyond the scope of coarse global scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. 
a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
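A first-order upwind step is the simplest scheme of the kind described for solid-particle sedimentation; the sketch below is generic (not the tailor-made PSC algorithm) and settles a layer of particles down a vertical column while conserving mass exactly:

```python
import numpy as np

def sediment_upwind(q, v, dz, dt):
    """One explicit first-order upwind step for downward sedimentation.
    q: mixing ratio per layer (index 0 = top), v: settling velocity (m/s),
    dz: layer thickness (m), dt: time step (s).
    Returns the updated profile and the amount lost through the bottom."""
    courant = v * dt / dz
    assert np.all(courant <= 1.0), "CFL condition violated"
    out = courant * q              # fraction of each layer settling downward
    q_new = q - out
    q_new[1:] += out[:-1]          # what leaves layer i enters layer i+1
    return q_new, out[-1]

# A narrow particle layer settling through a 20-layer column.
q0 = np.zeros(20)
q0[5] = 1.0
v = np.full(20, 0.01)              # uniform 1 cm/s settling velocity
q, lost = q0.copy(), 0.0
for _ in range(100):
    q, dq = sediment_upwind(q, v, dz=100.0, dt=600.0)
    lost += dq
```

First-order upwind is mass-conserving and handles spatially varying (divergent or convergent) velocity fields, but it is numerically diffusive, which is precisely the deficiency that motivates the profile-based scheme developed in the thesis.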
Abstract:
The subject of this thesis is the area of Applied Mathematics known as Inverse Problems. Inverse problems are those in which a set of measured data is analysed in order to extract as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, under rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements.
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung and noninvasive monitoring of heart function and blood flow.
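A linearisation-based EIT reconstruction ultimately reduces to a regularized linear solve of the sensitivity (Jacobian) system relating conductivity perturbations to boundary voltage changes. The sketch below uses a random stand-in matrix and Tikhonov regularization to illustrate that step; it is not the thesis' algorithm:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Regularized linear least squares: minimize ||Ax - b||^2 + lam*||x||^2,
    solved via the normal equations (A^T A + lam*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 40))        # stand-in sensitivity matrix
x_true = np.zeros(40)
x_true[10:15] = 1.0                      # localized conductivity perturbation
b = A @ x_true + 0.01 * rng.standard_normal(80)   # noisy boundary data
x_rec = tikhonov_solve(A, b, lam=1e-2)
```

Regularization is essential in the real problem because the true EIT sensitivity matrix is severely ill-conditioned: small measurement noise would otherwise be amplified into large conductivity artifacts.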
Abstract:
In this thesis I present aspects of QCD calculations that are closely tied to the numerical evaluation of NLO QCD amplitudes, in particular the corresponding one-loop contributions, and to the efficient computation of the associated collider observables. Two topics emerged in the course of this work that constitute its main part. A large part focuses on the group-theoretic behaviour of one-loop amplitudes in QCD, in order to find a way to treat the associated colour degrees of freedom correctly and efficiently. To this end, a new approach is introduced which can be used to express colour-ordered one-loop partial amplitudes with multiple quark-antiquark pairs through shuffle sums over cyclically ordered primitive one-loop amplitudes. A second large part focuses on the local subtraction of divergent pole terms in primitive one-loop amplitudes. In particular, a method was developed to locally renormalize the primitive one-loop amplitudes, using local UV counterterms and efficient recursive routines. Together with suitable local soft and collinear subtraction terms, the subtraction method is thereby extended to the virtual part of the calculation of NLO observables, which enables the fully numerical evaluation of the one-loop integrals in the virtual contributions of NLO observables. The method was finally applied successfully to the calculation of NLO jet rates in electron-positron annihilation in the leading-colour limit.
Abstract:
A new physics-based technique for correcting inhomogeneities present in sub-daily temperature records is proposed. The approach accounts for changes in the sensor-shield characteristics that affect the energy balance dependent on ambient weather conditions (radiation, wind). An empirical model is formulated that reflects the main atmospheric processes and can be used in the correction step of a homogenization procedure. The model accounts for short- and long-wave radiation fluxes (including a snow cover component for albedo calculation) of a measurement system, such as a radiation shield. One part of the flux is further modulated by ventilation. The model requires only cloud cover and wind speed for each day, but detailed site-specific information is necessary. The final model has three free parameters, one of which is a constant offset. The three parameters can be determined, e.g., using the mean offsets for three observation times. The model is developed using the example of the change from the Wild screen to the Stevenson screen in the temperature record of Basel, Switzerland, in 1966. It is evaluated based on parallel measurements of both systems during a sub-period at this location, which were discovered during the writing of this paper. The model can be used in the correction step of homogenization to distribute a known mean step-size to every single measurement, thus providing a reasonable alternative correction procedure for high-resolution historical climate series. It also constitutes an error model, which may be applied, e.g., in data assimilation approaches.
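Determining three free parameters from three mean offsets amounts to solving a 3x3 linear system, provided the model is linear in its parameters. The predictor values and the linear-in-parameters form below are hypothetical illustrations; the abstract does not specify the model's exact terms:

```python
import numpy as np

# Hypothetical predictors averaged over each observation time:
# column 0 = constant offset, column 1 = a radiative flux term,
# column 2 = a ventilation-modulated flux term (numbers are illustrative,
# not from the Basel series).
X = np.array([[1.0, 0.5, 0.2],   # morning observation
              [1.0, 2.0, 1.0],   # noon observation
              [1.0, 1.0, 0.8]])  # evening observation
mean_offsets = np.array([0.2, 1.1, 0.5])  # known mean step sizes (K)

# Three mean offsets determine the three free parameters exactly.
c0, c1, c2 = np.linalg.solve(X, mean_offsets)
```

Once calibrated, the same parameter vector is applied with the daily cloud-cover and wind predictors to distribute the known mean step size over every single measurement, which is the correction idea described above.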
Abstract:
Many methodologies dealing with prediction or simulation of soft tissue deformations on medical image data require preprocessing of the data in order to produce a different shape representation that complies with standard methodologies, such as mass-spring networks or the finite element method (FEM). On the other hand, methodologies working directly in the image space normally do not take into account the mechanical behavior of tissues and tend to lack the physical foundations driving soft tissue deformations. This chapter presents a method to simulate soft tissue deformations based on coupled concepts from image analysis and mechanics theory. The proposed methodology is based on a robust stochastic approach that takes into account material properties retrieved directly from the image, concepts from continuum mechanics, and FEM. The optimization framework is solved within a hierarchical Markov random field (HMRF), which is implemented on the graphics processing unit (GPU).
Abstract:
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of the different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underscoring the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment than the absolute risk. We found that this ratio could also vary substantially depending on the approach to risk estimation. In some cases the risk ratio between two techniques spanned values both smaller and larger than one, which translates into inconsistent conclusions about which technique potentially carries the higher risk. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated, even though the magnitude of this reduction varied substantially with the approach used.
Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy) as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
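Why the between-technique risk ratio depends on the model choice can be seen with two standard dose-response shapes applied to the same pair of organ doses: a linear model and a linear-exponential model, whose cell-kill term bends the curve down at high dose, give different ratios. The coefficients and doses below are hypothetical, not the study's values:

```python
import numpy as np

def linear_risk(dose, beta=0.01):
    """Linear no-threshold model: risk proportional to organ dose."""
    return beta * dose

def linear_exponential_risk(dose, beta=0.01, alpha=0.1):
    """Linear-exponential model: an exponential cell-kill term suppresses
    risk at the high doses found near the treatment fields."""
    return beta * dose * np.exp(-alpha * dose)

# Hypothetical mean organ doses (Gy) for two techniques.
dose_a, dose_b = 8.0, 4.0
ratio_linear = linear_risk(dose_a) / linear_risk(dose_b)
ratio_linexp = linear_exponential_risk(dose_a) / linear_exponential_risk(dose_b)
```

Here the linear model gives a risk ratio of exactly 2, while the linear-exponential model gives a smaller ratio for the same doses, illustrating how the same dose data can support different conclusions about the relative risk of two techniques.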