940 results for SINGLE-SOURCE PRECURSORS
Abstract:
We develop a process-based model for the dispersion of a passive scalar in the turbulent flow around the buildings of a city centre. The street network model is based on dividing the airspace of the streets and intersections into boxes, within which the turbulence renders the air well mixed. Mean flow advection through the network of street and intersection boxes then mediates further lateral dispersion. At the same time turbulent mixing in the vertical detrains scalar from the streets and intersections into the turbulent boundary layer above the buildings. When the geometry is regular, the street network model has an analytical solution that describes the variation in concentration in a near-field downwind of a single source, where the majority of scalar lies below roof level. The power of the analytical solution is that it demonstrates how the concentration is determined by only three parameters. The plume direction parameter describes the branching of scalar at the street intersections and hence determines the direction of the plume centreline, which may be very different from the above-roof wind direction. The transmission parameter determines the distance travelled before the majority of scalar is detrained into the atmospheric boundary layer above roof level and conventional atmospheric turbulence takes over as the dominant mixing process. Finally, a normalised source strength multiplies this pattern of concentration. This analytical solution converges to a Gaussian plume after a large number of intersections have been traversed, providing theoretical justification for previous studies that have developed empirical fits to Gaussian plume models. The analytical solution is shown to compare well with very high-resolution simulations and with wind tunnel experiments, although re-entrainment of scalar previously detrained into the boundary layer above roofs, which is not accounted for in the analytical solution, is shown to become an important process further downwind from the source.
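The abstract does not reproduce the analytical solution, but its structure can be illustrated: on a regular grid the branching of scalar at intersections is binomial (which is what converges to a Gaussian cross-wind profile), while detrainment acts as an exponential loss per segment travelled. A minimal sketch under these assumptions, where alpha, tau, and q0 are illustrative stand-ins for the plume direction parameter, the transmission parameter, and the normalised source strength (the paper's exact parameterisation may differ):

```python
import math

def street_network_flux(i, j, q0=1.0, alpha=0.7, tau=0.8):
    """Scalar flux reaching intersection (i, j), i streets east and j
    streets north of a source at (0, 0), in a regular street grid.

    Hypothetical parameters (not from the paper):
      alpha -- plume direction parameter: fraction of flux branching
               east at each intersection (1 - alpha branches north)
      tau   -- transmission parameter: fraction of flux NOT detrained
               into the boundary layer per street segment travelled
    """
    n = i + j                       # intersections traversed so far
    branching = math.comb(n, i) * alpha**i * (1.0 - alpha)**j
    return q0 * branching * tau**n  # binomial branching x exponential loss

# The binomial branching tends to a Gaussian cross-wind profile for
# large n (de Moivre-Laplace), consistent with the far-field behaviour
# described above.
if __name__ == "__main__":
    for j in range(6):
        print(j, round(street_network_flux(5, j), 4))
```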
Abstract:
In spite of numerous, substantial advances in equine reproduction, many stages of embryonic and fetal morphological development are poorly understood, with no apparent single source of comprehensive information. Hence, the objective of the present study was to provide a complete macroscopic and microscopic description of the equine embryo/fetus at various gestational ages. Thirty-four embryos/fetuses were aged based on their crown-rump length (CRL) and submitted to macroscopic description, biometry, light and scanning microscopy, as well as the alizarin technique. All observed developmental changes were chronologically ordered and described. As an example of the main observed features, an accentuated cervical curvature was observed upon macroscopic examination in all specimens. In the nervous system, the encephalic fourth ventricle and the encephalic vesicles (forebrain, midbrain, and hindbrain) were visualized from Day 19 (ovulation = Day 0). The thoracic and pelvic limbs were also visualized; their extremities gave rise to the hoof from Day 27 of development. Development of other structures, such as the pigmented optic vesicle, liver, tail, cardiac area, lungs, and dermal vascularization, started on Days 25, 25, 19, 19, 34, and 35, respectively. Light and scanning microscopy facilitated detailed examination of several organs, e.g., heart, kidneys, lungs, and intestine, whereas the alizarin technique enabled visualization of ossification. The observations in this study contribute to the knowledge of equine embryogenesis and include detailed data from many specimens collected over a long developmental interval. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
The interdisciplinary nature of Astronomy makes it a field with great potential for exploring various scientific concepts. However, studies show a widespread lack of understanding of fundamental subjects, including the models that explain phenomena present in everyday life, such as the phases of the Moon. Particularly in the context of distance education, the learning of such models can be supported by information and communication technologies. Among other possibilities, we highlight the importance of digital materials that motivate students and expand the available forms of representation of phenomena and models. It is also important, however, that these materials prompt students to make their own conceptions explicit and to interact with the most central aspects of the astronomical model for the phenomenon. In this dissertation we present a hypermedia module aimed at learning about the phases of the Moon, developed from an investigation of the difficulties with the subject in an undergraduate Astronomy course for teacher training at UFRN. The tests from three semesters of the course were analyzed, also taking into account the alternative conceptions reported in the astronomy education literature. The product makes use of short texts, questions, images, and interactive animations. It emphasizes questions about the illumination of the Moon and other bodies and their relationship to the Sun; the perception, from different angles, of objects illuminated by a single source; the cause of the alternation between day and night; the identification of the Moon's orbit around the Earth and the occurrence of the phases as a result of the position from which it is observed; and the perception of the time scales involved in the phenomenon. The module incorporated considerations obtained from interviews with students at the two centres where in-person support is provided to students of the course, and with subjects from different pedagogical contexts. The final form of the material was used in a real learning situation, as supplementary material for the final test of the course. The material was evaluated by 7 students and 4 tutors, out of 56 users in the period in question. Most students considered that the so-called "Lunar Module" made a difference in their learning; the animations were considered its most prominent aspect, the images were described as stimulating and enlightening, and the text as informative and enjoyable. The analysis of these students' learning, based on their responses to questions in the final evaluation, suggested gains in key aspects of understanding the phases, but also indicated more persistent difficulties. This work leads us to conclude that it is important to seek contributions to the training of science teachers through new technologies, treating the computer as a complementary resource. The interviews that preceded the use of the module, and the way each student approached it, whether or not bringing questions and/or prior conflicts, made a great difference to the effective contribution of the material, indicating that it should be used with the mediation of a teacher or tutor, or via strategies that foster interactions between students. It is desirable that these interactions be associated with recovering the subjects' memories of previous observations and models, as well as with encouraging new observations of the phenomena.
Abstract:
Includes bibliography
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This work presents a new technique for seismic stacking, applied to the problem of imaging fixed reflectors in a two-dimensional, smoothly heterogeneous, isotropic medium from reflection data. This new technique, called homeomorphic imaging, is based on the geometric ray approximation and on topological properties of the reflectors. It therefore uses the concepts of wavefront, angle of incidence, wavefront radius of curvature, caustic, and ray trajectory, in such a way that the resulting image maintains relations of homeomorphism with the object being imaged. In this new imaging technique, seismic stacking is performed by applying a local time correction, Δt, to the traveltime, t, of the ray that leaves the seismic source located at xo, is reflected at a reflection point, Co, and is recorded as a primary reflection by a geophone located at xg, relative to the reference time to in the seismogram, corresponding to the traveltime of a central ray. The formula used in this time correction has as parameters the radius of curvature Ro and the emergence angle βo of the wavefront at the instant it reaches the observation surface, and the velocity vo, considered constant in the vicinity of the seismic line. Assuming a geometric approximation of the wavefront by a circle, different homeomorphic imaging methods can be established, depending on the processing configuration: 1) the Common Source (Receiver) Element method (EF(R)C), which uses a configuration in which a set of seismograms is related to a single source (receiver) and considers a real (reflection) wavefront; 2) the Common Reflection Element method (ERC), which uses a configuration in which a set of seismograms is related to a single reflection point and considers a wavefront hypothetically originating at that point; 3) the Common Evolute Element method (EEC), which uses a configuration in which each seismogram is related to a source-geophone pair coincidently positioned on the seismic line and considers a wavefront hypothetically originating at the centre of curvature of the reflector. Each of these methods yields a stacked seismic section, u(xo, to), and two further sections, called the radiusgram, Ro(xo, to), and the angulogram, βo(xo, to), which contain, respectively, the radii of curvature and the emergence angles of the considered wavefront at the instant it reaches the observation surface. In the case of the common reflection element (ERC) method, the stacked seismic section corresponds to the zero-offset section. It can be shown that the seismic signal suffers no stretching effects as a consequence of the time correction, nor does it present reflection-point smearing as a consequence of reflector dip, in contrast to what happens with stacking techniques based on the NMO correction. Moreover, since it does not require a macro velocity model, the homeomorphic imaging technique can, in general, also be applied to heterogeneous models without losing rigour in its formulation. Application examples of the Common Source Element (EFC) method (KEYDAR, 1993) and the Common Reflection Element (ERC) method (STEENTOFT, 1993) are also presented, both with synthetic data.
In the first case (EFC), where stacking is performed with reference to an arbitrary central ray, a high level of accuracy can be observed in the resulting image; in addition, an interpretation is given for the radiusgram and angulogram sections, so as to characterize geometric aspects of the geophysical model in question. In the second case (ERC), the method is applied to the Marmousi data set, generated by the finite-difference method, and the result is compared with that obtained by conventional methods (NMO/DMO) applied to the same data. It is observed that the ERC method better detects the continuity of reflectors, whereas the conventional methods better characterize the occurrence of diffractions. In turn, the radiusgram and angulogram sections of the ERC method show low resolving power in regions of the model with a high degree of structural complexity. Finally, a unified formulation is presented that encompasses the different homeomorphic imaging methods mentioned above, as well as more general situations in which the wavefront is approximated not by a circle but by an arbitrary quadratic curve.
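The correction formula itself is not spelled out in the abstract; a minimal sketch, assuming the standard moveout of a circular wavefront with radius Ro, emergence angle βo, and near-surface velocity vo (sign conventions may differ from the thesis):

```python
import math

def local_time_correction(dx, r0, beta0, v0):
    """Local traveltime correction dt for a trace recorded at offset dx
    from the central-ray emergence point xo.

    Illustration only, assuming the circular-wavefront approximation
    described above:
      r0    -- wavefront radius of curvature Ro at the surface
      beta0 -- emergence angle betao (radians, positive towards +x)
      v0    -- near-surface velocity vo (constant along the line)
    """
    # Distance from the geophone to the centre of the circular wavefront,
    # minus the wavefront radius, converted to time with v0.
    r = math.sqrt(r0**2 + 2.0 * r0 * dx * math.sin(beta0) + dx**2)
    return (r - r0) / v0

# Example: Ro = 2 km, emergence angle 10 degrees, vo = 2 km/s, dx = 0.5 km
print(local_time_correction(dx=0.5, r0=2.0, beta0=math.radians(10.0), v0=2.0))
```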
Abstract:
At the beginning of the 21st century, several crises are intertwined, and the environmental crisis is the most global of them all. For a complex and global crisis, with implications of a social, economic, technological, and other nature, solutions will probably not come from a single source, but rather from the sum of the efforts of society as a whole (including all its instances: government, business, the population, etc.). The objective of this review was to discuss the need to train "environmental" professionals who, through their activities, are in some way involved with the quality of the environment. We believe that, ultimately, it is the quality of the environment that will ensure quality of life in a fairer society. Thinking about the training of teachers as environmental educators in undergraduate (bachelor's) programs means taking as a reference the idea of wholeness (environmental, political, educational, social, scientific, etc.) within the diversity that these areas possess. Even though there are many difficulties involved in getting this topic included in the university structure, based on a complex approach we believe this is one of the best paths toward environmental training.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The energy harvesting research field has grown considerably in the last decade due to increasing interest in energy-autonomous sensing systems, which require smart and efficient interfaces for extracting power from energy sources, together with power management (PM) circuits. This thesis investigates the design trade-offs involved in minimizing the intrinsic power of PM circuits, in order to allow operation with very weak energy sources. For validation purposes, three different integrated power converter and PM circuits for energy harvesting applications are presented. They have been designed for nano-power operation, and the single-source converters can operate with input power lower than 1 μW. The first IC is a buck-boost converter for piezoelectric transducers (PZ) implementing Synchronous Electrical Charge Extraction (SECE), a non-linear energy extraction technique. Moreover, the Residual Charge Inversion technique is exploited for extracting energy from PZ with weak and irregular excitations (i.e. lower voltage), and the implemented PM policy, named Two-Way Energy Storage, considerably reduces the start-up time of the converter, improving the overall conversion efficiency. The second proposed IC is a general-purpose buck-boost converter for low-voltage DC energy sources up to 2.5 V. An ultra-low-power MPPT circuit has been designed to track variations of the source power. Furthermore, a capacitive boost circuit has been included, allowing converter start-up from a source voltage VDC0 = 223 mV. A nano-power programmable linear regulator is also included in order to provide a stable voltage to the load. The third IC implements a heterogeneous multi-source buck-boost converter. It provides up to 9 independent input channels, of which 5 are specific to PZ (with SECE) and 4 to DC energy sources with MPPT. The inductor is shared among the channels, and an arbiter, designed with asynchronous logic to reduce energy consumption, avoids simultaneous access to the buck-boost core, with a dynamic schedule based on source priority.
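The arbiter is described only at the architectural level; a minimal behavioural sketch of priority-based, mutually exclusive access to the shared buck-boost core is given below. The channel names, priority values, and FIFO tie-break are illustrative assumptions, and the actual design is asynchronous logic, not software:

```python
import heapq

class Arbiter:
    """Grants a shared buck-boost core (one inductor) to at most one
    harvesting channel at a time, highest priority first."""

    def __init__(self):
        self._pending = []   # min-heap of (priority, seq, channel)
        self._seq = 0        # arrival counter: FIFO tie-break

    def request(self, channel, priority):
        """A channel's energy detector fires and requests the core."""
        heapq.heappush(self._pending, (priority, self._seq, channel))
        self._seq += 1

    def grant_next(self):
        """Return the next channel to be served, or None if idle."""
        if not self._pending:
            return None
        _, _, channel = heapq.heappop(self._pending)
        return channel

arb = Arbiter()
arb.request("PZ1", priority=0)   # e.g. piezoelectric channels first
arb.request("DC3", priority=1)
arb.request("PZ4", priority=0)
while (ch := arb.grant_next()) is not None:
    print("buck-boost core granted to", ch)
```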
Abstract:
Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are more and more in demand, in order to find good solutions to complex strategic problems quickly. Resource optimization is nowadays fundamental ground for building the foundations of successful projects. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. However, from the application point of view, it seems that the rate of theoretical development cannot keep pace with that enjoyed by modern hardware technologies, especially with reference to the processor industry. In this work we propose new parallel algorithms, designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (like Dynamic Programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and allowing us to address instances of dimensions not possible before. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem, and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (Guillotine Cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claim by presenting computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to factors of 40×.
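As an illustration of why such problems expose parallelism, consider the Single Source Shortest Path Problem in Bellman-Ford form: within each relaxation round, every edge can be relaxed independently. A minimal sequential sketch whose inner loop is the natural parallel kernel (the thesis's actual algorithms are not reproduced here):

```python
import math

def sssp_bellman_ford(n, edges, source):
    """Single Source Shortest Path by Bellman-Ford relaxation rounds.

    edges  -- list of (u, v, w) triples; n -- number of vertices.
    Each round relaxes every edge independently, which is the structure
    that maps naturally onto parallel hardware (one thread per edge).
    """
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(n - 1):                 # at most n-1 rounds
        updated = False
        # In a GPU/multicore version, this loop is the parallel kernel.
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:                    # early exit at a fixed point
            break
    return dist

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0)]
print(sssp_bellman_ford(4, edges, source=0))   # [0.0, 3.0, 1.0, 4.0]
```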
Abstract:
In the year 2013, the detection of a diffuse astrophysical neutrino flux with the IceCube neutrino telescope, constructed at the geographic South Pole, was announced by the IceCube collaboration. However, the origin of these neutrinos is still unknown, as no sources have been identified to this day. Promising neutrino source candidates are blazars, a subclass of active galactic nuclei with radio jets pointing towards the Earth. In this thesis, the neutrino flux from blazars is tested with a maximum-likelihood stacking approach, analyzing the combined emission from uniform groups of objects. The stacking enhances the sensitivity with respect to the hitherto unsuccessful single-source searches. The analysis utilizes four years of IceCube data, including one year from the completed detector. As all results presented in this work are compatible with background, upper limits on the neutrino flux are given. It is shown that, under certain conditions, some hadronic blazar models can be challenged or even rejected. Moreover, the sensitivity of this analysis, and of any future IceCube point-source search, was enhanced by the development of a new angular reconstruction method. It is based on a detailed simulation of photon propagation in the Antarctic ice. The median resolution for muon tracks induced by high-energy neutrinos is improved for all neutrino energies above IceCube's lower threshold of 0.1 TeV. By reprocessing the detector data and simulation from the year 2010, it is shown that the new method improves IceCube's discovery potential by 20% to 30%, depending on the declination.
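A minimal sketch of a stacked likelihood of the kind described, assuming the standard unbinned signal-plus-background form commonly used in neutrino point-source searches (the function, the stacking weights, and the toy numbers are illustrative; this is not IceCube's analysis code):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def stacked_ts(S, B, weights):
    """Test statistic of a stacked point-source likelihood:
        L(ns) = prod_i [ ns/N * S_i + (1 - ns/N) * B_i ]
    where the stacked signal PDF S_i is a weighted sum over sources.

    S -- (n_events, n_sources) signal PDF values per event and source
    B -- background PDF values per event
    weights -- per-source stacking weights (e.g. expected fluxes)
    """
    w = np.asarray(weights, dtype=float)
    S_stack = (np.asarray(S, dtype=float) * w).sum(axis=1) / w.sum()
    B = np.asarray(B, dtype=float)
    N = len(B)

    def neg_logl(ns):
        f = ns / N
        return -np.log(f * S_stack + (1.0 - f) * B).sum()

    res = minimize_scalar(neg_logl, bounds=(0.0, N), method="bounded")
    return 2.0 * (neg_logl(0.0) - res.fun)   # 2 x log-likelihood ratio

# Toy example: 3 events, 2 stacked sources, made-up PDF values.
S = [[0.9, 0.1], [0.5, 0.6], [0.05, 0.02]]
B = [0.1, 0.1, 0.1]
print(stacked_ts(S, B, weights=[1.0, 1.0]))
```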
Abstract:
The identification of associations between interleukin-28B (IL-28B) variants and the spontaneous clearance of hepatitis C virus (HCV) raises the issues of causality and the net contribution of host genetics to the trait. To estimate more precisely the net effect of IL-28B genetic variation on HCV clearance, we optimized genotyping and compared the host contributions in multiple- and single-source cohorts to control for viral and demographic effects. The analysis included individuals with chronic or spontaneously cleared HCV infections from a multiple-source cohort (n = 389) and a single-source cohort (n = 71). We performed detailed genotyping in the coding region of IL-28B and searched for copy number variations to identify the genetic variant or haplotype carrying the strongest association with viral clearance. This analysis was used to compare the effects of IL-28B variation in the two cohorts. Haplotypes characterized by carriage of the major alleles at IL-28B single-nucleotide polymorphisms (SNPs) were highly overrepresented in individuals with spontaneous clearance versus those with chronic HCV infections (66.1% versus 38.6%, P = 6 × 10⁻⁹). The odds ratios for clearance were 2.1 [95% confidence interval (CI) = 1.6-3.0] and 3.9 (95% CI = 1.5-10.2) in the multiple- and single-source cohorts, respectively. Protective haplotypes were in perfect linkage (r² = 1.0) with a nonsynonymous coding variant (rs8103142). Copy number variants were not detected. Conclusion: We identified IL-28B haplotypes highly predictive of spontaneous HCV clearance. The high linkage disequilibrium between IL-28B SNPs indicates that association studies need to be complemented by functional experiments to identify single causal variants. The point estimate for the genetic effect was higher in the single-source cohort, which was used to effectively control for viral diversity, sex, and coinfections and, therefore, offered a precise estimate of the net host genetic contribution.
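As a worked illustration of how the reported carriage proportions translate into an odds ratio (the pooled value computed here differs from the per-cohort estimates above, which are estimated within each cohort separately):

```python
def odds_ratio(p_case, p_control):
    """Odds ratio implied by two carriage proportions."""
    return (p_case / (1 - p_case)) / (p_control / (1 - p_control))

# Pooled haplotype carriage reported above: 66.1% in spontaneous
# clearance vs 38.6% in chronic infection.
print(round(odds_ratio(0.661, 0.386), 2))   # ~3.1, lying between the
# per-cohort estimates of 2.1 and 3.9 reported in the abstract.
```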
Abstract:
Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option that does not require communication between microgrid components. Eliminating the communication system as a single source of potential failure is especially important in the remote, islanded microgrids considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy, which implements a droop surface of higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage and the wind speed at the current time. An approach for optimizing this droop control surface to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high-dimension droop control method and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high-dimension droop control surface is implemented in a simple dc microgrid containing two sources and one load. Various cases of changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is then demonstrated with a wind resource in a full dc microgrid example containing an energy storage device as well as multiple sources and loads. Finally, the optimal high-dimension droop control method is applied with a solar resource, using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
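A minimal sketch of the two-variable droop lookup described above, assuming a tabulated surface with interpolation between grid points (the grid values are illustrative placeholders, not the optimized surface from the dissertation):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Droop surface: the wind source's power reference is looked up from
# BOTH the dc bus voltage and the current wind speed, instead of from
# voltage alone as in traditional droop control.
v_bus = np.array([370.0, 380.0, 390.0, 400.0])   # dc bus voltage [V]
wind = np.array([4.0, 8.0, 12.0])                # wind speed [m/s]

# p_ref[i, j]: power injected at v_bus[i], wind[j] [kW]; lower bus
# voltage (heavier load) and higher wind both call for more power.
p_ref = np.array([[1.0, 4.0, 9.0],
                  [0.8, 3.2, 7.5],
                  [0.5, 2.0, 5.0],
                  [0.0, 0.5, 2.0]])

droop_surface = RegularGridInterpolator((v_bus, wind), p_ref)

# Droop output for an operating point between grid points.
print(droop_surface([[385.0, 10.0]]))
```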
Abstract:
The general model. The aim of this chapter is to give a structured overview of the different possibilities available for displaying and analyzing brain electric scalp potentials. First, a general formal model of time-varying distributed EEG potentials is introduced. Based on this model, the most common analysis strategies used in EEG research are introduced and discussed as specific cases of it. Both the general model and the particular methods are also expressed in mathematical terms; it is, however, not necessary to understand these terms to understand the chapter. The general model proposed here is based on the statement in Chapter 3 that the electric field produced by active neurons in the brain propagates through brain tissue without time delay. Contrary to other imaging methods that are based on hemodynamic or metabolic processes, EEG scalp potentials are thus "real-time" measurements, neither delayed nor a priori frequency-filtered. If only a single dipolar source in the brain were active, the temporal dynamics of that source's activity would be exactly reproduced by the temporal dynamics observed in the scalp potentials it produces. This is illustrated in Figure 5.1, where the expected EEG signal of a single source with spindle-like temporal dynamics has been computed. The dynamics of the scalp potentials exactly reproduce the dynamics of the source. The amplitude of the measured potentials depends on the location and orientation of the active source, its strength, and the electrode position.
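A minimal numerical sketch of this instantaneous linear model (the gains and the spindle waveform are illustrative, not taken from the chapter): each electrode signal is the source waveform scaled by a fixed gain, so the temporal dynamics are reproduced exactly while only the amplitude and polarity differ per electrode.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500)                        # time [s]
# Spindle-like source activity: a 12 Hz oscillation under an envelope.
spindle = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 0.5) / 0.15) ** 2)

# Fixed gains from the single source to 3 electrodes, determined by the
# source's location, orientation, and strength and the electrode position.
gains = np.array([1.0, 0.4, -0.7])
scalp = np.outer(gains, spindle)      # shape (n_electrodes, n_samples)

# Each channel is a scaled copy of the source: correlation is +/- 1.
for ch in scalp:
    print(np.corrcoef(ch, spindle)[0, 1])
```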
Abstract:
Today's life is permeated by the Internet; hardly anything is not "networked" or "electronically available". The world is changing: the "information society" consumes information in real time on mobile devices, independent of time and place. This also applies, in part, to the education and training sector: "e-learning" denotes the electronic support of learning. Learning takes place "online"; content is available digitally. Moreover, the life situation of the so-called "digital natives", the young individuals of the information society, has changed. They demand education systems that are flexible in time and place, expect comprehensive digital availability of information from educational institutions, and no longer want to subordinate their lives to teaching schedules and timetables: learning should fit their own lives and accompany them throughout life. New "learning scenarios", e.g. for single-parent part-time students or working professionals, should become possible without difficulty. This is the aim of a paradigm developed by the European Union and summarized under the term "lifelong learning". Both e-learning and lifelong learning are gaining importance, as (German) industry raises the issue of a "shortage of skilled workers". The demand for specially trained engineers in the STEM (German: MINT) field is to be met as quickly as possible and the "staffing gap" closed, in order to continue to secure growth and prosperity. Dedicated e-learning solutions for the STEM field have the potential to offer fast and flexible education and training for engineers, conveying specialist knowledge geared to the concrete requirements of industry. At present, however, such systems do not yet exist. What are the STEM field's requirements for such an e-learning application? Besides new technologies, it must above all satisfy the functional requirements of the STEM field, the various target groups (e.g. educational institutions, learners or "digital natives", industry), and the paradigm of lifelong learning, i.e. it must bring together technical and conceptual requirements. Against this background, the present thesis puts forward a framework for creating such a solution. The practical results are based on the blended e-learning system of the project "Technische Informatik Online" (VHN-TIO).