915 results for high-order harmonic generation
Abstract:
The ability to represent time is an essential component of cognition, but its neural basis is unknown. Although timing has been studied extensively both behaviorally and electrophysiologically, a general theoretical framework describing the elementary neural mechanisms the brain uses to learn temporal representations is lacking. It is commonly believed that the underlying cellular mechanisms reside in high-order cortical regions, but recent studies show sustained neural activity in primary sensory cortices that can represent the timing of expected reward. Here, we show that local cortical networks can learn temporal representations through a simple framework predicated on reward-dependent expression of synaptic plasticity. We assert that temporal representations are stored in the lateral synaptic connections between neurons and demonstrate that reward-modulated plasticity is sufficient to learn these representations. We implement our model numerically to explain reward-time learning in the primary visual cortex (V1), demonstrate experimental support, and suggest additional experimentally verifiable predictions.
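The learning rule described here, synaptic change gated by a reward signal, is commonly sketched as a three-factor ("eligibility trace") update. The toy code below illustrates that idea only; the parameters (eta, tau) and the single-synapse setup are illustrative assumptions, not the paper's actual V1 model:

```python
import numpy as np

def reward_modulated_trial(pre, post, reward, w=0.0, eta=0.1, tau=20.0):
    """One trial of reward-modulated Hebbian plasticity.

    pre, post : 0/1 spike indicators per time step
    reward    : reward signal per time step
    A pre-post coincidence charges an eligibility trace e, which decays
    with time constant tau; the weight w changes only when reward
    arrives while the trace is still non-zero (three-factor rule)."""
    e = 0.0
    for t in range(len(pre)):
        e = e * (1.0 - 1.0 / tau) + pre[t] * post[t]
        w += eta * reward[t] * e
    return w

T = 100
pre = np.zeros(T); post = np.zeros(T)
pre[5] = post[5] = 1.0          # one pre-post coincidence at t = 5

early, late = np.zeros(T), np.zeros(T)
early[15] = 1.0                 # reward 10 steps after the coincidence
late[95] = 1.0                  # reward 90 steps after the coincidence
w_early = reward_modulated_trial(pre, post, early)
w_late = reward_modulated_trial(pre, post, late)
```

The decaying trace makes the rule time-sensitive: reward that arrives shortly after the coincidence potentiates the synapse much more than reward that arrives late.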
Abstract:
This article centers on the computational performance of continuous and discontinuous Galerkin time-stepping schemes for general first-order initial value problems in R^n with continuous nonlinearities. We briefly review a recent existence result for discrete solutions from [6] and provide a numerical comparison of the two time discretization methods.
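For readers unfamiliar with these schemes: the lowest-order discontinuous Galerkin method dG(0) coincides, for ODEs, with the implicit Euler method. A minimal sketch of that special case follows; the fixed-point solver and step counts are illustrative choices, not taken from [6]:

```python
def dg0_step(f, t, y, h, iters=50):
    """One step of the lowest-order discontinuous Galerkin scheme dG(0),
    which for ODEs coincides with implicit Euler:
        y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1}).
    The implicit equation is solved by fixed-point iteration, adequate
    for small h and Lipschitz-continuous f."""
    z = y
    for _ in range(iters):
        z = y + h * f(t + h, z)
    return z

def integrate(f, y0, t0, t1, n):
    """March dG(0) over [t0, t1] with n uniform steps."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y = dg0_step(f, t, y, h)
        t += h
    return y

# Example: y' = -y, y(0) = 1; the exact value at t = 1 is e^-1.
y_end = integrate(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
```

For a Newton-based solver instead of fixed-point iteration, only `dg0_step` would change; higher-order members of the family (dG(q), cG(q)) replace the single unknown per step with a small local system.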
Abstract:
Trypanosomes show an intriguing organization of their mitochondrial DNA into a catenated network, the kinetoplast DNA (kDNA). While more than 30 proteins involved in kDNA replication have been described, only a few components of the kDNA segregation machinery are currently known. Electron microscopy studies identified a high-order structure, the tripartite attachment complex (TAC), linking the basal body of the flagellum via the mitochondrial membranes to the kDNA. Here we describe TAC102, a novel core component of the TAC that is essential for proper kDNA segregation during cell division. Loss of TAC102 leads to mitochondrial genome missegregation but has no impact on proper organelle biogenesis and segregation. The protein is present throughout the cell cycle and is assembled into the newly developing TAC only after the pro-basal body has matured, indicating a hierarchy in the assembly process. Furthermore, we provide evidence that the TAC is replicated de novo rather than by a semi-conservative mechanism. Lastly, we demonstrate that TAC102 lacks an N-terminal mitochondrial targeting sequence and requires sequences in the C-terminal part of the protein for its proper localization.
Abstract:
Detailed paleomagnetic investigations are reported for 283 specimens sampled from three closely spaced Ocean Drilling Program Leg 135 cores from the Lau Basin. These specimens cover three rather similar records of the reversed Cobb Mountain short polarity event, which has an age of about 1.12 m.y. On the basis of very detailed subsampling every 0.6 cm, we found that the transition times for the Cobb Mountain geomagnetic polarity event, as seen in the three Lau Basin sediment records, appear to have been as short as 0.6-1.0 k.y., while the normal-polarity event itself lasted only about 17 ± 4 k.y. Both the older (R to N) and the younger (N to R) transitions show virtual geomagnetic pole paths running roughly along the Americas, but shifted some 30° ± 10° to the east. These paths conflict with Cobb Mountain transition paths recorded in sediments from the Labrador Sea and the North Atlantic, but they are in fair accordance with sediment records from the Celebes and Sulu seas when corrected for differences in site longitude, suggesting that the transitional fields are dominated by nonaxial, high-order spherical harmonics.
Abstract:
A composite strontium isotopic seawater curve was constructed for the Miocene between 24 and 6 Ma by combining 87Sr/86Sr measurements of planktonic foraminifera from Deep Sea Drilling Project sites 289 and 588. Site 289, with its virtually continuous sedimentary record and high sedimentation rates (26 m/m.y.), was used for constructing the Oligocene to mid-Miocene part of the record, which included the calibration of 63 biostratigraphic datums to the Sr seawater curve using the timescale of Cande and Kent (1992 doi:10.1029/92JB01202). Across the Oligocene/Miocene boundary, a brief plateau occurred in the Sr seawater curve (87Sr/86Sr values averaged 0.70824) which is coincident with a carbon isotopic maximum (CM-O/M) from 24.3 to 22.6 Ma. During the early Miocene, the strontium isotopic curve was marked by a steep rise in 87Sr/86Sr that included a break in slope near 19 Ma. The rate of growth was about 60 ppm/m.y. between 22.5 and 19.0 Ma and increased to over 80 ppm/m.y. between 19.0 and 16 Ma. Beginning at ~16 Ma (between carbon isotopic maxima CM3 and CM4 of Woodruff and Savin (1991 doi:10.1029/91PA02561)), the rate of 87Sr/86Sr growth slowed and 87Sr/86Sr values were near constant from 15 to 13 Ma. After 13 Ma, growth in 87Sr/86Sr resumed and continued until ~9 Ma, when the rate of 87Sr/86Sr growth decreased to zero once again. The entire Miocene seawater curve can be described by a high-order function, and the first derivative (d87Sr/86Sr/dt) of this function reveals two periods of increased slope. The greatest rate of 87Sr/86Sr change occurred during the early Miocene between ~20 and 16 Ma, and a smaller, but distinct, period of increased slope also occurred during the late Miocene between ~12 and 9 Ma. 
These periods of steepened slope coincide with major phases of uplift and denudation of the Himalayan-Tibetan Plateau region, supporting previous interpretations that the primary control on seawater 87Sr/86Sr during the Miocene was related to the collision of India and Asia. The rapid increase in 87Sr/86Sr values during the early Miocene from 20 to 16 Ma implies high rates of chemical weathering and dissolved riverine fluxes to the oceans. In the absence of another source of CO2, these high rates of chemical weathering should have quickly resulted in a drawdown of atmospheric CO2 and climatic cooling through a reverse greenhouse effect. The paleoclimatic record, however, indicates a warming trend during the early Miocene, culminating in a climatic optimum between 17 and 14.5 Ma. We suggest that the high rates of chemical erosion and warm temperatures during the climatic optimum were caused by an increase in the contribution of volcanic CO2 from the eruption of the Columbia River Flood Basalts (CRFB) between 17 and 15 Ma. The decrease in the rate of CRFB eruptions at 15 Ma and the removal of atmospheric carbon dioxide by increased organic carbon burial in Monterey deposits eventually led to cooling and increased glaciation between ~14.5 and 13 Ma. The CRFB hypothesis helps to explain the significant time lag between the onset of increased rates of organic carbon burial in the Monterey at 17.5 Ma (as marked by increased δ13C values) and the climatic cooling and glaciation of the middle Miocene (as marked by the increase in δ18O values), which did not begin until ~14.5 Ma.
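The curve-analysis step described above, fitting the seawater record with a high-order function and reading intervals of steepened slope off its first derivative, can be sketched as follows. The data below are a synthetic stand-in with an assumed linear trend plus a small wiggle, not the Site 289/588 measurements:

```python
import numpy as np

# Synthetic stand-in for the Miocene 87Sr/86Sr record: ages in Ma
# (24 Ma = oldest), ratios dimensionless.
age = np.linspace(24.0, 6.0, 50)
ratio = 0.70824 + 6e-5 * (24.0 - age) + 1e-6 * np.sin(age)

# Fit a high-order polynomial and differentiate it analytically.
# Ages are centered before fitting to keep the Vandermonde matrix
# well conditioned.
xc = age - age.mean()
coeffs = np.polyfit(xc, ratio, deg=8)
fit = np.polyval(coeffs, xc)
slope = np.polyval(np.polyder(coeffs), xc)  # d(ratio)/d(age)
```

Note the sign convention: `slope` is the derivative with respect to age, so a ratio that grows toward younger ages gives negative values; the rate of increase per m.y. of elapsed time is `-slope`, and its local maxima mark the intervals of steepened slope the abstract discusses.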
Abstract:
A particle accelerator is any device that uses electromagnetic fields to impart energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them up to the level required for its purpose. The applications of particle accelerators are countless, from the common TV CRT, through medical X-ray devices, to the large ion colliders used to probe the smallest details of matter. Other engineering applications include ion-implantation devices for producing better semiconductors and materials with remarkable properties. The development of irradiation-resistant materials for future nuclear fusion plants also benefits from particle accelerators. A particle accelerator requires many devices for its correct operation; the most important are the particle sources, the guiding, focusing and correcting magnets, the radiofrequency accelerating cavities, the fast deflection devices, the beam diagnostic mechanisms and the particle detectors. Historically, most fast particle deflection devices have been built using copper coils and ferrite cores, which can produce a relatively fast magnetic deflection but need large voltages and currents to counteract the high coil inductance, limiting the response to the microseconds range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanoseconds range). This can only be achieved by an electromagnetic deflection device based on a transmission line. Such a device (strip-line kicker) produces a transverse displacement of the particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses.
The diversion of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This Thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases, which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. The following concepts, among others, are analyzed: scattering parameters, resonant high-order modes, wakefields, etc. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of the electrodes are also detailed, with some interesting contributions on these concepts. The electromagnetic and vacuum tests, required to ensure that the manufactured devices fulfil the specifications, are then analyzed. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs). Solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with good flat tops.
Abstract:
The reconstruction of the cell lineage tree of early zebrafish embryogenesis requires in-vivo microscopy imaging and image processing strategies. Second (SHG) and third (THG) harmonic generation microscopy in unstained zebrafish embryos allows cell divisions and cell membranes to be detected from the 1-cell to the 1K-cell stage. In this article, we present an ad-hoc image processing pipeline for cell tracking and cell membrane segmentation enabling the reconstruction of the early zebrafish cell lineage tree up to the 1K-cell stage. This methodology has been used to obtain digital zebrafish embryos, generating a quantitative description of early zebrafish embryogenesis with minute temporal accuracy and μm spatial resolution.
Abstract:
III-nitride nanorods have attracted much scientific interest during the last decade because of their unique optical and electrical properties [1,2]. Their high crystal quality and the absence of extended defects make them ideal candidates for the fabrication of high-efficiency opto-electronic devices such as nano-photodetectors, light-emitting diodes, and solar cells [1-3]. Nitride nanorods are commonly grown in the self-assembled mode by plasma-assisted molecular beam epitaxy (MBE) [4]. However, self-assembled nanorods are characterized by inhomogeneous heights and diameters, which make device processing very difficult and negatively affect the electronic transport properties of the final device. For this reason, the selective area growth (SAG) mode has been proposed, in which the nanorods grow preferentially, and with a high degree of order, on pre-defined sites of a pre-patterned substrate.
Abstract:
The stability analysis of open-cavity flows is a problem of great interest in the aeronautical industry. This type of flow can appear, for example, in landing gear or auxiliary power unit configurations. Open-cavity flow is very sensitive to any change in the configuration, either physical (incoming boundary layer, Reynolds or Mach numbers) or geometrical (length-to-depth and length-to-width ratios). In this work, we have focused on the effect of the geometry and of the Reynolds number on the stability properties of a three-dimensional spanwise-periodic cavity flow in the incompressible limit. To that end, BiGlobal analysis is used to investigate the instabilities in this configuration. The basic flow is obtained by numerical integration of the Navier-Stokes equations with laminar boundary layers imposed upstream. The 3D perturbation, assumed to be periodic in the spanwise direction, is obtained as the solution of a global eigenvalue problem. A parametric study has been performed, analyzing the stability of the flow under variation of the Reynolds number, the L/D ratio of the cavity, and the spanwise wavenumber β. For consistency, multidomain high-order numerical schemes have been used in all the computations, for both the basic flow and the eigenvalue problems. The results allow the neutral curves to be defined in the range L/D = 1 to L/D = 3. A scaling relating the frequency of the eigenmodes to the length-to-depth ratio is provided, based on the analysis results.
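At its core, a BiGlobal analysis of this kind reduces to a large, sparse eigenvalue problem for the perturbation in the cross-sectional plane. The sketch below substitutes a toy operator, a 2-D Laplacian on a square, for the linearized Navier-Stokes operator; it illustrates only the sparse-eigenvalue machinery, not the cavity-flow problem itself:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_2d(n, h):
    """Sparse 5-point Laplacian on an n x n grid with Dirichlet walls,
    standing in here for a discretized two-dimensional linear
    perturbation operator."""
    d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    eye = sp.identity(n)
    return sp.kron(eye, d) + sp.kron(d, eye)

n = 40
A = laplacian_2d(n, 1.0 / (n + 1))

# Least-stable eigenvalues of dq/dt = A q (largest real part).  For the
# Laplacian all eigenvalues are negative, so this toy "flow" is stable;
# the leading one approaches -2*pi^2 on the unit square.
vals = spla.eigs(A, k=3, which="LR", return_eigenvectors=False)
```

In an actual BiGlobal solver the matrix would couple velocity and pressure blocks and generally be non-symmetric, but the workflow (assemble sparse operator, ask an Arnoldi solver for a few leading eigenvalues) is the same.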
Abstract:
The present contribution discusses the development of a PSE-3D instability analysis algorithm in which a matrix forming and storing approach is followed. As an alternative to the spectral methods typically used in stability calculations, new stable high-order finite-difference-based numerical schemes for spatial discretization 1 are employed. Attention is paid to the issue of efficiency, which is critical for the success of the overall algorithm. To this end, use is made of a parallelizable sparse matrix linear algebra package which takes advantage of the sparsity offered by the finite-difference scheme and, as expected, is shown to perform substantially more efficiently than spectral collocation methods. The building blocks of the algorithm have been implemented and extensively validated, focusing on classic PSE analysis of instability in the flat-plate boundary layer, temporal and spatial BiGlobal EVP solutions (the latter necessary for the initialization of the PSE-3D), as well as standard PSE in cylindrical coordinates using the nonparallel Batchelor vortex basic flow model, so that comparisons between PSE and PSE-3D are possible; excellent agreement is shown in all the aforementioned comparisons. Finally, the linear PSE-3D instability analysis is applied to a fully three-dimensional flow composed of a counter-rotating pair of nonparallel Batchelor vortices.
Abstract:
The development of a global instability analysis code is presented that couples a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analyses 1, 2, with finite-volume-based spatial discretization, as used in standard aerodynamics codes. The key advantage of the time-stepping method over matrix-formulation approaches is that the former avoids the computer-storage issues associated with the latter methodology. To date, both approaches have been used successfully to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba 3 has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common use in industry. The two codes have been validated against solutions of the BiGlobal EVP, and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability analysis has been successfully developed, based on finite-volume spatial discretization and the time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
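The storage advantage of the time-stepping approach comes from never forming the Jacobian matrix: the eigensolver only needs its action on a vector, which can be approximated by finite-differencing the nonlinear residual. A minimal matrix-free sketch follows; the residual F, the decay rates c and all tolerances are illustrative assumptions, not the code described above:

```python
import numpy as np
import scipy.sparse.linalg as spla

c = np.linspace(1.0, 3.0, 200)     # toy per-component decay rates

def F(q):
    """Toy nonlinear 'flow' residual for dq/dt = F(q)."""
    return -c * q + 0.05 * q**2

qbar = np.zeros(200)               # steady base state: F(qbar) = 0

def jacobian_action(v, eps=1e-7):
    """Action of the Jacobian dF/dq at qbar on a vector v, via a
    first-order finite difference of the residual; the Jacobian
    itself is never stored."""
    return (F(qbar + eps * v) - F(qbar)) / eps

A = spla.LinearOperator((200, 200), matvec=jacobian_action, dtype=float)

# Leading (largest real part) eigenvalues via Arnoldi iteration; the
# tolerance is loosened because the matvec is only finite-difference
# accurate.
vals = spla.eigs(A, k=3, which="LR", tol=1e-6,
                 return_eigenvectors=False)
```

Here the spectrum of dF/dq at qbar is {-1, ..., -3}, so the least-stable eigenvalue is -1; in a real code `F` would be one call to the nonlinear flow solver's residual (or a short time integration), which is exactly why complex geometries become tractable without storing any matrix.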
Abstract:
Global linear instability theory is concerned with the temporal or spatial development of small-amplitude perturbations superposed upon laminar steady or time-periodic three-dimensional flows, which are inhomogeneous in two (and periodic in one) or all three spatial directions.1 The theory addresses flows developing in complex geometries, in which the parallel or weakly nonparallel basic flow approximation invoked by classic linear stability theory does not hold. As such, global linear theory is called upon to fill the gap in research into stability and transition in flows over or through complex geometries. Historically, global linear instability has been (and still is) concerned with the solution of multi-dimensional eigenvalue problems; the maturing of non-modal linear instability ideas in simple parallel flows during the last decade of the last century2–4 has given rise to the investigation of transient growth scenarios in an ever-increasing variety of complex flows. After a brief exposition of the theory, connections are sought with established approaches for structure identification in flows, such as proper orthogonal decomposition and topology theory in the laminar regime, and the open areas for future research, mainly concerning turbulent and three-dimensional flows, are highlighted. Recent results obtained in our group are reported for both the time-stepping and the matrix-forming approaches to global linear theory. In the first context, progress has been made in implementing a Jacobian-free Newton-Krylov method into a standard finite-volume aerodynamic code, such that global linear instability results may now be obtained in compressible flows of aeronautical interest.
In the second context, a new stable very-high-order finite-difference method is implemented for the spatial discretization of the operators describing the spatial BiGlobal EVP, the PSE-3D and the TriGlobal EVP; combined with sparse matrix treatment, all these problems may now be solved on standard desktop computers.
Abstract:
Multichannel audio has advanced by leaps and bounds in recent years, not only in playback techniques but also in recording techniques. This project therefore involves both: a microphone array, the EigenMike32 from MH Acoustics, and a playback system based on Wave Field Synthesis technology, installed by Iosono at Jade Hochschule Oldenburg. To link these two points of the audio chain, two different kinds of encoding are proposed: the reproduction of the EigenMike32's horizontal capture, and third-order Ambisonics (High Order Ambisonics, HOA), an encoding technique based on Spherical Harmonics through which the acoustic field itself is simulated instead of the individual sound sources. Both were developed in the Matlab environment, supported by the Isophonics script collection called Spatial Audio Matlab Toolbox. To evaluate them, a series of listening tests was carried out in which they were compared with recordings made simultaneously with a Dummy Head, assumed to be the method closest to the way we hear. These tests also included recordings and encodings made with a Schoeps Double MS (DMS), which are explained in the project "3D audio rendering through Ambisonics techniques: from multi-microphone recordings (DMS Schoeps) to a WFS system, through Matlab". The tests consisted of a set of four audio excerpts repeated four times for each recorded situation (a conversation, a class, a street, and a college canteen). The results were unexpected: the third-order HOA encoding scored below the "Good" rating, possibly because material intended for a three-dimensional array was fed into a two-dimensional one. On the other hand, the encoding that consisted of extracting the horizontal-plane microphones maintained the "Good" rating in all situations. It is concluded that HOA should continue to be tested with deeper knowledge of Spherical Harmonics, while the other, much simpler encoder can be used for situations without much spatial complexity.
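The Spherical Harmonic encoding at the heart of HOA is easiest to see at first order, i.e. B-format (W/X/Y/Z). The project used third order, but the sketch below shows the same idea in its standard, compact first-order special case, with the common convention of scaling W by 1/sqrt(2):

```python
import numpy as np

def encode_foa(signal, azimuth, elevation):
    """Encode a mono signal to first-order Ambisonics (B-format).
    azimuth and elevation are in radians; each output channel is the
    signal weighted by a first-order spherical harmonic of the source
    direction."""
    w = signal * (1.0 / np.sqrt(2.0))                 # omnidirectional
    x = signal * np.cos(azimuth) * np.cos(elevation)  # front-back
    y = signal * np.sin(azimuth) * np.cos(elevation)  # left-right
    z = signal * np.sin(elevation)                    # up-down
    return np.stack([w, x, y, z])

s = np.ones(8)                      # trivial mono test signal
b = encode_foa(s, np.pi / 2, 0.0)   # source at 90° azimuth, 0° elevation
```

A source at 90° azimuth puts all of its directional energy into the Y channel and none into X or Z, which is exactly the behaviour a decoder relies on; third-order HOA simply extends the channel set with higher-order spherical harmonics of the same direction.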
Abstract:
This paper presents the Expectation Maximization (EM) algorithm applied to operational modal analysis of structures. The EM algorithm is a general-purpose method for maximum likelihood estimation (MLE) that in this work is used to estimate state-space models. As is well known, the MLE enjoys some optimal properties from a statistical point of view, which make it very attractive in practice. However, the EM algorithm has two main drawbacks: its slow convergence and the dependence of the solution on the initial values used. This paper proposes two different strategies for choosing initial values for the EM algorithm when used for operational modal analysis: starting from the parameters estimated by the Stochastic Subspace Identification (SSI) method, and starting from random points. The effectiveness of the proposed identification method has been evaluated through numerical simulation and measured vibration data in the context of a benchmark problem. Modal parameters (natural frequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using SSI and the EM algorithm. On the whole, the results show that applying the EM algorithm starting from the solution given by SSI is very useful for identifying the vibration modes of a structure, discarding the spurious modes that appear in high-order models and discovering other hidden modes. Similar results are obtained using random starting values, although this strategy allows the solutions from several starting points to be analyzed, which overcomes the dependence on the initial values used.
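The initialization issue described above is generic to EM. The sketch below demonstrates it on a two-component 1-D Gaussian mixture rather than the paper's state-space model (whose E-step would require a full Kalman smoother); the mixture keeps both EM steps visible in a few lines, with `mu_init` playing the role of the SSI-based versus random start:

```python
import numpy as np

def em_gmm_1d(x, mu_init, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture.
    Returns means, standard deviations and mixing weights; the result
    can depend on the starting means mu_init."""
    mu = np.array(mu_init, dtype=float)
    sigma = np.array([x.std(), x.std()])
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = weights * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = resp.sum(axis=0)
        weights = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return mu, sigma, weights

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu, sigma, weights = em_gmm_1d(x, mu_init=[-1.0, 1.0])
```

With well-separated starting means the iteration recovers the true components; starting both means at the same point, or in a single cluster, can leave EM at a poorer stationary point, which is exactly why a good initializer (SSI in the paper) or multiple random restarts matters.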