925 results for "Two particle distributions"
Abstract:
The flux of organic particles below the mixed layer is one major pathway of carbon from the surface into the deep ocean. The magnitude of this export flux depends on two major processes: remineralization rates and sinking velocities. Here, we present an efficient method to measure sinking velocities of particles in the size range from approximately 3 to 400 µm by means of video microscopy (FlowCAM®). The method allows rapid measurement and automated analysis of mixed samples and was tested with polystyrene beads, different phytoplankton species, and sediment trap material. Sinking velocities of polystyrene beads were close to theoretical values calculated from Stokes' Law. Sinking velocities of the investigated phytoplankton species were in reasonable agreement with published literature values, and sinking velocities of material collected in sediment traps increased with particle size. Temperature had a strong effect on sinking velocities owing to its influence on seawater viscosity and density: an increase of 9 °C led to a measured increase in sinking velocities of 40 %. Scaled by this temperature effect, the average sea-surface temperature increase of 2 °C projected for the end of this century could increase sinking velocities by about 6 %, which might feed back on carbon export into the deep ocean.
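The temperature sensitivity described above follows directly from Stokes' Law, where sinking velocity is inversely proportional to the dynamic viscosity of seawater. A minimal sketch, with illustrative (not measured) viscosity values chosen so that the warm case is 40 % faster, as in the abstract:

```python
# Sketch of Stokes' Law settling for a small sphere; the viscosity values
# below are illustrative assumptions, not data from the study.

def stokes_velocity(d_m, rho_p, rho_f, mu):
    """Terminal sinking velocity (m/s) of a sphere of diameter d_m (m):
    v = g * d^2 * (rho_p - rho_f) / (18 * mu)   (Stokes' Law, Re << 1)
    """
    g = 9.81  # gravitational acceleration, m/s^2
    return g * d_m**2 * (rho_p - rho_f) / (18.0 * mu)

# 100 µm polystyrene bead (~1050 kg/m^3) in seawater (~1025 kg/m^3)
d = 100e-6
mu_cold = 1.4e-3   # Pa*s, assumed viscosity of cold seawater
mu_warm = 1.0e-3   # Pa*s, assumed viscosity ~9 degC warmer

v_cold = stokes_velocity(d, 1050.0, 1025.0, mu_cold)
v_warm = stokes_velocity(d, 1050.0, 1025.0, mu_warm)
print(f"cold: {v_cold*86400:.1f} m/day, warm: {v_warm*86400:.1f} m/day")
print(f"relative increase: {(v_warm - v_cold)/v_cold:.0%}")
```

Because density differences between particle and fluid change little with temperature, the viscosity ratio dominates; a 1.4/1.0 viscosity ratio reproduces the reported 40 % speed-up.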
Abstract:
A Laser In-Situ Scattering Transmissometer (LISST) was used to collect vertical distribution data of particles from 2.5 to 500 µm in size. The LISST uses a multi-ring detector to measure the light scattered by particles from a laser diode. Particles are classified into 32 log-spaced size bins, and the concentration in each bin is calculated in microliters per liter (µl/l). The instrument is rated to a depth of 300 m and also records temperature and pressure. The sample interval was set to record every second. The LISST was attached to the LOPC frame to conduct casts and allow for particle-size comparisons between the two instruments. The LOPC is rated to a depth of 2000 m, so a short deployment to a depth of 300 m was first conducted with both instruments. The instruments were then returned to the deck and the LISST removed via a quick-release bracket so that deep LOPC casts could be continued at a station. Raw LISST size-spectrum data are presented as concentrations for each of the 32 size bins for every second of the cast.
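The exact bin edges of a LISST are instrument- and model-specific, but "32 log-spaced bins from 2.5 to 500 µm" means edges in a fixed geometric ratio. A sketch of how such edges can be generated, for illustration only:

```python
# Illustrative only: real LISST bin edges come from the instrument's
# calibration files, not from this formula.

def log_spaced_edges(d_min, d_max, n_bins):
    """Return n_bins + 1 logarithmically spaced bin edges (same units as input)."""
    ratio = (d_max / d_min) ** (1.0 / n_bins)   # constant geometric step
    return [d_min * ratio**i for i in range(n_bins + 1)]

edges = log_spaced_edges(2.5, 500.0, 32)
print(f"first bin: {edges[0]:.2f}-{edges[1]:.2f} µm, "
      f"last bin: {edges[-2]:.1f}-{edges[-1]:.1f} µm")
```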
Abstract:
A particle accelerator is any device that uses electromagnetic fields to impart energy to charged particles (typically electrons or ionized atoms), accelerating them up to the level required for its purpose. The applications of particle accelerators are countless, beginning with the common TV CRT, passing through medical X-ray devices, and ending with the large ion colliders used to probe the smallest details of matter. Other engineering applications include ion implantation devices used to obtain better semiconductors and materials with remarkable properties. Materials that must withstand irradiation in future nuclear fusion plants also benefit from particle accelerators. Many devices in a particle accelerator are required for its correct operation; the most important are the particle sources, the guiding, focusing and correcting magnets, the radiofrequency accelerating cavities, the fast deflection devices, the beam diagnostic mechanisms and the particle detectors. Most fast particle deflection devices have historically been built using copper coils and ferrite cores, which could produce a relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, giving a response in the microseconds range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanoseconds range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of the particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses.
The deflection of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. Among other concepts, the following are analyzed: scattering parameters, resonating higher-order modes, and wakefields. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of electrodes are also detailed, with some interesting contributions on these concepts. The electromagnetic and vacuum tests are then analyzed; these tests are required to ensure that the manufactured devices fulfil the specifications. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs). Solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with a good flat top.
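The integrated Lorentz force mentioned above admits a simple order-of-magnitude estimate: for an ultrarelativistic beam and a TEM pulse travelling opposite to the beam, the electric and magnetic contributions add, giving roughly theta ≈ 2·V·L / (g·(E_beam/e)) for pulse voltage V, electrode gap g and kicker length L. A sketch with assumed, illustrative numbers (not the thesis' actual specifications):

```python
# Order-of-magnitude strip-line kicker deflection. The formula assumes an
# ultrarelativistic beam and counter-propagating TEM pulses; all parameter
# values below are hypothetical, chosen only for illustration.

def kick_angle(voltage_V, gap_m, length_m, beam_energy_eV):
    """Approximate deflection angle (rad)."""
    e_field = voltage_V / gap_m          # transverse E field between electrodes, V/m
    # E and v x B forces add for a wave moving against the beam: factor 2
    return 2.0 * e_field * length_m / beam_energy_eV

theta = kick_angle(voltage_V=12.5e3, gap_m=0.04, length_m=1.7,
                   beam_energy_eV=200e6)   # assumed values
print(f"deflection: {theta*1e3:.2f} mrad")
```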
Abstract:
This paper outlines the problems found in the parallelization of SPH (Smoothed Particle Hydrodynamics) algorithms using Graphics Processing Units. Results of several parallel GPU implementations are shown in terms of speed-up and scalability compared with sequential CPU codes. The most problematic stage in GPU-SPH algorithms is the one responsible for locating neighboring particles and building the vectors where this information is stored, since these specific algorithms raise many difficulties for data-level parallelization. Because neighbor location using linked lists does not expose enough data-level parallelism, two new approaches have been proposed to minimize bank conflicts in the writing and subsequent reading of the neighbor lists. The first strategy proposes an efficient CPU-GPU coordination, using GPU algorithms for those stages that allow a straightforward parallelization and sequential CPU algorithms for those instructions that involve some kind of vector reduction. This coordination provides a relatively orderly reading of the neighbor lists in the interactions stage, achieving a speed-up factor of 47x in that stage; however, since the construction of the neighbor lists is quite expensive, the overall speed-up achieved is 41x. The second strategy seeks to maximize the use of the GPU in the neighbor location process by executing a specific vector sorting algorithm that allows some data-level parallelism. Although this strategy succeeded in improving the speed-up of the neighbor location stage, the global speed-up falls because of inefficient reading of the neighbor vectors in the interactions stage. Some changes to these strategies are proposed, aimed at maximizing the computational load of the GPU and using the GPU texture units, in order to reach the maximum speed-up for such codes. Different practical applications have been added to the mentioned GPU codes.
First, the classical dam-break problem is studied. Second, the wave impact of the sloshing fluid contained in LNG vessel tanks is simulated as a practical example of particle methods.
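The neighbor-location stage discussed above is usually implemented with a cell (linked-list) grid: particles are hashed into cells of side equal to the interaction radius, so each particle only tests candidates in the 9 surrounding cells (27 in 3D). A minimal 2D sketch in plain Python, for clarity rather than performance (the paper's GPU versions restructure exactly this step):

```python
# Cell-list neighbor search: the serial idea behind the stage the paper
# parallelizes. 2D, plain Python; names and layout are illustrative.
from collections import defaultdict

def build_cells(positions, h):
    """Hash each particle index into a grid cell of side h (interaction radius)."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x // h), int(y // h))].append(i)
    return cells

def neighbors(i, positions, cells, h):
    """All particles within distance h of particle i (excluding i)."""
    xi, yi = positions[i]
    cx, cy = int(xi // h), int(yi // h)
    found = []
    for dx in (-1, 0, 1):            # only the 9 surrounding cells
        for dy in (-1, 0, 1):
            for j in cells.get((cx + dx, cy + dy), ()):
                if j != i and (positions[j][0] - xi)**2 + (positions[j][1] - yi)**2 <= h * h:
                    found.append(j)
    return found

pts = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9)]
cells = build_cells(pts, h=0.1)
print(neighbors(0, pts, cells, h=0.1))
```

On a GPU, the per-cell Python lists become the contended writes and scattered reads that cause the bank conflicts described in the abstract.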
Abstract:
We study the stability properties of an air-flow wake forced by a dielectric barrier discharge (DBD) actuator, a type of electrohydrodynamic (EHD) actuator. These actuators add momentum to the flow around a cylinder in regions close to the wall and, in our case, are symmetrically disposed near the boundary-layer separation point. Since the forcing frequencies typical of DBD are much higher than the natural shedding frequency of the flow, we consider the forcing actuation as stationary. In the first part, the flow around a circular cylinder modified by EHD actuators is studied experimentally by means of particle image velocimetry (PIV). In the second part, the EHD actuators are implemented numerically as a boundary condition on the cylinder surface. Using this boundary condition, the computationally obtained base flow is compared with the experimental one in order to relate the control parameters of the two methodologies. After validating the agreement, we study the Hopf bifurcation that appears once the flow starts vortex shedding, through experimental and computational approaches. For the base flow derived from experimentally obtained snapshots, we monitor the evolution of the velocity amplitude oscillations. The stability of the computationally obtained base flow is analyzed by solving a global eigenvalue problem obtained from the linearized Navier–Stokes equations. Finally, the critical parameters obtained from both approaches are compared.
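Near a Hopf bifurcation, the velocity amplitude oscillations monitored above are conventionally described by a Stuart-Landau model, dA/dt = sigma·A - l·|A|²·A, whose saturated amplitude sqrt(sigma/l) grows from zero as the control parameter crosses its critical value. A sketch with assumed coefficients (not fitted to the experiment):

```python
# Stuart-Landau amplitude model for a supercritical Hopf bifurcation.
# sigma (linear growth rate) and l (Landau constant) are illustrative.
import math

def landau_amplitude(sigma, l, a0, dt, steps):
    """Integrate dA/dt = sigma*A - l*A**3 (real amplitude) with forward Euler."""
    a = a0
    for _ in range(steps):
        a += dt * (sigma * a - l * a**3)
    return a

sigma, l = 0.2, 50.0            # assumed growth rate and Landau constant
a_final = landau_amplitude(sigma, l, a0=1e-3, dt=0.01, steps=20000)
print(f"saturated amplitude: {a_final:.4f} "
      f"(theory: {math.sqrt(sigma / l):.4f})")
```

In the linear-stability picture, sigma is the real part of the leading global eigenvalue; at the critical Reynolds number it crosses zero and the saturated amplitude departs from zero.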
Abstract:
This paper presents ideas for a new neural network architecture that can be compared to a Taylor expansion when dealing with patterns. The architecture is based on linear activation functions with axo-axonic connections. A biological axo-axonic connection between two neurons is one in which the weight of the connection is given by the output of a third neuron. This idea can be implemented in the so-called Enhanced Neural Networks, in which two Multilayer Perceptrons are used: the first outputs the weights that the second MLP uses to compute the desired output. This kind of neural network has universal approximation properties even with linear activation functions. There is a clear difference between cooperative and competitive strategies. The former are based on swarm colonies, in which all individuals share their knowledge about the goal and pass that information to other individuals in order to reach an optimum solution. The latter are based on genetic models, in which individuals can die and new individuals are created by combining the information of living ones, or on molecular/cellular behaviour, passing information from one structure to another. A swarm-based model is applied to obtain the neural network, training the net with a Particle Swarm Optimization algorithm.
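The axo-axonic idea can be sketched in a few lines: a first linear network maps the input to the weight vector of a second linear network applied to the same input. Because those weights are themselves linear in x, the overall output contains quadratic terms in x, which is why the architecture resembles a Taylor expansion. All sizes and values below are illustrative, not the paper's actual configuration:

```python
# Minimal axo-axonic ("enhanced") network: net 1 emits the parameters of
# net 2. Single-layer, single-output, plain Python lists; illustrative only.

def linear(x, W, b):
    """y = W x + b for nested Python lists."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def enhanced_net(x, W1, b1):
    """Net 1 maps x -> (weights, bias) of a 1-output net 2, applied to x."""
    params = linear(x, W1, b1)           # len(x) weights + 1 bias
    w2, b2 = params[:-1], params[-1]
    return sum(wi * xi for wi, xi in zip(w2, x)) + b2

x = [0.5, -1.0]
W1 = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.4]]   # 3 rows: 2 weights + 1 bias
b1 = [0.0, 0.1, 0.2]
print(enhanced_net(x, W1, b1))   # quadratic in x despite linear activations
```

In the paper's setting, W1 and b1 would be the particle positions optimized by the Particle Swarm algorithm instead of gradient descent.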
Abstract:
The energy and specific energy absorbed in the main cell compartments (nucleus and cytoplasm) in typical radiobiology experiments are usually estimated by calculation, as they are not accessible to direct measurement. In most studies, the cell geometry is modelled as a combination of simple mathematical volumes. We propose a method based on high-resolution confocal imaging and ion beam analysis (IBA) to import realistic cell-nucleus geometries into Monte Carlo simulations and thus take into account the variety of geometries encountered in a typical cell population. Seventy-six cell nuclei were imaged using confocal microscopy and their chemical composition was measured using IBA. A cellular phantom was created from these data using the ImageJ image analysis software and imported into the Geant4 Monte Carlo simulation toolkit. Total energy and specific energy distributions in the 76 cell nuclei were calculated for two irradiation protocols: a 3 MeV alpha-particle microbeam used for targeted irradiation and a 239Pu alpha source used for large-angle random irradiation. Qualitative images of the energy deposited along the particle tracks were produced and show good agreement with images of DNA double-strand-break signalling proteins obtained experimentally. The methodology presented in this paper provides microdosimetric quantities calculated from realistic cellular volumes, and is based on open-source software that is publicly available.
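The specific energy tabulated per nucleus is the microdosimetric quantity z = (energy imparted)/(mass), in gray (J/kg). A sketch of the unit conversion with assumed, illustrative nucleus dimensions (not the paper's measured geometries), for the limiting case of an alpha particle depositing all its energy:

```python
# Specific energy z = energy imparted / mass (Gy = J/kg).
# Nucleus volume and density below are assumptions for illustration.

EV_TO_J = 1.602176634e-19   # exact elementary charge, J per eV

def specific_energy(deposit_MeV, mass_kg):
    """Specific energy (Gy) from an energy deposit in MeV and a mass in kg."""
    return deposit_MeV * 1e6 * EV_TO_J / mass_kg

# Assumed nucleus: 500 µm^3 at ~1000 kg/m^3
# -> 500e-18 m^3 * 1000 kg/m^3 = 5e-13 kg
mass = 5e-13
print(f"{specific_energy(3.0, mass):.2f} Gy if a 3 MeV alpha stopped fully")
```

In practice an alpha traversing a nucleus deposits only part of its energy (LET times chord length), which is exactly what the track-structure simulation in Geant4 resolves.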