993 results for Parallel Polarized Nd:YAG Laser
Abstract:
We show a method for parallelizing top-down dynamic programs in a straightforward way by a careful choice of a lock-free shared hash table implementation and randomization of the order in which the dynamic program computes its subproblems. This generic approach is applied to dynamic programs for knapsack, shortest paths, and RNA structure alignment, as well as to a state-of-the-art solution for minimizing the maximum number of open stacks. Experimental results are provided on three different modern multicore architectures, which show that this parallelization is effective and reasonably scalable. In particular, we obtain over 10 times speedup for 32 threads on the open stacks problem.
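The scheme lends itself to a compact illustration. The Python sketch below is only a minimal illustration of the idea, not the paper's implementation: several threads solve the same top-down knapsack recursion while sharing a memo table (a plain dict standing in for the lock-free hash table of the paper), and each thread visits the children of a subproblem in a random order so that threads tend to populate different parts of the table. The instance data and thread count are hypothetical, and CPython's GIL means the sketch shows the structure rather than reproducing the speedups reported above.

```python
# Sketch: randomized-order, shared-memo top-down 0/1 knapsack (hypothetical instance).
import random
import threading

weights = [4, 2, 3, 1, 5, 6, 2]    # hypothetical instance data
values  = [7, 3, 5, 2, 9, 8, 3]
CAPACITY = 10

memo = {}                           # shared memo table: (item index, remaining capacity) -> value

def best(i, cap):
    if i == len(weights) or cap == 0:
        return 0
    key = (i, cap)
    hit = memo.get(key)
    if hit is not None:             # another thread (or this one) already solved it
        return hit
    # Two children: skip item i, or take it if it fits; evaluate them in random order.
    children = [lambda: best(i + 1, cap)]
    if weights[i] <= cap:
        children.append(lambda: values[i] + best(i + 1, cap - weights[i]))
    random.shuffle(children)
    result = max(child() for child in children)
    memo[key] = result              # benign race: every thread writes the same value
    return result

threads = [threading.Thread(target=best, args=(0, CAPACITY)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(best(0, CAPACITY))            # optimal value (18 for this instance)
```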
Abstract:
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
Abstract:
Seeding of plasma-based soft x-ray lasers (SXRL) has demonstrated diffraction-limited beams, fully coherent in space and in time, but with energy not exceeding 1 μJ per pulse. Quasi-steady-state (QSS) plasmas have been demonstrated to store a large amount of energy and then amplify incoherent SXRL emission up to several mJ. Using a 1D time-dependent Bloch–Maxwell model that includes amplification of noise, we demonstrate that femtosecond HHG pulses cannot be efficiently amplified in QSS plasmas. However, applying the Chirped Pulse Amplification concept to the HHG seed allows most of the stored energy to be extracted, reaching up to 5 mJ in fully coherent pulses that can be compressed down to 130 fs.
Abstract:
In this work we propose a method for cleaving silicon-based photonic chips using a laser micromachining system, consisting of a Nd:YVO4 laser emitting at 355 nm in the nanosecond pulse regime and a micropositioning system. The laser makes grooved marks at the desired locations and directions where cleaves have to be initiated, and after several processing steps a crack appears and propagates along the crystallographic planes of the silicon wafer. This allows the chips to be cleaved automatically and with high positioning accuracy, and provides polished vertical facets of better quality than those obtained with other cleaving processes, which eases the optical characterization of photonic devices. The method has been found to be particularly useful when cleaving small-sized chips, where manual cleaving is hard to perform, and also for polymeric waveguides, whose facets get damaged or even destroyed by polishing or manual cleaving. The influence of the length of the grooved line and the speed of processing is studied for a variety of silicon chips. An application to cleaving and characterizing sol–gel waveguides is presented. The total amount of light coupled is higher than with any other procedure.
Abstract:
In this work we study the optimization of laser-fired contact (LFC) processing parameters, namely laser power and number of pulses, based on the electrical resistance measurement of a single aluminum LFC point. The LFC process has been performed through four passivation layers that are typically used in c-Si and mc-Si solar cell fabrication: thermally grown silicon oxide (SiO2), deposited phosphorus-doped amorphous silicon carbide (a-SiCx/H(n)), aluminum oxide (Al2O3) and silicon nitride (SiNx/H) films. Values for the LFC resistance normalized by the laser spot area in the range of 0.65–3 mΩ cm2 have been obtained.
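As a rough worked example of what the normalization means (the spot size below is hypothetical, not a value from the study): for a circular laser spot of 100 μm diameter the spot area is

A = π × (50 μm)² ≈ 7.9 × 10⁻⁵ cm²,

so a normalized LFC resistance of 1 mΩ cm² corresponds to an absolute point-contact resistance of roughly R = (1 mΩ cm²) / A ≈ 13 Ω.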
Abstract:
Continuous and long-pulse lasers have been used for the forming of metal sheets in macroscopic mechanical applications. For the manufacturing of micro-electromechanical systems (MEMS), however, ns laser pulses provide a suitable parameter match over an important range of sheet components: they preserve the short interaction time scale required for the predominantly mechanical (shock) induction of deformation residual stresses, and thus allow components in a medium range of miniaturization to be processed successfully without appreciable thermal deformation. In the present paper, the physics of laser shock microforming and the influence of the different experimental parameters on the net bending angle are presented.
Abstract:
The paper presents a consistent set of results showing the ability of Laser Shock Processing (LSP) to modify the overall properties of Friction Stir Welded (FSW) joints made of AA 2024-T351. Based on laser beam intensities above 10^9 W/cm2 with pulse energies of several joules and pulse durations of nanoseconds, LSP is able to induce a compressive residual stress field, improving wear and fatigue resistance by slowing crack propagation and stress corrosion cracking, as well as improving the overall behaviour of the structure. After the FSW and LSP procedures are briefly presented, the results of micro-hardness measurements and of transverse tensile tests, together with the corrosion resistance of the as-welded joints vs. the LSP-treated ones, are discussed. The ability of LSP to generate compressive residual stresses and to improve the behaviour of the FSW joints is underscored.
Abstract:
There are several heat and mass diffusion problems which affect the IFC chamber design. New simulation models and experiments are needed to take into account the extreme conditions due to ignition pulses and neutron flux.
Abstract:
During the current preparatory phase of the European laser fusion project HiPER, an intensive effort is being made to identify an armour material able to protect the internal walls of the chamber against the high thermal loads and high fluxes of x-rays and ions produced during the fusion explosions. This poster addresses the different threats to, and limitations of, a poly-crystalline tungsten armour. The analysis is carried out under the conditions of an experimental chamber hypothetically constructed to demonstrate laser fusion in a repetitive mode, subjected to a few thousand 48 MJ shock-ignition shots during its entire lifetime. When compared to the literature, an extrapolation of the thermomechanical and atomistic effects obtained from the simulations of the experimental chamber to the conditions of a Demo reactor (working 24/7 at hundreds of MW) or a future power plant (producing GW) suggests that “standard” tungsten will not be a suitable armour. Thus, new materials based on nano-structured W and C are being investigated as possible candidates. The research programme launched by the HiPER material team is introduced.
Abstract:
We have studied the thermo-mechanical response and atomistic degradation of the final lenses in the HiPER project. The final silica lenses are 75 × 75 cm2 squares with a thickness of 5 cm. There are two scenarios, in both of which the lenses are located at 8 m from the centre:
• HiPER 4a: bunches of 100 shots (maximum 5 DT shots < 48 MJ at ≈ 0.1 Hz); no blanket in the chamber geometry.
• HiPER 4b: continuous mode with shots of ≈ 50 MJ at 10 Hz to generate 0.5 GW; liquid metal blanket in the chamber design.
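As a plain consistency check of the HiPER 4b figures quoted above (using only the numbers given in the abstract), the average fusion power follows from the yield multiplied by the repetition rate:

≈ 50 MJ × 10 Hz = 500 MW = 0.5 GW,

which matches the 0.5 GW stated for the continuous-mode scenario.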
Abstract:
This paper describes new improvements for BB-MaxClique (San Segundo et al. in Comput Oper Res 38(2):571–581, 2011), a leading maximum clique algorithm which uses bit strings to compute basic operations efficiently during search by bit masking. Improvements include a recently described recoloring strategy from Tomita et al. (Proceedings of the 4th International Workshop on Algorithms and Computation. Lecture Notes in Computer Science, vol 5942. Springer, Berlin, pp 191–203, 2010), which is now integrated in the bit string framework, as well as different optimization strategies for fast bit scanning. Reported results over DIMACS and random graphs show that the new variants improve over the previous BB-MaxClique for the vast majority of cases. It is also established that recoloring is mainly useful for graphs with high densities.
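To make the bit-string machinery concrete, here is a minimal Python sketch (not the BB-MaxClique code itself): vertex sets are stored as integers used as bit strings, the greedy sequential colouring that branch-and-bound clique algorithms use as a bound is expressed with bit masking, and "fast bit scanning" appears as isolation of the lowest set bit with v & -v. The graph and function name are illustrative only.

```python
# Sketch: bit-string greedy colouring bound for a candidate vertex set.
def greedy_colour_bound(candidates: int, adj: list[int]) -> int:
    """Number of colour classes needed to colour the vertices in `candidates`.

    `adj[v]` is a bit mask of the neighbours of vertex v. Each colour class is
    built by repeatedly scanning the lowest remaining bit and removing that
    vertex together with its neighbours from the class's eligible set.
    """
    colours = 0
    uncoloured = candidates
    while uncoloured:
        colours += 1
        eligible = uncoloured
        while eligible:
            lowest = eligible & -eligible        # fast bit scan: lowest set bit
            v = lowest.bit_length() - 1          # its vertex index
            uncoloured &= ~lowest                # vertex v receives this colour
            eligible &= ~(lowest | adj[v])       # drop v and its neighbours from the class
    return colours

# Tiny example: a triangle {0, 1, 2} plus an isolated vertex 3.
adj = [0b0110, 0b0101, 0b0011, 0b0000]
print(greedy_colour_bound(0b1111, adj))          # prints 3: an upper bound on the clique size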
Abstract:
Graphene is an emerging optical material whose ultrafast dynamics are often probed using pulsed lasers, yet the region in which optical damage takes place is largely uncharted. Here, femtosecond laser pulses induced localized damage in single-layer graphene on sapphire. Raman spatial mapping, SEM, and AFM microscopy were used to quantify the damage. The size of the resulting damaged area correlates linearly with the optical fluence. These results demonstrate local modification of sp2-carbon bonding structures at optical pulse fluences as low as 14 mJ/cm2, an order of magnitude lower than measured and theoretical ablation thresholds.
Abstract:
The illumination uniformity of a spherical capsule directly driven by laser beams has been assessed numerically. Laser facilities characterized by ND = 12, 20, 24, 32, 48 and 60 directions of irradiation, with either a single laser beam or a bundle of NB laser beams associated with each direction, have been considered. The laser beam intensity profile is assumed to be super-Gaussian, and the calculations take into account beam imperfections such as power imbalance and pointing errors. The optimum laser intensity profile, which minimizes the root-mean-square deviation of the capsule illumination, depends on the values of the beam imperfections. Assuming that the NB beams are statistically independent, it is found that they provide a stochastic homogenization of the laser intensity of the whole bundle, reducing its associated errors by a factor of √NB, which in turn improves the illumination uniformity of the capsule. Moreover, it is found that the uniformity of the irradiation is almost the same for all facilities and only depends on the total number of laser beams Ntot = ND × NB.
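The reduction quoted above follows from the standard statistics of averaging independent errors. A minimal statement, assuming each of the NB beams in a bundle carries an identical, statistically independent rms error σ_beam, is

σ_bundle = σ_beam / √NB,

so the residual non-uniformity is governed only by the total number of beams Ntot = ND × NB, as stated in the abstract.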
Abstract:
We have developed a new projector model specifically tailored for fast list-mode tomographic reconstruction in positron emission tomography (PET) scanners with parallel planar detectors. The model provides an accurate estimation of the probability distribution of the coincidence events defined by pairs of scintillating crystals. This distribution is parameterized with 2D elliptical Gaussian functions defined in planes perpendicular to the main axis of the tube of response (TOR). The parameters of these Gaussian functions have been obtained by fitting Monte Carlo simulations that include positron range, acolinearity of the gamma rays, as well as detector attenuation and scatter effects. The proposed model has been applied efficiently to list-mode reconstruction algorithms. Evaluation with Monte Carlo simulations of a rotating high-resolution PET scanner indicates that this model yields a better recovery-to-noise ratio in OSEM (ordered-subsets expectation-maximization) reconstruction than list-mode reconstruction with a symmetric circular Gaussian TOR model, and than histogram-based OSEM with a precalculated system matrix using Monte Carlo simulated models and symmetries.
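The geometric core of such a projector can be sketched compactly. The Python snippet below is only an illustration of the general idea under assumed, hypothetical parameters (the widths sigma_u, sigma_v and the example coordinates are not the fitted values from the paper): the voxel's offset from the TOR axis is decomposed in a plane perpendicular to that axis and weighted by a 2D elliptical Gaussian.

```python
# Sketch: elliptical-Gaussian tube-of-response (TOR) weight for one voxel.
import numpy as np

def tor_weight(voxel, crystal_a, crystal_b, sigma_u=1.5, sigma_v=2.5):
    """Elliptical-Gaussian weight of `voxel` (mm) for the TOR joining two crystals.

    sigma_u, sigma_v (mm) are the transverse widths of the Gaussian; in the paper
    they are fitted to Monte Carlo simulations, here they are illustrative constants.
    """
    a = np.asarray(crystal_a, float)
    b = np.asarray(crystal_b, float)
    p = np.asarray(voxel, float)

    axis = b - a
    axis /= np.linalg.norm(axis)                 # unit vector along the TOR axis

    # Orthonormal basis (u, v) of the plane perpendicular to the axis.
    helper = np.array([0.0, 0.0, 1.0]) if abs(axis[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)

    # Transverse offsets of the voxel from the TOR axis.
    d = p - a
    du, dv = d @ u, d @ v
    return np.exp(-0.5 * ((du / sigma_u) ** 2 + (dv / sigma_v) ** 2))

# Example: a voxel 1 mm off-axis, halfway between two opposing crystals.
print(tor_weight([0.0, 1.0, 0.0], [-100.0, 0.0, 0.0], [100.0, 0.0, 0.0]))
```

In a full list-mode projector this weight would be evaluated for every voxel intersected by the TOR of each recorded coincidence event, with the Gaussian parameters varying along the axis as fitted to the Monte Carlo data.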
Abstract:
The manipulation and handling of an ever-increasing volume of data by current data-intensive applications require novel techniques for efficient data management. Despite recent advances in every aspect of data management (storage, access, querying, analysis, mining), future applications are expected to scale to even higher degrees, not only in terms of the volumes of data handled but also in terms of users and resources, often making use of multiple, pre-existing, autonomous, distributed or heterogeneous resources.