972 results for heavy-ion beam


Relevance: 20.00%

Publisher:

Abstract:

Thin disk and fiber lasers are new solid-state laser technologies that offer a combination of high beam quality and a wavelength that is easily absorbed by metal surfaces; they are expected to challenge CO2 and Nd:YAG lasers in the cutting of thick metal sections (thickness greater than 2 mm). This thesis studied the potential of the disk and fiber lasers for cutting applications and the benefits of their better beam quality. The literature review covered the principles of the disk laser, high-power fiber laser, CO2 laser and Nd:YAG laser, as well as the principle of laser cutting. The cutting experiments were made with the disk, fiber and CO2 lasers using nitrogen as an assist gas. The test material was austenitic stainless steel of sheet thickness 1.3 mm, 2.3 mm, 4.3 mm and 6.2 mm for the disk and fiber laser cutting experiments, and of sheet thickness 1.3 mm, 1.85 mm, 4.4 mm and 6.4 mm for the CO2 laser cutting experiments. The experiments focused on the maximum cutting speeds that still gave appropriate cut quality. Kerf width, cut-edge perpendicularity and surface roughness were the characteristics used to analyze the cut quality, and conclusions were drawn on the influence of high beam quality on cutting speed and cut quality. The cutting speeds of the disk and fiber lasers were very high for the 1.3 mm and 2.3 mm sheets, and the cut quality was good. The disk and fiber laser cutting speeds were lower at 4.3 mm and 6.2 mm sheet thickness, but there was still a considerable percentage increase over the CO2 laser cutting speeds at similar sheet thickness. However, the cut quality at 6.2 mm thickness was not very good for the disk and fiber lasers, although it could probably be improved by proper selection of cutting parameters.

Relevance: 20.00%

Publisher:

Abstract:

The present work describes a fast gas chromatography/negative-ion chemical ionization tandem mass spectrometric assay (Fast GC/NICI-MS/MS) for the analysis of tetrahydrocannabinol (THC), 11-hydroxy-tetrahydrocannabinol (THC-OH) and 11-nor-9-carboxy-tetrahydrocannabinol (THC-COOH) in whole blood. The cannabinoids were extracted from 500 microL of whole blood by a simple liquid-liquid extraction (LLE) and then derivatized by using trifluoroacetic anhydride (TFAA) and hexafluoro-2-propanol (HFIP) as fluorinated agents. Mass spectrometric detection of the analytes was performed in the selected reaction-monitoring mode on a triple quadrupole instrument after negative-ion chemical ionization. The assay was found to be linear in the concentration range of 0.5-20 ng/mL for THC and THC-OH, and of 2.5-100 ng/mL for THC-COOH. Repeatability and intermediate precision were found to be less than 12% for all concentrations tested. Under standard chromatographic conditions, the run cycle time would have been 15 min. By using fast separation conditions, the analysis time was reduced to 5 min without compromising the chromatographic resolution. Finally, a simple approach for estimating the measurement uncertainty is presented.
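The linear ranges quoted above are established from a calibration curve; a minimal sketch (hypothetical concentrations and peak-area ratios, not data from this study) of fitting and inverting such a curve with NumPy:

```python
import numpy as np

# Hypothetical calibration standards for THC (ng/mL) and the corresponding
# peak-area ratios (analyte / internal standard) - illustrative values only.
conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 20.0])    # 0.5-20 ng/mL range
ratio = np.array([0.051, 0.098, 0.249, 0.502, 1.004, 1.998])

# Least-squares linear fit: ratio = slope * conc + intercept
slope, intercept = np.polyfit(conc, ratio, 1)

def quantify(peak_ratio):
    """Invert the calibration line to estimate a concentration."""
    return (peak_ratio - intercept) / slope

result = quantify(0.75)   # ~7.5 ng/mL for a mid-range sample
```

Linearity over the range is then judged from the residuals of this fit, concentration by concentration.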

Relevance: 20.00%

Publisher:

Abstract:

The mathematical foundations of orthogonal M-band multiresolution analysis are presented in detail. The definition of Coifman wavelets is generalized to dilation factor M and to a nonzero center of the vanishing moments. Approximation of a function from its sample values using wavelets is considered, and in particular an asymptotic error estimate of the approximation is presented for Coifman wavelets. Necessary and sufficient conditions are established for the scaling filter, leading to generalized Coifman wavelets. The density of the multiresolution analysis is proven directly from the definition of the Lebesgue integral, using the partition-of-unity property; the proof is valid as such in the space L2(R^d) without resorting to properties or conditions in the Fourier domain. Mallat's algorithm is derived for M-band wavelets and multidimensional signals, and a recursive form of the algorithm is also presented. Coefficient values of the scaling filters associated with Coifman wavelets are solved for several scaling functions by means of a differential evolution algorithm. Approximation and image-compression examples are given to illustrate the methods. The differential evolution algorithm is also used to search for a scaling filter optimized for reference images; the filter found is regular and highly symmetric.
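One analysis step of Mallat's algorithm can be sketched for the simplest case M = 2 with the Haar filter pair (the thesis treats general M-band filters; this is only the familiar special case):

```python
import numpy as np

# One level of Mallat's algorithm (M = 2): the signal is filtered with the
# scaling (low-pass) and wavelet (high-pass) filters and downsampled by M.
h = np.array([1.0, 1.0]) / np.sqrt(2)    # Haar scaling filter
g = np.array([1.0, -1.0]) / np.sqrt(2)   # Haar wavelet filter

def analyze(signal, M=2):
    approx = np.convolve(signal, h[::-1], mode="full")[M - 1::M]
    detail = np.convolve(signal, g[::-1], mode="full")[M - 1::M]
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0])
a, d = analyze(x)
# Haar coefficients: pairwise sums and differences, each scaled by 1/sqrt(2)
```

Applying `analyze` recursively to the approximation coefficients gives the recursive (multilevel) form mentioned above.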

Relevance: 20.00%

Publisher:

Abstract:

The epithelial Na+ channel (ENaC) is highly selective for Na+ and Li+ over K+ and is blocked by the diuretic amiloride. ENaC is a heterotetramer made of two alpha, one beta, and one gamma homologous subunits, each subunit comprising two transmembrane segments. Amino acid residues involved in binding of the pore blocker amiloride are located in the pre-M2 segment of beta and gamma subunits, which precedes the second putative transmembrane alpha helix (M2). A residue in the alpha subunit (alphaS589) at the NH2 terminus of M2 is critical for the molecular sieving properties of ENaC. ENaC is more permeable to Li+ than to Na+ ions. The concentration of half-maximal unitary conductance is 38 mM for Na+ and 118 mM for Li+, a kinetic property that can account for the differences in Li+ and Na+ permeability. We show here that mutation of amino acid residues at homologous positions in the pre-M2 segment of alpha, beta, and gamma subunits (alphaG587, betaG529, gammaS541) decreases the Li+/Na+ selectivity by changing the apparent channel affinity for Li+ and Na+. Fitting single-channel data of the Li+ permeation to a discrete-state model including three barriers and two binding sites revealed that these mutations increased the energy needed for the translocation of Li+ from an outer ion binding site through the selectivity filter. Mutation of betaG529 to Ser, Cys, or Asp made ENaC partially permeable to K+ and larger ions, similar to the previously reported alphaS589 mutations. We conclude that the residues alphaG587 to alphaS589 and homologous residues in the beta and gamma subunits form the selectivity filter, which tightly accommodates Na+ and Li+ ions and excludes larger ions like K+.
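The half-maximal concentrations quoted above imply a simple saturating conductance law; a minimal sketch of how those constants translate into unitary conductance, with a hypothetical maximal conductance gamma_max (not a value from the abstract):

```python
# Saturating single-channel conductance, gamma(c) = gamma_max * c / (K + c),
# using the half-maximal concentrations from the abstract (K = 38 mM for Na+,
# 118 mM for Li+). gamma_max is hypothetical, chosen only to illustrate
# the kinetic argument for the Li+/Na+ permeability difference.
def unitary_conductance(c_mM, gamma_max_pS, K_mM):
    return gamma_max_pS * c_mM / (K_mM + c_mM)

# At 140 mM, Na+ is much closer to saturating its binding site than Li+:
g_na = unitary_conductance(140, 10.0, 38)    # ~7.9 pS (for gamma_max = 10)
g_li = unitary_conductance(140, 10.0, 118)   # ~5.4 pS
```

The higher K for Li+ means its occupancy of the pore binding site rises more slowly with concentration, consistent with the barrier-model interpretation in the abstract.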

Relevance: 20.00%

Publisher:

Abstract:

AIMS: Managing patients with alcohol dependence includes assessment for heavy drinking, typically by asking patients. Some recommend biomarkers to detect heavy drinking, but evidence of accuracy is limited. METHODS: Among people with dependence, we assessed the performance of disialo-carbohydrate-deficient transferrin (%dCDT, ≥1.7%), gamma-glutamyltransferase (GGT, ≥66 U/l), either %dCDT or GGT positive, and breath alcohol (> 0) for identifying 3 self-reported heavy drinking levels: any heavy drinking (≥4 drinks/day or >7 drinks/week for women, ≥5 drinks/day or >14 drinks/week for men), recurrent (≥5 drinks/day on ≥5 days) and persistent heavy drinking (≥5 drinks/day on ≥7 consecutive days). Subjects (n = 402) with dependence and current heavy drinking were referred to primary care and assessed 6 months later with biomarkers and a validated self-reported calendar method assessment of past 30-day alcohol use. RESULTS: The self-reported prevalence of any, recurrent and persistent heavy drinking was 54, 34 and 17%. Sensitivity of %dCDT for detecting any, recurrent and persistent self-reported heavy drinking was 41, 53 and 66%. Specificity was 96, 90 and 84%, respectively. %dCDT had higher sensitivity than GGT and the breath test for each alcohol use level but was not adequately sensitive to detect heavy drinking (missing 34-59% of the cases). Either %dCDT or GGT positive improved sensitivity but not to satisfactory levels, and specificity decreased. Neither a breath test nor GGT was sufficiently sensitive (both tests missed 70-80% of cases). CONCLUSIONS: Although biomarkers may provide some useful information, their sensitivity is low, and the incremental value over self-report in clinical settings is questionable.
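The sensitivity and specificity figures above come from cross-tabulating each biomarker against self-report; a minimal sketch with hypothetical counts chosen to be consistent with the reported 41%/96% for %dCDT (not the study's raw 2x2 table):

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
def sensitivity(tp, fn):
    """Proportion of true heavy drinkers the biomarker flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-heavy drinkers the biomarker clears."""
    return tn / (tn + fp)

# Hypothetical example: 217 self-reported heavy drinkers (54% of 402),
# 89 of whom are %dCDT-positive; 178 of 185 non-heavy drinkers test negative.
sens = sensitivity(tp=89, fn=128)   # ~0.41, matching the reported 41%
spec = specificity(tn=178, fp=7)    # ~0.96, matching the reported 96%
```

"Missing 34-59% of the cases" in the abstract is just 1 - sensitivity across the three drinking levels.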

Relevance: 20.00%

Publisher:

Abstract:

In this paper, a new two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation is proposed. The nonlinear elastic forces of the beam element are obtained using a continuum mechanics approach without employing a local element coordinate system. In this study, linear polynomials are used to interpolate both the transverse and longitudinal components of the displacement. This is different from other absolute nodal-coordinate-based beam elements where cubic polynomials are used in the longitudinal direction. The accompanying defects of the phenomenon known as shear locking are avoided through the adoption of selective integration within the numerical integration method. The proposed element is verified using several numerical examples, and the results are compared to analytical solutions and the results for an existing shear deformable beam element. It is shown that by using the proposed element, accurate linear and nonlinear static deformations, as well as realistic dynamic behavior, can be achieved with a smaller computational effort than by using existing shear deformable two-dimensional beam elements.
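The selective-integration remedy for shear locking can be illustrated on a classical 2-node Timoshenko beam element (a deliberately simpler setting than the paper's absolute-nodal-coordinate element, used here only to show the idea): bending is integrated exactly, while the shear term is under-integrated with a single Gauss point.

```python
import numpy as np

def element_stiffness(EI, kGA, L):
    # DOF order per element: [w1, th1, w2, th2]; linear interpolation for both.
    # Bending: theta' is constant, so exact integration is trivial.
    Kb = EI / L * np.array([[0, 0, 0, 0],
                            [0, 1, 0, -1],
                            [0, 0, 0, 0],
                            [0, -1, 0, 1]], float)
    # Shear: one-point (midpoint) rule for gamma = w' - theta avoids locking.
    B = np.array([-1 / L, -0.5, 1 / L, -0.5])
    Ks = kGA * L * np.outer(B, B)
    return Kb + Ks

def cantilever_tip_deflection(EI, kGA, length, P, n_el=20):
    L = length / n_el
    n_dof = 2 * (n_el + 1)
    K = np.zeros((n_dof, n_dof))
    for e in range(n_el):
        i = 2 * e
        K[i:i + 4, i:i + 4] += element_stiffness(EI, kGA, L)
    f = np.zeros(n_dof)
    f[-2] = P                      # transverse tip load
    free = np.arange(2, n_dof)     # clamp w and theta at node 0
    u = np.linalg.solve(K[np.ix_(free, free)], f[free])
    return u[-2]

# Converges to the analytical P*l**3/(3*EI) + P*l/(kGA); with full (exact)
# integration of the shear term, the same mesh would lock and be far too stiff.
```

With exact integration of the shear term the linear element cannot represent pure bending without parasitic shear strain, which is the locking defect the selective rule removes.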

Relevance: 20.00%

Publisher:

Abstract:

Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems; an appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program which reduce the modelling and optimisation time for structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed; they combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and the steel structures of flat and ridge roofs. This thesis demonstrates that the modelling time, the most time-consuming part of the process, is significantly reduced, modelling errors are reduced, and the results are more reliable.
A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, was tested with optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
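As a generic illustration of the class of evolution algorithms discussed (not the thesis's selection rule or program), a minimal differential evolution sketch with the standard greedy one-to-one selection:

```python
import numpy as np

# DE/rand/1/bin differential evolution - a textbook sketch, illustrative only.
def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9,
                           generations=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
            cross = rng.random(dim) < CR                  # binomial crossover
            cross[rng.integers(dim)] = True               # keep >= 1 gene
            trial = np.where(cross, mutant, pop[i])
            if f(trial) <= cost[i]:                       # greedy selection
                pop[i], cost[i] = trial, f(trial)
    best = np.argmin(cost)
    return pop[best], cost[best]

# Example: minimise the sphere function on [-5, 5]^3
best_x, best_f = differential_evolution(lambda x: float(np.sum(x * x)),
                                        [(-5, 5)] * 3)
```

In constrained structural problems the greedy comparison above is where a selection rule replaces penalty factors: trial and target are compared on feasibility and constraint violation rather than on a weighted penalised cost.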

Relevance: 20.00%

Publisher:

Abstract:

Dynamic behavior of both isothermal and non-isothermal single-column chromatographic reactors with an ion-exchange resin as the stationary phase was investigated. The reactor performance was interpreted by using results obtained when studying the effect of the resin properties on the equilibrium and kinetic phenomena occurring simultaneously in the reactor. Mathematical models were derived for each phenomenon and combined to simulate the chromatographic reactor. The phenomena studied include phase equilibria in multicomponent liquid mixture–ion-exchange resin systems, chemical equilibrium in the presence of a resin catalyst, diffusion of liquids in gel-type and macroporous resins, and chemical reaction kinetics. Above all, attention was paid to the swelling behavior of the resins and how it affects the kinetic phenomena. Several poly(styrene-co-divinylbenzene) resins with different cross-link densities and internal porosities were used. Esterification of acetic acid with ethanol to produce ethyl acetate and water was used as a model reaction system. Choosing an ion-exchange resin with a low cross-link density is beneficial in the case of the present reaction system: the amount of ethyl acetate as well as the ethyl acetate to water mole ratio in the effluent stream increase with decreasing cross-link density. The enhanced performance of the reactor is mainly attributed to the increasing reaction rate, which in turn originates from the phase equilibrium behavior of the system. Mass transfer considerations also favor the use of resins with low cross-link density. The diffusion coefficients of liquids in the gel-type ion-exchange resins were found to fall rapidly when the extent of swelling became low. Glass transition of the polymer was not found to significantly retard the diffusion in sulfonated PS–DVB ion-exchange resins. It was also shown that non-isothermal operation of a chromatographic reactor can be used to significantly enhance the reactor performance.
In the case of the exothermic model reaction system and a near-adiabatic column, a positive thermal wave (higher temperature than in the initial state) was found to travel together with the reactive front, which further increased the conversion of the reactants. Diffusion-induced volume changes of the ion-exchange resins were studied in a flow-through cell. It was shown that describing the swelling and shrinking kinetics of the particles calls for a mass transfer model that explicitly includes the limited expansibility of the polymer network. A good description of the process was obtained by combining the generalized Maxwell-Stefan approach and an activity model derived from the thermodynamics of polymer solutions and gels. The swelling pressure in the resin phase was evaluated by using a non-Gaussian expression for the polymer chain length distribution. Dimensional changes of the resin particles necessitate the use of non-standard mathematical tools for dynamic simulations. A transformed coordinate system, in which the mass of the polymer was used as the spatial variable, was applied when simulating the chromatographic reactor columns as well as the swelling and shrinking kinetics of the resin particles. Shrinking of the particles in a column leads to the formation of dead volume on top of the resin bed. In ordinary Eulerian coordinates this results in a moving discontinuity, which in turn causes numerical difficulties in the solution of the PDE system. The motion of the discontinuity was eliminated by spanning two calculation grids in the column that overlapped at the top of the resin bed. The reactive and non-reactive phase equilibrium data were correlated with a model derived from the thermodynamics of polymer solutions and gels. The thermodynamic approach used in this work is best suited to high degrees of swelling, because the polymer matrix may be in the glassy state when the extent of swelling is low.
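The benefit of carrying out this esterification in a chromatographic reactor (which separates the products as they form) can be seen from the equilibrium limit of the plain batch reaction; a minimal sketch assuming ideal-mixture behavior and the commonly quoted textbook equilibrium constant K of about 4 (an assumption, not a figure from the thesis):

```python
# Equilibrium conversion of the model esterification
#   CH3COOH + C2H5OH <=> CH3COOC2H5 + H2O
# for an equimolar feed with conversion x:
#   K = x**2 / (1 - x)**2  =>  x = sqrt(K) / (1 + sqrt(K))
# Ideal mixture assumed; K ~ 4 is a textbook value, not from the thesis.
def equilibrium_conversion(K):
    s = K ** 0.5
    return s / (1 + s)

x = equilibrium_conversion(4.0)   # 2/3: conversion stalls at ~67%
```

Because a simple batch reaction stalls at roughly two-thirds conversion, continuously removing water on the resin bed pushes the reaction beyond this limit, which is the motivation for the chromatographic reactor concept.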

Relevance: 20.00%

Publisher:

Abstract:

Position-sensitive particle detectors are needed in high-energy physics research. This thesis describes the development of fabrication processes and characterization techniques for silicon microstrip detectors used in the search for elementary particles at the European Organization for Nuclear Research (CERN). The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the effort to reveal the structure of matter and the universe. Detectors made of semiconductors have a better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as the detector material because of its low cost and its standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced, and the detailed structure of a double-sided ac-coupled strip detector is described. The fabrication aspects of strip detectors are discussed, starting from process development and general principles and ending with a description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of the leakage currents and bias resistors, are presented. The beam test setups and their results, the signal-to-noise ratio and the position accuracy, are then described. Earlier research had shown that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors, to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
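The kind of estimate a spreadsheet analysis of the pn junction covers is the depletion width of the reverse-biased diode; a minimal sketch for a one-sided abrupt junction, with illustrative substrate doping and bias values (not the thesis's numbers):

```python
import math

# Depletion width of a one-sided abrupt p+n junction:
#   W = sqrt(2 * eps_Si * V / (q * N_d))
# (built-in voltage neglected against the applied bias, for simplicity).
EPS0 = 8.854e-12                     # vacuum permittivity, F/m
EPS_SI = 11.7 * EPS0                 # silicon permittivity
Q = 1.602e-19                        # elementary charge, C

def depletion_width_um(V_bias, N_d_cm3):
    N_d = N_d_cm3 * 1e6              # cm^-3 -> m^-3
    return math.sqrt(2 * EPS_SI * V_bias / (Q * N_d)) * 1e6

# A high-resistivity substrate (N_d ~ 1e12 cm^-3) lets a ~300 um thick
# detector deplete fully at a moderate bias voltage:
w = depletion_width_um(70.0, 1e12)   # ~300 um
```

Full depletion of the bulk is what makes the whole detector thickness sensitive to the traversing particle, which is why low-doped, high-resistivity silicon is used.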

Relevance: 20.00%

Publisher:

Abstract:

The geometric characterisation of tree orchards is a high-precision activity comprising the accurate measurement and knowledge of the geometry and structure of the trees. Different types of sensors can be used to perform this characterisation. In this work a terrestrial LIDAR sensor (SICK LMS200) whose emission source was a 905-nm pulsed laser diode was used. Given the known dimensions of the laser beam cross-section (with diameters ranging from 12 mm at the point of emission to 47.2 mm at a distance of 8 m), and the known dimensions of the elements that make up the crops under study (flowers, leaves, fruits, branches, trunks), it was anticipated that, for much of the time, the laser beam would only partially hit a foreground target/object, with the consequent problem of mixed pixels or edge effects. Understanding what happens in such situations was the principal objective of this work. With this in mind, a series of tests were set up to determine the geometry of the emitted beam and to determine the response of the sensor to different beam blockage scenarios. The main conclusions that were drawn from the results obtained were: (i) in a partial beam blockage scenario, the distance value given by the sensor depends more on the blocked radiant power than on the blocked surface area; (ii) there is an area that influences the measurements obtained that is dependent on the percentage of blockage and which ranges from 1.5 to 2.5 m with respect to the foreground target/object. If the laser beam impacts on a second target/object located within this range, this will affect the measurement given by the sensor. To interpret the information obtained from the point clouds provided by the LIDAR sensors, such as the volume occupied and the enclosing area, it is necessary to know the resolution and the process for obtaining this mesh of points and also to be aware of the problem associated with mixed pixels.
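The beam footprint at a given range follows from the two diameters quoted above; a minimal sketch assuming the divergence is linear between the quoted figures (12 mm at the emitter, 47.2 mm at 8 m), which is a simplification of the real beam profile:

```python
import math

# Beam cross-section diameter vs. range, linearly interpolated between the
# two figures quoted in the abstract - a simplifying assumption.
def beam_diameter_mm(distance_m, d0=12.0, d8=47.2, ref=8.0):
    return d0 + (d8 - d0) * distance_m / ref

def footprint_area_mm2(distance_m):
    r = beam_diameter_mm(distance_m) / 2
    return math.pi * r * r

d = beam_diameter_mm(4.0)   # ~29.6 mm at mid-range
```

At 4 m the footprint is already about 30 mm across, wider than most leaves and thin branches, so partial blockage (and hence mixed pixels) is the normal case rather than the exception when scanning a tree crown.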

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics.

Relevance: 20.00%

Publisher:

Abstract:

The optical and electrical recovery processes of the metastable state of the EL2 defect artificially created in n-type GaAs by boron or oxygen implantation are analyzed at 80 K using optical isothermal transient spectroscopy. In both cases, we have found an inhibition of the electrical recovery and the existence of an optical recovery in the range 1.1-1.4 eV, competing with the photoquenching effect. The similar results obtained with both elements, and the different behavior observed in comparison with the native EL2 defect, have been related to the lattice damage produced by the implantation process. From the different behavior with the technological process, it can be deduced that the electrical and optical anomalies have a different origin. The electrical inhibition is due to an interaction between the EL2 defect and other implantation-created defects, whereas the optical recovery seems to be related to a change in the microscopic configuration of the metastable state involving the presence of vacancies.

Relevance: 20.00%

Publisher:

Abstract:

The objective of this work was to combine the advantages of the dried blood spot (DBS) sampling process with the highly sensitive and selective negative-ion chemical ionization tandem mass spectrometry (NICI-MS-MS) to analyze for recent antidepressants including fluoxetine, norfluoxetine, reboxetine, and paroxetine from micro whole blood samples (i.e., 10 microL). Before analysis, DBS samples were punched out, and antidepressants were simultaneously extracted and derivatized in a single step by use of pentafluoropropionic acid anhydride and 0.02% triethylamine in butyl chloride for 30 min at 60 degrees C under ultrasonication. Derivatives were then separated on a gas chromatograph coupled with a triple-quadrupole mass spectrometer operating in negative selected reaction monitoring mode for a total run time of 5 min. To establish the validity of the method, trueness, precision, and selectivity were determined on the basis of the guidelines of the "Société Française des Sciences et des Techniques Pharmaceutiques" (SFSTP). The assay was found to be linear in the concentration ranges 1 to 500 ng mL(-1) for fluoxetine and norfluoxetine and 20 to 500 ng mL(-1) for reboxetine and paroxetine. Despite the small sampling volume, the limit of detection was estimated at 20 pg mL(-1) for all the analytes. The stability of DBS was also evaluated at -20 degrees C, 4 degrees C, 25 degrees C, and 40 degrees C for up to 30 days. Furthermore, the method was successfully applied to a pharmacokinetic investigation performed on a healthy volunteer after oral administration of a single 40-mg dose of fluoxetine. Thus, this validated DBS method combines an extractive-derivative single step with a fast and sensitive GC-NICI-MS-MS technique. Using microliter blood samples, this procedure offers a patient-friendly tool in many biomedical fields such as checking treatment adherence, therapeutic drug monitoring, toxicological analyses, or pharmacokinetic studies.

Relevance: 20.00%

Publisher: