978 results for "Manipulation néonatale"


Relevance: 10.00%

Abstract:

We study the quantum orbital and spin dynamics of an electron driven by an external electric field in a one-dimensional double quantum dot with spin-orbit coupling. Two types of external perturbation are considered: a periodic field at the Zeeman frequency and a single half-period pulse. Spin-orbit coupling leads to a nontrivial evolution in the spin and orbital channels and to a strongly spin-dependent probability density distribution. Both the interdot tunneling and the driven motion contribute to the spin evolution. These results can be important for the design of spin-manipulation schemes in semiconductor nanostructures.
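
For orientation only, a minimal sketch of the simplest ingredient, resonant driving of a lone spin at its Zeeman frequency, is given below. This is not the paper's model: the double-dot orbital motion and the spin-orbit coupling are omitted, and all frequencies and amplitudes are illustrative.

```python
# A minimal sketch, not the paper's model: resonant driving of a single
# spin at its Zeeman frequency (hbar = 1). Values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

omega_z = 1.0     # Zeeman splitting
omega_r = 0.05    # drive amplitude (sets the Rabi frequency)

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def schrodinger(t, psi):
    # H(t) = (omega_z/2) sz + omega_r cos(omega_z t) sx : resonant drive
    H = 0.5 * omega_z * sz + omega_r * np.cos(omega_z * t) * sx
    return -1j * (H @ psi)

psi0 = np.array([1, 0], dtype=complex)      # start spin-up
t_flip = np.pi / omega_r                    # one full spin flip (RWA estimate)
sol = solve_ivp(schrodinger, (0, t_flip), psi0, max_step=0.05,
                t_eval=np.linspace(0, t_flip, 400))

p_down = np.abs(sol.y[1]) ** 2              # spin-flip probability
print(f"max spin-flip probability: {p_down.max():.3f}")   # close to 1
```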

Relevance: 10.00%

Abstract:

302 p. : graphs.

Relevance: 10.00%

Abstract:

To improve the cod stocks in the Baltic Sea, a number of regulations have recently been established by the International Baltic Sea Fisheries Commission (IBSFC) and the European Commission. According to these, fishermen were obliged to use nets with escape windows (BACOMA nets), with a mesh size of the escape window of 120 mm, until the end of September 2003. These nets, however, retain only fish much larger than the legal minimum landing size would allow, and given the present stock structure only few such large fish exist. As a consequence, fishermen use a legal alternative net: a conventional trawl with a cod-end of 130 mm diamond-shaped meshes (IBSFC rules of 1 April 2002), to be increased to 140 mm on 1 September 2003 according to the same IBSFC rule. Due to legal alterations of the net by the fishermen (e.g. the use of extra-stiff netting material), these nets have acquired extremely low selective properties, i.e. they catch very small fish and produce great amounts of discards. With the increase of the minimum landing size for Baltic cod from 35 to 38 cm, the amount of discards has increased further since the beginning of 2003. Because it was argued that the BACOMA net had not yet been sufficiently tested on commercial vessels, experiments have now been carried out with the BACOMA net on German and Swedish commercial and research vessels; the results of all experiments conducted so far are compiled and evaluated here. As a result of the Swedish, Danish and German initiative and research, the European Commission reacted in June 2003 and rejected the increase of the diamond-meshed non-BACOMA cod-end from 130 mm to 140 mm in September 2003. To protect the cod stocks in the Baltic Sea more effectively, the use of traditional diamond-meshed cod-ends without an escape window is prohibited in community waters without derogation, effective 1 September 2003. To enable more effective and simplified control of the bottom-trawl fishery in the Baltic Sea, the principle of a "One-Net-Rule" is enforced; this is to be the BACOMA net, with the meshes of the escape window being 110 mm for the time being. The description of the BACOMA net as given in IBSFC rule no. 10 (revision of the 28th session, Berlin 2002) concentrates on the cod-end and the escape window, but only to a lesser extent on the design and mesh composition of the remaining parts of the net, such as the belly and funnel, and many other details. The present description is therefore incomplete and leaves, according to fishermen, ample opportunity for manipulation. An initiative has been started in Germany, in a joint effort of scientists and the fishery, to describe the entire net better and to produce a proposal for a more comprehensive description that leaves less room for manipulation. Such a proposal is given here and should be seen as a starting point for discussion and development towards an internationally uniform net agreed upon by the fishery, scientists and politicians. The Baltic Sea fishery is invited to comment on this proposal, and recommendations for further improvements and specifications are welcome. Once the design is agreed by the Baltic Fishermen Association, it shall be proposed to the IBSFC and the European Commission via that association.

Relevance: 10.00%

Abstract:

A discussion is presented on the potential for fishery development in the Niger Delta region, considering engineering activities and the food-production potential of the freshwater zone and its immediate hinterland, the brackish-water mangrove swamps, and the estuaries. An examination of current trends in the environment indicates that a possible route to improved exploitation of the region lies in hydraulic engineering: the manipulation of environmental conditions through varying freshwater and seawater inputs so as to increase aquatic and wetland productivity.

Relevance: 10.00%

Abstract:

The study examines the integration of cultural, economic and environmental requirements for fish production in Borno State, Nigeria. A reconnaissance survey covering selected Local Government Areas was conducted, and 60 questionnaires were administered in six Local Government Areas: Biu and Shani (southern Borno), Konduga and Jere (central Borno), and Gubia and Kukawa (northern Borno). There is no cultural constraint on fish production, but about 63% of respondents prefer to invest in other farming activities rather than in fish farming, 33% are not aware that fish can be cultured rather than only taken from the wild, and 35% have the impression that fish-farming ventures can be handled only by government. Economic returns on fish production are high, especially in some parts of northern Borno, and local market potential throughout the state is great. Soils are suitable for ponds, apart from a few sandy locations in central and northern Borno. Numerous perennial and seasonal rivers, streams, lakes, pools and floodplains adequate for fish culture exist, especially in southern Borno. The mean annual rainfall can provide some water storage in ponds, and in areas where annual precipitation is less than 550 mm a few flowing boreholes with potential for fish production exist. The temperature regime can support the growth and survival of fish even during the hottest months of the year (March, April and May). With an understanding and manipulation of these requirements, fish production in Nigeria can be greatly enhanced.

Relevance: 10.00%

Abstract:

We present an efficient photorefractive volume-hologram recording technique with a pulsed signal beam and continuous reference-beam illumination. The grating envelope can be controlled simply by manipulating the duty cycle of the signal beam. Thus, for any grating coupling strength and for different initial reference-to-signal intensity ratios, the diffraction efficiency can be maximized with this technique and can be greatly increased in comparison with that of the conventional recording technique. (C) 1998 Optical Society of America.

Relevance: 10.00%

Abstract:

Using the technique of stimulated Raman adiabatic passage (STIRAP), we propose schemes for creating arbitrary coherent superposition states of atoms in four-level systems: a Λ-type system with twofold final states and a four-level ladder system. With the use of a control field, arbitrary coherent superposition states are created without the condition of multiphoton resonance. Suitable manipulation of the detunings and the control field can create either a single state or any desired superposition state. (c) 2005 Pleiades Publishing, Inc.
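
For orientation, here is a sketch of plain three-level STIRAP on two-photon resonance; the abstract's four-level systems and extra control field are not modeled, and the pulse parameters are illustrative.

```python
# Illustrative sketch of textbook three-level STIRAP. With the
# counterintuitive pulse order (Stokes before pump), population is
# transferred |1> -> |3> through the dark state, barely touching |2>.
import numpy as np
from scipy.integrate import solve_ivp

def omega(t, t0, width=1.0, amp=20.0):
    return amp * np.exp(-((t - t0) / width) ** 2)   # Gaussian pulse

def schrodinger(t, c):
    Op = omega(t, t0=+1.0)   # pump couples |1>-|2>, arrives second
    Os = omega(t, t0=-1.0)   # Stokes couples |2>-|3>, arrives first
    H = 0.5 * np.array([[0,  Op, 0],
                        [Op, 0,  Os],
                        [0,  Os, 0]], dtype=complex)
    return -1j * (H @ c)

c0 = np.array([1, 0, 0], dtype=complex)             # start in |1>
sol = solve_ivp(schrodinger, (-6, 6), c0, max_step=0.01)
pops = np.abs(sol.y[:, -1]) ** 2
print("final populations:", np.round(pops, 3))      # approx [0, 0, 1]
```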

Relevance: 10.00%

Abstract:

Signal processing techniques play important roles in the design of digital communication systems, including information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system-implementation technologies, signal processing specialists develop efficient schemes for various communication problems, exploiting mathematical tools such as analysis, probability theory, matrix theory and optimization theory. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (precoder and equalizer) design problems can be solved with matrix and optimization theory. Furthermore, researchers have shown that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
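
As background to this decomposition-based viewpoint, here is a minimal sketch (not from the thesis) of the textbook SVD transceiver, which turns a known flat MIMO channel into parallel scalar subchannels; the dimensions, constellation and noise level are arbitrary.

```python
# SVD transceiver sketch: precoding with V and equalizing with U^H turns
# y = H x + n into independent scalar subchannels with gains s_i.
import numpy as np

rng = np.random.default_rng(0)
nt, nr = 4, 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)             # H = U diag(s) V^H

x = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), size=nt)  # QPSK
tx = Vh.conj().T @ x                    # precode with V
noise = 0.01 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ tx + noise
z = U.conj().T @ y                      # equalize with U^H

print(np.allclose(z, s * x, atol=0.1))  # z_i ~ s_i x_i: parallel subchannels
```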

In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. The channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of matrix decompositions and majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, new algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition, the generalized geometric mean decomposition (GGMD), which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the GGMD is always less than or equal to that of the geometric mean decomposition (GMD), and the optimal GGMD parameters yielding the minimal complexity are derived. Based on channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), the GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under a total transmit power constraint. A novel iterative detection algorithm for the receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be computed easily, the proposed GGMD transceiver has a K/log_2(K) complexity advantage over the GMD transceiver, where K is the number of data symbols per block and is a power of 2 (for K = 1024, roughly a factor of 100). The performance analysis shows that the GGMD DFE transceiver converts a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs); hence the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channel is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD-based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power), with a total bit-rate constraint and per-stream BER constraints; the second is the rate maximization problem (max-rate), with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum-power JT broadcast DFE transceiver (MPJT) and the maximum-rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over linear time-varying (LTV) scalar channels. For both known LTV channels and unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT so that the SINR at the receiver is maximized. In addition, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods such as MUSIC and ESPRIT can then be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator of the frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
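
The difference co-array idea can be sketched in a few lines. The pilot tone indices below are hypothetical, chosen only to show that M physical pilots can generate on the order of M^2 distinct lags.

```python
# Difference co-array sketch: all pairwise differences of M pilot positions
# yield up to M^2 - M + 1 distinct lags ("co-pilots").
import numpy as np

pilots = np.array([0, 1, 4, 9, 15, 22])        # hypothetical tone indices
diffs = pilots[:, None] - pilots[None, :]      # all pairwise differences
co_array = np.unique(diffs)

M = len(pilots)
print(f"M = {M} physical pilots -> {co_array.size} distinct lags "
      f"(upper bound M^2 - M + 1 = {M*M - M + 1})")
```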

Relevance: 10.00%

Abstract:

Quantum computing offers powerful new techniques for speeding up the solution of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing-biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the achievable error suppression and to decrease the physical resources required for error correction.
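
As a toy illustration of why asymmetry helps (not the codes studied in the thesis), consider a classical stand-in: a phase-flip repetition code with majority-vote decoding under dephasing-dominated noise. The rates below are hypothetical.

```python
# Toy biased-noise code: an n-qubit phase-flip repetition code corrects the
# dominant Z errors by majority vote, while an odd number of the rare X
# errors is an undetectable logical fault.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 5, 200_000
p_z, p_x = 0.05, 0.0005          # dephasing-dominated: p_z >> p_x

z_flips = (rng.random((trials, n)) < p_z).sum(axis=1)
logical_z = z_flips > n // 2                   # majority vote fails
x_flips = (rng.random((trials, n)) < p_x).sum(axis=1)
logical_x = (x_flips % 2) == 1                 # odd X parity is uncorrected

print(f"physical Z rate {p_z} -> logical Z rate {logical_z.mean():.5f}")
print(f"physical X rate {p_x} -> logical X rate {logical_x.mean():.5f}")
```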

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for the faulty gates and a second rate, for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
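
A toy fixed-point model makes this interplay concrete. Assuming the standard cubic error suppression of 15-to-1 distillation and a hypothetical additive floor p_c from faulty Cliffords:

```python
# Toy model of repeated 15-to-1 magic-state distillation: eps_out ~ 35*eps^3
# per round, plus a hypothetical fixed Clifford fault contribution p_c that
# acts as a floor on the achievable state quality.
p_c = 1e-7                     # hypothetical faulty-Clifford contribution
eps = 0.01                     # initial magic-state error rate
for k in range(6):
    eps = 35 * eps**3 + p_c    # one distillation round
    print(f"round {k+1}: eps = {eps:.3e}")
# eps converges quickly to a fixed point near p_c, not to zero
```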

Relevance: 10.00%

Abstract:

Heparin has been used as an anticoagulant drug for more than 70 years. The global distribution of contaminated heparin in 2007, which resulted in adverse clinical effects and over 100 deaths, emphasizes the necessity for safer alternatives to animal-sourced heparin. The structural complexity and heterogeneity of animal-sourced heparin not only impede safe access to these biologically active molecules, but also hinder investigations into the significance of structural constituents at the molecular level. Efficient methods for preparing new synthetic heparins with targeted biological activity are necessary not only to ensure clinical safety, but also to optimize derivative design and minimize potential side effects. Low-molecular-weight heparins have become a reliable alternative to heparin owing to their predictable dosages, long half-lives, and reduced side effects. However, heparin oligosaccharide synthesis is a challenging endeavor, owing to the need for complex protecting-group manipulation and stereoselective glycosidic-linkage chemistry, which often results in lengthy synthetic routes and low yields. Recently, chemoenzymatic syntheses have produced targeted ultralow-molecular-weight heparins with high efficiency, but they continue to be restricted by the substrate specificities of the enzymes.

To address the need for access to homogeneous, complex glycosaminoglycan structures, we have synthesized novel heparan sulfate (HS) glycopolymers with well-defined carbohydrate structures and tunable chain length through ring-opening metathesis polymerization. These polymers recapitulate the key features of anticoagulant heparan sulfate by displaying the sulfation pattern responsible for heparin's anticoagulant activity. The use of polymerization chemistry greatly simplifies the synthesis of complex glycosaminoglycan structures, providing a facile method for generating homogeneous macromolecules with tunable biological and chemical properties. Through in vitro chromogenic substrate assays and ex vivo clotting assays, we found that the HS glycopolymers exhibit anticoagulant activity in a sulfation-pattern- and length-dependent manner. Compared with heparin standards, our short polymers did not display any activity, whereas our longer polymers combined in vitro and ex vivo characteristics of both low-molecular-weight heparin derivatives and heparin, displaying hybrid anticoagulant properties. These studies emphasize the significance of sulfation-pattern specificity in carbohydrate-protein interactions, and demonstrate the effectiveness of multivalent molecules in recapitulating the activity of natural polysaccharides.

Relevance: 10.00%

Abstract:

We investigate the Kerr nonlinearity of a V-type three-level atomic system in which the upper two states decay to another state outside the system, so that spontaneously generated coherence may exist. It is shown that a dark state, and hence perfect transparency, is present under certain conditions. Meanwhile, the Kerr nonlinearity can be controlled by manipulating the decay rates and the splitting of the two excited states. Enhanced Kerr nonlinearity without absorption can therefore be obtained for suitable parameters.

Relevance: 10.00%

Abstract:

DNA damage is extremely detrimental to the cell and must be repaired to protect the genome. DNA is capable of conducting charge through the overlapping π-orbitals of stacked bases; this phenomenon is extremely sensitive to the integrity of the π-stack, as perturbations attenuate DNA charge transport (CT). Based on the E. coli base excision repair (BER) proteins EndoIII and MutY, it has recently been proposed that redox-active proteins containing metal clusters can utilize DNA CT to signal one another to locate sites of DNA damage.

To expand our repertoire of proteins that utilize DNA-mediated signaling, we measured the DNA-bound redox potential of the nucleotide excision repair (NER) helicase XPD from Sulfolobus acidocaldarius. A midpoint potential of 82 mV versus NHE was observed, resembling that of the previously reported BER proteins. The redox signal increases in intensity with ATP hydrolysis only in the WT protein and in mutants that maintain ATPase activity, not in ATPase-deficient mutants. The signal increase correlates directly with ATPase activity, suggesting that DNA-mediated signaling may play a general role in protein signaling. Several mutations in human XPD that lead to XP-related diseases have been identified; using SaXPD, we explored how these mutations, which are conserved in the thermophile, affect the protein's electrochemistry.

To further understand the electrochemical signaling of XPD, we studied the yeast S. cerevisiae Rad3 protein. ScRad3 mutants were incubated on a DNA-modified electrode and exhibited a redox potential similar to that of SaXPD. We developed a haploid strain of S. cerevisiae that allows easy manipulation of Rad3. In a survival assay, the ATPase- and helicase-deficient mutants show little survival, while the two disease-related mutants exhibit survival similar to WT. When WT and G47R (ATPase/helicase-deficient) strains were challenged with different DNA-damaging agents, both exhibited comparable survival in the presence of hydroxyurea, while with methyl methanesulfonate and camptothecin the G47R strain exhibited a significant change in growth, suggesting that Rad3 is involved in repairing damage beyond traditional NER substrates. Together, these data expand our understanding of redox-active proteins at the interface of DNA repair.

Relevance: 10.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and can be carried out with the aid of the REDUCE algebra-manipulation program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest-order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed, along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
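
The point can be illustrated in any computer algebra system (the thesis used REDUCE; the sketch below uses Python's sympy): a propagator-like denominator is replaced by a redundant symbol D so that intermediate arithmetic stays polynomial, and the definition of D is restored only at the end.

```python
# Illustrative only: working with a redundant polynomial variable D instead
# of the rational expression it stands for. Polynomial expansion and term
# collection are cheap and canonical; rational normalization is deferred.
import sympy as sp

k, m, D = sp.symbols('k m D')    # D stands for 1/(k**2 - m**2)

# Direct rational treatment: each step forces common-denominator work.
rational = sp.cancel((k**2 + 1)/(k**2 - m**2) + (k**2 - 1)/(k**2 - m**2)**2)

# Polynomial treatment with the redundant variable:
poly = sp.expand((k**2 + 1)*D + (k**2 - 1)*D**2)

# Substitute the definition of D only at the very end:
final = sp.cancel(poly.subs(D, 1/(k**2 - m**2)))
print(sp.simplify(final - rational) == 0)    # True: identical results
```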

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the REDUCE algebra-manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them -- primarily dispersion techniques -- are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first-order correction to the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on access to adequate computer time and core capacity.

Relevance: 10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational-actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We first look at evidence from controlled laboratory experiments, in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
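
The Bayesian backbone of such adaptive testing is easy to sketch. The snippet below shows only plain sequential posterior updating with a response-noise rate, not the EC2 selection criterion itself, and the theories and tests are synthetic.

```python
# Sequential Bayesian updating over candidate theories given noisy choices.
import numpy as np

rng = np.random.default_rng(2)
noise = 0.1                               # probability a subject errs

# predictions[h, t] = choice (0/1) that theory h predicts on test t
predictions = rng.integers(0, 2, size=(4, 30))
true_theory = 2
posterior = np.full(4, 0.25)              # uniform prior over 4 theories

for t in range(30):
    answer = predictions[true_theory, t]  # subject follows the true theory
    if rng.random() < noise:
        answer = 1 - answer               # ... with occasional errors
    # likelihood of the observed answer under each theory
    like = np.where(predictions[:, t] == answer, 1 - noise, noise)
    posterior *= like
    posterior /= posterior.sum()

print(np.round(posterior, 3))             # mass should concentrate on theory 2
```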

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models; classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting; hyperbolic discounting; the "present-bias" models, namely quasi-hyperbolic (α, β) discounting and fixed-cost discounting; and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
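
For orientation, the discount functions of the compared families can be written out directly. The parameter values below are illustrative, not estimates from the experiment, and the quasi-hyperbolic model is written in the conventional β-δ form.

```python
# Discount factors D(t) assigned to a delay t by each model family.
import numpy as np

t = np.array([0, 1, 5, 10, 30], dtype=float)    # delay in, say, weeks

exponential = 0.95 ** t                          # D(t) = delta^t
hyperbolic = 1.0 / (1.0 + 0.25 * t)              # D(t) = 1 / (1 + k t)
# quasi-hyperbolic "present bias": full weight now, beta * delta^t later
quasi_hyp = np.where(t == 0, 1.0, 0.7 * 0.95 ** t)
gen_hyp = (1.0 + 0.5 * t) ** (-0.8 / 0.5)        # D(t) = (1 + a t)^(-b/a)

for name, d in [("exponential", exponential), ("hyperbolic", hyperbolic),
                ("quasi-hyperbolic", quasi_hyp), ("generalized", gen_hyp)]:
    print(f"{name:18s}", np.round(d, 3))
```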

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity; even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreases with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
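
A sketch of the standard Tversky-Kahneman value function used in such loss-averse utility specifications follows; the thesis's estimated model is not reproduced here, and the parameters are the conventional textbook values.

```python
# Prospect-theory value function: concave for gains, convex for losses,
# with lambda > 1 encoding loss aversion around the reference point.
import numpy as np

def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value of a gain/loss x measured relative to the reference point."""
    x = np.asarray(x, dtype=float)
    v = np.empty_like(x)
    gains = x >= 0
    v[gains] = x[gains] ** alpha
    v[~gains] = -lam * (-x[~gains]) ** beta
    return v

# Losing $10 hurts more than gaining $10 pleases:
print(pt_value([10, -10]))    # -> [ 7.59  -17.07]
```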

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 10.00%

Abstract:

We present a theoretical analysis and numerical modeling of the optical levitation and trapping of stuck particles with pulsed optical tweezers. In our model, a pulsed laser is used to generate a large gradient force within a short duration that overcomes the adhesive interaction between the stuck particles and the surface, and a low-power continuous-wave (cw) laser is then used to capture the levitated particle. We describe the gradient force generated by the pulsed optical tweezers and model the binding interaction between the stuck beads and the glass surface by the dominant van der Waals force with a randomly distributed binding strength. We numerically calculate the single-pulse levitation efficiency for polystyrene beads as a function of the pulse energy, the axial displacement from the surface to the pulsed-laser focus, and the pulse duration. The result of our numerical modeling is qualitatively consistent with the experimental result. (C) 2005 Optical Society of America.
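
The leverage of a pulsed beam is easy to see from peak-power arithmetic. All numbers below are hypothetical, not taken from the paper; the only physics used is that the Rayleigh-regime gradient force scales linearly with optical intensity.

```python
# Back-of-the-envelope comparison of pulsed vs cw gradient force at the
# same focus, assuming force proportional to intensity.
pulse_energy = 1e-6      # J  (1 uJ), assumed
pulse_width = 1e-9       # s  (1 ns), assumed
cw_power = 10e-3         # W  (10 mW typical trapping power), assumed

peak_power = pulse_energy / pulse_width          # 1 kW peak
enhancement = peak_power / cw_power              # force ratio ~ 1e5
print(f"peak pulse power: {peak_power:.0f} W, "
      f"~{enhancement:.0e}x the cw gradient force")
```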