103 results for Second-order nonlinearity
Abstract:
In conditional probabilistic logic programming, given a query, the two most common forms for answering the query are either a probability interval or a precise probability obtained by using the maximum entropy principle. The former can be noninformative (e.g., the interval [0, 1]) and the reliability of the latter is questionable when the prior knowledge is imprecise. To address this problem, in this paper, we propose some methods to quantitatively measure whether a probability interval or a single probability is sufficient for answering a query. We first propose an approach to measuring the ignorance of a probabilistic logic program with respect to a query. The measure of ignorance (w.r.t. a query) reflects how reliable a precise probability for the query can be, and a high value of ignorance suggests that a single probability is not suitable for the query. We then propose a method to measure the probability that the exact probability of a query falls in a given interval, i.e., a second-order probability. We call it the degree of satisfaction. If the degree of satisfaction is high enough w.r.t. the query, then the given interval can be accepted as the answer to the query. We also prove that our measures satisfy many properties, and we use a case study to demonstrate the significance of the measures. © Springer Science+Business Media B.V. 2012
Abstract:
We investigate the acceleration of particles by Alfvén waves via the second-order Fermi process in the lobes of giant radio galaxies. Such sites are candidates for the accelerators of ultra-high-energy cosmic rays (UHECR). We focus on the nearby Fanaroff-Riley type I radio galaxy Centaurus A. This is motivated by the coincidence of its position with the arrival direction of several of the highest energy Auger events. The conditions necessary for consistency with the acceleration time-scales predicted by quasi-linear theory are reviewed. Test particle calculations are performed in fields which guarantee electric fields with no component parallel to the local magnetic field. The results of quasi-linear theory are, to an order of magnitude, found to be accurate at low turbulence levels for non-relativistic Alfvén waves and at both low and high turbulence levels in the mildly relativistic case. We conclude that for pure stochastic acceleration via Alfvén waves to be plausible as the generator of UHECR in Cen A, the baryon number density would need to be several orders of magnitude below currently held upper limits.
Abstract:
Biosignal measurement and processing is increasingly being deployed in ambulatory situations, particularly in connected health applications. Such an environment dramatically increases the likelihood of artifacts which can occlude features of interest and reduce the quality of information available in the signal. If multichannel recordings are available for a given signal source, then there is currently a considerable range of methods which can suppress or in some cases remove the distorting effect of such artifacts. There are, however, considerably fewer techniques available if only a single-channel measurement is available, and yet single-channel measurements are important where minimal instrumentation complexity is required. This paper describes a novel artifact removal technique for use in such a context. The technique, known as ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA), is capable of operating on single-channel measurements. The EEMD technique is first used to decompose the single-channel signal into a multidimensional signal. The CCA technique is then employed to isolate the artifact components from the underlying signal using second-order statistics. The new technique is tested against the currently available wavelet denoising and EEMD-ICA techniques using both electroencephalography and functional near-infrared spectroscopy data and is shown to produce significantly improved results. © 1964-2012 IEEE.
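As a minimal illustration of the CCA stage described in this abstract, the sketch below separates components by temporal autocorrelation (CCA between the data and a one-sample-delayed copy, i.e., second-order statistics) and zeroes the low-autocorrelation, artifact-like components. The EEMD decomposition itself is omitted here; a generic multichannel matrix stands in for the IMFs, and the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def cca_autocorr_denoise(X, keep):
    """Split X (n_samples x n_channels) into components ranked by lag-1
    autocorrelation, as in BSS-CCA: CCA between X and a one-sample-delayed
    copy reduces to an eigenproblem on the whitened lag-1 covariance.
    All but the `keep` highest-autocorrelation components are zeroed
    before reconstruction."""
    mu = X.mean(axis=0)
    Xc = X - mu
    n = len(Xc)
    # Whiten: C0 = E diag(d) E^T, whitener W = E diag(d^-1/2)
    d, E = np.linalg.eigh(Xc.T @ Xc / n)
    Y = Xc @ (E / np.sqrt(d))            # unit-covariance channels
    # Symmetrised lag-1 covariance of the whitened data
    C1 = (Y[1:].T @ Y[:-1] + Y[:-1].T @ Y[1:]) / (2 * (n - 1))
    lam, R = np.linalg.eigh(C1)          # eigenvalues = autocorrelations, ascending
    S = Y @ R                            # CCA source estimates
    S[:, :S.shape[1] - keep] = 0.0       # drop low-autocorrelation components
    # Undo the rotation and the whitening, restore the mean
    return S @ R.T @ (E * np.sqrt(d)).T + mu
```

On a synthetic mixture of a slow oscillation and white noise, keeping only the highest-autocorrelation component substantially raises the correlation of each channel with the underlying clean signal.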
Abstract:
This paper contributes to and expands on the Nakagami-m phase model. It derives exact, closed-form expressions for both the phase cumulative distribution function and its inverse. In addition, empirical first- and second-order statistics obtained from measurements conducted in a body-area network scenario were used to fit the phase probability density function, the phase cumulative distribution function, and the phase crossing rate expressions. Remarkably, the unlikely shapes of the phase statistics, as predicted by the theoretical formulations, are actually encountered in practice.
Abstract:
We investigate the backflow of information in a system with a second-order structural phase transition, namely, a quasi-one-dimensional Coulomb crystal. Using standard Ramsey interferometry which couples a target ion (the system) to the rest of the chain (a phononic environment), we study the non-Markovian character of the resulting open system dynamics. We study two different time scales and show that the backflow of information pinpoints both the phase transition and different dynamical features of the chain as it approaches criticality. We also establish an exact link between the backflow of information and the Ramsey fringe visibility.
Abstract:
We employ the time-dependent R-matrix (TDRM) method to calculate anisotropy parameters for positive and negative sidebands of selected harmonics generated by two-color two-photon above-threshold ionization of argon. We consider odd harmonics of an 800-nm field ranging from the 13th to 19th harmonic, overlapped by a fundamental 800-nm IR field. The anisotropy parameters obtained using the TDRM method are compared with those obtained using a second-order perturbation theory with a model potential approach and a soft photon approximation approach. Where available, a comparison is also made to published experimental results. All three theoretical approaches provide similar values for anisotropy parameters. The TDRM approach obtains values that are closest to published experimental values. At high photon energies, the differences between each of the theoretical methods become less significant.
Abstract:
Microwave heating reduces the preparation time and improves the adsorption quality of activated carbon. In this study, activated carbon was prepared by impregnation of palm kernel fiber with phosphoric acid followed by microwave activation. Three different types of activated carbon were prepared, having high surface areas of 872 m² g⁻¹, 1256 m² g⁻¹, and 952 m² g⁻¹ and pore volumes of 0.598 cc g⁻¹, 1.010 cc g⁻¹, and 0.778 cc g⁻¹, respectively. The combined effects of the different process parameters, such as the initial adsorbate concentration, pH, and temperature, on adsorption efficiency were explored with the help of a Box-Behnken design for response surface methodology (RSM). The adsorption rate could be expressed by a polynomial equation as a function of the independent variables. The hexavalent chromium adsorption rate was found to be 19.1 mg g⁻¹ at the optimized conditions of the process parameters, i.e., initial concentration of 60 mg L⁻¹, pH of 3, and operating temperature of 50 °C. Adsorption of Cr(VI) by the prepared activated carbon was spontaneous and followed second-order kinetics. The adsorption mechanism can be described by the Freundlich isotherm model. The prepared activated carbon has demonstrated comparable performance to other available activated carbons for the adsorption of Cr(VI).
Abstract:
We address the generation, propagation, and application of multipartite continuous variable entanglement in a noisy environment. In particular, we focus our attention on the multimode entangled states achievable by second-order nonlinear crystals, i.e., coherent states of the SU(m,1) group, which provide a generalization of the twin-beam state of a bipartite system. The full inseparability in the ideal case is shown, whereas thresholds for separability are given for the tripartite case in the presence of noise. We find that entanglement of tripartite states is robust against thermal noise, both in the generation process and during propagation. We then consider coherent states of SU(m,1) as a resource for multipartite distribution of quantum information and analyze a specific protocol for telecloning, proving its optimality in the case of symmetric cloning of pure Gaussian states. We show that the proposed protocol also provides the first example of a completely asymmetric 1 → m telecloning and derive explicitly the optimal relation among the different fidelities of the m clones. The effect of noise in the various stages of the protocol is taken into account, and the fidelities of the clones are analytically obtained as a function of the noise parameters. In turn, this permits the optimization of the telecloning protocol, including its adaptive modifications to the noisy environment. In the optimized scheme the clones' fidelity remains maximal even in the presence of losses (in the absence of thermal noise), for propagation times that diverge as the number of modes increases. In the optimization procedure the prominent role played by the location of the entanglement source is analyzed in detail. Our results indicate that, when only losses are present, telecloning is a more effective way to distribute quantum information than direct transmission followed by local cloning.
Abstract:
The Taguchi method was applied to investigate the optimal operating conditions in the preparation of activated carbon from palm kernel shell with four control factors: irradiation time, microwave power, concentration of phosphoric acid as the impregnation substance, and impregnation ratio between acid and palm kernel shell. The best combination of the control factors obtained by applying the Taguchi method was a microwave power of 800 W, irradiation time of 17 min, impregnation ratio of 2, and acid concentration of 85%. The noise factor (particle size of the raw material) was considered in a separate outer array and had no effect on the quality of the activated carbon, as confirmed by a t-test. Activated carbon prepared at the optimum combination of control factors had a high BET surface area of 1,473.55 m² g⁻¹ and high porosity. The adsorption equilibrium and kinetic data can satisfactorily be described by the Langmuir isotherm and a pseudo-second-order kinetic model, respectively. The maximum adsorption capacity suggested by the Langmuir model was 1000 mg g⁻¹.
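The two models fitted in this abstract have standard closed forms, sketched below. Only q_max = 1000 mg g⁻¹ comes from the abstract; the Langmuir constant K_L, the rate constant k2, and the concentration and time grids are illustrative values, not the paper's fitted parameters.

```python
import numpy as np

def langmuir(C, q_max, K_L):
    """Langmuir isotherm (monolayer adsorption): q = q_max*K_L*C / (1 + K_L*C)."""
    return q_max * K_L * C / (1.0 + K_L * C)

def pseudo_second_order(t, q_e, k2):
    """Integrated pseudo-second-order kinetics:
    dq/dt = k2*(q_e - q)^2  =>  q(t) = k2*q_e^2*t / (1 + k2*q_e*t)."""
    return k2 * q_e**2 * t / (1.0 + k2 * q_e * t)

# q_max = 1000 mg/g as reported; K_L and k2 are illustrative assumptions.
C = np.linspace(0.0, 500.0, 6)            # equilibrium concentration, mg/L
q_eq = langmuir(C, q_max=1000.0, K_L=0.05)
t = np.linspace(0.0, 120.0, 7)            # contact time, min
q_t = pseudo_second_order(t, q_e=q_eq[-1], k2=1e-4)
```

The isotherm saturates at q_max for large C, and the kinetic curve rises monotonically toward its equilibrium uptake q_e, which is the qualitative behaviour both models are meant to capture.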
Abstract:
This paper will consider the inter-relationship of a number of overlapping disciplinary theoretical concepts relevant to a strengths-based orientation, including well-being, salutogenesis, sense of coherence, quality of life and resilience. Psychological trauma will be referenced and the current evidence base for interventions with children and young people outlined and critiqued. The relational impact of trauma on family relationships is emphasised, providing a rationale for systemic psychotherapeutic interventions as part of a holistic approach to managing the effects of trauma. The congruence between second-order systemic psychotherapy models and a strengths-based philosophy is noted, with particular reference to solution-focused brief therapy and narrative therapy, and illustrated via a description of the process of helping someone move from a victim position to a survivor identity using solution-focused brief therapy, and through a case example applying a narrative therapy approach to a teenage boy who suffered a serious assault. The benefits of a strengths-based approach to psychological trauma for clients and therapists will be summarised and a number of potential pitfalls articulated.
Abstract:
Quantitative monitoring of a mechanochemical reaction by Raman spectroscopy leads to a surprisingly straightforward second-order kinetic model in which the rate is determined simply by the frequency of reactive collisions between reactant particles.
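The second-order rate law invoked in this abstract has a simple closed form. The sketch below, with an illustrative rate constant and initial concentration (not values from the paper), compares the analytic solution 1/[A](t) = 1/[A]₀ + kt with a direct forward-Euler integration of the rate equation.

```python
import numpy as np

# Second-order decay of a single reactant: d[A]/dt = -k*[A]^2.
# Analytic solution: 1/[A](t) = 1/[A]0 + k*t.
k, A0 = 0.5, 2.0                    # illustrative rate constant and initial conc.
t = np.linspace(0.0, 10.0, 2001)
A_exact = 1.0 / (1.0 / A0 + k * t)

# Forward-Euler integration of the same rate law, as a numerical check
A_num = np.empty_like(t)
A_num[0] = A0
dt = t[1] - t[0]
for i in range(1, len(t)):
    A_num[i] = A_num[i - 1] - dt * k * A_num[i - 1] ** 2

# Unlike first-order kinetics, the half-life depends on concentration:
half_life = 1.0 / (k * A0)          # t_1/2 = 1/(k*[A]0)
```

The concentration-dependent half-life is one practical signature distinguishing second-order from first-order kinetics in such monitoring data.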
Abstract:
In this paper we investigate the first- and second-order characteristics of the received signal at the output of hypothetical selection, equal-gain and maximal-ratio combiners which utilize spatially separated antennas at the base station. Considering a range of human body movements, we model the small-scale fading characteristics of the signal using diversity-specific analytical equations which take into account the number of available signal branches at the receiver. It is shown that these equations provide an excellent fit to the measured channel data. Furthermore, for many hypothetical diversity receiver configurations, the Nakagami-m parameter was found to be close to 1.
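A Nakagami-m parameter close to 1 corresponds to Rayleigh fading. As a minimal, self-contained check (synthetic data, not the paper's measurements), the sketch below draws a Rayleigh-faded envelope and applies the standard inverse-normalised-variance moment estimator, which should return m ≈ 1.

```python
import numpy as np

# Inverse-normalised-variance estimator of the Nakagami-m parameter:
#   m_hat = E[r^2]^2 / Var(r^2).
# For Rayleigh fading (i.i.d. circularly symmetric complex Gaussian
# channel gain), the true value is m = 1.
rng = np.random.default_rng(42)
n = 200_000
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
r2 = np.abs(h) ** 2                 # squared envelope: exponential under Rayleigh
m_hat = r2.mean() ** 2 / r2.var()
```

The same estimator applied to measured envelope data is one common way to obtain the empirical m values the abstract refers to.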
Abstract:
Radio-frequency (RF) impairments in the transceiver hardware of communication systems (e.g., phase noise (PN), high power amplifier (HPA) nonlinearities, or in-phase/quadrature-phase (I/Q) imbalance) can severely degrade the performance of traditional multiple-input multiple-output (MIMO) systems. Although calibration algorithms can partially compensate for these impairments, the remaining distortion still has substantial impact. Despite this, most prior works have not analyzed this type of distortion. In this paper, we investigate the impact of residual transceiver hardware impairments on the MIMO system performance. In particular, we consider a transceiver impairment model, which has been experimentally validated, and derive analytical ergodic capacity expressions for both exact and high signal-to-noise ratios (SNRs). We demonstrate that the capacity saturates in the high-SNR regime, thereby creating a finite capacity ceiling. We also present a linear approximation for the ergodic capacity in the low-SNR regime, and show that impairments have only a second-order impact on the capacity. Furthermore, we analyze the effect of transceiver impairments on large-scale MIMO systems; interestingly, we prove that if one increases the number of antennas at one side only, the capacity behaves similarly to the finite-dimensional case. On the contrary, if the number of antennas on both sides increases with a fixed ratio, the capacity ceiling vanishes; thus, impairments cause only a bounded offset in the capacity compared to the ideal transceiver hardware case.
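The capacity ceiling can be illustrated with a commonly used residual-distortion model in which the distortion power scales with the signal power (an error-vector-magnitude parameter kappa). The single-antenna sketch below is a simplification under that assumption, not the paper's exact MIMO expressions; the kappa value is illustrative.

```python
import numpy as np

def capacity_with_impairments(snr, kappa):
    """Capacity under residual transceiver distortion whose power scales
    with the signal power (EVM kappa): the effective SINR is
    snr / (kappa^2 * snr + 1), so capacity saturates as SNR grows."""
    return np.log2(1.0 + snr / (kappa**2 * snr + 1.0))

kappa = 0.1                               # 10% EVM, an illustrative value
snr_db = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
snr = 10.0 ** (snr_db / 10.0)
c = capacity_with_impairments(snr, kappa)
ceiling = np.log2(1.0 + 1.0 / kappa**2)   # finite high-SNR capacity ceiling
```

Raising the transmit power indefinitely is therefore futile under this model: the capacity climbs with SNR but can never exceed log2(1 + 1/kappa²).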
Abstract:
As is now well established, a first-order expansion of the Hohenberg-Kohn total energy density functional about a trial input density, namely, the Harris-Foulkes functional, can be used to rationalize a non-self-consistent tight binding model. If the expansion is taken to second order, then the energy and electron density matrix need to be calculated self-consistently, and from this functional one can derive a charge-self-consistent tight binding theory. In this paper we have used this to describe a polarizable-ion tight binding model, which has the benefit of treating charge transfer in point multipoles. This admits a ready description of ionic polarizability and crystal field splitting. In constructing such a model it is necessary to find a number of parameters that mimic their more exact counterparts in density functional theory. We describe in detail how this is done using a combination of intuition, exact analytical fitting, and a genetic optimization algorithm. Having obtained model parameters, we show that this constitutes a transferable scheme that can be applied rather universally to small and medium-sized organic molecules. We have shown that the model gives a good account of static structural and dynamic vibrational properties of a library of molecules, and finally we demonstrate the model's capability by showing a real-time simulation of an enolization reaction in aqueous solution. In two subsequent papers, we show that the model is a great deal more general, in that it will describe solvents and solid substrates, and that we have therefore created a self-consistent quantum mechanical scheme that may be applied to simulations in heterogeneous catalysis.
Abstract:
Arguably, the myth of Shakespeare is a myth of universality. Much has been written about the dramatic, thematic and ‘humanistic’ transference of Shakespeare’s works: their permeability, transcendence of cultures and histories, geographies and temporalities. Located within this debate is a belief that this universality, among other dominating factors, is founded upon the power and poeticism of Shakespeare’s language. Subsequently, if we acknowledge Frank Kermode’s assertion that “the life of the plays is the language” and “the secret (of Shakespeare’s works) is in the detail,” what then becomes of this myth of universality, and how is Shakespeare’s language ‘transferred’ across cultures? In Asian intercultural adaptations, language becomes the primary site of confrontation as issues of semantic accuracy and poetic affiliation abound. Often, the language of the text is replaced with a cultural equivalent or reconceived with other languages of the stage – song and dance, movement and music; metaphor and imagery consequently find new voices. Yet if myth is, as Roland Barthes propounds, a second-order semiotic system that is predicated upon the already constituted sign, here being language, and myth is parasitical on language, what happens to the myth of Shakespeare in these cultural re-articulations? Wherein lies the ‘universality’? Or is ‘universality’ all that it is – an insubstantial (mythical) pageant? Using Ong Keng Sen’s Search Hamlet (2002), this paper will examine the transference of myth and/as language in intercultural Shakespeares. If, as Barthes argues, myths are to be understood as metalanguages that adumbrate social hegemonies, intercultural imaginings of Shakespeare can be said to expose the hollow myth of universality yet in a paradoxical double-bind reify and reinstate this self-same myth.