Abstract:
A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for a numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version also requires that the linearized solution manifold transversally intersects the non-linear solution manifold at a chosen set of points or cross-section in the state space. However, a major point of departure of the present method is that it has the flexibility of treating non-linear damping and stiffness terms of the original system as damping and stiffness terms in the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method would require construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus’ characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math., VII (1954) 649–673] and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (or nested commutators). An advantage of this approach is that the underlying exponential transformation could preserve certain intrinsic structural properties of the solution of the non-linear problem. 
Yet another advantage of the transversal linearization lies in the non-unique representation of the linearized vector field – an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite dimensional framework. Only a limited numerical exploration of the method is presently provided for a couple of popularly known non-linear oscillators, viz. a hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
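The Magnus step above can be made concrete with a small numerical sketch. The code below approximates the fundamental solution matrix of a generic linear time-varying system x'(t) = A(t) x(t) using only the first two Magnus terms and a crude left-endpoint quadrature; it illustrates the exponential/Lie-bracket machinery, not the paper's MTL construction, and the example A(t) is hypothetical.

```python
import numpy as np

def commutator(A, B):
    """Lie bracket [A, B] = AB - BA."""
    return A @ B - B @ A

def expm_series(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def magnus_fsm(A_of_t, t0, t1, n=400):
    """Fundamental solution matrix of x'(t) = A(t) x(t) over [t0, t1],
    from the first two Magnus terms:
      Omega_1 = int_{t0}^{t1} A(s) ds
      Omega_2 = (1/2) int_{t0}^{t1} int_{t0}^{s} [A(s), A(u)] du ds
    and Phi = exp(Omega_1 + Omega_2).  Left-endpoint quadrature; a sketch,
    not a production integrator."""
    h = (t1 - t0) / n
    As = [A_of_t(t0 + i * h) for i in range(n)]
    omega1 = h * sum(As)
    omega2 = sum(commutator(As[i], As[j]) for i in range(n) for j in range(i))
    return expm_series(omega1 + 0.5 * h * h * omega2)

# Hypothetical time-varying oscillator: stiffness drifts linearly in time.
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.1 * t), 0.0]])
Phi = magnus_fsm(A, 0.0, 1.0)  # maps the state at t=0 to the state at t=1
```

Because each A(t) here is trace-free and commutators are always trace-free, the exponential representation preserves det Phi = 1 up to round-off, a small instance of the structure-preservation property noted above.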
Abstract:
We study diagonal estimates for the Bergman kernels of certain model domains in C^2 near boundary points that are of infinite type. To do so, we impose a mild structural condition on the defining functions of interest that facilitates optimal upper and lower bounds. Because the condition is mild, and unlike earlier studies of this sort, we are able to obtain estimates for non-convex pseudoconvex domains as well. This condition quantifies, in some sense, how flat a domain is at an infinite-type boundary point. In this scheme of quantification, the model domains considered below range, roughly speaking, from being "mildly infinite-type" to very flat at the infinite-type points.
Abstract:
A compact model for the noise margin (NM) of single-electron transistor (SET) logic is developed as a function of the device capacitances and the background charge (ζ). The noise margin is then used as a metric to evaluate the robustness of SET logic against background charge, temperature, and variation of the SET gate and tunnel junction capacitances (C_G and C_T). It is shown that choosing α = C_T/C_G = 1/3 maximizes the NM. An estimate of the maximum tolerable ζ is shown to be equal to ±0.03e. Finally, the effect of mismatch in device parameters on the NM is studied through exhaustive simulations, which indicate that α ∈ [0.3, 0.4] provides maximum robustness. It is also observed that mismatch can have a significant impact on static power dissipation.
Abstract:
A half-duplex constrained non-orthogonal cooperative multiple access (NCMA) protocol suitable for transmission of information from N users to a single destination in a wireless fading channel is proposed. Transmission in this protocol comprises a broadcast phase and a cooperation phase. In the broadcast phase, each user takes turns broadcasting its data to all other users and the destination in an orthogonal fashion in time. In the cooperation phase, each user transmits a linear function of what it received from all other users as well as its own data. In contrast to the orthogonal extension of cooperative relay protocols to cooperative multiple access channels, wherein at any point in time only one user is considered a source and all the other users behave as relays and do not transmit their own data, the NCMA protocol relaxes the orthogonality built into those protocols and hence allows for a more spectrally efficient usage of resources. Code design criteria for achieving the full diversity of N in the NCMA protocol are derived using pairwise error probability (PEP) analysis, and it is shown that this can be achieved with a minimum total time duration of 2N - 1 channel uses. Explicit construction of full-diversity codes is then provided for an arbitrary number of users. Since the maximum-likelihood decoding complexity grows exponentially with the number of users, the notion of g-group decodable codes is introduced for our setup and a set of necessary and sufficient conditions is also obtained.
Multi-GNSS precise point positioning with raw single-frequency and dual-frequency measurement models
Abstract:
The emergence of multiple satellite navigation systems, including BDS, Galileo, modernized GPS, and GLONASS, brings great opportunities and challenges for precise point positioning (PPP). We study the contributions of various GNSS combinations to PPP performance based on undifferenced or raw observations, in which the signal delays and ionospheric delays must be considered. A priori ionospheric knowledge, such as regional or global corrections, strengthens the estimation of ionospheric delay parameters. The undifferenced models are generally more suitable for single-, dual-, or multi-frequency data processing for single or combined GNSS constellations. Another advantage over ionospheric-free PPP models is that undifferenced models avoid noise amplification by linear combinations. Extensive performance evaluations are conducted with multi-GNSS data sets collected from 105 MGEX stations in July 2014. Dual-frequency PPP results from each single constellation show that the convergence time of undifferenced PPP solution is usually shorter than that of ionospheric-free PPP solutions, while the positioning accuracy of undifferenced PPP shows more improvement for the GLONASS system. In addition, the GLONASS undifferenced PPP results demonstrate performance advantages in high latitude areas, while this impact is less obvious in the GPS/GLONASS combined configuration. The results have also indicated that the BDS GEO satellites have negative impacts on the undifferenced PPP performance given the current “poor” orbit and clock knowledge of GEO satellites. More generally, the multi-GNSS undifferenced PPP results have shown improvements in the convergence time by more than 60 % in both the single- and dual-frequency PPP results, while the positioning accuracy after convergence indicates no significant improvements for the dual-frequency PPP solutions, but an improvement of about 25 % on average for the single-frequency PPP solutions.
Abstract:
Bulk Ge15Te83Si2 glass has been found to exhibit memory-type switching for 1 mA current with a threshold electric field of 7.3 kV/cm. The electrical set and reset processes have been achieved with triangular and rectangular pulses, respectively, of 1 mA amplitude. In situ Raman scattering studies indicate that the degree of disorder in Ge15Te83Si2 glass is reduced from off to set state. The local structure of the sample under reset condition is similar to that in the off state. The Raman results are consistent with the switching results which indicate that the Ge15Te83Si2 glass can be set and reset easily. (C) 2007 American Institute of Physics.
Abstract:
Restriction endonucleases (REases) protect bacteria from invading foreign DNA and are endowed with exquisite sequence specificity. REases originated from ancestral proteins and evolved new sequence specificities by genetic recombination, gene duplication, replication slippage, and transpositional events. They are also speculated to have evolved from nonspecific endonucleases, attaining a high degree of sequence specificity through point mutations. We describe here an example of the generation of an exquisitely site-specific REase from a highly promiscuous one by a single point mutation.
Abstract:
In this paper, we are concerned with energy efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors can enable accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of various sensors that maximizes the sensing lifetime. The scheduling of sensor activity using the optimum schedules obtained using the proposed algorithm is shown to achieve significantly longer sensing lifetimes compared to those achieved using physical coverage. Relaxing the full area coverage requirement to a partial area coverage (e.g., 95% of area coverage as adequate instead of 100% area coverage) further enhances the lifetime.
Abstract:
In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point-targets in wireless sensor networks using information coverage. Defining a set of sensors which collectively can sense a target accurately as an information cover, we propose an algorithm to obtain Disjoint Set of Information Covers (DSIC), which achieves longer network life compared to the set of covers obtained using an Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
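A minimal sketch of the disjoint-cover idea follows. It uses simple physical coverage (each sensor knows the set of targets it can sense) and a generic greedy max-coverage heuristic; it is not the DSIC algorithm itself, which builds covers from collaborative information coverage, and the sensor/target names are hypothetical.

```python
def greedy_disjoint_covers(coverage):
    """coverage: dict mapping sensor -> set of targets it can sense.
    Greedily extracts disjoint covers (each cover senses every target);
    stops when the remaining sensors cannot form another full cover."""
    targets = set().union(*coverage.values())
    available = dict(coverage)
    covers = []
    while available:
        uncovered = set(targets)
        pool = dict(available)
        cover = []
        while uncovered:
            # Pick the sensor contributing the most still-uncovered targets.
            best = max(pool, key=lambda s: len(pool[s] & uncovered), default=None)
            if best is None or not (pool[best] & uncovered):
                return covers  # cannot complete another cover
            cover.append(best)
            uncovered -= pool.pop(best)
        covers.append(cover)
        for s in cover:
            del available[s]  # disjointness: a sensor serves one cover only
    return covers

# Hypothetical deployment: four sensors, two targets.
coverage = {'s1': {'t1', 't2'}, 's2': {'t1'}, 's3': {'t2'}, 's4': {'t1', 't2'}}
covers = greedy_disjoint_covers(coverage)
```

Each returned cover can be activated in its own round while the others sleep, so the number of disjoint covers is a proxy for the sensing lifetime.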
Abstract:
The coherent quantum evolution of a one-dimensional many-particle system after slowly sweeping the Hamiltonian through a critical point is studied using a generalized quantum Ising model containing both integrable and nonintegrable regimes. It is known from previous work that universal power laws of the sweep rate appear in such quantities as the mean number of excitations created by the sweep. Several other phenomena are found that are not reflected by such averages: there are two different scaling behaviors of the entanglement entropy and a relaxation that is power law in time rather than exponential. The final state of evolution after the quench is not characterized by any effective temperature, and the Loschmidt echo converges algebraically for long times, with cusplike singularities in the integrable case that are dynamically broadened by nonintegrable perturbations.
Abstract:
Average-delay optimal scheduling of messages arriving to the transmitter of a point-to-point channel is considered in this paper. We consider a discrete time batch-arrival batch-service queueing model for the communication scheme, with a service time that may be a function of batch size. The question of delay optimality is addressed within the semi-Markov decision-theoretic framework. Approximations to the average-delay optimal policy are obtained.
Abstract:
We study a fixed-point formalization of the well-known analysis of Bianchi. We provide a significant simplification and generalization of the analysis. In this more general framework, the fixed-point solution and performance measures resulting from it are studied. Uniqueness of the fixed point is established. Simple and general throughput formulas are provided. It is shown that the throughput of any flow will be bounded by the one with the smallest transmission rate. The aggregate throughput is bounded by the reciprocal of the harmonic mean of the transmission rates. In an asymptotic regime with a large number of nodes, explicit formulas for the collision probability, the aggregate attempt rate, and the aggregate throughput are provided. The results from the analysis are compared with ns2 simulations and also with an exact Markov model of the backoff process. It is shown how the saturated network analysis can be used to obtain TCP transfer throughputs in some cases.
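The flavor of such a fixed-point analysis can be illustrated with the classical saturated-network equations (the textbook Bianchi special case with a single rate, not the paper's generalized framework; the W and m defaults below are illustrative):

```python
def bianchi_fixed_point(n, W=32, m=5, iters=5000, damping=0.5):
    """Damped fixed-point iteration for the saturated-network equations
        tau(p) = 2(1 - 2p) / ((1 - 2p)(W + 1) + p W (2^m - 1))
        p      = 1 - (1 - tau)^(n - 1)
    n: number of nodes, W: minimum contention window, m: backoff stages.
    Returns (tau, p): per-slot attempt and collision probabilities."""
    def tau_of(p):
        return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (2 ** m - 1))
    p = 0.5
    for _ in range(iters):
        # Damping keeps the iteration stable even when the map oscillates.
        p = damping * p + (1 - damping) * (1 - (1 - tau_of(p)) ** (n - 1))
    return tau_of(p), p

tau, p = bianchi_fixed_point(n=10)
```

At the fixed point, n·tau gives the aggregate attempt rate per slot, from which throughput expressions of the kind discussed above follow.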
Abstract:
Let X be a geometrically irreducible smooth projective curve defined over R, of genus at least 2, that admits a nontrivial automorphism sigma. Assume that X does not have any real points. Let tau be the antiholomorphic involution of the complexification X_C of X. We show that if the action of sigma on the set S(X) of all real theta characteristics of X is trivial, then the order of sigma is even, say 2k, and the automorphism tau ∘ sigma-hat^k of X_C has a fixed point, where sigma-hat is the automorphism of X x_R C defined by sigma. We then show that there exists X with a real point and admitting a nontrivial automorphism sigma, such that the action of sigma on S(X) is trivial, while X/
Abstract:
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a ‘magnitude-based inference’ approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
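The probabilistic statements quoted above can be read as posterior tail probabilities. As a minimal sketch, with an entirely hypothetical posterior mean and standard deviation (not the study's values) and a normal approximation standing in for MCMC samples:

```python
import math

def prob_effect_exceeds(mu, sd, threshold):
    """P(effect > threshold) when the posterior of the effect is
    approximated as Normal(mu, sd^2).  In a full MCMC analysis this
    would simply be the fraction of posterior samples above threshold."""
    z = (threshold - mu) / (sd * math.sqrt(2.0))
    return 0.5 * (1.0 - math.erf(z))

# Hypothetical posterior for a between-treatment difference in
# hemoglobin-mass change (arbitrary units): mean 35, sd 18, and a
# smallest "substantial" difference of 5.
p = prob_effect_exceeds(35.0, 18.0, 5.0)
```

The returned value is a direct probabilistic statement of the form "the probability of a substantially greater increase is p", with no p-value or asymptotic argument involved.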
Abstract:
The potential to cultivate new relationships with spectators has long been cited as a primary motivator for those using digital technologies to construct networked or telematic performances or para-performance encounters in which performers and spectators come together in virtual – or at least virtually augmented – spaces and places. Today, with Web 2.0 technologies such as social media platforms becoming increasingly ubiquitous, and increasingly easy to use, more and more theatre makers are developing digitally mediated relationships with spectators. Sometimes this is for the purpose of an aesthetic encounter, sometimes for a critical encounter, and sometimes as part of an audience politicisation, development or engagement agenda. Sometimes it reflects a genuine interest, and sometimes it responds to spectators or funding bodies who expect at least some engagement via Facebook, Twitter or Instagram. In this paper, I examine peculiarities and paradoxes emerging in some of these efforts to engage spectators via networked performance or para-performance encounters. I use examples ranging from theatre, to performance art, to political activism – from ‘cyberformances’ on Helen Varley Jamieson’s Upstage Avatar Performance Platform, to Wafaa Bilal’s Domestic Tension installation, where spectators around the world could use a webcam in a chat room to target him with paintballs while he was in residence in a living room set up in a gallery for a week, as a comment on the use of drone technology in war, to Liz Crow’s Bedding Out, where she invited people to physically and virtually join her in her bedroom to discuss the impact of the anti-disabled austerity politics emerging in her country, to Dislife’s use of holograms of disabled people popping up in disabled parking spaces when able-bodied drivers attempted to pull into them, amongst others.
I note the frequency with which these performance practices deploy discourses of democratisation, participation, power and agency to argue that these technologies assist in positioning spectators as co-creators actively engaged in the evolution of a performance (and, in politicised pieces that point to racism, sexism, or ableism, in pushing spectators to reflect on their agency in that dramatic or daily-cum-dramatic performance of prejudice). I investigate how a range of issues complicates the discourses of democratic co-creativity associated with networked performance and para-performance activities: the scenographic challenges, which others have already noted, in deploying networked technologies for both participant and bystander audiences; the siloisation of aesthetic, critical and audience activation activities on networked technologies; the conventionalised dramaturgies of response, informed by power, politics and impression management, that play out in online as much as offline performances; and the high personal, social and professional stakes involved in participating in a form where spectators’ responses are almost always documented, recorded and re-represented to secondary and tertiary sets of spectators via the circulation into new networks that social media platforms so readily facilitate.