888 results for Semigroups of Operators
Abstract:
Using the density matrix renormalization group, we calculated the finite-size corrections of the entanglement alpha-Renyi entropy of a single interval for several critical quantum chains. We considered models with U(1) symmetry, such as the spin-1/2 XXZ and spin-1 Fateev-Zamolodchikov models, as well as models with discrete symmetries, such as the Ising, the Blume-Capel, and the three-state Potts models. These corrections contain physically relevant information. Their amplitudes, which depend on the value of alpha, are related to the dimensions of operators in the conformal field theory governing the long-distance correlations of the critical quantum chains. The obtained results, together with earlier exact and numerical ones, allow us to formulate some general conjectures about the operator responsible for the leading finite-size correction of the alpha-Renyi entropies. We conjecture that the exponent of the leading finite-size correction of the alpha-Renyi entropies is p(alpha) = 2X_epsilon/alpha for alpha > 1 and p(1) = nu, where X_epsilon denotes the dimension of the energy operator of the model and nu = 2 for all the models considered.
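The conjectured exponent is simple enough to evaluate directly. A minimal sketch, assuming the standard value X_epsilon = 1 for the critical Ising chain purely as an illustrative input:

```python
# Conjectured exponent of the leading finite-size correction of the
# alpha-Renyi entropy: p(alpha) = 2*X_eps/alpha for alpha > 1, p(1) = nu = 2.
def correction_exponent(alpha, x_eps, nu=2.0):
    """Return the conjectured exponent p(alpha)."""
    if alpha <= 1.0:
        return nu            # the conjecture gives p(1) = nu
    return 2.0 * x_eps / alpha

# Illustrative example: critical Ising chain, energy operator with X_eps = 1.
for a in (1.0, 2.0, 3.0):
    print(a, correction_exponent(a, x_eps=1.0))
```

For alpha = 2 and X_eps = 1 this gives p = 1, i.e. a correction decaying like the inverse of the subsystem size.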
Abstract:
In this article, we study the existence of mild solutions for fractional neutral integro-differential equations with infinite delay.
Abstract:
This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in RN, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {Xj}j=1,...,m is the control distance obtained by minimizing the time needed to go from one point to another along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the Lebesgue measure in RN is doubling with respect to the X-control distance. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case RN is the support of a Lie group, and moreover we require that the vector fields be left-invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity result for the mean-value operators of L-subharmonic functions, where L is our differential operator.
In the third chapter we prove a necessary and sufficient condition for the regularity of boundary points for the Dirichlet problem, on an open subset of RN, related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator and introduce the notion of "quasi-boundedness". We then show the close relationship between this notion, the fundamental solution of the given operator, and the regularity of boundary points.
Abstract:
In this work we introduce an analytical approach for the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by considering two additive operators: the first one represents its nonuniform Fourier transform approximation and the second one suppresses aliasing. The first operator can be analytically characterized and computed quickly by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
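As a rough illustration of the first of the two operators only (the nonuniform Fourier transform approximation obtained by interpolation), one might resample the spectrum at warped frequencies. This is a crude sketch that ignores the aliasing-suppression operator entirely; the warping map and interpolation scheme are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def warp_spectrum(x, warp):
    """Resample the FFT of x at warped frequencies by linear interpolation
    of the real and imaginary parts separately. This approximates only the
    nonuniform-Fourier-transform part of a frequency warping transform;
    no aliasing suppression is performed."""
    N = len(x)
    X = np.fft.rfft(x)
    f = np.arange(len(X)) / N        # normalized frequencies in [0, 1/2]
    fw = warp(f)                     # warped frequency grid (same range assumed)
    return np.interp(fw, f, X.real) + 1j * np.interp(fw, f, X.imag)
```

With the identity warping map, the sketch simply returns the original spectrum, which is a useful sanity check.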
Abstract:
Given the weight sequence for a subnormal recursively generated weighted shift on Hilbert space, one approach to the study of classes of operators weaker than subnormal has been to form a backward extension of the shift by prefixing weights to the sequence. We characterize positive quadratic hyponormality and revisit quadratic hyponormality of certain such backward extensions of arbitrary length, generalizing earlier results, and also show that a function apparently introduced as a matter of convenience for quadratic hyponormality actually captures considerable information about positive quadratic hyponormality.
Abstract:
We establish the convergence of pseudospectra in Hausdorff distance for closed operators acting in different Hilbert spaces and converging in the generalised norm resolvent sense. As an assumption, we exclude the case that the limiting operator has constant resolvent norm on an open set. We extend the class of operators for which it is known that the latter cannot happen by showing that if the resolvent norm is constant on an open set, then this constant is the global minimum. We present a number of examples exhibiting various resolvent norm behaviours and illustrating the applicability of this characterisation compared to known results.
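For matrices, the resolvent norm that defines the pseudospectrum can be probed numerically via the identity ||(A - zI)^{-1}|| = 1/sigma_min(A - zI). A minimal sketch (the example matrix and grid points are illustrative, not taken from the paper):

```python
import numpy as np

def resolvent_norm(A, z):
    """||(A - z I)^{-1}||, computed as 1/sigma_min(A - z I); infinite
    (up to rounding) when z is an eigenvalue of A."""
    n = A.shape[0]
    smin = np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]
    return np.inf if smin == 0 else 1.0 / smin

def pseudospectrum_mask(A, eps, grid):
    """Points z of the grid lying in the eps-pseudospectrum,
    i.e. where the resolvent norm exceeds 1/eps."""
    return [resolvent_norm(A, z) > 1.0 / eps for z in grid]

# Illustrative nonnormal example: a nilpotent Jordan-type block.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(resolvent_norm(A, 2.0 + 0.0j))
```

Sweeping z over a grid and thresholding at 1/eps draws the eps-pseudospectrum; the convergence results of the paper concern how such sets behave under generalised norm resolvent convergence.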
Abstract:
Heuristic methods are popular tools for finding critical slip surfaces in slope stability analyses. A new genetic algorithm (GA) is proposed in this work that has a standard structure but a novel encoding and generation of individuals, with custom-designed operators for mutation and crossover that produce kinematically feasible slip surfaces with high probability. In addition, new indices to assess the efficiency of the operators in their search for the minimum factor of safety (FS) are proposed. The proposed GA is applied to traditional benchmark examples from the literature, as well as to a new practical example. Results show that the proposed GA is reliable, flexible and robust: it provides good minimum FS estimates that are not very sensitive to the number of nodes and that are very similar across replications.
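A toy sketch of the central idea, i.e. an encoding plus operators that keep individuals feasible by construction. Here "kinematic feasibility" is reduced to monotonicity of node depths and the factor of safety is replaced by a stand-in fitness; none of this reproduces the paper's actual encoding:

```python
import random

def make_individual(n, lo=0.0, hi=1.0):
    """Encode a candidate slip surface as n sorted node depths, so the
    surface is monotone (our stand-in for feasibility) by construction."""
    return sorted(random.uniform(lo, hi) for _ in range(n))

def crossover(a, b):
    """One-point crossover followed by re-sorting to restore feasibility."""
    k = random.randrange(1, len(a))
    return sorted(a[:k] + b[k:])

def mutate(ind, sigma=0.05):
    """Perturb one node, then re-sort so the child stays feasible."""
    child = ind[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, sigma)
    return sorted(child)

def ga_minimize(fitness, n_nodes=8, pop=30, gens=50, seed=0):
    """Plain elitist GA: keep the best half, refill with offspring."""
    random.seed(seed)
    population = [make_individual(n_nodes) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop - len(parents))]
        population = parents + children
    return min(population, key=fitness)

# Stand-in for the factor of safety: squared distance to a target profile.
target = [i / 7 for i in range(8)]
fs = lambda ind: sum((x - t) ** 2 for x, t in zip(ind, target))
best = ga_minimize(fs)
```

Because crossover and mutation re-sort their output, every individual ever evaluated is feasible, which is the property the paper's custom operators are designed to achieve (with high probability) for real slip surfaces.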
Abstract:
This paper presents a new approach to the delineation of local labor markets based on evolutionary computation. The aim of the exercise is the division of a given territory into functional regions based on travel-to-work flows. Such regions are defined so that a high degree of inter-regional separation and of intra-regional integration, in both cases in terms of commuting flows, is guaranteed. Additional requirements include the absence of overlap between delineated regions and the exhaustive coverage of the whole territory. The procedure is based on the maximization of a fitness function that measures aggregate intra-region interaction under constraints of inter-region separation and minimum size. In the experimentation stage, two variations of the fitness function are used, and the process is also applied as a final stage for the optimization of the results of one of the most successful existing methods, which is used by the British authorities for the delineation of travel-to-work areas (TTWAs). The empirical exercise is conducted using real data for a sufficiently large territory that is considered representative given the density and variety of the travel-to-work patterns it embraces. The paper includes a quantitative comparison with traditional alternative methods, an assessment of the performance of the set of operators specifically designed to handle the regionalization problem, and an evaluation of the convergence process. The robustness of the solutions, which is crucial in a research and policy-making context, is also discussed.
Abstract:
The characterization of blood pressure in treatment trials assessing the benefits of blood pressure lowering regimens is a critical factor for the appropriate interpretation of study results. With numerous operators involved in the measurement of blood pressure in many thousands of patients being screened for entry into clinical trials, it is essential that operators follow pre-defined measurement protocols involving multiple measurements and standardized techniques. Blood pressure measurement protocols have been developed by international societies and emphasize the importance of appropriate choice of cuff size, identification of Korotkoff sounds, and avoidance of digit preference. Training of operators and auditing of blood pressure measurement may assist in reducing operator-related errors in measurement. This paper describes the quality control activities adopted for the screening stage of the 2nd Australian National Blood Pressure Study (ANBP2), a cardiovascular outcome trial of the treatment of hypertension in the elderly that was conducted entirely in general practices in Australia. A total of 54 288 subjects were screened; 3688 previously untreated subjects were identified as having blood pressure >140/90 mmHg at the initial screening visit, of whom 898 (24%) were found ineligible for study entry after two further visits because the elevated reading was not sustained. For both systolic and diastolic blood pressure recordings, the observed digit preference fell within 7 percentage points of the expected frequency. Protocol adherence, in terms of the required minimum blood pressure difference between the last two successive recordings, was 99.8%. These data suggest that adherence to blood pressure recording protocols and elimination of digit preference can be achieved through appropriate training programs and quality control activities in large multi-centre community-based trials in general practice.
Repeated blood pressure measurement prior to initial diagnosis and study entry is essential to appropriately characterize hypertension in these elderly patients.
Abstract:
The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application, and need to be specified with the help of domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model we are calibrating the probability and timing models that comprise each process using data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task. Its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data and provides an extrapolation to classifications in scenarios outside the calibration data.
A simple strategy is used to calibrate the timing component of the model, and the results for reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
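A minimal sketch of the logistic-regression classification component described above, fitted to invented conflict-detection data (the single feature, the labels and the fitting routine are all illustrative assumptions; the study's actual predictors are not reproduced here):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Fit weights w and intercept b by gradient descent on the logistic
    log-loss; a plain stand-in for any standard logistic regression fit."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                                 # per-sample loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict_conflict(w, b, X):
    """Modelled probability that the operator classifies a scenario as a conflict."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Hypothetical calibration data: feature = miss distance between aircraft,
# label = 1 if the subject declared a conflict.
X = np.array([[0.5], [1.0], [1.5], [4.0], [5.0], [6.0]])
y = np.array([1, 1, 1, 0, 0, 0])
w, b = fit_logistic(X, y)
```

Once fitted, the model interpolates between calibration scenarios and extrapolates to nearby ones, which is exactly the role the OCM assigns to its Classify process.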
Abstract:
This thesis addresses the viability of automatic speech recognition for control room systems; with careful system design, automatic speech recognition (ASR) devices can be a useful means of human-computer interaction for specific types of task. These tasks can be defined as complex verbal activities, such as command and control, and can be paired with spatial tasks, such as monitoring, without detriment. It is suggested that ASR use be confined to routine plant operation, as opposed to critical incidents, due to the possible effects of stress on the operators' speech. It is proposed that using ASR will require operators to adapt a commonly used skill to cater for a novel use of speech. Before using the ASR device, new operators will require some form of training. It is shown that a demonstration by an experienced user of the device can lead to better performance than instructions alone. Thus, a relatively cheap and very efficient form of operator training can be supplied by demonstration from experienced ASR operators. From a series of studies into speech-based interaction with computers, it is concluded that the interaction should be designed to capitalise on the tendency of operators to use short, succinct, task-specific styles of speech. From studies comparing different types of feedback, it is concluded that operators should be given screen-based feedback, rather than auditory feedback, for control room operation. Feedback will take two forms: the use of the ASR device will require recognition feedback, which is best supplied using text, while the performance of a process control task will require task feedback integrated into the mimic display. This latter feedback can be either textual or symbolic, but it is suggested that symbolic feedback will be more beneficial. Related to both interaction style and feedback is the issue of handling recognition errors. These should be corrected by simple command repetition practices, rather than by error-handling dialogues.
This method of error correction is held to be non-intrusive to primary command and control operations. This thesis also addresses some of the problems of user error in ASR use, and provides a number of recommendations for its reduction.
Abstract:
For a Polish space M and a Banach space E, let B1(M, E) be the space of first Baire class functions from M to E, endowed with the pointwise weak topology. We study the compact subsets of B1(M, E) and show that the fundamental results proved by Rosenthal, Bourgain, Fremlin, Talagrand and Godefroy in the case E = R also hold in the general case. For instance: a subset of B1(M, E) is compact iff it is sequentially (resp. countably) compact, the convex hull of a compact bounded subset of B1(M, E) is relatively compact, etc. We also show that our class includes Gul'ko compact spaces. In the second part of the paper we examine under which conditions a bounded linear operator T : X* → Y such that T|B_X* : (B_X*, w*) → Y is a Baire-1 function is the pointwise limit of a sequence (Tn) of operators with Tn|B_X* : (B_X*, w*) → (Y, ‖·‖) continuous for all n ∈ N. Our results in this case are connected with classical results of Choquet, Odell and Rosenthal.
Abstract:
A new, unified presentation of the ideal norms of factorization of operators through Banach lattices and related ideal norms is given.
Abstract:
Mathematics Subject Classification: 74D05, 26A33
Abstract:
2000 Mathematics Subject Classification: 46B28, 47D15.