945 results for weights of ideals


Relevance:

80.00%

Publisher:

Abstract:

Abstract. We prove that the vast majority of JC∗-triples satisfy the condition of universal reversibility. Our characterisation is that a JC∗-triple is universally reversible if and only if it has no triple homomorphisms onto Hilbert spaces of dimension greater than two nor onto spin factors of dimension greater than four. We establish corresponding characterisations in the cases of JW∗-triples and of TROs (regarded as JC∗-triples). We show that the distinct natural operator space structures on a universally reversible JC∗-triple E are in bijective correspondence with a distinguished class of ideals in its universal TRO, identify the Shilov boundaries of these operator spaces and prove that E has a unique natural operator space structure precisely when E contains no ideal isometric to a nonabelian TRO. We deduce some decomposition and completely contractive properties of triple homomorphisms on TROs.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we propose a novel online modeling algorithm for nonlinear and nonstationary systems using a radial basis function (RBF) neural network with a fixed number of hidden nodes. Each of the RBF basis functions has a tunable center vector and an adjustable diagonal covariance matrix. A multi-innovation recursive least square (MRLS) algorithm is applied to update the RBF weights online, while the modeling performance is monitored. When the modeling residual of the RBF network becomes large in spite of the weight adaptation, a node identified as insignificant is replaced with a new node, whose tunable center vector and diagonal covariance matrix are optimized using the quantum particle swarm optimization (QPSO) algorithm. The major contribution is to combine the MRLS weight adaptation and QPSO node structure optimization in an innovative way so that the algorithm can track the local characteristics of the nonstationary system well with a very sparse model. Simulation results show that the proposed algorithm has significantly better performance than existing approaches.
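As a rough illustration of the model class only (not the paper's algorithm), the sketch below implements a fixed-size Gaussian RBF network with per-node centre vectors and diagonal covariances and updates the output weights with a plain exponentially weighted recursive least squares step; the multi-innovation update and the QPSO node replacement are omitted, and all names and parameter values are illustrative.

```python
import numpy as np

class OnlineRBF:
    """Fixed-size RBF model: Gaussian nodes with tunable centres and
    diagonal covariances; linear output weights updated recursively."""

    def __init__(self, centres, diag_covs, lam=0.99, delta=1e3):
        self.centres = np.asarray(centres, dtype=float)      # (M, d)
        self.diag_covs = np.asarray(diag_covs, dtype=float)  # (M, d)
        m = self.centres.shape[0]
        self.w = np.zeros(m)            # output weights
        self.P = np.eye(m) * delta      # inverse correlation matrix
        self.lam = lam                  # forgetting factor

    def _phi(self, x):
        # Gaussian responses with a diagonal covariance per node
        diff = x - self.centres
        return np.exp(-0.5 * np.sum(diff**2 / self.diag_covs, axis=1))

    def predict(self, x):
        return self._phi(x) @ self.w

    def update(self, x, y):
        # Standard exponentially weighted RLS step (single innovation)
        phi = self._phi(x)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)        # gain vector
        e = y - phi @ self.w                      # a priori residual
        self.w += k * e
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return e

# Illustrative use on a toy 1-D signal
rng = np.random.default_rng(0)
model = OnlineRBF(centres=rng.uniform(-1, 1, (8, 1)),
                  diag_covs=np.full((8, 1), 0.1))
for t in range(200):
    x = rng.uniform(-1, 1, 1)
    y = np.sin(3 * x[0]) + 0.05 * rng.standard_normal()
    model.update(x, y)
```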

Relevance:

80.00%

Publisher:

Abstract:

Document design and typeface design: a typographic specification for a new Intermediate Greek-English Lexicon by CUP, accompanied by typefaces modified for the specific typographic requirements of the text. The Lexicon is a substantial (over 1400 pages) publication for HE students and academics intended to complement Liddell-Scott (the standard reference for classical Greek since the 1850s), and has been in preparation for over a decade. The typographic appearance of such works has changed very little since the original editions, largely due to the lack of suitable typefaces: early digital proofs of the Lexicon utilised directly digitised versions of historical typefaces, making the entries difficult to navigate and the document uneven in typographic texture. Close collaboration with the editors of the Lexicon, and discussion of the historical precedents for such documents, informed the design at all typographic levels to achieve a highly reader-friendly result that proposes a model for this kind of typography. Uniquely for a work of this kind, typeface design decisions were integrated into the wider document design specification. A rethinking of the complex typography for Greek and English, based on historical editions as well as equivalent bilingual reference works at this level (from OUP, CUP, Brill, Mondadori, and other publishers), led to a redefinition of multi-script typeface pairing for the specific context, taking into account recent developments in typeface design. Specifically, the relative weighting of elements within each entry was redefined, as well as the typographic texture of type styles across the two scripts. In detail, Greek typefaces were modified to emphasise clarity and readability, particularly of diacritics, at very small sizes. The relative weights of typefaces typeset side-by-side were fine-tuned so that the visual hierarchy of the entries was unambiguous despite the dense typesetting.

Relevance:

80.00%

Publisher:

Abstract:

Two recent works have adapted the Kalman–Bucy filter into an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither requires matrix inversions except for the (frequently diagonal) observation error covariance. We analyse the behaviour of the ODEs involved in these formulations. We demonstrate that they stiffen for large magnitudes of the ratio of background error to observational error variance, and that using the integration scheme proposed in both formulations can lead to failure. A numerical integration scheme that is both stable and computationally inexpensive is proposed. We develop transform-based alternatives for these Bucy-type approaches so that the integrations are computed in ensemble space, where the variables are weights (of dimension equal to the ensemble size) rather than model variables. Finally, the performance of our ensemble transform Kalman–Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium-complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
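For orientation only, the sketch below integrates one commonly cited form of such a pseudo-time analysis ODE for the full ensemble, dx_i/ds = -1/2 P H^T R^{-1} (H x_i + H x̄ - 2y), with plain explicit Euler substeps. It is a baseline illustration of the kind of ODE being discussed, not the stable scheme or the transform-space variants proposed in the paper, and all names are illustrative.

```python
import numpy as np

def enkbf_analysis(X, H, R, y, n_steps=50):
    """Pseudo-time ensemble Kalman-Bucy-type analysis step.

    X : (n, m) ensemble of m model states of dimension n
    H : (p, n) observation operator
    R : (p, p) observation error covariance
    y : (p,)   observation vector
    Integrates dx_i/ds = -0.5 * P H^T R^{-1} (H x_i + H xbar - 2 y)
    from s = 0 to s = 1 with explicit Euler substeps.
    """
    X = X.astype(float).copy()
    m = X.shape[1]
    Rinv = np.linalg.inv(R)
    ds = 1.0 / n_steps
    for _ in range(n_steps):
        xbar = X.mean(axis=1, keepdims=True)
        A = X - xbar                              # ensemble perturbations
        P = A @ A.T / (m - 1)                     # sample covariance
        innov = H @ X + H @ xbar - 2.0 * y[:, None]
        X = X - 0.5 * ds * (P @ H.T @ Rinv @ innov)
    return X

# Illustrative 3-variable example with 10 members and 2 observations
rng = np.random.default_rng(1)
X0 = rng.standard_normal((3, 10))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
R = 0.5 * np.eye(2)
y = np.array([0.3, -0.2])
Xa = enkbf_analysis(X0, H, R, y)
```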

Relevance:

80.00%

Publisher:

Abstract:

We are looking into variants of a dominating set problem in social networks. While randomised algorithms for solving the minimum weighted dominating set problem and the minimum alpha and alpha-rate domination problem on simple graphs are already present in the literature, we propose here a randomised algorithm for the minimum weighted alpha-rate dominating set problem which is, to the best of our knowledge, the first such algorithm. A theoretical approximation bound based on a simple randomised rounding technique is given. The algorithm is implemented in Python and applied to a UK Twitter mentions network, using a measure of individuals' influence (Klout) as weights. We argue that the weights of vertices can be interpreted as the costs of getting those individuals on board for a campaign or a behaviour change intervention. The minimum weighted alpha-rate dominating set problem can therefore be seen as finding a set that minimises the total cost while ensuring that each individual in the network has at least an alpha fraction of its neighbours in the chosen set. We also test our algorithm on generated graphs with several thousand vertices and edges. Our results on this real-life Twitter network and on generated graphs show that the implementation is reasonably efficient and can thus be used for real-life applications when creating social-network-based interventions, designing social media campaigns and potentially improving users' social media experience.
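The paper's randomised rounding technique and its approximation bound are not reproduced here. As a purely illustrative baseline in the same spirit, the sketch below checks the alpha-rate constraint and runs a simple random-selection-plus-greedy-repair heuristic for the weighted variant; the function names, the sampling rule and the toy graph are mine.

```python
import random

def is_alpha_rate_dominating(graph, S, alpha):
    """graph: dict vertex -> set of neighbours; S: candidate set.
    Every vertex must have at least an alpha fraction of its neighbours
    inside S (vertices with no neighbours are ignored here)."""
    for v, nbrs in graph.items():
        if nbrs and len(nbrs & S) < alpha * len(nbrs):
            return False
    return True

def weighted_alpha_rate_dominating_set(graph, weights, alpha,
                                       trials=200, seed=0):
    """Heuristic: repeatedly draw a random candidate set, repair it
    greedily until feasible, and keep the cheapest feasible set found."""
    rng = random.Random(seed)
    best, best_cost = set(graph), sum(weights.values())   # trivial feasible set
    for _ in range(trials):
        S = {v for v in graph if rng.random() < alpha}
        # Greedy repair: add the cheapest missing neighbour of any vertex
        # whose alpha-rate constraint is still violated.
        while not is_alpha_rate_dominating(graph, S, alpha):
            for v, nbrs in graph.items():
                if nbrs and len(nbrs & S) < alpha * len(nbrs):
                    S.add(min(nbrs - S, key=lambda u: weights[u]))
                    break
        cost = sum(weights[v] for v in S)
        if cost < best_cost:
            best, best_cost = S, cost
    return best, best_cost

# Tiny illustrative graph (adjacency as sets) with vertex costs
graph = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
weights = {"a": 1.0, "b": 3.0, "c": 2.0, "d": 1.5}
print(weighted_alpha_rate_dominating_set(graph, weights, alpha=0.5))
```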

Relevance:

80.00%

Publisher:

Abstract:

This paper presents groundwater favorability mapping on a fractured terrain in the eastern portion of Sao Paulo State, Brazil. Remote sensing, airborne geophysical data, photogeologic interpretation, geologic and geomorphologic maps and geographic information system (GIS) techniques have been used. The results of cross-tabulation between these maps and well yield data allowed the definition of groundwater prospective parameters in a fractured-bedrock aquifer. These prospective parameters are the base for the favorability analysis, whose principle is based on the knowledge-driven method. A multicriteria analysis (weighted linear combination) was carried out to give a groundwater favorability map, because the prospective parameters have different weights of importance and each parameter has different classes. The groundwater favorability map was tested by cross-tabulation with new well yield data and spring occurrence. The wells with the highest values of productivity, as well as all the spring occurrences, are situated in the excellent and good favorability mapped areas. This shows good coherence between the prospective parameters and the well yield, and the importance of GIS techniques for the definition of target areas for detailed study and well location. (c) 2008 Elsevier B.V. All rights reserved.
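A weighted linear combination reduces, in essence, to a weighted average of reclassified map layers. The sketch below shows that computation on toy rasters; the layer names, weights and class scores are purely illustrative and are not values from the study.

```python
import numpy as np

def weighted_linear_combination(layers, weights):
    """layers: dict name -> 2-D array of class scores already rescaled to a
    common suitability scale (e.g. 0-10); weights: dict name -> importance
    weight. Returns the favorability surface as the weighted average."""
    names = list(layers)
    w = np.array([weights[n] for n in names], dtype=float)
    w = w / w.sum()                         # normalise weights to sum to 1
    stack = np.stack([layers[n] for n in names])
    return np.tensordot(w, stack, axes=1)   # weighted sum over layers

# Illustrative 3x3 "rasters" for three hypothetical prospective parameters
layers = {
    "lineament_density": np.array([[8, 6, 4], [7, 5, 3], [6, 4, 2]], float),
    "geology":           np.array([[5, 5, 7], [6, 7, 8], [7, 8, 9]], float),
    "geomorphology":     np.array([[3, 4, 5], [4, 5, 6], [5, 6, 7]], float),
}
weights = {"lineament_density": 0.5, "geology": 0.3, "geomorphology": 0.2}
favorability = weighted_linear_combination(layers, weights)
```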

Relevance:

80.00%

Publisher:

Abstract:

Charter tourism as a product: a sociological analysis of agency in the experience economy. In recent years, charter tourism as a convenient and cost-effective mode of travelling has been declining. This may be related to dominant societal ideals promoting self-actualization, individual exploration and spontaneity. However, not much is known about the development of ideals and practices among charter tourists. Using ethnographic fieldwork methodology, including pre-departure and post-travel telephone interviews, this exploratory study investigated a group of Danish charter tourists travelling to Gran Canaria. Results show that the charter tourists were active in navigating between a series of central dilemmas posed by the consumption of a mass product in an individualized societal context, thereby shaping their experiences to form a desirable tourist product.

Relevance:

80.00%

Publisher:

Abstract:

One of the main problems with Artificial Neural Networks (ANNs) is that their results are not intuitively clear. For example, commonly used hidden neurons with a sigmoid activation function can approximate any continuous function, including linear functions, but the coefficients (weights) of this approximation are rather meaningless. To address this problem, the current paper presents a novel kind of neural network that uses transfer functions of various complexities, in contrast to the mono-transfer functions used in sigmoid and hyperbolic tangent networks. The presence of transfer functions of various complexities in a Mixed Transfer Functions Artificial Neural Network (MTFANN) allows easy conversion of the full model into a user-friendly equation format (similar to that of linear regression) without any pruning or simplification of the model. At the same time, MTFANN maintains a generalization ability similar to that of mono-transfer function networks in a global optimization context. The performance and knowledge extraction of MTFANN were evaluated on a realistic simulation of the Puma 560 robot arm and compared to sigmoid, hyperbolic tangent, linear and sinusoidal networks.
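To make the idea concrete, the sketch below builds a toy network whose hidden units use transfer functions of different complexities and prints the fitted model as an explicit equation. The particular transfer functions, weights and helper names are illustrative assumptions, not the MTFANN specification.

```python
import numpy as np

# Hidden units draw on transfer functions of different complexities, so
# the fitted model can be written out directly as a readable equation.
TRANSFERS = {
    "linear":    lambda z: z,
    "quadratic": lambda z: z**2,
    "sine":      lambda z: np.sin(z),
    "tanh":      lambda z: np.tanh(z),
}

def mtf_forward(x, hidden_spec, output_weights, output_bias=0.0):
    """x: (d,) input; hidden_spec: list of (transfer_name, w, b) with
    w a (d,) weight vector and b a scalar; the output layer is linear."""
    h = np.array([TRANSFERS[name](w @ x + b) for name, w, b in hidden_spec])
    return output_weights @ h + output_bias

def mtf_as_equation(hidden_spec, output_weights, output_bias=0.0):
    """Render the fitted model as a human-readable expression."""
    terms = []
    for (name, w, b), c in zip(hidden_spec, output_weights):
        lin = " + ".join(f"{wi:.3g}*x{i}" for i, wi in enumerate(w))
        terms.append(f"{c:.3g}*{name}({lin} + {b:.3g})")
    return "y = " + " + ".join(terms) + f" + {output_bias:.3g}"

# Illustrative two-input model with three mixed hidden units
spec = [("linear", np.array([0.8, -0.2]), 0.1),
        ("sine",   np.array([1.5,  0.0]), 0.0),
        ("tanh",   np.array([0.3,  0.7]), -0.4)]
out_w = np.array([0.5, 1.2, -0.9])
print(mtf_as_equation(spec, out_w, 0.05))
print(mtf_forward(np.array([0.2, -0.1]), spec, out_w, 0.05))
```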

Relevance:

80.00%

Publisher:

Abstract:

Theoretical advances in modelling aggregation of information produced a wide range of aggregation operators, applicable to almost every practical problem. The most important classes of aggregation operators include triangular norms, uninorms, generalised means and OWA operators.
With such a variety, an important practical problem has emerged: how to fit the parameters/weights of these families of aggregation operators to observed data? How to estimate quantitatively whether a given class of operators is suitable as a model in a given practical setting? Aggregation operators are rather special classes of functions, and thus they require specialised regression techniques, which would enforce important theoretical properties, like commutativity or associativity. My presentation will address this issue in detail, and will discuss various regression methods applicable specifically to t-norms, uninorms and generalised means. I will also demonstrate software implementing these regression techniques, which would allow practitioners to paste their data and obtain optimal parameters of the chosen family of operators.
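As one concrete instance of such a fitting problem, the sketch below fits the weights of an OWA operator to observed input-output pairs by constrained least squares (non-negative weights summing to one). The specialised regression methods for t-norms, uninorms and generalised means discussed in the presentation are not reproduced, and the scipy-based formulation is an illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize

def fit_owa_weights(X, y):
    """Fit OWA weights w (w >= 0, sum w = 1) to data by least squares.
    OWA(x) = sum_i w_i * x_(i), with x_(i) the inputs sorted descending."""
    Xs = -np.sort(-X, axis=1)                 # sort each row descending
    n = X.shape[1]

    def sse(w):
        return np.sum((Xs @ w - y) ** 2)

    w0 = np.full(n, 1.0 / n)
    res = minimize(sse, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Illustrative data generated by a "soft maximum" style OWA operator
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 3))
true_w = np.array([0.6, 0.3, 0.1])
y = -np.sort(-X, axis=1) @ true_w + 0.01 * rng.standard_normal(200)
print(fit_owa_weights(X, y))           # should be close to true_w
```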

Relevance:

80.00%

Publisher:

Abstract:

Background: The daily energy imbalance gap associated with the current population weight gain in the obesity epidemic is relatively small. However, the substantially higher body weights of populations that have accumulated over several years are associated with a substantially higher total energy expenditure (TEE) and total energy intake (TEI), or energy flux (EnFlux = TEE = TEI).
Objective: The objective was to develop an equation relating EnFlux to body weight in adults for estimating the rise in EnFlux associated with the obesity epidemic.
Design: Multicenter, cross-sectional data for TEE from doubly labeled water studies in 1399 adults aged 5.9 ± 18.8 y (mean ± SD) were analyzed in linear regression models with natural log (ln) weight as the dependent variable and ln EnFlux as the independent variable, adjusted for height, age, and sex. These equations were compared with those for children and applied to population trends in weight gain.
Results: ln EnFlux was positively related to ln weight (β = 0.71; 95% CI: 0.66, 0.76; R2 = 0.52), adjusted for height, age, and sex. This slope was significantly steeper than that previously described for children (β = 0.45; 95% CI: 0.38, 0.51).
Conclusions: This relation suggests that substantial increases in TEI have driven the increases in body weight over the past 3 decades. Adults have a higher proportional weight gain than children for the same proportional increase in energy intake, mostly because of a higher fat content of the weight being gained. The obesity epidemic will not be reversed without large reductions in energy intake, increases in physical activity, or both.
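Read literally, the stated regression corresponds to a log-log (allometric) model of roughly the following form; this is my restatement of the described design, not an equation quoted from the paper:

\[
\ln(\text{weight}) = \alpha + \beta \,\ln(\text{EnFlux}) + \gamma_{1}\,\text{height} + \gamma_{2}\,\text{age} + \gamma_{3}\,\text{sex} + \varepsilon, \qquad \beta_{\text{adults}} \approx 0.71,\ \beta_{\text{children}} \approx 0.45 .
\]

Holding the covariates fixed, a proportional change in energy flux then maps to a proportional change in weight via \(\text{weight}_{1}/\text{weight}_{0} \approx (\text{EnFlux}_{1}/\text{EnFlux}_{0})^{\beta}\), which is why the steeper adult slope implies a larger proportional weight gain for the same proportional rise in energy intake.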

Relevance:

80.00%

Publisher:

Abstract:

Traditional Failure Mode and Effect Analysis (FMEA) adopts the Risk Priority Number (RPN) ranking model to evaluate failure risks, to rank failures, as well as to prioritize actions. Although this approach is simple, it suffers from several shortcomings. In this paper, we investigate a number of fuzzy inference techniques for determining the RPN scores, in an attempt to overcome the weaknesses associated with the traditional RPN model. The main objective is to examine the possibility of using fuzzy rule interpolation and reduction techniques to design new fuzzy RPN models. The performance of the fuzzy RPN models is evaluated using a real-world case study pertaining to the test handler process in a semiconductor manufacturing plant. The FMEA procedure for the test handler is performed, and a fuzzy RPN model is developed. In addition, an improvement to the fuzzy RPN model is proposed by refining the weights of the fuzzy production rules, resulting in a new weighted fuzzy RPN model. The ability of the weighted fuzzy RPN model to evaluate failure risk with a reduced rule base is also demonstrated.
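For contrast, the sketch below computes the traditional RPN alongside a toy weighted, Sugeno-style rule aggregation in which each rule's firing strength is scaled by a rule weight before aggregation. The membership functions, rules and weights are invented for illustration and are not the paper's fuzzy RPN models.

```python
def traditional_rpn(severity, occurrence, detect):
    """Classic FMEA risk priority number on 1-10 scales."""
    return severity * occurrence * detect

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

LOW, MED, HIGH = (0, 1, 5), (3, 5, 7), (5, 10, 11)

# (severity set, occurrence set, detect set, consequent score, rule weight)
RULES = [
    (HIGH, HIGH, HIGH, 9.0, 1.0),
    (HIGH, MED,  MED,  7.0, 0.8),
    (MED,  MED,  MED,  5.0, 0.6),
    (LOW,  LOW,  LOW,  1.0, 0.4),
]

def weighted_fuzzy_rpn(severity, occurrence, detect):
    """Weighted-average aggregation of rule consequents, with firing
    strengths scaled by per-rule weights."""
    num = den = 0.0
    for s_set, o_set, d_set, score, weight in RULES:
        firing = min(tri(severity, *s_set), tri(occurrence, *o_set),
                     tri(detect, *d_set)) * weight
        num += firing * score
        den += firing
    return num / den if den > 0 else 0.0

print(traditional_rpn(8, 6, 5))       # 240
print(weighted_fuzzy_rpn(8, 6, 5))    # weighted-rule score on a 1-10 scale
```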

Relevance:

80.00%

Publisher:

Abstract:

This paper describes a novel adaptive network, which agglomerates a procedure based on the fuzzy min-max clustering method, a supervised ART (Adaptive Resonance Theory) neural network, and a constructive conflict-resolving algorithm, for pattern classification. The proposed classifier is a fusion of the ordering algorithm, Fuzzy ARTMAP (FAM) and the Dynamic Decay Adjustment (DDA) algorithm. The network, called Ordered FAMDDA, inherits the benefits of the trio, viz. an ability to identify a fixed order of training pattern presentation for good generalisation; a stable and incrementally learning architecture; and dynamic width adjustment of the weights of hidden nodes of conflicting classes. Classification performance of the Ordered FAMDDA is assessed using two benchmark datasets. The performances are analysed and compared with those from FAM and Ordered FAM. The results indicate that the Ordered FAMDDA classifier performs at least as well as the mentioned networks. The proposed Ordered FAMDDA network is then applied to a condition monitoring problem in a power generation station. The process under scrutiny is the Circulating Water (CW) system, with prime attention to condition monitoring of the heat transfer efficiency of the condensers. The results and their implications are analysed and discussed.
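The DDA ingredient can be illustrated in isolation: the sketch below shrinks the width (radius) of any hidden prototype of a conflicting class so that a newly presented pattern falls just outside it. This shows only the conflict-driven width adjustment under assumed Euclidean prototypes; the FAM dynamics and the ordering algorithm are not reproduced, and all names are illustrative.

```python
import numpy as np

class Prototype:
    """A hidden node defined by a centre, a class label and a width."""
    def __init__(self, centre, label, radius=np.inf):
        self.centre = np.asarray(centre, dtype=float)
        self.label = label
        self.radius = radius     # "width" of the hidden node

def dda_shrink(prototypes, x, label, eps=1e-6):
    """Dynamic-decay-adjustment-style conflict resolution: any prototype
    of a *different* class whose radius covers the new pattern x has its
    radius shrunk so that x lies just outside it."""
    x = np.asarray(x, dtype=float)
    for p in prototypes:
        d = np.linalg.norm(x - p.centre)
        if p.label != label and d < p.radius:
            p.radius = max(d - eps, eps)

# Illustrative use: a class-0 prototype shrinks when a class-1 pattern
# falls inside its current width.
protos = [Prototype([0.0, 0.0], label=0, radius=2.0)]
dda_shrink(protos, [0.5, 0.5], label=1)
print(protos[0].radius)    # ~0.707, just under the distance to the pattern
```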

Relevance:

80.00%

Publisher:

Abstract:

In this paper, a new robust single-hidden-layer feedforward network (SLFN)-based pattern classifier is developed. It is shown that the frequency spectra of the desired feature vectors can be specified in terms of the discrete Fourier transform (DFT) technique. The input weights of the SLFN are then optimized with regularization theory such that the error between the frequency components of the desired feature vectors and those of the feature vectors extracted from the outputs of the hidden layer is minimized. For linearly separable input patterns, the hidden layer of the SLFN plays the role of removing the effects of the disturbance from the noisy input data and providing linearly separable feature vectors for accurate classification. For nonlinearly separable input patterns, however, the hidden layer is capable of assigning the DFTs of all feature vectors to the desired positions in the frequency domain such that the separability of all nonlinearly separable patterns is maximized. In addition, the output weights of the SLFN are optimally designed so that both the empirical and the structural risks are well balanced and minimized in a noisy environment. Two simulation examples are presented to show the excellent performance and effectiveness of the proposed classification scheme.
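The final design step, balancing empirical and structural risk in the output weights, is commonly realised as a ridge-regularised least-squares solve on the hidden-layer outputs. The sketch below shows that reading under assumed fixed random input weights; the DFT-based optimisation of the input weights described above is not reproduced, and all names and values are illustrative.

```python
import numpy as np

def regularised_output_weights(H, T, reg=1e-2):
    """H: (N, m) hidden-layer outputs for N samples; T: (N, k) targets.
    Minimises ||H W - T||^2 + reg * ||W||^2, balancing empirical and
    structural risk, via the closed-form ridge solution."""
    m = H.shape[1]
    return np.linalg.solve(H.T @ H + reg * np.eye(m), H.T @ T)

# Illustrative use: random hidden features for a noisy two-class problem
rng = np.random.default_rng(3)
X = rng.standard_normal((300, 5))
labels = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
T = np.eye(2)[labels]                          # one-hot targets
Win = rng.standard_normal((5, 20))             # fixed random input weights
H = np.tanh(X @ Win)                           # hidden-layer outputs
W = regularised_output_weights(H, T, reg=0.1)
pred = (H @ W).argmax(axis=1)
print((pred == labels).mean())                 # training accuracy
```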

Relevance:

80.00%

Publisher:

Abstract:

Taking into consideration the uncertainty in the edge weights of networks, finding shortest paths in such fuzzy weighted networks has been widely studied in various practical applications. In this paper, an amoeboid algorithm is proposed, combining fuzzy set theory with a path-finding model inspired by an amoeboid organism, Physarum polycephalum. With the help of fuzzy numbers, uncertainty is well represented and handled in our algorithm. Moreover, the biological intelligence of Physarum polycephalum has been incorporated into the algorithm. A numerical example on a transportation network is presented to demonstrate the efficiency and flexibility of the proposed amoeboid algorithm.
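The amoeboid (Physarum-based) path-finding model with full fuzzy arithmetic is not reproduced here. As a simple baseline for the same problem, the sketch below defuzzifies triangular fuzzy edge weights with the graded-mean formula and runs Dijkstra's algorithm on the resulting crisp network; the edge values are illustrative.

```python
import heapq

def graded_mean(tfn):
    """Graded-mean defuzzification of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + 4 * b + c) / 6.0

def fuzzy_shortest_path(edges, source, target):
    """edges: dict (u, v) -> triangular fuzzy weight (a, b, c), undirected.
    Defuzzify each weight, then run Dijkstra."""
    graph = {}
    for (u, v), tfn in edges.items():
        w = graded_mean(tfn)
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[target]

# Small illustrative transportation network with uncertain travel times
edges = {("A", "B"): (2, 3, 4), ("B", "C"): (1, 2, 6),
         ("A", "C"): (5, 7, 9), ("C", "D"): (1, 1, 2), ("B", "D"): (4, 5, 6)}
print(fuzzy_shortest_path(edges, "A", "D"))
```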

Relevance:

80.00%

Publisher:

Abstract:

The Empirical Mode Decomposition (EMD) method is commonly used for solving the problem of single-channel blind source separation (SCBSS) in signal processing. However, the mixing vector of SCBSS, on which the EMD method is based, has not yet been effectively constructed. The mixing vector reflects the weights of the original signal sources that form the single-channel blind signal source. In this paper, we propose a novel method to construct a mixing vector for a single-channel blind signal source that approximates the actual mixing vector in terms of keeping the same ratios between signal weights. The constructed mixing vector can be used to improve signal separation. Our method incorporates an adaptive filter, the least squares method, the EMD method and signal source samples to construct the mixing vector. Experimental tests using audio signal evaluations were conducted, and the results indicate that our method can improve the similarity value of the source energy ratios from 0.2644 to 0.8366. This kind of improvement is very important in weak signal detection.
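The least-squares ingredient of such a construction can be sketched on its own: given sample segments of the original sources and the single-channel mixture, a mixing vector whose entries preserve the weight ratios can be estimated by ordinary least squares. The adaptive-filter and EMD stages described in the paper are not reproduced, and the toy signals below are illustrative.

```python
import numpy as np

def estimate_mixing_vector(sources, mixture):
    """sources: (k, n) sample waveforms of the k original sources;
    mixture: (n,) single-channel observation. Least-squares estimate of
    the mixing vector a minimising ||sources.T @ a - mixture||."""
    a, *_ = np.linalg.lstsq(sources.T, mixture, rcond=None)
    return a

# Illustrative test: two toy sources mixed with known weights
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 4000)
sources = np.vstack([np.sin(2 * np.pi * 50 * t),
                     np.sign(np.sin(2 * np.pi * 7 * t))])
true_a = np.array([0.7, 0.3])
mixture = true_a @ sources + 0.01 * rng.standard_normal(t.size)
a_hat = estimate_mixing_vector(sources, mixture)
print(a_hat, a_hat[0] / a_hat[1])   # weight ratio close to 0.7 / 0.3
```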