935 results for ElGamal, CZK, Multiple discrete logarithm assumption, Extended linear algebra


Relevance:

30.00%

Publisher:

Abstract:

This paper extends the minimax disparity approach to determining ordered weighted averaging (OWA) operator weights based on linear programming. It introduces the minimax disparity between any distinct pair of weights and uses linear programming duality to prove the feasibility of the extended OWA operator weights model. The paper closes with an open problem. © 2006 Elsevier Ltd. All rights reserved.
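The extended minimax disparity idea can be sketched as a small linear program: minimise the largest absolute difference between any distinct pair of weights, subject to the usual orness and normalisation constraints. The function name and the encoding via `scipy.optimize.linprog` below are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_disparity_owa(n, orness):
    """Extended minimax disparity OWA model (illustrative sketch):
    minimise delta = max_{i<j} |w_i - w_j| subject to
    sum_i ((n-i)/(n-1)) w_i = orness, sum_i w_i = 1, w_i >= 0."""
    # Decision vector: [w_1, ..., w_n, delta]; minimise delta.
    c = np.zeros(n + 1)
    c[-1] = 1.0

    # Equalities: orness constraint and normalisation of the weights.
    A_eq = np.zeros((2, n + 1))
    A_eq[0, :n] = [(n - i - 1) / (n - 1) for i in range(n)]
    A_eq[1, :n] = 1.0
    b_eq = [orness, 1.0]

    # Inequalities: w_i - w_j <= delta and w_j - w_i <= delta for i < j.
    rows = []
    for i in range(n):
        for j in range(i + 1, n):
            for sign in (1.0, -1.0):
                row = np.zeros(n + 1)
                row[i], row[j], row[-1] = sign, -sign, -1.0
                rows.append(row)
    A_ub = np.vstack(rows)
    b_ub = np.zeros(len(rows))

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n]
```

For an orness of 0.5 the optimum is the uniform weight vector, since equal weights give zero disparity while satisfying both constraints.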

Relevance:

30.00%

Publisher:

Abstract:

Driven by the assumption that multidisciplinarity contributes positively to team outcomes, teams are often deliberately staffed so that they comprise multiple disciplines. However, the diversity literature suggests that multidisciplinarity may not always benefit a team. This study departs from the notion of a linear, positive effect of multidisciplinarity and tests its contingency on the quality of team processes. It was assumed that multidisciplinarity contributes to team outcomes only if the quality of team processes is high. This hypothesis was tested in two independent samples of health care workers (N = 66 and N = 95 teams), using team innovation as the outcome variable. Results support the hypothesis for the quality of innovation, rather than for the number of innovations introduced by the teams.

Relevance:

30.00%

Publisher:

Abstract:

For a query submitted to multiple search engines, finding relevant results is an important task. This paper formulates the aggregation and ranking of multiple search engines' results as a minimax linear programming model. Besides the novel application, this study detects the most relevant information among a returned set of ranked lists of documents retrieved by distinct search engines. Furthermore, two numerical examples are used to illustrate the usefulness of the proposed approach.
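The abstract does not give the model's details, so the following is only a toy minimax formulation of the aggregation step, assuming each engine assigns a normalised relevance score to each document: find consensus scores that minimise the largest deviation from any engine's score. The names and the LP encoding via `scipy.optimize.linprog` are illustrative, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_aggregate(scores):
    """Toy minimax aggregation sketch. `scores` maps each document to the
    list of normalised relevance scores it received from distinct engines.
    Solve: minimise t subject to |x_d - s| <= t for every
    (document d, engine score s) pair; x_d is the consensus score."""
    docs = list(scores)
    m = len(docs)
    # Decision vector: [x_1, ..., x_m, t]; minimise t.
    c = np.zeros(m + 1)
    c[-1] = 1.0
    rows, rhs = [], []
    for d, doc in enumerate(docs):
        for s in scores[doc]:
            # x_d - t <= s  and  -x_d - t <= -s  encode |x_d - s| <= t.
            for sign in (1.0, -1.0):
                row = np.zeros(m + 1)
                row[d], row[-1] = sign, -1.0
                rows.append(row)
                rhs.append(sign * s)
    res = linprog(c, A_ub=np.vstack(rows), b_ub=rhs,
                  bounds=[(None, None)] * m + [(0, None)])
    return dict(zip(docs, res.x[:m])), res.x[-1]
```

Documents are then ranked by their consensus scores; the optimal t reports the worst disagreement any engine has with the aggregate.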

Relevance:

30.00%

Publisher:

Abstract:

Objective: This study aimed to explore methods of assessing interactions between neuronal sources using MEG beamformers. However, beamformer methodology is based on the assumption of no linear long-term source interdependencies [Van Veen BD, van Drongelen W, Yuchtman M, Suzuki A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 1997;44:867-80; Robinson SE, Vrba J. Functional neuroimaging by synthetic aperture magnetometry (SAM). In: Recent advances in Biomagnetism. Sendai: Tohoku University Press; 1999. p. 302-5]. Although such long-term correlations are not efficient and should not be anticipated in a healthy brain [Friston KJ. The labile brain. I. Neuronal transients and nonlinear coupling. Philos Trans R Soc Lond B Biol Sci 2000;355:215-36], transient correlations seem to underlie functional cortical coordination [Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron 1999;49-65; Rodriguez E, George N, Lachaux J, Martinerie J, Renault B, Varela F. Perception's shadow: long-distance synchronization of human brain activity. Nature 1999;397:430-3; Bressler SL, Kelso J. Cortical coordination dynamics and cognition. Trends Cogn Sci 2001;5:26-36]. Methods: Two periodic sources were simulated and the effects of transient source correlation on the spatial and temporal performance of the MEG beamformer were examined. Subsequently, the interdependencies of the reconstructed sources were investigated using coherence and phase synchronization analysis based on Mutual Information. Finally, two interacting nonlinear systems served as neuronal sources and their phase interdependencies were studied under realistic measurement conditions. Results: Both the spatial and the temporal beamformer source reconstructions were accurate as long as the transient source correlation did not exceed 30-40 percent of the duration of beamformer analysis.
In addition, the interdependencies of periodic sources were preserved by the beamformer and phase synchronization of interacting nonlinear sources could be detected. Conclusions: MEG beamformer methods in conjunction with analysis of source interdependencies could provide accurate spatial and temporal descriptions of interactions between linear and nonlinear neuronal sources. Significance: The proposed methods can be used for the study of interactions between neuronal sources. © 2005 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
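The linearly constrained minimum variance filter cited above (Van Veen et al., 1997) has a compact closed form: for a leadfield vector l and data covariance C, the weights minimise output power subject to unit gain at the source location. A minimal numpy sketch for a single source orientation (variable names are mine, not the study's code):

```python
import numpy as np

def lcmv_weights(cov, leadfield):
    """LCMV spatial filter sketch: minimise output power w' C w subject
    to the unit-gain constraint w' l = 1 at the source location.
    Closed form: w = C^{-1} l / (l' C^{-1} l)."""
    ci_l = np.linalg.solve(cov, leadfield)   # C^{-1} l
    return ci_l / (leadfield @ ci_l)
```

Applying w to the sensor time series yields the reconstructed source waveform, whose interdependencies can then be examined with coherence or phase-synchronisation measures as in the study; the unit-gain constraint w'l = 1 holds by construction.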

Relevance:

30.00%

Publisher:

Abstract:

Gestalt grouping rules imply a process or mechanism for grouping together local features of an object into a perceptual whole. Several psychophysical experiments have been interpreted as evidence for constrained interactions between nearby spatial filter elements and this has led to the hypothesis that element linking might be mediated by these interactions. A common tacit assumption is that these interactions result in response modulation which disturbs a local contrast code. We addressed this possibility by performing contrast discrimination experiments using two-dimensional arrays of multiple Gabor patches arranged either (i) vertically, (ii) in circles (coherent conditions), or (iii) randomly (incoherent condition), as well as for a single Gabor patch. In each condition, contrast increments were applied to either the entire test stimulus (experiment 1) or a single patch whose position was cued (experiment 2). In experiment 3, the texture stimuli were reduced to a single contour by displaying only the central vertical strip. Performance was better for the multiple-patch conditions than for the single-patch condition, but whether the multiple-patch stimulus was coherent or not had no systematic effect on the results in any of the experiments. We conclude that constrained local interactions do not interfere with a local contrast code for our suprathreshold stimuli, suggesting that, in general, this is not the way in which element linking is achieved. The possibility that interactions are involved in enhancing the detectability of contour elements at threshold remains unchallenged by our experiments.

Relevance:

30.00%

Publisher:

Abstract:

An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing, where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach, it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA, while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance. © 2007 The American Physical Society.

Relevance:

30.00%

Publisher:

Abstract:

Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured.
A very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulative variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) system as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each control scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction.
Multivariable MPC has shown more effective performance compared to other conventional techniques since it accounts for loops interaction, time delays, and input-output variables constraints.
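The relative gain array used above for loop pairing has a one-line definition that makes the analysis easy to reproduce. A minimal sketch of Bristol's RGA for a square steady-state gain matrix (the example matrix in the test is illustrative, not the column's identified gains):

```python
import numpy as np

def relative_gain_array(G):
    """Bristol's relative gain array for a square steady-state gain
    matrix G: RGA = G * (G^{-1})^T elementwise. Diagonal elements near 1
    indicate weakly interacting input-output pairings."""
    return G * np.linalg.inv(G).T
```

Each row and column of the RGA sums to 1; pairings whose relative gains are close to 1 (as for the rotor speed-raffinate and solvent flowrate-extract loops above) interact only weakly with the other loops.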

Relevance:

30.00%

Publisher:

Abstract:

In this work, the angular distributions for elastic and inelastic scattering of fast neutrons in fusion reactor materials have been studied. Lithium and lead are likely to be common components of fusion reactor wall configuration designs. The measurements were performed using an associated-particle time-of-flight technique. The 14 and 14.44 MeV neutrons were produced by the T(d,n)4He reaction, with deuterons being accelerated in a 150 keV SAMES type J accelerator at Aston and in the 3 MeV DYNAMITRON at the Joint Radiation Centre, Birmingham, respectively. The associated alpha-particles and fast neutrons were detected by means of a plastic scintillator mounted on a fast focused photomultiplier tube. The samples used were extended flat plates of thicknesses up to 0.9 mean free paths for Lithium and 1.562 mean free paths for Lead. The differential elastic scattering cross-sections were measured for 14 MeV neutrons for various thicknesses of Lithium and Lead in the angular range from zero to 90º. In addition, the angular distributions of elastically scattered 14.44 MeV neutrons from Lithium samples were studied in the same angular range. Inelastic scattering to the 4.63 MeV state in 7Li and to the 2.6 MeV and 4.1 MeV states in 208Pb has been measured. The results are compared to the ENDF/B-IV data files and to previous measurements. For the Lead samples, the differential neutron scattering cross-sections for discrete 3 MeV ranges and the angular distributions were measured. The increase in effective cross-section due to multiple scattering effects as the sample thickness increased was found to be predicted by the empirical relation … A good fit to the experimental data was obtained using the universal constant … The differential elastic scattering cross-section data for thin samples of Lithium and Lead were analysed in terms of optical model calculations using the computer code RAROMP.
Parameter search procedures produced good fits to the cross-sections. For the case of thick samples of Lithium and Lead, the measured angular distributions of the scattered neutrons were compared to the predictions of the continuous slowing-down model.

Relevance:

30.00%

Publisher:

Abstract:

The main theme of research of this project concerns the study of neural networks to control uncertain and non-linear control systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints while ensuring good performance. A great part of this project is devoted to opening the frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. To design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. To design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. To develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models for many non-linear systems for use in adaptive control. Properties of static neural networks, which enabled successful design of stable adaptive control in the state feedback case, are also identified. A survey of the existing results is presented, which puts them in a systematic framework showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO control. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.

Relevance:

30.00%

Publisher:

Abstract:

This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points with high dimensions. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods, looking at tables and graphical plots to analyse relationships between measured activities and the structure of compounds, have not been feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only several properties of compounds at a time. We believe that a latent trait model (LTM) with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model can deal with either continuous data or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can imagine the distribution of data from magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found.
The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down). The user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E- and M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters. It is very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
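The E-/M-step alternation described above is the generic EM recipe; fitting an actual LTM involves a latent-space grid and a non-linear mapping, so the sketch below only illustrates the alternation on the simplest possible case, a two-component 1D Gaussian mixture. Everything here (names, initialisation, iteration count) is illustrative, not the thesis's training code.

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """Minimal EM loop for a two-component 1D Gaussian mixture,
    illustrating the E-step / M-step alternation used (in a far more
    elaborate form) to train each latent trait model in the hierarchy."""
    mu = np.array([x.min(), x.max()])     # crude initialisation
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi
```

Each iteration provably does not decrease the likelihood, which is why the hierarchical construction can safely interleave user-driven refinement with further EM passes.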

Relevance:

30.00%

Publisher:

Abstract:

A distinct feature of several recent models of contrast masking is that detecting mechanisms are divisively inhibited by a broadly tuned ‘gain pool’ of narrow-band spatial pattern mechanisms. The contrast gain control provided by this ‘cross-channel’ architecture achieves contrast normalisation of early pattern mechanisms, which is important for keeping them within the non-saturating part of their biological operating characteristic. These models superseded earlier ‘within-channel’ models, which had supposed that masking arose from direct stimulation of the detecting mechanism by the mask. To reveal the extent of masking, I measured the levels produced with large ranges of pattern spatial relationships that have not been explored before. Substantial interactions between channels tuned to different orientations and spatial frequencies were found. Differences in the masking levels produced with single and multiple component mask patterns provided insights into the summation rules within the gain pool. A widely used cross-channel masking model was tested on these data and was found to perform poorly. The model was developed, and a version in which linear summation was allowed between all components within the gain pool, with the exception of the self-suppressing route, typically provided the best account of the data. Subsequently, an adaptation paradigm was used to probe the processes underlying pooled responses in masking. This delivered less insight into the pooling than the other studies, and areas were identified that require investigation for a new unifying model of masking and adaptation. In further experiments, levels of cross-channel masking were found to be greatly influenced by the spatio-temporal tuning of the channels involved. Old masking experiments and ideas relying on within-channel models were re-evaluated in terms of contemporary cross-channel models (e.g. estimations of channel bandwidths from orientation masking functions), and this led to different conclusions than those originally arrived at. The investigation of effects with spatio-temporally superimposed patterns is focussed upon throughout this work, though it is shown how these enquiries might be extended to investigate effects across spatial and temporal position.
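The divisive ‘gain pool’ architecture referred to above is usually written as a ratio of an excitatory term to a pooled inhibitory term. A minimal sketch follows; the exponents, saturation constant and pooling weights below are illustrative placeholders, not the fitted model of this work.

```python
import numpy as np

def gain_pool_response(c_test, c_masks, w, p=2.4, q=2.0, z=0.1):
    """Cross-channel contrast gain control sketch: the detecting
    mechanism's excitation (test contrast raised to p) is divisively
    inhibited by a weighted sum over the gain pool of mask contrasts
    raised to q, plus a saturation constant z. All parameter values
    here are illustrative, not fitted."""
    excitation = c_test ** p
    inhibition = z + np.dot(w, np.asarray(c_masks, dtype=float) ** q)
    return excitation / inhibition
```

Raising any mask component's contrast inflates the pool and lowers the response to the test, which is the signature of cross-channel masking; linear summation within the pool corresponds to the weighted sum inside the denominator.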

Relevance:

30.00%

Publisher:

Abstract:

In previous statnotes, the application of correlation and regression methods to the analysis of two variables (X,Y) was described. These methods can be used to determine whether there is a linear relationship between the two variables, whether the relationship is positive or negative, to test the degree of significance of the linear relationship, and to obtain an equation relating Y to X. This Statnote extends the methods of linear correlation and regression to situations where there are two or more X variables, i.e., 'multiple linear regression'.
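As a concrete illustration of the extension, a multiple linear regression with several X variables can be fitted by ordinary least squares in a few lines. This is a generic numpy sketch, not the Statnote's worked example.

```python
import numpy as np

def multiple_linear_regression(X, y):
    """Fit Y = b0 + b1*X1 + ... + bk*Xk by ordinary least squares.
    X is an (n_observations, k) array of predictor values; returns the
    coefficient vector [b0, b1, ..., bk] including the intercept."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

With two X variables this reduces to fitting a plane through the (X1, X2, Y) data; the single-X regression of the earlier statnotes is the k = 1 special case.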

Relevance:

30.00%

Publisher:

Abstract:

The traditional method of classifying neurodegenerative diseases is based on the original clinico-pathological concept supported by 'consensus' criteria and data from molecular pathological studies. This review discusses first, current problems in classification resulting from the coexistence of different classificatory schemes, the presence of disease heterogeneity and multiple pathologies, the use of 'signature' brain lesions in diagnosis, and the existence of pathological processes common to different diseases. Second, three models of neurodegenerative disease are proposed: (1) that distinct diseases exist ('discrete' model), (2) that relatively distinct diseases exist but exhibit overlapping features ('overlap' model), and (3) that distinct diseases do not exist and neurodegenerative disease is a 'continuum' in which there is continuous variation in clinical/pathological features from one case to another ('continuum' model). Third, to distinguish between models, the distribution of the most important molecular 'signature' lesions across the different diseases is reviewed. Such lesions often have poor 'fidelity', i.e., they are not unique to individual disorders but are distributed across many diseases consistent with the overlap or continuum models. Fourth, the question of whether the current classificatory system should be rejected is considered and three alternatives are proposed, viz., objective classification, classification for convenience (a 'dissection'), or analysis as a continuum.

Relevance:

30.00%

Publisher:

Abstract:

Data envelopment analysis (DEA), as introduced by Charnes, Cooper, and Rhodes (1978), is a linear programming technique that has been widely used to evaluate the relative efficiency of a set of homogeneous decision making units (DMUs). In many real applications, the input-output variables cannot be precisely measured. This is particularly important in assessing the efficiency of DMUs using DEA, since the efficiency scores of inefficient DMUs are very sensitive to possible data errors. Hence, several approaches have been proposed to deal with imprecise data. Perhaps the most popular fuzzy DEA model is based on the α-cut. One drawback of the α-cut approach is that it cannot include all information about uncertainty. This paper aims to introduce an alternative linear programming model that can include some uncertainty information from the intervals within the α-cut approach. We introduce the concept of the "local α-level" to develop a multi-objective linear programming model to measure the efficiency of DMUs under uncertainty. An example is given to illustrate the use of this method.
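For reference, the underlying crisp DEA model being extended is a small linear program solved once per DMU. The sketch below is the standard input-oriented CCR multiplier model with crisp data, not the paper's fuzzy multi-objective formulation; the names and the `scipy.optimize.linprog` encoding are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Crisp input-oriented CCR efficiency of DMU k (multiplier form):
    maximise u'y_k subject to v'x_k = 1 and u'y_j - v'x_j <= 0 for all
    DMUs j, with u, v >= 0. X is (n_dmus, n_inputs), Y is
    (n_dmus, n_outputs). The fuzzy α-cut extension replaces the crisp
    data with intervals."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector: [u (outputs), v (inputs)]; linprog minimises,
    # so negate the objective u'y_k.
    c = np.concatenate([-Y[k], np.zeros(m)])
    A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]   # v'x_k = 1
    A_ub = np.hstack([Y, -X])                             # u'y_j - v'x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun
```

A DMU on the efficient frontier scores 1.0; inefficient DMUs score below 1, and it is exactly these scores whose sensitivity to data errors motivates the fuzzy extension above.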

Relevance:

30.00%

Publisher:

Abstract:

An inverse problem is considered where the structure of multiple sound-soft planar obstacles is to be determined given the direction of the incoming acoustic field and knowledge of the corresponding total field on a curve located outside the obstacles. A local uniqueness result is given for this inverse problem suggesting that the reconstruction can be achieved by a single incident wave. A numerical procedure based on the concept of the topological derivative of an associated cost functional is used to produce images of the obstacles. No a priori assumption about the number of obstacles present is needed. Numerical results are included showing that accurate reconstructions can be obtained and that the proposed method is capable of finding both the shapes and the number of obstacles with one or a few incident waves.