964 results for Large detector-systems performance


Relevance:

100.00%

Publisher:

Abstract:

Vision-based place recognition involves recognising familiar places despite changes in environmental conditions or camera viewpoint (pose). Existing training-free methods exhibit excellent invariance to either of these challenges, but not both simultaneously. In this paper, we present a technique for condition-invariant place recognition across large lateral platform pose variance for vehicles or robots travelling along routes. Our approach combines sideways facing cameras with a new multi-scale image comparison technique that generates synthetic views for input into the condition-invariant Sequence Matching Across Route Traversals (SMART) algorithm. We evaluate the system’s performance on multi-lane roads in two different environments across day-night cycles. In the extreme case of day-night place recognition across the entire width of a four-lane-plus-median-strip highway, we demonstrate performance of up to 44% recall at 100% precision, where current state-of-the-art fails.
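The sequence-matching idea behind SMART can be sketched as follows: patch-normalize frames to suppress condition (e.g. day-night) changes, then slide a window of query frames along the reference traversal and pick the offset with the lowest sum of absolute differences. This is an illustrative simplification under assumed function names and a single-scale comparison; the paper's method additionally generates synthetic lateral views and compares at multiple scales.

```python
import numpy as np

def normalize(img):
    # Normalize an image to zero mean, unit variance to reduce
    # sensitivity to illumination/condition changes.
    return (img - img.mean()) / (img.std() + 1e-8)

def sequence_match(query_seq, ref_seq):
    """Return the reference start index whose aligned window best matches the query.

    query_seq: (L, H, W) array of L consecutive query frames.
    ref_seq:   (N, H, W) array of N reference frames, N >= L.
    """
    L = len(query_seq)
    q = np.stack([normalize(f) for f in query_seq])
    r = np.stack([normalize(f) for f in ref_seq])
    costs = []
    for start in range(len(ref_seq) - L + 1):
        # Sum of absolute differences over the aligned window of L frames.
        costs.append(np.abs(q - r[start:start + L]).sum())
    return int(np.argmin(costs))
```

Matching whole sequences rather than single frames is what gives this family of methods its robustness: a single ambiguous frame is outvoted by the rest of the window.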

Compensation systems are an essential tool for linking corporate goals such as customer orientation with individual and organisational performance. While some authors empirically demonstrate the positive effects of incorporating nonfinancial measures into the compensation system, companies have encountered problems after linking pay to customer satisfaction. We argue that the reasons for this can be attributed to the measurement of customer satisfaction as well as to the missing link between customer satisfaction and customer retention and profitability in these cases. Hence, there is a strong need for the development of a holistic reward and performance measurement model enabling an organisation to identify cause-and-effect relationships when linking rewards to nonfinancial performance measures. We present a conceptual framework of a success-chain-driven reward system that enables organisations to systematically derive a customer-oriented reward strategy. In the context of performance evaluation, we propose relying on integrated and multidimensional measurement methods.

Close to one half of the LHC events are expected to be due to elastic or inelastic diffractive scattering. Still, predictions based on extrapolations of experimental data at lower energies differ by large factors in estimating the relative rate of diffractive event categories at LHC energies. By identifying diffractive events, detailed studies of proton structure can be carried out. The combined forward physics objects (rapidity gaps, forward multiplicity and transverse energy flows) can be used to classify proton-proton collisions efficiently. Data samples recorded by the forward detectors, with a simple extension, will allow first estimates of the single diffractive (SD), double diffractive (DD), central diffractive (CD), and non-diffractive (ND) cross sections. The approach, which uses the measurement of inelastic activity in forward and central detector systems, is complementary to the detection and measurement of leading beam-like protons. In this investigation, three different multivariate analysis approaches are assessed for classifying forward physics processes at the LHC. It is shown that with gene expression programming, neural networks and support vector machines, diffraction can be efficiently identified within a large sample of simulated proton-proton scattering events. The event characteristics are visualized using the self-organizing map algorithm.
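As a rough illustration of how rapidity gaps separate event classes, the toy classifier below tags an event by its largest rapidity gap and by whether that gap touches an edge of the detector acceptance. The 3.0 gap cut and the |eta| = 5 acceptance are illustrative assumptions, not the paper's tuned multivariate selection.

```python
def classify_event(particle_etas, gap_threshold=3.0):
    """Toy rapidity-gap classifier (illustrative cuts, not detector-tuned).

    particle_etas: pseudorapidities of final-state particles in acceptance.
    Returns 'ND' (non-diffractive), 'SD' (single diffractive) or
    'DD' (double diffractive).
    """
    eta_max = 5.0  # assumed acceptance edge
    etas = sorted(particle_etas)
    # Include the acceptance edges so edge-touching gaps can be detected.
    points = [-eta_max] + etas + [eta_max]
    gaps = [(points[i + 1] - points[i], i) for i in range(len(points) - 1)]
    largest, idx = max(gaps)
    if largest < gap_threshold:
        return 'ND'
    # A gap adjacent to an edge suggests one intact proton side: SD.
    if idx == 0 or idx == len(points) - 2:
        return 'SD'
    # A central gap separating two dissociated systems: DD.
    return 'DD'
```

A real analysis would feed such gap and multiplicity features into the multivariate classifiers named in the abstract rather than applying hard cuts.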

Access control is an important component in the security of communication systems. While cryptography has rightfully been a significant component in the design of large-scale communication systems, its relation to access control, especially its complementarity, has not often been brought out in full. With the wide availability of SELinux, a comprehensive model of access control has become all the more important. In many large-scale systems, access control and trust management have become important components of the design. In survivable systems, models of group communication systems may have to be integrated with access control models. In this paper, we discuss the problem of integrating the various formalisms often encountered in large-scale communication systems, especially in connection with dynamic access control policies as well as trust management.

Low-complexity near-optimal detection of large-MIMO signals has attracted recent research attention. Recently, we proposed a local neighborhood search algorithm, namely the reactive tabu search (RTS) algorithm, as well as a factor-graph-based belief propagation (BP) algorithm for low-complexity large-MIMO detection. The motivation for the present work arises from the following two observations on these algorithms: i) although RTS achieves close-to-optimal performance for 4-QAM in large dimensions, significant performance improvement is still possible for higher-order QAM (e.g., 16-, 64-QAM); ii) BP also achieves near-optimal performance in large dimensions, but only for the {±1} alphabet. In this paper, we improve the large-MIMO detection performance for higher-order QAM signals by using a hybrid algorithm that employs both RTS and BP. In particular, motivated by the observation that when a detection error occurs at the RTS output, the least significant bits (LSBs) of the symbols are most often in error, we propose to first reconstruct and cancel the interference due to bits other than the LSBs at the RTS output and feed the interference-cancelled received signal to the BP algorithm to improve the reliability of the LSBs. The output of the BP is then fed back to RTS for the next iteration. Simulation results show that the proposed algorithm performs better than the RTS algorithm as well as the semi-definite relaxation (SDR) and Gaussian tree approximation (GTA) algorithms.
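The neighborhood-search idea underlying RTS can be sketched for a {+1, -1} alphabet: repeatedly move to the best non-tabu single-symbol flip of the current vector under the ML cost, keeping recently flipped positions tabu so the search can escape local minima. Function names are illustrative, and this omits the reactive tabu-length adaptation and higher-order QAM handling of the actual RTS algorithm.

```python
import numpy as np

def tabu_search_detect(H, y, n_iter=50, tabu_len=5, seed=0):
    """Minimal tabu-search MIMO detector over a {+1, -1} alphabet.

    Minimizes the ML cost ||y - H x||^2 by single-coordinate flips,
    forbidding moves on recently flipped coordinates.
    """
    rng = np.random.default_rng(seed)
    n = H.shape[1]
    x = rng.choice([-1.0, 1.0], size=n)          # random initial vector
    cost = lambda v: float(np.sum((y - H @ v) ** 2))
    best_x, best_cost = x.copy(), cost(x)
    tabu = []                                     # recently flipped indices
    for _ in range(n_iter):
        # Evaluate all non-tabu single-flip neighbors; take the best move,
        # even if it worsens the cost (this is what escapes local minima).
        candidates = []
        for i in range(n):
            if i in tabu:
                continue
            xn = x.copy()
            xn[i] = -xn[i]
            candidates.append((cost(xn), i, xn))
        if not candidates:
            break
        c, i, xn = min(candidates, key=lambda t: t[0])
        x = xn
        tabu = (tabu + [i])[-tabu_len:]           # fixed-length tabu list
        if c < best_cost:
            best_x, best_cost = x.copy(), c
    return best_x
```

The per-iteration work is a handful of matrix-vector products, which is the source of the low complexity relative to ML enumeration over all 2^n vectors.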

With ever-increasing demand for electric energy, additional generation and associated transmission facilities have to be planned and executed. In order to augment existing transmission facilities, proper planning and selective decisions must be made while keeping in mind the interests of the several parties who are directly or indirectly involved. The common trend is to plan optimal generation expansion over the planning period in order to meet the projected demand with minimum-cost capacity addition along with a pre-specified reliability margin. Generation expansion at certain locations needs new transmission network capacity, which involves serious problems such as obtaining right of way, environmental clearance, etc. In this study, an approach to the siting of additional generation facilities in a given system with minimal or no expansion of the transmission facility is attempted, using network connectivity and the concept of electrical distance for the projected load demand. The proposed approach is suitable for large interconnected systems with multiple utilities. A sample illustration on a real-life system is presented to show how this approach improves the overall performance of the operation of the system with specified performance parameters.
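One common way to make the electrical-distance notion concrete (an assumption here; the paper's exact metric may differ) is D_ij = Z_ii + Z_jj - 2 Z_ij, where Z is the inverse of the reduced bus admittance matrix with the slack bus removed:

```python
import numpy as np

def electrical_distances(Y_reduced):
    """Pairwise electrical distances from the reduced bus admittance matrix.

    Y_reduced: (n, n) admittance matrix with the slack bus removed so it
    is nonsingular. Uses the common definition D_ij = Z_ii + Z_jj - 2 Z_ij
    with Z = inv(Y_reduced).
    """
    Z = np.linalg.inv(Y_reduced)
    d = np.diag(Z)
    # Broadcast Z_ii + Z_jj and subtract both off-diagonal terms.
    return d[:, None] + d[None, :] - Z - Z.T
```

Buses with a small electrical distance are tightly coupled, so siting new generation electrically close to the projected load growth is what lets the approach avoid transmission expansion.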

In this paper, a nonlinear suboptimal detector is proposed whose performance in heavy-tailed noise is significantly better than that of the matched filter. The detector consists of a nonlinear wavelet denoising filter to enhance the signal-to-noise ratio, followed by a replica correlator. The performance of the detector is investigated through an asymptotic theoretical analysis as well as Monte Carlo simulations. The proposed detector offers the following advantages over the optimal (in the Neyman-Pearson sense) detector: it is easier to implement, and it is more robust with respect to errors in modeling the probability distribution of the noise.
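A minimal sketch of the detector structure, using a single-level Haar transform with soft thresholding as a stand-in for the paper's wavelet denoising filter (the one-level transform, threshold value, and function names are simplifying assumptions):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoiser (even-length input).

    Real designs use multi-level transforms and data-driven thresholds.
    """
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    # Soft threshold: shrink small (noise-dominated) detail coefficients.
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def detect(received, replica, thresh, gamma):
    """Denoise, replica-correlate, and compare to the threshold gamma."""
    stat = float(np.dot(haar_denoise(received, thresh), replica))
    return stat > gamma
```

The nonlinearity acts only on the detail coefficients, which is why impulsive (heavy-tailed) noise spikes are suppressed while a smooth replica-shaped signal passes largely intact to the correlator.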

Generalized spatial modulation (GSM) is a relatively new modulation scheme for multi-antenna wireless communications. It is quite attractive because of its ability to work with fewer transmit RF chains than traditional spatial multiplexing (the V-BLAST system). In this paper, we show that, by using an optimum combination of the number of transmit antennas (N_t) and the number of transmit RF chains (N_rf), GSM can achieve better throughput and/or bit error rate (BER) than spatial multiplexing. First, we quantify the percentage savings in the number of transmit RF chains as well as the percentage increase in the rate achieved by GSM compared to spatial multiplexing; 18.75% savings in the number of RF chains and a 9.375% increase in rate are possible with 16 transmit antennas and 4-QAM modulation. A bottleneck, however, is the complexity of maximum-likelihood (ML) detection of GSM signals, particularly in large MIMO systems where the number of antennas is large. We address this detection complexity issue next. Specifically, we propose a Gibbs sampling based algorithm suited to detecting GSM signals. The proposed algorithm yields impressive BER performance and complexity results. For the same spectral efficiency and number of transmit RF chains, GSM with the proposed detection algorithm achieves better performance than spatial multiplexing with ML detection.
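The quoted percentages can be checked with the standard GSM rate formula, floor(log2 C(N_t, N_rf)) + N_rf log2 M bits per channel use (antenna-index bits plus QAM bits); the specific choice N_rf = 13 is inferred here from the percentages and is an assumption:

```python
from math import comb, floor, log2

def gsm_rate(n_t, n_rf, M):
    """Bits per channel use in GSM: antenna-pattern index bits + QAM bits."""
    return floor(log2(comb(n_t, n_rf))) + n_rf * int(log2(M))

def smx_rate(n_t, M):
    """Spatial multiplexing (V-BLAST) with one RF chain per antenna."""
    return n_t * int(log2(M))

# Figures consistent with the abstract: N_t = 16, 4-QAM, N_rf = 13.
n_t, M, n_rf = 16, 4, 13
savings = (n_t - n_rf) / n_t                                # 3/16 = 18.75%
increase = gsm_rate(n_t, n_rf, M) / smx_rate(n_t, M) - 1    # 3/32 = 9.375%
```

Here C(16, 13) = 560 index patterns carry floor(log2 560) = 9 extra bits, so GSM sends 9 + 26 = 35 bits against V-BLAST's 32, with three fewer RF chains.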

During its 1990 operation, two large RF systems were available on JET. The Ion Cyclotron Resonance Heating (ICRH) system was equipped with new beryllium screens and with feedback matching systems. Specific impurities generated by ICRH were reduced to negligible levels even in the most stringent H-mode conditions. A maximum power of 22 MW was coupled to L-mode plasmas. High-quality H-modes (tau_E >= 2.5 tau_EG) were achieved using dipole phasing. A new high-confinement mode was discovered. It combines the properties of the H-mode regime with the low central diffusivities obtained by pellet injection. A value of n_d tau_E T_i = 7.8 x 10^20 m^-3 s keV was obtained in this mode with T_e ≈ T_i ≈ 11 keV. In the L-mode regime, a record (140 kW) D-He-3 fusion power was generated with 10-14 MW of ICRH at the He-3 cyclotron frequency. Experiments were performed with the prototype launcher of the Lower Hybrid Current Drive (LHCD) system, with coupled power up to 1.6 MW and current drive efficiencies up to <n_e> R I_CD / P = 0.4 x 10^20 m^-2 A/W. Fast electrons are driven by LHCD to tail temperatures of 100 keV with a hollow radial profile. Paradoxically, LHCD induces central heating, particularly in combination with ICRH. Finally, we present the first observations of the synergistic acceleration of fast electrons by Transit Time Magnetic Pumping (TTMP, from ICRH) and Electron Landau Damping (ELD, from LHCD). The synergism generates TTMP current drive even without phasing the ICRH antennae.

Using neuromorphic analog VLSI techniques to model large neural systems has several advantages over software techniques. Because they are built as massively parallel analog circuit arrays, like those ubiquitous in neural systems, analog VLSI models are extremely fast, particularly when local interactions are important in the computation. While analog VLSI circuits are not as flexible as software methods, the constraints posed by this approach are often very similar to the constraints faced by biological systems. As a result, these constraints can offer many insights into the solutions found by evolution. This dissertation describes a hardware modeling effort to mimic the primate oculomotor system, which requires both fast sensory processing and fast motor control. A one-dimensional hardware model of the primate eye has been built which simulates the physical dynamics of the biological system. It is driven by analog VLSI circuits mimicking the brainstem and cortical circuits that control eye movements. In this framework, a visually-triggered saccadic system is demonstrated which generates averaging saccades. In addition, an auditory localization system, based on the neural circuits of the barn owl, is used to trigger saccades to acoustic targets in parallel with visual targets. Two different types of learning are also demonstrated on the saccadic system using floating-gate technology, which allows non-volatile storage of analog parameters directly on the chip. Finally, a model of visual attention is used to select and track moving targets against textured backgrounds, driving both saccadic and smooth pursuit eye movements to maintain the image of the target in the center of the field of view. This system represents one of the few efforts in this field to integrate both neuromorphic sensory processing and motor control in a closed-loop fashion.

Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must follow these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins. When ribosomes have higher protein content, the autocatalysis is increased. We show that this autocatalysis destabilizes the system, slows down response, and also constrains the system's performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law while that of the yeast network follows an exponential distribution. We then explore the evolutionary models that have previously been proposed and show that neither the preferential-linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.

Accurate simulation of quantum dynamics in complex systems poses a fundamental theoretical challenge with immediate application to problems in biological catalysis, charge transfer, and solar energy conversion. The varied length- and timescales that characterize these kinds of processes necessitate development of novel simulation methodology that can both accurately evolve the coupled quantum and classical degrees of freedom and also be easily applicable to large, complex systems. In the following dissertation, the problems of quantum dynamics in complex systems are explored through direct simulation using path-integral methods as well as application of state-of-the-art analytical rate theories.

The differential energy spectra of cosmic-ray protons and He nuclei have been measured at energies up to 315 MeV/nucleon using balloon- and satellite-borne instruments. These spectra are presented for solar quiet times for the years 1966 through 1970. The data analysis is verified by extensive accelerator calibrations of the detector systems and by calculations and measurements of the production of secondary protons in the atmosphere.

The spectra of protons and He nuclei in this energy range are dominated by the solar modulation of the local interstellar spectra. The transport equation governing this process includes as parameters the solar-wind velocity, V, and a diffusion coefficient, K(r,R), which is assumed to be a scalar function of heliocentric radius, r, and magnetic rigidity, R. The interstellar spectra, jD, enter as boundary conditions on the solutions to the transport equation. Solutions to the transport equation have been calculated for a broad range of assumed values for K(r,R) and jD and have been compared with the measured spectra.

It is found that the solutions may be characterized in terms of a dimensionless parameter, ψ(r,R) = ∫_r V dr'/K(r',R), integrated from r out to the boundary of the modulation region. The amount of modulation is roughly proportional to ψ. At high energies or far from the Sun, where the modulation is weak, the solution is determined primarily by the value of ψ (and the interstellar spectrum) and is not sensitive to the radial dependence of the diffusion coefficient. At low energies and for small r, where the effects of adiabatic deceleration are found to be large, the spectra are largely determined by the radial dependence of the diffusion coefficient and are not very sensitive to the magnitude of ψ or to the interstellar spectra. This lack of sensitivity to jD implies that the shape of the spectra at Earth cannot be used to determine the interstellar intensities at low energies.

Values of ψ determined from electron data were used to calculate the spectra of protons and He nuclei near Earth. Interstellar spectra of the form jD ∝ (W - 0.25m)^-2.65 for both protons and He nuclei were found to yield the best fits to the measured spectra for these values of ψ, where W is the total energy and m is the rest energy. A simple model for the diffusion coefficient was used in which the radial and rigidity dependences are separable and K is independent of radius inside a modulation region which has a boundary at a distance D. Good agreement was found between the measured and calculated spectra for the years 1965 through 1968, using typical boundary distances of 2.7 and 6.1 A.U. The proton spectra observed in 1969 and 1970 were flatter than in previous years. This flattening could be explained in part by an increase in D, but it also seemed to require that a noticeable fraction of the observed protons at energies as high as 50 to 100 MeV be attributed to quiet-time solar emission. The turnup in the spectra at low energies observed in all years was also attributed to solar emission. The diffusion coefficient used to fit the 1965 spectra is in reasonable agreement with that determined from the power spectra of the interplanetary magnetic field (Jokipii and Coleman, 1968). We find a factor of roughly 3 increase in ψ from 1965 to 1970, corresponding to the roughly order-of-magnitude decrease in the proton intensity at 250 MeV. The change in ψ might be attributed to a decrease in the diffusion coefficient, or, if the diffusion coefficient is essentially unchanged over that period (Mathews et al., 1971), might be attributed to an increase in the boundary distance, D.
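For the simple model in which K is independent of radius inside a boundary at distance D, the integral defining the modulation parameter collapses to ψ = V (D - r) / K. The numerical values below are illustrative assumptions for checking the arithmetic, not the fitted values from this work:

```python
def psi_constant_K(V, K, r, D):
    """Modulation parameter psi = integral of V/K from r to D,
    for a radius-independent diffusion coefficient K (consistent units)."""
    return V * (D - r) / K

# Illustrative values (assumptions): solar wind V = 400 km/s,
# K = 1e22 cm^2/s, boundary D = 2.7 AU, observer at r = 1 AU.
AU_CM = 1.496e13                       # one astronomical unit in cm
psi = psi_constant_K(400e5, 1e22, 1.0 * AU_CM, 2.7 * AU_CM)
```

With these numbers ψ is of order 0.1, illustrating how a larger boundary distance D or a smaller K increases the modulation.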

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control and fall under three broad categories, namely controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
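The core computational primitive in the nuclear norm minimization mentioned above is the proximal operator of the nuclear norm, i.e. soft-thresholding of singular values. The sketch below shows only this building block (the thesis's full local/global separation algorithm is more involved):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.

    Shrinks every singular value of M by tau (zeroing those below tau),
    which promotes low rank, just as soft-thresholding entries promotes
    sparsity.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt
```

Iterating this operator inside a proximal-gradient loop recovers the low-rank (here, global high-order) component of a matrix, while the residual retains the full-rank local part.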