945 results for iterative determinant maximization


Relevance: 10.00%

Abstract:

This paper describes an approach for the analysis and design of a 765kV/400kV EHV transmission system, a typical expansion in the Indian power grid, based on the analysis of steady-state and transient overvoltages. The approach to transmission system design is iterative in nature. The first step involves exhaustive power flow analysis, based on constraints such as right of way, power to be transmitted, power transfer capabilities of lines, and existing interconnecting transformer capabilities. Acceptable bus voltage profiles and satisfactory equipment loadings during all foreseeable operating conditions, for both normal and contingency operation, are the guiding criteria. Critical operating strategies are also evolved in this initial design phase. With the steady-state overvoltages obtained, comprehensive dynamic and transient studies are then carried out, including switching overvoltage studies. This paper presents steady-state and switching transient studies for two alternative typical configurations of 765kV/400kV systems, and the results are compared. Transient studies are carried out to obtain the peak overvoltage values of the 765 kV transmission system, which are compared with those of alternative configurations of the existing 400 kV system.
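The power flow step at the core of this iterative design can be illustrated with a minimal Gauss-Seidel iteration. The 2-bus system, line impedance, and load values below are illustrative assumptions, not the paper's 765kV/400kV network:

```python
import numpy as np

# Minimal Gauss-Seidel power-flow sketch for an assumed 2-bus system:
# bus 1 is the slack bus, bus 2 is a PQ (load) bus. All values are
# illustrative per-unit numbers, not from the paper.
y12 = 1.0 / (0.01 + 0.05j)           # series admittance of the single line
Y = np.array([[y12, -y12],
              [-y12, y12]])          # bus admittance matrix
V = np.array([1.0 + 0j, 1.0 + 0j])   # flat start; V[0] is fixed (slack)
P2, Q2 = -0.8, -0.4                  # net injected power at bus 2 (a load)

for _ in range(50):                  # Gauss-Seidel voltage updates
    S2 = P2 + 1j * Q2
    # V_i = [ (P_i - jQ_i)/V_i* - sum_{k != i} Y_ik V_k ] / Y_ii
    V[1] = (np.conj(S2 / V[1]) - Y[1, 0] * V[0]) / Y[1, 1]

print(abs(V[1]))                     # converged bus-2 voltage magnitude
```

With the assumed load, the bus-2 voltage settles a few percent below the slack-bus voltage, which is the kind of steady-state profile the design criteria above check.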

Relevance: 10.00%

Abstract:

We describe the ongoing design and implementation of a sensor network for agricultural management targeted at resource-poor farmers in India. Our focus on semi-arid regions led us to concentrate on water-related issues. Throughout 2004, we carried out a survey of the information needs of the population living in a cluster of villages in our study area. The results highlighted the potential that environment-related information has for the improvement of farming strategies in the face of highly variable conditions, in particular for risk-management strategies (choice of crop varieties, sowing and harvest periods, prevention of pests and diseases, efficient use of irrigation water, etc.). This leads us to advocate an original use of Information and Communication Technologies (ICT). We believe our demand-driven approach to the design of appropriate ICT tools targeted at the resource-poor to be relatively new. In order to go beyond a purely technocratic approach, we adopted an iterative, participatory methodology.

Relevance: 10.00%

Abstract:

Tanner graph representation of linear block codes is widely used by iterative decoding algorithms for recovering data transmitted across a noisy communication channel from errors and erasures introduced by the channel. The stopping distance of a Tanner graph T for a binary linear block code C determines the number of erasures correctable using iterative decoding on T when data is transmitted across a binary erasure channel using C. We show that the problem of finding the stopping distance of a Tanner graph is hard to approximate within any positive constant approximation ratio in polynomial time unless P = NP. As a consequence, it is also shown that there can be no approximation algorithm for the problem achieving an approximation ratio of 2^((log n)^(1-epsilon)) for any epsilon > 0 unless NP is a subset of DTIME(n^poly(log n)).
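For intuition, the stopping-set condition can be checked by brute force on a tiny code; exhaustive search is consistent with the hardness result above, since no efficient algorithm is expected. The [7,4] Hamming parity-check matrix below is an assumed example, not one from the paper:

```python
from itertools import combinations
import numpy as np

def stopping_distance(H):
    """Brute-force stopping distance of the Tanner graph given by
    parity-check matrix H (feasible only for tiny codes).
    A stopping set is a nonempty set S of variable nodes such that
    no check node is connected to S exactly once."""
    m, n = H.shape
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            # S is a stopping set iff every check touches S 0 or >= 2 times
            if all(H[c, list(S)].sum() != 1 for c in range(m)):
                return size
    return None

# Example: a parity-check matrix of the [7,4] Hamming code (assumed
# here purely for illustration).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
print(stopping_distance(H))  # -> 3
```

The support of any codeword is a stopping set, so the stopping distance never exceeds the minimum distance; for this Hamming matrix both equal 3.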

Relevance: 10.00%

Abstract:

Reconstruction in optical tomography involves obtaining images of the absorption and reduced scattering coefficients. The integrated intensity data has greater sensitivity to variations in the absorption coefficient than in the scattering coefficient; however, its sensitivity to the scattering coefficient is not zero. We considered an object with two inhomogeneities, one in the absorption and the other in the scattering coefficient. Standard iterative reconstruction techniques produced results plagued by cross talk: the absorption coefficient reconstruction has a false positive at the location of the scattering inhomogeneity, and vice versa. We present a method to remove this cross talk by generating a weight matrix and weighting the update vector during each iteration. The weight matrix is created as follows: we first perform a simple backprojection of the difference between the experimental intensity data and the corresponding homogeneous intensity data. The built-up image is weighted more towards the absorption inhomogeneity than the scattering inhomogeneity, and its appropriate inverse is weighted towards the scattering inhomogeneity. These two weight matrices are used as multiplication factors on the update vectors during image reconstruction: the normalized backprojected difference-intensity image for the absorption update, and its inverse for the scattering update. We demonstrate through numerical simulations that cross talk is fully eliminated by this modified reconstruction procedure.
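A minimal sketch of the weighting idea follows, with random stand-ins for the update vectors and the backprojected difference-intensity image; the diffuse-optics forward model and Jacobians are omitted entirely, and the `1 - w` inverse weight is one simple assumed choice, not necessarily the paper's:

```python
import numpy as np

# Stand-in data: in a real reconstruction these would come from the
# forward model and the measured intensities.
rng = np.random.default_rng(0)
n = 64
delta_mua = rng.normal(size=n)       # raw absorption update (stand-in)
delta_mus = rng.normal(size=n)       # raw scattering update (stand-in)

# Backprojected difference-intensity image (stand-in); in the paper
# this builds up more weight at the absorption inhomogeneity.
bp = np.abs(rng.normal(size=n))
w_abs = bp / bp.max()                # normalized weight, peaks at absorption site
w_sca = 1.0 - w_abs                  # a simple "appropriate inverse" choice

# Weighting each update suppresses it at the other inhomogeneity's
# location, which is what removes the cross talk.
delta_mua_w = w_abs * delta_mua
delta_mus_w = w_sca * delta_mus
```

The point of the sketch is only the mechanics: each coefficient's update is multiplied elementwise by a weight map that is large where that coefficient's inhomogeneity is believed to be and small elsewhere.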

Relevance: 10.00%

Abstract:

Processor architects have the challenging task of evaluating a large design space consisting of several interacting parameters and optimizations. In order to assist architects in making crucial design decisions, we build linear regression models that relate processor performance to micro-architectural parameters, using simulation-based experiments. We obtain good approximate models using an iterative process in which Akaike's information criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We used this procedure to establish the relationship of the CPI performance response to 26 key micro-architectural parameters using a detailed cycle-by-cycle superscalar processor simulator. The resulting models provide a significance ordering on all micro-architectural parameters and their interactions, and explain the performance variations of micro-architectural techniques.
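The AIC-guided iteration can be illustrated with a simple forward-selection sketch on synthetic data. The simulator, the 26 parameters, and the D-optimal design step are replaced here by an assumed 5-knob stand-in in which CPI depends on two knobs:

```python
import numpy as np

def aic(X, y):
    """AIC of an OLS fit with Gaussian errors: n*log(RSS/n) + 2k."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

# Synthetic stand-in for simulation data: the response depends
# strongly on knobs 0 and 2 only.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)

# Forward selection: greedily add the knob that most lowers AIC,
# stopping when no addition improves it.
selected, remaining = [], list(range(5))
current = np.inf
while remaining:
    scores = {j: aic(X[:, selected + [j]], y) for j in remaining}
    best = min(scores, key=scores.get)
    if scores[best] >= current:
        break                      # no knob improves AIC: stop iterating
    current = scores[best]
    selected.append(best)
    remaining.remove(best)

print(sorted(selected))            # indices of the knobs kept by AIC
```

In the paper's procedure, each round of selection would additionally trigger a D-optimal batch of new simulations; here a fixed synthetic design stands in for that step.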

Relevance: 10.00%

Abstract:

A diffusion/replacement model for new consumer durables, designed to be used as a long-term forecasting tool, is developed. The model simulates new demand as well as replacement demand over time. The model, called DEMSIM, is built upon a counteractive adoption model specifying the basic forces affecting the adoption behaviour of individual consumers. These forces are the promoting forces and the resisting forces. The promoting forces are further divided into internal and external influences. These influences are operationalized within a multi-segmental diffusion model generating the adoption behaviour of the consumers in each segment as an expected value. This diffusion model is combined with a replacement model built upon the same segmental structure, which generates, in turn, the expected replacement behaviour in each segment. To use DEMSIM as a forecasting tool in the early stages of a diffusion process, estimates of the model parameters are needed as soon as possible after product launch. However, traditional statistical techniques are not very helpful in estimating such parameters at this stage. To enable early parameter calibration, an optimization algorithm is developed by which the main parameters of the diffusion model can be estimated on the basis of very few sales observations. The optimization is carried out in iterative simulation runs. Empirical validations using the optimization algorithm reveal that the diffusion model performs well in early long-term sales forecasts, especially when it comes to the timing of future sales peaks.
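A minimal Bass-style diffusion-plus-replacement simulation conveys the flavour of such a model. The market size, influence coefficients, and fixed product lifetime below are illustrative assumptions, and DEMSIM's multi-segment structure and parameter-optimization loop are omitted:

```python
import numpy as np

# Bass-style diffusion with replacement demand (illustrative numbers).
M, p, q = 1_000_000, 0.03, 0.38     # market size, external, internal influence
life = 8                            # assumed fixed product lifetime (periods)
T = 25                              # simulation horizon

adopters = np.zeros(T)              # first-time purchases per period
replacements = np.zeros(T)
cum = 0.0                           # cumulative adopters so far
for t in range(T):
    # Adoption hazard: external influence p plus internal influence
    # q scaled by current market penetration.
    new = (p + q * cum / M) * (M - cum)
    adopters[t] = new
    cum += new
    if t >= life:                   # units bought `life` periods ago come back
        replacements[t] = adopters[t - life] + replacements[t - life]

sales = adopters + replacements     # total demand per period
print(int(sales.argmax()))          # period of peak total sales
```

New demand follows the usual S-shaped diffusion, while replacement demand echoes it one lifetime later; the sum is the long-term sales forecast the model is after.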

Relevance: 10.00%

Abstract:

The hydrophobic effect is widely believed to be an important determinant of protein stability. However, it is difficult to obtain unambiguous experimental estimates of the contribution of the hydrophobic driving force to the overall free energy of folding. Thermodynamic and structural studies of large-to-small substitutions in proteins are the most direct method of measuring this contribution. We have substituted the buried residue Phe8 in RNase S with alanine, methionine, and norleucine. Binding thermodynamics and structures were characterized by titration calorimetry and crystallography, respectively. The crystal structures of the RNase S F8A, F8M, and F8Nle mutants indicate that the protein tolerates the changes without any main-chain adjustments. The correlation of structural and thermodynamic parameters associated with large-to-small substitutions was analyzed for nine mutants of RNase S as well as 32 additional cavity-containing mutants of T4 lysozyme, human lysozyme, and barnase. Such substitutions were typically found to result in negligible changes in Delta Cp and positive values of both Delta Delta H degrees and Delta Delta S degrees of folding. Enthalpic effects were dominant, and the sign of Delta Delta S is the opposite of that expected from the hydrophobic effect. Values of Delta Delta G degrees and Delta Delta H degrees correlated better with changes in packing parameters such as residue depth or occluded surface than with the change in accessible surface area upon folding. These results suggest that the loss of packing interactions, rather than the hydrophobic effect, is the dominant contributor to the observed energetics of large-to-small substitutions. Hence, estimates of the magnitude of the hydrophobic driving force derived from earlier mutational studies are likely to be significantly in excess of the actual value.

Relevance: 10.00%

Abstract:

Motivated by certain situations in manufacturing systems and communication networks, we look into the problem of maximizing profit in a queueing system with a linear reward and cost structure, where the streams of Poisson arrivals can be selected according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize profit by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem. The first (which we call the PUT problem) seeks to maximize the profit per unit time, whereas the second considers the maximization of the profit per accepted customer (the PAC problem). In each of these formulations, we explore three separate problems. In the first, the constraints come from bounding the utilization of an infinite-capacity server; in the second, the constraints arise from bounding the mean queue length of the same queue; and in the third, the finite capacity of the buffer is reflected as a set of constraints. In the problems bounding the utilization factor of the queue, the solutions are given by essentially linear programs, while the problems with mean queue length constraints are linear programs if the service is exponentially distributed. The problems modeling the finite-capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems. In particular, the PUT solutions always make the server work at a utilization factor that is no less than that of the PAC solutions.
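The utilization-bounded PUT formulation reduces to a linear program over the stationary probabilities. A sketch with assumed arrival rates, per-customer rewards, and mean service time (three streams, costs folded into the rewards for brevity):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative PUT instance: choose stationary probabilities pi_i of the
# modulating chain to maximize profit per unit time, subject to a bound
# on server utilization. All numbers are assumptions.
lam = np.array([2.0, 5.0, 9.0])    # Poisson rates of the three streams
r = np.array([4.0, 2.5, 1.0])      # net reward per accepted customer
ES = 0.2                           # mean service time E[S]
rho_max = 0.8                      # utilization bound

# maximize sum_i pi_i*lam_i*r_i  ==  minimize its negation
res = linprog(
    c=-(lam * r),
    A_ub=[lam * ES], b_ub=[rho_max],   # sum_i pi_i*lam_i*E[S] <= rho_max
    A_eq=[np.ones(3)], b_eq=[1.0],     # pi is a probability vector
    bounds=[(0, 1)] * 3,
)
pi = res.x
print(pi.round(3), -res.fun)       # optimal mixing and profit per unit time
```

With these assumed numbers the utilization bound allows at most rho_max / E[S] = 4 arrivals per unit time, which forces the optimum to mix the first two streams rather than use the most rewarding one alone.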

Relevance: 10.00%

Abstract:

We recently introduced the dynamical cluster approximation (DCA), a technique that includes short-ranged dynamical correlations in addition to the local dynamics of the dynamical mean-field approximation while preserving causality. The technique is based on an iterative self-consistency scheme on a finite-size periodic cluster. The dynamical mean-field approximation is recovered by taking the cluster to a single site, and the exact result by taking the cluster to the thermodynamic limit. Here, we provide details of our method, explicitly show that it is causal, systematic, and Phi-derivable, and that it becomes conserving as the cluster size increases. We demonstrate the DCA by applying it to a quantum Monte Carlo and exact enumeration study of the two-dimensional Falicov-Kimball model. The resulting spectral functions preserve causality, and the spectra and the charge-density-wave transition temperature converge quickly and systematically to the thermodynamic limit as the cluster size increases.

Relevance: 10.00%

Abstract:

Non-orthogonal space-time block codes (STBC) with large dimensions are attractive because they can simultaneously achieve both high spectral efficiencies (same spectral efficiency as in V-BLAST for a given number of transmit antennas) as well as full transmit diversity. Decoding of non-orthogonal STBCs with large dimensions has been a challenge. In this paper, we present a reactive tabu search (RTS) based algorithm for decoding non-orthogonal STBCs from cyclic division algebras (CDA) having largedimensions. Under i.i.d fading and perfect channel state information at the receiver (CSIR), our simulation results show that RTS based decoding of 12 X 12 STBC from CDA and 4-QAM with 288 real dimensions achieves i) 10(-3) uncoded BER at an SNR of just 0.5 dB away from SISO AWGN performance, and ii) a coded BER performance close to within about 5 dB of the theoretical MIMO capacity, using rate-3/4 turbo code at a spectral efficiency of 18 bps/Hz. RTS is shown to achieve near SISO AWGN performance with less number of dimensions than with LAS algorithm (which we reported recently) at some extra complexity than LAS. We also report good BER performance of RTS when i.i.d fading and perfect CSIR assumptions are relaxed by considering a spatially correlated MIMO channel model, and by using a training based iterative RTS decoding/channel estimation scheme.
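The tabu-search idea behind such decoders can be sketched on a toy real-valued detection problem. BPSK symbols and a fixed tabu tenure are assumed for simplicity; the paper's 4-QAM alphabet and the reactive tenure adaptation that gives RTS its name are omitted:

```python
import numpy as np

def tabu_search_detect(H, y, iters=200, tenure=3, seed=0):
    """Tabu-search sketch: minimize ||y - H x||^2 over x in {-1,+1}^n.
    Fixed tenure; the reactive adaptation of RTS is omitted."""
    rng = np.random.default_rng(seed)
    n = H.shape[1]
    x = rng.choice([-1.0, 1.0], size=n)      # random initial vector
    cost = lambda v: float(np.sum((y - H @ v) ** 2))
    best, best_cost = x.copy(), cost(x)
    tabu_until = np.zeros(n, dtype=int)      # iteration until a flip is tabu
    for it in range(iters):
        # Evaluate all single-symbol flips (the 1-flip neighbourhood).
        cand_costs = []
        for j in range(n):
            x[j] = -x[j]
            cand_costs.append(cost(x))
            x[j] = -x[j]
        order = np.argsort(cand_costs)
        for j in order:
            # Take the best non-tabu move; a tabu move is allowed only
            # if it beats the best cost so far (aspiration criterion).
            if it >= tabu_until[j] or cand_costs[j] < best_cost:
                break
        x[j] = -x[j]
        tabu_until[j] = it + tenure          # forbid re-flipping j for a while
        if cand_costs[j] < best_cost:
            best, best_cost = x.copy(), cand_costs[j]
    return best

# Toy 4x4 real MIMO check: noiseless channel, try to recover the bits.
rng = np.random.default_rng(3)
H = rng.normal(size=(4, 4))
x_true = np.array([1.0, -1.0, -1.0, 1.0])
x_hat = tabu_search_detect(H, H @ x_true)
print(x_hat)
```

The tabu list forbids immediately undoing recent flips, which lets the search climb out of the local minima that trap a purely greedy bit-flipping detector.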

Relevance: 10.00%

Abstract:

Non-orthogonal space-time block codes (STBC) from cyclic division algebras (CDA) are attractive because they can simultaneously achieve both high spectral efficiencies (same spectral efficiency as in V-BLAST for a given number of transmit antennas) as well as full transmit diversity. Decoding of non-orthogonal STBCs with hundreds of dimensions has been a challenge. In this paper, we present a probabilistic data association (PDA) based algorithm for decoding non-orthogonal STBCs with large dimensions. Our simulation results show that the proposed PDA-based algorithm achieves near SISO AWGN uncoded BER as well as near-capacity coded BER (within 5 dB of the theoretical capacity) for large non-orthogonal STBCs from CDA. We study the effect of spatial correlation on the BER, and show that the performance loss due to spatial correlation can be alleviated by providing more receive spatial dimensions. We report good BER performance when a training-based iterative decoding/channel estimation is used (instead of assuming perfect channel knowledge) in channels with large coherence times. A comparison of the performances of the PDA algorithm and the likelihood ascent search (LAS) algorithm (reported in our recent work) is also presented.

Relevance: 10.00%

Abstract:

The Silver code has attracted a lot of attention in the recent past because of its nice structure and fast decodability. In a recent paper, Hollanti et al. show that the Silver code forms a subset of the natural order of a particular cyclic division algebra (CDA). In this paper, the algebraic structure of this subset is characterized. It is shown that the Silver code is not an ideal in the natural order, but a right ideal generated by two elements in a particular order of this CDA. The exact minimum determinant of the normalized Silver code is computed using the ideal structure of the code. The construction of the Silver code is then extended to CDAs over other number fields.

Relevance: 10.00%

Abstract:

This paper proposes a Single Network Adaptive Critic (SNAC) based Power System Stabilizer (PSS) for enhancing the small-signal stability of power systems over a wide range of operating conditions. SNAC uses only a single critic neural network instead of the action-critic dual-network architecture of typical adaptive critic designs. This eliminates the iterative training loops between the action and critic networks and greatly simplifies the training procedure. The performance of the proposed PSS has been tested on a Single Machine Infinite Bus test system under various system and loading conditions. The proposed stabilizer, which is relatively easy to synthesize, consistently outperformed stabilizers based on conventional lead-lag and linear quadratic regulator designs.

Relevance: 10.00%

Abstract:

There is substantial evidence of the decreased functional capacity, especially everyday functioning, of people with psychotic disorders in clinical settings, but little research about it in the general population. The aim of the present study was to provide information on the magnitude of functional capacity problems in persons with psychotic disorder compared with the general population. It estimated the prevalence and severity of limitations in vision, mobility, everyday functioning and quality of life of persons with psychotic disorder in the Finnish population and determined the factors affecting them. This study is based on the Health 2000 Survey, a nationally representative survey of 8028 Finns aged 30 and older. The psychotic diagnoses of the participants were assessed in the Psychoses of Finland survey, a substudy of Health 2000.

The everyday functioning of people with schizophrenia has been studied widely, but one important factor, mobility, has been neglected. Persons with schizophrenia and other non-affective psychotic disorders, but not affective psychoses, had a significantly increased risk of having both self-reported and test-based mobility limitations as well as weak handgrip strength. Schizophrenia was independently associated with mobility limitations even after controlling for lifestyle-related factors and chronic medical conditions. Another significant factor associated with problems in everyday functioning in participants with schizophrenia was reduced visual acuity. Their vision had been examined significantly less often during the five years before the visual acuity measurement than that of the general population. In general, persons with schizophrenia and other non-affective psychotic disorders had significantly more limitations in everyday functioning and more deficits in verbal fluency and memory than the general population.

More severe negative symptoms, depression, older age, verbal memory deficits, worse expressive speech and reduced distance vision were associated with limitations in everyday functioning. Of all the psychotic disorders, schizoaffective disorder was associated with the largest losses of quality of life, and bipolar I disorder with equal or smaller losses than schizophrenia. However, the subjective loss of quality of life associated with psychotic disorders may be smaller than the objective disability, which warrants attention. Depressive symptoms were the most important determinant of poor quality of life in all psychotic disorders. In conclusion, subjects with psychotic disorders need regular somatic health monitoring. Health care workers should also evaluate the overall quality of life and depression of subjects with psychotic disorders in order to provide them with the basic necessities of life.

Relevance: 10.00%

Abstract:

A two-stage iterative algorithm for selecting a subset of a training set of samples for use in a condensed nearest neighbor (CNN) decision rule is introduced. The proposed method uses the concept of mutual nearest neighborhood for selecting samples close to the decision line. The efficacy of the algorithm is brought out by means of an example.
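The mutual-nearest-neighbourhood idea can be sketched as follows. Keeping opposite-class pairs that are each other's nearest rivals is a simplification of the two-stage algorithm, and the 1-D data below is an assumed toy example:

```python
import numpy as np

def mutual_pairs_subset(X, y):
    """Sketch of boundary-sample selection via mutual nearest
    neighbourhood: keep pairs (a, b) from opposite classes where each
    is the other's nearest opposite-class neighbour. Such samples lie
    close to the decision line, which is what a condensed nearest
    neighbor (CNN) subset needs to retain."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    keep = set()
    for i in range(len(X)):
        other = np.where(y != y[i])[0]           # opposite-class indices
        j = other[d[i, other].argmin()]          # i's nearest rival
        own = np.where(y != y[j])[0]             # j's rivals = i's class
        if own[d[j, own].argmin()] == i:         # mutual nearest pair
            keep.update((int(i), int(j)))
    return sorted(keep)

# Two well-separated 1-D classes; only the two facing boundary points
# form a mutual pair.
X = np.array([[0.0], [1.0], [2.0], [5.0], [6.0], [7.0]])
y = np.array([0, 0, 0, 1, 1, 1])
print(mutual_pairs_subset(X, y))                 # -> [2, 3]
```

The interior points of each class are discarded, leaving a much smaller training set for the nearest neighbor rule, which is the goal of condensing.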