985 results for Automatic selection


Relevance: 20.00%

Abstract:

In this work, we explore simultaneous geometry design and material selection for statically determinate trusses by posing it as a continuous optimization problem. The underlying principles of our approach are structural optimization and Ashby’s procedure for material selection from a database. For simplicity and ease of initial implementation, only static loads are considered in this work, with the objectives of maximum stiffness, minimum weight/cost, and safety against failure. Safety of tensile and compression members in the truss is treated differently to prevent yield and buckling failures, respectively. Geometry variables such as lengths and orientations of members are taken to be the design variables in an assumed layout. Cross-sectional areas of the members are determined to satisfy the failure constraints on each member. Along the lines of Ashby’s material indices, a new design index is derived for trusses. The design index helps in choosing the most suitable material for any geometry of the truss. Using the design index, both the design space and the material database are searched simultaneously using gradient-based optimization algorithms. An important feature of our approach is that the formulated optimization problem is continuous, even though material selection from a database is an inherently discrete problem. A few illustrative examples are included. It is observed that the method is capable of determining the optimal topology in addition to the optimal geometry when the assumed layout contains more links than are necessary for optimality.
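As a rough illustration of the coupling between geometry search and a material database (not the paper's exact formulation), the sketch below sizes the members of a toy two-bar truss against yield and Euler buckling, evaluates the resulting mass for every material in a small hypothetical database, and treats the best achievable mass as a design-index-like objective for a scalar geometry search. All material data, the load case, and the index definition are illustrative assumptions.

```python
# Sketch: simultaneous geometry optimization and material selection for a
# statically determinate two-bar truss. Hypothetical material data, load
# case, and a mass-based "design index"; not the paper's exact formulation.
import numpy as np
from scipy.optimize import minimize_scalar

P = 10e3     # downward load at the apex [N]
SPAN = 2.0   # distance between the two pinned supports [m]
FOS = 2.0    # factor of safety

# (name, E [Pa], yield strength [Pa], density [kg/m^3]) -- illustrative values
MATERIALS = [
    ("steel",    210e9, 250e6, 7850.0),
    ("aluminum",  70e9, 270e6, 2700.0),
    ("titanium", 110e9, 880e6, 4500.0),
]

def member_forces(h):
    """Axial forces in the two bars for an apex at (SPAN/2, h)."""
    apex = np.array([SPAN / 2.0, h])
    u1 = np.array([0.0, 0.0]) - apex; u1 /= np.linalg.norm(u1)
    u2 = np.array([SPAN, 0.0]) - apex; u2 /= np.linalg.norm(u2)
    # Equilibrium at the apex: N1*u1 + N2*u2 + (0, -P) = 0, tension positive.
    N = np.linalg.solve(np.column_stack([u1, u2]), np.array([0.0, P]))
    lengths = np.array([np.hypot(SPAN / 2.0, h)] * 2)
    return N, lengths

def sized_mass(h, E, sy, rho):
    """Smallest member areas that survive yield and Euler buckling."""
    N, L = member_forces(h)
    mass = 0.0
    for Ni, Li in zip(N, L):
        A = abs(Ni) * FOS / sy                 # yield constraint
        if Ni < 0:                             # compression: also check buckling
            # Solid circular section: I = A^2/(4*pi), so
            # pi^2*E*A^2/(4*pi*L^2) >= |N|*FOS  =>  A >= sqrt(4*L^2*|N|*FOS/(pi*E))
            A = max(A, np.sqrt(4.0 * Li**2 * abs(Ni) * FOS / (np.pi * E)))
        mass += rho * A * Li
    return mass

def design_index(h):
    """Best achievable mass over the database: the geometry objective."""
    return min(sized_mass(h, E, sy, rho) for _, E, sy, rho in MATERIALS)

res = minimize_scalar(design_index, bounds=(0.1, 5.0), method="bounded")
best = min(MATERIALS, key=lambda m: sized_mass(res.x, m[1], m[2], m[3]))
print(f"optimal apex height {res.x:.3f} m, material: {best[0]}")
```

The discrete `min` over materials makes this toy objective only piecewise smooth, which is precisely the difficulty the paper's continuous design index is constructed to avoid.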

Relevance: 20.00%

Abstract:

In this paper, an approach for automatic road extraction in an urban region using structural, spectral, and geometric characteristics of roads is presented. Roads are extracted in two stages: pre-processing and road extraction. Initially, the image is pre-processed to improve robustness by reducing clutter (mostly buildings, parking lots, vegetation regions, and other open spaces). Road segments are then extracted using Texture Progressive Analysis (TPA) and the Normalized cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the Normalized cut method is a graph-based method that generates an optimal partition of road segments. The performance evaluation (quality measures) of road extraction using the TPA and Normalized cut methods is compared. The experimental results show that the Normalized cut method is efficient in extracting road segments in urban regions from high-resolution satellite images.
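The abstract does not spell out which quality measures are used; the completeness/correctness/quality triple that is standard in road-extraction evaluation is assumed in the sketch below, which scores two hypothetical extraction masks against a reference mask.

```python
# Sketch: pixel-level quality measures often used to compare road-extraction
# results (e.g., TPA vs. Normalized cut) against a reference mask. The
# completeness/correctness/quality triple is an assumption; the paper does
# not spell out its exact measures here.
import numpy as np

def road_quality(extracted: np.ndarray, reference: np.ndarray) -> dict:
    """Both inputs are boolean masks of road pixels with the same shape."""
    tp = np.logical_and(extracted, reference).sum()
    fp = np.logical_and(extracted, ~reference).sum()
    fn = np.logical_and(~extracted, reference).sum()
    return {
        "completeness": tp / (tp + fn),  # share of true roads found
        "correctness": tp / (tp + fp),   # share of detections that are road
        "quality": tp / (tp + fp + fn),  # combined measure
    }

# Toy comparison of two synthetic extraction results against one reference:
rng = np.random.default_rng(0)
reference = rng.random((100, 100)) < 0.1
tpa_result = reference ^ (rng.random((100, 100)) < 0.02)   # noisier copy
ncut_result = reference ^ (rng.random((100, 100)) < 0.01)  # less noisy copy
print("TPA :", road_quality(tpa_result, reference))
print("Ncut:", road_quality(ncut_result, reference))
```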

Relevance: 20.00%

Abstract:

Relay selection for cooperative communications promises significant performance improvements, and is, therefore, attracting considerable attention. While several criteria have been proposed for selecting one or more relays, distributed mechanisms that perform the selection have received relatively less attention. In this paper, we develop a novel, yet simple, asymptotic analysis of a splitting-based multiple access selection algorithm to find the single best relay. The analysis leads to simpler and alternate expressions for the average number of slots required to find the best user. By introducing a new 'contention load' parameter, the analysis shows that the parameter settings used in the existing literature can be improved upon. New and simple bounds are also derived. Furthermore, we propose a new algorithm that addresses the general problem of selecting the best Q >= 1 relays, and analyze and optimize it. Even for a large number of relays, the scalable algorithm selects the best two relays within 4.406 slots and the best three within 6.491 slots, on average. We also propose a new and simple scheme for the practically relevant case of discrete metrics. Altogether, our results develop a unifying perspective about the general problem of distributed selection in cooperative systems and several other multi-node systems.
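A minimal simulation of the splitting idea, assuming i.i.d. uniform metrics: the first threshold is set so one transmitter is expected (the role played by the contention load parameter), and the interval known to contain the best metric is bisected after each idle or collision. The paper's threshold updates and its extension to the best Q relays are more refined than this sketch.

```python
# Sketch: a simplified splitting-based contention that finds the single best
# relay. Each slot, relays with metric in (t, hi] transmit; idle means the
# best lies at or below t, a collision means it lies above t, and a lone
# transmitter must be the best (all non-transmitters have smaller metrics).
import random

def slots_to_select_best(n: int) -> int:
    metrics = [random.random() for _ in range(n)]
    lo, hi = 0.0, 1.0          # invariant: the best metric lies in (lo, hi]
    t = 1.0 - 1.0 / n          # first slot: ~1 expected transmitter
    slots = 0
    while True:
        slots += 1
        transmitters = [m for m in metrics if t < m <= hi]
        if len(transmitters) == 1:   # success: the lone transmitter is best
            return slots
        if not transmitters:         # idle: best lies at or below t
            hi = t
        else:                        # collision: best lies above t
            lo = t
        t = (lo + hi) / 2.0          # split the remaining interval

trials = 10000
avg = sum(slots_to_select_best(50) for _ in range(trials)) / trials
print(f"average slots to find the best of 50 relays: {avg:.3f}")
```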

Relevance: 20.00%

Abstract:

The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasingly complex attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate and an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show the design technique of sensor fusion to best utilize the useful responses from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of failsafe capability. This paper theoretically models the fusion of Intrusion Detection Systems to prove the improvement in performance, supplemented with empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion with the threshold bounds, and significantly better performance using simple rule-based fusion.
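A minimal sketch of the threshold-bound idea, assuming synthetic sensor scores and a simple sum-rule fusion: Chebyshev's inequality P(|S − μ| ≥ kσ) ≤ 1/k² is distribution-free, so a target false-positive rate α under normal traffic suggests the bound μ + σ/√α regardless of the actual score distribution. The score distributions and the sum rule below are illustrative assumptions, not the paper's fusion design.

```python
# Sketch: choosing a fusion threshold from Chebyshev's inequality. The fused
# score is the sum of the individual IDS scores; Chebyshev gives a
# distribution-free bound P(|S - mu| >= k*sigma) <= 1/k^2, so a tolerated
# false-positive rate alpha under normal traffic suggests k = 1/sqrt(alpha).
# The sensor scores below are synthetic stand-ins, not real IDS output.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_events = 3, 100000

# Per-sensor anomaly scores for normal traffic (hypothetical distributions).
normal_scores = rng.beta(2, 8, size=(n_events, n_sensors))
fused_normal = normal_scores.sum(axis=1)

alpha = 0.01                 # tolerated false-positive rate
k = 1.0 / np.sqrt(alpha)     # Chebyshev: P(S >= mu + k*sigma) <= alpha
mu, sigma = fused_normal.mean(), fused_normal.std()
threshold = mu + k * sigma

fp_rate = (fused_normal >= threshold).mean()
print(f"threshold = {threshold:.3f}, empirical FP rate = {fp_rate:.5f} <= {alpha}")
```

Because the bound holds for any score distribution, the empirical false-positive rate typically comes out well below α; the price of the distribution-free guarantee is a conservative threshold.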

Relevance: 20.00%

Abstract:

Over the past two decades, the selection, optimization, and compensation (SOC) model has been applied in the work context to investigate antecedents and outcomes of employees' use of action regulation strategies. We systematically review, meta-analyze, and critically discuss the literature on SOC strategy use at work and outline directions for future research and practice. The systematic review illustrates the breadth of constructs that have been studied in relation to SOC strategy use, and that SOC strategy use can mediate and moderate relationships of person and contextual antecedents with work outcomes. Results of the meta-analysis show that SOC strategy use is positively related to age (rc = .04), job autonomy (rc = .17), self-reported job performance (rc = .23), non-self-reported job performance (rc = .21), job satisfaction (rc = .25), and job engagement (rc = .38), whereas SOC strategy use is not significantly related to job tenure, job demands, and job strain. Overall, our findings underline the importance of the SOC model for the work context, and they also suggest that its measurement and reporting standards need to be improved to become a reliable guide for future research and organizational practice.

Relevance: 20.00%

Abstract:

The business value of information technology (IT) is realized through the continuous use of IT subsequent to users’ adoption. Understanding post-adoptive IT usage is useful in realizing potential IT business value. Most previous research on post-adoptive IT usage, however, dismisses the unintentional and unconscious aspects of usage behavior. This paper advances understanding of the unintentional, unconscious, and thereby automatic usage of IT features during the post-adoptive stage. Drawing on the social psychology literature, we argue that human behaviors can be triggered by environmental cues and directed by a person’s mental goals, and can thereby operate without the person’s consciousness and intentional will. On this basis, we theorize the role of a user’s innovativeness goal, as the desired state of an act to innovate, in directing the user’s unintentional, unconscious, and automatic post-adoptive IT feature usage behavior. To test the hypothesized mechanisms, a human experiment employing a priming technique is described.

Relevance: 20.00%

Abstract:

The most difficult operation in flood inundation mapping using optical flood images is to separate fully inundated areas from the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical problem of the presence of mixed pixels in the images. A number of automatic information extraction image classification algorithms have been developed over the years for flood mapping using optical remote sensing images. Most classification algorithms assign a pixel to the class label with the greatest likelihood. However, these hard classification methods often fail to generate a reliable flood inundation map because of the presence of mixed pixels in the images. To solve the mixed pixel problem, advanced image processing techniques are adopted, and linear spectral unmixing is one of the most popular soft classification techniques used for mixed pixel analysis. The good performance of linear spectral unmixing depends on two important issues: the method of selecting endmembers and the method of modelling the endmembers for unmixing. This paper presents an improvement in the adaptive selection of the endmember subset for each pixel in spectral unmixing for reliable flood mapping. Using a fixed set of endmembers to unmix all pixels in an entire image may over-estimate the endmember spectra residing in a mixed pixel and hence reduce the performance of spectral unmixing. In comparison, applying an adaptively estimated subset of endmembers for each pixel can decrease the residual error in the unmixing results and provide a reliable output. It is also shown that the proposed method improves the accuracy of conventional linear unmixing methods and is easy to apply. Three different linear spectral unmixing methods were applied to test the improvement in unmixing results. Experiments were conducted on three different sets of Landsat-5 TM images of three different flood events in Australia to examine the method under different flooding conditions, and satisfactory flood mapping outcomes were achieved.
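A minimal sketch of per-pixel unmixing with an adaptive endmember subset, assuming synthetic spectra: fully constrained least squares is approximated by non-negative least squares with an appended sum-to-one row, and the smallest subset that reconstructs the pixel within a residual tolerance is kept. The tolerance rule stands in for the paper's selection procedure, which is more involved.

```python
# Sketch: per-pixel linear spectral unmixing with an adaptively selected
# endmember subset, in the spirit of the paper's approach. Endmember spectra
# and the pixel are synthetic; nnls with an appended, heavily weighted
# sum-to-one row approximates fully constrained least squares.
from itertools import combinations
import numpy as np
from scipy.optimize import nnls

def fcls(E, x, delta=1e3):
    """Non-negative, approximately sum-to-one abundances for x = E @ a."""
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)
    return a

def adaptive_unmix(E, x, tol=0.02):
    """Smallest endmember subset whose reconstruction residual is <= tol."""
    n = E.shape[1]
    fallback = None
    for size in range(1, n + 1):
        best = None
        for idx in map(list, combinations(range(n), size)):
            a_sub = fcls(E[:, idx], x)
            resid = np.linalg.norm(E[:, idx] @ a_sub - x)
            if best is None or resid < best[0]:
                a = np.zeros(n); a[idx] = a_sub
                best = (resid, a)
        if best[0] <= tol:
            return best[1]       # smallest adequate subset wins
        fallback = best[1]
    return fallback              # full endmember set if tolerance never met

# Toy example: a 6-band pixel mixing only water and vegetation out of 4 classes.
rng = np.random.default_rng(2)
endmembers = rng.random((6, 4))           # columns: water, veg, soil, urban
true_a = np.array([0.7, 0.3, 0.0, 0.0])
pixel = endmembers @ true_a + rng.normal(0, 0.005, 6)
print("estimated abundances:", np.round(adaptive_unmix(endmembers, pixel), 3))
```

Picking the subset purely by smallest residual would always favor the full set (a larger feasible set can only fit better), which is why some parsimony rule, here the tolerance, is needed to realize the adaptive-subset idea.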

Relevance: 20.00%

Abstract:

This paper describes a concept for a collision avoidance system for ships, which is based on model predictive control. A finite set of alternative control behaviors is generated by varying two parameters: offsets to the guidance course angle commanded to the autopilot, and changes to the propulsion command ranging from nominal speed to full reverse. Using simulated predictions of the trajectories of the obstacles and ship, compliance with the Convention on the International Regulations for Preventing Collisions at Sea and collision hazards associated with each of the alternative control behaviors are evaluated on a finite prediction horizon, and the optimal control behavior is selected. Robustness to sensing error, predicted obstacle behavior, and environmental conditions can be ensured by evaluating multiple scenarios for each control behavior. The method is conceptually and computationally simple and yet quite versatile as it can account for the dynamics of the ship, the dynamics of the steering and propulsion system, forces due to wind and ocean current, and any number of obstacles. Simulations show that the method is effective and can manage complex scenarios with multiple dynamic obstacles and uncertainty associated with sensors and predictions.
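A minimal sketch of the finite-behavior MPC concept, reduced to point-mass kinematics: course offsets and propulsion levels span the behavior set, each behavior is simulated against predicted obstacle tracks, and a cost combining separation hazard, deviation from the nominal plan, and a starboard-turn preference (a crude stand-in for COLREGS evaluation) selects the maneuver. All weights, grids, and dynamics are illustrative assumptions, not the paper's model.

```python
# Sketch: finite-scenario MPC for collision avoidance with point-mass
# kinematics. Courses follow the nautical convention (clockwise from north),
# so positive offsets are starboard turns. Positions are (east, north) in m.
import numpy as np

COURSE_OFFSETS = np.deg2rad([-90, -60, -30, 0, 30, 60, 90])
SPEED_FACTORS = [1.0, 0.5, 0.0, -0.5]   # of nominal speed; -0.5 ~ reverse
HORIZON, DT = 120.0, 5.0                # prediction horizon and step [s]
SAFE_DIST = 200.0                       # desired separation [m]

def predict(pos, course, speed, t):
    return pos + speed * t * np.array([np.sin(course), np.cos(course)])

def behavior_cost(own_pos, own_course, own_speed, offset, factor, obstacles):
    cost = 0.4 * abs(offset) + 0.6 * abs(1.0 - factor)  # deviation penalty
    if offset < 0:          # penalize port turns, echoing COLREGS practice
        cost += 0.3
    for t in np.arange(DT, HORIZON + DT, DT):
        own = predict(own_pos, own_course + offset, own_speed * factor, t)
        for obs_pos, obs_course, obs_speed in obstacles:
            d = np.linalg.norm(own - predict(obs_pos, obs_course, obs_speed, t))
            if d < SAFE_DIST:
                cost += (SAFE_DIST - d) / SAFE_DIST   # hazard grows as d shrinks
    return cost

def select_behavior(own_pos, own_course, own_speed, obstacles):
    options = [(o, f) for o in COURSE_OFFSETS for f in SPEED_FACTORS]
    return min(options, key=lambda of: behavior_cost(
        own_pos, own_course, own_speed, of[0], of[1], obstacles))

# Head-on encounter: an obstacle approaching reciprocally on our course line.
own_pos = np.array([0.0, 0.0])
obstacles = [(np.array([0.0, 1000.0]), np.pi, 5.0)]
offset, factor = select_behavior(own_pos, 0.0, 5.0, obstacles)
print(f"chosen offset {np.degrees(offset):+.0f} deg, speed factor {factor}")
```

For this head-on case the minimum-cost behavior is a moderate starboard course offset at nominal speed, matching the qualitative behavior the paper describes.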

Relevance: 20.00%

Abstract:

The most difficult operation in flood inundation mapping using optical flood images is to map the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical problem of the presence of mixed pixels in the images. A number of automatic information extracting image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, with most labelling a pixel as a particular class. However, they often fail to generate reliable flood inundation mapping because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, the two most important issues in spectral unmixing are investigated: the methods for selecting endmembers and the method of modelling the primary classes for unmixing. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat, and World View-2 images. The conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results. Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply fixed sets of endmember classes or pure classes for the mixture analysis of every pixel in an entire image. As it is not accurate to assume that every pixel in an image must contain all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers is derived for every pixel using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
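As a small illustration of the conventional root mean square error assessment mentioned above (the Fuzzy Error Matrix analysis is more involved and is not reproduced here), the sketch below scores synthetic estimated fraction maps against reference fractions, per primary class.

```python
# Sketch: per-class root-mean-square error assessment for estimated
# fractional abundances against reference fractions. Arrays are synthetic
# placeholders with shape (pixels, classes); class names are hypothetical.
import numpy as np

def fraction_rmse(estimated: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """RMSE per primary class over all pixels."""
    return np.sqrt(np.mean((estimated - reference) ** 2, axis=0))

rng = np.random.default_rng(3)
reference = rng.dirichlet(np.ones(3), size=5000)    # fractions sum to one
estimated = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)
for name, err in zip(["water", "vegetation", "soil"],
                     fraction_rmse(estimated, reference)):
    print(f"{name:10s} RMSE = {err:.4f}")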

Relevance: 20.00%

Abstract:

Guo and Nixon proposed a feature selection method based on maximizing I(x; Y), the multidimensional mutual information between feature vector x and class variable Y. Because computing I(x; Y) can be difficult in practice, Guo and Nixon proposed an approximation of I(x; Y) as the criterion for feature selection. We show that Guo and Nixon's criterion originates from approximating the joint probability distributions in I(x; Y) by second-order product distributions. We remark on the limitations of the approximation and discuss computationally attractive alternatives to compute I(x; Y).
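For reference, the first display below is the standard definition of the multidimensional mutual information; the second is a pairwise surrogate of the general form that second-order product approximations lead to, written from the standard definitions rather than from Guo and Nixon's exact notation.

```latex
% Multidimensional mutual information between x = (x_1, ..., x_n) and Y,
% and a pairwise (second-order) surrogate of the kind obtained by replacing
% the joint distributions with product approximations.
\begin{align}
  I(\mathbf{x};Y) &= \sum_{\mathbf{x},y} p(\mathbf{x},y)
      \log \frac{p(\mathbf{x},y)}{p(\mathbf{x})\,p(y)} \\
  I(\mathbf{x};Y) &\approx \sum_{i} I(x_i;Y)
      - \sum_{i<j} \bigl[ I(x_i;x_j) - I(x_i;x_j \mid Y) \bigr]
\end{align}
```

The exact quantity requires the full joint distribution over all n features, which grows exponentially in n; the surrogate needs only univariate and pairwise distributions, which is what makes criteria of this form computationally attractive.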

Relevance: 20.00%

Abstract:

This paper reports a measurement of the cross section for the pair production of top quarks in ppbar collisions at sqrt(s) = 1.96 TeV at the Fermilab Tevatron. The data were collected with the CDF II detector in a set of runs with a total integrated luminosity of 1.1 fb^{-1}. The cross section is measured in the dilepton channel, the subset of ttbar events in which both top quarks decay through t -> Wb -> l nu b where l = e, mu, or tau. The lepton pair is reconstructed as one identified electron or muon and one isolated track. The use of an isolated track to identify the second lepton increases the ttbar acceptance, particularly for the case in which one W decays as W -> tau nu. The purity of the sample may be further improved, at the cost of a reduction in the number of signal events, by requiring an identified b-jet. We present the results of measurements performed with and without the request of an identified b-jet. The former is the first published CDF result for which a b-jet requirement is added to the dilepton selection. In the CDF data there are 129 pretag lepton + track candidate events, of which 69 are tagged. With the tagging information, the sample is divided into tagged and untagged sub-samples, and a combined cross section is calculated by maximizing a likelihood. The result is sigma_{ttbar} = 9.6 +/- 1.2 (stat.) -0.5 +0.6 (sys.) +/- 0.6 (lum.) pb, assuming a branching ratio of BR(W -> ell nu) = 10.8% and a top mass of m_t = 175 GeV/c^2.
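The tagged/untagged combination is a likelihood fit over counting channels; the sketch below shows the generic form of such a fit. Every acceptance, background, and luminosity figure is a made-up placeholder (chosen so the toy fit lands near the published value), not a CDF number; only the event counts 69 and 129 - 69 = 60 come from the text.

```python
# Sketch: combining two counting channels by maximizing a product of Poisson
# likelihoods, the generic form of the cross-section fit described above.
# Acceptances and backgrounds are hypothetical placeholders, not CDF values.
import numpy as np
from scipy.optimize import minimize_scalar

LUMI = 1100.0                   # integrated luminosity [pb^-1]
channels = [
    # (observed events, signal acceptance*efficiency, expected background)
    (69, 0.0055, 11.0),         # tagged lepton + track (hypothetical A, B)
    (60, 0.0033, 25.0),         # untagged remainder: 129 pretag - 69 tagged
]

def nll(sigma):
    """Negative log-likelihood: product of Poisson terms over channels."""
    total = 0.0
    for n_obs, acc, bkg in channels:
        mu = sigma * acc * LUMI + bkg          # expected events in channel
        total -= n_obs * np.log(mu) - mu       # Poisson log-likelihood (no n!)
    return total

fit = minimize_scalar(nll, bounds=(0.1, 30.0), method="bounded")
print(f"fitted cross section: {fit.x:.1f} pb")
```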

Relevance: 20.00%

Abstract:

We develop an alternate characterization of the statistical distribution of the inter-cell interference power observed in the uplink of CDMA systems. We show that the lognormal distribution better matches the cumulative distribution and complementary cumulative distribution functions of the uplink interference than the conventionally assumed Gaussian distribution and variants based on it. This is in spite of the fact that many users together contribute to uplink interference, with the number of users and their locations both being random. Our observations hold even in the presence of power control and cell selection, which have hitherto been used to justify the Gaussian distribution approximation. The parameters of the lognormal are obtained by matching moments, for which detailed analytical expressions that incorporate wireless propagation, cellular layout, power control, and cell selection parameters are developed. The moment-matched lognormal model, while not perfect, is an order of magnitude better in modeling the interference power distribution.
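A minimal sketch of the moment-matching step: with m1 = E[X] and m2 = E[X^2], the lognormal parameters follow as sigma^2 = ln(m2/m1^2) and mu = ln(m1) - sigma^2/2. The interference generator below (random interferer count, distance-based path loss, lognormal shadowing) is a crude placeholder for the paper's detailed cellular expressions.

```python
# Sketch: moment matching a lognormal to an empirical uplink-interference
# sample, then comparing its quantiles against a Gaussian fit. The
# interference model is a toy stand-in for the paper's cellular analysis.
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(4)

def interference_sample():
    n_users = rng.poisson(10)                        # random interferer count
    d = rng.uniform(0.2, 1.0, n_users)               # normalized distances
    shadow = 10 ** (rng.normal(0, 8, n_users) / 10)  # 8 dB lognormal shadowing
    return np.sum(d ** -3.8 * shadow)                # path-loss exponent 3.8

x = np.array([interference_sample() for _ in range(100000)])

# Lognormal moments: E[X] = exp(mu + s^2/2), E[X^2] = exp(2*mu + 2*s^2).
m1, m2 = x.mean(), (x ** 2).mean()
sigma2 = np.log(m2 / m1 ** 2)
mu = np.log(m1) - sigma2 / 2

# Compare quantiles of both fits against the empirical distribution.
for q in (0.5, 0.9, 0.99):
    emp = np.quantile(x, q)
    ln_fit = lognorm.ppf(q, s=np.sqrt(sigma2), scale=np.exp(mu))
    g_fit = norm.ppf(q, loc=m1, scale=x.std())
    print(f"q={q}: empirical {emp:.1f}, lognormal {ln_fit:.1f}, Gaussian {g_fit:.1f}")
```

Even in this toy model, the Gaussian fit misses the heavy upper tail of the aggregate interference, which is the qualitative point the paper makes quantitatively.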

Relevance: 20.00%

Abstract:

A common and practical paradigm in cooperative communications is the use of a dynamically selected 'best' relay to decode and forward information from a source to a destination. Such a system consists of two core phases: a relay selection phase, in which the system expends resources to select the best relay, and a data transmission phase, in which it uses the selected relay to forward data to the destination. In this paper, we study and optimize the trade-off between the selection and data transmission phase durations. We derive closed-form expressions for the overall throughput of a non-adaptive system that includes the selection phase overhead, and then optimize the selection and data transmission phase durations. Corresponding results are also derived for an adaptive system in which the relays can vary their transmission rates. Our results show that the optimal selection phase overhead can be significant even for fast selection algorithms. Furthermore, the optimal selection phase duration depends on the number of relays and whether adaptation is used.
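A minimal numerical illustration of the selection/data trade-off, assuming a geometric model for when the splitting contention resolves and placeholder rates; none of these expressions are the paper's closed forms.

```python
# Sketch: the selection/data trade-off in a frame of T slots. Spending s
# slots on selection raises the chance that the best relay (rate R_BEST) is
# found, but leaves only T - s slots for data; otherwise an arbitrary relay
# (rate R_RAND) is used. All rates and the per-slot resolution probability
# are illustrative placeholders.
import numpy as np

T = 100                       # frame length in slots
R_BEST, R_RAND = 4.0, 1.0     # bits/slot with best vs. arbitrary relay
P_SLOT = 0.4                  # per-slot probability the contention resolves

def throughput(s: int) -> float:
    p_done = 1.0 - (1.0 - P_SLOT) ** s   # selection finished within s slots
    rate = p_done * R_BEST + (1.0 - p_done) * R_RAND
    return (T - s) / T * rate            # selection overhead shrinks data phase

s_grid = np.arange(1, 31)
best_s = s_grid[np.argmax([throughput(s) for s in s_grid])]
print(f"optimal selection phase: {best_s} slots, "
      f"throughput {throughput(best_s):.3f} bits/slot")
```

The optimum is interior: too short a selection phase usually forfeits the best relay, while too long a phase starves the data phase, mirroring the paper's finding that the optimal overhead is significant even for fast selection algorithms.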