863 results for multi-attribute analysis


Relevance: 30.00%

Abstract:

This thesis, entitled Analysis of Some Stochastic Models in Inventories and Queues, is devoted to the study of stochastic models in inventories and queues that are physically realizable, though complex. It contains a detailed analysis of the basic stochastic processes underlying these models. The thesis studies (s,S) inventory systems with non-identically distributed interarrival demand times and random lead times, state-dependent demands, varying ordering levels, and perishable commodities with exponential lifetimes. Also analysed are queueing systems of the type Ek/Ga,b/1 with server vacations, service systems with single and batch services, queueing systems with phase-type arrival and service processes, and the finite-capacity M/G/1 queue in which the server goes on vacation after serving a random number of customers. The analogy between queueing systems and inventory systems can be exploited in solving certain models. In vacation models, one important result is the stochastic decomposition property of the system size or waiting time, which could be extended to the transient case. In inventory theory, the present study could be extended to multi-item, multi-echelon problems, and the perishable inventory problem with a general lifetime distribution would be an interesting further problem.
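As a concrete illustration of the class of models described, the following is a minimal Monte Carlo sketch (in Python) of a continuous-review (s,S) inventory policy with Poisson unit demands and a fixed lead time. All parameter values are illustrative assumptions; the thesis treats far more general demand and lead-time processes analytically.

```python
import random

# Minimal simulation of a continuous-review (s,S) policy: when stock falls to
# s or below and no order is outstanding, order up to S; the order arrives
# after a fixed lead time. Parameters are invented for illustration.

def simulate_sS(s=5, S=20, demand_rate=1.0, lead_time=2.0, horizon=100_000.0):
    t, level = 0.0, S
    pending = None                 # (arrival_time, quantity) of outstanding order
    area = 0.0                     # time integral of the stock level
    while t < horizon:
        t_demand = t + random.expovariate(demand_rate)   # next unit demand
        if pending is not None and pending[0] <= t_demand:
            arrive, qty = pending                        # replenishment arrives first
            area += level * (arrive - t)
            t, level, pending = arrive, level + qty, None
            continue
        area += level * (t_demand - t)
        t = t_demand
        if level > 0:
            level -= 1                                   # demand met (lost if stock is 0)
        if level <= s and pending is None:
            pending = (t + lead_time, S - level)         # order up to S
    return area / t                                      # long-run average stock on hand

print(f"average inventory level ~ {simulate_sS():.2f}")
```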

Relevance: 30.00%

Abstract:

The demand for new telecommunication services requiring higher capacities, data rates and different operating modes has motivated the development of a new generation of multi-standard wireless transceivers. A multi-standard design often involves extensive system-level analysis and architectural partitioning, typically requiring extensive calculations. In this research, a decimation filter design tool covering the GSM, WCDMA, WLANa, WLANb, WLANg and WiMAX wireless communication standards is developed in MATLAB® using the GUIDE environment for visual analysis. The user can select a required wireless communication standard and obtain the corresponding multistage decimation filter implementation using this toolbox. The toolbox helps the user or design engineer to perform quick design and analysis of decimation filters for multiple standards without carrying out the extensive calculations the underlying methods require.
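For readers outside MATLAB, the multistage idea the toolbox automates can be sketched in a few lines of Python with SciPy. The 64x overall rate reduction, its 8 x 4 x 2 factorization and the test signal below are invented for illustration; they are not values produced by the toolbox.

```python
import numpy as np
from scipy.signal import decimate

fs = 64_000_000                       # assumed oversampled input rate (Hz)
t = np.arange(200_000) / fs
x = np.sin(2 * np.pi * 100_000 * t)   # 100 kHz test tone

y = x
for q in (8, 4, 2):                   # cascade of decimation stages
    y = decimate(y, q, ftype='fir')   # anti-alias FIR filter, then keep every q-th sample

print(len(x), '->', len(y))           # 200000 -> 3125, a 64x rate reduction
```

Splitting a large decimation factor into stages like this keeps each anti-aliasing filter short, which is the design trade-off such a tool explores for each standard.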

Relevance: 30.00%

Abstract:

This paper presents a performance analysis of reversible, fault-tolerant VLSI implementations of carry-select and hybrid decimal adders suitable for multi-digit BCD addition. The designs enable partial parallel processing of all digits, which allows high-speed addition in the decimal domain. When the number of digits is more than 25, the hybrid decimal adder can operate 5 times faster than a conventional decimal adder using classical logic gates. The speed-up factor of the hybrid adder rises above 10 when the number of decimal digits is more than 25 in the reversible logic implementation. Such high-speed decimal adders find applications in real-time processors and internet-based applications. The implementations use only reversible, conservative Fredkin gates, which makes them suitable for VLSI circuits.
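The Fredkin (controlled-swap) gate on which the designs are based is easy to state in code. The sketch below only illustrates the gate's two defining properties, reversibility and conservativeness; it does not reproduce the paper's adder circuits.

```python
def fredkin(c, a, b):
    """If the control bit c is 1, swap a and b; otherwise pass them through."""
    return (c, b, a) if c else (c, a, b)

# Reversible: the gate is its own inverse. Conservative: the number of 1s in
# the output always equals the number of 1s in the input.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits
    assert sum(fredkin(*bits)) == sum(bits)
```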

Relevance: 30.00%

Abstract:

The recent boom in the wireless communication industry, especially in the areas of cellular telephony and wireless data communication, has led to increased demand for multi-band antennas. In such applications the issues to be addressed are wide bandwidth and gain while striving for a miniature geometry. A dual-frequency configuration useful for GSM1800 and Bluetooth is one that operates with similar properties, in terms of both reflection and radiation characteristics, in the two bands of interest. Dual-frequency operation can be realized by exciting the Microstrip Patch Antenna (MPA) using a single feed [1] or a dual feed [2]. In this paper, the conformal FDTD [3] method, with a Perfect Magnetic Conductor (PMC) applied along the plane of symmetry [4], is used to study the characteristics of an octagonal MPA. The theoretical results are compared against experimental and IE3D™ simulated results.
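The PMC symmetry condition is simple to demonstrate in one dimension: the tangential magnetic field on the wall is held at zero, and the electric field reflects without a sign change. The sketch below is a minimal 1-d Yee/FDTD loop with magnetic walls at both ends, not the paper's conformal 3-d code; the grid size, source and normalized units (c = dx = 1) are illustrative assumptions.

```python
import numpy as np

N = 400
ez = np.zeros(N)        # Ez[i] sits between Hy[i] and Hy[i+1]
hy = np.zeros(N + 1)    # Hy[0] and Hy[N] lie on the walls
S = 0.5                 # Courant number dt/dx

for step in range(900):
    # Interior H update; hy[0] and hy[N] stay zero, enforcing the PMC
    # condition (tangential H = 0) on both walls.
    hy[1:N] += S * (ez[1:] - ez[:-1])
    ez += S * (hy[1:] - hy[:-1])
    ez[50] += np.exp(-((step - 60) / 20.0) ** 2)   # soft Gaussian source

# At a PMC wall the E field reflects with coefficient +1 (no sign flip),
# which is what makes a half-model bounded by a magnetic wall equivalent
# to the full symmetric structure.
print("peak Ez after reflections:", ez.max())
```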

Relevance: 30.00%

Abstract:

Relativistic multi-configuration Dirac-Fock (MCDF) wavefunctions coupled to good angular momentum J have been calculated for low-lying states of Ba I and Ba II. These wavefunctions are compared with semiempirical ones derived from experimental atomic energy levels. It is found that significantly better agreement is obtained when close configurations are included in the MCDF wavefunctions. Calculations of the electronic part of the field isotope shift lead to very good agreement with electronic factors derived from experimental data. Furthermore, the slopes of the lines in a King-plot analysis of many of the optical lines are predicted accurately by these calculations. However, the MCDF wavefunctions seem not to be of sufficient accuracy to give agreement with the experimental magnetic dipole and electric quadrupole hyperfine structure constants.
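The King-plot analysis referred to here follows the standard relation, stated below for reference in its textbook form (not reproduced from the paper): eliminating the nuclear charge-radius term between two optical lines i and j leaves a straight line whose slope is the ratio of electronic field-shift factors.

```latex
\begin{align*}
\delta\nu_i^{AA'} &= F_i\,\delta\langle r^2\rangle^{AA'}
                    + M_i\,\frac{m_{A'} - m_A}{m_A\,m_{A'}},
& \mu^{AA'} &\equiv \frac{m_A\,m_{A'}}{m_{A'} - m_A},\\[2pt]
\mu^{AA'}\,\delta\nu_i^{AA'}
  &= \frac{F_i}{F_j}\,\mu^{AA'}\,\delta\nu_j^{AA'}
    + \Bigl(M_i - \frac{F_i}{F_j}\,M_j\Bigr).
\end{align*}
```

Accurate MCDF electronic factors F_i therefore translate directly into accurately predicted King-plot slopes F_i/F_j, which is the agreement reported above.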

Relevance: 30.00%

Abstract:

Relativistic multi-configuration Dirac-Fock wavefunctions, coupled to good angular momentum J, have been calculated for low-lying states of Ba I and Ba II. The resulting electronic factors show good agreement with data derived from recent high-resolution laser spectroscopy experiments and with results from a comparison of muonic and optical data.

Relevance: 30.00%

Abstract:

Implications between attributes can represent knowledge about objects in a specified context. This knowledge representation is especially useful when it is not possible to list all the objects in question. Attribute exploration is a tool of formal concept analysis that supports the acquisition of this knowledge. For a specified context, this interactive procedure determines a minimal list of implications between attributes valid in the context, together with a list of objects that are counterexamples to all implications not valid in the context. This paper describes how the exploration can be modified so that it determines a minimal set of implications that fills the gap between previously given implications (called background implications) and all valid implications. The list of implications can be simplified further if exceptions are allowed for the implications.
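The basic step behind attribute exploration, testing whether an implication A → B holds in a context and producing a counterexample object when it does not, can be sketched in a few lines of Python. The toy context is invented; the paper's procedure additionally maintains the background implications and builds a minimal implication base interactively.

```python
context = {                      # object -> set of attributes it has
    "duck":  {"flies", "swims", "feathered"},
    "swan":  {"flies", "swims", "feathered"},
    "trout": {"swims"},
}

def counterexample(A, B):
    """Return an object with all attributes in A but not all in B,
    or None if the implication A -> B is valid in the context."""
    for obj, attrs in context.items():
        if A <= attrs and not B <= attrs:
            return obj
    return None

print(counterexample({"swims"}, {"feathered"}))            # 'trout'
print(counterexample({"flies"}, {"feathered", "swims"}))   # None: valid
```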

Relevance: 30.00%

Abstract:

Evaluation of major feed resources was conducted in four crop-livestock mixed farming systems of central southern Ethiopia with 90 farmers, selected using multi-stage purposive and random sampling methods. Discussions were held with focus groups and key informants to identify vernacular names of feeds, followed by feed sampling to analyse chemical composition (CP, ADF and NDF) and in-vitro dry matter digestibility (IVDMD) and to correlate these with indigenous technical knowledge (ITK). Native pastures, crop residues (CR) and multi-purpose trees (MPT) are the major feed resources, and they showed great variation in seasonality, chemical composition and IVDMD. The average CP, NDF and IVDMD values for grasses were 83.8 (range: 62.9–190), 619 (range: 357–877) and 572 (range: 317–743) g kg^(−1) DM, respectively. Likewise, the average CP, NDF and IVDMD values for CR were 58 (range: 20–90), 760 (range: 340–931) and 461 (range: 285–637) g kg^(−1) DM, respectively. Generally, the MPT and non-conventional feeds (NCF; Ensete ventricosum and Ipomoea batatas) possessed higher CP (range: 155–164 g kg^(−1) DM) and IVDMD (611–657 g kg^(−1) DM) values and lower NDF (331–387 g kg^(−1) DM) and ADF (321–344 g kg^(−1) DM) values. The MPT and NCF were ranked as the most nutritious feeds by ITK, while crop residues were ranked least. This study indicates that there is remarkable variation within and among forage resources in chemical composition. The ITK and the laboratory results were also complementary, and ITK therefore needs to be taken into consideration in the evaluation of local feed resources.

Relevance: 30.00%

Abstract:

There are numerous text documents available in electronic form, and more become available every day. Such documents represent a massive amount of easily accessible information. Extracting value from this huge collection requires organization; much of the work of organizing documents can be automated through text classification. The accuracy of such systems, and our understanding of them, greatly influence their usefulness. In this paper we seek (1) to advance the understanding of commonly used text classification techniques and (2), through that understanding, to improve the tools available for text classification. We begin by clarifying the assumptions made in the derivation of Naive Bayes, noting basic properties and proposing ways for its extension and improvement. Next, we investigate the quality of Naive Bayes parameter estimates and their impact on classification. Our analysis leads to a theorem that explains the improvements found in multiclass classification with Naive Bayes using Error-Correcting Output Codes, and we use experimental evidence on two commonly used data sets to exhibit an application of the theorem. Finally, we show fundamental flaws in a commonly used feature selection algorithm and develop a statistics-based framework for text feature selection. Greater understanding of Naive Bayes and of the properties of text allows us to make better use of it in text classification.
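As a reference point for the techniques analysed, here is a minimal multinomial Naive Bayes text classifier with Laplace smoothing in Python. It is the commonly used formulation the paper starts from, not the paper's improved variants, and the toy training data is invented.

```python
import math
from collections import Counter, defaultdict

train = [("buy cheap pills now", "spam"),
         ("cheap pills cheap", "spam"),
         ("meeting notes attached", "ham"),
         ("notes from the meeting", "ham")]

counts = defaultdict(Counter)            # class -> word counts
class_docs = Counter()                   # class -> number of documents
for text, label in train:
    counts[label].update(text.split())
    class_docs[label] += 1
vocab = {w for c in counts.values() for w in c}

def predict(text):
    words = text.split()
    def log_posterior(label):
        n = sum(counts[label].values())
        prior = math.log(class_docs[label] / len(train))
        # Laplace (add-one) smoothing avoids zero probabilities for
        # words unseen in a class.
        return prior + sum(math.log((counts[label][w] + 1) / (n + len(vocab)))
                           for w in words)
    return max(counts, key=log_posterior)

print(predict("cheap meeting pills"))    # 'spam' on this toy data
```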

Relevance: 30.00%

Abstract:

The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper.
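The accumulation step is simple enough to state directly: summing the variance components from the finest stage upwards gives the semivariance estimate at each stage's separating distance. The sketch below uses invented stage spacings and components, not values from the paper's surveys.

```python
# (separating distance in metres, estimated variance component), finest first;
# both columns are invented for illustration.
stages = [
    (9.0,   0.12),
    (27.0,  0.20),
    (81.0,  0.15),
    (243.0, 0.30),
]

gamma, running = [], 0.0
for distance, component in stages:
    running += component                  # accumulate components from short lags up
    gamma.append((distance, running))     # rough semivariance estimate at this lag

for d, g in gamma:
    print(f"lag {d:6.1f} m: gamma ~ {g:.2f}")
```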

Relevance: 30.00%

Abstract:

Excavations of the multi-period settlement at Old Scatness, Shetland, have uncovered a number of Iron Age structures with compacted, floor-like layers. Thin-section analysis was undertaken in order to investigate and compare the characteristics of these layers. The investigation also draws on earlier analyses of the Iron Age agricultural soil around the settlement and of the midden deposits that accumulated within it, to create a 'joined-up' analysis that considers the way material from the settlement was used and then recycled as fertiliser for the fields. Peat was collected from the nearby uplands and used for fuel, and possibly also for flooring. It is suggested that organic-rich floors from the structures were periodically removed and the material spread onto the fields as fertiliser. More organic-rich material may have been selected for fertiliser, while the less organic peat ash was allowed to accumulate in middens. Several of the structures may have functioned as byres, which suggests a prehistoric plaggen system.

Relevance: 30.00%

Abstract:

The technique of linear responsibility analysis is used for a retrospective case study of a private industrial development consisting of an engineering factory and offices. A multi-disciplinary professional practice was used to manage and design the project. The organizational structure adopted on the project is analysed using concepts from systems theory included in Walker's theoretical model of the structure of building project organizations (Walker, 1981). This model proposes that the process of building provision can be viewed as systems and sub-systems that are differentiated from each other at decision points. Further, the analysis of the relationships between the contributors at the sub-system level gives a quantitative assessment of the efficiency of the organizational structure used. There was a high level of satisfaction with the completed project, and this is reflected in the way the organizational structure corresponded to the model's propositions. However, the project was subject to strong environmental forces which the project organization was not capable of entirely overcoming.

Relevance: 30.00%

Abstract:

The modelled El Niño-mean state-seasonal cycle interactions in 23 coupled ocean-atmosphere GCMs, including the recent IPCC AR4 models, are assessed and compared to observations and theory. The models show a clear improvement over previous generations in simulating the tropical Pacific climatology. Systematic biases still include too strong a mean and seasonal cycle of the trade winds. El Niño amplitude is shown to be an inverse function of the mean trade winds, in agreement with the observed 1976 shift and with theoretical studies. El Niño amplitude is further shown to be an inverse function of the relative strength of the seasonal cycle: when most of the energy is within the seasonal cycle, little is left for interannual signals, and vice versa. An interannual coupling strength (ICS) is defined, and its relation to the modelled El Niño frequency is compared with that predicted by theoretical models. An assessment of the modelled El Niño in terms of SST mode (S-mode) or thermocline mode (T-mode) shows that most models are locked into an S-mode and that only a few exhibit a hybrid mode, as in observations. It is concluded that several basic El Niño-mean state-seasonal cycle relationships proposed by either theory or analysis of observations are reproduced by CGCMs. This is especially true for the amplitude of El Niño and is less clear for its frequency. Most of these relationships, first established for the pre-industrial control simulations, hold for the doubled and quadrupled CO2 stabilized scenarios. The models that exhibit the largest change in El Niño amplitude in these greenhouse gas (GHG) increase scenarios are those that exhibit a mode change towards a T-mode (either from S-mode to hybrid, or from hybrid to T-mode). This follows the observed 1976 climate shift in the tropical Pacific and supports the (still debated) finding of studies that associated this shift with increased GHGs. In many respects, these models are also among those that best simulate the tropical Pacific climatology (ECHAM5/MPI-OM, GFDL-CM2.0, GFDL-CM2.1, MRI-CGCM2.3.2, UKMO-HadCM3). Results from this large subset of models suggest the likelihood of increased El Niño amplitude in a warmer climate, though there is considerable spread of El Niño behaviour among the models, and the changes in subsurface thermocline properties that may be important for El Niño change could not be assessed. There are no clear indications of a change in El Niño frequency with increased GHG.

Relevance: 30.00%

Abstract:

Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index that is correlated with its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, little comparison exists between Hilbert’s curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert’s curve, Sammon’s mapping and Principal Component Analysis have been used to generate a 1d space with locality-preserving properties. This work provides empirical evidence to support the use of Hilbert’s curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2d network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms Hilbert’s curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
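The scalar index generation can be illustrated for the 2d case with the classic iterative conversion from grid coordinates to distance along the Hilbert curve. The grid resolution and the sample latency vectors below are invented; the study applies the mapping to measured landmark latencies.

```python
def hilbert_index(n, x, y):
    """Distance along the Hilbert curve of an n x n grid (n a power of 2),
    using the well-known iterative xy-to-d conversion."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                        # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Two peers whose latency vectors are close should receive nearby indices.
n = 256                                    # quantized latency grid (e.g. ms)
print(hilbert_index(n, 40, 90), hilbert_index(n, 42, 88))
```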