452 results for 280399 Computer Software not elsewhere classified


Relevance: 100.00%

Publisher:

Abstract:

Genetic algorithms (GAs) are known to locate the global optimal solution provided a sufficient population size and/or number of generations is used. In practice, a near-optimal, satisfactory result can be found by GAs within a limited number of generations. In wireless communications, the exhaustive searching approach is widely applied in many techniques, such as maximum likelihood decoding (MLD) and distance spectrum (DS) techniques. The complexity of the exhaustive searching approach in the MLD or the DS technique is exponential in the number of transmit antennas and the size of the signal constellation for multiple-input multiple-output (MIMO) communication systems. If a large number of antennas and large signal constellations, e.g. PSK and QAM, are employed in the MIMO systems, the exhaustive searching approach becomes impractical and time-consuming. In this paper, GAs are applied to the MLD and DS techniques to provide near-optimal performance with reduced computational complexity for MIMO systems. Two different GA-based efficient searching approaches are proposed, one for the DS technique and one for the MLD technique. The first proposed approach is based on a GA with a sharing function method, which is employed to locate the multiple solutions of the distance spectrum for Space-time Trellis Coded Orthogonal Frequency Division Multiplexing (STTC-OFDM) systems. The second approach is the GA-based MLD, which attempts to find the closest point to the transmitted signal. The proposed approach can return a satisfactory result provided a good initial signal vector is supplied to the GA. Simulation results show that the proposed GA-based efficient searching approaches achieve near-optimal performance with a lower searching complexity compared with the original MLD and DS techniques for MIMO systems.
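A minimal sketch of the GA-based ML detection idea, assuming a flat-fading model y = Hx + n, a QPSK constellation, and a fitness equal to the negative ML metric; the channel, population sizes, and operators below are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative QPSK constellation and a small 4x4 MIMO channel (assumptions).
CONSTELLATION = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
NT = 4                      # transmit antennas
H = (rng.standard_normal((NT, NT)) + 1j * rng.standard_normal((NT, NT))) / np.sqrt(2)

def fitness(population, y):
    """Negative ML metric ||y - Hx||^2 for each candidate symbol-index vector."""
    x = CONSTELLATION[population]                        # shape (P, NT)
    return -np.sum(np.abs(y[None, :] - x @ H.T) ** 2, axis=1)

def ga_mld(y, pop_size=40, generations=50, p_mut=0.1):
    """Toy GA search for a near-ML symbol vector (indices into CONSTELLATION)."""
    pop = rng.integers(0, len(CONSTELLATION), size=(pop_size, NT))
    for _ in range(generations):
        fit = fitness(pop, y)
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]   # truncation selection
        cut = rng.integers(1, NT, size=pop_size // 2)           # one-point crossover
        kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]))
                         for i, c in enumerate(cut)])
        mutate = rng.random(kids.shape) < p_mut                 # random symbol mutation
        kids[mutate] = rng.integers(0, len(CONSTELLATION), size=mutate.sum())
        pop = np.vstack((parents, kids))
    return pop[np.argmax(fitness(pop, y))]

# Usage: transmit a random QPSK vector, add noise, and search for it.
true_idx = rng.integers(0, len(CONSTELLATION), size=NT)
y = H @ CONSTELLATION[true_idx] + 0.05 * (rng.standard_normal(NT) + 1j * rng.standard_normal(NT))
print("recovered:", ga_mld(y), "true:", true_idx)
```

The GA evaluates only pop_size candidates per generation instead of all |constellation|^NT vectors, which is the complexity reduction the abstract refers to.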

Relevance: 100.00%

Publisher:

Abstract:

Firms began outsourcing information system functions soon after the inception of electronic computing. Extant research has concentrated on large organizations and large-valued outsourcing contracts from a variety of different industries. Smaller firms are inherently different from their large counterparts. These differences between small and large firms could lead to different information technology/information system (IT/IS) items being outsourced and to different outsourcing agreements governing these arrangements. This research explores and examines the outsourcing practices of very small through to medium-sized manufacturing organizations. The in-depth case studies explored not only the extent to which different firms engaged in outsourcing but also the nuances of their outsourcing arrangements. The results reveal that all six firms tended to outsource the same sorts of functions. Some definite differences existed, however, in the strategies adopted in relation to the functions they outsourced. These differences arose for a variety of reasons, including size, locality, and holding company influences. The very small and small manufacturing firms tended to make outsourcing purchases on an ad hoc basis with little reliance on legal advice. In contrast, the medium-sized firms often used a more planned initiative and sought legal advice more often. Interestingly, not one of the six firms outsourced any of their transaction processing. These findings give very small, small, and medium-sized manufacturing firms the opportunity to compare their practices against other firms of similar size.

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we studied vapor-liquid equilibria (VLE) and adsorption of ethylene on graphitized thermal carbon black and in slit pores whose walls are composed of graphene layers. Simple models of a one-center Lennard-Jones (LJ) potential and a two-center united atom (UA)-LJ potential are investigated to study the impact of the choice of potential model on the description of VLE and adsorption behavior. Here, we used Monte Carlo simulation methods in the grand canonical (GCMC) and Gibbs ensembles. The one-center potential model cannot adequately describe the VLE over the practical range of temperature from the triple point to the critical point. On the other hand, the two-center potential model (Wick et al. J. Phys. Chem. B 2000, 104, 8008-8016) performs well in the description of VLE (saturated vapor and liquid densities and vapor pressure) over this wide range of temperature. This UA-LJ model is then used in the study of adsorption of ethylene on graphitized thermal carbon black and in slit pores. Agreement between the GCMC simulation results and the experimental data on graphitized thermal carbon black at moderate temperatures is excellent, demonstrating the potential of the GCMC method and showing that the proper choice of potential model is essential in the investigation of adsorption. For slit pores of various sizes, we have found that the behavior of ethylene exhibits a number of features that are not manifested in the study of spherical LJ particles. In particular, the singlet density distribution, as a function of both the distance across the pore and the angle between the molecular axis and the z direction, provides rich information about the way molecules arrange themselves when the pore width is varied. Such an arrangement has been found to be very sensitive to the pore width.
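For reference, the one-center model is the standard 12-6 Lennard-Jones pair potential, and the two-center UA model sums the same form over the two united-atom CH2 interaction sites of each ethylene molecule; the symbols below are generic, with the specific well depths and collision diameters coming from the cited potential models rather than being reproduced here:

$$
\varphi_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\qquad
\varphi_{\mathrm{UA}} = \sum_{i=1}^{2}\sum_{j=1}^{2}\varphi_{\mathrm{LJ}}(r_{ij}),
$$

where $r_{ij}$ is the distance between united-atom site $i$ on one molecule and site $j$ on the other.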

Relevance: 100.00%

Publisher:

Abstract:

Enterprise systems interoperability (ESI) is currently an important topic for business. This situation is evidenced, at least in part, by the number and extent of potential candidate protocols for such process interoperation, viz., ebXML, BPML, BPEL, and WSCI. Wide-ranging support for each of these candidate standards already exists. However, despite broad acceptance, a sound theoretical evaluation of these approaches has not yet been provided. We use the Bunge-Wand-Weber (BWW) models, in particular the representation model, to provide the basis for such a theoretical evaluation. We, and other researchers, have shown the usefulness of the representation model for analyzing, evaluating, and engineering techniques in the areas of traditional and structured systems analysis, object-oriented modeling, and process modeling. In this work, we address the question: what are the potential semantic weaknesses of using ebXML alone for process interoperation between enterprise systems? We find that, because of representational deficiencies, users will lack important implementation information; that ontological redundancy unnecessarily increases the complexity of the specification; and that, due to instances of ontological excess, users of the specification will have to bring in extra-model knowledge to understand its constructs.

Relevance: 100.00%

Publisher:

Abstract:

In this paper we investigate the difference between the adsorption of the spherical molecule argon (at 87.3 K) and the flexible normal butane (at an equivalent temperature of 150 K) in carbon slit pores. These temperatures are equivalent in the sense that they lie at the same relative distance between the respective triple points and critical points. Higher equivalent temperatures are also studied (122.67 K for argon and 303 K for n-butane) to investigate the effects of temperature on the 2D-transition in adsorbed density. Grand Canonical Monte Carlo simulation is used to study the adsorption of these two model adsorbates. Besides the longer computation times involved in the simulation of n-butane adsorption, n-butane exhibits many interesting behaviors: (i) the onset of adsorption occurs sooner (in terms of relative pressure); (ii) the hysteresis for 2D- and 3D-transitions is larger; (iii) a liquid-solid transition is not possible; (iv) a 2D-transition occurs for n-butane at 150 K while it does not happen for argon except for pores that accommodate two layers of molecules; (v) the maximum pore density is about four times less than that of argon; and (vi) the sieving pore width is slightly larger than that for argon. Finally, another feature obtained from the Grand Canonical Monte Carlo (GCMC) simulation is the configurational arrangement of molecules in pores. For spherical argon, the arrangement is rather well structured, while for n-butane the arrangement depends very much on the pore size. (C) 2004 Elsevier B.V. All rights reserved.
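One way to read "equivalent temperature" here, assuming it means equal reduced position between the triple point $T_{tr}$ and the critical point $T_c$ (approximate literature values are used below), is:

$$
\theta = \frac{T - T_{tr}}{T_c - T_{tr}}, \qquad
\theta_{\mathrm{Ar}} = \frac{87.3 - 83.8}{150.7 - 83.8} \approx 0.05, \qquad
\theta_{n\text{-butane}} = \frac{150 - 134.9}{425.1 - 134.9} \approx 0.05,
$$

and likewise $\theta \approx 0.58$ for the higher pair (122.67 K for argon, 303 K for n-butane).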

Relevance: 100.00%

Publisher:

Abstract:

Geospatio-temporal conceptual models provide a mechanism to explicitly represent geospatial and temporal aspects of applications. Such models, which focus on both what and when/where, need to be more expressive than conventional conceptual models (e.g., the ER model), which primarily focus on what is important for a given application. In this study, we view conceptual schema comprehension of geospatio-temporal data semantics in terms of matching the external problem representation (that is, the conceptual schema) to the problem-solving task (that is, syntactic and semantic comprehension tasks), an argument based on the theory of cognitive fit. Our theory suggests that an external problem representation that matches the problem solver's internal task representation will enhance performance, for example, in comprehending such schemas. To assess performance on geospatio-temporal schema comprehension tasks, we conducted a laboratory experiment using two semantically identical conceptual schemas, one of which mapped closely to the internal task representation while the other did not. As expected, we found that the geospatio-temporal conceptual schema that corresponded to the internal representation of the task enhanced the accuracy of schema comprehension; comprehension time was equivalent for both. Cognitive fit between the internal representation of the task and conceptual schemas with geospatio-temporal annotations was, therefore, manifested in accuracy of schema comprehension and not in time for problem solution. Our findings suggest that the annotated schemas facilitate understanding of data semantics represented on the schema.

Relevance: 100.00%

Publisher:

Abstract:

The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining the appropriate data quickly increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
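The three Halstead metrics named here have standard definitions; a brief restatement follows (how operators and operands are counted for a particular query language is a methodological choice of the study and is not reproduced here):

$$
N = N_1 + N_2, \qquad n = n_1 + n_2, \qquad V = N \log_2 n, \qquad
D = \frac{n_1}{2}\cdot\frac{N_2}{n_2}, \qquad E = D \cdot V,
$$

where $n_1$ and $n_2$ are the numbers of distinct operators and operands, $N_1$ and $N_2$ their total occurrences, $N$ is program length, $V$ volume, $D$ difficulty, and $E$ effort.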

Relevance: 100.00%

Publisher:

Abstract:

This paper explores the potential for the RAMpage memory hierarchy to use a microkernel with a small memory footprint, held in a specialized cache-speed static RAM (tightly-coupled memory, TCM). Dreamy memory is DRAM kept in low-power mode unless referenced. Simulations show that a small microkernel suits RAMpage well, in that it achieves significantly better speed and energy gains than a standard hierarchy does from adding TCM. RAMpage, in its best 128KB L2 case, gained 11% speed by using TCM and reduced energy by 14%. Equivalent gains for the conventional hierarchy were under 1%. While a 1MB L2 was significantly faster than the lower-energy configurations with the smaller L2, the larger SRAM's energy does not justify the speed gain. Using a 128KB L2 cache in a conventional architecture resulted in a best-case overall run time of 2.58s, compared with the best dreamy-mode run time (RAMpage without context switches on misses) of 3.34s, a speed penalty of 29%. Energy in the fastest 128KB L2 case was 2.18J vs. 1.50J, a reduction of 31%. The same RAMpage configuration without dreamy mode took 2.83s as simulated and used 2.39J, an acceptable trade-off (a penalty of under 10%) for being able to switch easily to a lower-energy mode.
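The quoted percentages follow directly from the reported run times and energies:

$$
\frac{3.34 - 2.58}{2.58} \approx 29\%, \qquad
\frac{2.18 - 1.50}{2.18} \approx 31\%, \qquad
\frac{2.83 - 2.58}{2.58} \approx 9.7\% < 10\%.
$$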

Relevance: 100.00%

Publisher:

Abstract:

One of the obstacles to improved security of the Internet is the ad hoc development of technologies with different design goals and different security goals. This paper proposes reconceptualizing the Internet as a secure distributed system, focusing specifically on the application layer. The notion is to redesign specific functionality based on principles discovered in research on distributed systems in the decades since the initial development of the Internet. Because of the problems in retrofitting new technology across millions of clients and servers, any option with prospects of success must support backward compatibility. This paper outlines a possible new architecture for Internet-based mail that would replace existing protocols with a more secure framework. To maintain backward compatibility, an initial implementation could offer a web browser-based front end, but the longer-term approach would be to implement the system using appropriate models of replication. (C) 2005 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Publisher:

Abstract:

We provide an abstract command language for real-time programs and outline how a partial correctness semantics can be used to compute execution times. The notions of a timed command, refinement of a timed command, the command traversal condition, and the worst-case and best-case execution time of a command are formally introduced and investigated with the help of an underlying weakest liberal precondition semantics. The central result is a theory for the computation of worst-case and best-case execution times from the underlying semantics based on supremum and infimum calculations. The framework is applied to the analysis of a message transmitter program and its implementation. (c) 2005 Elsevier B.V. All rights reserved.
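The paper's wlp-based theory is not reproduced here, but the supremum/infimum flavour of the execution-time calculation can be illustrated on a toy command language with interval-timed atomic commands; all names and combination rules below are illustrative, not the paper's definitions:

```python
from dataclasses import dataclass
from typing import Tuple

# Toy abstract command language: atomic commands carry a best/worst-case
# time interval; compound commands combine intervals by inf/sup-style rules.

@dataclass
class Atomic:
    bcet: float
    wcet: float

@dataclass
class Seq:
    first: "Cmd"
    second: "Cmd"

@dataclass
class Choice:          # nondeterministic or data-dependent branch
    left: "Cmd"
    right: "Cmd"

@dataclass
class Loop:            # body executed between min_iter and max_iter times
    body: "Cmd"
    min_iter: int
    max_iter: int

Cmd = object  # informal union of the four command classes above

def times(c: Cmd) -> Tuple[float, float]:
    """Return (best-case, worst-case) execution time of a command."""
    if isinstance(c, Atomic):
        return c.bcet, c.wcet
    if isinstance(c, Seq):                        # times add along a sequence
        b1, w1 = times(c.first); b2, w2 = times(c.second)
        return b1 + b2, w1 + w2
    if isinstance(c, Choice):                     # infimum / supremum over branches
        bl, wl = times(c.left); br, wr = times(c.right)
        return min(bl, br), max(wl, wr)
    if isinstance(c, Loop):                       # scale by the iteration bounds
        b, w = times(c.body)
        return c.min_iter * b, c.max_iter * w
    raise TypeError(f"unknown command: {c!r}")

# Usage: a transmitter-like loop sending up to 8 messages, each taking
# 2-5 time units, preceded by a 1-unit setup command.
program = Seq(Atomic(1, 1), Loop(Choice(Atomic(2, 3), Atomic(4, 5)), 1, 8))
print(times(program))   # (3, 41): bcet = 1 + 1*2, wcet = 1 + 8*5
```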

Relevance: 100.00%

Publisher:

Abstract:

This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and often constraints are built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, their orientation, or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool for numerically evaluating the usefulness of any constraints that are added.
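The scale of the solution space can be reproduced on an even smaller toy analogue of the formulation: a 2D grid of cells, each empty or one of two colours, observed by two orthogonal line cameras that record the first non-empty cell along each viewing ray; every labelling that reproduces both observed images counts as a solution. This mirrors the consistency idea only, not the paper's first-order-logic encoding, and all names below are illustrative:

```python
from itertools import product

N = 3                      # toy 3x3 grid of "voxels"
STATES = (0, 1, 2)         # 0 = empty, 1/2 = two surface colours
BACKGROUND = 0

def render(grid):
    """Two orthogonal 1D images: first non-empty cell seen along each row/column."""
    def first_hit(cells):
        return next((c for c in cells if c != BACKGROUND), BACKGROUND)
    rows = tuple(first_hit(grid[r * N:(r + 1) * N]) for r in range(N))   # camera looking along the rows
    cols = tuple(first_hit(grid[c::N]) for c in range(N))                # camera looking along the columns
    return rows, cols

# Ground-truth scene and its two images.
truth = (1, 0, 0,
         0, 2, 0,
         0, 0, 1)
images = render(truth)

# Count every labelling of the grid that reproduces the same two images.
solutions = sum(1 for grid in product(STATES, repeat=N * N) if render(grid) == images)
print(f"{solutions} consistent scenes for a {N}x{N} grid with 2 colours")
```

Even this tiny grid admits many consistent scenes, which is the same combinatorial explosion the abstract reports for 5 x 5 voxel spaces.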

Relevance: 100.00%

Publisher:

Abstract:

All signals that appear to be periodic have some sort of variability from period to period, regardless of how stable they appear to be in a data plot. A true sinusoidal time series is a deterministic function of time that never changes and thus has zero bandwidth around the sinusoid's frequency. A zero bandwidth is impossible in nature since all signals have some intrinsic variability over time. Deterministic sinusoids are used to model cycles as a mathematical convenience. Hinich [IEEE J. Oceanic Eng. 25 (2) (2000) 256-261] introduced a parametric statistical model, called the randomly modulated periodicity (RMP), that allows one to capture the intrinsic variability of a cycle. As with a deterministic periodic signal, the RMP can have a number of harmonics. The likelihood ratio test for this model when the amplitudes and phases are known is given in [M.J. Hinich, Signal Processing 83 (2003) 1349-1352]. A method for detecting an RMP whose amplitudes and phases are unknown random processes, in the presence of a stationary noise process, is addressed in this paper. The only assumption on the additive noise is that it has finite dependence and finite moments. Using simulations based on a simple RMP model, we show a case where the new method can detect the signal when the signal is not detectable in a standard waterfall spectrogram display. (c) 2005 Elsevier B.V. All rights reserved.
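A minimal generator for a signal of this kind, with a periodic carrier whose amplitude and phase are perturbed by zero-mean random modulations plus additive noise; the modulation model and parameter names are illustrative and do not reproduce Hinich's estimator or test statistic:

```python
import numpy as np

rng = np.random.default_rng(1)

def rmp_signal(n, fs, f0, harmonics=3, mod_std=0.2, noise_std=1.0):
    """Randomly modulated periodicity: each harmonic's amplitude and phase
    wander slowly around fixed values; additive noise has finite moments."""
    t = np.arange(n) / fs
    x = np.zeros(n)
    for k in range(1, harmonics + 1):
        # Slowly varying zero-mean modulations (smoothed white noise).
        amp_mod = np.convolve(rng.normal(0, mod_std, n), np.ones(50) / 50, mode="same")
        phase_mod = np.convolve(rng.normal(0, mod_std, n), np.ones(50) / 50, mode="same")
        x += (1.0 / k + amp_mod) * np.cos(2 * np.pi * k * f0 * t + phase_mod)
    return x + rng.normal(0, noise_std, n)

# Usage: a weak 50 Hz RMP buried in noise, sampled at 1 kHz.
x = rmp_signal(n=8192, fs=1000.0, f0=50.0)
print(x.shape, x.std())
```

The random modulation spreads each harmonic's power over a small but non-zero bandwidth, which is exactly what distinguishes an RMP from a deterministic sinusoid.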

Relevance: 100.00%

Publisher:

Abstract:

Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with a box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to take account of possible variations due to handwriting styles and to reflect moods. The membership functions constitute weights in the TS model. The optimization of the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models: one that considers a rule for each input feature (multiple rules), and one that considers a single rule for all input features. In this work, we have found that the TS model with multiple rules is better than the TS model with a single rule for detecting three types of forgeries (random, skilled, and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches, using the TS model with multiple rules for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
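The general shape of a Takagi-Sugeno verifier with one rule per angle feature and exponential memberships can be sketched as below; the paper's modified membership function with structural parameters and its box-based feature extraction are not reproduced, so every function, parameter, and value here is illustrative:

```python
import numpy as np

def exp_membership(f, center, spread):
    """Generic exponential membership of an angle feature in its fuzzy set."""
    return np.exp(-spread * (f - center) ** 2)

def ts_output(features, centers, spreads, consequents):
    """Takagi-Sugeno model with one rule per feature: rule firing strengths
    (memberships) weight linear consequents y_i = a_i * f_i + b_i."""
    w = exp_membership(features, centers, spreads)
    y = consequents[:, 0] * features + consequents[:, 1]
    return np.sum(w * y) / np.sum(w)

# Usage: score a test signature against parameters fitted on genuine samples;
# a score far from the genuine range would be flagged as a possible forgery.
rng = np.random.default_rng(2)
n_features = 8                                   # e.g. angle features, one per box (illustrative)
centers = rng.uniform(0, np.pi / 2, n_features)  # fitted feature means (illustrative)
spreads = np.full(n_features, 4.0)               # fitted spread parameters (illustrative)
consequents = np.column_stack((np.ones(n_features), np.zeros(n_features)))
test_features = centers + rng.normal(0, 0.05, n_features)
print("TS model output:", ts_output(test_features, centers, spreads, consequents))
```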