958 results for Computer Science, Hardware


Relevance: 100.00%

Abstract:

The fast-changing nature of computer science forces those who wish to keep pace to adapt and react quickly. Large companies invest in staying up to date in order to generate revenue and remain active on the market. Universities, in turn, need to apply the same practice of tracking industry needs in order to produce industry-ready engineers. By interviewing former students, now engineers in industry, and current university staff, this thesis aims to learn whether there is room to enhance education through different lecturing approaches and/or curriculum adaptation and development. To address these concerns, a qualitative study was conducted, focusing on data collected through semi-structured live interviews. The method follows the seven stages of research interviewing introduced by Kvale and focuses on collecting and preparing relevant data for analysis. The collected data is transcribed, refined, and then analyzed in the Findings and analysis chapter. The analysis centers on answering the three research questions: how higher education affects the work of Computer Science and Informatics engineers, how to ease the transition from studies to working in industry, and how to develop a curriculum that supports the previous two. Unaltered quoted extracts are presented and individually analyzed. To paint a fuller picture, a theme-wise analysis summarizes the themes that recurred throughout the interviewing phase. The findings indicate that several factors directly influence the quality of education: on the student side, mostly expectations of and dedication to the studies; on the university side, commitment to the curriculum development process. Due to time and resource limitations, the findings rest on a narrow scope, but they can serve as a solid foundation for further work, possibly as PhD research.

Relevance: 100.00%

Abstract:

Process scheduling techniques consider the current load situation when allocating computing resources. These techniques rely on approximations, such as averages of communication, processing, and memory-access behavior, to improve scheduling, even though processes may behave differently over the course of their execution: a process may start with high communication requirements and later shift to pure processing. By discovering how processes behave over time, we believe it is possible to improve resource allocation. This motivates this paper, which adopts chaos theory concepts and nonlinear prediction techniques to model and predict process behavior. The results confirm that the radial basis function technique delivers good predictions at a low processing cost, which is essential in a real distributed environment.
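
A minimal sketch of the general idea, not the paper's implementation: embed a process's recent resource-usage history as input vectors (a delay embedding, as used in nonlinear time-series prediction) and fit a radial basis function model to predict the next value. The window length `lag`, the kernel width `gamma`, and the synthetic CPU-usage trace are all illustrative assumptions.

```python
import numpy as np

def delay_embed(series, lag):
    """Build (X, y) pairs: each row of X is `lag` past samples, y is the next one."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X, y

def fit_rbf(X, y, gamma=1.0, reg=1e-6):
    """Interpolating RBF with a Gaussian kernel; returns a predictor function."""
    # Pairwise squared distances between training inputs.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    w = np.linalg.solve(K + reg * np.eye(len(X)), y)  # ridge-regularized weights
    def predict(x):
        k = np.exp(-gamma * ((X - x) ** 2).sum(-1))
        return k @ w
    return predict

# Illustrative noisy-periodic "CPU usage" trace standing in for observed behavior.
t = np.arange(300)
trace = 0.5 + 0.4 * np.sin(0.2 * t) + 0.05 * np.random.randn(300)

X, y = delay_embed(trace[:250], lag=8)
predict = fit_rbf(X, y, gamma=2.0)
next_value = predict(trace[242:250])  # predict the sample after the training window
print(f"predicted={next_value:.3f} actual={trace[250]:.3f}")
```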

Relevance: 100.00%

Abstract:

In testing from a Finite State Machine (FSM), the generation of test suites that guarantee full fault detection, known as complete test suites, has been a long-standing research topic. In this paper, we present conditions that are sufficient for a test suite to be complete and demonstrate that the existing conditions are special cases of the proposed ones. We also give an algorithm that checks whether a given test suite is complete. Experimental results show that the algorithm scales to relatively large FSMs and test suites.
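
A brute-force illustration of what "complete" means, assuming a toy fault domain of single-transition faults; the paper's contribution is sufficient conditions that avoid exactly this kind of exhaustive mutant enumeration. The FSM, the fault domain, and the suite below are all illustrative.

```python
# A deterministic FSM: states, inputs, and transition/output maps keyed by (state, input).
SPEC = {
    "states": ["s0", "s1"],
    "inputs": ["a", "b"],
    "trans": {("s0","a"): "s1", ("s0","b"): "s0", ("s1","a"): "s0", ("s1","b"): "s1"},
    "out":   {("s0","a"): 1, ("s0","b"): 0, ("s1","a"): 0, ("s1","b"): 1},
}

def run(fsm, word, start="s0"):
    """Apply an input word; return the produced output sequence."""
    state, outs = start, []
    for x in word:
        outs.append(fsm["out"][(state, x)])
        state = fsm["trans"][(state, x)]
    return outs

def transfer_mutants(fsm):
    """All single-transition-target mutants (a toy fault domain)."""
    for key, tgt in fsm["trans"].items():
        for other in fsm["states"]:
            if other != tgt:
                yield {**fsm, "trans": {**fsm["trans"], key: other}}

def is_complete_for(fsm, suite):
    """The suite is complete w.r.t. this fault domain iff it kills every mutant."""
    return all(any(run(m, w) != run(fsm, w) for w in suite)
               for m in transfer_mutants(fsm))

suite = [["a","a","a"], ["b","a","b"], ["a","b","a"]]
print(is_complete_for(SPEC, suite))  # True: every transfer fault is detected
```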

Relevance: 100.00%

Abstract:

In this paper, we consider the classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs may even be smaller than that of the specification FSM; previous work deals only with the case in which implementation FSMs are allowed to have the same number of states as the specification. This generalization gives the test designer more options: when traditional methods trigger a test explosion for large specification machines, tests with lower, but still guaranteed, fault coverage can be generated instead. The second generalization is that tests can be generated starting from a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. Solving the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones and use them to develop an algorithm that can both extend user-defined test suites to reach the desired fault coverage and generate tests from scratch. Our experimental results indicate that the proposed algorithm offers a trade-off between the length and the fault coverage of test suites.
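
Reusing the toy FSM helpers from the previous sketch (`SPEC`, `run`, `transfer_mutants`), a hedged sketch of the second generalization: start from a user-defined suite and greedily extend it until the target fault coverage is reached. The random word search is a stand-in for the paper's algorithm, which is not reproduced here.

```python
import random

def extend_until_complete(fsm, suite, max_len=6, rng=random.Random(0)):
    """Greedily extend a user-defined suite until every mutant in the toy
    fault domain is killed; illustrates incremental extension only."""
    suite = [list(w) for w in suite]
    for m in transfer_mutants(fsm):
        if any(run(m, w) != run(fsm, w) for w in suite):
            continue  # already killed by the user-defined part
        # Search random words for one that distinguishes this mutant.
        for _ in range(10_000):
            w = [rng.choice(fsm["inputs"]) for _ in range(rng.randint(1, max_len))]
            if run(m, w) != run(fsm, w):
                suite.append(w)
                break
        else:
            pass  # the mutant may be equivalent to the spec; skip it
    return suite

print(extend_until_complete(SPEC, [["a"]]))  # the seed suite plus added words
```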

Relevance: 100.00%

Abstract:

For many learning tasks, the duration of the data collection can be greater than the time scale on which the underlying data distribution changes. The question we ask is how to include the information that data are aging. Ad hoc methods to achieve this include validity windows that prevent the learning machine from making inferences based on old data; this introduces the problem of how to set the window size. In this brief, a new adaptive Bayesian-inspired algorithm is presented for learning drifting concepts. It uses the analogy of validity windows in an adaptive Bayesian way to incorporate changes in the data distribution over time. We apply a theoretical approach based on information geometry to the classification problem and measure its performance in simulations. The uncertainty about the appropriate size of the memory window is dealt with in a Bayesian manner by integrating over the distribution of the adaptive window size. Thus, the posterior distribution of the weights may develop algebraic tails. The learning algorithm results from tracking the mean and variance of the posterior distribution of the weights. It was found that the algebraic tails of this posterior distribution give the learning algorithm the ability to cope with an evolving environment by permitting escape from local traps.
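
A heavily simplified sketch of the validity-window analogy, not the paper's information-geometric algorithm: an online Gaussian posterior over a drifting mean, where an exponential forgetting factor `lam` plays the role of a soft, adaptive memory window (effective size roughly 1/(1 - lam)). All numbers are illustrative.

```python
import numpy as np

class DriftTracker:
    """Online Gaussian posterior over a drifting mean, with exponential
    forgetting standing in for a soft validity window: lam < 1 discounts
    old evidence, so the posterior keeps widening as data age."""
    def __init__(self, lam=0.95, obs_var=1.0):
        self.lam, self.obs_var = lam, obs_var
        self.mean, self.prec = 0.0, 1e-3  # posterior mean and precision

    def update(self, x):
        self.prec = self.lam * self.prec           # forget: widen the posterior
        post_prec = self.prec + 1.0 / self.obs_var
        self.mean = (self.prec * self.mean + x / self.obs_var) / post_prec
        self.prec = post_prec
        return self.mean

# Concept drift: the true mean jumps halfway through the stream.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
tracker = DriftTracker(lam=0.95)
estimates = [tracker.update(x) for x in data]
print(f"before jump: {estimates[199]:.2f}, after adapting: {estimates[-1]:.2f}")
```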

Relevance: 100.00%

Abstract:

Relevant results for (sub-)distribution functions related to parallel systems are discussed. The reverse hazard rate is defined using the product integral; consequently, the restriction of absolute continuity for the distributions involved can be relaxed. The only restriction is that the sets of discontinuity points of the parallel distributions must be disjoint. Nonparametric Bayesian estimators of all survival (sub-)distribution functions are derived. Dual to series systems, which use minimum lifetimes as observations, parallel systems record maximum lifetimes. Dirichlet multivariate processes, forming a class of prior distributions, are considered for the nonparametric Bayesian estimation of the component distribution functions and the system reliability. For illustration, two striking numerical examples are presented.
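
For orientation, the standard textbook relations the abstract builds on, stated informally (they are not results from the paper): a parallel system fails at the maximum of its component lifetimes, so distribution functions multiply, and the distribution function relates to the reverse hazard as the survival function relates to the ordinary hazard.

```latex
% Parallel system of n independent components:
F_{\mathrm{sys}}(t) = \Pr\Big(\max_i T_i \le t\Big) = \prod_{i=1}^{n} F_i(t)

% Cumulative reverse hazard and its product-integral inversion,
% which stays meaningful without absolute continuity:
\tilde{\Lambda}(t) = \int_{(t,\infty)} \frac{dF(s)}{F(s)}, \qquad
F(t) = \prod_{s \in (t,\infty)} \bigl(1 - d\tilde{\Lambda}(s)\bigr)

% Absolutely continuous special case, with reverse hazard rate f/F:
F(t) = \exp\Big(-\int_t^{\infty} \frac{f(s)}{F(s)}\, ds\Big)
```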

Relevance: 100.00%

Abstract:

Recommendation based on off-line data processing has attracted increasing attention from both research communities and the IT industry. Recommendation techniques can be used to explore huge volumes of data, identify the items that users are likely to want, and translate research results into real-world applications. This paper surveys recent progress in research on recommendations based on off-line data processing, with emphasis on new techniques (such as context-based and temporal recommendation) and new features (such as serendipitous recommendation). Finally, we outline some existing challenges for future research.
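
As one concrete example of the kind of off-line technique surveyed, a minimal item-based collaborative filtering sketch: item-item similarities are computed off-line from a ratings matrix and used to rank a user's unrated items. The tiny matrix is illustrative.

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, cols: items; 0 = unrated).
R = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between item columns, computed off-line."""
    norms = np.linalg.norm(R, axis=0)
    S = (R.T @ R) / np.outer(norms, norms)
    np.fill_diagonal(S, 0.0)  # an item should not recommend itself
    return S

def recommend(R, S, user, k=2):
    """Score unrated items by similarity-weighted ratings; return top-k item ids."""
    scores = R[user] @ S
    scores[R[user] > 0] = -np.inf  # never re-recommend already-rated items
    return np.argsort(scores)[::-1][:k]

S = item_similarity(R)
print(recommend(R, S, user=0))  # items 2 then 3 for user 0
```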

Relevance: 100.00%

Abstract:

Radio-frequency identification (RFID) is seen as one of the prerequisites for implementing the Internet of Things (IoT). However, an RFID system has to be equipped with a holistic security framework for secure and scalable operation. Although much work has been done to provide privacy and anonymity, little attention has been given to the performance, scalability and customizability issues that a robust IoT implementation requires. Existing protocols also suffer from a number of deficiencies, such as insecure or inefficient identification, throughput delay and inadaptability. In this paper, we propose a novel identification technique based on a hybrid approach (combining a group-based approach with a collaborative approach) and security check handoff (SCH) for RFID systems with mobility. The proposed protocol provides customizability and adaptability while ensuring the secure and scalable deployment of an RFID system to support a robust distributed structure such as the IoT. The protocol offers an extra layer of protection against malware through an incorporated malware detection technique. We evaluated the protocol using a randomness battery test, and the results show that it offers better security, scalability and customizability than existing protocols. © 2014 Elsevier B.V. All rights reserved.
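
A minimal sketch of group-based tag identification in general, not the paper's SCH protocol: the tag answers a reader nonce with a keyed hash under its group key, so the back end searches one group instead of all tags, then confirms the tag with its individual key. Key sizes, names, and the two-group store are illustrative assumptions.

```python
import hmac, hashlib, os

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Back-end store: a few groups, each with a shared group key and member tags.
groups = {
    gid: {"gkey": os.urandom(16),
          "tags": {f"tag{gid}-{i}": os.urandom(16) for i in range(3)}}
    for gid in ("A", "B")
}

def tag_response(gkey, tkey, nonce):
    """What the tag transmits: a group proof plus an individual proof."""
    return prf(gkey, nonce), prf(tkey, nonce)

def identify(nonce, g_proof, t_proof):
    """The group proof narrows the search to one group; the individual proof
    pins down the tag: O(#groups + group size) instead of O(#tags)."""
    for gid, g in groups.items():
        if hmac.compare_digest(prf(g["gkey"], nonce), g_proof):
            for tid, tkey in g["tags"].items():
                if hmac.compare_digest(prf(tkey, nonce), t_proof):
                    return tid
    return None

nonce = os.urandom(16)
g = groups["B"]
print(identify(nonce, *tag_response(g["gkey"], g["tags"]["tagB-1"], nonce)))
```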

Relevance: 100.00%

Abstract:

As a significant milestone in data dissemination for wireless sensor networks (WSNs), the comb-needle (CN) model was developed to dynamically balance sensor data pushing and pulling during hybrid data dissemination. Unfortunately, the hybrid push-pull strategy may overload some sensor nodes and form hotspots that consume energy heavily, which usually leads to the collapse of the network at a very early stage. Over the past decade, many energy-aware dynamic data dissemination methods have been proposed to alleviate the hotspot issue, but the block characteristic of sensor nodes has been overlooked, and how to offload traffic from low-energy hot blocks through long-distance hybrid dissemination remains an open problem. In this paper, we develop a block-aware data dissemination model to balance inter-block energy and eliminate the spreading of intra-block hotspots. Through a clustering mechanism based on geography and energy, "similar" large-scale sensor nodes can be efficiently grouped into blocks to form the global block information (GBI). Based on GBI, long-distance cross-block hybrid algorithms are developed by effectively aggregating inter-block and intra-block data dissemination. Extensive experimental results demonstrate the capability and efficiency of the proposed approach. © 2014 Elsevier Ltd.
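
A hedged sketch of the clustering step in spirit only, not the paper's mechanism: group nodes into blocks by running k-means over normalized (x, y, residual-energy) features, with a weight controlling how much energy influences block membership. The field size, `k`, and `energy_weight` are illustrative.

```python
import numpy as np

def block_nodes(positions, energy, k=4, energy_weight=0.5, iters=50, seed=0):
    """Group nodes into k 'blocks' by geography and residual energy:
    plain k-means over normalized, weighted (x, y, energy) features."""
    rng = np.random.default_rng(seed)
    feats = np.column_stack([positions, energy[:, None]])
    feats = (feats - feats.mean(0)) / feats.std(0)  # normalize each feature
    feats[:, 2] *= energy_weight                    # then weight the energy axis
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return labels

# Illustrative network: 200 nodes on a 100x100 field with random residual energy.
rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, size=(200, 2))
eng = rng.uniform(0.1, 1.0, size=200)
labels = block_nodes(pos, eng)
for j in range(4):
    print(f"block {j}: {np.sum(labels == j)} nodes, "
          f"mean energy {eng[labels == j].mean():.2f}")
```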

Relevance: 100.00%

Abstract:

We explore the multicast lifetime capacity of energy-limited wireless ad hoc networks using directional multibeam antennas by formulating and solving the corresponding optimization problem. In such networks, each node is equipped with a practical smart antenna array that can be configured to support multiple beams with adjustable orientation and beamwidth. The special case of this optimization problem for networks with single beams has been extensively studied and shown to be NP-hard. In this paper, we provide a globally optimal solution by developing a general MILP formulation that can accommodate various configurable antenna models, many of which are not supported by the existing formulations. To study the multicast lifetime capacity of large-scale networks, we also propose an efficient heuristic algorithm with guaranteed theoretical performance; in particular, based on an analysis of its approximation ratio, we give a sufficient condition for determining whether its output is optimal. These results are validated by experiments as well. The multicast lifetime capacity is then studied quantitatively by evaluating the proposed exact and heuristic algorithms in simulations. The experimental results also show that two-beam antennas can exploit most of the lifetime capacity of the networks for multicast communications. © 2013 IEEE.
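
A toy MILP in the spirit of lifetime maximization, not the paper's formulation, using the PuLP front end: each relay node picks exactly one beam configuration from a discrete menu, lifetime is the minimum energy-to-power ratio over nodes, and the min-max is linearized by minimizing the largest power-to-energy ratio. The node data and the coverage constraint are illustrative.

```python
import pulp

energy = {"n1": 100.0, "n2": 80.0, "n3": 120.0}
# power[node][config]: power draw of each admissible beam configuration;
# wider beams cost more power.
power = {
    "n1": {"narrow": 2.0, "wide": 5.0},
    "n2": {"narrow": 3.0, "wide": 4.0},
    "n3": {"narrow": 1.5, "wide": 6.0},
}
# Toy coverage constraint: these nodes must use "wide" to reach all children.
must_be_wide = {"n2"}

prob = pulp.LpProblem("multicast_lifetime", pulp.LpMinimize)
x = {(i, c): pulp.LpVariable(f"x_{i}_{c}", cat="Binary")
     for i in power for c in power[i]}
t = pulp.LpVariable("t", lowBound=0)  # largest power-to-energy ratio
prob += t

for i in power:
    prob += pulp.lpSum(x[i, c] for c in power[i]) == 1  # pick exactly one config
    prob += pulp.lpSum(power[i][c] * x[i, c] for c in power[i]) <= t * energy[i]
for i in must_be_wide:
    prob += x[i, "wide"] == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in power:
    chosen = next(c for c in power[i] if x[i, c].value() == 1)
    print(i, chosen)
print("lifetime =", 1.0 / t.value())  # lifetime is the inverse of the max ratio
```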

Relevance: 100.00%

Abstract:

Cognitive radio improves spectrum efficiency and mitigates spectrum scarcity by allowing cognitive users to opportunistically access idle chunks of the spectrum owned by licensed users. In long-term spectrum leasing markets, secondary network operators must decide how much spectrum is optimal for fulfilling their users' data transmission requirements. We study this optimization problem in multiple-channel scenarios. Under constraints on the expected user admission rate and quality of service, we model the secondary network as a dynamic data transportation system in which the spectrum access of primary and secondary users follows separate stochastic processes. The main quality-of-service metrics we consider are the user admission rate, the average transmission delay and the stability of the delay. To quantify the relationship between spectrum provisioning and quality of service, we propose an approximate analytical model and use it to estimate lower and upper bounds on the optimal amount of spectrum; the gap between the bounds is relatively narrow. In addition, we design a simple algorithm that computes the optimum from the bounds. We conduct numerical simulations on a slotted multiple-channel dynamic spectrum access network model, and the results demonstrate the precision of the proposed model. Our work sheds light on the design of game- and auction-based dynamic spectrum sharing mechanisms in cognitive radio networks.
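
An illustrative slotted simulation, not the paper's analytical model: each leased channel is independently busy with a primary user per slot, idle channels each serve one queued secondary packet, and we sweep the number of channels to find the smallest lease meeting a delay target. The arrival rate, occupancy probability, and target are assumptions.

```python
import numpy as np
from collections import deque

def mean_delay(n_channels, arrival_rate, pu_busy_prob, slots=20_000, seed=0):
    """Slotted simulation of a secondary network: each leased channel is
    occupied by a primary user with prob. pu_busy_prob per slot; every
    idle channel serves one queued secondary packet per slot.
    Returns the average queueing delay in slots."""
    rng = np.random.default_rng(seed)
    queue, delays = deque(), []
    for t in range(slots):
        for _ in range(rng.poisson(arrival_rate)):
            queue.append(t)                      # record each packet's arrival slot
        idle = n_channels - rng.binomial(n_channels, pu_busy_prob)
        for _ in range(min(idle, len(queue))):
            delays.append(t - queue.popleft())   # packet served this slot
    return float(np.mean(delays)) if delays else float("inf")

# Smallest number of leased channels meeting an illustrative delay target.
target = 2.0  # slots
for m in range(1, 12):
    d = mean_delay(m, arrival_rate=1.5, pu_busy_prob=0.4)
    if d <= target:
        print(f"lease {m} channels (mean delay {d:.2f} slots)")
        break
```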

Relevance: 100.00%

Abstract:

Static detection of malware variants plays an important role in system security, and control flow has been shown to be an effective characteristic for representing polymorphic malware. In our research, we propose a similarity search for malware that detects these variants using novel distance metrics. We describe a malware signature by the set of control flowgraphs the malware contains. We use a distance metric based on the distance between feature vectors of string-based signatures, where the feature vector is a decomposition of the set of graphs into either fixed-size k-subgraphs or q-gram strings of the high-level source after decompilation; this metric is used for pre-filtering. We also propose a more effective but less computationally efficient distance metric based on the minimum matching distance, which uses the string edit distances between programs' decompiled flowgraphs and the linear sum assignment problem to construct a minimum-weight matching between two sets of graphs. We implement these distance metrics in a complete malware variant detection system. The evaluation shows that our approach is highly effective in terms of a low false positive rate, and our system detects more malware variants than other algorithms. © 2013 IEEE.
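
A sketch of the minimum matching distance, with short strings standing in for decompiled flowgraphs: build the pairwise edit-distance matrix and solve the linear sum assignment problem over it. Padding unequal sets with empty strings is a convention assumed here, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return int(dp[-1])

def min_matching_distance(graphs_a, graphs_b):
    """Minimum matching distance between two sets of flowgraph strings:
    a minimum-weight matching over pairwise edit distances. Unmatched
    graphs (unequal set sizes) are paired with empty strings."""
    n = max(len(graphs_a), len(graphs_b))
    A = graphs_a + [""] * (n - len(graphs_a))
    B = graphs_b + [""] * (n - len(graphs_b))
    cost = np.array([[edit_distance(a, b) for b in B] for a in A])
    rows, cols = linear_sum_assignment(cost)
    return int(cost[rows, cols].sum())

# Illustrative "decompiled flowgraph" strings for a sample and a variant.
sample  = ["if(x)then{A}else{B}", "while(c){D}", "return"]
variant = ["if(x)then{A}else{B2}", "while(c){D;E}", "return"]
print(min_matching_distance(sample, variant))  # small distance: likely a variant
```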