911 results for Intelligent load management
Abstract:
In this paper, we study the management and control of service differentiation and guarantees based on the enhanced distributed coordination function (EDCF) in IEEE 802.11e wireless LANs. Backoff-based priority schemes are the major mechanism for Quality of Service (QoS) provisioning in EDCF; however, their control and management remain challenging problems. We have analysed the impact of the backoff and Inter-Frame Space (IFS) parameters of EDCF on saturation throughput and service differentiation. A centralised QoS management and control scheme is proposed, in which the configuration of backoff parameters and admission control are studied. The special role of the access point (AP) and the impact of traffic load are also considered in the scheme. The backoff parameters are adaptively re-configured to increase the level of bandwidth guarantee and the fairness of bandwidth sharing. The proposed management scheme is evaluated in OPNET. Simulation results show the effectiveness of the analytical-model-based admission control scheme. ©2005 IEEE.
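As a rough illustration of the kind of centralised, load-driven re-configuration of backoff parameters described above (not the paper's actual scheme), the sketch below widens the contention window and AIFS of a low-priority class when high-priority load crosses a threshold; the access categories, parameter values, threshold and adaptation rule are all assumptions:

```python
# Illustrative sketch of load-driven EDCF parameter re-configuration.
from dataclasses import dataclass

@dataclass
class EdcfParams:
    cw_min: int   # minimum contention window
    cw_max: int   # maximum contention window
    aifsn: int    # arbitration inter-frame space number

# Hypothetical baseline parameters for two priority classes (illustrative values).
BASELINE = {
    "high_priority": EdcfParams(cw_min=7,  cw_max=15,   aifsn=2),
    "low_priority":  EdcfParams(cw_min=15, cw_max=1023, aifsn=3),
}

def reconfigure(load_by_class):
    """Widen the low-priority contention window and IFS as high-priority load
    grows, preserving differentiation under saturation (illustrative rule only)."""
    params = dict(BASELINE)
    if load_by_class.get("high_priority", 0.0) > 0.6:   # threshold is an assumption
        lp = params["low_priority"]
        params["low_priority"] = EdcfParams(lp.cw_min * 2 + 1, lp.cw_max, lp.aifsn + 1)
    return params

print(reconfigure({"high_priority": 0.7, "low_priority": 0.2}))
```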
Abstract:
The article suggests an approach to designing a Human Resources Intellectual Management System in order to increase the reliability of human resources, using management methods known from the Theory of Management. The article examines the realisation of the subsystem that implements these management methods through a number of management procedures, each executing the corresponding management method.
Abstract:
This thesis is a study of the performance management of Complex Event Processing (CEP) systems. CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, and these characteristics introduce new challenges and opportunities for performance management. Methodologies used to benchmark CEP systems in many performance studies focus on scaling the injected load without considering the impact of the functional capabilities of CEP systems. This thesis proposes an approach that evaluates the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems, CEPBen. The CEPBen benchmark platform explores the fundamental functional performance of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and for evaluating their performance. Studies on factors and new metrics are carried out using the CEPBen platform on Esper. Different measurement points for response time in the performance management of CEP systems are discussed, and the response time of a targeted event is proposed as a quality-of-service metric to be used alongside the traditional response time in CEP systems. Maximum query load is proposed as a capacity indicator with respect to query complexity, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a factor that influences CEP system performance.
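A minimal sketch, not CEPBen itself, of measuring per-event response time across the three functional behaviours the benchmark targets (filtering, transformation and event pattern detection); the event fields, filter predicate and pattern are illustrative assumptions:

```python
# Per-event latency measurement through a tiny filter -> transform -> pattern pipeline.
import time
from collections import deque

WINDOW = deque(maxlen=3)  # sliding window used by the pattern detector

def filter_event(event):
    return event["value"] > 10               # illustrative filter predicate

def transform(event):
    return {**event, "value_scaled": event["value"] * 0.1}

def detect_pattern(event):
    WINDOW.append(event["value"])
    # fires when three consecutive values are strictly increasing
    return len(WINDOW) == 3 and WINDOW[0] < WINDOW[1] < WINDOW[2]

def process(event):
    start = time.perf_counter()
    matched = False
    if filter_event(event):
        matched = detect_pattern(transform(event))
    latency = time.perf_counter() - start    # per-event response time
    return matched, latency

for v in (12, 15, 20, 5):
    print(process({"value": v}))
```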
Abstract:
The article suggests a possible approach to creating an Intellectual Management System for human resources and personnel (while they solve their professional tasks), one that can take into account the personal characteristics and psychological condition of the human resources as an "unreliable" element. The article describes some elements of the Intellectual Management System: the professional activity model and the "unreliable" element (human resources) model.
Abstract:
This article reports on an investigation with first year undergraduate Product Design and Management students within a School of Engineering and Applied Science. The students at the time of this investigation had studied fundamental engineering science and mathematics for one semester. The students were given an open-ended, ill-formed problem which involved designing a simple bridge to cross a river. They were given a talk on problem solving and given a rubric to follow, if they chose to do so. They were not given any formulae or procedures needed in order to resolve the problem. In theory, they possessed the knowledge to ask the right questions in order to make assumptions but, in practice, it turned out they were unable to link their a priori knowledge to resolve this problem. They were able to solve simple beam problems when given closed questions. The results show they were unable to visualize a simple bridge as an augmented beam problem and ask pertinent questions and hence formulate appropriate assumptions in order to offer resolutions.
Abstract:
GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs on a single PC efficiently. GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With the novel technique of parallel sliding windows (PSW) to load subgraphs from disk into memory for updating vertices and edges, it can achieve data processing performance close to, and sometimes better than, that of mainstream distributed graph engines. The GraphChi authors noted that memory is not utilized effectively with large datasets, which leads to suboptimal computational performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode is implemented with only about 40 additional lines of code over the original GraphChi engine. Extensive experiments are performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach reduces GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, it is found that pinning a larger portion of the data in memory does not always lead to better performance when the whole dataset cannot fit in memory; there exists an optimal portion of data to keep in memory that achieves the best computational performance.
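A minimal sketch of the "pin a fixed part of the data in memory" idea, not the GraphChi implementation: a chosen fraction of vertex values stays resident across passes while the remainder is reloaded from disk on every pass. The file layout, pin fraction and update rule are assumptions made for illustration:

```python
# Part of the vertex data is pinned in RAM; the rest is streamed from disk each pass.
import pickle

PIN_FRACTION = 0.4          # portion of vertices kept resident across iterations

def load_from_disk(path):
    with open(path, "rb") as f:
        return pickle.load(f)                     # {vertex_id: value}

def run_passes(path, n_passes, update):
    all_vertices = load_from_disk(path)
    pinned_ids = set(list(all_vertices)[: int(len(all_vertices) * PIN_FRACTION)])
    pinned = {v: all_vertices[v] for v in pinned_ids}
    for _ in range(n_passes):
        # unpinned values are (re)loaded from disk on every pass
        unpinned = {v: x for v, x in load_from_disk(path).items() if v not in pinned_ids}
        for v in pinned:
            pinned[v] = update(pinned[v])
        for v in unpinned:
            unpinned[v] = update(unpinned[v])
        # in a real engine the unpinned values would be written back to disk here
    return pinned

# build a tiny on-disk "graph" so the sketch runs end to end
with open("vertices.pkl", "wb") as f:
    pickle.dump({i: 1.0 for i in range(10)}, f)

print(run_passes("vertices.pkl", n_passes=3, update=lambda x: 0.85 * x + 0.15))
```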
Abstract:
This paper describes a prototype of the intelligent hearing-investigation system developed at the Tver State Technical University. The problem of automatic diagnostics is discussed, treated as the problem of recognising an incompletely determined object against a set of disease-class descriptions. A management strategy for the hearing investigation is proposed.
Abstract:
Increased global uptake of entertainment gaming has the potential to lead to high expectations of engagement and interactivity from users of technology-enhanced learning environments. Blended approaches to implementing game-based learning as part of distance or technology-enhanced education have demonstrated the benefits they might bring, allowing learners to interact with immersive technologies as part of a broader, structured learning experience. In this article, we explore how the integration of a serious game can be extended to a learning content management system (LCMS) to support a blended and holistic approach, described as an 'intuitive-guided' method. Through a case study within the EU-funded Adaptive Learning via Intuitive/Interactive, Collaborative and Emotional Systems (ALICE) project, a technical integration of a gaming engine with a proprietary LCMS is presented, building upon earlier work and showing how this approach might be realized. In particular, we consider how this method can support an intuitive-guided approach to learning, whereby the learner is given the freedom to explore a non-linear environment whilst scaffolding and blending provide guidance to ensure targeted learning objectives are met. Through an evaluation of the developed prototype with 32 students aged 14-16 across two Italian schools, a varied response from learners is observed, coupled with a positive reception from tutors. The study demonstrates that challenges remain in providing high-fidelity content in a classroom environment, particularly as an increasing gap in technology availability between leisure and school time emerges.
Abstract:
The accelerated technological development of recent decades poses serious challenges to companies and individuals alike, calling for institutionalised "future management". In this study the authors review the relevant foundations of futures studies, technology management, innovation management and other approaches, their relationships and their possible integration, and present the key areas and technology trends. They also identify the fundamental management questions, lessons and dilemmas that may be of interest and use to any company that wants to exploit the opportunities of emerging technology, or simply to stay on its feet amid development that is too fast to follow.
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of the Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared with a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. The application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritance and many-to-many relations; such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS under test. The results of the benchmark are analyzed. A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and it has several advantages over existing interfaces: it is optimizable and parallelizable, and it supports the definition of semantic user views and the interoperability of semantic databases with other data sources such as the World Wide Web, relational, and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases. The analysis and high-level design of a system that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use, allowing uniform access to heterogeneous data sources such as semantic databases, relational databases, web sites, ASCII files, and others via a common query interface, is also presented. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.
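As a hedged illustration of a query structured as a schema graph with interlinking conditionals (not the actual Sem-ODB interface), the sketch below matches a small graph pattern with an attached condition over toy binary-relation facts and returns the result as a mini-database of facts; all relation names and data are invented:

```python
# Toy graph-pattern query over binary-relation facts; the result is itself a set of facts.
FACTS = [                      # (subject, relation, object)
    ("alice", "enrolled_in", "cs101"),
    ("bob",   "enrolled_in", "cs101"),
    ("cs101", "taught_by",   "prof_x"),
    ("cs102", "taught_by",   "prof_y"),
]

# Query graph: variables start with '?', edges are relations, plus a conditional.
QUERY_EDGES = [("?s", "enrolled_in", "?c"), ("?c", "taught_by", "?t")]
def condition(binding):
    return binding["?t"] == "prof_x"

def match(edges, facts, binding=None):
    binding = binding or {}
    if not edges:
        yield dict(binding)
        return
    (s, r, o), rest = edges[0], edges[1:]
    for fs, fr, fo in facts:
        if fr != r:
            continue
        b, ok = dict(binding), True
        for var, val in ((s, fs), (o, fo)):
            if var.startswith("?"):
                ok = ok and b.get(var, val) == val
                b[var] = val
            else:
                ok = ok and var == val
        if ok:
            yield from match(rest, facts, b)

matches = [b for b in match(QUERY_EDGES, FACTS) if condition(b)]
# Mini-database: only the facts participating in accepted matches.
mini_db = {(b["?s"], "enrolled_in", b["?c"]) for b in matches} | \
          {(b["?c"], "taught_by", b["?t"]) for b in matches}
print(matches, mini_db)
```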
Abstract:
Next-generation integrated wireless local area network (WLAN) and 3G cellular networks aim to take advantage of the roaming ability of a cellular network and the high data rate services of a WLAN. To ensure successful implementation of an integrated network, many issues must be carefully addressed, including network architecture design, resource management, quality of service (QoS), call admission control (CAC) and mobility management. This dissertation focuses on QoS provisioning, CAC, and network architecture design in the integration of WLANs and cellular networks. First, a new scheduling algorithm and a call admission control mechanism for the IEEE 802.11 WLAN are presented to support multimedia services with QoS provisioning. The proposed scheduling algorithm makes use of idle system time to reduce the average packet loss of real-time (RT) services. The admission control mechanism provides long-term transmission quality for both RT and non-real-time (NRT) services by bounding the packet loss ratio for RT services and guaranteeing the throughput for NRT services. Second, a joint CAC scheme is proposed to efficiently balance traffic load in the integrated environment. A channel searching and replacement algorithm (CSR) is developed to relieve traffic congestion in the cellular network by using idle channels in the WLAN. The CSR is optimized to minimize the system cost in terms of blocking probability in the interworking environment. Specifically, it is proved that there exists an optimal admission probability for passive handoffs that minimizes the total system cost, and a method for finding this probability is designed based on linear-programming techniques. Finally, a new integration architecture, Hybrid Coupling with Radio Access System (HCRAS), is proposed to lower the average cost of intersystem communication (IC) and the vertical handoff latency. An analytical model is presented to evaluate the performance of HCRAS in terms of an intersystem communication cost function and a handoff cost function. Based on this model, an algorithm is designed to determine the optimal route for each intersystem communication. Additionally, a fast handoff algorithm is developed to reduce the vertical handoff latency.
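To make the idea of an optimal admission probability for passive handoffs concrete, the sketch below searches a simple blocking-cost model built on the classical Erlang-B formula; the loads, channel counts, cost weights and the coarse grid search are illustrative assumptions, not the dissertation's CSR model or its linear-programming method:

```python
# Searching for an admission probability p that minimises a blocking-based system cost.
def erlang_b(load, channels):
    """Blocking probability for an offered load (in Erlangs) on `channels` channels."""
    b = 1.0
    for n in range(1, channels + 1):
        b = load * b / (n + load * b)
    return b

def system_cost(p, cell_load=20.0, wlan_load=5.0, cell_ch=24, wlan_ch=10,
                w_cell=1.0, w_wlan=0.5):
    # fraction p of handoff traffic is admitted into the WLAN instead of the cellular network
    handoff = 4.0
    cell_block = erlang_b(cell_load + (1 - p) * handoff, cell_ch)
    wlan_block = erlang_b(wlan_load + p * handoff, wlan_ch)
    return w_cell * cell_block + w_wlan * wlan_block

# coarse one-dimensional search for the cost-minimising admission probability
best_p = min((i / 100 for i in range(101)), key=system_cost)
print(best_p, system_cost(best_p))
```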
Abstract:
A wireless mesh network is a mesh network implemented over a wireless network system such as wireless LANs. Wireless Mesh Networks (WMNs) are promising for numerous applications such as broadband home networking, enterprise networking, transportation systems, health and medical systems, and security surveillance systems, and they have therefore received considerable attention from both industrial and academic researchers. This dissertation explores schemes for resource management and optimization in WMNs by means of network routing and network coding. We propose three optimization schemes. (1) First, a triple-tier optimization scheme is proposed for the load-balancing objective. The first-tier mechanism achieves long-term routing optimization; the second-tier mechanism uses the optimization results obtained from the first tier to perform short-term adaptation to dynamic channel conditions. A greedy sub-channel allocation algorithm is developed as the third-tier optimization to further reduce the congestion level in the network. We conduct thorough theoretical analysis to show the correctness of our design and establish the properties of our scheme. (2) Then, a Relay-Aided Network Coding scheme called RANC is proposed to improve the performance gain of network coding by exploiting the physical-layer multi-rate capability in WMNs. We conduct rigorous analysis to find the design principles and study the trade-off in the performance gain of RANC. Based on the analytical results, we provide a practical solution by decomposing the original design problem into two sub-problems: a flow partition problem and a scheduling problem. (3) Lastly, a joint optimization scheme of routing in the network layer and network-coding-aware scheduling in the MAC layer is introduced. We formulate the network optimization problem and exploit its structure via dual decomposition, finding that it is composed of a routing problem in the network layer and a scheduling problem in the MAC layer, coupled through the link capacities. We solve the routing problem by two different adaptive routing algorithms and then provide a distributed coding-aware scheduling algorithm. Experimental results show that the proposed schemes can significantly improve network performance.
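A hedged sketch of the greedy sub-channel allocation idea mentioned as the third tier; the allocation rule (assign each link, heaviest demand first, to the currently least-loaded sub-channel) and the example loads are illustrative assumptions rather than the dissertation's algorithm:

```python
# Greedy sub-channel allocation to spread link demand and reduce per-channel congestion.
from collections import defaultdict

def greedy_allocate(link_demands, n_subchannels):
    """link_demands: {link_id: offered_load}; returns {link_id: subchannel_index}."""
    channel_load = defaultdict(float)
    assignment = {}
    for link, demand in sorted(link_demands.items(), key=lambda kv: -kv[1]):
        ch = min(range(n_subchannels), key=lambda c: channel_load[c])
        assignment[link] = ch
        channel_load[ch] += demand
    return assignment

print(greedy_allocate({"l1": 3.0, "l2": 2.5, "l3": 1.0, "l4": 0.5}, n_subchannels=2))
```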
Abstract:
This study examined the effectiveness of intelligent tutoring system instruction, grounded in John Anderson's ACT theory of cognition, on the achievement and attitudes of developmental mathematics students in a community college setting. The quasi-experimental research used a pretest-posttest control group design. The dependent variables were problem-solving achievement, overall achievement, and attitude towards mathematics; the independent variable was the instructional method. Four intact classes and two instructors participated in the study for one semester. Two classes (n = 35) served as experimental groups; they received six lessons with real-world problems using intelligent tutoring system instruction. The other two classes (n = 24) served as control groups; they received six lessons with real-world problems using traditional instruction with graphing calculator support. It was hypothesized that students taught problem solving using the intelligent tutoring system would score higher on the dependent variables than students taught without it. Posttest mean scores for one teacher showed a significant difference in overall achievement in favour of the experimental group; the same teacher's experimental group also had higher, though not significantly different, means in problem-solving achievement. The study did not indicate a significant difference in attitude mean scores. It was concluded that using an intelligent tutoring system in problem-solving instruction may impact students' overall mathematics achievement and problem-solving achievement. Other factors must be considered, such as the teacher's classroom experience, the teacher's experience with the intelligent tutoring system, trained technical support, and trained student support, as well as student learning styles, motivation, and overall mathematics ability.
Abstract:
The outcome of this research is an Intelligent Retrieval System for Conditions of Contract Documents. The objective of the research is to improve the method of retrieving data from a computer version of a construction Conditions of Contract document. SmartDoc, a prototype computer system, has been developed for this purpose. The system provides recommendations that aid the user in retrieving clauses from the construction Conditions of Contract document. The prototype integrates two computer technologies: hypermedia and expert systems. Hypermedia is utilized to provide a dynamic way of retrieving data from the document. Expert systems technology is utilized to build a set of rules that trigger the recommendations aiding the user during clause retrieval. The rules are based on expert knowledge. The prototype system helps the user retrieve related clauses that are not explicitly cross-referenced but, according to expert experience, are relevant to the topic the user is interested in.
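A toy sketch of the rule-based recommendation idea, not SmartDoc's actual rule base: expert-authored rules link a retrieved clause to related clauses that are not explicitly cross-referenced in the document; all clause names below are invented for illustration:

```python
# Expert-authored rules mapping a retrieved clause to related clauses worth suggesting.
RULES = {
    "Clause 19 (Delays)":     ["Clause 30 (Claims)", "Clause 22 (Extension of Time)"],
    "Clause 12 (Variations)": ["Clause 27 (Payment)", "Clause 30 (Claims)"],
}

def recommend(retrieved_clause):
    """Return advisory prompts for clauses related to the one just retrieved."""
    return [f"Consider also: {clause}" for clause in RULES.get(retrieved_clause, [])]

print(recommend("Clause 19 (Delays)"))
```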
Abstract:
This paper reports on an investigation with first year undergraduate Product Design and Management students within a School of Engineering. The students at the time of this investigation had studied fundamental engineering science and mathematics for one semester. The students were given an open-ended, ill-formed problem which involved designing a simple bridge to cross a river. They were given a talk on problem solving and given a rubric to follow, if they chose to do so. They were not given any formulae or procedures needed in order to resolve the problem. In theory, they possessed the knowledge to ask the right questions in order to make assumptions but, in practice, it turned out they were unable to link their a priori knowledge to resolve this problem. They were able to solve simple beam problems when given closed questions. The results show they were unable to visualise a simple bridge as an augmented beam problem and ask pertinent questions and hence formulate appropriate assumptions in order to offer resolutions.