964 results for Challenging problems
Abstract:
Wireless network technologies, such as IEEE 802.11 based wireless local area networks (WLANs), have been adopted in wireless networked control systems (WNCS) for real-time applications. Distributed real-time control requires satisfaction of (soft) real-time performance from the underlying networks for delivery of real-time traffic. However, IEEE 802.11 networks are not designed for WNCS applications. They neither inherently provide quality-of-service (QoS) support, nor explicitly consider the characteristics of the real-time traffic on networked control systems (NCS), i.e., periodic round-trip traffic. Therefore, the adoption of 802.11 networks in real-time WNCSs poses challenging problems for network design and performance analysis. Theoretical methodologies are yet to be developed for computing the best achievable WNCS network performance under the constraints of real-time control requirements. Focusing on IEEE 802.11 distributed coordination function (DCF) based WNCSs, this paper analyses several important NCS network performance indices, such as throughput capacity, round-trip time and packet loss ratio, under the periodic round-trip traffic pattern, a unique feature of typical NCSs. Considering periodic round-trip traffic, an analytical model based on Markov chain theory is developed for deriving these performance indices under a critical real-time traffic condition, at which the real-time performance constraints are marginally satisfied. Case studies are also carried out to validate the theoretical development.
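The Markov-chain analysis sketched above ultimately reduces to a fixed-point problem linking each station's transmission probability to its collision probability. As a hedged illustration of that style of computation, the snippet below solves the classic Bianchi fixed point for saturated 802.11 DCF; this is not the paper's periodic round-trip model, and the contention-window parameters are illustrative assumptions.

```python
# A minimal sketch, assuming the classic Bianchi saturated-DCF equations;
# the paper's own chain models periodic round-trip traffic, so this only
# shows the flavour of the computation. W (minimum contention window) and
# m (maximum backoff stage) are illustrative values, not from the paper.

def tau_of_p(p, W=32, m=5):
    """Per-slot transmission probability of a station, given its
    conditional collision probability p (Bianchi's formula)."""
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def solve_dcf(n, W=32, m=5, iters=60):
    """Solve p = 1 - (1 - tau(p))**(n - 1) for n stations by bisection;
    the right-hand side is decreasing in p, so the root is unique."""
    lo, hi = 0.0, 0.4999
    for _ in range(iters):
        p = 0.5 * (lo + hi)
        if p < 1 - (1 - tau_of_p(p, W, m)) ** (n - 1):
            lo = p      # still below the curve: move right
        else:
            hi = p
    return p, tau_of_p(p, W, m)

p, tau = solve_dcf(n=10)   # 10 contending stations
print(f"collision probability p = {p:.3f}, tx probability tau = {tau:.3f}")
```

From tau one can then assemble slot-level probabilities of idle, success and collision, and hence throughput-style indices of the kind the paper derives.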
Abstract:
Networked control systems (NCSs) offer many advantages over conventional control; however, they also introduce challenging problems such as network-induced delay and packet losses. This paper proposes a predictive compensation approach for simultaneous network-induced delays and packet losses. Unlike the majority of existing NCS control methods, the proposed approach addresses the co-design of both network and controller. It also relaxes the requirements for precise process models and a full understanding of NCS network dynamics. For a series of possible sensor-to-actuator delays, the controller computes a series of corresponding redundant control values. Then, it sends out those control values in a single packet to the actuator. Upon receiving the control packet, the actuator measures the actual sensor-to-actuator delay and computes the control signals from the control packet. When packet dropout occurs, the actuator utilizes past control packets to generate an appropriate control signal. The effectiveness of the approach is demonstrated through examples.
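The packet format and selection rule described above are easy to sketch. Below is a minimal illustration under stated assumptions: the control law, the delay grid (in sampling periods) and all names are hypothetical, not taken from the paper.

```python
# Hedged sketch of the redundant-control-packet scheme described above.

def build_control_packet(controller, state, delays=(1, 2, 3, 4, 5)):
    """Controller side: precompute one control value per candidate
    sensor-to-actuator delay and send them together in one packet."""
    return {d: controller(state, assumed_delay=d) for d in delays}

class Actuator:
    def __init__(self):
        self.last_packet = None

    def actuate(self, packet, measured_delay):
        """Actuator side: select the entry matching the measured delay;
        on dropout (packet is None), reuse the most recent packet."""
        if packet is not None:
            self.last_packet = packet
        if self.last_packet is None:
            return 0.0                      # safe default before any packet
        d = min(self.last_packet, key=lambda k: abs(k - measured_delay))
        return self.last_packet[d]

# Toy usage: a proportional law whose gain is de-tuned for longer delays.
def ctrl(x, assumed_delay):
    return -x / (1.0 + 0.2 * assumed_delay)

pkt = build_control_packet(ctrl, state=0.8)
act = Actuator()
print(act.actuate(pkt, measured_delay=2))   # normal delivery
print(act.actuate(None, measured_delay=4))  # dropout: falls back to pkt
```

The single-packet design trades bandwidth (several control values per sample) for robustness, which matches the network/controller co-design theme of the paper.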
Abstract:
Musculoskeletal injuries are the most common reason for operative procedures in severely injured patients and are major determinants of functional outcomes. In this paper, we summarise advances and future directions for management of multiply injured patients with major musculoskeletal trauma. Improved understanding of fracture healing has created new possibilities for management of particularly challenging problems, such as delayed union and non-union of fractures and large bone defects. Optimum timing of major orthopaedic interventions is guided by increased knowledge about the immune response after injury. Individual treatment should be guided by trading off the benefits of early definitive skeletal stabilisation against the potentially life-threatening risks of systemic complications such as fat embolism, acute lung injury, and multiple organ failure. New methods for measurement of fracture healing and of function and quality-of-life outcomes pave the way for landmark trials that will guide the future management of musculoskeletal injuries.
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management, and it has attracted wide attention from researchers in different research fields. This paper first introduces the main feature selection methods, implementation algorithms and applications of text classification. However, because there is much noise in the knowledge extracted by current data-mining techniques for text classification, much uncertainty arises in the classification process, stemming from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve the performance of text classification. Further improving the process of knowledge extraction and effectively utilising the extracted knowledge remain critical and challenging steps. A Rough Set decision-making approach is proposed that uses Rough Set decision techniques to classify more precisely those textual documents which are difficult to separate by classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate the Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric named CEI, which is very effective for the performance assessment of similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and other related fields.
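For readers unfamiliar with the Rough Set machinery invoked above, the sketch below computes the standard lower and upper approximations of a target document class; documents in the boundary region (upper minus lower) are precisely the hard-to-separate cases. The toy documents and feature values are illustrative assumptions, not data from the paper.

```python
# Minimal Rough Set sketch: lower/upper approximations of a class.
from collections import defaultdict

def approximations(objects, features, target):
    """Group objects into indiscernibility classes by feature vector,
    then take the lower and upper approximations of `target`."""
    classes = defaultdict(set)
    for obj in objects:
        classes[tuple(features[obj])].add(obj)
    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= target:      # wholly inside: certainly in the class
            lower |= eq_class
        if eq_class & target:       # overlaps: possibly in the class
            upper |= eq_class
    return lower, upper

docs = ["d1", "d2", "d3", "d4"]
feats = {"d1": [1, 0], "d2": [1, 0], "d3": [0, 1], "d4": [1, 1]}
sports = {"d1", "d3"}               # documents labelled "sports"
low, up = approximations(docs, feats, sports)
print("certain:", low)              # {'d3'}
print("boundary:", up - low)        # {'d1', 'd2'}: indiscernible, mixed labels
```

Decision rules derived from the lower approximation are certain, while the boundary region flags the documents that need the more careful treatment the paper proposes.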
Abstract:
Philosophical inquiry in the teaching and learning of mathematics has received continued, albeit limited, attention over many years (e.g., Daniel, 2000; English, 1994; Lafortune, Daniel, Fallascio, & Schleider, 2000; Kennedy, 2012a). The rich contributions these communities can offer school mathematics, however, have not received the recognition they deserve, especially from the mathematics education community. This is a perplexing situation given the close relationship between the two disciplines and their shared commitment to empowering students to solve a range of challenging problems, often unanticipated, and often requiring broadened reasoning. In this article, I first present my understanding of philosophical inquiry as it pertains to the mathematics classroom, taking into consideration the significant work that has been undertaken on socio-political contexts in mathematics education (e.g., Skovsmose & Greer, 2012). I then consider one approach to advancing philosophical inquiry in the mathematics classroom, namely, through modelling activities that require interpretation, questioning, and multiple approaches to solution. The design of these problem activities, set within life-based contexts, provides an ideal vehicle for stimulating philosophical inquiry.
Abstract:
For the past few years, research on secure outsourcing of cryptographic computations has drawn significant attention from academics in the security and cryptology disciplines as well as from information security practitioners. One main reason for this interest is its application to resource-constrained devices such as RFID tags. While there has been significant progress in this domain since Hohenberger and Lysyanskaya provided formal security notions for secure computation delegation, some interesting challenges remain whose solution would be useful for a wider deployment of cryptographic protocols that enable secure outsourcing of cryptographic computations. This position paper brings out these challenging problems, with RFID technology as the use case, together with our ideas, where applicable, that can provide a direction towards solving them.
Abstract:
An increasing number of people seek health advice on the web using search engines; this poses challenging problems for current search technologies. In this paper we report an initial study of the effectiveness of current search engines in retrieving relevant information for diagnostic medical circumlocutory queries, i.e., queries issued by people seeking information about their health condition using a description of the symptoms they observe (e.g. hives all over body) rather than the medical term (e.g. urticaria). Such queries frequently arise when people are unfamiliar with a domain or its language, and they are common among health information seekers attempting to self-diagnose or self-treat. Our analysis reveals that current search engines are not equipped to effectively satisfy such information needs; this can have potentially harmful outcomes for people's health. Our results advocate for more research in developing information retrieval methods to support such complex information needs.
Abstract:
The light distribution in the disks of many galaxies is ‘lopsided’, with a spatial extent much larger along one half of a galaxy than the other, as seen in M101. Recent observations show that the stellar disk in a typical spiral galaxy is significantly lopsided, indicating asymmetry in the disk mass distribution. The mean amplitude of lopsidedness is 0.1, measured as the Fourier amplitude of the m=1 component normalized to the average value. Thus, lopsidedness is common, and hence it is important to understand its origin and dynamics. This is a new and exciting area in galactic structure and dynamics, in contrast to the topic of bars and two-armed spirals (m=2), which has been extensively studied in the literature. Lopsidedness is ubiquitous and occurs in a variety of settings and tracers. It is seen in both stars and gas, in the outer disk and the central region, and in field and group galaxies; the lopsided amplitude is higher by a factor of two for galaxies in groups. Lopsidedness has a strong impact on the dynamics of the galaxy, its evolution, the star formation in it, the growth of the central black hole, and the nuclear fuelling. We present here an overview of the observations that measure the lopsided distribution, as well as the theoretical progress made so far to understand its origin and properties. The physical mechanisms studied for its origin include tidal encounters, gas accretion and a global gravitational instability. The related open, challenging problems in this emerging area are discussed.
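In the convention commonly used for such measurements (e.g. by Rix & Zaritsky), the quoted quantity is the m=1 Fourier coefficient of the disk surface density normalised by the axisymmetric (m=0) term:

```latex
\Sigma(R,\phi) = \Sigma_0(R)\Big[\,1 + \sum_{m\ge 1} A_m(R)\,
    \cos\!\big(m\,(\phi - \phi_m(R))\big)\Big], \qquad \langle A_1\rangle \simeq 0.1,
```

so a mean amplitude of 0.1 means the m=1 distortion modulates the average surface density by about ten per cent.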
Abstract:
To remain competitive, many agricultural systems are now being run along business lines. Systems methodologies are being incorporated, and here evolutionary computation is a valuable tool for identifying more profitable or sustainable solutions. However, agricultural models typically pose some of the more challenging problems for optimisation. This chapter outlines these problems, and then presents a series of three case studies demonstrating how they can be overcome in practice. Firstly, increasingly complex models of Australian livestock enterprises show that evolutionary computation is the only viable optimisation method for these large and difficult problems. Ongoing research is taking a notably efficient and robust variant, differential evolution, out into real-world systems. Next, models of cropping systems in Australia demonstrate the challenge of dealing with competing objectives, namely maximising farm profit whilst minimising resource degradation. Pareto methods are used to illustrate this trade-off, and these results have proved most useful for farm managers in this industry. Finally, land-use planning in the Netherlands demonstrates the size and spatial complexity of real-world problems. Here, GIS-based optimisation techniques are integrated with Pareto methods, producing better solutions that were acceptable to the competing organisations. These three studies all show that evolutionary computation remains the only feasible method for the optimisation of large, complex agricultural problems. An extra benefit is that the resultant population of candidate solutions illustrates trade-offs, and this leads to more informed discussions and better education of the industry decision-makers.
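Differential evolution, the variant named above, is compact enough to sketch in full. The following is a minimal DE/rand/1/bin implementation; the objective, bounds, and control parameters (F, CR, population size) are conventional defaults assumed for illustration, not values from the chapter.

```python
# Minimal differential evolution (DE/rand/1/bin) sketch.
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)        # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])   # mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)                       # keep feasible
                else:
                    v = pop[i][j]                                 # inherit
                trial.append(v)
            f_trial = f(trial)
            if f_trial <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy usage: minimise a sphere function standing in for a farm-model objective.
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
print(x, fx)
```

The same loop extends naturally to the multi-objective Pareto setting mentioned above by replacing the greedy selection with non-dominated sorting.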
Abstract:
We present an introductory overview of several challenging problems in the statistical characterization of turbulence. We provide examples from fluid turbulence in three and two dimensions, from the turbulent advection of passive scalars, turbulence in the one-dimensional Burgers equation, and fluid turbulence in the presence of polymer additives.
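For reference, the one-dimensional Burgers equation mentioned above, for a velocity field u(x,t) with viscosity \nu, reads

```latex
\partial_t u + u\,\partial_x u = \nu\,\partial_x^2 u ,
```

retaining the quadratic advective nonlinearity of the Navier-Stokes equations while dropping pressure and incompressibility, which is what makes it a tractable testbed for turbulence statistics.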
Abstract:
Delay and disruption tolerant networks (DTNs) are computer networks in which round-trip delays and error rates are high and disconnections frequent. Examples of these extreme networks are space communications, sensor networks, connecting rural villages to the Internet, and even interconnecting commodity portable wireless devices and mobile phones. Basic elements of delay tolerant networks are store-and-forward message transfer resembling traditional mail delivery, opportunistic and intermittent routing, and an extensible cross-region resource naming service. Individual nodes of the network take an active part in routing the traffic and provide in-network storage for application data that flows through the network. Application architectures for delay tolerant networks also differ from those used in traditional networks. It has become feasible to design applications that are network-aware and opportunistic, taking advantage of different network connection speeds and capabilities. This might change some of the basic paradigms of network application design. DTN protocols also support the design of applications that depend on processes persisting across reboots and power failures. DTN protocols could likewise be applicable to traditional networks in cases where high tolerance to delays or errors is desired. It is apparent that challenged networks also challenge the traditional, strictly layered model of network application design. This thesis provides an extensive introduction to delay tolerant networking concepts and applications. Most attention is given to the challenging problems of routing and application architecture. Finally, future prospects of DTN applications and implementations are envisioned through recent research results and an interview with an active researcher of DTN networks.
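The store-and-forward element described above is straightforward to sketch. The class and method names below are illustrative assumptions rather than any standard DTN API (the real Bundle Protocol is considerably richer):

```python
# Hedged sketch of DTN store-and-forward with opportunistic contacts.

class DtnNode:
    def __init__(self, name):
        self.name = name
        self.buffer = {}                 # bundle_id -> (destination, payload)

    def store(self, bundle_id, dest, payload):
        """In-network storage: hold the bundle until a useful contact."""
        self.buffer[bundle_id] = (dest, payload)

    def contact(self, other):
        """An intermittent contact: deliver bundles addressed to the peer,
        hand the rest over so they keep moving through the network."""
        for bid, (dest, payload) in list(self.buffer.items()):
            if dest == other.name:
                print(f"{other.name} received {bid}: {payload}")
            else:
                other.store(bid, dest, payload)
            del self.buffer[bid]

a, mule, b = DtnNode("a"), DtnNode("mule"), DtnNode("b")
a.store("bundle-1", "b", "sensor reading")
a.contact(mule)   # the data mule happens to pass node a
mule.contact(b)   # later it meets b and the bundle is delivered
```

Real DTN routing schemes (epidemic, PRoPHET and others) differ mainly in deciding which buffered bundles to copy or hand over at each contact.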
Abstract:
This paper presents two simple simulation and modelling tools designed to aid in the safety assessment required for unmanned aircraft operations within unsegregated airspace. First, a fast pair-wise encounter generator is derived to simulate the See and Avoid environment. The utility of the encounter generator is demonstrated through the development of a hybrid database and a statistical performance evaluation of an autonomous See and Avoid decision and control strategy. Second, an unmanned aircraft mission generator is derived to help visualise the impact of multiple persistent unmanned operations on existing air traffic. The utility of the mission generator is demonstrated through an example analysis of a mixed airspace environment using real traffic data in Australia. These simulation and modelling approaches constitute a useful and extensible set of analysis tools that can be leveraged to help explore some of the more fundamental and challenging problems facing civilian unmanned aircraft system integration.
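As a hedged illustration of what a fast pair-wise encounter generator can look like (the paper's actual derivation is not reproduced here), the sketch below back-propagates two random straight-line trajectories from a shared closest-point-of-approach time, so every sample is a conflict geometry by construction. All names, units, and parameter ranges are assumptions.

```python
# Illustrative pair-wise encounter generator for See-and-Avoid simulation.
import math
import random

def make_encounter(t_cpa=60.0, max_miss=150.0):
    """Initial states (x, y, heading, speed) for an ownship and an intruder
    whose paths pass within max_miss metres of each other at time t_cpa."""
    def back_propagate(cpa_x, cpa_y):
        heading = random.uniform(0.0, 2.0 * math.pi)  # rad
        speed = random.uniform(20.0, 70.0)            # m/s, assumed envelope
        # place the aircraft so it reaches (cpa_x, cpa_y) at t = t_cpa
        x = cpa_x - speed * t_cpa * math.cos(heading)
        y = cpa_y - speed * t_cpa * math.sin(heading)
        return (x, y, heading, speed)

    ownship = back_propagate(0.0, 0.0)                # ownship CPA at origin
    miss = random.uniform(0.0, max_miss)              # horizontal miss distance
    ang = random.uniform(0.0, 2.0 * math.pi)
    intruder = back_propagate(miss * math.cos(ang), miss * math.sin(ang))
    return ownship, intruder

own, intruder = make_encounter()
print("ownship :", own)
print("intruder:", intruder)
```

Feeding many such samples through a See and Avoid decision logic yields the kind of statistical performance evaluation described above.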
Abstract:
In 1974, the Russian physicist Vitaly Ginzburg wrote a book entitled 'Key Problems of Physics and Astrophysics', in which he presented a selection of important and challenging problems along with speculations on what the future holds. The selection had a broad range, was highly personalized, and was aimed at the general scientist, for whom it made very interesting reading.
Leak Detection in Pressure Tubes of a Pressurized Heavy-Water Reactor by Acoustic-Emission Technique
Abstract:
Leak detection in the fuel channels is one of the challenging problems during the in-service inspection (ISI) of Pressurised Heavy Water Reactors (PHWRs). In this paper, the use of an acoustic emission (AE) technique together with AE signal analysis is described to detect a leak that was encountered in one (or more) of the 306 fuel channels of the Madras Atomic Power Station (PHWR), Unit I. The paper describes the problems encountered during the ISI, the experimental methods adopted and the results obtained. Results obtained using acoustic emission signal analysis are compared with those obtained from other leak detection methods used in such cases.
Abstract:
An attempt is made to present some challenging problems (mainly to the technically minded researchers) in the development of computational models for certain (visual) processes which are executed with apparently deceptive ease by the human visual system. However, in the interest of simplicity (and with a non-mathematical audience in mind), the presentation is almost completely devoid of mathematical formalism. Some of the findings in biological vision are presented in order to provoke some approaches to their computational models. The development of ideas is not complete, and the vast literature on biological and computational vision cannot be reviewed here. A related but rather specific aspect of computational vision (namely, detection of edges) has been discussed by Zucker, who brings out some of the difficulties experienced in the classical approaches. Space limitations here preclude any detailed analysis of even the elementary aspects of information processing in biological vision. However, the main purpose of the present paper is to highlight some of the fascinating problems in the frontier area of modelling mathematically the human visual system.