949 results for Localization open issues
Abstract:
Level crossing crashes have been shown to result in enormous human and financial cost to society. According to the Australian Transport Safety Bureau (ATSB) [5], a total of 632 railway level crossing (RLX) collisions between trains and road vehicles occurred in Australia between 2001 and June 2009. The cost of RLX collisions runs into the tens of millions of dollars each year in Australia [6]. In addition, loss of life and injury are commonplace when collisions occur. Based on estimates that 40% of rail-related fatalities occur at level crossings [12], it is estimated that 142 deaths occurred at RLX between 2001 and June 2009. The aim of this paper is to (i) summarise crash patterns in Australia, (ii) review existing international ITS interventions to improve level crossing safety and (iii) highlight open human factors research issues. Human factors (e.g., driver error, lapses or violations) have been evidenced as a significant contributing factor in RLX collisions, with drivers of road vehicles responsible for many collisions. Unintentional errors have been found to contribute to 46% of RLX collisions [6] and appear to be far more commonplace than deliberate violations. Humans have been found to be inherently poor at using the sensory information available to them to facilitate safe decision-making at RLX, and tend to underestimate the speed of approaching large objects because perceived size increases non-linearly with proximity [6]. Collisions resulting from misjudgements of train approach speed and distance are common [20]. Thus, a fundamental goal for improved RLX safety is the provision of sufficient contextual information to road vehicle drivers to facilitate safe decision-making regarding crossing behaviours.
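The perceptual non-linearity cited in this abstract can be sketched with the visual angle subtended by an approaching train. The Python snippet below is an illustrative aside; the train length and distances are arbitrary assumptions, not values from the paper:

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by an object of a given size."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# Illustrative only: a 150 m train viewed at successively halved distances.
# The angle barely changes while the train is far away but roughly doubles
# per halving near the crossing, which is why approach speed is easy to
# underestimate at a distance.
for d in (2000, 1000, 500, 250, 125):
    print(f"{d:4d} m -> {visual_angle_deg(150.0, d):5.2f} deg")
```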
Abstract:
We discuss the design principles of TCP within the context of heterogeneous wired/wireless networks and mobile networking. We identify three shortcomings in TCP's behavior: (i) the protocol's error detection mechanism, which does not distinguish different types of errors and thus does not suffice for heterogeneous wired/wireless environments, (ii) the error recovery, which is not responsive to the distinctive characteristics of wireless networks such as transient or burst errors due to handoffs and fading channels, and (iii) the protocol strategy, which does not control the tradeoff between performance measures such as goodput and energy consumption, and often entails a wasteful effort of retransmission and energy expenditure. We discuss a solution framework based on selected research proposals and the associated evaluation criteria for the suggested modifications. We highlight an important angle that has not yet received the attention it requires: the need for new performance metrics appropriate for evaluating the impact of protocol strategies on battery-powered devices.
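As a sketch of the kind of metric the authors call for, energy-aware performance can be expressed as goodput alongside energy per useful byte. The definitions and numbers below are illustrative assumptions, not the paper's formal metrics:

```python
def goodput_bps(delivered_bytes: int, duration_s: float) -> float:
    """Useful data delivered per second; retransmitted copies excluded."""
    return 8 * delivered_bytes / duration_s

def energy_per_useful_byte(total_energy_j: float, delivered_bytes: int) -> float:
    """Joules per byte of useful data; wasted retransmissions raise it."""
    return total_energy_j / delivered_bytes

# Illustrative comparison of two recovery strategies moving the same payload:
# an aggressive retransmitter (fast but wasteful) vs. a conservative one.
payload = 10_000_000  # bytes delivered to the application
print(goodput_bps(payload, 12.0), energy_per_useful_byte(8.5, payload))  # aggressive
print(goodput_bps(payload, 15.0), energy_per_useful_byte(5.2, payload))  # conservative
```

The tradeoff the abstract highlights is visible here: the conservative strategy sacrifices some goodput for markedly lower energy per useful byte.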
Abstract:
Localization is a fundamental task in Cyber-Physical Systems (CPS), where data is tightly coupled with the environment and the location where it is generated. The research literature on localization has reached a critical mass, and several surveys have also emerged. This review paper contributes to the state of the art by proposing a new and holistic taxonomy of the fundamental concepts of localization in CPS, based on a comprehensive analysis of previous research works and surveys. The main objective is to pave the way towards a deep understanding of the main localization techniques and to unify their descriptions. Furthermore, this review paper provides a complete overview of the most relevant localization and geolocation techniques. We also present the most important metrics for measuring the accuracy of localization approaches, namely the gap between the real location and its estimate. Finally, we present open issues and research challenges pertaining to localization. We believe that this review paper will serve as an important and complete reference on localization techniques in CPS for researchers and practitioners, providing added value compared to previous surveys.
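The accuracy notion defined above (the gap between real and estimated locations) reduces, in the common planar case, to Euclidean error statistics. A minimal sketch follows, with synthetic positions standing in for real CPS data:

```python
import numpy as np

def localization_errors(true_xy: np.ndarray, est_xy: np.ndarray) -> dict:
    """Per-point Euclidean errors plus the usual summary statistics."""
    err = np.linalg.norm(true_xy - est_xy, axis=1)
    return {
        "mean": err.mean(),
        "rmse": np.sqrt((err ** 2).mean()),
        "p95": np.percentile(err, 95),  # tail behaviour matters in CPS
    }

# Synthetic example: ground-truth positions vs. noisy estimates (metres).
rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, size=(1000, 2))
estimate = truth + rng.normal(0, 1.5, size=truth.shape)
print(localization_errors(truth, estimate))
```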
Abstract:
In this thesis, one of the current control algorithms for the R744 cycle, which tries to optimize the performance of the system by two SISO control loops, is compared to a cost-effective system with just one actuator. The operation of a key component of this system, a two-stage orifice expansion valve, is examined in a range of typical climate conditions. One alternative control loop for this system, which has been proposed by the Behr group, is also scrutinized. The simulation results affirm the preference for using two control loops instead of one, but refute the advantages of the Behr alternative control approach over one-loop control. As far as the economic considerations of the A/C unit are concerned, a two-stage orifice expansion valve is desired by the automotive industry; thus, based on the experiment results, an improved logic for the control of this system is proposed. In the second part, it is investigated whether the one-actuator control approach is applicable to a system consisting of two parallel evaporators, to allow passengers to control different climate zones. The simulation results show that, in the case of using a two-stage orifice valve for the front evaporator and a fixed expansion valve for the rear one, a proper distribution of the cooling power between the front and rear compartments is possible for a broad range of climate conditions.
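For readers unfamiliar with the SISO loops mentioned here, the sketch below shows a generic discrete-time PI controller pair. The gains, setpoints, and choice of controlled variables are illustrative assumptions, not the thesis's or Behr's control logic:

```python
class PIController:
    """Minimal discrete-time PI controller (generic, illustrative gains)."""
    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Two independent SISO loops, one per actuator/controlled variable.
loop_a = PIController(kp=0.8, ki=0.2, dt=0.1)   # e.g. high-side pressure
loop_b = PIController(kp=0.5, ki=0.1, dt=0.1)   # e.g. evaporator superheat
u_a = loop_a.step(setpoint=100.0, measurement=95.0)
u_b = loop_b.step(setpoint=10.0, measurement=12.5)
```

Replacing one of the loops with a fixed or two-stage orifice removes an actuator (and its cost), which is exactly the tradeoff the thesis evaluates.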
Abstract:
During recent years, mindfulness-based approaches have been gaining relevance for treatment in clinical populations. Correspondingly, the empirical study of mindfulness has steadily grown; thus, the availability of valid measures of the construct is critically important. This paper gives an overview of the current status in the field of self-report assessment of mindfulness. All eight currently available and validated mindfulness scales (for adults) are evaluated, with a particular focus on their virtues and limitations and on differences among them. It will be argued that none of these scales may be a fully adequate measure of mindfulness, as each of them offers unique advantages but also disadvantages. In particular, none of them seems to provide a comprehensive assessment of all aspects of mindfulness in samples from the general population. Moreover, some scales may be particularly indicated in investigations focusing on specific populations such as clinical samples (Cognitive and Affective Mindfulness Scale, Southampton Mindfulness Questionnaire) or meditators (Freiburg Mindfulness Inventory). Three main open issues are discussed: (1) the coverage of aspects of mindfulness in questionnaires; (2) the nature of the relationships between these aspects; and (3) the validity of self-report measures of mindfulness. These issues should be considered in future developments in the self-report assessment of mindfulness.
Abstract:
This article provides an overview of procedure-related issues and uncertainties in outcomes after transcatheter aortic valve implantation (TAVI). The different access sites, and how to select them for an individual patient, are discussed. The occurrence and potential predictors of aortic regurgitation (AR) after TAVI are also addressed. The different methods to quantify AR are reviewed, and it appears that accurate and reproducible quantification remains suboptimal. Complications such as prosthesis-patient mismatch and conduction abnormalities (and the need for a permanent pacemaker) are discussed, as well as cerebrovascular events, which underline the need for optimal anticoagulation strategies. Finally, recent registries have shown the adoption of TAVI in the real world, but longer follow-up studies are needed to evaluate outcomes as well as prosthesis durability. Future studies, which will address the use of TAVI in pure AR and in lower-risk patients, are also briefly discussed.
Abstract:
An exponential increase in the use of transcatheter aortic valve implantation (TAVI) in patients with severe aortic stenosis has been witnessed over recent years. The current article reviews different areas of uncertainty related to patient selection. The use and limitations of risk scores are addressed first, followed by an extensive discussion of the value of three-dimensional imaging for prosthesis sizing and for the assessment of complex valve anatomy such as degenerated bicuspid valves. The uncertainty about the severity of valvular stenosis in patients with a mismatch between the transvalvular gradient and the aortic valve area, and how the integrated use of echocardiography and computed tomographic imaging may help, is also addressed. Finally, patients referred for TAVI may have concomitant mitral regurgitation and/or coronary artery disease, and the management of these patients is discussed.
Abstract:
No abstract.
Abstract:
Since users have become the focus of product/service design in the last decade, the term User eXperience (UX) has been frequently used in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user's interaction with the product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Due to the significance of UX in the success of mobile video (Jordan, 2002), many researchers have centered on this area, examining users' expectations, motivations, requirements, and usage context. As a result, many influencing factors have been explored (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for the specific mobile video service is lacking to structure such a great number of factors. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditionally used concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user's needs and desires when using the service, emphasizing the user's overall acceptability of the service. Many QoE metrics are able to estimate the user-perceived quality or acceptability of mobile video, but they may not be accurate enough for overall UX prediction due to the complexity of UX. Only a few QoE frameworks have addressed further aspects of UX for mobile multimedia applications, and these still need to be transformed into practical measures. The challenge of optimizing UX remains adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) as well as meeting complicated user requirements (e.g., usage purposes and personal preferences). In this chapter, we investigate the existing important UX frameworks, compare their similarities and discuss some important features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored with comprehensive literature reviews. The proposed framework may benefit the user-centred design of mobile video by taking complete consideration of UX influences, and may improve mobile video service quality by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a great deal of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on issues in various aspects of UX of mobile video. In the conclusion, we suggest some open issues for future study.
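As a hedged illustration of the gap between QoE metrics and full UX prediction discussed above, a typical QoE estimator reduces to a handful of delivery-side factors. The coefficients below are arbitrary placeholders, not a standardized model:

```python
import math

def qoe_estimate(bitrate_kbps: float, stall_ratio: float, stall_events: int) -> float:
    """Illustrative QoE score on a 1-5 MOS-like scale.

    Assumes a logarithmic quality gain with bitrate and a penalty for
    stalling; all coefficients are placeholders for illustration only.
    """
    quality = 1.0 + 1.1 * math.log10(max(bitrate_kbps, 1.0))
    penalty = 1.5 * stall_ratio + 0.3 * stall_events
    return max(1.0, min(5.0, quality - penalty))

print(qoe_estimate(bitrate_kbps=2000, stall_ratio=0.02, stall_events=1))
```

User-side influences such as usage purpose and personal preference, which the chapter's framework covers, have no slot in such a formula, which is exactly the limitation of QoE metrics noted above.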
Abstract:
New stars in galaxies form in dense molecular clouds of the interstellar medium. Measuring how mass is distributed in these clouds is of crucial importance for current theories of star formation, because several open issues in them, such as the strength of the different mechanisms regulating star formation and the origin of stellar masses, can be addressed using detailed information on cloud structure. Unfortunately, quantifying the mass distribution in molecular clouds accurately over a wide spatial and dynamical range is a fundamental problem in modern astrophysics. This thesis presents studies examining the structure of dense molecular clouds and the distribution of mass in them, with the emphasis on nearby clouds that are sites of low-mass star formation. In particular, this thesis concentrates on investigating mass distributions using the near-infrared dust extinction mapping technique. In this technique, the gas column densities towards molecular clouds are determined by examining radiation from stars that shine through the clouds. In addition, the thesis examines the feasibility of using a similar technique to derive the masses of molecular clouds in nearby external galaxies. The papers presented in this thesis demonstrate how the near-infrared dust extinction mapping technique can be used to extract detailed information on the mass distribution in nearby molecular clouds. Furthermore, such information is used to examine characteristics crucial for star formation in the clouds. Regarding the use of the extinction mapping technique in nearby galaxies, the papers of this thesis show that deriving the masses of molecular clouds with the technique suffers from strong biases. However, it is shown that some structural properties can still be examined with the technique.
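The extinction-mapping technique rests on a pair of widely used relations, sketched below. The calibration constants are common literature values and may differ from those adopted in the thesis:

```latex
E(H-K_s) = (H-K_s)_{\mathrm{obs}} - \langle (H-K_s)_{\mathrm{intr}} \rangle,
\qquad A_V \simeq 15.9\, E(H-K_s),
\qquad N(\mathrm{H_2}) \simeq 0.94 \times 10^{21}\, A_V\ \mathrm{cm^{-2}\,mag^{-1}}
```

In words: the observed reddening of background stars relative to their intrinsic colours gives the extinction along each line of sight, which a standard gas-to-dust calibration converts into a column density, and hence a mass, for the cloud.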
Abstract:
Determining the sequence of amino acid residues in a heteropolymer chain of a protein with a given conformation is a discrete combinatorial problem that is not generally amenable to gradient-based continuous optimization algorithms. In this paper we present a new approach to this problem using continuous models. In this modeling, continuous "state functions" are proposed to designate the type of each residue in the chain. Such a continuous model helps define a continuous sequence space in which a chosen criterion is optimized to find the most appropriate sequence. Searching a continuous sequence space using a deterministic optimization algorithm makes it possible to find optimal sequences with much less computation than many other approaches. The computational efficiency of this method is further improved by combining it with a graph spectral method, which explicitly takes into account the topology of the desired conformation and also helps make the combined method more robust. The continuous modeling used here appears to have additional advantages in mimicking folding pathways and in creating energy landscapes that help find sequences with high stability and kinetic accessibility. To illustrate the new approach, a widely used simplifying assumption is made by considering only two types of residues: hydrophobic (H) and polar (P). Self-avoiding compact lattice models are used to validate the method against known results in the literature and against data that can be practically obtained by exhaustive enumeration on a desktop computer. We also present examples of sequence design for the HP models of some real proteins, which are solved in less than five minutes on a single-processor desktop computer. Some open issues and future extensions are noted.
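A minimal sketch of the continuous-relaxation idea follows (illustrative, not the paper's exact formulation or its graph-spectral refinement): each residue carries a state variable in [0, 1] interpolating between H and P, so a standard gradient method can search the sequence space:

```python
import numpy as np

# Illustrative continuous relaxation of HP sequence design. Each residue i
# carries a state x_i in [0,1]; x_i -> 1 means hydrophobic (H), x_i -> 0
# means polar (P). Contact energies are standard HP-like values.
E_HH, E_HP, E_PP = -2.3, -1.0, 0.0

def energy(x: np.ndarray, contacts: list[tuple[int, int]]) -> float:
    """Contact energy with residue types interpolated continuously."""
    e = 0.0
    for i, j in contacts:
        e += (x[i] * x[j] * E_HH
              + (x[i] * (1 - x[j]) + (1 - x[i]) * x[j]) * E_HP
              + (1 - x[i]) * (1 - x[j]) * E_PP)
    return e

def design(contacts, n, steps=2000, lr=0.05):
    x = np.full(n, 0.5)
    for _ in range(steps):
        g = np.zeros(n)          # finite-difference gradient (simple, slow)
        for k in range(n):
            xp = x.copy()
            xp[k] += 1e-6
            g[k] = (energy(xp, contacts) - energy(x, contacts)) / 1e-6
        x = np.clip(x - lr * g, 0.0, 1.0)
    return np.where(x > 0.5, "H", "P")

# Toy conformation: contact map of a 6-residue compact chain.
print(design([(0, 3), (1, 4), (2, 5), (0, 5)], n=6))
```

On this toy contact map every contacting residue is driven to H, since HH contacts are the most favourable and the toy criterion has no solvation term; a realistic design criterion would also reward stability gaps and penalize exposed hydrophobic residues.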
Abstract:
158 pages : graphs.
Abstract:
Point-particle based direct numerical simulation (PPDNS) has been a productive research tool for studying both single-particle and particle-pair statistics of inertial particles suspended in a turbulent carrier flow. Here we focus on its use in addressing particle-pair statistics relevant to the quantification of the turbulent collision rate of inertial particles. PPDNS is particularly useful because the interaction of particles with the small-scale (dissipative) turbulent motion of the carrier flow is most relevant. Furthermore, since the particle size may be much smaller than the Kolmogorov length of the background fluid turbulence, a large number of particles is needed to accumulate meaningful pair statistics. Starting from the relatively simple Lagrangian tracking of so-called ghost particles, PPDNS has significantly advanced our theoretical understanding of the kinematic formulation of the turbulent geometric collision kernel by providing essential data on the dynamic collision kernel, the radial relative velocity, and the radial distribution function. A recent extension of PPDNS is the hybrid direct numerical simulation (HDNS) approach, in which the effect of local hydrodynamic interactions between particles is considered, allowing quantitative assessment of the enhancement of collision efficiency by fluid turbulence. Limitations and open issues in PPDNS and HDNS are discussed. Finally, ongoing studies of the turbulent collision of inertial particles using large-eddy simulations and particle-resolved simulations are briefly discussed.
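The kinematic formulation referred to here is conventionally written as follows (the standard geometric collision kernel for monodisperse particles; the notation is assumed, not quoted from the paper):

```latex
\Gamma(R) = 2\pi R^{2}\, \langle |w_r|(R) \rangle\, g(R)
```

where R is the collision radius (the sum of the particle radii), ⟨|w_r|(R)⟩ the mean radial relative velocity at contact, and g(R) the radial distribution function at contact. PPDNS supplies precisely the two pair statistics on the right-hand side, which is why it has been central to validating this formulation.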
Abstract:
In this paper we consider the continuous weak measurement of a solid-state qubit by single-electron transistors (SETs). For the single-dot SET, we find that in the nonlinear-response regime the signal-to-noise ratio can violate the universal upper bound imposed quantum mechanically on any linear-response detector. We explain the violation by means of the cross-correlation of the detector currents. For the double-dot SET, we discuss its robustness over a wider range of temperatures, its quantum efficiency, and the relevant open issues that remain unresolved.
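Assuming the universal bound meant here is the Korotkov-Averin limit for continuous linear-response detectors, it states that the output spectral peak at the qubit's Rabi frequency can exceed the noise pedestal by at most a factor of four:

```latex
\frac{S_I(\Omega_R) - S_0}{S_0} \le 4
```

where S_I is the detector's output current spectral density, Omega_R the Rabi frequency, and S_0 the background noise floor; the abstract's point is that a nonlinear-response SET can exceed this factor.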