972 results for Procedure for Multiple Classifications
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first shows that Data Synchronization Error (DSE) is the most prevalent uncertainty inherent to SHM-oriented WSNs. The effects of this factor on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques are then investigated when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), both of which have been widely applied over the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as benchmark data after a certain level of noise is added, to account for the greater presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations were run to generate multiple DSE-corrupted datasets and facilitate statistical analyses. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, the combination of the preferred OMA techniques, together with the use of channel projection for the time-domain OMA technique, is recommended to cope with DSE.
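The FDD family referenced above is compact enough to sketch: estimate the cross-spectral density (CSD) matrix of the acceleration channels, then take an SVD at each frequency line; peaks in the first singular value indicate candidate modes. A minimal illustration on synthetic two-channel data (not the paper's bridge dataset or setup):

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_values(acc, fs, nperseg=1024):
    """Peaks of the first singular value of the cross-spectral density
    (CSD) matrix across frequency lines indicate candidate modes."""
    n_ch = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    # SVD at every frequency line; the first singular value traces the
    # contribution of the dominant mode
    s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
    return f, s1

# Synthetic benchmark: two noisy channels sharing a 5 Hz mode
fs = 256
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 5 * t)
acc = np.vstack([mode + 0.3 * rng.standard_normal(t.size),
                 0.8 * mode + 0.3 * rng.standard_normal(t.size)])
f, s1 = fdd_first_singular_values(acc, fs)
print(f[np.argmax(s1)])  # peak at the 5 Hz mode
```

In a real multi-setup merge, the CSD matrix would span all measured channels, and mode shapes would be read from the first left singular vectors at the identified peaks.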
Abstract:
In Inglis v Connell [2003] QDC 029 the court considered s6(3) of the Personal Injuries Proceedings Act 2002 in relation to the application of the Act. The conclusion reached was that the provision should be interpreted as providing that the requirements of the Act do not apply in respect of personal injury the subject of any proceeding commenced before June 18, 2002.
Abstract:
In McCoombes v Curragh Queensland Mining Ltd [2001] QDC 142 the court considered a number of significant issues in relation to assessments of costs under the Uniform Civil Procedure Rules 1999 (Qld). The Court of Appeal subsequently declined an application for leave to appeal the decision under s118(3) of the District Court Act 1967 (McCoombes v Curragh Queensland Mining Ltd [2001] QCA 379). The judgment in the District Court, and on some matters the subsequent observations of the Court of Appeal, provide clarification on many issues relating to the assessment of costs under the UCPR.
Abstract:
This paper introduces a straightforward method to asymptotically solve a variety of initial and boundary value problems for singularly perturbed ordinary differential equations whose solution structure can be anticipated. The approach is simpler than conventional methods, including those based on asymptotic matching or on eliminating secular terms. © 2010 by the Massachusetts Institute of Technology.
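The class of problems targeted can be illustrated with a standard textbook boundary-layer example (not one taken from the paper itself), where the solution structure, an outer solution plus a layer of width $\varepsilon$ at $x=0$, is anticipated in advance:

```latex
\[
\varepsilon y'' + y' + y = 0, \qquad y(0)=0,\quad y(1)=1, \qquad 0<\varepsilon\ll 1 .
\]
Setting $\varepsilon = 0$ gives the outer equation $y' + y = 0$; applying the
right-hand boundary condition yields
\[
y_{\mathrm{out}}(x) = e^{1-x}.
\]
In the layer, with the stretched variable $X = x/\varepsilon$, the leading-order
inner equation $Y'' + Y' = 0$ gives $Y = A + B e^{-X}$. Matching to
$y_{\mathrm{out}}(0) = e$ forces $A = e$, and the condition $Y(0)=0$ gives
$B = -e$, so the composite approximation is
\[
y(x) \approx e^{1-x} - e^{\,1 - x/\varepsilon}.
\]
```

The anticipated structure (outer solution plus exponential layer) is what lets the expansion be written down directly, without formal matching machinery.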
Abstract:
NLS is a stream cipher which was submitted to the eSTREAM project. A linear distinguishing attack against NLS, called the Crossword Puzzle (CP) attack, was presented by Cho and Pieprzyk. NLSv2 is a tweaked version of NLS which aims mainly at avoiding the CP attack. In this paper, a new distinguishing attack against NLSv2 is presented. The attack exploits high correlation amongst neighboring bits of the cipher. The paper first shows that modular addition preserves pairwise correlations, as demonstrated by the existence of linear approximations with large biases. Next, it shows how to combine these results with the high correlation between bits 29 and 30 of the S-box to obtain a distinguisher whose bias is around 2^−37. Consequently, we claim that NLSv2 is distinguishable from a random cipher after observing around 2^74 keystream words.
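The quoted data complexity follows the standard rule of thumb that a distinguisher with bias ε needs on the order of ε⁻² samples, and the piling-up lemma governs how component biases combine. A small sketch in log2 form (the component biases below are hypothetical, chosen only so the product lands at 2^−37; they are not the actual NLSv2 approximations):

```python
def pile_up(biases_log2):
    """Piling-up lemma in log2 form: XOR-combining n independent linear
    approximations with biases 2**b_i yields bias 2**(n - 1 + sum(b_i))."""
    return len(biases_log2) - 1 + sum(biases_log2)

# Hypothetical component biases (illustrative only, NOT the actual
# NLSv2 approximations): three approximations combining to 2^-37
eps_log2 = pile_up([-10, -14, -15])
# A distinguisher needs on the order of eps^-2 samples
data_log2 = -2 * eps_log2
print(eps_log2, data_log2)  # -37 74
```

With a bias of 2^−37, the ε⁻² rule gives the 2^74 keystream words cited in the abstract.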
Abstract:
The value of information technology (IT) is often realized through continued use after users' initial acceptance. However, previous research on continuing IT usage is limited, both in dismissing the importance of mental goals in directing users' behaviors and in inadequately accommodating the group context of users. This in-progress paper synthesizes several streams of literature to conceptualize continuing IT usage as a set of multilevel constructs and to view IT usage behavior as directed and energized by a set of mental goals. Drawing on self-regulation theory in social psychology, the paper proposes a process model positioning continuing IT usage as multiple-goal pursuit. An agent-based modeling approach is suggested to further explore the causal and analytical implications of the proposed process model.
Abstract:
The ability to build high-fidelity 3D representations of the environment from sensor data is critical for autonomous robots. Multi-sensor data fusion allows for more complete and accurate representations. Furthermore, using distinct sensing modalities (i.e. sensors using a different physical process and/or operating at different electromagnetic frequencies) usually leads to more reliable perception, especially in challenging environments, as modalities may complement each other. However, they may react differently to certain materials or environmental conditions, leading to catastrophic fusion. In this paper, we propose a new method to reliably fuse data from multiple sensing modalities, including in situations where they detect different targets. We first compute distinct continuous surface representations for each sensing modality, with uncertainty, using Gaussian Process Implicit Surfaces (GPIS). Second, we perform a local consistency test between these representations, to separate consistent data (i.e. data corresponding to the detection of the same target by the sensors) from inconsistent data. The consistent data can then be fused together, using another GPIS process, and the rest of the data can be combined as appropriate. The approach is first validated using synthetic data. We then demonstrate its benefit using a mobile robot, equipped with a laser scanner and a radar, which operates in an outdoor environment in the presence of large clouds of airborne dust and smoke.
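The consistency gate described above can be illustrated with a 1-D stand-in for GPIS: fit an independent Gaussian process to each modality and fuse only where the posteriors agree within their combined uncertainty. Everything below (data, kernel, threshold) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def gp_posterior(X, y, Xs, ls=0.2, sf=1.0, sn=0.1):
    """Plain GP regression with an RBF kernel: posterior mean/std at Xs."""
    k = lambda a, b: sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)
    L = np.linalg.cholesky(k(X, X) + sn**2 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    sd = np.sqrt(np.maximum(sf**2 - np.sum(v**2, axis=0), 1e-12))
    return mu, sd

# Two modalities observe the same 1-D surface, but the laser-like sensor
# returns the top of a dust cloud for x > 1 (illustrative data only)
Xs = np.linspace(0, 2, 81)
x_obs = np.linspace(0, 2, 25)
y_laser = np.where(x_obs > 1.0, 1.0, 0.0)   # corrupted beyond x = 1
y_radar = np.zeros_like(x_obs)              # sees the true ground
m1, s1 = gp_posterior(x_obs, y_laser, Xs)
m2, s2 = gp_posterior(x_obs, y_radar, Xs)
# Local consistency test: keep only points where the two posteriors
# agree within twice their combined standard deviation
consistent = np.abs(m1 - m2) < 2.0 * np.sqrt(s1**2 + s2**2)
print(consistent[Xs < 0.5].all(), consistent[Xs > 1.5].any())
```

The consistent region would then be fused with a further GP, while the inconsistent region is handled per-modality, mirroring the paper's two-stage structure.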
Abstract:
This (seat) attribute target list and Design for Comfort taxonomy report is based on the literature review report (C3-21, Milestone 1), which specified different areas (factors) with specific influence on automotive seat comfort. The attribute target list summarizes the seat factors established in the literature review (Figure 1) and subsumes detailed attributes, derived from the literature findings, within these factors/classes. The attribute target list (Milestone 2) then provides the basis for the “Design for Comfort” taxonomy (Milestone 3) and helps the project develop target settings (values) that will be measured during the testing phase of the C3-21 project. The attribute target list will become the core technical description of seat attributes, to be incorporated into the final comfort procedure that will be developed. The Attribute Target List and Design for Comfort Taxonomy complete the target definition process. They specify the context, markets and application (vehicle classes) for seat development. As multiple markets are addressed, the target setting requires flexible variables to accommodate the selected customer range. These ranges will be filled with data in forthcoming studies. The taxonomy describes how and where the targets are derived, reference points and standards, engineering and subjective data from previous studies, as well as literature findings. The comfort parameters are ranked to identify which targets, variables or metrics have the largest influence on comfort. Comfort areas included are seat kinematics (adjustability), seat geometry and pressure distribution (static comfort), seat thermal behavior and noise/vibration transmissibility (cruise comfort), and finally material properties, design and features (seat harmony). Data from previous studies is fine-tuned and will be validated in the nominated contexts and markets in dedicated follow-up studies.
Abstract:
This paper presents a novel method to rank map hypotheses by the quality of localization they afford. The highest-ranked hypothesis at any moment becomes the active representation that is used to guide the robot to its goal location. A single static representation is insufficient for navigation in dynamic environments where paths can be blocked periodically, a common scenario which poses significant challenges for typical planners. In our approach we simultaneously rank multiple map hypotheses by the influence that localization in each of them has on locally accurate odometry. This is done online for the current locally accurate window by formulating a factor graph of odometry relaxed by localization constraints. Comparing the resulting perturbed odometry of each hypothesis with the original odometry yields a score that can be used to rank map hypotheses by their utility. We deploy the proposed approach on a real robot navigating a structurally noisy office environment. The configuration of the environment is physically altered outside the robot's sensory horizon during navigation tasks to demonstrate the proposed approach to hypothesis selection.
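A heavily simplified stand-in for the scoring idea above: the actual method relaxes a factor graph of odometry with localization constraints, while here a hypothetical weighted blend plays that role; the intuition that a good map barely perturbs locally accurate odometry carries over:

```python
import numpy as np

def perturbed_odometry(odom, loc, w_odom=1.0, w_loc=0.5):
    """Crude stand-in for the factor-graph relaxation: blend each odometry
    pose with the pose implied by localization in one map hypothesis."""
    odom, loc = np.asarray(odom, float), np.asarray(loc, float)
    return (w_odom * odom + w_loc * loc) / (w_odom + w_loc)

def rank_hypotheses(odom, hypotheses):
    """Smaller perturbation of the locally accurate odometry = better map."""
    scores = {name: float(np.linalg.norm(
                  perturbed_odometry(odom, loc) - np.asarray(odom, float)))
              for name, loc in hypotheses.items()}
    return sorted(scores, key=scores.get), scores

odom = [[0, 0], [1, 0], [2, 0]]                         # local x-y poses
hyps = {"map_open":    [[0, 0], [1.02, 0.01], [2.01, -0.02]],
        "map_blocked": [[0, 0], [1.40, 0.60], [2.50, 1.10]]}
order, scores = rank_hypotheses(odom, hyps)
print(order[0])  # the consistent hypothesis ranks first
```

The hypothesis names and blending weights are invented for illustration; the paper's score comes from the optimized factor-graph solution, not a fixed blend.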
Abstract:
The business model concept is gaining traction in different disciplines but is still criticized for being fuzzy and vague and for lacking consensus on its definition and compositional elements. In this paper we set out to advance our understanding of the business model concept by addressing three areas of foundational research: business model definitions, business model elements, and business model archetypes. We define a business model as a representation of the value logic of an organization in terms of how it creates and captures customer value. This abstract and generic definition is made more specific and operational by the compositional elements, which need to address the customer, value proposition, organizational architecture (firm and network level) and economics dimensions. Business model archetypes complement the definition and elements by providing a more concrete and empirical understanding of the business model concept. The main contributions of this paper are (1) explicitly including the customer value concept in the business model definition and focusing on value creation, (2) presenting four core dimensions that business model elements need to cover, (3) arguing for flexibility by adapting and extending business model elements to cater for different purposes and contexts (e.g. technology, innovation, strategy), (4) stressing a more systematic approach to business model archetypes by using business model elements for their description, and (5) suggesting the use of business model archetype research for the empirical exploration and testing of business model elements and their relationships.
Abstract:
Live migration of multiple Virtual Machines (VMs) has become an integral management activity in data centers for power saving, load balancing and system maintenance. While state-of-the-art live migration techniques focus on improving the migration performance of a single independent VM, little has been investigated for the case of live migration of multiple interacting VMs. Live migration is mostly influenced by the network bandwidth, and arbitrarily migrating a VM which has data inter-dependencies with other VMs may increase the bandwidth consumption and adversely affect the performance of subsequent migrations. In this paper, we propose a Random Key Genetic Algorithm (RKGA) that efficiently schedules the migration of a given set of VMs, accounting for both inter-VM dependencies and the data center communication network. The experimental results show that the RKGA can schedule the migration of multiple VMs with significantly shorter total migration time and total downtime compared to a heuristic algorithm.
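Random-key GAs are easy to sketch: each chromosome is a vector of floats whose argsort decodes to a migration order. The cost model below is a deliberately toy proxy (keeping inter-dependent VMs close together in the schedule), not the paper's network-aware objective:

```python
import random

def decode(keys):
    """Random-key decoding: argsort of the float keys gives a VM order."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def fitness(keys, deps):
    """Toy cost model (not the paper's objective): dependent VMs that
    migrate far apart keep cross-host traffic alive longer."""
    pos = {vm: p for p, vm in enumerate(decode(keys))}
    return sum(abs(pos[a] - pos[b]) for a, b in deps)

def rkga(n_vms, deps, pop_size=40, gens=60, elite_frac=0.2,
         mut_rate=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_vms)] for _ in range(pop_size)]
    n_elite = int(elite_frac * pop_size)
    for _ in range(gens):
        pop.sort(key=lambda k: fitness(k, deps))
        nxt = pop[:n_elite]                      # elitism
        while len(nxt) < pop_size:
            a, b = rng.choice(pop[:n_elite]), rng.choice(pop)
            child = [x if rng.random() < 0.7 else y for x, y in zip(a, b)]
            if rng.random() < mut_rate:          # random-reset mutation
                child[rng.randrange(n_vms)] = rng.random()
            nxt.append(child)
        pop = nxt
    best = min(pop, key=lambda k: fitness(k, deps))
    return decode(best), fitness(best, deps)

# 6 VMs; pairs (0, 1) and (2, 3) exchange data, so each pair should
# migrate back-to-back (minimum achievable cost is 2)
order, cost = rkga(6, [(0, 1), (2, 3)])
print(order, cost)
```

The random-key encoding is what makes standard real-valued crossover and mutation safe here: any float vector decodes to a valid permutation, so no repair step is needed.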
Abstract:
This chapter describes decentralized data fusion algorithms for a team of multiple autonomous platforms. Decentralized data fusion (DDF) provides a useful basis on which to build cooperative information gathering tasks for robotic teams operating in outdoor environments. Through the DDF algorithms, each platform can maintain a consistent global solution from which decisions may then be made. Comparisons are made between implementations of DDF using two probabilistic representations, the first Gaussian estimates and the second Gaussian mixtures, on a common data set. The overall system design is detailed, providing insight into the overall complexity of implementing a robust DDF system for use in information gathering tasks in outdoor UAV applications.
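For the Gaussian representation, the core DDF update is compact: convert each estimate to information (canonical) form, add them, and subtract the common information so shared data is not double counted. A 1-D channel-filter-style sketch with illustrative numbers (not the chapter's data set):

```python
import numpy as np

def to_info(mean, cov):
    """Information form: Y = P^-1, y = P^-1 @ mu."""
    Y = np.linalg.inv(cov)
    return Y, Y @ mean

def ddf_fuse(est_a, est_b, common):
    """Channel-filter style DDF update: add the two information states,
    subtract the common information to avoid double counting."""
    Ya, ya = to_info(*est_a)
    Yb, yb = to_info(*est_b)
    Yc, yc = to_info(*common)
    Y, y = Ya + Yb - Yc, ya + yb - yc
    P = np.linalg.inv(Y)
    return P @ y, P

# Two platforms estimate a 1-D feature position from a shared prior
prior = (np.array([0.0]), np.array([[10.0]]))
a = (np.array([1.9]), np.array([[0.5]]))   # platform A posterior
b = (np.array([2.2]), np.array([[0.8]]))   # platform B posterior
mean, cov = ddf_fuse(a, b, prior)
print(mean, cov)  # fused estimate is tighter than either input
```

The Gaussian-mixture variant compared in the chapter follows the same pattern but requires component management, which is where much of the extra implementation complexity comes from.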
Abstract:
Introduction and aims: Despite evidence that many Australian adolescents have considerable experience with various drug types, little is known about the extent to which adolescents use multiple substances. The aim of this study was to examine the degree of clustering of drug types within individuals, and the extent to which demographic and psychosocial predictors are related to cluster membership. Design and method: A sample of 1402 adolescents aged 12-17 years was extracted from the Australian 2007 National Drug Strategy Household Survey. Extracted data included lifetime use of 10 substances, gender, psychological distress, physical health, perceived peer substance use, socioeconomic disadvantage, and regionality. Latent class analysis was used to determine clusters, and multinomial logistic regression was employed to examine predictors of cluster membership. Results: There were 3 latent classes. The great majority (79.6%) of adolescents used alcohol only, 18.3% were limited-range multidrug users (encompassing alcohol, tobacco, and marijuana), and 2% were extended-range multidrug users. Perceived peer drug use and psychological distress predicted both limited and extended multiple drug use. Psychological distress was a more significant predictor of extended multidrug use than of limited multidrug use. Discussion and conclusion: In the Australian school-based prevention setting, a very strong focus on alcohol use and on the linkages between alcohol, tobacco and marijuana is warranted. Psychological distress may be an important target for screening and early intervention for adolescents who use multiple drugs.
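Latent class analysis of binary lifetime-use indicators can be sketched with a small EM loop. Everything here (the simulated data, class structure, and parameters) is illustrative only; studies like this one use dedicated LCA software with model selection, not a hand-rolled EM:

```python
import numpy as np

def lca_em(X, n_classes, n_iter=200, seed=0):
    """Minimal latent class analysis for binary items via EM."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1 / n_classes)          # class weights
    theta = rng.uniform(0.3, 0.7, (n_classes, m))   # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: class responsibilities for each respondent
        log_p = (np.log(pi) + X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and item probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r

# Simulated lifetime-use indicators for 3 substances: most respondents
# use only item 0 ("alcohol"); a minority also use items 1-2
rng = np.random.default_rng(1)
z = rng.random(1000) < 0.8
X = np.where(z[:, None],
             rng.random((1000, 3)) < [0.9, 0.05, 0.05],
             rng.random((1000, 3)) < [0.95, 0.8, 0.6]).astype(float)
pi, theta, r = lca_em(X, 2)
print(np.sort(pi))  # should recover roughly an 80/20 split
```

Cluster membership probabilities (`r`) would then feed the multinomial logistic regression step described in the abstract.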
Abstract:
This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e. the autonomous vehicles’ ability to make appropriate driving decisions in city road traffic situations. After decomposing the problem into two consecutive decision making stages, and giving a short overview about previous work, the paper explains how Multiple Criteria Decision Making (MCDM) can be used in the process of selecting the most appropriate driving maneuver.
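One common MCDM scheme, the weighted sum, gives a flavor of the maneuver-selection stage; the weights, criteria, and maneuver names below are hypothetical, and the paper may well use a different MCDM method:

```python
def select_maneuver(alternatives, weights):
    """Weighted-sum MCDM: score = sum_i w_i * c_i, with criteria
    normalized to [0, 1] and higher values meaning better."""
    def score(criteria):
        return sum(weights[k] * v for k, v in criteria.items())
    return max(alternatives, key=lambda a: score(alternatives[a]))

# Hypothetical normalized criteria for three driving maneuvers
weights = {"safety": 0.5, "progress": 0.3, "comfort": 0.2}
maneuvers = {
    "keep_lane": {"safety": 0.9, "progress": 0.4, "comfort": 0.9},
    "overtake":  {"safety": 0.5, "progress": 0.9, "comfort": 0.6},
    "stop":      {"safety": 1.0, "progress": 0.0, "comfort": 0.7},
}
print(select_maneuver(maneuvers, weights))  # keep_lane
```

In the two-stage decomposition the abstract describes, a scheme like this would sit in the second stage, choosing among maneuvers already deemed feasible for the current traffic situation.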
Abstract:
A straightforward procedure for the acid digestion of geological samples with SiO2 concentrations ranging between about 40 and 80% is described. A powdered sample (200 mesh) of 500 mg was fused with 1000 mg of spectroflux at about 1000 °C in a platinum crucible. The melt was subsequently digested in an aqueous solution of HNO3 at 100 °C. Several systematic digestion procedures were followed using various concentrations of HNO3. It was found that a relationship could be established between dissolution time and acid concentration. For an acid concentration of 15%, an optimum dissolution time of under 4 min was recorded. To verify that the dissolutions were complete, they were subjected to rigorous quality control tests. The turbidity and viscosity were examined at different intervals and the results were compared with those of deionised water. No significant change in either parameter was observed. The shelf-life of each solution lasted for several months, after which time polymeric silicic acid formed in some solutions, resulting in the presence of a gelatinous solid. The method is cost-effective and is clearly well suited for routine applications on a small scale, especially in laboratories in developing countries. ICP-MS was applied to the determination of 13 Rare Earth Elements and Hf in a set of 107 archaeological samples subjected to the above digestion procedure. The distribution of these elements was examined and the possibility of using the REEs for provenance studies is discussed.