Abstract:
The taxonomic position of the endemic New Zealand bat genus Mystacina has vexed systematists ever since its erection in 1843. Over the years the genus has been linked with many microchiropteran families and superfamilies. Most recent classifications place it in the Vespertilionoidea, although some immunological evidence links it with the Noctilionoidea (=Phyllostomoidea). We have sequenced 402 bp of the mitochondrial cytochrome b gene for M. tuberculata (Gray in Dieffenbach, 1843), and using both our own and published DNA sequences for taxa in both superfamilies, we applied different tree reconstruction methods to find the appropriate phylogeny and different methods of estimating confidence in the parts of the tree. All methods strongly support the classification of Mystacina in the Noctilionoidea. Spectral analysis suggests that parsimony analysis may be misleading for Mystacina's precise placement within the Noctilionoidea because of its long terminal branch. Analyses not susceptible to long-branch attraction suggest that the Mystacinidae is a sister family to the Phyllostomidae. Dating the divergence times between the different taxa suggests that the extant chiropteran families radiated around and shortly after the Cretaceous–Tertiary boundary. We discuss the biogeographical implications of classifying Mystacina within the Noctilionoidea and contrast our result with those classifications placing Mystacina in the Vespertilionoidea, concluding that evidence for the latter is weak.
Abstract:
This article discusses the design of interactive online activities that introduce problem solving skills to first year law students. They are structured around the narrative framework of ‘Ruby’s Music Festival’ where a young business entrepreneur encounters various issues when organising a music festival and students use a generic problem solving method to provide legal solutions. These online activities offer students the opportunity to obtain early formative feedback on their legal problem solving abilities prior to undertaking a later summative assessment task. The design of the activities around the Ruby narrative framework and the benefits of providing students with early formative feedback will be discussed.
Abstract:
An increasing range of technology services are now offered on a self-service basis. However, problems with self-service technologies (SSTs) sometimes occur due to technical errors, staff errors, or consumers' own mistakes. Considering the role of consumers as co-producers in the SST context, we aim to study consumers' behaviours, strategies, and decision making in solving their problems with SSTs, and to identify the factors contributing to their persistence in solving the problem. This study contributes to information systems research as the first study that aims to identify such a process and the factors affecting consumers' persistence in solving their problems with SSTs. A focus group with user support staff has been conducted, yielding initial results that helped shape the next phases of the study. Next, using the Critical Incident Technique, data will be gathered through focus groups with users, a diary method, and a think-aloud method.
Abstract:
Objective This study highlights the serious consequences of ignoring reverse causality bias in studies on compensation-related factors and health outcomes and demonstrates a technique for resolving this problem of observational data. Study Design and Setting Data from an English longitudinal study on factors, including claims for compensation, associated with recovery from neck pain (whiplash) after rear-end collisions are used to demonstrate the potential for reverse causality bias. Although it is commonly believed that claiming compensation leads to worse recovery, it is also possible that poor recovery may lead to compensation claims—a point that is seldom considered and never addressed empirically. This pedagogical study compares the association between compensation claiming and recovery when reverse causality bias is ignored and when it is addressed, controlling for the same observable factors. Results When reverse causality is ignored, claimants appear to have a worse recovery than nonclaimants; however, when reverse causality bias is addressed, claiming compensation appears to have a beneficial effect on recovery, ceteris paribus. Conclusion To avert biased policy and judicial decisions that might inadvertently disadvantage people with compensable injuries, there is an urgent need for researchers to address reverse causality bias in studies on compensation-related factors and health.
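A minimal simulation makes the reverse-causality trap described above concrete: in the sketch below, claiming has no causal effect on recovery at all, yet poor recovery drives claiming, so a naive comparison of group means makes claimants look worse. All variable names, thresholds, and effect sizes are illustrative assumptions, not the study's data.

```python
import random

rng = random.Random(0)
n = 5000

# True model: recovery is exogenous; claiming has NO effect on it.
recovery = [rng.gauss(0, 1) for _ in range(n)]

# Reverse causality: people with poor recovery (below -0.5) are the
# ones likely to lodge a claim (with probability 0.8).
claimed = [1 if r < -0.5 and rng.random() < 0.8 else 0 for r in recovery]

mean = lambda xs: sum(xs) / len(xs)
r_claim = mean([r for r, c in zip(recovery, claimed) if c])
r_noclaim = mean([r for r, c in zip(recovery, claimed) if not c])
# Claimants appear to recover worse, even though claiming did nothing:
# the association runs from recovery to claiming, not the reverse.
```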
Abstract:
A number of online algorithms have been developed that have small additional loss (regret) compared to the best "shifting expert". In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data / loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and the best and second best experts have a gap between their mean loss. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask what simple and efficient algorithms exist for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set under the assumption that the losses in each segment are iid.
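As a toy illustration of the exponential-weights ("Hedge") update that the worst-case-safe algorithm above relies on: each expert is weighted by the exponential of its negative cumulative loss. The learning rate and loss values below are illustrative assumptions, not the paper's.

```python
import math

def hedge_weights(losses, eta=1.0):
    """Exponential-weights (Hedge) update: weight each expert by
    exp(-eta * cumulative loss), normalised to a probability vector."""
    cum = [0.0] * len(losses[0])
    for round_losses in losses:
        cum = [c + l for c, l in zip(cum, round_losses)]
    w = [math.exp(-eta * c) for c in cum]
    total = sum(w)
    return [x / total for x in w]

# Two experts: expert 0 consistently loses less, so it gets more weight.
weights = hedge_weights([[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]], eta=1.0)
```

Shifting-expert variants (e.g. Fixed Share) add a small mixing step after this update so that weight can flow back to experts that become good in a later segment.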
Abstract:
Staffing rural and remote schools is an important policy issue for the public good. This paper examines the private issues it also poses for teachers with families working in these communities, as they seek to reconcile careers with educational choices for children. The paper first considers historical responses to staffing rural and remote schools in Australia, and the emergence of neoliberal policy encouraging marketisation of the education sector. We report on interviews about considerations motivating household mobility with 11 teachers across regional, rural and remote communities in Queensland. Like other middle-class parents, these teachers prioritised their children’s educational opportunities over career opportunities. The analysis demonstrates how teachers in rural and remote communities constitute a special group of educational consumers with insider knowledge and unique dilemmas around school choice. Their heightened anxieties around school choice under neoliberal policy are shown to contribute to the public issue of staffing rural and remote schools.
Abstract:
The understanding of the loads generated within the prosthetic leg can aid engineers in the design of components and clinicians in the process of rehabilitation. Traditional methods to assess these loads have relied on inverse dynamics. This indirect method estimates the applied load using video recordings and force-plates located at a distance from the region of interest, such as the base of the residuum. The well-known limitations of this method are related to the accuracy of this recursive model and the experimental conditions required (Frossard et al., 2003). Recent developments in sensors (Frossard et al., 2003) and prosthetic fixation (Brånemark et al., 2000) permit the direct measurement of the loads applied on the residuum of transfemoral amputees. In principle, direct measurement should be an appropriate tool for assessing the accuracy of inverse dynamics. The purpose of this paper is to determine the validity of this assumption. The comparative variable used in this study is the relative velocity of the body center of mass (VCOM(t)); this relative frame is used to align the static (with respect to position) force-plate measurement with the dynamic load-cell measurement.
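The comparative variable above, the vertical velocity of the body centre of mass, can in principle be recovered from a force plate by integrating Newton's second law, dv/dt = (F - m*g)/m. The sketch below shows that relationship only; the function name, sampling step, and numbers are illustrative, not the paper's protocol.

```python
def com_velocity(grf, mass, dt, g=9.81, v0=0.0):
    """Integrate vertical ground reaction force (N) to obtain the
    vertical velocity of the body centre of mass: dv/dt = (F - m*g)/m."""
    v, out = v0, []
    for f in grf:
        v += (f - mass * g) / mass * dt  # Euler integration step
        out.append(v)
    return out

# Sanity check: a constant GRF equal to body weight implies zero net
# force, so the COM velocity stays at its initial value.
vs = com_velocity([70 * 9.81] * 5, mass=70.0, dt=0.01)
```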
Abstract:
In this study, a non-linear excitation controller using inverse filtering is proposed to damp inter-area oscillations. The proposed controller is based on determining the generator flux value for the next sampling time, which is obtained by maximising the reduction rate of the kinetic energy of the system after the fault. The desired flux for the next time interval is obtained using wide-area measurements, and the equivalent area rotor angles and velocities are predicted using a non-linear Kalman filter. A supplementary control input for the excitation system is implemented using the inverse filtering approach to track the desired flux. The inverse filtering approach ensures that the non-linearity introduced because of saturation is well compensated. The efficacy of the proposed controller with and without communication time delay is evaluated on different IEEE benchmark systems including Kundur's two-area, Western System Coordinating Council three-area and 16-machine, 68-bus test systems.
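The controller above predicts rotor angles and velocities with a non-linear Kalman filter. As a sketch of the underlying predict/correct cycle only, here is a scalar, linear Kalman step with an assumed random-walk state model; the noise values q and r are illustrative, not from the study.

```python
def kalman_step(x, p, z, q=0.01, r=0.1):
    """One scalar Kalman filter step for a random-walk state model.
    x, p: prior state estimate and variance; z: new measurement."""
    # Predict: state unchanged, uncertainty grows by process noise q.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Repeated measurements of 1.0 pull the estimate from 0 toward 1,
# while the variance shrinks with each update.
x, p = 0.0, 1.0
for z in [1.0, 1.0, 1.0]:
    x, p = kalman_step(x, p, z)
```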
Abstract:
Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is done through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%–44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%–36.2% lower than that using the heterogeneous MapReduce placement approach not considering the spare resources from the existing MapReduce computations. The experimental results have also demonstrated the good scalability of this new approach.
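The abstract does not specify the constructive algorithm, so the sketch below shows only the general shape of a constructive heuristic for heterogeneous placement: build a solution one job at a time, greedily choosing the cheapest machine type per unit of capacity. Machine types, capacities, and costs are hypothetical.

```python
def constructive_placement(jobs, machine_types):
    """Greedy constructive heuristic: assign each job (a resource
    demand) to the machine type with the lowest cost per capacity
    unit, building the solution incrementally."""
    plan, total = [], 0.0
    for demand in jobs:
        mtype = min(machine_types, key=lambda m: m["cost"] / m["capacity"])
        n = -(-demand // mtype["capacity"])  # ceiling division: machines needed
        plan.append((mtype["name"], n))
        total += n * mtype["cost"]
    return plan, total

types = [{"name": "small", "capacity": 4, "cost": 1.0},
         {"name": "large", "capacity": 10, "cost": 2.0}]
# "large" is cheaper per unit (0.2 vs 0.25), so both jobs use it.
plan, cost = constructive_placement([8, 12], types)
```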
Abstract:
Trade union membership, both in aggregate numbers and in density, has declined in the majority of advanced economies globally over recent decades (Blanchflower, 2007). In Australia, the decline in the 1990s was somewhat more precipitate than in most countries (Peetz, 1998). As discussed in Chapter 1, reasons for the decline are multifactorial, including a more hostile environment to unionism created by employers and the state, difficulties with workplace union organisation, and structural change in the economy (Bryson and Gomez, 2005; Bryson et al., 2011; Ebbinghaus et al., 2011; Payne, 1989; Waddington and Kerr, 2002; Waddington and Whitson, 1997). Our purpose in this chapter is to look beyond aggregate Australian union density data, to examine how age relates to membership decline, and how different age groups, particularly younger workers, are located in the story of union decline. The practical implications of this research are that understanding how unions relate to workers of different age groups, and to workers of different genders amongst those age groups, may lead to improved recruitment and better union organisation.
Abstract:
Particle Swarm Optimization (PSO) is a biologically inspired computational search and optimization method based on the social behaviors of birds flocking or fish schooling. Although PSO has performed well on many well-known numerical test problems, it suffers from premature convergence. A number of basic variations have been developed to solve the premature convergence problem and improve the quality of the solutions found by PSO. This study presents a comprehensive survey of the various PSO-based algorithms. As part of this survey, the authors include a classification of the approaches and identify the main features of each proposal. The last part of the study lists some topics within this field that are considered promising areas of future research.
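A minimal global-best PSO of the kind surveyed above can be sketched as follows. The inertia weight and acceleration coefficients are common textbook defaults, not values from the study.

```python
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO: each particle is pulled toward its
    personal best (c1) and the swarm's best (c2), with inertia w."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Minimise the 2-D sphere function; the optimum is at the origin.
best, val = pso(lambda x: sum(t * t for t in x))
```

Premature convergence shows up when all particles collapse onto the same region too early; the surveyed variants typically perturb velocities, restart particles, or use local neighbourhood topologies to counter it.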
Abstract:
Index tracking is an investment approach where the primary objective is to keep portfolio return as close as possible to a target index without purchasing all index components. The main purpose is to minimize the tracking error between the returns of the selected portfolio and a benchmark. In this paper, quadratic as well as linear models are presented for minimizing the tracking error. The uncertainty is considered in the input data using a tractable robust framework that controls the level of conservatism while maintaining linearity. The linearity of the proposed robust optimization models allows a simple implementation of an ordinary optimization software package to find the optimal robust solution. The proposed model of this paper employs the Morgan Stanley Capital International Index as the target index, and the results are reported for six national indices including Japan, the USA, the UK, Germany, Switzerland and France. The performance of the proposed models is evaluated using several financial criteria, e.g. information ratio, market ratio, Sharpe ratio and Treynor ratio. The preliminary results demonstrate that the proposed model lowers the amount of tracking error while raising values of portfolio performance measures.
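The quadratic tracking-error objective above is, in its unconstrained form, a least-squares problem: choose weights w minimising ||Rw - b||^2, where R holds asset returns and b the index returns. The two-asset sketch below solves it via the normal equations; the data are hypothetical and the real models add robustness and portfolio constraints.

```python
def min_tracking_error_2(R, b):
    """Solve the 2-asset quadratic tracking-error problem
    min_w ||R w - b||^2 via the normal equations (R^T R) w = R^T b."""
    a11 = sum(r[0] * r[0] for r in R)
    a12 = sum(r[0] * r[1] for r in R)
    a22 = sum(r[1] * r[1] for r in R)
    c1 = sum(r[0] * y for r, y in zip(R, b))
    c2 = sum(r[1] * y for r, y in zip(R, b))
    det = a11 * a22 - a12 * a12
    return ((a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det)

# Toy data: the index is an equal-weight mix of the two assets, so
# the tracking-error minimiser recovers weights (0.5, 0.5) exactly.
R = [(0.01, 0.03), (0.02, -0.01), (-0.01, 0.02), (0.00, 0.01)]
b = [0.5 * r0 + 0.5 * r1 for r0, r1 in R]
w = min_tracking_error_2(R, b)
```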
Abstract:
Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. We demonstrate the practicality of post-quantum key exchange by constructing cipher suites for the Transport Layer Security (TLS) protocol that provide key exchange based on the ring learning with errors (R-LWE) problem, and we accompany these cipher suites with a rigorous proof of security. Our approach ties lattice-based key exchange together with traditional authentication using RSA or elliptic curve digital signatures: the post-quantum key exchange provides forward secrecy against future quantum attackers, while authentication can be provided using RSA keys that are issued by today's commercial certificate authorities, smoothing the path to adoption. Our cryptographically secure implementation, aimed at the 128-bit security level, reveals that the performance price when switching from non-quantum-safe key exchange is not too high. With our R-LWE cipher suites integrated into the OpenSSL library and using the Apache web server on a 2-core desktop computer, we could serve 506 RLWE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KiB payload. Compared to elliptic curve Diffie-Hellman, this means an 8 KiB increased handshake size and a reduction in throughput of only 21%. This demonstrates that provably secure post-quantum key-exchange can already be considered practical.
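The full protocol (including the reconciliation step) is beyond an abstract-sized sketch, but the R-LWE structure that makes key exchange possible, namely two parties deriving ring elements that differ only by small noise, can be illustrated with toy parameters. This is not the paper's protocol, and the modulus Q, degree N, and noise bound below are far smaller than anything secure.

```python
import random

Q, N = 12289, 8  # toy modulus and ring degree; real parameters are larger

def polymul(a, b):
    """Multiply in Z_Q[x] / (x^N + 1): powers >= N wrap around with a
    sign flip because x^N = -1 in this ring."""
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:
                res[k - N] = (res[k - N] - ai * bj) % Q
    return res

def add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

rng = random.Random(1)
small = lambda: [rng.randint(-2, 2) for _ in range(N)]  # small noise poly
a = [rng.randrange(Q) for _ in range(N)]   # shared public ring element
s1, e1 = small(), small()                  # Alice's secret and error
s2, e2 = small(), small()                  # Bob's secret and error
b1 = add(polymul(a, s1), e1)               # Alice sends b1 = a*s1 + e1
b2 = add(polymul(a, s2), e2)               # Bob sends   b2 = a*s2 + e2
k1 = polymul(b2, s1)                       # Alice: a*s1*s2 + e2*s1
k2 = polymul(b1, s2)                       # Bob:   a*s1*s2 + e1*s2
# k1 and k2 differ only by the small term e2*s1 - e1*s2; reconciliation
# (omitted here) extracts identical key bits from these close values.
```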
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for the VM placement problem have been proposed. However, those proposed VM placement algorithms have not been widely used in today's cloud data centers as they do not consider the migration cost from the current VM placement to the new optimal VM placement. As a result, the gain from optimizing VM placement may be less than the loss caused by migrating from the current VM placement to the new VM placement. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption of the new VM placement and the total inter-VM traffic flow in the new VM placement. The GA has been implemented and evaluated by experiments, and the experimental results show that the GA outperforms two well-known algorithms for the VM placement problem.
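A penalty-based cost function of the kind the abstract describes can be sketched as follows. The function name, cost coefficients, and capacity model are hypothetical, and inter-VM traffic is omitted for brevity; the point is that capacity violations are penalised rather than rejected, so the GA can traverse infeasible placements, and that migrations away from the current placement carry an explicit cost.

```python
def placement_cost(placement, current, host_cap, vm_load,
                   host_energy=10.0, mig_cost=3.0, penalty=1000.0):
    """Penalty-based cost of a VM placement (lower is better):
    energy of active hosts + migration cost for every moved VM
    + a penalty proportional to any host capacity violation."""
    used = set(placement)
    cost = host_energy * len(used)  # energy of active hosts
    cost += mig_cost * sum(p != c for p, c in zip(placement, current))
    for h in used:
        load = sum(l for p, l in zip(placement, vm_load) if p == h)
        if load > host_cap[h]:
            cost += penalty * (load - host_cap[h])
    return cost

# Three VMs, two hosts: consolidating onto host 0 saves one host's
# energy (10) at the price of migrating VM 2 (3), staying within
# capacity, so it beats keeping the current placement.
c_consolidated = placement_cost([0, 0, 0], current=[0, 0, 1],
                                host_cap=[10, 10], vm_load=[3, 3, 3])
c_unchanged = placement_cost([0, 0, 1], current=[0, 0, 1],
                             host_cap=[10, 10], vm_load=[3, 3, 3])
```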