998 results for ORDER STATISTIC
Abstract:
The aim of this paper is to provide a comparison of various algorithms and parameters used to build reduced semantic spaces. The effect of dimension reduction, the stability of the representation and the effect of word order are examined in the context of five algorithms for constructing semantic vectors: random projection (RP), singular value decomposition (SVD), non-negative matrix factorization (NMF), permutations and holographic reduced representations (HRR). The quality of the semantic representation was tested by means of a synonym-finding task using the TOEFL test on the TASA corpus. Dimension reduction was found to improve the quality of the semantic representation, but it is hard to find the optimal parameter settings. Even though dimension reduction by RP was found to be more generally applicable than SVD, the semantic vectors produced by RP are somewhat unstable. Encoding word order into the semantic vector representation via HRR did not lead to any increase in scores over vectors constructed from word co-occurrence in context information. In this regard, very small context windows resulted in better semantic vectors for the TOEFL test.
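As a hedged illustration of the RP step described above, the following minimal Python sketch reduces a term-context co-occurrence matrix with a sparse random projection and scores a TOEFL-style synonym item by cosine similarity. The matrix sizes, projection dimension and word indices are illustrative assumptions, not the paper's actual settings.

```python
# Minimal sketch of dimension reduction by random projection (RP) on a
# term-context co-occurrence matrix; sizes and indices below are
# illustrative assumptions, not the paper's TASA/TOEFL setup.
import numpy as np

rng = np.random.default_rng(0)

vocab_size, n_contexts, k = 1000, 5000, 300          # assumed sizes
counts = rng.poisson(0.01, size=(vocab_size, n_contexts)).astype(float)

# Sparse ternary random projection matrix (Achlioptas-style entries).
R = rng.choice([-1.0, 0.0, 1.0], size=(n_contexts, k), p=[1/6, 2/3, 1/6])
semantic_vectors = counts @ R                          # reduced word vectors (vocab_size x k)

def cosine(u, v):
    """Cosine similarity used for the synonym-finding (TOEFL-style) task."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# A TOEFL item would pick the candidate whose vector is closest to the probe word.
probe, candidates = 0, [1, 2, 3, 4]                    # word indices, purely illustrative
best = max(candidates, key=lambda w: cosine(semantic_vectors[probe], semantic_vectors[w]))
```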
Abstract:
This paper is based on an Australian Learning & Teaching Council (ALTC) funded evaluation, across 13 universities in Australia and New Zealand, of the use of Engineers Without Borders (EWB) projects in first-year engineering courses. All of the partner institutions have implemented this innovation differently, and comparison of these implementations affords us the opportunity to assemble "a body of carefully gathered data that provides evidence of which approaches work for which students in which learning environments". This study used a mixed-methods data collection approach and a realist analysis. Data were collected through program logic analysis with course co-ordinators, observation of classes, focus groups with students, an exit survey of students and interviews with staff, as well as scrutiny of relevant course and curriculum documents. Course designers and co-ordinators gave us a range of reasons for using the projects, most of which alluded to their presumed capacity to deliver experience in and learning of higher-order thinking skills in areas such as sustainability, ethics, teamwork and communication. For some students, however, the nature of the projects decreased their interest in issues such as ethical development, sustainability and how to work in teams. We also found that the projects provoked different responses from students depending on the nature of the courses in which they were embedded (general introduction, design, communication, or problem-solving courses) and their mode of delivery (lecture, workshop or online).
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as in other areas. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation by repeatedly sampling data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles eventually to waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to assess underlying motor unit loss directly, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression because of the body's natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique that develops a sophisticated statistical model taking into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
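To illustrate the likelihood-free idea described above, here is a minimal ABC rejection sketch in Python: draw parameters from the prior, simulate data from the model, and keep the draws whose summary statistics fall within a tolerance of the observed summaries. The Poisson model, uniform prior, summary choice and tolerance are illustrative assumptions; the thesis develops SMC-based ABC samplers, which this plain rejection scheme does not implement.

```python
# Minimal ABC rejection sketch: accept parameter draws whose simulated
# summary statistics lie within a tolerance of the observed summaries.
# Model, prior, summaries and tolerance are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

y_obs = rng.poisson(lam=3.0, size=100)          # stand-in "observed" data
s_obs = np.array([y_obs.mean(), y_obs.var()])   # observed summary statistics

def simulate_summaries(theta, n=100):
    y = rng.poisson(lam=theta, size=n)
    return np.array([y.mean(), y.var()])

accepted = []
tolerance = 0.5
for _ in range(20000):
    theta = rng.uniform(0.0, 10.0)              # draw from the prior
    s_sim = simulate_summaries(theta)
    if np.linalg.norm(s_sim - s_obs) < tolerance:
        accepted.append(theta)

posterior_sample = np.array(accepted)           # approximate posterior draws for theta
```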
Abstract:
The pioneering work of Runge and Kutta a hundred years ago has ultimately led to suites of sophisticated numerical methods suitable for solving complex systems of deterministic ordinary differential equations. However, in many modelling situations the appropriate representation is a stochastic differential equation, and here numerical methods are much less sophisticated. In this paper a very general class of stochastic Runge-Kutta methods is presented, and classes of explicit methods that are much more efficient than previously extant methods are constructed. In particular, a method of strong order 2 with a deterministic component based on the classical Runge-Kutta method is constructed, and some numerical results are presented to demonstrate the efficacy of this approach.
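For readers unfamiliar with the structure of such schemes, the sketch below implements a much simpler two-stage stochastic Runge-Kutta-type method (the stochastic Heun scheme, in the Stratonovich interpretation) for a scalar test equation. It is not the strong order 2 method constructed in the paper; the drift, diffusion and step size are illustrative assumptions.

```python
# Two-stage stochastic Heun scheme for a scalar Stratonovich SDE, shown only
# to illustrate the predictor-corrector structure of stochastic Runge-Kutta
# methods; it is NOT the strong order 2 method of the paper.
import numpy as np

def f(x):  # drift (illustrative choice)
    return -x

def g(x):  # diffusion (illustrative choice)
    return 0.5 * x

def heun_step(x, h, dW):
    """One stochastic Heun step (Stratonovich interpretation)."""
    x_pred = x + h * f(x) + dW * g(x)                      # Euler predictor
    return x + 0.5 * h * (f(x) + f(x_pred)) + 0.5 * dW * (g(x) + g(x_pred))

rng = np.random.default_rng(2)
T, n = 1.0, 1000
h = T / n
x = 1.0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(h))                       # Wiener increment
    x = heun_step(x, h, dW)
print("approximate X(T):", x)
```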
Abstract:
In this paper, general order conditions and a global convergence proof are given for stochastic Runge-Kutta methods applied to stochastic ordinary differential equations (SODEs) of Stratonovich type. This work generalizes the ideas of B-series, as applied to deterministic ordinary differential equations (ODEs), to the stochastic case and allows a completely general formalism for constructing high order stochastic methods, either explicit or implicit. Some numerical results will be given to illustrate this theory.
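As a hedged, purely deterministic illustration of the kind of order conditions that the B-series framework generalises, the sketch below checks the classical Runge-Kutta conditions up to order 3 for the standard RK4 Butcher tableau; the stochastic (Stratonovich) conditions treated in the paper are more involved and are not reproduced here.

```python
# Check the deterministic Runge-Kutta order conditions up to order 3 for the
# classical RK4 tableau; these are the conditions that B-series theory
# generalises to the stochastic case.
import numpy as np

# Butcher tableau of the classical fourth-order Runge-Kutta method.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = A.sum(axis=1)

checks = {
    "order 1: sum b_i = 1":            np.isclose(b.sum(), 1.0),
    "order 2: sum b_i c_i = 1/2":      np.isclose(b @ c, 0.5),
    "order 3: sum b_i c_i^2 = 1/3":    np.isclose(b @ c**2, 1/3),
    "order 3: sum b_i a_ij c_j = 1/6": np.isclose(b @ (A @ c), 1/6),
}
for name, ok in checks.items():
    print(name, ok)
```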
Abstract:
In recent years considerable attention has been paid to the numerical solution of stochastic ordinary differential equations (SODEs), as SODEs are often more appropriate than their deterministic counterparts in many modelling situations. However, unlike the deterministic case, numerical methods for SODEs are considerably less sophisticated, due to the difficulty in representing the (possibly large number of) random variable approximations to the stochastic integrals. Although Burrage and Burrage [High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics 22 (1996) 81-101] were able to construct strong local order 1.5 stochastic Runge-Kutta methods for certain cases, it is known that all extant stochastic Runge-Kutta methods suffer an order reduction down to strong order 0.5 if there is non-commutativity between the functions associated with the multiple Wiener processes. This order reduction, down to that of the Euler-Maruyama method, imposes severe difficulties in obtaining meaningful solutions in a reasonable time frame, and this paper attempts to circumvent these difficulties by some new techniques. An additional difficulty in solving SODEs arises even in the linear case, since it is not possible to write the solution analytically in terms of matrix exponentials unless there is a commutativity property between the functions associated with the multiple Wiener processes. Thus, in the present paper, the work of Magnus [On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics 7 (1954) 649-673] (applied to deterministic non-commutative linear problems) will first be applied to non-commutative linear SODEs, and methods of strong order 1.5 for arbitrary linear, non-commutative SODE systems will be constructed - hence giving an accurate approximation to the general linear problem. Secondly, for general nonlinear non-commutative systems with an arbitrary number (d) of Wiener processes, it is shown that strong local order 1 Runge-Kutta methods with d + 1 stages can be constructed by evaluating a set of Lie brackets as well as the standard function evaluations. A method is then constructed which can be efficiently implemented in a parallel environment for this arbitrary number of Wiener processes. Finally, some numerical results are presented which illustrate the efficacy of these approaches. (C) 1999 Elsevier Science B.V. All rights reserved.
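The commutativity property mentioned above is easy to test numerically. The sketch below, with arbitrary illustrative matrices, computes the Lie bracket (matrix commutator) of the diffusion matrices of a two-Wiener-process linear SODE dX = A X dt + B1 X dW1 + B2 X dW2; a nonzero bracket signals the non-commutative case that the Magnus-type construction is designed for.

```python
# Commutativity check for a linear SODE with two Wiener processes:
# the noise terms commute only when the matrix commutator [B1, B2] vanishes.
# A, B1 and B2 below are arbitrary illustrative matrices.
import numpy as np

A  = np.array([[0.0, 1.0], [-1.0, 0.0]])   # drift matrix (shown for context)
B1 = np.array([[0.1, 0.0], [0.0, 0.2]])
B2 = np.array([[0.0, 0.3], [0.3, 0.0]])

def commutator(X, Y):
    """Lie bracket [X, Y] = XY - YX."""
    return X @ Y - Y @ X

bracket = commutator(B1, B2)
print("[B1, B2] =\n", bracket)
print("commutative noise:", np.allclose(bracket, 0.0))
```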
Abstract:
In many modeling situations in which parameter values can only be estimated or are subject to noise, the appropriate mathematical representation is a stochastic ordinary differential equation (SODE). However, unlike the deterministic case in which there are suites of sophisticated numerical methods, numerical methods for SODEs are much less sophisticated. Until a recent paper by K. Burrage and P.M. Burrage (1996), the highest strong order of a stochastic Runge-Kutta method was one. But K. Burrage and P.M. Burrage (1996) showed that by including additional random variable terms representing approximations to the higher order Stratonovich (or Ito) integrals, higher order methods could be constructed. However, this analysis applied only to the one Wiener process case. In this paper, it will be shown that in the multiple Wiener process case all known stochastic Runge-Kutta methods can suffer a severe order reduction if there is non-commutativity between the functions associated with the Wiener processes. Importantly, however, it is also suggested how this order can be repaired if certain commutator operators are included in the Runge-Kutta formulation. (C) 1998 Elsevier Science B.V. and IMACS. All rights reserved.
Abstract:
In Burrage and Burrage [1] it was shown that, by introducing a very general formulation for stochastic Runge-Kutta methods, the previous strong order barrier of order one could be broken without having to use higher derivative terms. In particular, methods of strong order 1.5 were developed in which a Stratonovich integral of order one and one of order two were present in the formulation. In the present paper, general order results are proven about the maximum attainable strong order of these stochastic Runge-Kutta methods (SRKs) in terms of the order of the Stratonovich integrals appearing in the Runge-Kutta formulation. In particular, it is shown that if an s-stage SRK contains Stratonovich integrals up to order p, then the strong order of the SRK cannot exceed min{(p + 1)/2, (s - 1)/2} for p >= 2 and s >= 3, or 1 if p = 1.
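As a small check of what this bound implies in practice, the sketch below evaluates it for a few stage counts and integral orders; the (s, p) combinations are chosen purely for illustration.

```python
# Evaluate the strong order bound stated above for a few illustrative cases.
def max_strong_order(s, p):
    """Upper bound on the strong order of an s-stage SRK containing
    Stratonovich integrals up to order p (p >= 2, s >= 3); 1 if p == 1."""
    if p == 1:
        return 1.0
    return min((p + 1) / 2, (s - 1) / 2)

for s, p in [(3, 2), (4, 2), (5, 3)]:
    print(f"s={s}, p={p}: strong order <= {max_strong_order(s, p)}")
```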
Abstract:
This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that the statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective data analysis of 3-way data arrays (users x keywords x time), known as high order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
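Below is a minimal sketch of the two-sample Kolmogorov-Smirnov comparison described above, using synthetic inter-arrival times in place of the chatroom measurements; the exponential model, sample sizes and use of scipy's ks_2samp are assumptions for illustration.

```python
# Two-sample Kolmogorov-Smirnov comparison of "single server" data against
# pooled "multi-server" data. The synthetic exponential samples stand in for
# the chatroom measurements used in the paper.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

single_server = rng.exponential(scale=2.0, size=1000)
multi_server = np.concatenate(
    [rng.exponential(scale=2.0, size=350) for _ in range(3)]
)

stat, p_value = ks_2samp(single_server, multi_server)
print(f"KS statistic={stat:.3f}, p-value={p_value:.3f}")
# A large p-value is consistent with the finding that the two sampling
# strategies yield statistically indistinguishable distributions.
```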
Abstract:
In Virgtel Ltd v Zabusky [2009] QCA 92 the Queensland Court of Appeal considered the scope of an order “as to costs only” within the meaning of s 253 of the Supreme Court Act 1995 (Qld) (“the Act”). The Court also declined to accept submissions from one of the parties after the oral hearing, and made some useful comments which serve as a reminder to practitioners of their obligations in that regard.
Abstract:
The decision of Applegarth J in Heartwood Architectural & Joinery Pty Ltd v Redchip Lawyers [2009] QSC 195 (27 July 2009) involved a costs order against solicitors personally. It is but one of several recent decisions in which the court has been persuaded that the circumstances justified costs orders against legal practitioners on the indemnity basis. These decisions serve as a reminder to practitioners of their disclosure obligations when seeking any interlocutory relief in an ex parte application. These obligations are now clearly set out in r 14.4 of the Legal Profession (Solicitors) Rule 2007 and r 25 of the 2007 Barristers Rule. Inexperience or ignorance will not excuse breaches of the duties owed to the court.
Abstract:
Purpose: To use a large wavefront database of a clinical population to investigate relationships between refractions and higher-order aberrations, and between the aberrations of right and left eyes. Methods: Third- and fourth-order aberration coefficients and higher-order root-mean-square aberrations (HO RMS), scaled to a pupil size of 4.5 mm diameter, were analysed in a population of about 24,000 patients from Carl Zeiss Vision's European wavefront database. Correlations were determined between the aberrations and the variables of refraction, near addition and cylinder. Results: Most aberration coefficients were significantly dependent upon these variables, but the proportions of aberrations that could be explained by these factors were less than 2% except for spherical aberration (12%), horizontal coma (9%) and HO RMS (7%). Near addition was the major contributor for horizontal coma (8.5% out of 9.5%) and spherical equivalent was the major contributor for spherical aberration (7.7% out of 11.6%). Interocular correlations were highly significant for all aberration coefficients, varying between 0.16 and 0.81. Anisometropia was a variable of significance for three aberrations (vertical coma, secondary astigmatism and tetrafoil), but little importance can be placed on this because of the small proportions of aberrations that can be explained by refraction (all less than 1.0%). Conclusions: Most third- and fourth-order aberration coefficients were significantly dependent upon spherical equivalent, near addition and cylinder, but only horizontal coma (9%) and spherical aberration (12%) showed dependencies of greater than 2%. Interocular correlations were highly significant for all aberration coefficients, but anisometropia had little influence on aberration coefficients.
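As a hedged sketch of the interocular analysis reported above, the snippet below computes a Pearson correlation between right- and left-eye values of a single aberration coefficient; the synthetic coefficients and their spread are assumptions standing in for the database entries, which are not reproduced here.

```python
# Interocular correlation of one aberration coefficient across patients,
# computed on synthetic placeholder values (not the database measurements).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_patients = 1000

right_coma = rng.normal(0.05, 0.1, n_patients)                    # right-eye coefficient
left_coma = 0.6 * right_coma + rng.normal(0.0, 0.08, n_patients)  # correlated left-eye values

r, p = pearsonr(right_coma, left_coma)
print(f"interocular correlation r={r:.2f} (p={p:.1e})")
```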
Abstract:
This article uses critical discourse analysis to analyse material shifts in the political economy of communications. It examines texts of major corporations to describe four key changes in political economy: (1) the separation of ownership from control; (2) the separation of business from industry; (3) the separation of accountability from responsibility; and (4) the subjugation of ‘going concerns’ by overriding concerns. The authors argue that this amounts to a political economic shift from traditional concepts of ‘capitalism’ to a new ‘corporatism’ in which the relationships between public and private, state and individual interests have become redefined and obscured through new discourse strategies. They conclude that the present financial and regulatory ‘crisis’ cannot be adequately resolved without a new analytic framework for examining the relationships between corporation, discourse and political economy.
Abstract:
A well-developed brand helps to establish a solid identity and supports an image that is coherent with the actual motivations of an institution. Educational institutions have inherent characteristics that differ from those of other sorts of institutions, particularly when the focus is set on their internal and external publics. Consequently, these institutions should also approach the development of their brand and identity system differently. This research investigates the traditional methodology for developing brand and identity systems and proposes some modifications in order to allow a broader inclusion of stakeholders in the process. The implementation of the new Oceanography Course at the Federal University of Bahia (UFBA) offered a unique opportunity to investigate and test these new strategies. In order to investigate and relate the concepts of image, identity, interaction and experience through a participative methodology, this research project applies the suggested strategies to the development of a brand and an identity system for the Oceanography Course at UFBA. Open surveys were carried out among the alumni, lecturers and the coordination body in order to discover and establish a symbol for the course. The statistical analysis of the survey results showed clear aesthetic preferences for certain icons and colours to represent the course. In this project, the participative methodology represented a democratization of the generally expert-centred brand development process.