847 results for classification methods
Abstract:
Fractional mathematical models offer a new approach to modelling complex spatial problems that exhibit heterogeneity at many spatial and temporal scales. In this paper, a two-dimensional fractional FitzHugh–Nagumo monodomain model with zero Dirichlet boundary conditions is considered. The model couples a space fractional diffusion equation (SFDE) with an ordinary differential equation. For the SFDE, we first consider the numerical solution of the Riesz fractional nonlinear reaction-diffusion model and compare it with the solution of a nonlinear reaction-diffusion model that is fractional in space. We present two novel numerical methods for the two-dimensional fractional FitzHugh–Nagumo monodomain model, based on the shifted Grünwald–Letnikov method and the matrix transform method, respectively. Finally, some numerical examples are given to demonstrate the consistency of our computational solution methodologies, and the numerical results confirm the effectiveness of the methods.
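The shifted Grünwald–Letnikov discretisation mentioned in the abstract can be sketched in a few lines. The weight recurrence below is the standard one; the grid, the shift of one node, and the test function are illustrative assumptions, not taken from the paper.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights g_k = (-1)^k * C(alpha, k),
    via the stable recurrence g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

def gl_frac_deriv(u, alpha, h):
    """Shifted Grunwald-Letnikov approximation of the fractional derivative
    of order alpha on a uniform grid: D^alpha u(x_i) ~ h^{-alpha} *
    sum_k g_k * u_{i-k+1} (shift of one node improves stability)."""
    g = gl_weights(alpha, len(u))
    d = []
    for i in range(len(u)):
        s = 0.0
        for k in range(i + 2):
            j = i - k + 1
            if 0 <= j < len(u):
                s += g[k] * u[j]
        d.append(s / h ** alpha)
    return d

# Sanity check: for alpha = 2 the weights reduce to the classical
# second-difference stencil 1, -2, 1, and the scheme recovers u'' exactly
# for the quadratic u(x) = x^2 at interior nodes.
u = [x * x for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
d = gl_frac_deriv(u, 2.0, 0.25)
```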
Abstract:
Transport processes within heterogeneous media may exhibit non-classical diffusion or dispersion, that is, behaviour not adequately described by the classical theory of Brownian motion and Fick's law. We consider a space fractional advection-dispersion equation based on a fractional Fick's law. The equation involves the Riemann–Liouville fractional derivative, which arises from the assumption that particles may make large jumps. Finite difference methods for solving this equation have been proposed by Meerschaert and Tadjeran. In the variable-coefficient case, the product rule is first applied, and the Riemann–Liouville fractional derivatives are then discretised using standard and shifted Grünwald formulas, depending on the fractional order. In this work, we consider a finite volume method that deals directly with the equation in conservative form. Fractionally-shifted Grünwald formulas are used to discretise the fractional derivatives at control volume faces. We compare the two methods for several case studies from the literature, highlighting the convenience of the finite volume approach.
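The key property of the conservative finite volume form is that cell updates are differences of face fluxes, so the total change over the domain telescopes to the boundary fluxes alone. The sketch below illustrates this with a Grünwald-type flux at the faces; the particular stencil, grid, and data are illustrative assumptions, not the paper's exact scheme.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights via g_0 = 1, g_k = g_{k-1}*(1-(alpha+1)/k)."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

def face_fluxes(u, alpha, h):
    """Fractional-order flux at the N+1 control-volume faces, built from a
    shifted Grunwald sum over upstream cell averages (illustrative stencil).
    The flux involves a derivative of order alpha - 1."""
    n = len(u)
    g = gl_weights(alpha - 1.0, n)
    F = []
    for f in range(n + 1):        # face f sits between cells f-1 and f
        s = 0.0
        for k in range(f + 1):
            j = f - k             # cells at and upstream of the face
            if j < n:
                s += g[k] * u[j]
        F.append(-s / h ** (alpha - 1.0))
    return F

def conservative_update(u, alpha, h):
    """du_i/dt ~ -(F_{i+1/2} - F_{i-1/2}) / h: the discrete divergence form."""
    F = face_fluxes(u, alpha, h)
    return [-(F[i + 1] - F[i]) / h for i in range(len(u))]

# Conservation check: the total change depends only on the boundary fluxes,
# which is exactly what the conservative form guarantees.
u, h, alpha = [0.1, 0.4, 0.9, 0.7, 0.2], 0.2, 1.8
F = face_fluxes(u, alpha, h)
du = conservative_update(u, alpha, h)
total = sum(d * h for d in du)
```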
Abstract:
Rayleigh–Stokes problems have received much attention in recent years owing to their importance in physics. In this article, we focus on the variable-order Rayleigh–Stokes problem for a heated generalized second grade fluid with fractional derivative. Implicit and explicit numerical methods are developed to solve the problem. The convergence and stability of the numerical methods, and the solvability of the implicit method, are discussed via Fourier analysis. Moreover, a numerical example is given, and the results support the effectiveness of the theoretical analysis.
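The Fourier (von Neumann) stability argument the abstract alludes to is easiest to see on a classical-order analogue: an explicit step for a diffusion-type equation is stable when the mesh ratio r = dt/dx^2 does not exceed 1/2, so the solution cannot grow. This is an illustrative classical sketch, not the paper's variable-order scheme.

```python
def explicit_step(u, r):
    """One explicit finite-difference step for u_t = u_xx with zero boundary
    values; r = dt / dx**2. Von Neumann analysis gives stability for r <= 1/2."""
    new = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
    return new

# With r = 0.4 <= 1/2 every Fourier mode is damped, so the max norm decays.
u = [0.0, 1.0, 0.0, -1.0, 0.0]
for _ in range(50):
    u = explicit_step(u, 0.4)
```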
Abstract:
Spreadsheet for Creative City Index 2012
Abstract:
Quantitative imaging methods to analyze cell migration assays are not standardized. Here we present a suite of two–dimensional barrier assays describing the collective spreading of an initially–confined population of 3T3 fibroblast cells. To quantify the motility rate we apply two different automatic image detection methods to locate the position of the leading edge of the spreading population after 24, 48 and 72 hours. These results are compared with a manual edge detection method where we systematically vary the detection threshold. Our results indicate that the observed spreading rates are very sensitive to the choice of image analysis tools and we show that a standard measure of cell migration can vary by as much as 25% for the same experimental images depending on the details of the image analysis tools. Our results imply that it is very difficult, if not impossible, to meaningfully compare previously published measures of cell migration since previous results have been obtained using different image analysis techniques and the details of these techniques are not always reported. Using a mathematical model, we provide a physical interpretation of our edge detection results. The physical interpretation is important since edge detection algorithms alone do not specify any physical measure, or physical definition, of the leading edge of the spreading population. Our modeling indicates that variations in the image threshold parameter correspond to a consistent variation in the local cell density. This means that varying the threshold parameter is equivalent to varying the location of the leading edge in the range of approximately 1–5% of the maximum cell density.
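The threshold sensitivity the abstract describes is easy to demonstrate on a synthetic density profile: the detected edge position moves inward as the detection threshold is raised. The profile and threshold values below are illustrative assumptions, not the paper's data.

```python
def leading_edge(density, threshold):
    """Index of the leading edge: the outermost grid point whose local
    cell density still meets the chosen detection threshold."""
    edge = 0
    for i, d in enumerate(density):
        if d >= threshold:
            edge = i
    return edge

# A monotonically decaying density profile, as for a population spreading
# outward from a confined initial condition (synthetic, normalised to the
# maximum cell density).
density = [1.0, 0.9, 0.7, 0.4, 0.15, 0.05, 0.02, 0.0]

# Sweeping the threshold over 1-5% of the maximum density shifts the
# detected edge, which is why reported spreading rates depend on the
# image-analysis settings.
edges = [leading_edge(density, t) for t in (0.01, 0.03, 0.05)]
```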
Abstract:
Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the base for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can accommodate more iteration, and changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes a parametric or generative modelling method and then proceeds to describe an open standard in which the interchange of components could be implemented. As an illustrative example of generative design, Frazer’s ‘Reptiles’ project from 1968 is reinterpreted.
Abstract:
Qualitative Health Psychology aims to contribute to the debate about the nature of psychology and of science through ‘an examination of the role of qualitative research within health psychology’ (p. 3). The editors, in bringing together contributors from the UK, Ireland, Canada, Brazil, New Zealand and Australia, have compiled a text that reflects different uses of qualitative health research in diverse social and cultural contexts. Structured into three parts, the book encompasses key theoretical and methodological issues in qualitative research in its attempt to encourage broad epistemological debate within health psychology.
Abstract:
The appearance of poststructuralism as a research methodology in public health literature raises questions about the history and purpose of this research. We examine (a) some aspects of the history of qualitative methods and their place within larger social and research domains, and (b) the purposes of a public health research that employs poststructuralist philosophy, delineating the methodological issues that require consideration in positing a poststructural analysis. We argue against poststructuralism becoming a research methodology deployed to seize the public health debate, rather than being employed for its own particular critical strengths.
Abstract:
This dissertation analyses how physical objects are translated into digital artworks using techniques which can lead to ‘imperfections’ in the resulting digital artwork that are typically removed to arrive at a ‘perfect’ final representation. The dissertation discusses the adaptation of existing techniques into an artistic workflow that acknowledges and incorporates the imperfections of translation into the final pieces. It presents an exploration of the relationship between physical and digital artefacts and the processes used to move between the two. The work explores the ‘craft’ of digital sculpting and the technology used in producing what the artist terms ‘a naturally imperfect form’, incorporating knowledge of traditional sculpture, an understanding of anatomy and an interest in the study of bones (osteology). The outcomes of the research are presented as a series of digital sculptural works, exhibited as a collection of curiosities in multiple mediums, including interactive game spaces, augmented reality (AR), rapid prototype prints (RP) and video displays.
Abstract:
Topic modeling has been widely utilized in information retrieval, text mining and text classification. Most existing statistical topic modeling methods, such as LDA and pLSA, represent a topic by selecting single words from the multinomial word distribution over that topic. This term-based representation has two main shortcomings: firstly, popular or common words occur very often across different topics, which makes the topics ambiguous to interpret; secondly, single words lack the coherent semantic meaning needed to represent topics accurately. To overcome these problems, in this paper we propose a two-stage model that combines text mining and pattern mining with statistical modeling to generate more discriminative and semantically rich topic representations. Experiments show that the optimized topic representations generated by the proposed methods outperform the typical statistical topic modeling method LDA in terms of accuracy and certainty.
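The "popular word" problem can be sketched directly: when every topic's top words are dominated by the same common terms, the term-based representations become indistinguishable, and filtering those shared terms restores discriminative words. The topic distributions and the filtering rule below are illustrative assumptions, not the paper's pattern-mining algorithm.

```python
def top_words(topic_dist, k=3):
    """Term-based topic representation: the k most probable single words."""
    return [w for w, _ in sorted(topic_dist.items(), key=lambda x: -x[1])[:k]]

def discriminative_words(topic_dist, other_topics, k=3):
    """Drop words that rank highly in every other topic (the 'popular word'
    ambiguity the abstract describes) and keep the most probable
    topic-specific words. Illustrative heuristic only."""
    common = set.intersection(*(set(top_words(t, k)) for t in other_topics))
    specific = {w: p for w, p in topic_dist.items() if w not in common}
    return top_words(specific, k)

# Hypothetical word distributions for three topics; "data" and "model"
# dominate all of them.
t1 = {"data": 0.4, "model": 0.3, "neural": 0.2, "network": 0.1}
t2 = {"data": 0.5, "model": 0.2, "market": 0.2, "price": 0.1}
t3 = {"data": 0.45, "model": 0.25, "gene": 0.2, "cell": 0.1}
```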
Abstract:
Background: Studies on the relationship between performance and design of the throwing frame have been limited and therefore require further investigation. Objectives: The specific objectives were to provide benchmark information about the performance and whole-body positioning of male athletes in the F30s classes. Study Design: Descriptive analysis. Methods: A total of 48 attempts performed by 12 stationary discus throwers in the F33 and F34 classes during the seated discus throw event of the 2002 International Paralympic Committee Athletics World Championships were analysed. The whole-body positioning included overall throwing posture (i.e. number of points of contact between the thrower and the frame, body position, throwing orientation and throwing side) and lower limb placements (i.e. seating arrangements, points of contact on both feet, and type of attachment of the legs and feet). Results: Three (25%), five (42%), one (8%) and three (25%) athletes used three, four, five and six points of contact, respectively. Seven (58%) athletes threw from a standing position and five (42%) from a seated position. A straddle, a stool or a chair was used by six (50%), four (33%) and two (17%) throwers, respectively. Conclusions: This study provides key information for a better understanding of the interaction between the throwing technique of elite seated throwers and their throwing frame.
Abstract:
1. Autonomous acoustic recorders are widely available and can provide a highly efficient method of species monitoring, especially when coupled with software to automate data processing. However, the adoption of these techniques is restricted by a lack of direct comparisons with existing manual field surveys. 2. We assessed the performance of autonomous methods by comparing manual and automated examination of acoustic recordings with a field-listening survey, using commercially available autonomous recorders and custom call detection and classification software. We compared the detection capability, time requirements, areal coverage and weather condition bias of these three methods using an established call monitoring programme for a nocturnal bird, the little spotted kiwi (Apteryx owenii). 3. The autonomous recorder methods had very high precision (>98%) and required <3% of the time needed for the field survey. They were less sensitive, with visual spectrogram inspection recovering 80% of the total calls detected and automated call detection 40%, although this recall increased with signal strength. The areal coverage of the spectrogram inspection and automatic detection methods was 85% and 42% of the field survey, respectively. The methods using autonomous recorders were more adversely affected by wind and did not show the positive association between ground moisture and call rates that was apparent from the field counts. However, all methods produced the same result for the most important conservation information from the survey: the annual change in calling activity. 4. Autonomous monitoring techniques incur different biases from manual surveys and so can yield different ecological conclusions if sampling is not adjusted accordingly. Nevertheless, the sensitivity, robustness and high accuracy of automated acoustic methods demonstrate that they offer a suitable and extremely efficient alternative to field observer point counts for species monitoring.
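Precision and recall, the two performance measures the abstract reports, are computed as below when the field survey is treated as ground truth. The counts are hypothetical numbers chosen to echo the reported magnitudes, not the study's data.

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).
    Counts here are hypothetical, not the paper's data."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# e.g. an automated detector recovering 40 of 100 field-survey calls
# with a single false alarm: high precision, modest recall.
p, r = precision_recall(true_positives=40, false_positives=1, false_negatives=60)
```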
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. Incorrectly fixed ambiguities, however, can result in positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user's requirements. Missed detection of incorrect integers leads to a hazardous result and should be strictly controlled; in ambiguity resolution, the missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied to the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory and is theoretically rigorous. A criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking up this table. The method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not yet been discussed. In this paper, a general ambiguity validation model is derived from hypothesis test theory, the fixed failure rate approach is introduced, and the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method when used with a proper stochastic model.
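The ratio test itself is a one-line decision rule: accept the best integer candidate only if the second-best candidate fits sufficiently worse. In the fixed failure rate approach the threshold comes from a simulation-derived look-up table indexed by model strength; in this sketch it is simply a parameter, and the residual values are hypothetical.

```python
def ratio_test(best_residual, second_residual, threshold):
    """Accept the best integer ambiguity candidate only if the second-best
    candidate's (squared-norm) residual is at least `threshold` times worse.
    In the fixed failure rate approach the threshold would be looked up from
    a criteria table for the target failure rate; here it is a parameter."""
    return second_residual / best_residual >= threshold

# With a conventional empirical criterion of 3, a clear case is accepted
# and a marginal case is rejected (values hypothetical):
clear = ratio_test(best_residual=0.5, second_residual=2.0, threshold=3.0)
marginal = ratio_test(best_residual=0.5, second_residual=1.2, threshold=3.0)
```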
Abstract:
Overview:
- Development of mixed methods research
- Benefits and challenges of “mixing”
- Different models
- Good design
- Two examples
- How to report?
- Have a go!
Abstract:
Over a seven-year period, Mark Radvan directed a suite of children’s theatre productions adapted from the original Tashi stories by Australian writers Anna and Barbara Fienberg. The Tashi Project’s repertoire of plays performed to over 40,000 children aged between 3 and 10, and their carers, in seasons at the Out of the Box Festival, at Brisbane Powerhouse and in venues across Australia in two interstate tours in 2009 and 2010. The project investigated how best to combine an exploration of theatrical forms and conventions with a performance style evolved in a specially developed training program and a deliberate positioning of young children as audiences capable of sophisticated readings of action, symbol, theme and character. The results of this project show that when brought into appropriate relationship with the theatre artists, young children aged 3-5 can engage with sophisticated narrative forms, and with the right contextual framing they enjoy heightened dramatic and emotional tension, bringing to the event sustained and highly engaged concentration. Older children aged 6-10 also bring sustained and heightened engagement to the same stories, provided that other, more sophisticated dramatic elements, such as character, theme and style, are woven into the construction of the performances.