184 results for multiplicative inverse
Abstract:
This paper demonstrates the application of an inverse filtering technique to power systems. To implement this method, the control objective is defined in terms of a system variable that must be set to a specific value at each sampling time. A control input is calculated to generate the desired output of the plant, and the relationship between the two is used to design an auto-regressive model. The auto-regressive model is converted to a moving-average model to calculate the control input from the future values of the desired output. The future values required to construct the output are therefore predicted in order to generate the appropriate control input for the next sampling time.
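As a minimal illustration of the idea, the sketch below inverts an assumed second-order auto-regressive plant model so that the control input at each step is computed from the next (future) desired output. The plant coefficients, one-step input delay and reference trajectory are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Assumed second-order AR plant: y[k+1] = a1*y[k] + a2*y[k-1] + u[k]
a1, a2 = 1.2, -0.5                       # illustrative, stable coefficients

N = 50
y_des = np.sin(0.1 * np.arange(N + 1))   # desired output trajectory
y = np.zeros(N + 1)                      # actual plant output
u = np.zeros(N)                          # computed control input

for k in range(2, N):
    # Inverting the AR model turns it into a moving-average expression
    # for u[k]; note it needs the *future* desired value y_des[k+1],
    # which in practice would be predicted.
    u[k] = y_des[k + 1] - a1 * y[k] - a2 * y[k - 1]
    y[k + 1] = a1 * y[k] + a2 * y[k - 1] + u[k]   # plant response

print(np.max(np.abs(y[3:] - y_des[3:])))          # ~0: exact model inverse
```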
Abstract:
Fleck and Johnson (Int. J. Mech. Sci. 29 (1987) 507) and Fleck et al. (Proc. Inst. Mech. Eng. 206 (1992) 119) have developed foil rolling models which allow for large deformations in the roll profile, including the possibility that the rolls flatten completely. However, these models require computationally expensive iterative solution techniques. A new approach to the approximate solution of the Fleck et al. (1992) Influence Function Model has been developed using both analytic and approximation techniques. The numerical difficulties arising from solving an integral equation in the flattened region have been reduced by applying an Inverse Hilbert Transform to obtain an analytic expression for the pressure. The method described in this paper is applicable to cases both with and without a flattened region.
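For reference, one standard inversion of the finite Hilbert transform on [-1, 1] is Tricomi's formula for the airfoil equation, sketched below; the exact kernel, interval and boundary behaviour used in the rolling model may differ.

```latex
% Tricomi's inversion of the finite Hilbert transform (airfoil equation):
% given f, recover \varphi up to the homogeneous solution C/\sqrt{1-x^2}.
\frac{1}{\pi}\,\mathrm{PV}\!\int_{-1}^{1}\frac{\varphi(t)}{t-x}\,\mathrm{d}t = f(x)
\quad\Longrightarrow\quad
\varphi(x) = -\frac{1}{\pi\sqrt{1-x^{2}}}\,\mathrm{PV}\!\int_{-1}^{1}
\frac{\sqrt{1-t^{2}}\,f(t)}{t-x}\,\mathrm{d}t + \frac{C}{\sqrt{1-x^{2}}}
```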
Abstract:
Two-dimensional flow of a micropolar fluid in a porous channel is investigated. The flow is driven by suction or injection at the channel walls, and the micropolar model due to Eringen is used to describe the working fluid. An extension of Berman's similarity transform is used to reduce the governing equations to a set of non-linear coupled ordinary differential equations. The latter are solved for large mass transfer via a perturbation analysis in which the inverse of the cross-flow Reynolds number is used as the perturbation parameter. Complementary numerical solutions for strong injection are also obtained using a quasilinearisation scheme, and good agreement is observed between the solutions obtained from the perturbation analysis and the computations.
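The generic structure of such an expansion, with the inverse cross-flow Reynolds number as the small parameter, is sketched below; the symbol f stands for the similarity function and is only indicative, since the actual variables follow Berman's transform as used in the paper.

```latex
% Regular perturbation expansion in \varepsilon = 1/Re (generic form):
f(\eta) \sim f_0(\eta) + \varepsilon\, f_1(\eta)
           + \varepsilon^{2} f_2(\eta) + \cdots,
\qquad \varepsilon = \frac{1}{\mathrm{Re}} .
```

Substituting the expansion into the reduced ordinary differential equations and collecting powers of ε yields a hierarchy of simpler problems solved successively for f_0, f_1, and so on.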
Abstract:
Automatic detection of suspicious activities in CCTV camera feeds is crucial to the success of video surveillance systems. Such a capability can help transform dumb CCTV cameras into smart surveillance tools for fighting crime and terror. Learning and classification of basic human actions is a precursor to detecting suspicious activities. Most current approaches rely on the unrealistic assumption that a complete dataset of normal human actions is available. This paper presents a different approach to the problem of understanding human actions in video when no prior information is available. This is achieved by working with an incomplete dataset of basic actions which is continuously updated. Initially, all video segments are represented with the Bag-of-Words (BOW) method using only Term Frequency-Inverse Document Frequency (TF-IDF) features. Then, a data-stream clustering algorithm is applied to update the system's knowledge from the incoming video feeds. Finally, all the actions are classified into different sets. Experiments and comparisons are conducted on the well-known Weizmann and KTH datasets to show the efficacy of the proposed approach.
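A minimal sketch of the TF-IDF representation step is given below, treating each video segment as a "document" whose words are quantized visual-word tokens. The token IDs and the scikit-learn usage are illustrative assumptions, not the authors' implementation; in practice the tokens would come from clustering local spatio-temporal descriptors into a codebook.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Each video segment is a "document" of visual-word IDs (made up here).
segments = [
    "w12 w40 w40 w7 w99",    # e.g. a walking segment
    "w12 w55 w55 w55 w7",    # e.g. a waving segment
    "w40 w40 w40 w99 w3",
]

# Treat each whitespace-separated token as a word.
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(segments)   # (n_segments, vocabulary) TF-IDF matrix

print(X.shape)
print(vectorizer.get_feature_names_out())
```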
Abstract:
The purpose of this study was to identify the pedagogical knowledge relevant to the successful completion of a pie chart item. This purpose was achieved through the identification of the essential fluencies that 12–13-year-olds required for the successful solution of a pie chart item. Fluency relates to ease of solution and is particularly important in mathematics because it impacts on performance. Although the majority of students were successful on this multiple choice item, there was considerable divergence in the strategies they employed. Approximately two-thirds of the students employed efficient multiplicative strategies, which recognised and capitalised on the pie chart as a proportional representation. In contrast, the remaining one-third of students used a less efficient additive strategy that failed to capitalise on the representation of the pie chart. The results of our investigation of students’ performance on the pie chart item during individual interviews revealed that five distinct fluencies were involved in the solution process: conceptual (understanding the question), linguistic (keywords), retrieval (strategy selection), perceptual (orientation of a segment of the pie chart) and graphical (recognising the pie chart as a proportional representation). In addition, some students exhibited mild disfluencies corresponding to the five fluencies identified above. Three major outcomes emerged from the study. First, a model of knowledge of content and students for pie charts was developed. This model can be used to inform instruction about the pie chart and guide strategic support for students. Second, perceptual and graphical fluency were identified as two aspects of the curriculum, which should receive a greater emphasis in the primary years, due to their importance in interpreting pie charts. Finally, a working definition of fluency in mathematics was derived from students’ responses to the pie chart item.
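A hypothetical item of the same kind illustrates the contrast between the two strategy types: suppose a pie chart represents 200 students and one segment occupies a quarter of the circle. The numbers below are invented for illustration only.

```latex
% Multiplicative (proportional) strategy: read the segment as a
% fraction of the whole.
\tfrac{1}{4} \times 200 = 50
% Additive strategy: work only with part values, e.g. subtract the
% other segments' values from the total.
200 - 100 - 30 - 20 = 50
```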
Abstract:
This paper compares the performance of two different optimisation techniques for solving inverse problems. The first is the Hierarchical Asynchronous Parallel Evolutionary Algorithms software (HAPEA) and the second is implemented with a game strategy named Nash-EA. The HAPEA software is based on a hierarchical topology and asynchronous parallel computation. The Nash-EA methodology is introduced as a distributed virtual game and consists of splitting the wing design variables - aerofoil sections - among players, each optimising its own strategy. The HAPEA and Nash-EA methodologies are applied to a single-objective aerodynamic ONERA M6 wing reconstruction. Numerical results from the two approaches are compared in terms of model quality and computational expense, and demonstrate the superiority of the distributed Nash-EA methodology in a parallel environment for a similar design quality.
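The core of the Nash-game decomposition can be sketched on a toy problem: each "player" owns a subset of the design variables and optimises only those, with the other player's variables frozen at that player's latest best. The quadratic objective and random-search inner loop below are placeholders for the aerodynamic cost and the evolutionary algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # Toy stand-in for the aerodynamic reconstruction cost.
    return np.sum((x - np.arange(len(x))) ** 2)

n = 6
split = n // 2          # player 1 owns x[0:3], player 2 owns x[3:6]
best = np.zeros(n)

for _round in range(20):                       # Nash iterations
    for sl in (slice(0, split), slice(split, n)):
        for _trial in range(50):               # player's inner "EA" step
            trial = best.copy()
            trial[sl] += rng.normal(scale=0.5, size=split)
            if cost(trial) < cost(best):       # keep improving moves only
                best = trial

print(best.round(2))    # approaches the optimum [0, 1, 2, 3, 4, 5]
```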
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
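A minimal sketch of the basic (unrestarted) Lanczos approximation to f(A)b with f(t) = t^(-α/2) is given below. The test matrix, subspace size m and α are illustrative choices, and full reorthogonalisation and stopping criteria, which the thesis treats carefully, are omitted for brevity.

```python
import numpy as np
import scipy.sparse as sp

n, m, alpha = 400, 60, 1.5
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian (SPD)
b = np.random.default_rng(1).standard_normal(n)

# Lanczos: orthonormal basis V and tridiagonal T with A V_m ~ V_m T_m.
V = np.zeros((n, m + 1))
alph, beta = np.zeros(m), np.zeros(m + 1)
V[:, 0] = b / np.linalg.norm(b)
for j in range(m):
    w = A @ V[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0)
    alph[j] = V[:, j] @ w
    w -= alph[j] * V[:, j]
    beta[j + 1] = np.linalg.norm(w)
    V[:, j + 1] = w / beta[j + 1]

T = np.diag(alph) + np.diag(beta[1:m], 1) + np.diag(beta[1:m], -1)

# f(A) b  ~  ||b|| * V_m f(T_m) e_1, with f(T_m) via eigendecomposition.
evals, evecs = np.linalg.eigh(T)
fT_e1 = evecs @ (evals ** (-alpha / 2) * evecs[0])
approx = np.linalg.norm(b) * V[:, :m] @ fT_e1

# Dense reference is feasible at this small size; print relative error.
lam, Q = np.linalg.eigh(A.toarray())
exact = Q @ (lam ** (-alpha / 2) * (Q.T @ b))
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```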
Abstract:
Objective: This review addresses the effect of overweight and obese weight status on pediatric health-related quality of life (HRQOL). Method: Web of Science, Medline, CINAHL, Cochrane Library, EMBASE, AMED and PubMed were searched for peer-reviewed studies in English reporting HRQOL and weight status in youth (<21 years), published before March 2008. Results: Twenty-eight articles were identified. Regression of HRQOL against body mass index (BMI) using pooled data from 13 studies utilizing the Pediatric Quality of Life Inventory identified an inverse relationship between BMI and pediatric HRQOL (r=−0.7, P=0.008), with impairments in physical and social functioning consistently reported. HRQOL seemed to improve with weight loss, but randomized controlled trials were few and lacked long-term follow-up. Conclusions: Little is known about the factors associated with reduced HRQOL among overweight or obese youth, although gender, age and obesity-related co-morbidities may play a role. Few studies have examined the differences in HRQOL between community and treatment-seeking samples. Pooled regressions suggest pediatric self-reported HRQOL can be predicted from parent-proxy reports, although parents of obese youth tend to perceive worse HRQOL than the children themselves report. Thus, future research should include both pediatric and parent-proxy perspectives.
Abstract:
This paper describes the protection and control of a microgrid with converter-interfaced micro sources. The proposed protection and control schemes consider both grid-connected and autonomous operation of the microgrid. A protection scheme capable of detecting faults effectively in both grid-connected and islanded operation is proposed. The main challenge for protection, the current-limiting state of the converters, is overcome by using admittance relays. The relays operate according to an inverse time characteristic based on the measured admittance of the line. The proposed scheme isolates the fault from both sides, while the downstream side of the microgrid operates as an island. Moreover, faults can be detected in autonomous operation. In grid-connected mode the distributed generators (DGs) supply their rated power, while in the absence of the grid the DGs share the entire power requirement in proportion to their ratings based on output voltage angle droop control. The protection scheme ensures minimum load shedding while isolating the faulted network, and the DG control provides smooth islanding and resynchronisation operation. The efficacy of the coordinated control and protection scheme has been validated through simulation for various operating conditions.
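The flavour of an inverse-time rule driven by measured admittance can be sketched as below. The IEC "standard inverse" curve shape and all constants are illustrative assumptions rather than the paper's exact characteristic; the key property is that a larger measured admittance (a closer or heavier fault) trips faster.

```python
# Sketch of an inverse-time tripping rule driven by measured line
# admittance; shape and constants are assumed, not from the paper.
def trip_time(Y_measured, Y_pickup, tms=0.1, k=0.14, a=0.02):
    """Relay operating time in seconds for a measured admittance
    magnitude relative to the pickup setting."""
    ratio = Y_measured / Y_pickup
    if ratio <= 1.0:
        return float("inf")          # below pickup: no trip
    return tms * k / (ratio ** a - 1.0)

print(trip_time(5.0, 1.0))    # remote fault: slower (~0.43 s)
print(trip_time(50.0, 1.0))   # close-in fault: faster (~0.17 s)
```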
Abstract:
The missing-item format and the interrupted behaviour chain strategy have been used to increase spontaneous requests among children with developmental disabilities, but their relative effectiveness has not been compared. The present study compared the extent to which each strategy evoked spontaneous requests and challenging behaviour in three children with autism. Sessions where a needed item was withheld (missing-item format) were compared to sessions involving the removal of a needed item (interrupted behaviour chain strategy). Comparisons were conducted across three activities in an alternating treatments design. Both strategies evoked spontaneous requests with no significant difference in effectiveness. Few differences were found in the amount of challenging behaviour evoked by the two conditions, although a moderate inverse relationship between spontaneous requesting and challenging behaviour was observed. The results suggest that these two procedures yield similar outcomes. Concurrent use of both strategies may enable teachers to create a greater number of opportunities for requesting.
Abstract:
The highly variable flagellin-encoding flaA gene has long been used for genotyping Campylobacter jejuni and Campylobacter coli. High-resolution melting (HRM) analysis is emerging as an efficient and robust method for discriminating DNA sequence variants. The objective of this study was to apply HRM analysis to flaA-based genotyping. The initial aim was to identify a suitable flaA fragment. It was found that the PCR primers commonly used to amplify the flaA short variable repeat (SVR) yielded a mixed PCR product unsuitable for HRM analysis. However, a PCR primer set composed of the upstream primer used to amplify the fragment used for flaA restriction fragment length polymorphism (RFLP) analysis and the downstream primer used for flaA SVR amplification generated a very pure PCR product, and this primer set was used for the remainder of the study. Eighty-seven C. jejuni and 15 C. coli isolates were analyzed by flaA HRM and also partial flaA sequencing. There were 47 flaA sequence variants, and all were resolved by HRM analysis. The isolates used had previously also been genotyped using single-nucleotide polymorphisms (SNPs), binary markers, CRISPR HRM, and flaA RFLP. flaA HRM analysis provided resolving power multiplicative to the SNPs, binary markers, and CRISPR HRM and largely concordant with the flaA RFLP. It was concluded that HRM analysis is a promising approach to genotyping based on highly variable genes.
Abstract:
Student understanding of decimal number is poor (e.g., Baturo, 1998; Behr, Harel, Post & Lesh, 1992). This paper reports on a study which set out to determine the cognitive complexities inherent in decimal-number numeration and what teaching experiences need to be provided in order to facilitate an understanding of decimal-number numeration. The study gave rise to a theoretical model which incorporated three levels of knowledge. Interview tasks were developed from the model to probe 45 students’ understanding of these levels, and intervention episodes undertaken to help students construct the baseline knowledge of position and order (Level 1 knowledge) and an understanding of multiplicative structure (Level 3 knowledge). This paper describes the two interventions and reports on the results which suggest that helping students construct appropriate mental models is an efficient and effective teaching strategy.
Abstract:
Principal Topic : Nascent entrepreneurship has drawn the attention of scholars in the last few years (Davidsson, 2006; Wagner, 2004). However, most studies have asked why firms are created, focussing on questions such as what are the characteristics (Delmar & Davidsson, 2000) and motivations (Carter, Gartner, Shaver & Reynolds, 2004) of nascent entrepreneurs, or what are the success factors in venture creation (Davidsson & Honig, 2003; Delmar & Shane, 2004). In contrast, the question of how companies emerge is still in its infancy. On the theoretical side, effectuation, developed by Sarasvathy (2001), offers one view of the strategies that may be at work during the venture creation process. Causation, the theorized inverse of effectuation, may be described as a rational reasoning method for creating a company: after a comprehensive market analysis to discover opportunities, the entrepreneur selects the alternative with the highest expected return and implements it through the use of a business plan. In contrast, effectuation suggests that the future entrepreneur will develop her new venture in a more iterative way, selecting possibilities through flexibility and interaction with the market, affordable loss of resources and time invested, and the development of pre-commitments and alliances with stakeholders. Another contrasting point is that causation is ''goal driven'' while an effectual approach is ''means driven'' (Sarasvathy, 2001). One prediction of effectuation theory is that effectuation is more likely to be used by entrepreneurs early in the venture creation process (Sarasvathy, 2001). However, this temporal aspect and the impact of effectuation strategies on venture outcomes have so far not been systematically and empirically tested on large samples. The reason behind this research gap is twofold. First, few studies collect longitudinal data on emerging ventures at an early enough stage of development to avoid severe survivor bias. Second, the studies that collect such data have not included validated measures of effectuation. The research we are conducting attempts to partially fill this gap by combining an empirical investigation of a large sample of nascent and young firms with the effectuation/causation continuum as a basis (Sarasvathy, 2001). The objectives are to understand the strategies used by firms during the creation process and to measure their impact on firm outcomes. Methodology/Key Propositions : This study draws its data from the first wave of the CAUSEE project, in which 28,383 Australian households were randomly contacted by phone using a methodology designed to capture emerging firms (Davidsson, Steffens, Gordon & Reynolds, 2008). This screening led to the identification of 594 nascent ventures (i.e., firms that are not yet operating) and 514 young firms (i.e., firms that started operating in 2004 or later) that were willing to participate in the study. Comprehensive phone interviews were conducted with these 1,108 ventures. In a likewise comprehensive follow-up 12 months later, 80% of the eligible cases completed the interview. The questionnaire contains specific sections designed to distinguish effectual and causal processes, innovation, gestation activities, business idea changes and venture outcomes. The effectuation questions are based on the components of effectuation strategy as described by Sarasvathy (2001), namely: flexibility, affordable loss and pre-commitment from stakeholders.
Results from two rounds of pre-testing informed the design of the instrument included in the main survey. The first two waves of data will be used to test and compare the use of effectuation in the venture creation process. To increase the robustness of the results, the temporal use of effectuation will be tested both directly and indirectly. 1. By comparing the use of effectuation in nascent and young firms from wave 1 to wave 2, we will be able to find out how effectuation is affected by time over a 12-month period and whether the stage of venture development has an impact on its use. 2. By comparing nascent ventures early in the creation process with nascent ventures late in the creation process. Early versus late can be determined with the help of the time-stamped gestation activity questions included in the survey. This will help us to determine the change on a small time scale during the creation phase of the venture. 3. By comparing nascent firms with young (already operational) firms. 4. By comparing young firms becoming operational in 2006 with those first becoming operational in 2004. Results and Implications : Wave 1 and 2 data collection has been completed, and wave 2 data are currently being checked and 'cleaned'. Analysis work will commence in September 2009. This paper is expected to contribute to the body of knowledge on effectuation by quantitatively measuring its use and its impact on nascent and young firms' activities at different stages of their development. In addition, this study will increase the understanding of the venture creation process by comparing nascent and young firms over time in a large sample of randomly selected ventures. We acknowledge that the results from this study will be preliminary and will have to be interpreted with caution, as the changes identified may be due to several factors and may not be attributable solely to the use or non-use of effectuation. Meanwhile, we believe that this study is important to the field of entrepreneurship as it provides some much-needed insights into the processes used by nascent and young firms during their creation and early operating stages.
Abstract:
Nanoindentation is a useful technique for probing the mechanical properties of bone, and finite element (FE) modeling of the indentation allows inverse determination of elasto-plastic constitutive properties. However, FE simulations to date have assumed frictionless contact between indenter and bone. The aim of this study was to explore the effect of friction in simulations of bone nanoindentation. Two-dimensional axisymmetric FE simulations were performed using a spheroconical indenter of tip radius 0.6 μm and angle 90°. The coefficient of friction between indenter and bone was varied between 0.0 (frictionless) and 0.3. Isotropic linear elasticity was used in all simulations, with bone elastic modulus E = 13.56 GPa and Poisson's ratio ν = 0.3. Plasticity was incorporated using both Drucker-Prager and von Mises yield surfaces. Friction had a modest effect on the predicted force-indentation curve for both von Mises and Drucker-Prager plasticity, reducing maximum indenter displacement by 10% and 20% respectively as the friction coefficient was increased from zero to 0.3 (at a maximum indenter force of 5 mN). However, friction had a much greater effect on predicted pile-up after indentation, reducing predicted pile-up from 0.27 μm to 0.11 μm with the von Mises model, and from 0.09 μm to 0.02 μm with Drucker-Prager plasticity. We conclude that it is important to include friction in nanoindentation simulations of bone.
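For reference, one common form of the two yield surfaces compared is sketched below; sign conventions and parameter definitions vary between FE codes, and the paper's material parameters are not restated here. The pressure term in the Drucker-Prager surface is what makes its pile-up predictions differ from the pressure-insensitive von Mises model.

```latex
% von Mises (pressure-insensitive) vs Drucker-Prager (pressure-sensitive):
F_{\mathrm{vM}} = \sqrt{3 J_2} - \sigma_y = 0,
\qquad
F_{\mathrm{DP}} = \sqrt{J_2} + \eta\,\frac{I_1}{3} - k = 0
% I_1: first stress invariant; J_2: second deviatoric stress invariant;
% \sigma_y: yield stress; \eta, k: friction- and cohesion-like parameters.
```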