934 results for Linear system solve


Relevance: 30.00%

Publisher:

Abstract:

This paper develops a general theory of validation gating for non-linear non-Gaussian models. Validation gates are used in target tracking to cull very unlikely measurement-to-track associations, before remaining association ambiguities are handled by a more comprehensive (and expensive) data association scheme. The essential property of a gate is to accept a high percentage of correct associations, thus maximising track accuracy, but provide a sufficiently tight bound to minimise the number of ambiguous associations. For linear Gaussian systems, the ellipsoidal validation gate is standard, and possesses the statistical property whereby a given threshold will accept a certain percentage of true associations. This property does not hold for non-linear non-Gaussian models. As a system departs from linear-Gaussian, the ellipsoidal gate tends to reject a higher than expected proportion of correct associations and permit an excess of false ones. In this paper, the concept of the ellipsoidal gate is extended to permit correct statistics for the non-linear non-Gaussian case. The new gate is demonstrated by a bearing-only tracking example.
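For reference, a minimal sketch of the standard ellipsoidal gate for the linear-Gaussian case described above: a measurement is accepted if the Mahalanobis distance of its innovation falls below a chi-square threshold chosen for a desired acceptance probability. All names are illustrative; the paper's extended non-linear non-Gaussian gate is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def ellipsoidal_gate(z, z_pred, S, accept_prob=0.99):
    """Standard ellipsoidal validation gate for linear-Gaussian tracking.

    z           : observed measurement vector
    z_pred      : predicted measurement (e.g. from a Kalman filter)
    S           : innovation covariance matrix
    accept_prob : desired probability of accepting a true association
    Returns True if the measurement falls inside the gate.
    """
    innovation = z - z_pred
    # Squared Mahalanobis distance of the innovation.
    d2 = innovation @ np.linalg.solve(S, innovation)
    # With a linear-Gaussian model, d2 is chi-square distributed with
    # dim(z) degrees of freedom, so this threshold accepts roughly
    # accept_prob of correct associations.
    gamma = chi2.ppf(accept_prob, df=len(z))
    return d2 <= gamma

# Example: gating a 2-D measurement.
z = np.array([1.1, 0.4])
z_pred = np.array([1.0, 0.5])
S = np.array([[0.2, 0.0], [0.0, 0.2]])
print(ellipsoidal_gate(z, z_pred, S))  # True: well inside the gate
```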

Relevance: 30.00%

Publisher:

Abstract:

Introduction: Why we need to base children's sport and physical education on the principles of dynamical systems theory and ecological psychology. As the childhood years are crucial for developing many physical skills as well as establishing the groundwork leading to lifelong participation in sport and physical activities (Orlick & Botterill, 1977, p. 11), it is essential to examine current practice to make sure it is meeting the needs of children. In recent papers (e.g. Renshaw, Davids, Chow & Shuttleworth, in press; Renshaw, Davids, Chow & Hammond, in review; Chow et al., 2009) we have highlighted that a guiding theoretical framework is needed to provide a principled approach to teaching and coaching, and that the approach must be evidence-based and focused on mechanism, not just on operational issues such as practice, competition and programme management (Lyle, 2002). There is a need to demonstrate how nonlinear pedagogy underpins teaching and coaching practice for children, given that some of the current approaches underpinning children's sport and P.E. may not be leading to optimal results. For example, little time is spent undertaking physical activities (Tinning, 2006), and much of this practice is not representative of the competition demands of the performance environment (Kirk & McPhail, 2002; Renshaw et al., 2008). Proponents of a non-linear pedagogy advocate the design of practice by applying key concepts such as the mutuality of the performer and environment, the tight coupling of perception and action, and the emergence of movement solutions due to self-organisation under constraints (see Renshaw et al., in press). As skills are shaped by the unique interacting individual, task and environmental constraints in these learning environments, small changes to individual structural constraints (e.g. height or limb length), functional constraints (e.g. motivation, perceptual skills, or strength, which can be acquired), task rules, equipment, or environmental constraints can lead to dramatic changes in the movement patterns learners adopt to solve performance problems. The aim of this chapter is to provide real-life examples for teachers and coaches who wish to adopt the ideas of non-linear pedagogy in their practice. Specifically, I will provide examples of issues related to individual constraints in children, and in particular the unique challenges facing coaches when individual constraints are changing due to growth and development. Part two focuses on understanding how cultural environmental constraints impact on children's sport; this is an area that has received very little attention but plays a very important part in the long-term development of sporting expertise. Finally, I will look at how coaches can manipulate task constraints to create effective learning environments for young children.

Relevance: 30.00%

Publisher:

Abstract:

Recently, a constraints-led approach has been promoted as a framework for understanding how children and adults acquire movement skills for sport and exercise (see Davids, Button & Bennett, 2008; Araújo et al., 2004). The aim of a constraints-led approach is to identify the nature of interacting constraints that influence skill acquisition in learners. In this chapter the main theoretical ideas behind a constraints-led approach are outlined to assist practical applications by sports practitioners and physical educators in a non-linear pedagogy (see Chow et al., 2006, 2007). To achieve this goal, this chapter examines implications for some of the typical challenges facing sport pedagogists and physical educators in the design of learning programmes.

Relevance: 30.00%

Publisher:

Abstract:

Despite various approaches, the production of biodegradable plastics such as polyhydroxybutyrate (PHB) in transgenic plants has met with limited success, due largely to low expression levels. Even in the few instances where high levels of protein expression have been reported, the transgenic plants have been stunted, indicating PHB is phytotoxic (Poirier 2002). This PhD describes the application of a novel virus-based gene expression technology, termed InPAct ("In Plant Activation"), for the production of PHB in tobacco and sugarcane. InPAct is based on the rolling circle replication mechanism by which circular ssDNA viruses replicate, and provides a system for controlled, high-level gene expression. Based on these features, InPAct was thought to represent an ideal system to enable the controlled, high-level expression of the three phb genes (phbA, phbB and phbC) required for PHB production in sugarcane at a preferred stage of plant growth.

A Tobacco yellow dwarf virus (TbYDV)-based InPAct-phbA vector, as well as linear vectors constitutively expressing phbB and phbC, were constructed, and different combinations were used to transform tobacco leaf discs. A total of four, eight, three and three phenotypically normal tobacco lines were generated from discs transformed with InPAct-phbA, InPAct-phbA + p1300-TaBV P-phbB/phbC-35S T, p1300-35S P-phbA-NOS T + p1300-TaBV P-phbB/phbC-35S T and InPAct-GUS, respectively. To determine whether the InPAct cassette could be activated in the presence of the TbYDV Rep, leaf samples from the eight InPAct-phbA + p1300-TaBV P-phbB/phbC-35S T plants were agroinfiltrated with p1300-TbYDV-Rep/RepA. Three days later, successful activation was indicated by the detection of episomes using both PCR and Southern analysis. Leaf discs from the eight InPAct-phbA + p1300-TaBV P-phbB/phbC-35S T transgenic plant lines were agroinfiltrated with p1300-TbYDV-Rep/RepA, and leaf tissue was collected ten days post-infiltration and examined for the presence of PHB granules. Confocal microscopy and TEM revealed the presence of typical PHB granules in five of the eight lines, thus demonstrating the functionality of InPAct-based PHB production in tobacco. However, analysis of leaf extracts by HPLC failed to detect the presence of PHB, suggesting only very low expression levels. Subsequent molecular analysis of three lines revealed low levels of correctly processed mRNA from the catalase intron contained within the InPAct cassette, and also the presence of cryptic splice sites within the intron.

In an attempt to increase expression levels, new InPAct-phb cassettes were generated in which the castor bean catalase intron was replaced with a synthetic intron (syntron). Further, in an attempt to both increase and better control Rep/RepA-mediated activation of InPAct cassettes, Rep/RepA expression was placed under the control of a stably integrated alc switch. Leaf discs from a transgenic tobacco line (Alc ML) containing 35S P-AlcR-AlcA P-Rep/RepA were supertransformed with InPAct-phbAsyn or InPAct-GUSsyn using Agrobacterium, and three plant lines were regenerated for each construct. Analysis of the RNA processing of the InPAct-phbAsyn cassette revealed highly efficient and correct splicing of the syntron, thus supporting its inclusion within the InPAct system. To determine the efficiency of the alc switch in activating InPAct, leaf material from the three Alc ML + InPAct-phbAsyn lines was either agroinfiltrated with 35S P-Rep/RepA or treated with ethanol. Unexpectedly, episomes were detected not only in the infiltrated and ethanol-treated samples, but also in non-treated samples. Subsequent analysis of transgenic Alc ML + InPAct-GUS lines confirmed that the alc switch was leaky in tissue culture. Although this was shown to be reversible once plants were removed from the tissue culture environment, it made the regeneration of Alc ML + InPAct-phbsyn plant lines extremely difficult, due to unintentional Rep expression and therefore high levels of phb expression and phytotoxic PHB production. Two Alc ML + InPAct-phbAsyn + p1300-TaBV P-phbB/phbC-35S T transgenic lines were able to be regenerated, and these were acclimatised, alcohol-treated and analysed. Although episome formation was detected as late as 21 days post-activation, no PHB was detected in the leaves of any plants using either microscopy or HPLC, suggesting the presence of a corrupt InPAct-phbA cassette in both lines.

The final component of this thesis involved the application of both the alc switch and the InPAct system to sugarcane in an attempt to produce PHB. Initial experiments using transgenic Alc ML + InPAct-GUS lines indicated that the alc system was not functional in sugarcane under the conditions tested. The functionality of the InPAct system, independent of the alc gene switch, was subsequently examined by bombarding the 35S Rep/RepA cassette into leaf and immature leaf whorl cells derived from InPAct-GUS transgenic sugarcane plants. No GUS expression was observed in leaf tissue, whereas weak and irregular GUS expression was observed in immature leaf whorl tissue derived from two InPAct-GUS lines and two InPAct-GUS + 35S P-AlcR-AlcA P-GUS lines. The most plausible explanation for the inconsistent and low levels of GUS expression in leaf whorls is a combination of the low number of sugarcane cells in the DNA replication-conducive S-phase and the irregular and random nature of the sugarcane cells bombarded with Rep/RepA. This study is the first report of a TbYDV-based InPAct system under the control of the alc switch for producing PHB in tobacco and sugarcane. Despite the inability to detect quantifiable levels of PHB in either tobacco or sugarcane, the findings of this study should nevertheless assist in the further development of both the InPAct system and the alc system, particularly for sugarcane, and ultimately lead to an ethanol-inducible InPAct gene expression system for the production of bioplastics and other proteins of commercial value in plants.

Relevance: 30.00%

Publisher:

Abstract:

Computational Intelligence Systems (CIS) are a class of advanced software that occupy an important position in solving single-objective, reverse/inverse and multi-objective design problems in engineering. This paper hybridises a CIS for optimisation with the concept of Nash equilibrium, used as an optimisation pre-conditioner to accelerate the optimisation process. The hybridised CIS (Hybrid Intelligence System), coupled to a Finite Element Analysis (FEA) tool and a Computer Aided Design (CAD) system, GiD, is applied to solve an inverse engineering design problem: the reconstruction of High Lift Systems (HLS). Numerical results obtained by the hybridised CIS are compared to those obtained by the original CIS. The benefits of using the concept of Nash equilibrium are clearly demonstrated in terms of solution accuracy and optimisation efficiency.
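A minimal sketch of the Nash-game idea used as a pre-conditioner: the design vector is split between two "players", each of which repeatedly optimises only its own variables while the other player's variables are held fixed, until neither can improve (an approximate Nash equilibrium). The objective, variable split, and optimiser below are illustrative stand-ins, not the paper's actual CIS or FEA coupling.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Illustrative design objective (stands in for an FEA-based cost).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.5 * x[0] * x[1]

def nash_preconditioner(x0, rounds=20, tol=1e-8):
    """Best-response iteration: player 1 owns x[0], player 2 owns x[1]."""
    x = np.asarray(x0, dtype=float)
    for _ in range(rounds):
        x_prev = x.copy()
        # Player 1 optimises its variable with player 2's held fixed.
        r1 = minimize(lambda a: objective([a[0], x[1]]), [x[0]])
        x[0] = r1.x[0]
        # Player 2 optimises its variable with player 1's held fixed.
        r2 = minimize(lambda b: objective([x[0], b[0]]), [x[1]])
        x[1] = r2.x[0]
        if np.linalg.norm(x - x_prev) < tol:
            break  # approximate Nash equilibrium reached
    return x  # used as a warm start for the main optimiser

print(nash_preconditioner([0.0, 0.0]))
```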

Relevance: 30.00%

Publisher:

Abstract:

Client puzzles are moderately hard cryptographic problems, neither easy nor impossible to solve, that can be used as a counter-measure against denial-of-service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time, and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial-of-service attack, this is a significant improvement, as the best known non-parallelisable puzzle, proposed by Karame and Capkun (ESORICS 2010), requires at least a 2k-bit modular exponentiation, where k is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen et al. (Asiacrypt 2009). We present experimental results which show that, for 1024-bit moduli, our proposed puzzle can be up to 30 times faster to verify than the Karame-Capkun puzzle and 99 times faster than the time-lock puzzle of Rivest et al.
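For context, a toy sketch of the classic time-lock puzzle of Rivest et al. mentioned above, which illustrates why modular-exponentiation puzzles are non-parallelisable: the client must perform t sequential squarings, while the server, knowing the factorisation of N, can generate and verify the solution with one short exponentiation. The tiny parameters are for illustration only; the puzzle proposed in the paper is a different, more efficient construction.

```python
# Toy time-lock puzzle (Rivest, Shamir, Wagner): solve x = g^(2^t) mod N.
p, q = 1000003, 1000033          # toy primes; real puzzles use large RSA primes
N = p * q
phi = (p - 1) * (q - 1)
g, t = 2, 10_000                 # base and difficulty (number of squarings)

# Server: fast generation using the trapdoor phi(N).
# Reducing 2^t mod phi(N) costs O(log t) work, then one exponentiation.
e = pow(2, t, phi)
expected = pow(g, e, N)

# Client: forced to do t sequential modular squarings (non-parallelisable,
# since each squaring depends on the previous result).
x = g % N
for _ in range(t):
    x = (x * x) % N

assert x == expected
print("puzzle solved:", x)
```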

Relevance: 30.00%

Publisher:

Abstract:

Linear (or continuous) assets are engineering infrastructure that usually spans long distances and can be divided into different segments, all of which perform the same function but may be subject to different loads and environmental factors. Typical linear assets include railway lines, roads, pipelines and cables. How and when to renew such assets are critical decisions for asset owners, as renewals normally involve significant capital investment. By investigating the characteristics of linear asset renewal decisions and identifying the critical requirements associated with them, we present a multi-criteria decision support method to help optimise renewal decisions. A case study concerning the renewal of an economiser's tubing system in a coal-fired power station is used to demonstrate the application of this method. Although the paper concerns a particular linear asset decision type, the approach has broad applicability for linear asset management.
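As a generic illustration of multi-criteria renewal scoring (the abstract does not give the paper's criteria or weighting scheme, so the criteria, weights, and options below are hypothetical): a simple weighted-sum model ranks renewal options by aggregating normalised scores across criteria.

```python
# Hypothetical weighted-sum ranking of renewal options across criteria.
criteria_weights = {"cost": 0.4, "risk_reduction": 0.35, "downtime": 0.25}

# Scores normalised to [0, 1], where higher is better for every criterion
# (e.g. a cheap option scores high on "cost").
options = {
    "renew_now":      {"cost": 0.3, "risk_reduction": 0.9, "downtime": 0.4},
    "renew_in_5_yrs": {"cost": 0.7, "risk_reduction": 0.5, "downtime": 0.8},
    "do_nothing":     {"cost": 1.0, "risk_reduction": 0.1, "downtime": 1.0},
}

def weighted_score(scores, weights):
    """Aggregate one option's normalised scores into a single value."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(options,
                key=lambda o: weighted_score(options[o], criteria_weights),
                reverse=True)
for name in ranked:
    print(name, round(weighted_score(options[name], criteria_weights), 3))
```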

Relevance: 30.00%

Publisher:

Abstract:

Despite an increased focus on proactive policing in recent years, criminal investigation is still perhaps the most important task of any law enforcement agency. As a result, the skills required to carry out a successful investigation, or to be an 'effective detective', have been subjected to much attention and debate (Smith and Flanagan, 2000; Dean, 2000; Fahsing and Gottschalk, 2008:652). Stelfox (2008:303) states that "The service's capacity to carry out investigations comprises almost entirely the expertise of investigators." In this respect, Dean (2000) highlighted the need to profile criminal investigators in order to promote further understanding of the cognitive approaches they take to the process of criminal investigation. As a result of his research, Dean (2000) produced a theoretical framework of criminal investigation comprising four disparate cognitive or 'thinking' styles: 'Method', 'Challenge', 'Skill' and 'Risk'. While the Method and Challenge styles deal with adherence to Standard Operating Procedures (SOPs) and the internal 'drive' that keeps an investigator going, the Skill and Risk styles both tap into the concept of creativity in policing. It is these two latter styles that provide the focus for this paper. This paper presents a brief discussion of Dean's (2000) Skill and Risk styles before giving an overview of the broader literature on creativity in policing. The potential benefits of a creative approach, as well as some hurdles which need to be overcome when proposing the integration of creativity within the policing sector, are then discussed. Finally, the paper concludes by proposing further research into Dean's (2000) Skill and Risk styles, and by stressing the need for significant changes to the structure and approach of the traditional policing organisation before creativity in policing is given the status it deserves.

Relevance: 30.00%

Publisher:

Abstract:

Introduction: In-depth investigations of crash risks inform prevention and safety promotion programmes. Traditionally, such investigations are conducted using exposure-controlled or case-control methodology. However, these studies need either observational data for control cases or exogenous exposure data such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows for a particular traffic location or site. These data are not readily available and often require extensive data collection effort on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group under the same set of explanatory variables. The combination of these two techniques therefore provides relative measures of crash risk under various influences of roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of the results for different combinations of interacting factors shows that the poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way-violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the quasi-induced exposure technique is a promising methodology that can be applied to better estimate the crash risks of other road users. This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further study comparing the proposed methodology with the case-control method would be useful.
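A toy sketch of the quasi-induced exposure idea described above, under the standard assumption that not-at-fault drivers in two-vehicle crashes approximate the exposed driving population: a group's relative crash involvement is the ratio of its share among at-fault parties to its share among not-at-fault parties. The counts are invented for illustration.

```python
# Hypothetical crash counts by road user group (two-vehicle crashes).
at_fault     = {"motorcycle": 120, "car": 700, "truck": 80}
not_at_fault = {"motorcycle": 60,  "car": 800, "truck": 140}

def relative_involvement(group):
    """Quasi-induced exposure: at-fault share divided by not-at-fault share.

    The not-at-fault share is used as a proxy for the group's exposure,
    so a ratio > 1 suggests over-involvement relative to exposure.
    """
    af_share = at_fault[group] / sum(at_fault.values())
    naf_share = not_at_fault[group] / sum(not_at_fault.values())
    return af_share / naf_share

for g in at_fault:
    print(g, round(relative_involvement(g), 2))
# motorcycle ratio > 1: over-represented among at-fault parties
```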

Relevance: 30.00%

Publisher:

Abstract:

Traditional analytic models for power system fault diagnosis are usually formulated as an unconstrained 0-1 integer programming problem. The key issue in these models is to seek the fault hypothesis that minimises the discrepancy between the actual and the expected states of the protective relays and circuit breakers concerned. The temporal information in alarm messages has not been well utilised in these methods, and as a result the diagnosis results may not be unique, and hence may be indefinite, especially when complicated and multiple faults occur. To solve this problem, this paper presents a novel analytic model employing the temporal information of alarm messages along with the concept of the related path. The temporal relationships among the actions of protective relays and circuit breakers, and the different protection configurations in a modern power system, can be reasonably represented by the developed model, and the diagnosed results will therefore be more definite under different fault circumstances. Finally, an actual power system fault case is used to verify the proposed method.
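A minimal sketch of the traditional 0-1 formulation mentioned above (not the paper's temporal model): enumerate fault hypotheses over candidate sections and pick the one whose expected relay and breaker states best match the observed alarms. The protection logic here is a deliberately simplified stand-in.

```python
from itertools import product

SECTIONS = ["line1", "line2", "bus1"]
# Observed states of protective devices (1 = operated / tripped).
observed = {"relay_line1": 1, "relay_line2": 0, "relay_bus1": 0,
            "breaker_A": 1, "breaker_B": 0}

def expected_states(hypothesis):
    """Simplified protection logic: a faulted section trips its relay,
    breaker_A operates for faults on line1 or bus1, breaker_B for line2."""
    faulted = {s for s, bit in zip(SECTIONS, hypothesis) if bit}
    return {
        "relay_line1": int("line1" in faulted),
        "relay_line2": int("line2" in faulted),
        "relay_bus1": int("bus1" in faulted),
        "breaker_A": int(bool(faulted & {"line1", "bus1"})),
        "breaker_B": int("line2" in faulted),
    }

def discrepancy(hypothesis):
    """Number of devices whose expected state disagrees with the alarms."""
    exp = expected_states(hypothesis)
    return sum(exp[d] != observed[d] for d in observed)

# Unconstrained 0-1 search: every subset of sections is a hypothesis.
best = min(product([0, 1], repeat=len(SECTIONS)), key=discrepancy)
print({s: b for s, b in zip(SECTIONS, best)})
# {'line1': 1, 'line2': 0, 'bus1': 0}: a fault on line1 explains the alarms
```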

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we present the outcomes of a project exploring the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited to applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating point VHDL library supporting addition, subtraction, multiplication and division. The variable-precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of the implementation, its limitations, and future work are also discussed.
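For reference, a straightforward software version of the Thomas algorithm named above (a single solve; the paper's contribution is a pipelined FPGA design for many such solves, which this sketch does not capture):

```python
def thomas_solve(a, b, c, d):
    """Solve a tri-diagonal system Ax = d by the Thomas algorithm.

    a: sub-diagonal (length n, a[0] unused)
    b: main diagonal (length n)
    c: super-diagonal (length n, c[-1] unused)
    Assumes a well-conditioned (e.g. diagonally dominant) system;
    the algorithm performs no pivoting.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    # Forward sweep: eliminate the sub-diagonal.
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the kind of system arising from a 1-D finite difference stencil.
print(thomas_solve(a=[0, -1, -1, -1], b=[2, 2, 2, 2],
                   c=[-1, -1, -1, 0], d=[1, 0, 0, 1]))  # [1.0, 1.0, 1.0, 1.0]
```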

Relevance: 30.00%

Publisher:

Abstract:

There is growing interest in the use of megavoltage cone-beam computed tomography (MV CBCT) data for radiotherapy treatment planning. To calculate accurate dose distributions, knowledge of the electron density (ED) of the tissues being irradiated is required. In the case of MV CBCT, it is necessary to determine a calibration relating CT number to ED, using the photon beam produced for MV CBCT. A number of different parameters can affect this calibration. This study was undertaken on the Siemens MV CBCT system, MVision, to evaluate the effect of the following parameters on the reconstructed CT pixel value to ED calibration: the number of monitor units (MUs) used (5, 8, 15 and 60 MUs), the image reconstruction filter (head and neck, and pelvis), the reconstruction matrix size (256 by 256 and 512 by 512), and the addition of extra solid water surrounding the ED phantom. A Gammex electron density CT phantom containing EDs from 0.292 to 1.707 was imaged under each of these conditions. A linear relationship between MV CBCT pixel value and ED was demonstrated for all MU settings and over the full range of EDs. Changing the number of MUs did not dramatically alter the MV CBCT ED calibration. The use of different reconstruction filters was found to affect the calibration, as was the addition of solid water surrounding the phantom. Dose distributions were calculated for treatment plans using simulated image data from a 15 MU, head-and-neck reconstruction filter MV CBCT image, with ED calibration curves taken either from the matching image parameters or from a 15 MU pelvis reconstruction filter; the resulting differences were small and clinically insignificant. Thus, the use of a single MV CBCT ED calibration curve is unlikely to result in any clinical differences. However, to minimise uncertainties in dose reporting, MV CBCT ED calibrations could be carried out using parameter-specific calibration measurements.
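A minimal sketch of the kind of linear calibration described above, fitting relative ED as a function of reconstructed pixel value by least squares (the pixel values and EDs below are invented placeholders, not measurements from this study):

```python
import numpy as np

# Hypothetical phantom insert data: mean MV CBCT pixel value vs known
# relative electron density (the study's Gammex phantom spans 0.292-1.707).
pixel_values = np.array([-800.0, -500.0, -90.0, 0.0, 120.0, 480.0, 900.0])
electron_density = np.array([0.292, 0.50, 0.95, 1.00, 1.10, 1.45, 1.707])

# Least-squares straight-line fit: ED = slope * pixel + intercept.
slope, intercept = np.polyfit(pixel_values, electron_density, deg=1)

def pixel_to_ed(pixel):
    """Convert an MV CBCT pixel value to relative electron density."""
    return slope * pixel + intercept

print(f"ED at pixel 250: {pixel_to_ed(250.0):.3f}")
```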

Relevance: 30.00%

Publisher:

Abstract:

Linear adaptive channel equalization using the least mean square (LMS) algorithm and the recursive least squares (RLS) algorithm is proposed for an innovative multi-user (MU) MIMO-OFDM wireless broadband communications system. The proposed equalization method adaptively compensates for the channel impairments caused by frequency selectivity in the propagation environment. Simulations of the proposed adaptive equalizer are conducted using a training sequence method to determine optimal performance through a comparative analysis. Results show an improvement of 0.15 in BER (at an SNR of 16 dB) when using adaptive equalization with the RLS algorithm, compared to the case in which no equalization is employed. In general, adaptive equalization using the LMS and RLS algorithms proved to be significantly beneficial for MU-MIMO-OFDM systems.
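A minimal sketch of training-sequence-driven LMS equalization as described above (single-carrier, single-user, and a toy channel for clarity; the paper's MU-MIMO-OFDM setting is more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frequency-selective channel and a known BPSK training sequence.
channel = np.array([1.0, 0.5, -0.2])                 # channel impulse response
train = rng.choice([-1.0, 1.0], size=2000)           # transmitted training symbols
received = np.convolve(train, channel)[:len(train)]
received += 0.01 * rng.standard_normal(len(train))   # additive noise

# LMS adaptive linear equalizer: weights updated towards the known symbols.
num_taps, mu = 8, 0.01
w = np.zeros(num_taps)
for n in range(num_taps, len(train)):
    x = received[n - num_taps:n][::-1]   # most recent samples first
    y = w @ x                            # equalizer output
    e = train[n - 1] - y                 # error vs known symbol (1-sample delay)
    w += mu * e * x                      # LMS weight update

# After training, the equalizer output should track the symbols closely.
print("final |error|:", abs(e))
```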

Relevance: 30.00%

Publisher:

Abstract:

Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the "gold standard" for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission to a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms. Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
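As a hedged illustration of the plan-combination step described above (weighting per-beam dose grids by monitor units; the file names and the dose-per-MU normalisation convention are assumptions, and the snippet uses the pydicom library rather than MCDTK itself):

```python
import numpy as np
import pydicom

# Read the exported DICOM RT plan and pull the monitor units per beam.
plan = pydicom.dcmread("rtplan.dcm")  # hypothetical exported plan file
beam_mus = [float(rb.BeamMeterset)
            for rb in plan.FractionGroupSequence[0].ReferencedBeamSequence]

# Hypothetical per-beam Monte Carlo dose grids, normalised to dose per MU
# (e.g. DOSXYZnrc outputs converted to 3D numpy arrays).
beam_doses = [np.load(f"beam_{i}_dose.npy") for i in range(len(beam_mus))]

# Combine the beams according to the monitor units specified in the plan.
total_dose = sum(mu * dose for mu, dose in zip(beam_mus, beam_doses))
print("combined dose grid shape:", total_dose.shape)
```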