217 results for Solving Equations
Abstract:
We derive a semianalytical model to describe the interaction of a single photon emitter and a collection of arbitrarily shaped metal nanoparticles. The theory treats the metal nanoparticles classically within the electrostatic eigenmode method, wherein the surface plasmon resonances of collections of nanoparticles are represented by the hybridization of the plasmon modes of the noninteracting particles. The single photon emitter is represented by a quantum mechanical two-level system that exhibits line broadening due to a finite spontaneous decay rate. Plasmon-emitter coupling is described by solving the resulting Bloch equations. We illustrate the theory by studying model systems consisting of a single emitter coupled to one, two, and three nanoparticles, and we also compare the predictions of our model to published experimental data. ©2012 American Physical Society.
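For reference, the Bloch equations of an isolated two-level emitter with spontaneous decay rate \(\gamma\), driven at Rabi frequency \(\Omega\) with detuning \(\Delta\), take the following textbook form in one common sign convention (the model in the abstract additionally couples the emitter to the plasmon modes, which is not shown here):

\[
\dot\rho_{ee} = -\gamma\,\rho_{ee} + \tfrac{i}{2}\left(\Omega\,\rho_{ge} - \Omega^{*}\rho_{eg}\right),
\qquad
\dot\rho_{ge} = -\left(\tfrac{\gamma}{2} + i\Delta\right)\rho_{ge} + \tfrac{i\Omega^{*}}{2}\left(\rho_{ee} - \rho_{gg}\right),
\]

with \(\rho_{gg} + \rho_{ee} = 1\) and \(\rho_{eg} = \rho_{ge}^{*}\).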
Abstract:
Finding an appropriate linking method to connect different dimensional element types in a single finite element model is a key issue in multi-scale modeling. This paper presents a mixed-dimensional coupling method using multi-point constraint equations, derived by equating the work done on either side of the interface connecting beam elements and shell elements, for constructing a finite element multi-scale model. A typical steel truss frame structure is selected as a case example, and a reduced-scale specimen of this truss section is studied in the laboratory to measure its dynamic and static behavior in the global truss and in local welded details, while different analytical models are developed for numerical simulation. Comparison of the calculated dynamic and static responses among the different numerical models, together with their good agreement with the experimental results, indicates that the proposed multi-scale model is efficient and accurate.
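A schematic statement of the work-equivalence idea behind such multi-point constraints (the notation here is ours, not the paper's) is that the virtual work of the beam end forces must equal the virtual work of the shell-edge tractions over the interface,

\[
\mathbf{u}_b^{T}\mathbf{f}_b = \int_{\Gamma}\mathbf{u}_s^{T}\,\mathbf{t}\,\mathrm{d}\Gamma .
\]

Interpolating the shell-edge displacements from nodal values, \(\mathbf{u}_s = \mathbf{N}\mathbf{d}_s\), and expressing the traction \(\mathbf{t}\) in terms of the beam stress resultants through beam theory reduces this identity to linear constraint equations of the form \(\mathbf{u}_b = \mathbf{T}\mathbf{d}_s\), coupling the beam node's degrees of freedom to those of the shell-edge nodes.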
Abstract:
A vertex-centred finite volume method (FVM) for the Cahn-Hilliard (CH) and recently proposed Cahn-Hilliard-reaction (CHR) equations is presented. Information at control volume faces is computed using a high-order least-squares approach based on Taylor series approximations. This least-squares problem explicitly includes the variational boundary condition (VBC), which ensures that the discrete equations satisfy all of the boundary conditions. We use this approach to solve the CH and CHR equations in one and two dimensions and show that our scheme satisfies the VBC to at least second order. For the CH equation we show evidence of conservative, gradient-stable solutions; however, for the CHR equation, strict gradient stability is more challenging to achieve.
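For context, the classical CH equation referred to above can be written (a standard textbook form, not quoted from the paper) as

\[
\frac{\partial c}{\partial t} = \nabla\cdot\bigl(M\,\nabla\mu\bigr),
\qquad
\mu = f'(c) - \epsilon^{2}\nabla^{2}c ,
\]

with the no-flux condition \(\nabla\mu\cdot\mathbf{n}=0\) and, in the simplest setting, the variational boundary condition \(\nabla c\cdot\mathbf{n}=0\); the CHR variant incorporates a surface reaction through modified boundary conditions.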
Abstract:
Problems involving the solution of advection-diffusion-reaction equations on domains and subdomains whose growth affects and is affected by these equations, commonly arise in developmental biology. Here, a mathematical framework for these situations, together with methods for obtaining spatio-temporal solutions and steady states of models built from this framework, is presented. The framework and methods are applied to a recently published model of epidermal skin substitutes. Despite the use of Eulerian schemes, excellent agreement is obtained between the numerical spatio-temporal, numerical steady state, and analytical solutions of the model.
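On a one-dimensional domain growing with local velocity \(v(x,t)\), a prototypical equation of this class reads (a generic illustrative form, not taken from the paper)

\[
\frac{\partial c}{\partial t} + \frac{\partial (v c)}{\partial x}
= D\,\frac{\partial^{2} c}{\partial x^{2}} + R(c),
\]

where the transport term \(\partial(vc)/\partial x\) accounts for both advection with the growing tissue and dilution due to local volume change, and \(R(c)\) collects the reaction kinetics.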
Abstract:
Based on the eigen crack opening displacement (COD) boundary integral equations, a newly developed computational approach is proposed for the analysis of multiple-crack problems. The eigen COD refers specifically to a crack in an infinite domain under fictitious traction acting on the crack surface. With the concept of the eigen COD, a great number of cracks can be solved using the conventional displacement discontinuity boundary integral equations in an iterative fashion with a small system matrix. The interactions among cracks are split into two parts according to the distances of the other cracks from the current crack: the strong effects of cracks in the adjacent group are treated with the aid of the local Eshelby matrix derived from the traction BIEs in discrete form, while the relatively weak effects of cracks in the far-field group are handled within the iteration procedure. Numerical examples are provided for the stress intensity factors of multiple cracks, up to several thousand in number, computed with the proposed approach. Comparison with analytical solutions in the literature, as well as with solutions of the dual boundary integral equations, verifies the effectiveness and efficiency of the proposed approach.
Abstract:
In the context of ambiguity resolution (AR) for Global Navigation Satellite Systems (GNSS), decorrelation among the entries of an ambiguity vector, integer ambiguity search, and ambiguity validation are the three standard procedures for solving integer least-squares problems. This paper contributes to AR issues in three respects. Firstly, the orthogonality defect is introduced as a new measure of the performance of ambiguity decorrelation methods and is compared with the decorrelation number and the condition number, which are currently used as criteria for measuring the correlation of the ambiguity variance-covariance matrix. Numerically, the orthogonality defect demonstrates slightly better performance than the condition number as a measure linking decorrelation impact to computational efficiency. Secondly, the paper examines how the decorrelation number, the condition number, the orthogonality defect and the size of the ambiguity search space relate to the number of ambiguity search candidates and search nodes. The size of the ambiguity search space can be estimated reliably if the ambiguity matrix is well decorrelated, and it is shown to be a significant parameter in the ambiguity search process. Thirdly, a new ambiguity resolution scheme is proposed to improve ambiguity search efficiency through control of the size of the ambiguity search space. The new AR scheme combines the LAMBDA search and validation procedures, which results in a much smaller search space and higher computational efficiency while retaining the same AR validation outcomes. In fact, the new scheme can deal with the case where there is only one candidate, whereas the existing search methods require at least two candidates; if there is more than one candidate, the new scheme falls back to the usual ratio-test procedure. Experimental results indicate that this combined method can indeed improve ambiguity search efficiency for both single-constellation and dual-constellation scenarios, showing its potential for processing high-dimensional integer parameters in a multi-GNSS environment.
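One common definition of the orthogonality defect of a basis matrix \(\mathbf{B} = [\mathbf{b}_1,\dots,\mathbf{b}_n]\) (the notation here is ours; in the AR setting \(\mathbf{B}\) could be a triangular factor of the decorrelated ambiguity variance-covariance matrix) is

\[
\delta(\mathbf{B}) = \frac{\prod_{i=1}^{n}\lVert\mathbf{b}_i\rVert}{\sqrt{\det\!\bigl(\mathbf{B}^{T}\mathbf{B}\bigr)}} \;\ge\; 1,
\]

with equality exactly when the basis vectors are mutually orthogonal, so values close to 1 indicate a well-decorrelated ambiguity transformation.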
Abstract:
In this paper, we present the outcomes of a project on the exploration of the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited for applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating point VHDL library supporting addition, subtraction, multiplication and division. The variable precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of implementation, the limitations, and future work are also discussed.
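For context, the Thomas algorithm that the circuit pipelines is the standard forward-elimination/back-substitution recurrence; a plain software sketch (not the paper's VHDL implementation) is:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (a[0] unused),
    b the diagonal, c the super-diagonal (c[-1] unused), d the right-hand
    side. No pivoting, so diagonal dominance (or similar) is assumed."""
    n = len(d)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):   # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```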
Abstract:
Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early-stage forecasts are characterized by the minimal amount of information available concerning the new (target) project, to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli's law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity, or pooling, problem. A method of solving the homogeneity problem is described that uses closed-form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability by a cross-validation method, in which a series of targets is extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (with different project types: Residential, Commercial centre, Car parking, Social community centre, School, Office, Hotel, Industrial, University and Hospital) clustered into base groups according to their type and size.
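The cross-validation idea can be sketched as follows (illustrative only; the data schema, similarity measure and error metric are our assumptions, not the paper's closed-form equations):

```python
from statistics import mean

def cross_validate_group_size(projects, k):
    """Leave-one-out forecast error when each target project is predicted by
    the mean contract sum of the k previous projects of the same type that
    are closest to it in size. `projects` is a list of dicts with
    hypothetical 'type', 'size' and 'cost' keys. Returns the mean absolute
    percentage error."""
    errors = []
    for i, target in enumerate(projects):
        others = [p for j, p in enumerate(projects)
                  if j != i and p["type"] == target["type"]]
        base = sorted(others, key=lambda p: abs(p["size"] - target["size"]))[:k]
        forecast = mean(p["cost"] for p in base)
        errors.append(abs(forecast - target["cost"]) / target["cost"])
    return mean(errors)

# The optimal base-group size is then the k with the smallest error, e.g.:
# best_k = min(range(2, 50), key=lambda k: cross_validate_group_size(data, k))
```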
Abstract:
Objectives: To compare measures of fat-free mass (FFM) by three different bioelectrical impedance analysis (BIA) devices and to assess the agreement between three different equations validated in older adult and/or overweight populations. Design: Cross-sectional study. Setting: Orthopaedics ward of a Brisbane public hospital, Australia. Participants: Twenty-two overweight, older Australians (72 yr ± 6.4, BMI 34 kg/m2 ± 5.5) with knee osteoarthritis. Measurements: Body composition was measured using three BIA devices: Tanita 300-GS (foot-to-foot), Impedimed DF50 (hand-to-foot) and Impedimed SFB7 (bioelectrical impedance spectroscopy (BIS)). Three equations for predicting FFM were selected based on their ability to be applied to an older adult and/or overweight population. Impedance values were extracted from the hand-to-foot BIA device and included in the equations to estimate FFM. Results: The mean FFM measured by BIS (57.6 kg ± 9.1) differed significantly from those measured by foot-to-foot (54.6 kg ± 8.7) and hand-to-foot BIA (53.2 kg ± 10.5) (P < 0.001). The mean ± SD FFM values predicted by the three equations using raw data from hand-to-foot BIA were 54.7 kg ± 8.9, 54.7 kg ± 7.9 and 52.9 kg ± 11.05, respectively. These results did not differ from the FFM predicted by the hand-to-foot device (F = 2.66, P = 0.118). Conclusions: Our results suggest that foot-to-foot and hand-to-foot BIA may be used interchangeably in overweight older adults at the group level but, due to the large limits of agreement, may lead to unacceptable error in individuals. There was no difference between the three prediction equations; however, these results should be confirmed within a larger sample and against a reference standard.
Abstract:
Women are underrepresented in science, technology, engineering and mathematics (STEM) areas in university settings; however, this may be the result of attitude rather than aptitude. There is widespread agreement that quantitative problem-solving is essential for graduate competence and preparedness in science and other STEM subjects. The research question addresses the identities and transformative experiences (experiential, perception, & motivation) of both male and female university science students in quantitative problem solving. This study used surveys to investigate first-year university students' (231 females and 198 males) perceptions of their quantitative problem solving. Stata (statistical analysis package, version 11) was used to analyse gender differences in quantitative problem solving using descriptive and inferential statistics. Males perceived themselves as having a stronger mathematics identity than females. Results showed statistically significant differences (p < 0.05) between the genders on 21 of the 30 survey items associated with transformative experiences. Males appeared willing to be involved in quantitative problem solving outside their science coursework requirements. Positive attitudes towards STEM-type subjects may need to be nurtured in females before they arrive in the university setting (e.g., in high school or earlier). Females also need equitable STEM education opportunities, such as conversations or activities outside school with family and friends, to develop more positive attitudes in these fields.
Abstract:
Universities often struggle to satisfy students’ need for feedback. This is an area where student satisfaction with courses of study can be low. Yet it is clear that one of the properties of good teaching is giving the highest quality feedback on student work. The term ‘feedback’, though, is most commonly associated with summative assessment given by a teacher after work is completed, and the student can often be a passive participant in the process. This paper looks at the implementation of a web-based interactive scenario completed by students prior to summative assessment. It requires students to participate actively to develop and improve their legal problem-solving skills. Traditional delivery of legal education focuses on print and an instructor who conveys the meaning of the written word to students. Today, mixed modes of teaching are often preferred, and they can provide enhanced opportunities for feeding forward with greater emphasis on what students do. Web-based activities allow for flexible delivery; they are accessible off campus, at a time that suits the student, and may be completed by students at their own pace. This paper reports on an online interactive activity which provides the valuable formative feedback necessary for successful completion of a final problem-solving assignment. It focuses on how the online activity feeds forward and contributes to the development of legal problem-solving skills. Introduction to Law is a unit designed and introduced for completion by undergraduate students from faculties other than law, but it is focused most particularly on students enrolled in the Bachelor of Entertainment Industries degree, a joint initiative of the faculties of Creative Industries, Business and Law at the Queensland University of Technology in Australia. The final (and major) assessment for the unit is an assignment requiring students to explain the legal consequences of particular scenarios. A number of cost-effective web-based interactive scenarios have been developed to support the unit’s classroom activities. The tool commences with instruction on problem-solving method. Students then view the stimulus, a narrative produced in the form of a music video clip. A series of questions is posed to guide students through the process, and they can compare their responses with sample answers provided. The activity clarifies the problem-solving method and the expectations for the summative assessment, and it allows students to practise the skill. The paper reports on the approach to teaching and learning taken in the unit, including the design process and implementation of the activity. It includes an evaluation of the activity with respect to its effectiveness as a tool to feed forward and reflects on the implications for the teaching of law in higher education.
Abstract:
This article focuses on problem solving activities in a first grade classroom in a typical small community and school in Indiana. But the teacher and the activities in this class were not at all typical of what goes on in most comparable classrooms, and the issues that will be addressed are relevant and important for students from kindergarten through college. Can children really solve problems that involve concepts (or skills) that they have not yet been taught? Can children really create important mathematical concepts on their own, without a lot of guidance from teachers? What is the relationship between problem solving abilities and the mastery of skills that are widely regarded as being “prerequisites” to such tasks? Can primary school children (whose toolkits of skills are limited) engage productively in authentic simulations of “real life” problem solving situations? Can three-person teams of primary school children really work together collaboratively, and remain intensely engaged, on problem solving activities that require more than an hour to complete? Are the kinds of learning and problem solving experiences that are recommended (for example) in the USA’s Common Core State Curriculum Standards really representative of the kind that even young children encounter beyond school in the 21st century? … This article offers an existence proof showing why our answers to these questions are: Yes. Yes. Yes. Yes. Yes. Yes. And: No. … Even though the evidence we present is only intended to demonstrate what’s possible, not what’s likely to occur under any circumstances, there is no reason to expect that the things our children accomplished could not be accomplished by average-ability children in other schools and classrooms.
Abstract:
Recent work on the numerical solution of stochastic differential equations (SDEs) has focused on the development of numerical methods with good stability and order properties. These numerical implementations have been made with fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using variable stepsize. It has been necessary, in the deterministic case, to consider the "best" choice for an initial stepsize, as well as to develop effective strategies for stepsize control; the same, of course, must be carried out in the stochastic case. In this paper, proportional integral (PI) control is applied to a variable stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers, viewed from a digital filter theory perspective, to PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained when using these control theory approaches compared with the regular stepsize change strategy.
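For illustration, a standard PI stepsize controller updates the step from the local error estimate \(r_n\) returned by the embedded pair and the tolerance \(\varepsilon\) as (a generic textbook form, with the exponents typically scaled by the order of the error estimator; the gain choices used in the paper are not reproduced here)

\[
h_{n+1} = h_n \left(\frac{\varepsilon}{r_n}\right)^{k_I}\left(\frac{r_{n-1}}{r_n}\right)^{k_P},
\]

so the integral term reacts to the current error level while the proportional term damps oscillations in the step-size sequence.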
Abstract:
In this work we discuss the effects of white and coloured noise perturbations on the parameters of a mathematical model of bacteriophage infection introduced by Beretta and Kuang in [Math. Biosc. 149 (1998) 57]. We numerically simulate the strong solutions of the resulting systems of stochastic ordinary differential equations (SDEs), with respect to the global error, by means of numerical methods of both Euler-Taylor expansion and stochastic Runge-Kutta type.
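As an illustration of the simplest Euler-Taylor scheme mentioned above, an Euler-Maruyama step for a system dX = f(X) dt + g(X) dW can be sketched as follows (generic code under the assumption of scalar driving noise, not the authors' solver):

```python
import numpy as np

def euler_maruyama(f, g, x0, t0, t1, n_steps, rng=None):
    """Strong order 0.5 Euler-Maruyama integration of dX = f(X) dt + g(X) dW
    with a single Wiener process; f and g map state vectors to vectors."""
    rng = rng or np.random.default_rng()
    dt = (t1 - t0) / n_steps
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Wiener increment over dt
        x = x + f(x) * dt + g(x) * dw
    return x
```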