894 results for "Conventional methods"
Abstract:
Research Interests: Are parents complying with the legislation? Is this the same for urban, regional and rural parents? Indigenous parents? What difficulties do parents experience in complying? Do parents understand why the legislation was put in place? Have there been negative consequences for other organisations or sectors of the community?
Abstract:
Recent studies have started to explore context-awareness as a driver in the design of adaptable business processes. The emerging challenge of identifying and considering contextual drivers in the environment of a business process is well understood; however, typical methods used in business process modeling do not yet consider this additional contextual information in their process designs. In this chapter, we describe our research towards innovative and advanced process modeling methods that include mechanisms to incorporate relevant contextual drivers and their impacts on business processes in process design models. We report on our ongoing work with an Australian insurance provider and describe the design science approach we employed to develop these innovative and useful artifacts as part of a context-aware method framework. We discuss the utility of these artifacts in an application to the claims handling process at the case organization.
Abstract:
This paper reviews the current state in the application of infrared methods, particularly mid-infrared (mid-IR) and near infrared (NIR), for the evaluation of the structural and functional integrity of articular cartilage. It is noted that while a considerable amount of research has been conducted with respect to tissue characterization using mid-IR, it is almost certain that full-thickness cartilage assessment is not feasible with this method. In contrast, the considerably greater penetration capacity of NIR suggests that it is a suitable candidate for full-thickness cartilage evaluation. Nevertheless, significant research is still required to improve the specificity and clinical applicability of the method if it is to be used to distinguish between functional and dysfunctional cartilage.
Abstract:
Purpose: To compare the accuracy of different methods for calculating human lens power when lens thickness is not available. Methods: Lens power was calculated by four methods. Three methods were used with previously published biometry and refraction data of 184 emmetropic and myopic eyes of 184 subjects (age range [18, 63] years, spherical equivalent range [–12.38, +0.75] D). These three methods consist of the Bennett method, which uses lens thickness, our modification of the Stenström method and the Bennett–Rabbetts method, both of which do not require knowledge of lens thickness. These methods include c constants, which represent distances from the lens surfaces to the principal planes. Lens powers calculated with these methods were compared with those calculated using phakometry data available for a subgroup of 66 emmetropic eyes (66 subjects). Results: Lens powers obtained from the Bennett method corresponded well with those obtained by phakometry for emmetropic eyes, although individual differences of up to 3.5 D occurred. Lens powers obtained from the modified Stenström and Bennett–Rabbetts methods deviated significantly from those obtained with either the Bennett method or phakometry. Customizing the c constants improved this agreement, but applying these constants to the entire group gave mean lens power differences of 0.71 ± 0.56 D compared with the Bennett method. By further optimizing the c constants, agreement with the Bennett method was within ±1 D for 95% of the eyes. Conclusion: With appropriate constants, the modified Stenström and Bennett–Rabbetts methods provide a good approximation of the Bennett lens power in emmetropic and myopic eyes.
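The agreement figures reported above (mean difference 0.71 ± 0.56 D, agreement within ±1 D for 95% of eyes) are the kind of statistic produced by a Bland–Altman style limits-of-agreement analysis of paired measurements. A minimal sketch, using entirely hypothetical paired lens powers (the helper and the numbers are illustrative, not the study's data):

```python
import statistics

def limits_of_agreement(method_a, method_b):
    """Bland-Altman style comparison of paired lens-power estimates (D):
    mean difference and 95% limits of agreement (mean +/- 1.96 SD)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)
    return mean_diff, mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Hypothetical paired lens powers (D) from two calculation methods
bennett  = [22.1, 21.5, 23.0, 20.8, 22.6]
modified = [22.6, 22.3, 23.5, 21.4, 23.2]
mean_d, lo, hi = limits_of_agreement(modified, bennett)
```

If the interval (lo, hi) lies within ±1 D, the two methods agree to the tolerance quoted in the abstract.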
Abstract:
Background Comprehensive geriatric assessment has been shown to improve patient outcomes, but the geriatricians who deliver it are in short supply. A web-based method of comprehensive geriatric assessment has been developed with the potential to improve access to specialist geriatric expertise. The current study aims to test the reliability and safety of comprehensive geriatric assessment performed “online” in making geriatric triage decisions. It will also explore the accuracy of the procedure in identifying common geriatric syndromes, and its cost relative to conventional “live” consultations. Methods/Design The study population will consist of 270 acutely hospitalized patients referred for geriatric consultation at three sites. Paired assessments (live and online) will be conducted by independent, blinded geriatricians and the level of agreement examined. This will be compared with the level of agreement between two independent, blinded geriatricians each consulting with the patient in person (i.e. “live”). Agreement between the triage decision from live-live assessments and between the triage decision from live-online assessments will be calculated using kappa statistics. Agreement between the online and live detection of common geriatric syndromes will also be assessed using kappa statistics. Resource use data will be collected for online and live-live assessments to allow comparison between the two procedures. Discussion If the online approach is found to be less precise than live assessment, further analysis will seek to identify patient subgroups where disagreement is more likely. This may enable a protocol to be developed that avoids unsafe clinical decisions at a distance. Trial registration Trial registration number: ACTRN12611000936921
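The kappa statistics described above quantify chance-corrected agreement between two raters on the same patients. A minimal sketch of Cohen's kappa with hypothetical triage categories and decisions (not the trial's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning each patient to one of several categories."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical triage decisions by a "live" and an "online" geriatrician
live   = ["home", "rehab", "acute", "home", "rehab", "home", "acute", "home"]
online = ["home", "rehab", "acute", "rehab", "rehab", "home", "acute", "home"]
kappa = cohens_kappa(live, online)
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.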
Abstract:
During the course of several natural disasters in recent years, Twitter has been found to play an important role as an additional medium for many-to-many crisis communication. Emergency services are successfully using Twitter to inform the public about current developments, and are increasingly also attempting to source first-hand situational information from Twitter feeds (such as relevant hashtags). The further study of the uses of Twitter during natural disasters relies, however, on the development of flexible and reliable research infrastructure for tracking and analysing Twitter feeds at scale and in close to real time. This article outlines two approaches to the development of such infrastructure: one which builds on the readily available open source platform yourTwapperkeeper to provide a low-cost, simple, and basic solution; and one which establishes a more powerful and flexible framework by drawing on highly scalable, state-of-the-art technology.
Abstract:
In recent times, light gauge steel framed (LSF) structures, such as cold-formed steel wall systems, are increasingly used, but without a full understanding of their fire performance. Traditionally, the fire resistance rating of these load-bearing LSF wall systems has been based on approximate prescriptive methods developed from limited fire tests. Very often these are limited to the standard wall configurations used by industry, and increased fire rating is provided simply by adding more plasterboards to the walls. This is not an acceptable situation, as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these wall systems. Hence a detailed fire research study into the performance of LSF wall systems was undertaken using full scale fire tests and extensive numerical studies. A new composite wall panel developed at QUT was also considered in this study, in which the insulation was used externally between the plasterboards on both sides of the steel wall frame instead of being located in the cavity. Three full scale fire tests of LSF wall systems built using the new composite panel system were undertaken at a higher load ratio using a gas furnace designed to deliver heat in accordance with the standard time-temperature curve in AS 1530.4 (SA, 2005). The fire tests included measurements of the load-deformation characteristics of the LSF walls until failure as well as the associated time-temperature measurements across the thickness and along the length of all the specimens. Tests of LSF walls under axial compression load showed the improvement to their fire performance and fire resistance rating when the new composite panel was used. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. The numerical study was undertaken using the finite element program ABAQUS.
The finite element analyses were conducted under both steady state and transient state conditions using the measured hot and cold flange temperature distributions from the fire tests. The elevated temperature reduction factors for mechanical properties were based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). These finite element models were first validated by comparing their results with experimental test results from this study and Kolarkar (2010). The developed finite element models were able to predict the failure times within 5 minutes. The validated model was then used in a detailed numerical study into the strength of cold-formed thin-walled steel channels used in both the conventional and the new composite panel systems, to increase the understanding of their behaviour under non-uniform elevated temperature conditions and to develop fire design rules. The measured time-temperature distributions obtained from the fire tests were used. Since the fire tests showed that the plasterboards provided sufficient lateral restraint until the failure of the LSF wall panels, this assumption was also used in the analyses and was further validated by comparison with experimental results. Hence in this study of LSF wall studs, only flexural buckling about the major axis and local buckling were considered. A new fire design method was proposed using AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated. A spreadsheet-based design tool was developed based on the above design codes to predict the failure load ratio versus time and temperature for varying LSF wall configurations, including insulation. Idealised time-temperature profiles were developed based on the measured temperature values of the studs.
These were used in a detailed numerical study to fully understand the structural behaviour of LSF wall panels. Appropriate equations were proposed to find the critical temperatures for different composite panels, varying in steel thickness, steel grade and screw spacing, for any load ratio. Hence useful and simple design rules were proposed based on the current cold-formed steel structures and fire design standards, and their accuracy and advantages were discussed. The results were also used to validate the fire design rules developed based on AS/NZS 4600 (SA, 2005) and Eurocode 3 Part 1.3 (ECS, 2006). This demonstrated the significant improvements of the design method when compared with the currently used prescriptive design methods for LSF wall systems under fire conditions. In summary, this research has developed comprehensive experimental and numerical thermal and structural performance data for both the conventional and the proposed new load bearing LSF wall systems under standard fire conditions. Finite element models were developed to predict the failure times of LSF walls accurately. Idealised hot flange temperature profiles were developed for non-insulated, cavity insulated and externally insulated load bearing wall systems. Suitable fire design rules and spreadsheet-based design tools were developed based on the existing standards to predict the ultimate failure load, failure times and failure temperatures of LSF wall studs. Simplified equations were proposed to find the critical temperatures for varying wall panel configurations and load ratios. The results from this research are useful to both structural and fire engineers and researchers. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF load bearing walls under standard fire conditions.
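A design tool of the kind described above maps an idealised hot-flange time-temperature profile and a critical temperature (for a given load ratio) to a predicted failure time. A minimal sketch of that interpolation step only, using entirely hypothetical profile values (the real profiles, critical temperatures and design rules come from the tests and standards, not from this example):

```python
import bisect

def failure_time(profile, critical_temp):
    """Linearly interpolate the time at which the hot-flange temperature
    of an LSF wall stud reaches the critical temperature.
    `profile` is a list of (time_min, temp_C) pairs with rising temperature."""
    times, temps = zip(*profile)
    if critical_temp <= temps[0]:
        return times[0]
    if critical_temp >= temps[-1]:
        return times[-1]
    i = bisect.bisect_left(temps, critical_temp)
    t0, t1 = times[i - 1], times[i]
    c0, c1 = temps[i - 1], temps[i]
    return t0 + (t1 - t0) * (critical_temp - c0) / (c1 - c0)

# Hypothetical idealised hot-flange profile (minutes, deg C)
profile = [(0, 20), (30, 250), (60, 450), (90, 600), (120, 700)]
t_fail = failure_time(profile, 500)  # critical temperature for some load ratio
```

Tabulating this over a range of load ratios (each with its own critical temperature) reproduces the failure-load-ratio versus time curves a spreadsheet tool would report.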
Abstract:
Blogs and other online platforms for personal writing such as LiveJournal have been of interest to researchers across the social sciences and humanities for a decade now. Although growth in the uptake of blogging has stalled somewhat since the heyday of blogs in the early 2000s, blogging continues to be a major genre of Internet-based communication. Indeed, at the same time that mass participation has moved on to Facebook, Twitter, and other more recent communication phenomena, what has been left behind by the wave of mass adoption is a slightly smaller but all the more solidly established blogosphere of engaged and committed participants. Blogs are now an accepted part of institutional, group, and personal communications strategies (Bruns and Jacobs, 2006); in style and substance, they are situated between the more static information provided by conventional Websites and Webpages and the continuous newsfeeds provided through Facebook and Twitter updates. Blogs provide a vehicle for authors (and their commenters) to think through given topics in the space of a few hundred to a few thousand words – expanding, perhaps, on shorter tweets, and possibly leading to the publication of more fully formed texts elsewhere. They are also a very flexible medium: they readily provide the functionality to include images, audio, video, and other additional materials – as well as the fundamental tool of blogging, the hyperlink itself. Indeed, the role of the link in blogs and blog posts should not be underestimated.
Whatever the genre and topic that individual bloggers engage in, for the most part blogging is used to provide timely updates and commentary – and it is typical for such material to link both to relevant posts made by other bloggers, and to previous posts by the present author, both to background material which provides readers with further information about the blogger’s current topic, and to news stories and articles which the blogger found interesting or worthy of critique. Especially where bloggers are part of a larger community of authors sharing similar interests or views (and such communities are often indicated by the presence of yet another type of link – in blogrolls, often in a sidebar on the blog site, which list the blogger’s friends or favourites), then, the reciprocal writing and linking of posts often constitutes an asynchronous, distributed conversation that unfolds over the course of days, weeks, and months. Research into blogs is interesting for a variety of reasons, therefore. For one, a qualitative analysis of one or several blogs can reveal the cognitive and communicative processes through which individual bloggers define their online identity, position themselves in relation to fellow bloggers, frame particular themes, topics and stories, and engage with one another’s points of view. It may also shed light on how such processes may differ across different communities of interest, perhaps in correlation with the different societal framing and valorisation of specific areas of interest, with the socioeconomic backgrounds of individual bloggers, or with other external or internal factors. Such qualitative research now looks back on a decade-long history (for key collections, see Gurak, et al., 2004; Bruns and Jacobs, 2006; also see Walker Rettberg, 2008) and has recently shifted also to specifically investigate how blogging practices differ across different cultures (Russell and Echchaibi, 2009). 
Other studies have also investigated the practices and motivations of bloggers in specific countries from a sociological perspective, through large-scale surveys (e.g. Schmidt, 2009). Blogs have also been directly employed within both K-12 and higher education, across many disciplines, as tools for reflexive learning and discussion (Burgess, 2006).
Abstract:
Study Design. A sheep study designed to compare the accuracy of static radiographs, dynamic radiographs, and computed tomographic (CT) scans for the assessment of thoracolumbar facet joint fusion as determined by micro-CT scanning. Objective. To determine the accuracy and reliability of conventional imaging techniques in identifying the status of thoracolumbar (T13-L1) facet joint fusion in a sheep model. Summary of Background Data. Plain radiographs are commonly used to determine the integrity of surgical arthrodesis of the thoracolumbar spine. Many previous studies of fusion success have relied solely on postoperative assessment of plain radiographs, a technique lacking sensitivity for pseudarthrosis. CT may be a more reliable technique, but is less well characterized. Methods. Eleven adult sheep were randomized to either attempted arthrodesis using autogenous bone graft and internal fixation (n = 3) or intentional pseudarthrosis (IP) using oxidized cellulose and internal fixation (n = 8). After 6 months, facet joint fusion was assessed by independent observers using (1) plain static radiography alone, (2) additional dynamic radiographs, and (3) additional reconstructed spiral CT imaging. These assessments were correlated with high-resolution micro-CT imaging to predict the utility of the conventional imaging techniques in the estimation of fusion success. Results. The capacity of plain radiography alone to correctly predict fusion or pseudarthrosis was 43%, and was not improved by adding dynamic radiography, which also yielded 43% accuracy. Adding reformatted CT imaging to the plain radiography techniques increased the capacity to correctly predict fusion outcome to 86%. The sensitivity, specificity, and accuracy of static radiography were 0.33, 0.55, and 0.43, respectively; those of dynamic radiography were 0.46, 0.40, and 0.43, respectively; and those of radiography plus CT were 0.88, 0.85, and 0.86, respectively. Conclusion.
CT-based evaluation correlated most closely with high-resolution micro-CT imaging. Neither plain static nor dynamic radiographs were able to predict fusion outcome accurately. © 2012 Lippincott Williams & Wilkins.
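The sensitivity, specificity and accuracy figures above follow from a standard confusion-matrix calculation against the micro-CT reference. A minimal sketch with hypothetical fusion calls (not the study's data):

```python
def diagnostic_stats(predicted_fused, actual_fused):
    """Sensitivity, specificity and accuracy of an imaging method's binary
    fused / not-fused calls against a reference standard (e.g. micro-CT)."""
    pairs = list(zip(predicted_fused, actual_fused))
    tp = sum(p and a for p, a in pairs)            # fused, called fused
    tn = sum(not p and not a for p, a in pairs)    # not fused, called not fused
    fp = sum(p and not a for p, a in pairs)        # not fused, called fused
    fn = sum(not p and a for p, a in pairs)        # fused, called not fused
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(pairs)
    return sensitivity, specificity, accuracy

# Hypothetical facet-joint calls: True = fused, False = pseudarthrosis
ct_calls = [True, True, False, False, True, False, False]
micro_ct = [True, True, False, False, False, False, True]
sens, spec, acc = diagnostic_stats(ct_calls, micro_ct)
```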
Abstract:
Fractional differential equations are becoming more widely accepted as a powerful tool in modelling anomalous diffusion, which is exhibited by various materials and processes. Recently, researchers have suggested that rather than using constant order fractional operators, some processes are more accurately modelled using fractional orders that vary with time and/or space. In this paper we develop computationally efficient techniques for solving time-variable-order time-space fractional reaction-diffusion equations (tsfrde) using the finite difference scheme. We adopt the Coimbra variable order time fractional operator and variable order fractional Laplacian operator in space where both orders are functions of time. Because the fractional operator is nonlocal, it is challenging to efficiently deal with its long range dependence when using classical numerical techniques to solve such equations. The novelty of our method is that the numerical solution of the time-variable-order tsfrde is written in terms of a matrix function vector product at each time step. This product is approximated efficiently by the Lanczos method, which is a powerful iterative technique for approximating the action of a matrix function by projecting onto a Krylov subspace. Furthermore an adaptive preconditioner is constructed that dramatically reduces the size of the required Krylov subspaces and hence the overall computational cost. Numerical examples, including the variable-order fractional Fisher equation, are presented to demonstrate the accuracy and efficiency of the approach.
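The core idea above, approximating the action of a matrix function by a Lanczos projection onto a Krylov subspace, can be sketched for a generic symmetric matrix. The example below uses exp(-A)v on a standard 1-D Laplacian; it is an illustration of the Lanczos technique only, not the paper's variable-order fractional operator or its adaptive preconditioner, and it assumes NumPy is available:

```python
import numpy as np

def lanczos_expm_action(A, v, m=20):
    """Approximate exp(-A) @ v by projecting onto an m-dimensional Krylov
    subspace with the Lanczos iteration (A must be symmetric)."""
    n = len(v)
    m = min(m, n)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        # Full reorthogonalization for numerical stability
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:            # invariant subspace found: truncate
                m = j + 1
                V, alpha, beta = V[:, :m], alpha[:m], beta[:m - 1]
                break
            V[:, j + 1] = w / beta[j]
    # Evaluate f on the small tridiagonal projection T, then lift back
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (np.exp(-evals) * evecs[0])  # exp(-T) @ e1
    return np.linalg.norm(v) * (V @ fT_e1)

# 1-D diffusion operator (standard Laplacian stencil) as a symmetric test matrix
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
v = np.ones(n)
approx = lanczos_expm_action(A, v, m=30)
```

Only matrix-vector products with A are needed, which is what makes the approach attractive for large nonlocal (dense) fractional operators; preconditioning, as in the paper, further shrinks the required subspace.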
Abstract:
The scheduling of locomotive movements on cane railways has proven to be a very complex task. Various optimisation methods have been used over the years to try to produce an optimised schedule that eliminates or minimises bin supply delays to harvesters and the factory, while minimising the number of locomotives, locomotive shifts and cane bins, and also the cane age. This paper reports on a new attempt to develop an automatic scheduler using a mathematical model solved using mixed integer programming and constraint programming approaches and blocking parallel job shop scheduling fundamentals. The model solution has been explored using conventional constraint programming search techniques and found to produce a reasonable schedule for small-scale problems with up to nine harvesters. While more effort is required to complete the development of the full model with metaheuristic search techniques, the work completed to date gives confidence that the metaheuristic techniques will provide near optimal solutions in reasonable time.
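The objective described above, sequencing locomotive trips to minimise bin-supply delays, can be illustrated with a deliberately tiny toy instance solved by exhaustive search. All numbers below are hypothetical; the real model is far richer (shifts, sidings, bin fleets, cane age) and is solved with mixed integer and constraint programming rather than enumeration:

```python
from itertools import permutations

# Toy instance: one locomotive serves three harvesters back-to-back.
# Each trip has a round-trip travel time; each harvester has a due time
# by which empty bins must arrive.
travel = {"A": 3, "B": 2, "C": 4}   # round-trip time per harvester (hours)
due    = {"A": 4, "B": 6, "C": 9}   # bin due time per harvester (hours)

def total_delay(order):
    """Sum of bin-supply delays when trips run consecutively in this order."""
    t, delay = 0, 0
    for h in order:
        t += travel[h]               # locomotive arrives at harvester h
        delay += max(0, t - due[h])  # lateness past the due time, if any
    return delay

# Exhaustive search over all trip orders (feasible only for toy sizes)
best = min(permutations(travel), key=total_delay)
```

Real cane-railway instances make this enumeration infeasible, which is exactly why MIP, constraint programming and metaheuristic searches are used instead.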
Abstract:
In this paper we extend the ideas of Brugnano, Iavernaro and Trigiante in their development of HBVM($s,r$) methods to construct symplectic Runge-Kutta methods for all values of $s$ and $r$ with $s\geq r$. However, these methods do not see the dramatic performance improvement that HBVMs can attain. Nevertheless, in the case of additive stochastic Hamiltonian problems an extension of these ideas, which requires the simulation of an independent Wiener process at each stage of a Runge-Kutta method, leads to methods that have very favourable properties. These ideas are illustrated by some simple numerical tests for the modified midpoint rule.
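The midpoint rule mentioned above is, in its implicit form, the simplest symplectic Runge-Kutta method. A deterministic sketch on the harmonic oscillator shows its characteristic exact conservation of quadratic invariants; the stochastic, stage-wise Wiener-process extension from the paper is not reproduced here:

```python
def implicit_midpoint_step(q, p, h, grad_q, grad_p, iters=50):
    """One step of the implicit midpoint rule for a Hamiltonian system
    q' = dH/dp, p' = -dH/dq.  The midpoint state (qm, pm) satisfies
    qm = q + (h/2) dH/dp(qm, pm), pm = p - (h/2) dH/dq(qm, pm),
    solved here by fixed-point iteration."""
    qm, pm = q, p
    for _ in range(iters):
        qm = q + 0.5 * h * grad_p(qm, pm)
        pm = p - 0.5 * h * grad_q(qm, pm)
    return 2 * qm - q, 2 * pm - p   # full step: y_{n+1} = 2*y_mid - y_n

# Harmonic oscillator H(q, p) = (q**2 + p**2) / 2
grad_q = lambda q, p: q   # dH/dq
grad_p = lambda q, p: p   # dH/dp
q, p, h = 1.0, 0.0, 0.1
for _ in range(1000):
    q, p = implicit_midpoint_step(q, p, h, grad_q, grad_p)
energy = 0.5 * (q * q + p * p)   # remains ~0.5 (the invariant) up to solver tolerance
```

An explicit method such as forward Euler would show the energy drifting steadily over the same 1000 steps; this long-time behaviour is what makes symplectic methods attractive for Hamiltonian problems.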
Abstract:
Sorghum (Sorghum bicolor (L.) Moench) is the world’s fifth major cereal crop and holds importance as a construction material, food and fodder source. More recently, the potential of this plant as a biofuel source has been noted. Despite its agronomic importance, sorghum production is constrained by both biotic and abiotic factors. These challenges could be addressed by the use of genetic engineering strategies to complement conventional breeding techniques. However, sorghum is one of the most recalcitrant crops for genetic modification, with the lack of an efficient tissue culture system being amongst the chief reasons. Therefore, the aim of this study was to develop an efficient tissue culture system for establishing regenerable embryogenic cell lines, micropropagation and acclimatisation for Sorghum bicolor, and to use this to optimise parameters for genetic transformation via Agrobacterium-mediated transformation and microprojectile bombardment. Using five different sorghum cultivars, SA281, 296B, SC49, Wray and Rio, numerous parameters were investigated in an attempt to establish an efficient and reproducible tissue culture and transformation system. Using immature embryos (IEs) as explants, regenerable embryogenic cell lines (ECLs) could only be established from cultivars SA281 and 296B. Large amounts of phenolics were produced from the IEs of cultivars SC49, Wray and Rio, and these compounds severely hindered callus formation and development. Cultivar SA281 also produced phenolics during regeneration. Attempts to suppress the production of these compounds in cultivars SA281 and SC49 using activated charcoal, PVP, ascorbic acid, citric acid and liquid filter paper bridge methods were either ineffective or had a detrimental effect on embryogenic callus formation, development and regeneration. Immature embryos sourced during summer were found to be far more responsive in vitro than those sourced during winter.
In an attempt to overcome this problem, IEs were sourced from sorghum grown under summer conditions in either a temperature-controlled glasshouse or a growth chamber. However, the performance of these explants was still inferior to that of naturally summer-sourced explants. Leaf whorls, mature embryos, shoot tips and leaf primordia were found to be unsuitable as explants for establishing ECLs in sorghum cultivars SA281 and 296B. Using the florets of immature inflorescences (IFs) as explants, however, ECLs were established and regenerated for these cultivars, as well as for cultivar Tx430, using the callus induction medium SCIM and the regeneration medium VWRM. The best in vitro responses from the largest usable IFs were obtained using plants at the FL-2 stage (where the last fully opened leaf was two leaves away from the flag leaf). Immature inflorescences could be stored at 25°C for up to three days without affecting their in vitro responses. Compared to IEs, the IFs were more robust in tissue culture and showed responses that were independent of season and growth conditions. A micropropagation protocol for sorghum was developed in this study. The optimum plant growth regulator (PGR) combination for the micropropagation of in vitro regenerated plantlets was found to be 1.0 mg/L BAP in combination with 0.5 mg/L NAA. With this protocol, cultivars 296B and SA281 produced an average of 57 and 13 off-shoots per plantlet, respectively. The plantlets were successfully acclimatised and developed into phenotypically normal plants that set seed. A simplified acclimatisation protocol for in vitro regenerated plantlets was also developed. This protocol involved deflasking in vitro plantlets with at least 2 fully opened healthy leaves and at least 3 roots longer than 1.5 cm, washing the media from the roots with running tap water, planting in 100 mm pots, and placing these in plastic trays covered with a clear plastic bag in a plant growth chamber.
After seven days, the corners of the plastic cover were opened, and the bags were completely removed after 10 days. All plantlets were successfully acclimatised regardless of whether 1:1 perlite:potting mix, potting mix, UC mix or vermiculite was used as the potting substrate. Parameters were optimised for Agrobacterium-mediated transformation (AMT) of cultivars SA281, 296B and Tx430. The optimal conditions were the use of Agrobacterium strain LBA4404 at an inoculum density of 0.5 OD600nm, heat shock at 43°C for 3 min, use of the surfactant Pluronic F-68 (0.02% w/v) in the inoculation media at pH 5.2, and a 3 day co-cultivation period in the dark at 22°C. Using these parameters, high frequencies of transient GFP expression were observed in IEs precultured on callus initiation media for 1–7 days as well as in four-week-old IE- and IF-derived callus. Cultivar SA281 appeared very sensitive to Agrobacterium, since all tissue turned necrotic within two weeks post-exposure. For cultivar 296B, GFP expression was observed up to 20 days post co-cultivation, but no stably transformed plants were regenerated. Using cultivar Tx430, GFP was expressed for up to 50 days post co-cultivation. Although no stably transformed plants of this cultivar were regenerated, this was most likely due to the use of unsuitable regeneration media. Parameters were also optimised for transformation by particle bombardment (PB) of cultivars SA281, 296B and Tx430. The optimal conditions were the use of 3–7-day-old IEs and 4-week-old IF callus, a 4 hour pre- and post-bombardment osmoticum treatment, the use of 0.6 µm gold microparticles, a helium pressure of 1500 kPa and a target distance of 15 cm. Using these parameters for PB, transient GFP expression was observed for up to 14, 30 and 50 days for cultivars SA281, 296B and Tx430, respectively. Further, the use of PB resulted in less tissue necrosis compared to AMT for the respective cultivars.
Despite the presence of transient GFP expression, no stably transformed plants were regenerated. The establishment of regenerable ECLs and the optimization of AMT and PB parameters in this study provides a platform for future efforts to develop an efficient transformation protocol for sorghum. The development of GM sorghum will be an important step towards improving its agronomic properties as well as its exploitation for biofuel production.
Abstract:
In this study we have found that the NMR detectability of 39K in rat thigh muscle may be substantially higher (up to 100% of total tissue potassium) than previously reported values of around 40%. The signal was found to consist of two superimposed components, one broad and one narrow, of approximately equal area. Investigations involving improvements in spectral parameters such as signal-to-noise ratio and baseline roll, together with computer simulations of spectra, show that the quality of the spectra has a major effect on the amount of signal detected, which is largely due to the loss of detectability of the broad signal component. In particular, lower-field spectrometers using conventional probes and detection methods generally have poorer signal-to-noise ratios and worse baseline roll artifacts, which make detection of a broad component of the muscle signal difficult.
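Why a broad component is easily lost can be illustrated with a toy model of two equal-area Lorentzian lines: restricting the analysis to a narrow spectral window (as baseline-roll correction effectively does) captures almost all of the narrow line but only a fraction of the broad one. All line widths and the frequency axis below are illustrative, not fitted 39K values:

```python
import math

def lorentzian(x, x0, fwhm):
    """Unit-area Lorentzian line centred at x0 with the given FWHM."""
    hwhm = fwhm / 2
    return (hwhm / math.pi) / ((x - x0) ** 2 + hwhm ** 2)

# Frequency axis and two equal-area components, one narrow and one broad
dx = 0.5
xs = [i * dx for i in range(-2000, 2001)]
narrow = [lorentzian(x, 0, 10) for x in xs]    # narrow component (FWHM 10)
broad  = [lorentzian(x, 0, 300) for x in xs]   # broad component (FWHM 300)

def captured_fraction(ys, cut):
    """Fraction of a component's total area lying within |x| <= cut."""
    inside = sum(y for x, y in zip(xs, ys) if abs(x) <= cut) * dx
    return inside / (sum(ys) * dx)

narrow_frac = captured_fraction(narrow, 50)
broad_frac = captured_fraction(broad, 50)
```

In this toy model most of the narrow line survives the windowing while the majority of the broad line's area merges into the baseline, so the apparent detectability drops toward the roughly 50% contributed by the narrow component alone.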