Abstract:
This paper presents a methodology for determining the vertical hydraulic conductivity (Kv) of an aquitard, in a multilayered leaky system, based on the harmonic analysis of arbitrary water-level fluctuations in aquifers. As a result, Kv of the aquitard is expressed as a function of the phase shift of water-level signals measured in the two adjacent aquifers. Based on this expression, we propose a robust method to calculate Kv by employing linear regression analysis of logarithm-transformed frequencies and phases. The frequencies at which Kv is calculated are identified by coherence analysis. The proposed methods are validated by a synthetic case study and are then applied to the Westbourne and Birkhead aquitards, which form part of a five-layered leaky system in the Eromanga Basin, Australia.
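A minimal sketch of the computational steps described above, assuming regularly sampled head records from the two adjacent aquifers. The function names, the coherence threshold and the `nperseg` choice are illustrative assumptions; the expression converting the regression result into Kv is given in the paper and is not reproduced here.

```python
# Sketch: phase-shift spectrum, coherence-based frequency selection and
# the log-log regression step. Assumes h1, h2 are equally sampled water
# levels in the two adjacent aquifers (fs in samples per day).
import numpy as np
from scipy.signal import coherence, csd

def phase_shift_spectrum(h1, h2, fs=24.0, nperseg=1024):
    """Cross-spectral phase between the two aquifer head signals."""
    f, pxy = csd(h1, h2, fs=fs, nperseg=nperseg)
    return f[1:], np.angle(pxy)[1:]            # drop the zero frequency

def coherent_mask(h1, h2, fs=24.0, nperseg=1024, threshold=0.8):
    """Select frequencies by coherence analysis (threshold is assumed)."""
    _, cxy = coherence(h1, h2, fs=fs, nperseg=nperseg)
    return cxy[1:] >= threshold

def fit_log_log(f, phase, mask):
    """Linear regression of log-transformed phases on log frequencies."""
    x = np.log(f[mask])
    y = np.log(np.abs(phase[mask]) + 1e-12)    # guard against log(0)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept                    # feeds the paper's Kv expression
```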
Abstract:
The nature and characteristics of how learners learn today are changing. As technology use in learning and teaching continues to grow, its integration to facilitate deep learning and critical thinking becomes a primary consideration. The implications for learner use, implementation strategies, design of integration frameworks and evaluation of their effectiveness in learning environments cannot be overlooked. This study specifically looked at the impact that technology-enhanced learning environments have on different learners’ critical thinking in relation to eductive ability, technological self-efficacy, and approaches to learning and motivation in collaborative groups. These were explored within an instructional design framework called CoLeCTTE (collaborative learning and critical thinking in technology-enhanced environments), which was proposed, revised and used across three cases. The field of investigation was restricted to three key questions: 1) Do learner skill bases (learning approach and eductive ability) influence critical thinking within the proposed CoLeCTTE framework? If so, how? 2) Do learning technologies influence the facilitation of deep learning and critical thinking within the proposed CoLeCTTE framework? If so, how? 3) How might learning be designed to facilitate the acquisition of deep learning and critical thinking within a technology-enabled collaborative environment? The rationale, assumptions and method of research for using a mixed-method and naturalistic case study approach are discussed, and three cases are explored and analysed. The study was conducted at the tertiary level (undergraduate and postgraduate), where participants were engaged in critical technical discourse within their own disciplines. Group behaviour was observed and coded, attributes or skill bases were measured, and participants were interviewed to acquire deeper insights into their experiences. A progressive case study approach was used, allowing case investigation to be implemented in a "ladder-like" manner. Cases 1 and 2 used the proposed CoLeCTTE framework, with more in-depth analysis conducted for Case 2, resulting in a revision of the framework. Case 3 used the revised CoLeCTTE framework, and in-depth analysis was conducted; the findings led to the final version of the framework. In Cases 1, 2 and 3, content analysis of group work was conducted to determine critical thinking performance. Across the cases, the researcher used three small groups in which the learner skill bases of eductive ability, technological self-efficacy, and approaches to learning and motivation were measured. Participants in Cases 2 and 3 were interviewed, and observations provided more in-depth analysis. The main outcome of this study is an analysis of the nature of critical thinking within collaborative groups and technology-enhanced environments, positioned in a theoretical instructional design framework called CoLeCTTE. The findings of the study revealed the importance of the Achieving Motive dimension of a student’s learning approach and how direct intervention and strategies can positively influence critical thinking performance. The findings also identified factors that can adversely affect critical thinking performance, including poor learning skills; frustration, stress and poor self-confidence; prioritisation of other demands over learning; and inadequate appropriation of group roles and tasks.
These findings are set out as instructional design guidelines for the judicious integration of learning technologies into higher education learning and teaching practice, to support deep learning and critical thinking in collaborative groups. These guidelines are presented in two key areas: technology and tools; and activity design, monitoring, control and feedback.
Abstract:
Background: Medication remains the cornerstone treatment for mental illness. Cognition is one of the strongest predictors of non-adherence. The aim of this preliminary investigation was to examine the association between the Large Allen Cognitive Level Screen (LACLS) and medication adherence among a small sample of mental health service users, to determine whether the LACLS has potential as a screening tool for capacity to manage medication regimens. Method: Demographic and clinical information was collected from a small sample of people who had recently accessed community mental health services. Participants then completed the LACLS and the Medication Adherence Rating Scale (MARS) at a single time point. The strength of association between the LACLS and MARS was examined using Spearman rank-order correlation. Results: A strong positive correlation between the LACLS and medication adherence (r = 0.71, p = 0.01) was evident. No participants reported the use of medication aids despite evidence of impaired cognitive functioning. Conclusion: This investigation has provided the first empirical evidence indicating that the LACLS may have utility as a screening instrument for capacity to manage medication adherence among this population. While promising, this finding should be interpreted with caution given its preliminary nature.
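For readers unfamiliar with the statistic, the reported association can be computed as below. This is purely illustrative: the paired scores are invented placeholders, not the study's data.

```python
# Spearman rank-order correlation between LACLS and MARS scores.
# The arrays below are hypothetical paired observations.
from scipy.stats import spearmanr

lacls_scores = [4.2, 4.8, 5.0, 3.6, 4.4, 5.2, 3.8, 4.6]  # LACLS screen scores
mars_scores = [6, 8, 9, 4, 7, 9, 5, 8]                   # MARS adherence scores

rho, p_value = spearmanr(lacls_scores, mars_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```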
Abstract:
A high-performance liquid chromatography method coupled with solid-phase extraction was developed for the determination of isofraxidin in rat plasma after oral administration of Acanthopanax senticosus extract (ASE), and the pharmacokinetic parameters of isofraxidin, administered either in ASE or as the pure compound, were measured. The HPLC analysis was performed on a Dikma Diamonsil RP(18) column (4.6 mm x 150 mm, 5 microm) with isocratic elution of solvent A (acetonitrile) and solvent B (0.1% aqueous phosphoric acid, v/v) (A : B = 22 : 78), and the detection wavelength was set at 343 nm. The calibration curve was linear over the range of 0.156-15.625 microg/ml. The limit of detection was 60 ng/ml. The intra-day precision was 5.8%, and the inter-day precision was 6.0%. The recovery was 87.30+/-1.73%. When the dose of ASE was equal to that of the pure compound, calculated by the amount of isofraxidin, the extract produced two maximum concentrations in plasma, while the pure compound showed only one peak in the plasma concentration-time curve. The determined content of isofraxidin in plasma after oral administration of ASE is the total content of free isofraxidin and its precursors present in ASE in vitro. The pharmacokinetic characteristics of ASE showed the advantage of the extract and the properties of traditional Chinese medicine.
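The quantification rests on the linear calibration curve reported above. The sketch below shows that step only: the standard concentrations span the stated range, but the peak areas are invented for illustration.

```python
# Linear calibration over 0.156-15.625 microg/ml and back-calculation of
# plasma concentration from peak area. Peak areas are hypothetical.
import numpy as np

standards = np.array([0.156, 0.3125, 0.625, 1.25, 2.5, 5.0, 15.625])  # microg/ml
peak_areas = np.array([12.0, 24.5, 48.9, 99.1, 201.0, 398.5, 1251.3]) # detector response

slope, intercept = np.polyfit(standards, peak_areas, 1)

def concentration(area):
    """Back-calculate isofraxidin concentration (microg/ml) from peak area."""
    return (area - intercept) / slope
```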
Abstract:
Background: Random Breath Testing (RBT) is the main drink driving law enforcement tool used throughout Australia. International comparative research considers Australia to have the most successful RBT program compared to other countries in terms of crash reductions (Erke, Goldenbeld, & Vaa, 2009). This success is attributed to the program's high intensity (Erke et al., 2009). Our review of the extant literature suggests that there is no research evidence indicating an optimal level of alcohol breath testing. That is, no research exists to guide policy on whether there is a point at which alcohol-related crashes reach diminishing returns as a result of either saturated or targeted RBT testing. Aims: In this paper we first provide an examination of RBTs and alcohol-related crashes across Australian jurisdictions. We then address the question of whether an optimal level of random breath testing exists by examining the relationship between the number of RBTs conducted and the occurrence of alcohol-related crashes over time, across all Australian states. Method: To examine the association between RBT rates and alcohol-related crashes, and to assess whether an optimal ratio of RBT tests per licensed driver can be determined, we draw on three administrative data sources from each jurisdiction. Where possible, the data collected span 1 January 2000 to 30 September 2012. The RBT administrative dataset includes the number of Random Breath Tests (RBTs) conducted per month. The traffic crash administrative dataset contains aggregated monthly counts of traffic crashes in which an individual's recorded BAC reached or exceeded 0.05 g per 100 ml of blood. The licensed driver data were the monthly number of registered licensed drivers spanning January 2000 to December 2011. Results: The data highlight that the national Australian story is not reflective of all states and territories. The stable RBT-to-licensed-driver ratio in Queensland (of 1:1) is associated with a stable alcohol-related crash rate of 5.5 per 100,000 licensed drivers. Yet in South Australia, where a relatively stable RBT-to-licensed-driver ratio of 1:2 is maintained, the rate of alcohol-related traffic crashes is substantially lower at 3.7 per 100,000. We use joinpoint regression techniques and varying regression models to fit the data and compare the different patterns between jurisdictions. Discussion: The results of this study provide an updated review and evaluation of RBTs conducted in Australia and examine the association between RBTs and alcohol-related traffic crashes. We also present an evidence base to guide policy decisions for RBT operations.
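As a rough illustration of the joinpoint idea, the sketch below fits a single-breakpoint segmented regression to a monthly crash-rate series by brute-force search. Dedicated joinpoint software supports multiple joinpoints and proper inference; the variable names here are placeholders, not the study's data.

```python
# Single-joinpoint segmented regression: find the breakpoint and the
# per-segment slopes that minimise the total sum of squared errors.
import numpy as np

def one_joinpoint_fit(t, rate):
    """t: month index array; rate: crash rate per 100,000 licensed drivers."""
    best = (None, None, None, np.inf)
    for k in range(2, len(t) - 2):            # candidate joinpoints
        fit1 = np.polyfit(t[:k], rate[:k], 1)
        fit2 = np.polyfit(t[k:], rate[k:], 1)
        sse = (np.sum((np.polyval(fit1, t[:k]) - rate[:k]) ** 2)
               + np.sum((np.polyval(fit2, t[k:]) - rate[k:]) ** 2))
        if sse < best[3]:
            best = (t[k], fit1[0], fit2[0], sse)
    return best  # (joinpoint, slope before, slope after, SSE)
```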
Abstract:
Background: Hyperhomocysteinemia as a consequence of the MTHFR 677 C > T variant is associated with cardiovascular disease and stroke. Another factor that can potentially contribute to these disorders is a depleted nitric oxide level, which can be due to the presence of the eNOS +894 G > T and eNOS −786 T > C variants that make an individual more susceptible to endothelial dysfunction. A number of genotyping methods have been developed to investigate these variants. However, simultaneous detection methods using polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) analysis are still lacking. In this study, a novel multiplex PCR-RFLP method for the simultaneous detection of the MTHFR 677 C > T, eNOS +894 G > T and eNOS −786 T > C variants was developed. A total of 114 healthy Malay subjects were recruited. The MTHFR 677 C > T, eNOS +894 G > T and eNOS −786 T > C variants were genotyped using the novel multiplex PCR-RFLP and confirmed by DNA sequencing as well as snpBLAST. Allele frequencies of MTHFR 677 C > T, eNOS +894 G > T and eNOS −786 T > C were calculated using the Hardy-Weinberg equation. Methods: The 114 healthy volunteers were recruited for this study, and their DNA was extracted. Primer pairs were designed using Primer 3 Software version 0.4.0 and validated against the BLAST database. The primer specificity, functionality and annealing temperature were tested using uniplex PCR methods that were later combined into a single multiplex PCR. Restriction fragment length polymorphism (RFLP) analysis was performed in three separate tubes, followed by agarose gel electrophoresis. The PCR product residual was purified and sent for DNA sequencing. Results: The allele frequencies for MTHFR 677 C > T were 0.89 (C allele) and 0.11 (T allele); for eNOS +894 G > T, the allele frequencies were 0.58 (G allele) and 0.43 (T allele); and for eNOS −786 T > C, the allele frequencies were 0.87 (T allele) and 0.13 (C allele). Conclusions: Our PCR-RFLP method is a simple, cost-effective and time-saving method. It can be used to successfully genotype subjects for the MTHFR 677 C > T, eNOS +894 G > T and eNOS −786 T > C variants simultaneously, with 100% concordance with DNA sequencing data. This method can be routinely used for rapid investigation of these variants.
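The allele-frequency calculation itself is simple counting. The sketch below shows it with hypothetical genotype counts chosen to sum to 114 subjects and reproduce the reported MTHFR frequencies; the actual genotype counts are in the paper.

```python
# Allele frequencies from genotype counts (p + q = 1 under Hardy-Weinberg).
def allele_frequencies(n_AA, n_Aa, n_aa):
    """Return (p, q) for a biallelic locus from genotype counts."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # each homozygote carries two copies
    return p, 1 - p

# e.g. MTHFR 677 C>T with hypothetical counts summing to 114 subjects:
p_C, q_T = allele_frequencies(n_AA=90, n_Aa=23, n_aa=1)
print(f"C = {p_C:.2f}, T = {q_T:.2f}")   # approx. 0.89 / 0.11
```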
Abstract:
Tag recommendation is a specific recommendation task: recommending metadata (tags) for a web resource (item) during the user annotation process. In this context, the sparsity problem refers to situations where tags must be produced for items with few annotations or for users who tag few items. Most state-of-the-art approaches in tag recommendation are rarely evaluated, or perform poorly, in this situation. This paper presents a combined method for mitigating the sparsity problem in tag recommendation, mainly by expanding and ranking candidate tags based on similar items' tags and an existing tag ontology. We evaluated the approach on two public social bookmarking datasets. The experimental results show better recommendation accuracy in sparsity situations than several state-of-the-art methods.
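A simplified sketch of the combined expansion-and-ranking idea is given below. The data structures, the damping weight for ontology-expanded tags and the scoring rule are illustrative assumptions, not the paper's exact formulation.

```python
# Gather candidate tags from similar items, expand them through a tag
# ontology, then rank by aggregated similarity-weighted votes.
from collections import defaultdict

def recommend_tags(similar_items, item_tags, ontology, top_k=5):
    """similar_items: [(item_id, similarity)]; item_tags: id -> set of tags;
    ontology: tag -> related tags (e.g. broader/narrower terms)."""
    scores = defaultdict(float)
    for other, sim in similar_items:
        for tag in item_tags.get(other, ()):
            scores[tag] += sim                     # vote from a similar item
            for related in ontology.get(tag, ()):  # ontology expansion
                scores[related] += 0.5 * sim       # damped weight (assumed)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [tag for tag, _ in ranked[:top_k]]
```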
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms also predict positive future markets for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres like the Cloud. One of the challenges is providing management of Cloud resources for SaaS that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the changing environment of a Cloud, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level instead of the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method, while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produced a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependency between components, making solutions even more difficult to find.
A hybrid genetic algorithm (HGA) was developed to solve this problem, exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that the solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrated that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud, and various types of evolutionary algorithms have been developed to address them, contributing to the field of evolutionary computation. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
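To make the evolutionary approach concrete, the sketch below shows a bare-bones genetic algorithm for the placement problem, where a chromosome assigns each SaaS component to a server. The operators, rates and fitness interface are generic stand-ins, not the thesis's algorithms.

```python
# Minimal GA skeleton for component-to-server placement. fitness(placement)
# is assumed to return a cost (resource use plus constraint penalties);
# lower is better.
import random

def evolve(n_components, n_servers, fitness, pop_size=50, generations=200):
    pop = [[random.randrange(n_servers) for _ in range(n_components)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_components)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                 # mutation: move one component
                child[random.randrange(n_components)] = random.randrange(n_servers)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```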
Abstract:
The measurement of losses in high-efficiency / high-power converters is difficult. Measuring the losses directly from the difference between the input and output power results in large errors. Calorimetric methods are usually used to bypass this issue, but they introduce different problems, such as long measurement times, a limited power loss measurement range and/or large set-up costs. In this paper the total losses of a converter are measured directly and the switching losses are extracted. The measurements can be taken with only three multimeters, a current probe and a standard bench power supply. After acquiring two or three power loss versus output current sweeps, a series of curve-fitting processes are applied and the switching losses extracted.
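The extraction step can be illustrated as follows, under the common assumption that switching loss scales linearly with switching frequency while conduction loss does not. The model form and function names are assumptions for illustration, not the paper's exact procedure.

```python
# Fit total loss vs output current per sweep, then use two sweeps taken
# at different switching frequencies to isolate the switching component.
import numpy as np

def fit_losses(i_out, p_loss):
    """Fit P_loss = c2*I^2 + c1*I + c0 (conduction ~ I^2*R) to one sweep."""
    return np.polyfit(i_out, p_loss, 2)  # [c2, c1, c0]

def switching_losses(coeffs_f1, coeffs_f2, f1, f2, i_out):
    """Assume switching loss is proportional to switching frequency: the
    difference between two sweeps isolates the frequency-dependent term."""
    delta = np.polyval(np.subtract(coeffs_f2, coeffs_f1), i_out)
    return delta * f2 / (f2 - f1)   # switching loss at f2 over the sweep
```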
Abstract:
Aim: Collisions between trains and pedestrians are the most likely to result in severe injuries and fatalities when compared to other types of rail crossing accidents. Currently, there is a growing emphasis on developing effective interventions designed to reduce the prevalence of train–pedestrian collisions. This paper reviews what is currently known about the personal and environmental factors that contribute to train–pedestrian collisions, particularly among high-risk groups. Method: Studies that reported on the prevalence and characteristics of pedestrian accidents at railway crossings up until June 2012 were searched in electronic databases. Results: Males, school children and older pedestrians (and those with disabilities) are disproportionately represented in fatality databases. However, a main theme to emerge is that little is known about the origins of train–pedestrian collisions (especially compared to train–vehicle collisions), in particular whether collisions result from deliberate violations or from decisional errors. This limits the corresponding development of effective and targeted interventions for high-risk groups as well as crossing locations. Finally, it remains unclear what combination of surveillance-, deterrence- and education-based campaigns is required to produce lasting reductions in train–pedestrian fatality rates. This paper provides direction for future research into the personal and environmental origins of collisions, as well as the development of interventions that aim to attract pedestrians’ attention and ensure crossing rules are respected.
Abstract:
Every year a number of pedestrians are struck by trains, resulting in death and serious injury. While much research has been conducted on train-vehicle collisions, very little is currently known about the aetiology of train-pedestrian collisions. To date, scant research has been undertaken to investigate the demographics of rule breakers, the frequency of deliberate violations versus error making, and the influence of the classic deterrence approach on subsequent behaviours. Aim: This study aimed to identify pedestrians’ self-reported reasons for engaging in violations at crossings, the frequency and nature of rule breaking, and whether the threat of sanctions influences such events. Method: A questionnaire was administered to 511 participants of all ages. Results: Analysis revealed that pedestrians (particularly younger groups) were more likely to commit deliberate violations than to make crossing errors, e.g., mistakes. The most frequent reasons given for deliberate violations were that participants were running late and did not want to miss their train, or that participants believed the gate was taking too long to open and so might be malfunctioning. In regard to classical deterrence, an examination of the perceived threat of being apprehended and fined for a crossing violation revealed that participants reported the highest mean scores for swiftness of punishment, which suggests they were generally aware that they would receive an “on the spot” fine. However, the overall mean scores for certainty and severity of sanctions (for violating the rules) indicate that participants did not perceive the certainty and severity of sanctions as very high. This paper further discusses the research findings in regard to the development of interventions designed to improve pedestrian crossing safety.
Abstract:
Collisions between trains and cars at road/rail level crossings (LXs) can have severe consequences, such as high levels of fatalities, injuries and significant financial losses. As communication and positioning technologies have significantly advanced, implementing vehicular ad hoc networks (VANETs) in the vicinity of unmanned LXs, generally LXs without barriers, is seen as an efficient and effective approach to mitigate or even eliminate collisions without imposing huge infrastructure costs. VANETs necessitate unique communication strategies, in which routing protocols play a prominent part in their scalability and overall performance by finding optimised routes quickly and with low bandwidth overheads. This article studies a novel geo-multicast framework that incorporates a set of models for communication, message flow and geo-determination of endangered vehicles, together with a reliable receiver-based geo-multicast protocol, to support cooperative level crossings (CLXs), which provide collision warnings to endangered motorists facing road/rail LXs without barriers. This framework was designed and studied as part of a $5.5 m government- and industry-funded project entitled 'Intelligent-Transport-Systems to improve safety at road/rail crossings'. Combined simulation and experimental studies of the proposed geo-multicast framework have demonstrated promising outcomes, as cooperative awareness messages provide actionable critical information to endangered drivers who are identified by CLXs.
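As a flavour of the geo-determination step, the fragment below flags a vehicle as endangered when, at its current speed, it would reach the crossing within a warning horizon. The thresholds and the flat-earth distance approximation are assumptions for illustration, not the project's actual models.

```python
# Time-to-crossing test for identifying endangered vehicles near an LX.
import math

def endangered(vehicle_pos, vehicle_speed_ms, crossing_pos,
               horizon_s=30.0, approaching=True):
    """vehicle_pos / crossing_pos: (lat, lon) in degrees; speed in m/s.
    `approaching` is supplied by the caller from the vehicle's heading."""
    dlat = math.radians(crossing_pos[0] - vehicle_pos[0])
    dlon = math.radians(crossing_pos[1] - vehicle_pos[1])
    mean_lat = math.radians((crossing_pos[0] + vehicle_pos[0]) / 2)
    dist_m = 6371000.0 * math.hypot(dlat, dlon * math.cos(mean_lat))
    if not approaching or vehicle_speed_ms <= 0:
        return False
    return dist_m / vehicle_speed_ms <= horizon_s  # reaches LX within horizon
```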
Abstract:
An HPLC method with SPE has been developed for the analysis of constituents in rat blood after oral administration of the extract of Acanthopanax senticosus (ASE). The plasma sample was prepared by SPE using an Oasis HLB cartridge (3 cc, 60 mg). The analysis was performed on a Dikma Diamonsil RP(18) column (4.6 mm x 150 mm, 5 microm) with gradient elution of solvent A (ACN) and solvent B (0.1% aqueous phosphoric acid, v/v), and the detection wavelength was set at 270 nm. The calibration curve was linear over the range of 0.156-15.625 microg/mL. The LOD was 60 ng/mL. The intra-day precision was less than 5.8%, and the inter-day precision was less than 6.0%. The recovery was (87.30 +/- 1.73)%. As a result, 19 constituents were detected in rat plasma after oral administration of the ASE, including 11 original compounds from ASE and eight metabolites, three of which originated from syringin in ASE. Six constituents were identified by comparison with the corresponding reference compounds.
Abstract:
We compare the consistency of choices in two methods used to elicit risk preferences, on an aggregate as well as on an individual level. We ask subjects to choose twice from a list of nine decisions between two lotteries, as introduced by Holt and Laury (2002, 2005), alternating with nine decisions using the budget approach introduced by Andreoni and Harbaugh (2009). We find that, while on an aggregate (subject pool) level the results are consistent, on an individual (within-subject) level behaviour is far from consistent. Within each method, as well as across methods, we observe low (simple and rank) correlations.
Abstract:
Articular cartilage is a load-bearing tissue that consists of proteoglycan macromolecules entrapped between collagen fibrils in a three-dimensional architecture. To date, the search for mathematical models to represent the biomechanics of such a system continues without providing a fitting description of its functional response to load at the micro-scale level. We believe that the major complication arose when cartilage was first envisaged as a multiphasic model with distinguishable components, and that quantifying those components and searching for the laws that govern their interaction is inadequate. Central to the thesis of this paper, cartilage as a bulk is as much a continuum as is the response of its components to external stimuli. For this reason, we framed the fundamental question of what the mechano-structural functionality of such a system would be in the total absence of one of its key constituents: proteoglycans. To answer this, hydrated normal and proteoglycan-depleted samples were tested under confined compression, while finite element models were reproduced, for the first time, based on the structural microarchitecture of the cross-sectional profile of the matrices. These micro-porous in silico models served as virtual transducers, providing an internal, non-invasive probing mechanism beyond experimental capabilities to render the matrices' micromechanics and several other properties, such as permeability and orientation. The results demonstrated that load transfer was closely related to the microarchitecture of the hyperelastic models that represent solid skeleton stress and fluid response based on the state of the collagen network with and without the swollen proteoglycans. In other words, the stress gradient during deformation was a function of the structural pattern of the network and acted in concert with the position-dependent compositional state of the matrix. This reveals that the interaction between indistinguishable components in real cartilage is superimposed by its microarchitectural state, which directly influences macromechanical behavior.