945 results for Experimental testing
Abstract:
Background In the emergency department, portable point-of-care testing (POCT) coagulation devices may facilitate stroke patient care by providing rapid International Normalized Ratio (INR) measurement. The objective of this study was to evaluate the reliability, validity, and impact on clinical decision-making of a POCT device for INR testing in the setting of acute ischemic stroke (AIS). Methods A total of 150 patients (50 healthy volunteers, 51 anticoagulated patients, 49 AIS patients) were assessed in a tertiary care facility. INRs were measured using the Roche CoaguChek S and the standard laboratory technique. Results The intraclass correlation coefficient between overall POCT device and standard laboratory INRs was high (0.932; 95% confidence interval 0.69-0.78). In the AIS group alone, the correlation coefficient was also high (0.937; 95% CI 0.59-0.74), and the diagnostic accuracy of the POCT device was 94%. Conclusions When used by a trained health professional in the emergency department to assess INR in acute ischemic stroke patients, the CoaguChek S is reliable and provides rapid results. However, as concordance with laboratory INR values decreases at higher INR values, it is recommended that CoaguChek S INRs in the > 1.5 range be confirmed by a standard laboratory measurement.
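The agreement statistic reported above is an intraclass correlation. As a minimal, self-contained sketch (not the study's actual analysis, and using invented paired INR readings rather than patient data), a two-way random-effects, absolute-agreement ICC(2,1) between POCT and laboratory measurements could be computed as:

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement ICC(2,1).
    x: (n_subjects, 2) array of paired measurements (device, lab)."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-method means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # methods mean square
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # error mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented paired INR readings for illustration only.
rng = np.random.default_rng(0)
lab = rng.uniform(0.9, 3.5, size=50)           # laboratory INR
poct = lab + rng.normal(0.0, 0.1, size=50)     # POCT reading with small error
icc = icc_2_1(np.column_stack([poct, lab]))
```

A Bland-Altman plot of the paired differences would complement the ICC by showing whether disagreement grows at higher INR values, as the conclusion suggests.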
Abstract:
In this paper, a novel 2×2 multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) testbed based on an Analog Devices AD9361 highly integrated radio frequency (RF) agile transceiver was implemented specifically to estimate and analyze MIMO-OFDM channel capacity in vehicle-to-infrastructure (V2I) environments using the 920 MHz industrial, scientific, and medical (ISM) band. We implemented two-dimensional discrete cosine transform-based filtering to reduce channel estimation errors and show its effectiveness on our measurement results. We also analyzed the effects of channel estimation error on MIMO channel capacity by simulation. Three different subcarrier spacings were investigated, corresponding to the IEEE 802.11p, Long-Term Evolution (LTE), and Digital Video Broadcasting Terrestrial (DVB-T) (2k) standards. An extensive MIMO-OFDM V2I channel measurement campaign was performed in a suburban environment. Analysis of the measured MIMO channel capacity as a function of the transmitter-to-receiver (TX-RX) separation distance up to 250 m shows that, using a fixed receiver signal-to-noise ratio (SNR) criterion, the variance of the MIMO channel capacity is larger for the near-range line-of-sight (LOS) scenarios than for the long-range non-LOS cases. We observed that the largest capacity values were achieved under LOS propagation, despite the common assumption of a degenerate MIMO channel in LOS. We consider that this is due to the large angular spacing between MIMO subchannels which occurs when the receiver vehicle's rooftop antennas pass by the fixed transmitter antennas at close range, causing the MIMO subchannels to be orthogonal. In addition, analysis of the effects of different subcarrier spacings on MIMO-OFDM channel capacity showed negligible differences in mean channel capacity over the range investigated. The measured channels described in this paper are available on request.
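For context, the MIMO channel capacity analyzed above is conventionally computed per channel realization from the channel matrix H, assuming equal power allocation across transmit antennas (no channel knowledge at the transmitter). A minimal sketch using a synthetic 2×2 Rayleigh realization, not the measured data:

```python
import numpy as np

def mimo_capacity_bits(h, snr_linear):
    """Shannon capacity of one MIMO channel realization with equal power
    allocation over the Nt transmit antennas (no CSI at the transmitter):
        C = log2 det(I + (SNR / Nt) * H H^H)   [bit/s/Hz]"""
    nr, nt = h.shape
    gram = h @ h.conj().T
    return float(np.real(np.log2(np.linalg.det(np.eye(nr) + (snr_linear / nt) * gram))))

# Synthetic 2x2 Rayleigh-fading realization (illustration, not measured data).
rng = np.random.default_rng(1)
h = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2.0)
capacity = mimo_capacity_bits(h, 10 ** (10.0 / 10.0))  # at 10 dB receive SNR
```

An orthogonal channel matrix maximizes the determinant for a given channel power, which is the mechanism behind the large LOS capacities reported: for a 2×2 identity channel, C = 2 log2(1 + SNR/2).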
Abstract:
Increased permeability of blood vessels is an indicator of various injuries and diseases of the central nervous system, including multiple sclerosis (MS). Nanoparticles have the potential to deliver drugs locally to sites of tissue damage, reducing the amount of drug administered and limiting associated side effects, but efficient accumulation remains a challenge. We developed peptide-functionalized polymeric nanoparticles to target blood clots and the extracellular matrix molecule nidogen, both of which are associated with areas of tissue damage. Using the induction of experimental autoimmune encephalomyelitis in rats as a model of MS associated with tissue damage and blood vessel lesions, all targeted nanoparticles were delivered systemically. In vivo data demonstrate enhanced accumulation of peptide-functionalized nanoparticles at the injury site compared with scrambled and naive controls, particularly for nanoparticles functionalized to target fibrin clots. This suggests that further investigation of drug-laden, peptide-functionalized nanoparticles may be of particular interest in the development of treatment strategies for MS.
Abstract:
This article considers the origins and the development of the defence of experimental use in patent law - the 'freedom to tinker'. It explores the impact of such an exemption upon a number of important industries - such as agriculture, biotechnology, and pharmaceutical drugs. This article takes a comparative approach in its analysis of patent law and experimental use. It highlights the competing norms, and lack of harmonization, between a number of jurisdictions - including the United States, the European Union, and Australia. Section 2 provides a critique of the development of the common law defence of experimental use in the United States. It considers a series of precedents - including Roche Products Inc v Bolar Pharmaceuticals, Madey v Duke University, Integra Lifesciences I Ltd v Merck KGaA, and Applera v MJ Research. Section 3 explores the operation of patent law and experimental use in European jurisdictions. It looks at a number of significant precedents in the United Kingdom, the Netherlands, France, Italy, and Germany. Section 4 considers the policy debate in a number of forums over the defence of experimental use in Australia. It examines the controversy over Genetic Technologies Limited asking research organisations to obtain a licence in respect of its patents associated with non-coding DNA and genomic mapping. It also considers the inquiries of the Australian Law Reform Commission and the Advisory Council on Intellectual Property, as well as the impact of the TRIPS Agreement and the Australia-United States Free Trade Agreement. The conclusion contends that there is a need for a broad-based defence of experimental use for all the member states of the Organisation for Economic Co-operation and Development.
Abstract:
Mode indicator functions (MIFs) are used in modal testing and analysis as a means of identifying modes of vibration, often as a precursor to modal parameter estimation. Various methods have been developed since the MIF was introduced four decades ago. These methods are quite useful in assisting the analyst to identify genuine modes and, in the case of the complex mode indicator function, have even been developed into modal parameter estimation techniques. Although the various MIFs are able to indicate the existence of a mode, they do not provide the analyst with any descriptive information about the mode. This paper uses the simple summation type of MIF to develop five averaged and normalised MIFs that provide the analyst with enough information to identify whether a mode is longitudinal, vertical, lateral or torsional. The first three functions, termed directional MIFs, have been noted in the literature in one form or another; however, this paper adds a new twist by introducing two MIFs, termed torsional MIFs, that can be used by the analyst to identify torsional modes and, moreover, can assist in determining whether the mode is of a pure torsion or sway type (i.e., having a rigid cross-section) or a distorted twisting type. The directional and torsional MIFs are tested on a finite element model based simulation of an experimental modal test using an impact hammer. Results indicate that the directional and torsional MIFs are indeed useful in assisting the analyst to identify whether a mode is longitudinal, vertical, lateral, sway, or torsion.
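A simple summation MIF, and a normalised directional variant of the kind the paper builds on it, can be sketched as follows. This is an illustrative reconstruction under assumed definitions (sum of FRF magnitudes per frequency line, normalised by the subset of DOFs measuring one direction), not the authors' exact formulation; the 4-DOF FRF set is synthetic:

```python
import numpy as np

def summation_mif(frf):
    """Simple summation MIF: sum of FRF magnitudes over all measured
    response DOFs, one value per frequency line.
    frf: complex array of shape (n_dofs, n_freqs)."""
    return np.abs(frf).sum(axis=0)

def directional_mif(frf, direction_mask):
    """Normalised directional MIF: the fraction of the summation MIF
    contributed by the DOFs measuring one direction.  Values near 1 at a
    resonance suggest the mode is dominated by that direction."""
    total = summation_mif(frf)
    part = np.abs(frf[direction_mask]).sum(axis=0)
    return part / np.where(total > 0, total, 1.0)

# Synthetic 4-DOF "test": DOFs 0-1 vertical, DOFs 2-3 lateral, with one
# vertical mode at 10 Hz and one lateral mode at 30 Hz.
freqs = np.linspace(1.0, 50.0, 500)

def sdof(fn, zeta=0.02):
    r = freqs / fn
    return 1.0 / (1.0 - r**2 + 2j * zeta * r)

frf = np.vstack([sdof(10.0), sdof(10.0), sdof(30.0), sdof(30.0)])
vertical = np.array([True, True, False, False])
dmif = directional_mif(frf, vertical)
```

In this toy case the vertical directional MIF peaks near 1 at the 10 Hz resonance and falls towards 0 at the 30 Hz lateral resonance, which is the kind of descriptive information the paper's averaged, normalised MIFs are designed to supply.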
Abstract:
Background The aim of this study was to compare surface electromyographic (sEMG) recordings of maximum voluntary contraction (MVC), obtained by manual muscle testing (MMT), on dry land and in water. Method Sixteen healthy right-handed subjects (8 males and 8 females) participated in measurement of muscle activation of the right shoulder. The selected muscles were the cervical erector spinae, trapezius, pectoralis, anterior deltoid, middle deltoid, infraspinatus and latissimus dorsi. The order of the MVC test conditions (on land/in water) was randomized. Results For each muscle, the MVC test was performed and measured through sEMG to determine differences in muscle activation between the two conditions. For all muscles except the latissimus dorsi, no significant differences were observed between land and water MVC scores (p = 0.063–0.679), and good precision (%Diff = 7–10%) was observed between MVC conditions in the trapezius, anterior deltoid and middle deltoid. Conclusions If the procedure for data collection is optimal, comparable MVC sEMG values can be achieved under MMT conditions on land and in water, and the integrity of the EMG recordings is maintained during water immersion.
Abstract:
The phase relations have been investigated experimentally at 200 and 500 MPa as a function of water activity for one of the least evolved rhyolite compositions (Indian Batt Rhyolite) and for a more evolved composition (Cougar Point Tuff XV) from the 12·8-8·1 Ma Bruneau-Jarbidge eruptive center of the Yellowstone hotspot. Particular priority was given to accurate determination of the water content of the quenched glasses using infrared spectroscopic techniques. Comparison of the composition of natural and experimentally synthesized phases confirms that high temperatures (>900°C) and extremely low melt water contents (<1·5 wt % H₂O) are required to reproduce the natural mineral assemblages. In melts containing 0·5-1·5 wt % H₂O, the liquidus phase is clinopyroxene (excluding Fe-Ti oxides, which are strongly dependent on fO₂), and the liquidus temperature of the more evolved Cougar Point Tuff sample (BJR; 940-1000°C) is at least 30°C lower than that of the Indian Batt Rhyolite lava sample (IBR2; 970-1030°C). For the composition BJR, the comparison of the compositions of the natural and experimental glasses indicates a pre-eruptive temperature of at least 900°C. The composition of clinopyroxene and pigeonite pairs can be reproduced only for water contents below 1·5 wt % H₂O at 900°C, or lower water contents if the temperature is higher. For the composition IBR2, a minimum temperature of 920°C is necessary to reproduce the main phases at 200 and 500 MPa. At 200 MPa, the pre-eruptive water content of the melt is constrained in the range 0·7-1·3 wt % at 950°C and 0·3-1·0 wt % at 1000°C. At 500 MPa, the pre-eruptive temperatures are slightly higher (by 30-50°C) for the same ranges of water concentration. The experimental results are used to explore possible proxies to constrain the depth of magma storage. The crystallization sequence of tectosilicates is strongly dependent on pressure between 200 and 500 MPa.
In addition, the normative Qtz-Ab-Or contents of glasses quenched from melts coexisting with quartz, sanidine and plagioclase depend on pressure and melt water content, assuming that the normative Qtz and Ab/Or content of such melts is mainly dependent on pressure and water activity, respectively. The combination of results from the phase equilibria and from the composition of glasses indicates that the depth of magma storage for the IBR2 and BJR compositions may be in the range 300-400 MPa (13 km) and 200-300 MPa (10 km), respectively.
Abstract:
In the present study, items pre-exposed in a familiarization series were included in a list discrimination task to manipulate memory strength. At test, participants were required to discriminate strong targets and strong lures from weak targets and new lures. This resulted in a concordant pattern of increased "old" responses to strong targets and lures. Model estimates attributed this pattern to either equivalent increases in memory strength across the two types of items (unequal variance signal detection model) or equivalent increases in both familiarity and recollection (dual process signal detection [DPSD] model). Hippocampal activity associated with strong targets and lures showed equivalent increases compared with missed items. This remained the case when analyses were restricted to high-confidence responses considered by the DPSD model to reflect predominantly recollection. A similar pattern of activity was observed in parahippocampal cortex for high-confidence responses. The present results are incompatible with "noncriterial" or "false" recollection being reflected solely in inflated DPSD familiarity estimates and support a positive correlation between hippocampal activity and memory strength irrespective of the accuracy of list discrimination, consistent with the unequal variance signal detection model account.
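The unequal-variance signal detection model invoked above can be illustrated with a toy parameterization (the parameter values here are invented, not fitted to the study's data): new items draw memory strength from a standard normal distribution, studied items from a normal with mean d′ and standard deviation σ > 1, and strengthening items raises d′, increasing "old" responses.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def uvsd_rates(d_prime, sigma_old, criterion):
    """Unequal-variance signal detection: new items ~ N(0, 1), studied
    items ~ N(d_prime, sigma_old).  Returns (hit rate, false-alarm rate)
    for 'old' responses to strengths above `criterion`."""
    hit = 1.0 - phi((criterion - d_prime) / sigma_old)
    false_alarm = 1.0 - phi(criterion)
    return hit, false_alarm

# Invented parameters: greater memory strength (larger d') yields more
# "old" responses to the strengthened items at a fixed criterion.
weak = uvsd_rates(1.0, 1.25, 0.5)
strong = uvsd_rates(1.5, 1.25, 0.5)
```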
Abstract:
In this paper, we use an experimental design to compare the performance of elicitation rules for subjective beliefs. In contrast to previous work, in which elicited beliefs are compared to an objective benchmark, we consider a purely subjective belief framework (confidence in one's own performance in a cognitive task and a perceptual task). The performance of the different elicitation rules is assessed by the accuracy of stated beliefs in predicting success, measured along two main dimensions: calibration and discrimination. For each, we propose two statistical indexes and compare the rules' performance on each measure. The matching probability method provides more accurate beliefs in terms of discrimination, the quadratic scoring rule reduces overconfidence, and the free rule, a simple rule with no incentives, succeeds in eliciting accurate beliefs. Overall, the matching probability method appears to be the best mechanism for eliciting beliefs, owing to its performance in terms of calibration and discrimination, its ability to elicit consistent beliefs across measures and across tasks, and its empirical and theoretical properties.
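Calibration and discrimination, as used above, are commonly operationalised via the Murphy decomposition of the Brier (quadratic) score; the binned version below is an illustrative sketch of that standard decomposition, not necessarily the indexes the authors propose, and the stated beliefs and outcomes are invented:

```python
import numpy as np

def murphy_decomposition(beliefs, outcomes, n_bins=10):
    """Murphy decomposition of the Brier (quadratic) score:
        Brier ~ calibration - discrimination + base-rate variance
    Calibration (reliability): squared gap between stated belief and the
    observed success rate within each belief bin (lower is better).
    Discrimination (resolution): spread of the bins' success rates around
    the overall base rate (higher is better)."""
    beliefs = np.asarray(beliefs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    base = outcomes.mean()
    bins = np.minimum((beliefs * n_bins).astype(int), n_bins - 1)
    calibration = discrimination = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            w = mask.mean()
            calibration += w * (beliefs[mask].mean() - outcomes[mask].mean()) ** 2
            discrimination += w * (outcomes[mask].mean() - base) ** 2
    brier = float(np.mean((beliefs - outcomes) ** 2))
    return brier, calibration, discrimination

# Invented stated beliefs (probability of success) and realized outcomes.
stated = np.array([0.9, 0.8, 0.6, 0.7, 0.3, 0.4, 0.2, 0.9])
success = np.array([1, 1, 1, 0, 0, 1, 0, 1])
brier, calibration, discrimination = murphy_decomposition(stated, success)
```

A perfectly calibrated, perfectly discriminating respondent (beliefs equal to outcomes) attains zero Brier score and zero calibration penalty, with discrimination equal to the base-rate variance.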
Abstract:
The present study was designed to examine the main and interactive effects of task demands, work control, and task information on levels of adjustment. Task demands, work control, and task information were manipulated in an experimental setting where participants completed a letter-sorting activity (N = 128). Indicators of adjustment included measures of positive mood, participants' perceptions of task performance, and task satisfaction. Results of the present study provided some support for the main effects of objective task demands, work control, and task information on levels of adjustment. At the subjective level of analysis, there was some evidence to suggest that work control and task information interacted in their effects on levels of adjustment. There was minimal support for the proposal that work control and task information would buffer the negative effects of task demands on adjustment. There was, however, some evidence to suggest that the stress-buffering role of subjective work control was more marked at high, rather than low, levels of subjective task information.
Abstract:
This paper provides an important and timely overview of a conceptual framework designed to assist with the development and evaluation of message content for persuasive health messages. While an earlier version of this framework was presented in a prior publication by the authors in 2009, important refinements in recent years have seen the framework evolve, warranting an updated review. This paper outlines the Step approach to Message Design and Testing (SatMDT), setting out the theoretical evidence that underpins each of the framework's steps and the empirical evidence that demonstrates their relevance and feasibility. The development and testing of the framework have thus far been based exclusively in the road safety advertising context; however, the view expressed herein is that the framework may have broader appeal and application in the health persuasion context.
Abstract:
This thesis explored the impact of non-contractual agreements on economic decisions. These statements of intent serve as a commitment device in strategic decisions and have been found to be an effective alternative to strong regulation in promoting social behaviour. Three studies were undertaken using conceptual and methodological approaches from Behavioral and Experimental Economics. The first study explored, in a public good setting, the effect of public statements about intended social behaviour. The second study tested whether promises can help to promote co-operation in environments with uncertain choice options. The third study investigated a possible application of statements of intent by testing the effect of payment promises in a tax setting.
Abstract:
To this point, the collection has provided research-based, empirical accounts of the various and multiple effects of the National Assessment Program – Literacy and Numeracy (NAPLAN) in Australian schooling as a specific example of the global phenomenon of national testing. In this chapter, we want to develop a more theoretical analysis of national testing systems, globalising education policy and the promise of national testing as adaptive, online tests. These future moves claim to provide faster feedback and more useful diagnostic help for teachers. There is a utopian testing dream that one day adaptive, online tests will be responsive in real time, providing integrated, personalised testing, pedagogy and intervention for each student. The moves towards these next-generation assessments are well advanced, including the work of Pearson's NextGen Learning and Assessment research group, the Organisation for Economic Co-operation and Development's (OECD) move into assessing affective skills and the Australian Curriculum, Assessment and Reporting Authority's (ACARA) decision to phase in NAPLAN as an online, adaptive test from 2017...
Abstract:
Introduction This book examines a pressing educational issue: the global phenomenon of national testing in schooling and its vernacular development in Australia. The Australian National Assessment Program – Literacy and Numeracy (NAPLAN), introduced in 2008, involves annual census testing of students in Years 3, 5, 7 and 9 in nearly all Australian schools. In a variety of ways, NAPLAN affects the lives of Australia’s 3.5 million school students and their families, as well as more than 350,000 school staff and many other stakeholders in education. This book is organised in relation to a simple question: What are the effects of national testing for systems, schools and individuals? Of course, this simple question requires complex answers. The chapters in this edited collection consider issues relating to national testing policy, the construction of the test, usages of the testing data and various effects of testing in systems, schools and classrooms. Each chapter examines an aspect of national testing in Australia using evidence drawn from research. The final chapter by the editors of this collection provides a broader reflection on this phenomenon and situates developments in testing globally...
Abstract:
Since 2008, Australian schoolchildren in Years 3, 5, 7 and 9 have sat a series of tests each May designed to assess their attainment of basic skills in literacy and numeracy. These tests are known as the National Assessment Program – Literacy and Numeracy (NAPLAN). In 2010, individual school NAPLAN data were first published on the MySchool website, which enables comparisons to be made between individual schools and statistically similar schools across Australia. NAPLAN represents the increased centrality of the federal government in education, particularly in regard to education policy. One effect of this has been a recasting of education as an economic, rather than a democratic, good. As Reid (2009) suggests, this recasting of education within national productivity agendas mobilises commonsense discourses of accountability and transparency. These are common articles of faith for many involved in education administration and bureaucracy: more and better data, and holding people to account for that data, must improve education...