911 results for IEEE 802.15.4
Abstract:
Abstract based on that of the publication
Abstract:
Monograph with the title: "La formación del profesorado: una perspectiva múltiple"
Abstract:
There is increasing interest in combining Phases II and III of clinical development into a single trial in which one of a small number of competing experimental treatments is ultimately selected and where a valid comparison is made between this treatment and the control treatment. Such a trial usually proceeds in stages, with the least promising experimental treatments dropped as soon as possible. In this paper we present a highly flexible design that uses adaptive group sequential methodology to monitor an order statistic. By using this approach, it is possible to design a trial which can have any number of stages, begins with any number of experimental treatments, and permits any number of these to continue at any stage. The test statistic used is based upon efficient scores, so the method can be easily applied to binary, ordinal, failure time, or normally distributed outcomes. The method is illustrated with an example, and simulations are conducted to investigate its type I error rate and power under a range of scenarios.
Abstract:
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment, must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. 
Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
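As a rough illustration of how a group-sequential stopping rule preserves the overall type I error rate, the Monte Carlo sketch below simulates a two-stage, one-sided trial with a constant (Pocock-type) boundary. The sample sizes, boundary value, and effect scale are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_trial(delta, n_per_stage=50, crit=2.178):
    """Simulate one two-stage group-sequential trial with a constant
    (Pocock-type) boundary, calibrated for two looks at a one-sided
    alpha of about 2.5%. Returns True if the null is rejected."""
    # Per-subject treatment-minus-control differences, unit variance
    x = rng.normal(delta, 1.0, 2 * n_per_stage)
    for look in (1, 2):
        n = look * n_per_stage
        z = x[:n].mean() * np.sqrt(n)  # z-statistic on all accumulated data
        if z > crit:
            return True  # stop early (look 1) or reject at the final look
    return False

# Under the null (delta = 0) the rejection rate stays near the nominal level
type1 = np.mean([two_stage_trial(0.0) for _ in range(20000)])
```

Because the second look reuses the first-stage data, the two z-statistics are correlated, which is exactly what the boundary constant accounts for.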
Abstract:
Fitness of hybrids between genetically modified (GM) crops and wild relatives influences the likelihood of ecological harm. We measured fitness components in spontaneous (non-GM) rapeseed x Brassica rapa hybrids in natural populations. The F1 hybrids yielded 46.9% of the seed output of B. rapa, were 16.9% as effective as males on B. rapa and exhibited increased self-pollination. Assuming 100% GM rapeseed cultivation, we conservatively predict < 7000 second-generation transgenic hybrids annually in the United Kingdom (i.e. approximately 20% of F1 hybrids). Conversely, whilst reduced hybrid fitness improves the feasibility of bio-containment, stage projection matrices suggest broad scope for some transgenes to offset this effect by enhancing fitness.
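The stage projection matrices mentioned above are commonly summarized by their dominant eigenvalue, the asymptotic population growth rate λ. A minimal sketch with purely illustrative vital rates (none taken from the study):

```python
import numpy as np

# Hypothetical 3-stage projection matrix (seed -> rosette -> flowering);
# all vital rates below are illustrative, not values from the study.
A = np.array([
    [0.0, 0.0, 50.0],  # fecundity: seeds per flowering plant
    [0.1, 0.0, 0.0],   # seed -> rosette transition probability
    [0.0, 0.3, 0.0],   # rosette -> flowering transition probability
])

# The dominant eigenvalue is the asymptotic population growth rate lambda
lam = max(np.linalg.eigvals(A).real)
# lam > 1: the lineage grows; a fitness-enhancing transgene that raises
# any entry of A raises lam, which is how enhanced fitness can offset
# the low fitness of the F1 hybrids.
```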
Abstract:
Staff reform can mean improved outcomes, but only if school managers plan carefully. Brian Fidler and Tessa Atton introduce a major new series for MST on workforce remodelling. This is the first of four articles studying workforce remodelling and teaching and learning responsibilities (TLRs).
Abstract:
Aim: Previous systematic reviews have found that drug-related morbidity accounts for 4.3% of preventable hospital admissions. None, however, has identified the drugs most commonly responsible for preventable hospital admissions. The aims of this study were to estimate the percentage of preventable drug-related hospital admissions, the most common drug causes of preventable hospital admissions and the most common underlying causes of preventable drug-related admissions. Methods: Bibliographic databases and reference lists from eligible articles and study authors were the sources for data. Seventeen prospective observational studies reporting the proportion of preventable drug-related hospital admissions, causative drugs and/or the underlying causes of hospital admissions were selected. Included studies used multiple reviewers and/or explicit criteria to assess causality and preventability of hospital admissions. Two investigators abstracted data from all included studies using a purpose-made data extraction form. Results: From 13 papers the median percentage of preventable drug-related admissions to hospital was 3.7% (range 1.4-15.4). From nine papers the majority (51%) of preventable drug-related admissions involved either antiplatelets (16%), diuretics (16%), nonsteroidal anti-inflammatory drugs (11%) or anticoagulants (8%). From five studies the median proportion of preventable drug-related admissions associated with prescribing problems was 30.6% (range 11.1-41.8), with adherence problems 33.3% (range 20.9-41.7) and with monitoring problems 22.2% (range 0-31.3). Conclusions: Four groups of drugs account for more than 50% of the drug groups associated with preventable drug-related hospital admissions. Concentrating interventions on these drug groups could reduce appreciably the number of preventable drug-related admissions to hospital from primary care.
Abstract:
This paper analyzes the performance of Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under transmission errors. The idea of ErDCF is to use high data rate nodes to work as relays for the low data rate nodes. ErDCF achieves higher throughput and reduces energy consumption compared to IEEE 802.11 Distributed Coordination Function (DCF) in an ideal channel environment. However, there is a possibility that this expected gain may decrease in the presence of transmission errors. In this work, we modify the saturation throughput model of ErDCF to accurately reflect the impact of transmission errors under different rate combinations. It turns out that the throughput gain of ErDCF can still be maintained under reasonable link quality and distance.
Abstract:
In this paper we evaluate the performance of our earlier proposed enhanced relay-enabled distributed coordination function (ErDCF) for wireless ad hoc networks. The idea of ErDCF is to use high data rate nodes as relays for the low data rate nodes. ErDCF achieves higher throughput and reduced energy consumption compared to the IEEE 802.11 distributed coordination function (DCF). This is a result of: 1) using a relay, which increases throughput and lowers the overall blocking time of nodes due to faster dual-hop transmission; 2) using a dynamic preamble (i.e. a short preamble for the relay transmission), which further increases throughput and lowers blocking time; and 3) reducing unnecessary overhearing by other nodes not involved in the transmission. We evaluate the throughput and energy performance of ErDCF with different rate combinations. ErDCF (11,11) (i.e. R1 = R2 = 11 Mbps) yields a throughput improvement of 92.9% (at a packet length of 1000 bytes) and an energy saving of 72.2% at 50 nodes.
Abstract:
In this paper we propose an enhanced relay-enabled distributed coordination function (rDCF) for wireless ad hoc networks. The idea of rDCF is to use high data rate nodes as relays for the low data rate nodes. The relay helps to increase throughput and lower the overall blocking time of nodes due to faster dual-hop transmission. rDCF achieves higher throughput than the IEEE 802.11 distributed coordination function (DCF). The protocol is further enhanced for higher throughput and reduced energy consumption. These enhancements result from the use of a dynamic preamble (i.e. a short preamble for the relay transmission) and from reducing unnecessary overhearing by other nodes not involved in the transmission. We have modeled the energy consumption of rDCF, showing that rDCF provides an energy efficiency of 21.7% at 50 nodes over 802.11 DCF. Compared with the existing rDCF, the enhanced rDCF (ErDCF) scheme proposed in this paper yields a throughput improvement of 16.54% (at a packet length of 1000 bytes) and an energy saving of 53% at 50 nodes.
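The intuition behind relaying can be seen with a back-of-the-envelope calculation: a relayed packet occupies the channel once per hop, so the airtimes add and the effective rate is the harmonic combination of the hop rates. The sketch below ignores MAC overhead entirely and uses illustrative rates, not the papers' saturation-throughput model.

```python
def dual_hop_rate(r1_mbps, r2_mbps):
    """Effective rate of a two-hop relay path: the packet is transmitted
    once per hop, so the per-bit airtimes add and the rates combine
    harmonically. MAC overhead is ignored in this first-order sketch."""
    return 1.0 / (1.0 / r1_mbps + 1.0 / r2_mbps)

direct_mbps = 2.0                         # slow direct link (illustrative)
relayed_mbps = dual_hop_rate(11.0, 11.0)  # two fast hops: 5.5 Mbps effective
# Relaying pays off whenever the combined two-hop rate beats the direct rate
```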
Abstract:
This paper describes a method for reconstructing 3D frontier points, contour generators and surfaces of anatomical objects or smooth surfaces from a small number, e.g. 10, of conventional 2D X-ray images. The X-ray images are taken at different viewing directions with full prior knowledge of the X-ray source and sensor configurations. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled more densely where their curvature is high. The more complex the contour's shape, the more points are required, and the more points the proposed method automatically generates. Given that the number of viewing directions is fixed and the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or curvature of the surface, regardless of the size of the surface or of the object. The technique may be used not only in medicine but also in industrial applications.
Abstract:
This paper presents an approach for automatic classification of pulsed Terahertz (THz), or T-ray, signals highlighting their potential in biomedical, pharmaceutical and security applications. T-ray classification systems supply a wealth of information about test samples and make possible the discrimination of heterogeneous layers within an object. In this paper, a novel technique involving the use of Auto Regressive (AR) and Auto Regressive Moving Average (ARMA) models on the wavelet transforms of measured T-ray pulse data is presented. Two example applications are examined: the classification of normal human bone (NHB) osteoblasts against human osteosarcoma (HOS) cells and the identification of six different powder samples. A variety of model types and orders are used to generate descriptive features for subsequent classification. Wavelet-based de-noising with soft threshold shrinkage is applied to the measured T-ray signals prior to modeling. For classification, a simple Mahalanobis distance classifier is used. After feature extraction, classification accuracy for cancerous and normal cell types is 93%, whereas for powders, it is 98%.
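A schematic version of this feature-extraction and classification pipeline is sketched below, with least-squares AR fitting standing in for the AR/ARMA models and wavelet de-noising omitted; the function names and shapes are hypothetical, not the paper's implementation.

```python
import numpy as np

def ar_features(x, order=4):
    """Least-squares AR(p) coefficients of a 1-D signal, used as a
    compact feature vector (a simplified stand-in for the AR/ARMA
    model fits described in the abstract)."""
    # Regressor column k holds the signal delayed by k + 1 samples
    X = np.column_stack(
        [x[order - 1 - k : len(x) - 1 - k] for k in range(order)]
    )
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def mahalanobis_classify(f, class_means, cov_inv):
    """Assign feature vector f to the class whose mean is nearest in
    Mahalanobis distance (a shared inverse covariance is assumed)."""
    d = [float((f - m) @ cov_inv @ (f - m)) for m in class_means]
    return int(np.argmin(d))
```

In practice the class means and covariance would be estimated from labeled training pulses, one feature vector per measured signal.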
Abstract:
The production and release of dissolved organic carbon (DOC) from peat soils is thought to be sensitive to changes in climate, specifically changes in temperature and rainfall. However, little is known about the actual rates of net DOC production in response to temperature and water table draw-down, particularly in comparison to carbon dioxide (CO2) fluxes. To explore these relationships, we carried out a laboratory experiment on intact peat soil cores under controlled temperature and water table conditions to determine the impact and interaction of each of these climatic factors on net DOC production. We found a significant interaction (P < 0.001) between temperature, water table draw-down and net DOC production across the whole soil core (0 to −55 cm depth). This corresponded to an increase in the Q10 (i.e. rise in the rate of net DOC production over a 10 °C range) from 1.84 under high water tables and anaerobic conditions to 3.53 under water table draw-down and aerobic conditions between −10 and −40 cm depth. However, increases in net DOC production were only seen after water tables recovered to the surface, as secondary changes in soil water chemistry driven by sulphur redox reactions decreased DOC solubility, and therefore DOC concentrations, during periods of water table draw-down. Furthermore, net microbial consumption of DOC was also apparent at −1 cm depth and was an additional cause of declining DOC concentrations during dry periods. Therefore, although increased temperature and decreased rainfall could have a significant effect on net DOC release from peatlands, these climatic effects could be masked by other factors controlling the biological consumption of DOC in addition to soil water chemistry and DOC solubility.
These findings highlight both the sensitivity of DOC release from ombrotrophic peat to episodic changes in water table draw-down, and the need to disentangle complex and interacting controls on DOC dynamics to fully understand the impact of environmental change on this system.
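The Q10 coefficient used in the abstract above can be computed from rates measured at any two temperatures; a minimal helper, with purely illustrative values:

```python
def q10(rate_low_t, rate_high_t, t_low, t_high):
    """Q10 temperature coefficient: the factor by which a rate grows
    per 10 degC rise, inferred from rates at two temperatures."""
    return (rate_high_t / rate_low_t) ** (10.0 / (t_high - t_low))

# Illustrative check: a rate that doubles between 5 and 15 degC has Q10 = 2
doubling_q10 = q10(1.0, 2.0, 5.0, 15.0)
```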
Abstract:
Background: Insulin sensitivity (Si) is improved by weight loss and exercise, but the effects of the replacement of saturated fatty acids (SFAs) with monounsaturated fatty acids (MUFAs) or carbohydrates of high glycemic index (HGI) or low glycemic index (LGI) are uncertain. Objective: We conducted a dietary intervention trial to study these effects in participants at risk of developing metabolic syndrome. Design: We conducted a 5-center, parallel design, randomized controlled trial [RISCK (Reading, Imperial, Surrey, Cambridge, and Kings)]. The primary and secondary outcomes were changes in Si (measured by using an intravenous glucose tolerance test) and cardiovascular risk factors. Measurements were made after 4 wk of a high-SFA and HGI (HS/HGI) diet and after a 24-wk intervention with HS/HGI (reference), high-MUFA and HGI (HM/HGI), HM and LGI (HM/LGI), low-fat and HGI (LF/HGI), and LF and LGI (LF/LGI) diets. Results: We analyzed data for 548 of 720 participants who were randomly assigned to treatment. The median Si was 2.7 × 10−4 mL · μU−1 · min−1 (interquartile range: 2.0, 4.2 × 10−4 mL · μU−1 · min−1), and unadjusted mean percentage changes (95% CIs) after 24 wk treatment (P = 0.13) were as follows: for the HS/HGI group, −4% (−12.7%, 5.3%); for the HM/HGI group, 2.1% (−5.8%, 10.7%); for the HM/LGI group, −3.5% (−10.6%, 4.3%); for the LF/HGI group, −8.6% (−15.4%, −1.1%); and for the LF/LGI group, 9.9% (2.4%, 18.0%). Total cholesterol (TC), LDL cholesterol, and apolipoprotein B concentrations decreased with SFA reduction. Decreases in TC and LDL-cholesterol concentrations were greater with LGI. Fat reduction lowered HDL cholesterol and apolipoprotein A1 and B concentrations. Conclusions: This study did not support the hypothesis that isoenergetic replacement of SFAs with MUFAs or carbohydrates has a favorable effect on Si. 
Lowering GI enhanced reductions in TC and LDL-cholesterol concentrations in subjects, with tentative evidence of improvements in Si in the LF-treatment group. This trial was registered at clinicaltrials.gov as ISRCTN29111298.