966 results for Fixed Block Size Transform Coding
Abstract:
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration an impractical solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
Abstract:
Assaying a large number of genetic markers from patients in clinical trials is now possible in order to tailor drugs with respect to efficacy. The statistical methodology for analysing such massive data sets is challenging. The most popular type of statistical analysis is to use a univariate test for each genetic marker, once all the data from a clinical study have been collected. This paper presents a sequential method for conducting an omnibus test for detecting gene-drug interactions across the genome, thus allowing informed decisions at the earliest opportunity and overcoming the multiple testing problems from conducting many univariate tests. We first propose an omnibus test for a fixed sample size. This test is based on combining F-statistics that test for an interaction between treatment and the individual single nucleotide polymorphism (SNP). As SNPs tend to be correlated, we use permutations to calculate a global p-value. We extend our omnibus test to the sequential case. In order to control the type I error rate, we propose a sequential method that uses permutations to obtain the stopping boundaries. The results of a simulation study show that the sequential permutation method is more powerful than alternative sequential methods that control the type I error rate, such as the inverse-normal method. The proposed method is flexible as we do not need to assume a mode of inheritance and can also adjust for confounding factors. An application to real clinical data illustrates that the method is computationally feasible for a large number of SNPs. Copyright (c) 2007 John Wiley & Sons, Ltd.
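The permutation approach described above can be sketched as follows. This is a simplified toy interaction score, not the paper's actual combination of F-statistics: a combined score is computed once on the observed data, then recomputed under shuffled treatment labels, and the global p-value is the fraction of permuted scores at least as large as the observed one.

```python
import random

def interaction_stat(y, treat, snp):
    """Crude treatment-by-genotype interaction score for one SNP:
    spread of the per-genotype treatment effects (a stand-in for an
    F-statistic)."""
    groups = {}
    for yi, ti, gi in zip(y, treat, snp):
        groups.setdefault(gi, {0: [], 1: []})[ti].append(yi)
    effects = []
    for g in groups.values():
        if g[0] and g[1]:
            effects.append(sum(g[1]) / len(g[1]) - sum(g[0]) / len(g[0]))
    if len(effects) < 2:
        return 0.0
    mean = sum(effects) / len(effects)
    return sum((e - mean) ** 2 for e in effects)

def omnibus_pvalue(y, treat, snps, n_perm=499, seed=7):
    """Global permutation p-value: sum the per-SNP scores, then compare
    against the same sum under shuffled treatment labels."""
    rng = random.Random(seed)
    observed = sum(interaction_stat(y, treat, s) for s in snps)
    hits = 0
    for _ in range(n_perm):
        perm = treat[:]
        rng.shuffle(perm)
        if sum(interaction_stat(y, perm, s) for s in snps) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Because only the treatment labels are permuted, the correlation structure among SNPs is preserved in every permutation, which is what lets a single global p-value absorb the multiplicity across markers.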
Abstract:
A study or experiment can be described as sequential if its design includes one or more interim analyses at which it is possible to stop the study, having reached a definitive conclusion concerning the primary question of interest. The potential of the sequential study to terminate earlier than the equivalent fixed sample size study means that, typically, there are ethical and economic advantages to be gained from using a sequential design. These advantages have secured a place for the methodology in the conduct of many clinical trials of novel therapies. Recently, there has been increasing interest in pharmacogenetics: the study of how DNA variation in the human genome affects the safety and efficacy of drugs. The potential for using sequential methodology in pharmacogenetic studies is considered and the conduct of candidate gene association studies, family-based designs and genome-wide association studies within the sequential setting is explored. The objective is to provide a unified framework for the conduct of these types of studies as sequential designs and hence allow experimenters to consider using sequential methodology in their future pharmacogenetic studies.
Abstract:
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources.
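The two directions of the calculation above (sample size for a fixed effect size, and detectable effect size for a fixed sample size) can be sketched with the standard two-sample normal approximation. This is a generic textbook formulation, not the voxel-based method of the paper:

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(effect, alpha=0.05, power=0.8):
    """Subjects per arm to detect a standardized effect size (Cohen's d)
    in a two-sample comparison of means, normal approximation."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * ((za + zb) / effect) ** 2)

def detectable_effect(n, alpha=0.05, power=0.8):
    """Smallest standardized effect detectable with n subjects per arm."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sqrt(2 / n)
```

The two functions are inverses of each other up to rounding, which is exactly the trade-off the technical note explores when modifying acquisition or registration choices.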
Abstract:
Background Appropriately conducted adaptive designs (ADs) offer many potential advantages over conventional trials. They make better use of accruing data, potentially saving time, trial participants, and limited resources compared to conventional, fixed sample size designs. However, one can argue that ADs are not implemented as often as they should be, particularly in publicly funded confirmatory trials. This study explored barriers, concerns, and potential facilitators to the appropriate use of ADs in confirmatory trials among key stakeholders. Methods We conducted three cross-sectional, online parallel surveys between November 2014 and January 2015. The surveys were based upon findings drawn from in-depth interviews of key research stakeholders, predominantly in the UK, and targeted Clinical Trials Units (CTUs), public funders, and private sector organisations. Response rates were as follows: 30 (55%) of UK CTUs, 17 (68%) of private sector organisations, and 86 (41%) of public funders. A Rating Scale Model was used to rank barriers and concerns in order of perceived importance for prioritisation. Results Top-ranked barriers included the lack of bridge funding accessible to UK CTUs to support the design of ADs, limited practical implementation knowledge, preference for traditional mainstream designs, difficulties in marketing ADs to key stakeholders, time constraints to support ADs relative to competing priorities, lack of applied training, and insufficient access to case studies of undertaken ADs to facilitate practical learning and successful implementation. Associated practical complexities and inadequate data management infrastructure to support ADs were reported as more pronounced in the private sector. For funders of public research, inadequate descriptions by researchers of the rationale, scope, and decision-making criteria to guide the planned AD in grant proposals were all viewed as major obstacles.
Conclusions There are still persistent and important perceptions of individual and organisational obstacles hampering the use of ADs in confirmatory trials research. Stakeholder perceptions about barriers are largely consistent across sectors, with a few exceptions that reflect differences in organisations’ funding structures, experiences and characterisation of study interventions. Most barriers appear connected to a lack of practical implementation knowledge and applied training, and limited access to case studies to facilitate practical learning. Keywords: Adaptive designs; flexible designs; barriers; surveys; confirmatory trials; Phase 3; clinical trials; early stopping; interim analyses
Abstract:
Background: The need for multiple clinical visits remains a barrier to women accessing safe legal medical abortion services. Alternatives to routine clinic follow-up visits have not been assessed in rural low-resource settings. We compared the effectiveness of standard clinic follow-up versus home assessment of outcome of medical abortion in a low-resource setting. Methods: This randomised, controlled, non-inferiority trial was done in six health centres (three rural, three urban) in Rajasthan, India. Women seeking early medical abortion up to 9 weeks of gestation were randomly assigned (1:1) to either routine clinic follow-up or self-assessment at home. Randomisation was done with a computer-generated randomisation sequence, with a block size of six. The study was not blinded. Women in the home-assessment group were advised to use a pictorial instruction sheet and take a low-sensitivity urine pregnancy test at home, 10-14 days after intake of mifepristone, and were contacted by a home visit or telephone call to record the outcome of the abortion. The primary (non-inferiority) outcome was complete abortion without continuing pregnancy or need for surgical evacuation or additional mifepristone and misoprostol. The non-inferiority margin for the risk difference was 5%. All participants with a reported primary outcome and who followed the clinical protocol were included in the analysis. This study is registered with ClinicalTrials.gov, number NCT01827995. Findings: Between April 23, 2013, and May 15, 2014, 731 women were recruited and assigned to clinic follow-up (n=366) or home assessment (n=365), of whom 700 were analysed for the main outcomes (n=336 and n=364, respectively). Complete abortion without continuing pregnancy, surgical intervention, or additional mifepristone and misoprostol was reported in 313 (93%) of 336 women in the clinic follow-up group and 347 (95%) of 364 women in the home-assessment group (difference -2.2%, 95% CI -5.9 to 1.6). 
One case of haemorrhage occurred in each group (rate of adverse events 0.3% in each group); no other adverse events were noted. Interpretation: Home assessment of medical abortion outcome with a low-sensitivity urine pregnancy test is non-inferior to clinic follow-up, and could be introduced instead of a clinic follow-up visit in a low-resource setting.
Abstract:
A total of 30 animals of the species D. prymnolopha (N=20), D. leporina (N=6), D. fuliginosa (N=1), and Dasyprocta sp. (N=3) (Dasyproctidae, Histricognathi) were studied cytogenetically. Chromosome preparations were obtained from peripheral blood cultures, and additionally from bone marrow and spleen in D. prymnolopha and D. leporina. The diploid number was 64/65 in all specimens. The karyotypes were similar, and no chromosomal polymorphism was detected in any of the species studied by conventional Giemsa staining or G-banding. The distribution of constitutive heterochromatin in the pericentromeric region of all chromosomes was similar in the four species. D. prymnolopha, D. leporina, and Dasyprocta sp. showed variation in the size of the heterochromatic block in one of the homologues of pair A18. In D. fuliginosa, the heterochromatin was uniformly distributed over all chromosomes. There was no variation in the NOR pattern among the species studied.
Abstract:
In the seed production system, genetic purity is one of the fundamental requirements for commercialization. The present work had the goal of determining the sample size for genetic purity evaluation, in order to protect both the seed consumer and the producer, and of evaluating the sensitivity of the microsatellite technique for discriminating hybrids from their respective parental lines and for detecting mixtures present in small amounts in the samples. For the sequential sampling, hybrid seeds were marked and mixed into the seed lots, simulating the following levels of contamination: 0.25, 0.5, 1.0, 2.0, 4.0, and 6.0%. After this, groups of 40 seeds were taken in sequence, up to a maximum of 400 seeds, with the objective of determining the quantity of seeds necessary to detect the percentages of mixture mentioned above. The sensitivity of the microsatellite technique was evaluated by mixing different proportions of DNA from the hybrids with that of their respective lines. When the level of mixture was higher than 1:8 (1P1:8P2; 8P1:1P2), the sensitivity of the marker in detecting different proportions of the mixture varied according to the primer used. In terms of the sequential sampling, it was verified that, in order to detect mixture levels higher than 1% within the seed lot (with a risk level of 0.05 for both the producer and the consumer), the necessary sample was smaller than that required by the fixed sample size approach. This also reduced costs, making it feasible to use microsatellites to certify the genetic purity of corn seed lots.
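The fixed-sample-size benchmark that the sequential scheme is compared against can be illustrated with the standard binomial detection model (an assumed textbook formulation, not the paper's sequential boundaries): the probability of finding at least one off-type seed among n tested from a lot contaminated at rate p is 1 - (1 - p)^n.

```python
from math import log, ceil

def seeds_needed(contamination, consumer_risk=0.05):
    """Seeds to test so that a lot contaminated at the given rate yields
    at least one off-type with probability 1 - consumer_risk."""
    return ceil(log(consumer_risk) / log(1 - contamination))

def detection_prob(contamination, n):
    """Probability of finding at least one off-type seed among n tested."""
    return 1 - (1 - contamination) ** n
```

Under this model, about 299 seeds suffice at 1% contamination with a 5% consumer risk, whereas the lowest simulated level (0.25%) would require far more than the 400-seed maximum, which is broadly consistent with the authors' finding that only levels above 1% are reliably detected within the sampling plan.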
Abstract:
Multi-input multi-output (MIMO) technology is an emerging solution for high data rate wireless communications. We develop soft-decision based equalization techniques for frequency selective MIMO channels in the quest for low-complexity equalizers with BER performance competitive with that of ML sequence detection. We first propose soft decision equalization (SDE), and demonstrate that decision feedback equalization (DFE) based on soft decisions, expressed via the posterior probabilities associated with feedback symbols, is able to outperform hard-decision DFE, with a low computational cost that is polynomial in the number of symbols to be recovered, and linear in the signal constellation size. Building upon the probabilistic data association (PDA) multiuser detector, we present two new MIMO equalization solutions to handle the distinctive channel memory. With their low complexity, simple implementations, and impressive near-optimum performance offered by iterative soft-decision processing, the proposed SDE methods are attractive candidates to deliver efficient reception solutions to practical high-capacity MIMO systems. Motivated by the need for low-complexity receiver processing, we further present an alternative low-complexity soft-decision equalization approach for frequency selective MIMO communication systems. With the help of iterative processing, two detection and estimation schemes based on second-order statistics are harmoniously put together to yield a two-part receiver structure: local multiuser detection (MUD) using soft-decision Probabilistic Data Association (PDA) detection, and dynamic noise-interference tracking using Kalman filtering. The proposed Kalman-PDA detector performs local MUD within a sub-block of the received data instead of over the entire data set, to reduce the computational load.
At the same time, all the interference affecting the local sub-block, including both multiple-access and inter-symbol interference, is properly modeled as the state vector of a linear system and dynamically tracked by Kalman filtering. Two types of Kalman filters are designed, both of which are able to track a finite impulse response (FIR) MIMO channel of any memory length. The overall algorithms enjoy low complexity that is only polynomial in the number of information-bearing bits to be detected, regardless of the data block size. Furthermore, we introduce two optional performance-enhancing techniques: cross-layer automatic repeat request (ARQ) for uncoded systems and a code-aided method for coded systems. We take Kalman-PDA as an example, and show via simulations that both techniques can render error performance that is better than Kalman-PDA alone and competitive with sphere decoding. Finally, we consider the case in which channel state information (CSI) is not perfectly known to the receiver, and present an iterative channel estimation algorithm. Simulations show that the performance of SDE with channel estimation approaches that of SDE with perfect CSI.
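The dynamic tracking component can be sketched in its simplest scalar form (a minimal illustration, not the paper's vector-valued Kalman-PDA design), where a slowly varying interference term is modeled as a random-walk state observed in noise:

```python
def kalman_1d(measurements, a=1.0, q=0.01, r=0.25):
    """Scalar Kalman filter for the state model x_{k+1} = a*x_k + w_k
    (process noise variance q) observed as z_k = x_k + v_k (noise
    variance r); a stand-in for tracking an interference term across
    sub-blocks."""
    x, p = 0.0, 1.0              # initial state estimate and variance
    estimates = []
    for z in measurements:
        x, p = a * x, a * a * p + q            # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p    # measurement update
        estimates.append(x)
    return estimates
```

The per-sample cost is constant, which is what keeps the overall receiver complexity polynomial in the number of detected bits rather than growing with the block size.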
Abstract:
Multi-parametric and quantitative magnetic resonance imaging (MRI) techniques have come into focus, both as research and diagnostic modalities for the evaluation of patients suffering from mild cognitive decline and overt dementia. In this study we address the question of whether disease-related quantitative magnetization transfer (qMT) effects within the intra- and extracellular matrices of the hippocampus may aid in the differentiation between clinically diagnosed patients with Alzheimer's disease (AD), patients with mild cognitive impairment (MCI), and healthy controls. We evaluated 22 patients with AD (n=12) and MCI (n=10) and 22 healthy elderly (n=12) and younger (n=10) controls with multi-parametric MRI. Neuropsychological testing was performed in patients and elderly controls (n=34). In order to quantify the qMT effects, the absorption spectrum was sampled at relevant off-resonance frequencies. The qMT parameters were calculated according to a two-pool spin-bath model including the T1 and T2 relaxation parameters of the free pool, determined in separate experiments. Histograms (fixed bin size) of the normalized qMT parameter values (z-scores) within the anterior and posterior hippocampus (hippocampal head and body) were subjected to a fuzzy c-means classification algorithm with downstream PCA projection. The within-cluster sums of point-to-centroid distances were used to examine the effects of qMT and diffusion anisotropy parameters on the discrimination of healthy volunteers, patients with AD, and patients with MCI. The qMT parameters T2(r) (T2 of the restricted pool) and F (fractional pool size) differentiated between the three groups (control, MCI and AD) in the anterior hippocampus. In our cohort, the MT ratio, as proposed in previous reports, did not differentiate between MCI and AD or healthy controls and MCI, but between healthy controls and AD.
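The classification step can be sketched with a minimal one-dimensional fuzzy c-means (an illustrative stand-in only; the paper's pipeline operates on histogram z-scores with a downstream PCA projection). Each point receives a soft membership in every cluster, and centroids are membership-weighted means:

```python
def fuzzy_cmeans_1d(xs, k=2, m=2.0, iters=60):
    """Minimal 1-D fuzzy c-means: returns (centers, memberships).
    m > 1 is the fuzzifier; initialization is deterministic, spread
    over the sorted data."""
    s = sorted(xs)
    centers = [s[round(j * (len(s) - 1) / (k - 1))] for j in range(k)]
    u = [[0.0] * k for _ in xs]
    for _ in range(iters):
        # update memberships from distances to the current centers
        for i, x in enumerate(xs):
            d = [abs(x - c) + 1e-12 for c in centers]
            for j in range(k):
                u[i][j] = 1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                                    for l in range(k))
        # update centers as membership-weighted means
        for j in range(k):
            wsum = sum(u[i][j] ** m for i in range(len(xs)))
            centers[j] = sum((u[i][j] ** m) * x
                             for i, x in enumerate(xs)) / wsum
    return centers, u
```

Unlike hard k-means, the membership matrix u carries gradual cluster assignments, which is what makes within-cluster point-to-centroid distances a usable measure of group separation.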
Abstract:
BACKGROUND Giant cell arteritis is an immune-mediated disease of medium and large-sized arteries that affects mostly people older than 50 years of age. Treatment with glucocorticoids is the gold-standard and prevents severe vascular complications but is associated with substantial morbidity and mortality. Tocilizumab, a humanised monoclonal antibody against the interleukin-6 receptor, has been associated with rapid induction and maintenance of remission in patients with giant cell arteritis. We therefore aimed to study the efficacy and safety of tocilizumab in the first randomised clinical trial in patients with newly diagnosed or recurrent giant cell arteritis. METHODS In this single centre, phase 2, randomised, double-blind, placebo-controlled trial, we recruited patients aged 50 years and older from University Hospital Bern, Switzerland, who met the 1990 American College of Rheumatology criteria for giant cell arteritis. Patients with new-onset or relapsing disease were randomly assigned (2:1) to receive either tocilizumab (8 mg/kg) or placebo intravenously. 13 infusions were given in 4 week intervals until week 52. Both groups received oral prednisolone, starting at 1 mg/kg per day and tapered down to 0 mg according to a standard reduction scheme defined in the study protocol. Allocation to treatment groups was done using a central computerised randomisation procedure with a permuted block design and a block size of three, and concealed using central randomisation generated by the clinical trials unit. Patients, investigators, and study personnel were masked to treatment assignment. The primary outcome was the proportion of patients who achieved complete remission of disease at a prednisolone dose of 0·1 mg/kg per day at week 12. All analyses were intention to treat. This trial is registered with ClinicalTrials.gov, number NCT01450137. 
RESULTS Between March 3, 2012, and Sept 9, 2014, 20 patients were randomly assigned to receive tocilizumab and prednisolone, and ten patients to receive placebo and glucocorticoid; 16 (80%) and seven (70%) patients, respectively, had new-onset giant cell arteritis. 17 (85%) of 20 patients given tocilizumab and four (40%) of ten patients given placebo reached complete remission by week 12 (risk difference 45%, 95% CI 11-79; p=0·0301). Relapse-free survival was achieved in 17 (85%) patients in the tocilizumab group and two (20%) in the placebo group by week 52 (risk difference 65%, 95% CI 36-94; p=0·0010). The mean survival-time difference to stop glucocorticoids was 12 weeks in favour of tocilizumab (95% CI 7-17; p<0·0001), leading to a cumulative prednisolone dose of 43 mg/kg in the tocilizumab group versus 110 mg/kg in the placebo group (p=0·0005) after 52 weeks. Seven (35%) patients in the tocilizumab group and five (50%) in the placebo group had serious adverse events. INTERPRETATION Our findings show, for the first time in a trial setting, the efficacy of tocilizumab in the induction and maintenance of remission in patients with giant cell arteritis. FUNDING Roche and the University of Bern.
Abstract:
The reliability of a bidirectional communication link can be guaranteed with an Automatic Repeat Request (ARQ) procedure. The standard STANAG 5066 describes the ARQ procedure for HF communications, which can either be applied to existing HF physical-layer modems or adapted to future physical-layer designs. In this contribution, the physical-layer parameters of an HF modem (HFDVL), developed by the authors over the last decade, are chosen to optimize the performance of the ARQ procedure described in STANAG 5066. Besides the interleaving length, constellation size, and coding type, the OFDM-based HFDVL modem permits selection of the number of receiver antennas. It is shown that this parameter gives additional degrees of freedom and permits reliable communication over low-SNR HF links.
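The kind of trade-off being optimized can be illustrated with an idealized ARQ goodput model (independent bit errors and error-free feedback assumed; this is a generic sketch, not the actual STANAG 5066 procedure): a frame is retransmitted until it arrives error-free, so the useful rate scales with the probability that a frame survives.

```python
def arq_throughput(bit_rate, payload_bits, overhead_bits, ber):
    """Idealized ARQ goodput: frame error rate from independent bit
    errors, useful rate reduced by header overhead and retransmissions
    (expected transmissions per frame = 1 / (1 - FER))."""
    frame_bits = payload_bits + overhead_bits
    fer = 1.0 - (1.0 - ber) ** frame_bits        # frame error rate
    return bit_rate * (payload_bits / frame_bits) * (1.0 - fer)
```

The model already exposes the frame-size tension that parameter optimization must resolve: longer frames amortize overhead but fail more often at a given BER, so the optimum moves with link quality.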
Abstract:
Three novel families of transposable elements, Wukong, Wujin, and Wuneng, are described in the yellow fever mosquito, Aedes aegypti. Their copy numbers range from 2,100 to 3,000 per haploid genome. There are high degrees of sequence similarity within each family, and many structural but not sequence similarities between families. The common structural characteristics include small size, no coding potential, terminal inverted repeats, potential to form a stable secondary structure, A+T richness, and putative 2- to 4-bp A+T-biased specific target sites. Evidence of previous mobility is presented for the Wukong elements. Elements of these three families are associated with 7 of 16 fully or partially sequenced Ae. aegypti genes. Characteristics of these mosquito elements indicate strong similarities to the miniature inverted-repeat transposable elements (MITEs) recently found to be associated with plant genes. MITE-like elements have also been reported in two species of Xenopus and in Homo sapiens. This characterization of multiple families of highly repetitive MITE-like elements in an invertebrate extends the range of these elements in eukaryotic genomes. A hypothesis is presented relating genome size and organization to the presence of highly reiterated MITE families. The association of MITE-like elements with Ae. aegypti genes shows the same bias toward noncoding regions as in plants. This association has potentially important implications for the evolution of gene regulation.
Abstract:
It is shown that there exists a triangle decomposition of the graph obtained from the complete graph of order v by removing the edges of two vertex-disjoint complete subgraphs of orders u and w if and only if u, w, and v are odd, $\binom{v}{2} - \binom{u}{2} - \binom{w}{2} \equiv 0 \pmod{3}$, and $v \geq w + u + \max\{u, w\}$. Such decompositions are equivalent to group divisible designs with block size 3, one group of size u, one group of size w, and v - u - w groups of size 1. This result settles the existence problem for Steiner triple systems having two disjoint specified subsystems, thereby generalizing the well-known theorem of Doyen and Wilson on the existence of Steiner triple systems with a single specified subsystem. (c) 2005 Wiley Periodicals, Inc.
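The stated conditions are easy to check computationally; a direct translation of the parity, edge-count, and size conditions exactly as given might look like:

```python
from math import comb

def triangle_decomposable(v, u, w):
    """Check the necessary-and-sufficient conditions for decomposing
    K_v minus two vertex-disjoint copies of K_u and K_w into triangles."""
    all_odd = v % 2 == 1 and u % 2 == 1 and w % 2 == 1
    edge_count_ok = (comb(v, 2) - comb(u, 2) - comb(w, 2)) % 3 == 0
    size_ok = v >= u + w + max(u, w)
    return all_odd and edge_count_ok and size_ok
```

The edge-count condition is simply that the number of remaining edges is divisible by 3, since each triangle uses exactly three edges; in design-theoretic terms, a True result certifies the corresponding group divisible design with block size 3.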
Abstract:
This paper outlines the methodology of blast fragmentation modeling undertaken for a greenfield feasibility study at the Riska gold deposit in Indonesia. The favoured milling process for the feasibility study was dump leaching, with no crushing of the ore material extracted from the pit. For this reason, blast fragmentation was a critical issue to be addressed by the study. A range of blast designs were considered, with bench heights and blasthole diameters ranging from 4 m to 7 m and 76 mm to 102 mm, respectively. Rock mass data were obtained from 19 diamond drill cores across the deposit (total drill length approximately 2200 m). Intact rock strength was estimated from qualitative strength descriptors, while the in situ block size distribution of the rock mass was estimated from the Rock Quality Designation (RQD) of the core.
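The RQD referred to above has a simple standard definition (due to Deere): the percentage of a core run made up of intact pieces at least 10 cm long. A minimal sketch of the computation from logged piece lengths:

```python
def rqd(piece_lengths_cm, run_length_cm):
    """Rock Quality Designation: percentage of the core run length
    consisting of intact pieces >= 10 cm long."""
    sound = sum(p for p in piece_lengths_cm if p >= 10.0)
    return 100.0 * sound / run_length_cm
```

Higher RQD indicates fewer closely spaced fractures, hence larger in-situ blocks, which is the link exploited when RQD is used as a proxy for the block size distribution feeding the fragmentation model.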