995 results for per-survivor processing


Relevance:

100.00%

Publisher:

Abstract:

This letter addresses the issue of joint space-time trellis decoding and channel estimation in time-varying fading channels that are spatially and temporally correlated. A recursive space-time receiver which incorporates per-survivor processing (PSP) and Kalman filtering into the Viterbi algorithm is proposed. This approach generalizes existing work to the correlated fading channel case. The channel time-evolution is modeled by a multichannel autoregressive process, and a bank of Kalman filters is used to track the channel variations. Computer simulation results show that a performance close to the maximum likelihood receiver with perfect channel state information (CSI) can be obtained. The effects of the spatial correlation on the performance of a receiver that assumes independent fading channels are examined.
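
As a rough illustration of the per-survivor idea (not the letter's exact receiver), the toy Python sketch below attaches one scalar Kalman channel estimator to each surviving path of a Viterbi search over BPSK symbols: every branch is scored by its Kalman innovation, and only the winning branch's estimate is kept for each new survivor. The single-tap AR(1) channel, noise levels and alphabet are illustrative assumptions; the letter's receiver operates on a space-time trellis with a bank of multichannel Kalman filters over spatially correlated channels.

import numpy as np

# Toy per-survivor processing (PSP) sketch: BPSK over a single-tap AR(1) fading
# channel, with one scalar Kalman filter attached to each surviving path.
# All numbers below (AR coefficient, noise variances, block length) are illustrative.
rng = np.random.default_rng(0)
a, q, r, T = 0.99, 1e-3, 1e-2, 200           # AR(1) coefficient, noise variances, length
h = np.empty(T, dtype=complex); h[0] = 1.0    # true fading channel
for t in range(1, T):
    h[t] = a * h[t-1] + np.sqrt(q / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
syms = rng.choice([-1.0, 1.0], T)             # transmitted BPSK symbols
y = h * syms + np.sqrt(r / 2) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))

S_alpha = [-1.0, 1.0]                         # trellis states = hypothesized current symbol
metric = np.zeros(2)                          # accumulated path metrics
h_hat = np.ones(2, dtype=complex)             # per-survivor channel estimates
P = np.ones(2)                                # per-survivor error covariances
paths = [[], []]

for t in range(T):
    nm, nh, nP, npaths = np.empty(2), np.empty(2, dtype=complex), np.empty(2), [None, None]
    for j, s in enumerate(S_alpha):
        # extend every survivor with symbol s, using that survivor's own channel estimate
        cand = []
        for i in range(2):
            hp, Pp = a * h_hat[i], a * a * P[i] + q          # Kalman prediction
            innov, Sv = y[t] - hp * s, Pp + r                # innovation and its variance (|s|^2 = 1)
            cand.append((metric[i] + abs(innov) ** 2 / Sv, i, hp, Pp, innov, Sv))
        m, i, hp, Pp, innov, Sv = min(cand, key=lambda c: c[0])   # Viterbi add-compare-select
        K = Pp * s / Sv                                      # Kalman gain for the winning branch
        nh[j], nP[j] = hp + K * innov, (1 - K * s) * Pp      # per-survivor channel update
        nm[j], npaths[j] = m, paths[i] + [s]
    metric, h_hat, P, paths = nm, nh, nP, npaths

decided = np.array(paths[int(np.argmin(metric))])
print("symbol errors:", int(np.sum(decided != syms)))        # sanity check only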

Relevance:

80.00%

Publisher:

Abstract:

We consider the problem of centralized routing and scheduling for IEEE 802.16 mesh networks so as to provide Quality of Service (QoS) to individual real-time and interactive data applications. We first obtain an optimal and fair routing and scheduling policy for aggregate demands for different source-destination pairs. We then present scheduling algorithms which provide per-flow QoS guarantees while utilizing the network resources efficiently. Our algorithms are also scalable: they do not require per-flow processing and queueing, and their computational requirements are modest. We have verified our algorithms via extensive simulations.
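
As a generic illustration of avoiding per-flow state (not this paper's scheduling policy), the sketch below serves packets from a small, fixed set of per-class aggregate queues with weighted round robin; the class names and weights are hypothetical.

from collections import deque

# Generic illustration: traffic is served from a few per-class aggregate queues,
# so the scheduler never keeps per-flow state or per-flow queues.
CLASS_WEIGHT = {"voice": 3, "video": 2, "data": 1}   # hypothetical QoS classes
queues = {c: deque() for c in CLASS_WEIGHT}

def enqueue(packet, qos_class):
    queues[qos_class].append(packet)      # flows are mapped to a class up front

def schedule_round():
    """One scheduling round: transmit up to `weight` packets per class."""
    sent = []
    for c, w in CLASS_WEIGHT.items():
        for _ in range(w):
            if queues[c]:
                sent.append(queues[c].popleft())
    return sent

enqueue({"flow": 17, "bytes": 200}, "voice")
enqueue({"flow": 42, "bytes": 1500}, "data")
print(schedule_round())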

Relevance:

80.00%

Publisher:

Abstract:

In an effort to reduce interlibrary loan (ILL) borrowing activity while enhancing the Library collection, the Bertrand Library has initiated a program to purchase current monographs requested through ILL by Bucknell University students and faculty. The results have been a successful reduction in ILL workload and a cost-effective means of document delivery, as measured by average delivery time, cost per title, processing costs, and circulation statistics. This procedure reflects an overall change in our philosophy concerning document access and delivery, which led to the reorganization of ILL services and staff in the Bertrand Library.

Relevance:

40.00%

Publisher:

Abstract:

Microprocessors based on a single processor (CPU) saw rapid performance growth and falling costs for roughly twenty years. These microprocessors brought computing power on the order of GFLOPS (giga floating-point operations per second) to desktop PCs and hundreds of GFLOPS to server clusters. This rise enabled new program features, better user interfaces and many other benefits. However, growth slowed abruptly in 2003 because of ever-higher energy consumption and heat-dissipation problems, which prevented further clock-frequency increases; the physical limits of silicon were drawing ever closer. To work around the problem, CPU (Central Processing Unit) manufacturers began designing multicore microprocessors, a choice that had a considerable impact on the developer community, accustomed to thinking of software as a sequence of sequential instructions. Programs that had always benefited from performance improvements with each new CPU generation thus stopped getting faster: running on a single core, they could not exploit the full power of the CPU. To take full advantage of the new CPUs, concurrent programming, previously used only on expensive systems or supercomputers, has become an increasingly common practice among developers.

At the same time, the video game industry has captured a sizeable share of the market: in 2013 alone, nearly 100 billion dollars will be spent on gaming hardware and software. To make their titles more attractive, game studios rely on ever more powerful and often poorly optimized graphics engines, which are extremely demanding in terms of performance. For this reason GPU (Graphics Processing Unit) manufacturers, especially over the last decade, have engaged in a genuine performance race that has led to products with staggering computing power. But unlike CPUs, which in the early 2000s took the multicore route in order to keep supporting sequential programs, GPUs have become manycore, with hundreds upon hundreds of small cores performing computations in parallel. Can this immense computing capacity be used in other application fields? The answer is yes, and the goal of this thesis is precisely to assess, as things stand today, how and how efficiently a general-purpose program can make use of the GPU instead of the CPU.
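
The following toy kernel (not part of the thesis) shows the manycore execution model described above: each of a very large number of lightweight GPU threads handles a single array element. It assumes a CUDA-capable GPU and the numba package; the kernel, array sizes and launch configuration are illustrative.

import numpy as np
from numba import cuda   # assumes a CUDA-capable GPU and the numba package

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)                 # global thread index
    if i < x.shape[0]:               # guard against the last, partially filled block
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)   # numba copies the arrays to/from the GPU
print(out[:4], 2.0 * x[:4] + y[:4])                            # visual check of the result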

Relevance:

40.00%

Publisher:

Abstract:

In Stream Processing platforms it is often necessary to apply differentiated processing to the input streams. The goal of this thesis is to build a scheduler capable of assigning different execution priorities to the operators responsible for processing the streams.
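
As a generic illustration of the idea (not the scheduler built in the thesis), the sketch below dispatches stream operators from a priority queue so that higher-priority operators run first whenever they have pending input; the operator names and priority values are made up.

import heapq

# Generic sketch: operators that process stream items are dispatched from a
# min-heap keyed on priority (lower value = scheduled sooner).
class Operator:
    def __init__(self, name, priority, fn):
        self.name, self.priority, self.fn = name, priority, fn
        self.pending = []

ready = []

def submit(op, item):
    op.pending.append(item)
    heapq.heappush(ready, (op.priority, id(op), op))

def run_once():
    while ready:
        _, _, op = heapq.heappop(ready)
        if op.pending:
            print(op.name, "->", op.fn(op.pending.pop(0)))

filt = Operator("filter", priority=0, fn=lambda x: x if x > 0 else None)
agg = Operator("aggregate", priority=1, fn=lambda x: x * 10)
submit(agg, 3)
submit(filt, -2)
run_once()                           # the filter runs first despite arriving later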

Relevance:

40.00%

Publisher:

Abstract:

This thesis proposes a middleware solution for scenarios in which sensors produce a large volume of data that must be managed and processed through preprocessing, filtering and buffering operations, in order to improve communication efficiency and bandwidth consumption while respecting energy and computational constraints. These components can be optimized through remote tuning operations.
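
A minimal sketch of the kind of sensor-side pipeline described above (not the thesis' middleware): readings are filtered when they change too little, buffered into batches before transmission, and both knobs are exposed for remote tuning. All names, defaults and thresholds are hypothetical.

class SensorPipeline:
    def __init__(self, delta=0.5, batch_size=10):
        self.delta = delta              # minimum change worth transmitting
        self.batch_size = batch_size    # readings per network message
        self._last = None
        self._buffer = []

    def tune(self, **params):           # remote-tuning entry point
        for k, v in params.items():
            if hasattr(self, k):
                setattr(self, k, v)

    def push(self, reading):
        if self._last is not None and abs(reading - self._last) < self.delta:
            return None                 # filtered out: not informative enough
        self._last = reading
        self._buffer.append(reading)
        if len(self._buffer) >= self.batch_size:
            batch, self._buffer = self._buffer, []
            return batch                # hand the batch to the transport layer
        return None

p = SensorPipeline(delta=0.2, batch_size=3)
for r in [20.0, 20.05, 20.4, 21.0, 21.6]:
    out = p.push(r)
    if out:
        print("send", out)
p.tune(batch_size=5)                    # e.g. applied after a remote command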

Relevance:

40.00%

Publisher:

Abstract:

Big data is the term used to describe a collection of data so large in volume, velocity and variety that it requires specific technologies and analytical methods to extract meaningful value. More and more systems are built around, and characterized by, enormous amounts of data to manage, originating from highly heterogeneous sources, with widely differing formats and very uneven data quality. Another requirement in these systems can be time: increasingly, systems need to obtain meaningful information from Big Data as soon as possible, and more and more often the input is a continuous stream of information. Specific solutions for these cases, known as Online Stream Processing, address this setting. The goal of this thesis is to propose a working prototype that processes Instant Coupon data coming from different sources, with different information formats and transmission protocols, and that stores the processed data efficiently so that answers can be provided in real time. The information sources can be of two types: XMPP and Eddystone. Once the system receives the incoming information, it extracts and processes it until it yields meaningful data that can be used by third parties. These data are stored in Apache Cassandra. The biggest problem to solve was that Apache Storm does not rebalance resources automatically, whereas in this specific case the distribution of customers over the day varies widely and is full of peaks. The internal rebalancing system relies on runtime metrics: based on throughput and execution latency it decides whether to increase or decrease the number of resources, or simply to do nothing if the statistics fall within the desired threshold values.
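
A hedged sketch of the kind of threshold-based rebalancing decision described above; the thresholds, metric names and scaling step are illustrative, not the thesis' actual values or Storm API calls.

def rebalance_decision(throughput, latency_ms,
                       tput_low=500, tput_high=5000,
                       lat_high=200.0):
    """Return +1 to add executors, -1 to remove some, 0 to leave the topology alone."""
    if latency_ms > lat_high or throughput > tput_high:
        return +1        # saturated: request more parallelism
    if throughput < tput_low and latency_ms < lat_high / 2:
        return -1        # underused: release resources
    return 0             # within the desired operating band

for t, l in [(6000, 250.0), (300, 40.0), (2000, 90.0)]:
    print(t, l, "->", rebalance_decision(t, l))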

Relevance:

30.00%

Publisher:

Abstract:

Contamination of packaged foods due to micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money through product recalls, compensation claims, consumer impact and subsequent loss of market share; in Australian food industries the cost of leaky packages is estimated at close to AUD $35 million per year. The main source of contamination is leaks in packaging which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies that are capable of reliably detecting leaky seals and delivering products at six-sigma. The technology developed here is a non-destructive testing approach that combines digital imaging and sensing with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line.

Flexible plastic packages are widely used and are the least expensive form of retaining the quality of the product. These packets can be used to seal, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the food content is not contaminated through contact with micro-organisms that enter as a result of air leakage. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals from being sold to consumers. There are many current NDT (non-destructive testing) methods for testing the seals of flexible packages that are best suited to random sampling and laboratory purposes. The three most commonly used are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, they are limited by their high processing time and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review.

The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed for a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with good, consistent results; electrically, the testing also provided solid results that allowed the researcher to move the project forward with confidence. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of alternative concepts in both the mechanical and electrical disciplines enabled the researcher to make informed decisions. Each major mechanical and electrical component is detailed through the research and design process, and the design procedure works methodically through the various major functions from both a mechanical and an electrical perspective. It also canvasses alternative ideas for the major components which, although not always practical in this application, show that the engineering and functional options were exhausted. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard, locally manufactured and distributed industry components. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food-processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines and in areas of the non-food processing industry.

Relevance:

30.00%

Publisher:

Abstract:

Background: When observers are asked to identify two targets in rapid sequence, they often suffer profound performance deficits for the second target, even when the spatial location of the targets is known. This attentional blink (AB) is usually attributed to the time required to process a previous target, implying that a link should exist between individual differences in information processing speed and the AB.

Methodology/Principal Findings: The present work investigated this question by examining the relationship between a rapid automatized naming task typically used to assess information-processing speed and the magnitude of the AB. The results indicated that faster processing actually resulted in a greater AB, but only when targets were presented amongst high similarity distractors. When target-distractor similarity was minimal, processing speed was unrelated to the AB.

Conclusions/Significance: Our findings indicate that information-processing speed is unrelated to target processing efficiency per se, but rather to individual differences in observers' ability to suppress distractors. This is consistent with evidence that individuals who are able to avoid distraction are more efficient at deploying temporal attention, but argues against a direct link between general processing speed and efficient information selection.

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates demodulation of differentially phase modulated signals (DPMS) using optimal HMM filters. The optimal HMM filter presented in the paper is computationally of order N³ per time instant, where N is the number of message symbols. Previously, optimal HMM filters have been of computational order N⁴ per time instant. Suboptimal HMM filters of computational order N² per time instant have also been proposed. The approach presented in this paper uses two coupled HMM filters and exploits knowledge of ...
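
For context only, the sketch below shows a standard HMM forward (filtering) recursion, whose cost is O(N²) per time instant for N states; it is not the paper's coupled-filter construction, and the transition matrix, likelihoods and alphabet size are toy values.

import numpy as np

def hmm_forward_step(alpha_prev, A, likelihoods):
    """One filtering update: predict through A, weight by observation likelihoods."""
    alpha = likelihoods * (A.T @ alpha_prev)     # O(N^2) matrix-vector product
    return alpha / alpha.sum()                   # normalize to a probability vector

N = 4                                            # e.g. 4 differential phase symbols
A = np.full((N, N), 1.0 / N)                     # toy symbol transition probabilities
alpha = np.full(N, 1.0 / N)                      # uniform prior over symbols
for lik in ([0.7, 0.1, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1]):   # toy per-symbol likelihoods
    alpha = hmm_forward_step(alpha, A, np.array(lik))
print("filtered symbol posterior:", np.round(alpha, 3))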

Relevance:

30.00%

Publisher:

Abstract:

This study qualitatively examined an 8-week group exercise and counseling intervention for breast and prostate cancer survivors. Groups exercised 3 days per week, 50 minutes per session, performing moderate-intensity aerobic and resistance training. Groups also underwent 90-minute supportive group psychotherapy sessions once per week. Survivors discussed their experiences in focus groups post-intervention. Transcripts were analyzed using interpretative phenomenological analysis. Survivors described how exercise facilitated counseling by creating mutual aid and trust, and how counseling helped participants with self-identity, sexuality, and returning to normalcy. When possible, counselors and fitness professionals should create partnerships to optimally support cancer survivors.

Relevance:

30.00%

Publisher:

Abstract:

Khaya senegalensis, African mahogany, a high-value hardwood, was introduced in the Northern Territory (NT) in the 1950s; included in various trials there and at Weipa, Queensland in the 1960s-1970s; planted on ex-mine sites at Weipa (160 ha) until 1985; revived in farm plantings in Queensland and in trials in the NT in the 1990s; adopted for large-scale annual planting in the Douglas-Daly region, NT from 2006; and is to have the planted area in the NT extended to at least 20,000 ha. The recent serious interest from plantation growers, including Forest Enterprises Australia Ltd (FEA), has seen the establishment of some large-scale commercial plantations. FEA initiated the current study to process relatively young plantation stands from both Northern Territory and Queensland plantations, to investigate the sawn-wood and veneer recovery and quality from trees ranging from 14 years old (NT, 36 trees) to 18-20 years old (North Queensland, 31 trees).

Field measures of tree size and straightness were complemented with log end-splitting assessment and cross-sectional disc sample collection for laboratory wood-property measurements, including colour and shrinkage. End-splitting scores assessed on sawn logs were relatively low compared to fast-grown plantation eucalypts and did not impact processing negatively. Heartwood proportion in individual trees ranged from 50% up to 92% of butt cross-sectional disc area for the visually assessed dark-coloured central heartwood and lighter-coloured transition wood combined. Dark central heartwood proportion was positively related to tree size (R² = 0.57). Chemical tests failed to assist in determining the heartwood-sapwood boundary. Mean basic density of whole disc samples was 658 kg/m³ and ranged among trees from 603 to 712 kg/m³.

When freshly sawn, the heartwood of African mahogany was orange-red to red; transition wood appeared pinkish and the sapwood was a pale yellow colour. Once air dried, the heartwood colour generally darkens to pinkish-brown or orange-brown, and the effect of prolonged time and sun exposure is to darken and change the heartwood to a red-brown colour. A portable colour-measurement spectrophotometer was used to objectively assess colour variation in CIE L*, a* and b* values over time with drying and exposure to sunlight. The capacity to predict standard colour values accurately after varying periods of direct sunlight exposure, using results obtained on initial air-dried surfaces, decreased with increasing time of sun exposure. The predictions are more accurate for L* values, which represent brightness, than for variation in the a* values (red spectrum). Selection of superior breeding trees for colour is therefore likely to be based on dried samples exposed to sunlight, to reliably highlight wood colour differences. A generally low ratio between tangential and radial shrinkage was found, which was reflected in a low incidence of board distortion (particularly cupping) during drying.

A preliminary experiment was carried out to investigate the quality of NIR models to predict shrinkage and density. NIR spectra correlated reasonably well with radial shrinkage and air-dried density. When the calibration models were applied to their validation sets, radial shrinkage was predicted to an accuracy of 76% with a Standard Error of Prediction of 0.21%. There was also strong predictive power for wood density. These are encouraging results, suggesting that NIR spectroscopy has good potential to be used as a non-destructive method to predict shrinkage and wood density from 12 mm diameter increment core samples.

Average green-off-saw recovery was 49.5% (range 40 to 69%) for Burdekin Agricultural College (BAC) logs and 41.9% (range 20 to 61%) for Katherine (NT) logs. These figures are about 10% higher than those in the 30-year-old Khaya study by Armstrong et al. (2007); however, they are inflated because the green boards were not docked to remove wane prior to being tallied. Of the recovered sawn, dried and dressed volume from the BAC logs, based on the cambial face of the boards, 27% could potentially be used for select grade, 40% for medium-feature grade and 26% for high-feature grade. The heart faces had a slightly higher recovery of select (30%) and medium-feature (43%) grade boards, with a reduction in the volume of high-feature (22%) and reject (6%) grade boards. The distribution of board grades for the NT site, aged 14 years, followed very similar trends to that of the BAC boards: averaged across the cambial and heart faces, 27% could potentially be used for select grade, 42% for medium-feature grade, 26% for high-feature grade and 5% was reject. Relative to some other subtropical eucalypts, there was a low incidence of borer attack. The major grade-limiting defects for both medium- and high-feature grade boards recovered from the BAC site were knots and wane. The presence of large knots may reflect both management practices and the nature of the genetic material at the site; this stand was not managed for timber production, with a very late pruning implemented at about age 12 years. The large amount of wane-affected boards is indicative of logs with large taper and the presence of significant sweep. Wane, knots and skip were the major grade-limiting defects for the NT site, reflecting considerable sweep and large taper, as might be expected in younger trees.

The green veneer recovered from billets of seven Khaya trees rotary-peeled on a spindleless lathe amounted to 83% of green billet volume. Dried veneer recovery ranged from 40 to 74% per billet, with an average of 64%. All of the recovered grades were suitable for use in structural ply in accordance with AS/NZS 2269:2008. The majority of veneer sheets recovered from all billets were C grade (27%), with 20% making D grade and 13% B grade. Total dry sliced-veneer recovery from the two largest logs from each location was estimated at 41.1%.

Very positive results have been recorded in this small-scale study. The amount of colour development observed and the very reasonable recoveries of both sawn and veneer products, with a good representation of higher grades in the product distribution, are encouraging. The prospects for significant improvement in these results from well-managed, productive stands grown for high-quality timber should be high. Additionally, the study has shown the utility of non-destructive evaluation techniques for use in tree-improvement programs to improve the quality of future plantations. A few trees combined several of the traits desired of individuals for a first breeding population. Fortunately, the two most promising trees (32, 19) had already been selected for breeding on external traits, and grafts of them are established in the seed orchard.
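
As an illustration of the kind of NIR calibration described above (not the study's actual model or spectra), the sketch below fits a partial least squares regression, a common choice for NIR calibration, on simulated spectra and reports a validation R² and a standard error of prediction.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Simulated "spectra" and a synthetic radial-shrinkage response; the study's
# real spectra, preprocessing and model settings are not reproduced here.
rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))
true_loadings = rng.normal(size=n_wavelengths)
shrinkage = spectra @ true_loadings * 0.01 + 3.0 + rng.normal(scale=0.2, size=n_samples)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, shrinkage, test_size=0.3, random_state=0)
model = PLSRegression(n_components=5).fit(X_cal, y_cal)
pred = model.predict(X_val).ravel()
sep = np.sqrt(np.mean((pred - y_val) ** 2))      # standard error of prediction
print(f"R^2 (validation): {model.score(X_val, y_val):.2f}, SEP: {sep:.2f}%")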

Relevance:

30.00%

Publisher:

Abstract:

Diffuse optical tomographic image reconstruction uses advanced numerical models that are too computationally costly to run in real time. Graphics processing units (GPUs) offer desktop-scale massive parallelization that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions involved in solving the system of linear equations. These open-source functions were integrated into existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPU (NVIDIA Tesla C1060) with increasing reconstruction problem size. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to a factor of 40 using GPUs compared to traditional CPUs for three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making GPUs more attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts reconstruction to no more than about 13,377 optical parameters. (C) 2010 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3506216]
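
As a rough illustration of the GPU offload described above (the paper's specific linear algebra library is not named here, so CuPy stands in for it), the sketch below solves a regularized Gauss-Newton-style update, a common form in model-based optical reconstruction, in single precision on the GPU; the matrix sizes and regularization value are arbitrary and the data are random stand-ins.

import numpy as np
import cupy as cp   # assumes a CUDA GPU and the CuPy package

# Offload (J^T J + lambda I) dx = J^T r to the GPU in float32.
m, n = 4096, 2048
J = cp.asarray(np.random.rand(m, n).astype(np.float32))   # Jacobian (sensitivity) matrix
r = cp.asarray(np.random.rand(m).astype(np.float32))      # data residual
lam = cp.float32(1e-2)                                     # regularization parameter

H = J.T @ J + lam * cp.eye(n, dtype=cp.float32)            # intensive matrix-matrix product
g = J.T @ r
dx = cp.linalg.solve(H, g)                                 # dense GPU solve in single precision
print("update norm:", float(cp.linalg.norm(dx)))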