967 results for Optimal frame-level timing estimator


Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Early diagnosis and adequate treatment of malaria cases are the main strategies for controlling the disease. Several alternatives to traditional microscopic diagnosis have been proposed in recent years; immunochromatographic tests that capture target antigens of the malaria parasites have been put forward, such as the OptiMAL-IT® test, which detects the lactate dehydrogenase of Plasmodium sp. The study aimed to evaluate the level of agreement between the immunochromatographic test (OptiMAL-IT®) and the thick blood smear for the diagnosis of malaria in the municipality of Mazagão, Amapá. A total of 413 individuals with malaria symptoms, aged 1-68 years, who sought care at the Unidade Mista de Saúde de Mazagão were analyzed. The OptiMAL-IT® results were compared with the results obtained from the same samples by Giemsa-stained thick smear. Of the 413 patients suspected of having malaria, 317 (76.8%) were positive by the thick smear (GE) and 311 (75.3%) were positive by the rapid diagnostic test (RDT). Among the positive thick-smear slides, 27.4% were P. falciparum and 72.6% P. vivax; the OptiMAL-IT® test detected 27.7% P. falciparum and 72.3% P. vivax. The sensitivity of the RDT was 97.7% for P. falciparum and 98.2% for P. vivax; the overall sensitivity of the RDT was 98.1%, and the specificity, both overall and for each species, was 100%. Positive and negative predictive values were 100% and 94.1%, respectively. The OptiMAL-IT® test showed high agreement with the thick smear and was specific and efficient, and it can be used for malaria diagnosis in settings where microscopy is not available.
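As a worked check on the reported figures (a sketch, not from the paper: the 2x2 cell counts below are reconstructed from the published totals of 413 patients, 317 thick-smear positives, 98.1% sensitivity and 100% specificity), the standard diagnostic accuracy metrics can be computed as:

```python
# Diagnostic accuracy of the RDT (OptiMAL-IT) against the thick smear (GE)
# taken as the reference standard. Cell counts are reconstructed from the
# reported totals; they reproduce the published metrics.
tp, fn = 311, 6    # RDT-positive / RDT-negative among the 317 GE-positives
fp, tn = 0, 96     # RDT-positive / RDT-negative among the 96 GE-negatives

sensitivity = tp / (tp + fn)   # 311/317 ~ 0.981
specificity = tn / (tn + fp)   # 96/96   = 1.000
ppv = tp / (tp + fp)           # 311/311 = 1.000
npv = tn / (tn + fn)           # 96/102  ~ 0.941
print(f"Se={sensitivity:.3f} Sp={specificity:.3f} PPV={ppv:.3f} NPV={npv:.3f}")
```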

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

Survivable traffic grooming (STG) is a promising approach to provide reliable and resource-efficient multigranularity connection services in wavelength-division-multiplexing (WDM) optical networks. In this paper, we study the STG problem in WDM mesh optical networks employing path protection at the connection level. Both dedicated-protection and shared-protection schemes are considered. Given network resources, the objective of the STG problem is to maximize network throughput. To enable survivability under various kinds of single failures, such as fiber cut and duct cut, we consider the general shared-risk-link-group (SRLG) diverse routing constraints. We first resort to the integer-linear-programming (ILP) approach to obtain optimal solutions. To address its high computational complexity, we then propose three efficient heuristics, namely the separated survivable grooming algorithm (SSGA), the integrated survivable grooming algorithm (ISGA), and the tabu-search survivable grooming algorithm (TSGA). While SSGA and ISGA correspond to an overlay network model and a peer network model, respectively, TSGA further improves the grooming results from SSGA and ISGA by incorporating the effective tabu-search (TS) method. Numerical results show that the heuristics achieve solutions comparable to those of the ILP approach, which requires significantly longer running times.
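The tabu-search component can be pictured with a minimal skeleton. The sketch below runs a generic tabu search over subsets of demands on a single capacitated link, purely to illustrate the move/tabu-list/aspiration mechanics; the paper's TSGA operates on full WDM mesh topologies with SRLG-diverse protection paths, which this toy omits.

```python
def tabu_search(initial, neighbors, score, iters=200, tenure=10):
    """Generic tabu-search loop: move to the best non-tabu neighbor,
    remembering recent moves in a FIFO tabu list (with aspiration)."""
    current = best = initial
    tabu = []
    for _ in range(iters):
        cands = [(m, s) for m, s in neighbors(current)
                 if m not in tabu or score(s) > score(best)]  # aspiration criterion
        if not cands:
            break
        move, current = max(cands, key=lambda ms: score(ms[1]))
        tabu.append(move)
        if len(tabu) > tenure:
            tabu.pop(0)
        if score(current) > score(best):
            best = current
    return best

# Toy demo: groom a subset of demands onto one link of capacity 16,
# maximizing carried traffic (a stand-in for network throughput).
demands = [3, 1, 4, 1, 5, 9, 2, 6]

def score(sol):
    load = sum(d for d, x in zip(demands, sol) if x)
    return load if load <= 16 else -load   # penalize capacity violations

def neighbors(sol):
    for i in range(len(sol)):
        yield i, sol[:i] + (1 - sol[i],) + sol[i + 1:]

best = tabu_search((0,) * len(demands), neighbors, score)
print(best, score(best))
```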

Relevance:

30.00%

Publisher:

Abstract:

The global attractor of a gradient-like semigroup has a Morse decomposition. Associated to this Morse decomposition there is a Lyapunov function (differentiable along solutions), defined on the whole phase space, which provides relevant information on the structure of the attractor. In this paper we prove the continuity of these Lyapunov functions under perturbation. On the other hand, the attractor of a gradient-like semigroup also has an energy level decomposition, which is again a Morse decomposition but with a total order between any two components. We claim that, from a dynamical point of view, this is the optimal decomposition of a global attractor; that is, if we start from the finest Morse decomposition, the energy level decomposition is the coarsest Morse decomposition that still produces a Lyapunov function which gives the same information about the structure of the attractor. We also establish sufficient conditions which ensure the stability of this kind of decomposition under perturbation. In particular, if connections between different isolated invariant sets inside the attractor remain under perturbation, we show the continuity of the energy level Morse decomposition. The class of Morse-Smale systems illustrates our results.
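For readers outside the field, the defining property that such a Lyapunov function satisfies can be stated compactly. This is a standard formulation under the usual definitions, not taken verbatim from the paper:

```latex
% Lyapunov function associated with a Morse decomposition {Xi_1,...,Xi_n}
% of the global attractor of a semigroup {T(t)}_{t>=0} on a phase space X.
\[
  V : X \to \mathbb{R}, \qquad
  t \mapsto V\bigl(T(t)x\bigr) \ \text{is non-increasing for every } x \in X,
\]
\[
  V \ \text{is constant on each } \Xi_i, \qquad
  V\bigl(T(t)x\bigr) = V(x) \ \ \forall t \ge 0
  \iff x \in \textstyle\bigcup_{i=1}^{n} \Xi_i .
\]
```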

Relevance:

30.00%

Publisher:

Abstract:

Background: Recent reviews have indicated that low-level laser therapy (LLLT) is ineffective in lateral elbow tendinopathy (LET) without assessing the validity of treatment procedures and doses or the influence of prior steroid injections. Methods: Systematic review with meta-analysis, with primary outcome measures of pain relief and/or global improvement and subgroup analyses of methodological quality, wavelengths and treatment procedures. Results: 18 randomised placebo-controlled trials (RCTs) were identified, with 13 RCTs (730 patients) meeting the criteria for meta-analysis. 12 RCTs satisfied half or more of the methodological criteria. Publication bias was detected by Egger's graphical test, which showed a negative direction of bias. Ten of the trials included patients with poor prognosis caused by failed steroid injections or other treatment failures, or long symptom duration or severe baseline pain. The weighted mean difference (WMD) for pain relief was 10.2 mm [95% CI: 3.0 to 17.5] and the RR for global improvement was 1.36 [1.16 to 1.60]. Trials which targeted acupuncture points reported negative results, as did trials with wavelengths of 820, 830 and 1064 nm. In a subgroup of five trials with 904 nm lasers and one trial with a 632 nm wavelength, where the lateral elbow tendon insertions were directly irradiated, the WMD for pain relief was 17.2 mm [95% CI: 8.5 to 25.9] and 14.0 mm [95% CI: 7.4 to 20.6] respectively, while the RR for global pain improvement was only reported for 904 nm, at 1.53 [95% CI: 1.28 to 1.83]. LLLT doses in this subgroup ranged between 0.5 and 7.2 Joules. Secondary outcome measures of pain-free grip strength, pain pressure threshold, sick leave and follow-up data from 3 to 8 weeks after the end of treatment showed consistently significant results in favour of the same LLLT subgroup (p < 0.02). No serious side-effects were reported. Conclusion: LLLT administered with optimal doses of 904 nm and possibly 632 nm wavelengths directly to the lateral elbow tendon insertions seems to offer short-term pain relief and less disability in LET, both alone and in conjunction with an exercise regimen. This finding contradicts the conclusions of previous reviews which failed to assess treatment procedures, wavelengths and optimal doses.
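The pooled effect sizes quoted above (e.g. WMD 10.2 mm [3.0, 17.5]) come from standard inverse-variance meta-analysis machinery, which a few lines make concrete. The three (mean difference, standard error) pairs below are illustrative placeholders, not the review's trial data:

```python
import math

# Fixed-effect inverse-variance pooling of mean differences: each trial is
# weighted by 1/SE^2, and the pooled standard error is sqrt(1/sum of weights).
trials = [(12.0, 5.0), (8.0, 4.0), (15.0, 6.0)]  # (mean difference in mm, SE)

weights = [1 / se**2 for _, se in trials]
wmd = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
lo, hi = wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled
print(f"WMD = {wmd:.1f} mm, 95% CI [{lo:.1f}, {hi:.1f}]")
```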

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes the incorporation of engineering knowledge through both (a) advanced state-of-the-art preference-handling decision-making tools integrated in multiobjective evolutionary algorithms and (b) engineering-knowledge-based variance-reduction simulation, as enhancing tools for the robust optimum design of structural frames taking uncertainties in the design variables into consideration. The simultaneous minimization of the constrained weight (adding structural weight and the average distribution of constraint violations) on the one hand and the standard deviation of the distribution of constraint violations on the other is handled with multiobjective-optimization-based evolutionary computation in two different multiobjective algorithms. The optimum design values of the deterministic structural problem in question are proposed as a reference point (the aspiration level) in reference-point-based evolutionary multiobjective algorithms (here g-dominance is used). Results including
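The g-dominance relation mentioned here (Molina et al.) replaces plain Pareto dominance once a reference point g is fixed. A minimal sketch for a minimization problem, with made-up objective vectors standing in for (constrained weight, standard deviation of constraint violation):

```python
def flag(x, g):
    """1 if x satisfies every aspiration level g_i, or fails every one; else 0."""
    return 1 if (all(xi <= gi for xi, gi in zip(x, g))
                 or all(xi >= gi for xi, gi in zip(x, g))) else 0

def pareto_dominates(x, y):
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def g_dominates(x, y, g):
    """x g-dominates y: higher flag wins; ties fall back to Pareto dominance."""
    fx, fy = flag(x, g), flag(y, g)
    return fx > fy or (fx == fy and pareto_dominates(x, y))

# Objectives minimized; g is the deterministic optimum used as aspiration level.
print(g_dominates((90.0, 0.5), (95.0, 0.9), g=(100.0, 1.0)))  # True
```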

Relevance:

30.00%

Publisher:

Abstract:

The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from collaboration with European industry in the framework of European funded research projects. In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at upper layers, where the encoding units (symbols) are packets of bits rather than single bits; such upper layer coding techniques are therefore usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity inherent in the necessity of adopting a physical layer interleaver of reasonable size, so as to avoid increasing modem complexity and the latency of all services. Packet layer techniques, thanks to their longer codeword duration (each codeword is composed of several packets of bits), offer intrinsically longer protection against long fading events. Furthermore, being implemented at the upper layers, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with service requirements (e.g. latency constraints). Packet layer coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer. In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful tool for system design, allowing one to foresee the performance of the upper layer decoder. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder). We analyze the performance of state-of-the-art LDPCCC when our decoder is adopted.
Finally, we propose a design rule which allows trading off performance and latency.
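The packet-layer idea can be made concrete with the smallest possible erasure code: one XOR parity packet protecting a block of source packets, so any single lost packet is recoverable at the receiver. This toy is only illustrative; the Reed-Solomon, LDPC and Fountain codes discussed in the thesis generalize the same principle to many parity packets:

```python
# Toy packet-layer erasure code: parity = XOR of all source packets, so a
# single erased packet is the XOR of the surviving packets and the parity.
def xor_packets(packets):
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

source = [b"pkt0", b"pkt1", b"pkt2"]       # equal-length source packets
parity = xor_packets(source)

received = [None, source[1], source[2]]    # packet 0 erased by the channel
lost = received.index(None)
received[lost] = xor_packets([p for p in received if p is not None] + [parity])
assert received[lost] == source[lost]      # erased packet fully recovered
```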

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents and discusses TEDA, an algorithm for the automatic real-time detection of tsunamis and large amplitude waves in sea level records. TEDA has been developed within the framework of the Tsunami Research Team of the University of Bologna for coastal tide gauges, and it has been calibrated and tested for the tide gauge station of Adak Island, Alaska. A preliminary study on applying TEDA to offshore buoys in the Pacific Ocean is also presented.
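As a generic illustration of the kind of real-time processing involved (not TEDA's actual algorithm, whose details are in the thesis), a sea-level record can be screened by comparing each sample against a trailing baseline and requiring the anomaly to persist:

```python
from collections import deque

def detect(samples, window=60, thresh=0.5, persist=3):
    """Flag sample indices where the level departs from a trailing mean
    by more than `thresh` (metres) for `persist` consecutive samples."""
    baseline, hits, alarms = deque(maxlen=window), 0, []
    for t, level in enumerate(samples):
        if len(baseline) == window:
            hits = hits + 1 if abs(level - sum(baseline) / window) > thresh else 0
            if hits >= persist:
                alarms.append(t)
        baseline.append(level)
    return alarms

print(detect([0.0] * 80 + [1.2] * 5))  # -> alarms from index 82 onward
```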

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the design of advanced OFDM systems; both waveform and receiver design are treated. The main scope of the thesis is to study, create, and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been assessed, and a novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping in which the symbols belonging to the modulation alphabet are not anchored but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process that minimizes the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem which has been tackled is vulnerability to synchronization errors: in OFDM systems, accurate recovery of carrier frequency and symbol timing is crucial for proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm has been presented; it is characterized by very low hardware complexity while guaranteeing very good performance in both AWGN and multipath channels. Regarding the POST-FFT phase, a novel approach to both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This allows replacing conventional pilots with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh-fading multipath while retaining low complexity. The performance of this approach has been evaluated analytically and numerically; compared with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derive lower bounds on the estimation mean-square error (MSE) performance for any MSS selection strategy; on the receiver design side, we propose novel MSS selection strategies which are shown to approach these MSE lower bounds and to outperform state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the broadband satellite return channel has been assessed.
Notably, SC-FDMA is able to improve the physical layer spectral efficiency with respect to the single carrier systems used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame and a novel Joint Multi-User (JMU) estimation method for SC-FDMA have been proposed. As shown by numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee proper demodulation of the received signal.
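Since the PAPR figure drives much of the transmitter-side discussion, here is how it is measured on a single OFDM symbol. This is a sketch with plain QPSK mapping shown for contrast; RISM itself, which iteratively optimizes the symbol positions, is not reproduced here:

```python
import numpy as np

# PAPR of an OFDM symbol: peak instantaneous power over mean power of the
# time-domain signal produced by the IFFT of the subcarrier symbols.
N = 256                                            # number of subcarriers
bits = np.random.randint(0, 2, (N, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)                 # time-domain OFDM symbol

papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR = {papr_db:.1f} dB")                  # ~8-12 dB for plain OFDM
```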

Relevance:

30.00%

Publisher:

Abstract:

The modern stratigraphy of clastic continental margins is the result of the interaction of several geological processes acting on different time scales, among which sea level oscillations, sediment supply fluctuations and local tectonics are the main mechanisms. During the past three years my PhD focused on understanding the impact of each of these processes on the deposition of the central and northern Adriatic sedimentary successions, with the aim of reconstructing and quantifying the Late Quaternary eustatic fluctuations. In the last few decades, several authors have tried to quantify past eustatic fluctuations through the analysis of direct sea level indicators, such as drowned barrier-island deposits or coral reefs, or through indirect methods, such as oxygen isotope ratios (δ18O) or modeling simulations. Sea level curves obtained from direct sea level indicators record a composite signal, formed by the contribution of the global eustatic change and regional factors such as tectonic processes or glacial-isostatic rebound effects: the eustatic signal has to be obtained by removing the contribution of these other mechanisms. To obtain the most realistic sea level reconstructions it is important to quantify the tectonic regime of the central Adriatic margin. This result has been achieved by integrating a numerical approach with the analysis of high-resolution seismic profiles. In detail, the subsidence trend obtained from the geohistory analysis and the backstripping of the borehole PRAD1.2 (a 71 m continuous borehole drilled in -185 m of water depth, south of the Mid Adriatic Deep - MAD - during the European Project PROMESS 1, Profile Across Mediterranean Sedimentary Systems, Part 1) has been confirmed by the analysis of lowstand paleoshorelines and by the benthic foraminifera associations investigated through the borehole. This work showed an evolution from an inner-shelf environment during Marine Isotopic Stage (MIS) 10 to upper-slope conditions during MIS 2. Once the tectonic regime of the central Adriatic margin had been constrained, it was possible to investigate the impact of sea level and sediment supply fluctuations on the deposition of the Late Pleistocene-Holocene transgressive deposits. The Adriatic transgressive record (TST - Transgressive Systems Tract) is formed by three correlative sedimentary bodies deposited in less than 14 kyr since the Last Glacial Maximum (LGM); in particular, along the central Adriatic shelf and in the adjacent slope basin the TST is formed by marine units, while along the northern Adriatic shelf the TST is represented by coastal deposits in a backstepping configuration. The central Adriatic margin, characterized by a thick transgressive sedimentary succession, is the ideal site to investigate the impact of Late Pleistocene climatic and eustatic fluctuations, among which Meltwater Pulses 1A and 1B and the Younger Dryas cold event. The central Adriatic TST is formed by a tripartite deposit bounded by two regional unconformities. In particular, the middle TST unit includes two prograding wedges, deposited in the interval between the two Meltwater Pulse events, as highlighted by several 14C age estimates, and likely recorded the Younger Dryas cold interval.
Modeling simulations, obtained with the two coupled models HydroTrend 3.0 and 2D-Sedflux 1.0C (developed by the Community Surface Dynamics Modeling System - CSDMS) and integrated with the analysis of high-resolution seismic profiles and core samples, indicate that: (1) the prograding middle TST unit, deposited during the Younger Dryas, was formed as a consequence of an increase in sediment flux, likely connected to a decline in vegetation cover in the catchment area due to the establishment of sub-glacial arid conditions; (2) the two-stage prograding geometry was the consequence of a sea level still-stand (or possibly a fall) during the Younger Dryas event. The northern Adriatic margin, characterized by a broad and gentle shelf (350 km wide with a low-angle plunge of 0.02° to the SE), is the ideal site to quantify the timing of each step of the post-LGM sea level rise. The modern shelf is characterized by sandy deposits of barrier-island systems in a backstepping configuration, showing younger ages at progressively shallower depths, which recorded the step-wise nature of the last sea level rise. The age-depth model, obtained from dated samples of basal peat layers, is in good agreement with previously published sea level curves and highlights the post-glacial eustatic trend. The interval corresponding to the Younger Dryas cold reversal, instead, is more complex: two coeval coastal deposits characterize the northern Adriatic shelf at very different water depths. Several explanations and different models can be attempted to explain this conundrum, but the problem remains unsolved.

Relevance:

30.00%

Publisher:

Abstract:

This work is motivated by biological questions concerning the behavior of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe a diffusion process X between spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must still be determined to fully specify the model. One way to approach this problem is to treat x_0 and S as parameters of a statistical model and to estimate them. This work discusses four different cases, in which the membrane potential X between spikes is assumed to be, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we regard as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly; moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert-space norm. We prove that all estimators are strongly consistent and asymptotically normal. In the last chapter we examine the efficiency of the minimum-distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
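The spike-generation mechanism described above is easy to simulate, which also shows where the iid hitting-time observations come from. A sketch of the Ornstein-Uhlenbeck case with illustrative (not estimated) parameters:

```python
import numpy as np

# Euler-Maruyama simulation of one inter-spike interval: X follows
# dX_t = beta(X_t) dt + sigma(X_t) dB_t from the reset value x0 until it
# first hits the excitation threshold S (t_max guards against no hit).
def hitting_time(beta, sigma, x0, S, dt=1e-3, t_max=50.0):
    x, t = x0, 0.0
    while x < S and t < t_max:
        x += beta(x) * dt + sigma(x) * np.sqrt(dt) * np.random.randn()
        t += dt
    return t

# Ornstein-Uhlenbeck case: beta(x) = theta*(mu - x), constant sigma.
theta, mu, sig = 1.0, 1.2, 0.3
isis = [hitting_time(lambda x: theta * (mu - x), lambda x: sig, x0=0.0, S=1.0)
        for _ in range(100)]
print(f"mean inter-spike interval ~ {np.mean(isis):.2f}")
```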

Relevance:

30.00%

Publisher:

Abstract:

Hybrid vehicles (HV), comprising a conventional ICE-based powertrain and a secondary energy source to be converted into mechanical power as well, represent a well-established alternative for substantially reducing both the fuel consumption and the tailpipe emissions of passenger cars. Several HV architectures are either being studied or already available on the market, e.g. mechanical, electric, hydraulic and pneumatic hybrid vehicles. Among these, the electric (HEV) and mechanical (HSF-HV) parallel hybrid configurations are examined throughout this Thesis. To fully exploit the potential of HVs, the hybrid components to be installed must be properly chosen and sized, while an effective supervisory control must be adopted to coordinate how the different power sources are managed and how they interact. Real-time controllers can be derived starting from the optimal benchmark results. However, the application of these powerful instruments requires a simplified yet reliable and accurate model of the hybrid vehicle system. This can be a complex task, especially as the complexity of the system grows, as with the HSF-HV system assessed in this Thesis. The first task of the dissertation is to establish the optimal modeling approach for an innovative and promising mechanical hybrid vehicle architecture. It is shown how the chosen modeling paradigm affects both the quality of the solution and the computational effort required, using an optimization technique based on Dynamic Programming. The second goal concerns the control of pollutant emissions in a parallel Diesel HEV. The emissions level obtained under real-world driving conditions is substantially higher than the usual result obtained in a homologation cycle. For this reason, an on-line control strategy capable of guaranteeing the desired emissions level, while minimizing fuel consumption and avoiding excessive battery depletion, is the target of the corresponding section of the Thesis.
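The Dynamic Programming benchmark mentioned here follows the classic backward recursion over a discretized storage state. The sketch below uses a made-up fuel map, a toy drive cycle and a coarsely quantized state grid, purely to show the structure of the recursion, not the thesis' vehicle model:

```python
import math

# Backward DP for a parallel-hybrid power split: at each step choose the
# fraction u of the power demand covered by the engine (the rest drains the
# storage), minimizing total fuel over the cycle.
demand = [20, 35, 15, 40, 25]                    # kW per step (toy cycle)
socs = [round(0.1 * i, 1) for i in range(11)]    # storage state grid 0.0..1.0
splits = [0.0, 0.5, 1.0]                         # engine share of the demand

def fuel(p_engine):                              # toy convex fuel-rate map
    return 0.08 * p_engine + 0.002 * p_engine**2

cost = {s: 0.0 for s in socs}                    # terminal cost-to-go
for p in reversed(demand):
    new_cost = {}
    for s in socs:
        best = math.inf
        for u in splits:
            s_next = round(s - 0.01 * (1 - u) * p, 1)  # quantized storage drain
            if s_next in cost:                         # skip infeasible states
                best = min(best, fuel(u * p) + cost[s_next])
        new_cost[s] = best
    cost = new_cost
print(f"optimal fuel cost starting from a full store: {cost[1.0]:.2f}")
```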

Relevance:

30.00%

Publisher:

Abstract:

Objective: To compare clinical outcomes after laparoscopic cholecystectomy (LC) for acute cholecystitis performed at various time-points after hospital admission. Background: Symptomatic gallstones represent an important public health problem, with LC as the treatment of choice. LC is increasingly offered for acute cholecystitis; however, the optimal time-point for LC in this setting remains a matter of debate. Methods: The analysis was based on the prospective database of the Swiss Association of Laparoscopic and Thoracoscopic Surgery and included patients undergoing emergency LC for acute cholecystitis between 1995 and 2006, grouped according to the time-point of LC after hospital admission (admission day (d0), d1, d2, d3, d4/5, d ≥6). Linear and generalized linear regression models assessed the effect of the timing of LC on intra- and postoperative complications, conversion and reoperation rates, and length of postoperative hospital stay. Results: Of 4113 patients, 52.8% were female; the median age was 59.8 years. Delaying LC resulted in significantly higher conversion rates (from 11.9% at d0 to 27.9% at d ≥6 after admission, P < 0.001), surgical postoperative complications (5.7% to 13%, P < 0.001) and reoperation rates (0.9% to 3%, P = 0.007), with a significantly longer postoperative hospital stay (P < 0.001). Conclusions: Delaying LC for acute cholecystitis has no advantages, resulting in significantly increased conversion and reoperation rates, more postoperative complications and a longer postoperative hospital stay. This investigation, one of the largest in the literature, provides compelling evidence that acute cholecystitis merits surgery within 48 hours of hospital admission if the impact on the patient and the health care system is to be minimized.