966 results for Fixed Block size Transform Coding
Abstract:
OBJECTIVE: To describe the prevalence of borderline blood pressure (BBP) and hypertension (HT) among young adults and to assess the association between size at birth and BBP/HT. METHODS: Data were collected from the first Brazilian birth cohort study, in Ribeirão Preto (southeastern Brazil), started in 1978/79. Of 6,827 singleton hospital-born infants, 2,060 were evaluated at 23/25 years of age. Blood samples were collected, anthropometric assessment was performed, and information was obtained on occupation, schooling, lifestyle habits and chronic diseases. Blood pressure (BP) was classified as: 1) BBP: systolic BP (SBP) ≥ 130 and < 140 mm Hg and/or diastolic BP (DBP) ≥ 85 and < 90 mm Hg; 2) HT: SBP ≥ 140 and/or DBP ≥ 90 mm Hg. A polytomous logistic regression model was applied. RESULTS: The prevalence of BBP was 13.5% (men 23.2%) and that of HT was 9.5% (men 17.7%). BBP was independently associated with male sex (RR 8.84; 95%CI 6.09;12.82), birth length ≥ 50 cm (RR 1.97; 1.04;3.73), body mass index (BMI) ≥ 30 kg/m² (RR 3.23; 2.02;5.15) and altered waist circumference (RR 1.61; 1.13;2.29), while HT was associated with male sex (RR 15.18; 8.92;25.81), BMI ≥ 30 kg/m² (RR 3.68; 2.23;6.06), altered waist circumference (RR 2.68; 1.77;4.05) and elevated blood glucose (RR 2.55; 1.27;5.10), but not with birth length. CONCLUSIONS: The prevalences of BBP and HT among the young adults of this cohort were higher in men than in women. Greater birth length was associated with BBP but not with HT, while birth weight was not associated with either BBP or HT. Adult risk factors explained most of the increases in BBP or HT.
Abstract:
In this paper we address the real-time capabilities of P-NET, a multi-master fieldbus standard based on a virtual token-passing scheme. We show how P-NET's medium access control (MAC) protocol is able to guarantee a bounded access time for message requests. We then propose a model for implementing fixed priority-based dispatching mechanisms at each master's application level. In this way, we diminish the impact of the first-come-first-served (FCFS) policy that P-NET uses at the data link layer. The proposed model raises several issues well known within the real-time systems community: message release jitter; pre-run-time schedulability analysis in non-preemptive contexts; non-independence of tasks at the application level. We identify these issues in the proposed model and show how results available for priority-based task dispatching can be adapted to encompass priority-based message dispatching in P-NET networks.
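The application-level dispatching idea can be sketched as a priority queue that hands the highest-priority pending request to the data link layer whenever the virtual token arrives at this master. The class and method names below are hypothetical illustrations, not part of the P-NET standard:

```python
import heapq
import itertools

class PriorityMessageQueue:
    """Application-level fixed-priority dispatcher for outgoing messages.

    The data link layer serves requests FCFS; by ordering requests here,
    the highest-priority pending message is always the one handed down
    (hypothetical sketch, not the P-NET API)."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break within one priority

    def submit(self, priority, message):
        # Lower number = higher priority, as usual in fixed-priority scheduling.
        heapq.heappush(self._heap, (priority, next(self._seq), message))

    def dispatch(self):
        # Called when the virtual token arrives at this master.
        if self._heap:
            return heapq.heappop(self._heap)[2]
        return None

q = PriorityMessageQueue()
q.submit(3, "log")
q.submit(1, "alarm")
q.submit(2, "setpoint")
print(q.dispatch())   # "alarm" is dispatched first
```

Requests submitted at equal priority are still served FCFS among themselves, via the sequence counter.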
Abstract:
While the earliest deadline first (EDF) algorithm is known to be optimal as a uniprocessor scheduling policy, its implementation comes at a cost in terms of complexity. Fixed task-priority algorithms, on the other hand, have lower complexity but a higher likelihood of task sets being declared unschedulable when compared to EDF. Various attempts have been undertaken to increase the chances of proving a task set schedulable with similarly low complexity. In some cases, this was achieved by modifying applications to limit preemptions, at the cost of flexibility. In this work, we explore several variants of a concept to limit interference by locking down the ready queue at certain instants. The aim is to increase the prospects of schedulability of a given task system, without compromising on complexity or flexibility, when compared to the regular fixed task-priority algorithm. As a final contribution, a new preemption threshold assignment algorithm is provided which is less complex and more straightforward than the previous method available in the literature.
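As background for the fixed task-priority setting, the classical response-time analysis that such schedulability tests build on can be sketched as follows. This is the textbook recurrence for preemptive uniprocessor fixed-priority scheduling with implicit deadlines, not the paper's new algorithm:

```python
import math

def response_time(tasks, i):
    """Classic response-time analysis for preemptive fixed-priority
    scheduling on one processor.

    tasks: list of (C, T) pairs sorted by decreasing priority, with
    implicit deadlines D = T. Returns the worst-case response time of
    task i, or None if it can miss its deadline."""
    C, T = tasks[i]
    R = C
    while True:
        # Interference from all higher-priority tasks released in [0, R).
        interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        R_next = C + interference
        if R_next == R:
            return R            # fixed point reached
        if R_next > T:
            return None         # deadline miss
        R = R_next

tasks = [(1, 4), (2, 6), (3, 12)]   # (WCET, period), highest priority first
print([response_time(tasks, i) for i in range(3)])   # [1, 3, 10]
```

All three tasks meet their deadlines here; the lowest-priority task converges to a response time of 10 ≤ 12.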
Abstract:
The recent trend of chip architectures towards higher numbers of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
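The idea of serving read-only transactions from recent consistent snapshots can be illustrated with a minimal multi-version shared object. This is a sketch of the general technique only; the paper's actual conflict-management algorithm and its formula for the required number of versions are not reproduced here:

```python
class VersionedObject:
    """Keeps the last k committed versions of a shared object so that a
    read-only transaction can read a recent consistent snapshot without
    blocking or aborting (illustrative sketch of multi-versioning)."""

    def __init__(self, k, initial):
        self.k = k
        self.versions = [(0, initial)]          # (commit_timestamp, value)

    def commit(self, ts, value):
        # Called by an update transaction at commit time.
        self.versions.append((ts, value))
        self.versions = self.versions[-self.k:]  # retain only the newest k

    def read(self, snapshot_ts):
        # Wait-free read: newest version no younger than the snapshot.
        for ts, value in reversed(self.versions):
            if ts <= snapshot_ts:
                return value
        raise LookupError("snapshot older than all retained versions")

obj = VersionedObject(k=3, initial=0)
for ts, v in [(1, 10), (2, 20), (3, 30)]:
    obj.commit(ts, v)
print(obj.read(2))   # a reader pinned at timestamp 2 still sees 20
```

Sizing k correctly (so that no in-flight reader's snapshot is evicted) is exactly the per-task-set calculation the paper addresses.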
Abstract:
In this paper, the fractional Fourier transform (FrFT) is applied to the spectral bands of a two-component mixture containing oxfendazole and oxyclozanide to provide multicomponent quantitative prediction of the related substances. With this aim in mind, the moduli of the FrFT spectral bands are processed by the continuous Mexican Hat family of wavelets, denoted MEXH-CWT-MOFrFT. Four modulus sets are obtained for the FrFT parameter a, going from 0.6 up to 0.9, in order to compare their effects upon the spectral and quantitative resolutions. Four linear regression plots for each substance were obtained by measuring the MEXH-CWT-MOFrFT amplitudes in the application of the MEXH family to the modulus of the FrFT. This new combined powerful tool is validated by analyzing artificial samples of the related drugs, and it is applied to the quality control of commercial veterinary samples.
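The Mexican hat (Ricker) wavelet used in the MEXH-CWT step has a standard closed form, proportional to the second derivative of a Gaussian. A minimal direct-summation CWT evaluation at one (scale, shift) point might look like this; it is an illustrative toy, not the validated MEXH-CWT-MOFrFT pipeline:

```python
import math

def mexican_hat(t):
    """Mexican hat (Ricker) wavelet in its common normalisation."""
    c = 2.0 / (math.sqrt(3.0) * math.pi ** 0.25)
    return c * (1.0 - t * t) * math.exp(-t * t / 2.0)

def cwt_point(signal, scale, shift):
    """One point of the continuous wavelet transform of a sampled
    signal, computed by direct summation over the samples."""
    return sum(signal[i] * mexican_hat((i - shift) / scale)
               for i in range(len(signal))) / math.sqrt(scale)

# A narrow bump responds strongly when the wavelet is centred on it.
signal = [0.0] * 20
signal[10] = 1.0
on_peak = cwt_point(signal, scale=2.0, shift=10)
off_peak = cwt_point(signal, scale=2.0, shift=0)
print(on_peak > off_peak)   # True
```

In the paper's setting the "signal" would be a modulus set of the FrFT spectrum, scanned over many scales and shifts.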
Abstract:
The goal of this study is to analyze the dynamical properties of financial data series from nineteen worldwide stock market indices (SMI) during the period 1995–2009. SMI reveal a complex behavior that can be explored since a considerable volume of data is available. In this paper, the windowed Fourier transform and methods of fractional calculus are applied. The results reveal classification patterns typical of fractional-order systems.
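A single frame of the windowed Fourier transform mentioned above can be sketched with a Hann window and a direct DFT. This is an illustrative toy on a synthetic oscillation, not the study's actual analysis of the SMI series:

```python
import cmath
import math

def windowed_dft(x, start, width):
    """Magnitude spectrum of one Hann-windowed segment of a series --
    a single frame of the windowed (short-time) Fourier transform used
    to track how a spectrum evolves over time."""
    seg = x[start:start + width]
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (width - 1))
            for n in range(width)]
    w = [s * h for s, h in zip(seg, hann)]
    # Direct DFT over the non-negative frequency bins.
    return [abs(sum(w[n] * cmath.exp(-2j * math.pi * k * n / width)
                    for n in range(width)))
            for k in range(width // 2 + 1)]

# A pure oscillation with 4 cycles per 16-sample window peaks at bin 4.
series = [math.sin(2 * math.pi * 4 * n / 16) for n in range(64)]
spectrum = windowed_dft(series, start=0, width=16)
print(spectrum.index(max(spectrum)))   # 4
```

Sliding `start` across the series yields the time–frequency picture; on real index data the dominant bins drift, which is what supports the classification.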
Abstract:
In this paper we consider global fixed-priority preemptive multiprocessor scheduling of constrained-deadline sporadic tasks that share resources in a non-nested manner. We develop a novel resource-sharing protocol and a corresponding schedulability test for this system. We also develop the first schedulability analysis of priority inheritance protocol for the aforementioned system. Finally, we show that these protocols are efficient (based on the developed schedulability tests) for a class of priority-assignments called reasonable priority-assignments.
Abstract:
Consider global fixed-priority preemptive multiprocessor scheduling of implicit-deadline sporadic tasks. I conjecture that the utilization bound of SM-US(√2−1) is √2−1.
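If the conjecture holds, the corresponding sufficient utilization test simply compares total utilization against (√2−1)·m for m processors. A sketch, assuming the conjectured bound (which is exactly what remains open):

```python
import math

THRESHOLD = math.sqrt(2) - 1   # ~0.4142, the conjectured per-processor bound

def sm_us_schedulable(utilizations, m):
    """Sufficient utilization-bound test implied by the conjecture:
    an implicit-deadline sporadic task set is schedulable by
    SM-US(sqrt(2)-1) on m processors if total utilization does not
    exceed (sqrt(2)-1)*m. Sketch of the claimed bound, not a proof."""
    return sum(utilizations) <= THRESHOLD * m

print(sm_us_schedulable([0.3, 0.3, 0.2], m=2))   # 0.8 <= 0.828... -> True
print(sm_us_schedulable([0.5, 0.5], m=1))        # 1.0 >  0.414... -> False
```

SM-US itself gives tasks with utilization above √2−1 the highest priorities and orders the rest slack-monotonically; the test above only captures the bound, not the priority assignment.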
Abstract:
In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which work collaboratively on the acquired data to perform efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. In this paper, it is proposed to extract and code binary descriptors to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits correlation using two different coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings of up to 35% without any impact on the performance of the image retrieval task. © 2014 EURASIP.
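The Intra/Inter mode decision can be illustrated with a toy cost model in which the rate of a coded descriptor grows with its number of set bits (a sparse XOR residual is cheap for an entropy coder). The paper's actual rate model and entropy coder are not reproduced here:

```python
def popcount(x):
    """Number of set bits -- our stand-in for coding cost."""
    return bin(x).count("1")

def choose_mode(descriptor, previous):
    """Pick the cheaper of the two coding modes for one binary
    descriptor (simplified sketch: cost is modelled as the number of
    set bits handed to an entropy coder)."""
    intra_cost = popcount(descriptor)             # code the raw bits
    inter_cost = popcount(descriptor ^ previous)  # code the XOR residual
    if inter_cost < intra_cost:
        return "inter", inter_cost
    return "intra", intra_cost

d_prev = 0b1111000011110000
d_curr = 0b1111000011110001   # differs from the previous descriptor in 1 bit
print(choose_mode(d_curr, d_prev))   # ('inter', 1)
```

When consecutive descriptors of an image are similar, the Inter residual is sparse and wins; an uncorrelated descriptor falls back to Intra.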
Abstract:
Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets, in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks whose inter-arrival times are described by discrete probability distribution functions, rather than by minimum inter-arrival time (MIT) values.
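The basic operation behind such stochastic analyses is the convolution of discrete probability mass functions, e.g. to obtain the distribution of the time spanned by successive arrivals. A minimal sketch:

```python
from collections import defaultdict

def convolve(pmf_a, pmf_b):
    """Convolution of two discrete probability mass functions -- the
    basic step when summing independent random task parameters in a
    stochastic response-time analysis."""
    out = defaultdict(float)
    for va, pa in pmf_a.items():
        for vb, pb in pmf_b.items():
            out[va + vb] += pa * pb
    return dict(out)

# Inter-arrival time of a stream: 4 or 6 time units with equal probability.
iat = {4: 0.5, 6: 0.5}
two_arrivals = convolve(iat, iat)     # span of two consecutive arrivals
print(two_arrivals)   # {8: 0.25, 10: 0.5, 12: 0.25}
```

Repeating the convolution (and combining it with execution-time distributions) yields the response-time distribution instead of a single worst-case number.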
Abstract:
As high dynamic range video is gaining popularity, video coding solutions able to efficiently provide both low and high dynamic range video, notably within a single bitstream, are increasingly important. While simulcasting can provide both dynamic range videos at the cost of some compression efficiency penalty, bit-depth scalable video coding can provide a better trade-off between compression efficiency, adaptation flexibility and computational complexity. Considering the widespread use of H.264/AVC video, this paper proposes an H.264/AVC backward-compatible bit-depth scalable video coding solution offering a low dynamic range base layer and two high dynamic range enhancement layers with different qualities, at low complexity. Experimental results show that the proposed solution has an acceptable rate-distortion performance penalty with regard to the HDR H.264/AVC single-layer coding solution.
Abstract:
In video communication systems, the video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, causing degradation of the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes by drawing on principles and tools from an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on the Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience performance, the main technical novelties brought by the proposed video coding solution are: (i) design of an improved compressed-domain perceptual classification mechanism; (ii) design of an improved transcoding tool for the DSC-based protection mechanism; and (iii) integration of a perceptual classification mechanism in an H.264/AVC compliant codec with a DSC-based error protection mechanism. The performance results obtained show that the proposed PDEP video codec provides a better performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
The goal of this study is the analysis of the dynamical properties of financial data series from worldwide stock market indexes during the period 2000–2009. We analyze, under a regional criterion, ten main indexes at a daily time horizon. The methods and algorithms that have been explored for the description of dynamical phenomena provide an effective background for the analysis of economic data. We start by applying the classical concepts of signal analysis, the fractional Fourier transform, and methods of fractional calculus. In a second phase we adopt the multidimensional scaling approach. Stock market indexes are examples of complex interacting systems for which a huge amount of data exists. Therefore, these indexes, viewed from different perspectives, lead to new classification patterns.
Abstract:
Though the formal mathematical idea of introducing noninteger-order derivatives can be traced back to the 17th century, to a 1695 letter in which L'Hôpital asked Leibniz what the meaning of D^n y would be if n = 1/2 [1], it was only clearly outlined in the 19th century [2, 3, 4]. Due to the lack of a clear physical interpretation, their first applications in physics appeared only later, in the 20th century, in connection with viscoelastic phenomena [5, 6]. The topic later attracted quite general attention [7, 8, 9], and also found new applications in materials science [10], analysis of earthquake signals [11], control of robots [12], the description of diffusion [13], etc.
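The half-order derivative L'Hôpital asked about can be computed numerically via the Grünwald–Letnikov definition. A rough sketch, truncating the series at the lower limit 0 and first-order accurate in the step h:

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov estimate of the order-alpha derivative of f at
    t, with lower terminal 0 (illustrative sketch; the step h controls
    the accuracy of the approximation)."""
    n = int(t / h)
    total = 0.0
    w = 1.0                          # w_k = (-1)^k * binom(alpha, k)
    for k in range(n + 1):
        total += w * f(t - k * h)
        w *= 1.0 - (alpha + 1.0) / (k + 1)   # recurrence for the weights
    return total / h ** alpha

# Known result: the half-derivative of f(t) = t is 2*sqrt(t/pi).
approx = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 2.0 * math.sqrt(1.0 / math.pi)       # ~1.1284 at t = 1
print(abs(approx - exact) < 1e-2)            # True
```

Setting alpha to 1 recovers an ordinary backward-difference first derivative, which is a quick sanity check on the weight recurrence.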
Abstract:
A robot’s drive has to exert appropriate driving forces that can keep its arm and end effector at the proper position, velocity and acceleration, and simultaneously has to compensate for the effects of the contact forces arising between the tool and the workpiece depending on the needs of the actual technological operation. Balancing the effects of a priori unknown external disturbance forces and the inaccuracies of the available dynamic model of the robot is also important. Technological tasks requiring well prescribed end effector trajectories and contact forces simultaneously are challenging control problems that can be tackled in various manners.