871 results for value stream analysis
Abstract:
In this paper, we discuss the mathematical aspects of the Heisenberg uncertainty principle within local fractional Fourier analysis. The Schrödinger equation and the Heisenberg uncertainty principle are formulated in terms of local fractional operators.
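For reference, the classical (integer-order) relation that the local fractional version generalizes can be stated as follows (a textbook statement, not quoted from the paper):

```latex
\sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2}
```

where \sigma_x and \sigma_p are the standard deviations of position and momentum for a state governed by the Schrödinger equation; the paper recasts both this inequality and the Schrödinger equation with local fractional operators in place of the integer-order ones.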
Abstract:
WiDom is a wireless prioritized medium access control protocol that offers a very large number of priority levels. It therefore brings the potential to employ non-preemptive static-priority scheduling and schedulability analysis on a wireless channel, provided that the overhead of WiDom is modeled properly. One schedulability analysis for WiDom has already been proposed, but recent research has produced a new version of WiDom (which we call slotted WiDom) with lower overhead, and for this version no schedulability analysis exists. In this paper we propose a new schedulability analysis for slotted WiDom and extend it to message streams with release jitter. We have performed experiments with an implementation of slotted WiDom on a real-world platform (MicaZ). For each message stream, the maximum observed response time never exceeds the calculated response time, which corroborates our belief that the new scheduling theory is applicable in practice.
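For context, response-time analysis of this kind typically iterates a fixed-point recurrence. The sketch below shows the classical static-priority recurrence with release jitter and blocking, not the WiDom-specific analysis from the paper (parameter names and the example values are illustrative):

```python
import math

def response_time(streams, i):
    """Classical static-priority response-time recurrence with release
    jitter and blocking (a generic sketch, not the paper's analysis):
        w = C_i + B_i + sum_{j in hp(i)} ceil((w + J_j) / T_j) * C_j
        R_i = J_i + w
    `streams` is sorted by descending priority; each entry has C
    (transmission time), T (period, deadline assumed <= T), J (release
    jitter) and B (blocking by lower-priority messages)."""
    C, B, J, T = (streams[i][k] for k in ("C", "B", "J", "T"))
    w = C
    while True:
        w_next = C + B + sum(
            math.ceil((w + s["J"]) / s["T"]) * s["C"]
            for s in streams[:i]  # higher-priority streams
        )
        if w_next == w:
            return J + w          # fixed point: response-time bound
        if J + w_next > T:
            return None           # exceeds period: deemed unschedulable
        w = w_next

msgs = [dict(C=2, T=10, J=1, B=3), dict(C=3, T=20, J=0, B=3)]
print([response_time(msgs, i) for i in range(len(msgs))])  # -> [6, 8]
```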
Abstract:
The foot and the ankle are small structures commonly affected by disorders, and their complex anatomy represents a significant diagnostic challenge. SPECT/CT image fusion can provide the anatomical and bone-structure information missing from functional imaging, which is particularly useful for increasing diagnostic certainty in bone pathology. However, owing to the duration of SPECT acquisition, a patient's involuntary movements may lead to misalignment between the SPECT and CT images. Patient motion can be reduced using a dedicated patient support. We aimed to design an ankle and foot immobilizing device and to measure its efficacy at improving image fusion. Methods: We enrolled 20 patients undergoing distal lower-limb SPECT/CT of the ankle and foot with and without a foot holder. The misalignment between SPECT and CT images was computed by manually measuring 14 fiducial markers chosen among anatomical landmarks also visible on bone scintigraphy. Analysis of variance was performed for statistical analysis. Results: The absolute average difference without and with the support was 5.1±5.2 mm (mean±SD) and 3.1±2.7 mm, respectively, a significant difference (p<0.001). Conclusion: The foot holder significantly decreases misalignment between SPECT and CT images, which may have clinical relevance for the precise localization of foot and ankle pathology.
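The misalignment metric described in the methods reduces to a per-marker distance between paired landmarks. A minimal sketch with hypothetical coordinates (the study used 14 markers per patient; three are shown for brevity):

```python
import numpy as np

# Hypothetical paired landmark coordinates (mm) for one patient:
# rows are fiducial markers, columns are (x, y, z).
spect_pts = np.array([[10.2, 34.1, 5.0], [22.8, 30.5, 7.1], [15.0, 40.2, 3.3]])
ct_pts    = np.array([[ 9.0, 33.0, 4.1], [21.5, 29.0, 6.0], [14.2, 39.0, 2.5]])

# Per-marker misalignment = Euclidean distance between paired landmarks.
misalign = np.linalg.norm(spect_pts - ct_pts, axis=1)
print(f"mean +/- SD: {misalign.mean():.1f} +/- {misalign.std(ddof=1):.1f} mm")
```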
Abstract:
The Iberian viticultural regions are organized according to the Denomination of Origin (DO) system and present different climates, soils, topography and management practices. All these elements influence the vegetative growth of the different varieties grown throughout the peninsula and are tied to grape quality and wine type. In the current study, an integrated analysis of climate, soil, topography and vegetative growth was performed for the Iberian DO regions using state-of-the-art datasets. For the climatic assessment, a categorized index accounting for phenological/thermal development, water availability and grape-ripening conditions was computed. Soil textural classes were established to distinguish soil types. Elevation and aspect (orientation) were also taken into account as the leading topographic elements. A spectral vegetation index was used to assess grapevine vegetative growth, and an integrated analysis of all variables was performed. The results showed a clear integrated climate-soil-topography influence on vine performance. Most Iberian vineyards are grown in temperate dry climates with loamy soils and present low vegetative growth. Vineyards under temperate humid conditions tend to show higher vegetative growth. Conversely, in cooler/warmer climates, lower-vigour vineyards prevail, and other factors, such as soil type and precipitation, acquire more important roles in driving vigour. Vines on the prevailing loamy soils are grown across a wide climatic diversity, suggesting that precipitation is the primary factor influencing vigour. The present assessment of terroir characteristics allows direct comparison among wine regions and may be of great value to viticulturists, particularly under a changing climate.
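The abstract does not name the spectral vegetation index used; NDVI is the standard example of such an index, and a minimal sketch of its computation is shown below (illustrative, not necessarily the study's exact index):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    A standard proxy for vegetative growth/vigour; `eps` guards against
    division by zero over non-vegetated pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectance values: dense canopy yields NDVI close to 1.
print(ndvi([0.5, 0.4], [0.05, 0.3]))  # -> [0.818..., 0.142...]
```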
Abstract:
Purpose: To compare image quality and effective dose when the 10 kVp rule is applied in manual and AEC modes in PA chest X-ray. Methods and Materials: A total of 68 images (with and without lesions) were acquired of an anthropomorphic chest phantom on a Wolverson Arcoma X-ray unit. The images were evaluated against a reference image by five radiographers, using image quality criteria and the two-alternative forced choice (2-AFC) method. The effective dose was calculated with PCXMC software from the exposure parameters and DAP. The exposure index (lgM) was recorded. Results: Exposure time decreases considerably when applying the 10 kVp rule in manual mode (50%-28%) compared to AEC mode (36%-23%). Statistical differences in effective dose between the several AEC modes were found (p=0.002). The effective dose is lowest when using only the right AEC ionization chamber. Regarding image quality, there are no statistical differences (p=0.348) between the different AEC modes for images without lesions. Using a higher kVp value, the lgM values also increase, and they showed significant statistical differences (p<0.001). The image quality scores did not present statistically significant differences (p=0.043) between manual and AEC modes for the images with lesions. Conclusion: In general, the dose is lower in manual mode. Using the right AEC ionization chamber yields the lowest effective dose in comparison with the other ionization chambers. The use of the 10 kVp rule did not affect the detectability of the lesions.
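The 10 kVp rule referred to here is the radiographic rule of thumb that raising tube voltage by 10 kVp allows the mAs to be roughly halved for a comparable detector exposure. A minimal sketch (the starting values are hypothetical, not the study's protocol):

```python
def apply_10kvp_rule(kvp, mas):
    """Rule-of-thumb form of the 10 kVp rule: +10 kVp permits roughly
    halving the mAs while keeping a comparable detector exposure
    (an illustration, not the study's exact exposure chart)."""
    return kvp + 10, mas / 2.0

kvp, mas = 110, 4.0                 # hypothetical PA chest starting point
print(apply_10kvp_rule(kvp, mas))   # -> (120, 2.0)
```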
Abstract:
Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks whose inter-arrival times are described by discrete probability distribution functions rather than by minimum inter-arrival time (MIT) values.
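The basic operation in this kind of stochastic analysis is convolving discrete probability mass functions to compose random task parameters. A minimal sketch (the PMF values are hypothetical; this is the generic operation, not the paper's full analysis):

```python
from collections import defaultdict

def convolve(pa, pb):
    """Convolution of two discrete probability mass functions, each
    given as {value: probability}: the distribution of the sum of two
    independent discrete random variables."""
    out = defaultdict(float)
    for va, qa in pa.items():
        for vb, qb in pb.items():
            out[va + vb] += qa * qb
    return dict(out)

# Hypothetical inter-arrival PMFs (time units : probability).
A = {10: 0.7, 12: 0.3}
B = {5: 0.5, 6: 0.5}
print(convolve(A, B))  # {15: 0.35, 16: 0.35, 17: 0.15, 18: 0.15}
```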
Abstract:
The latest medical diagnosis devices enable e-diagnosis, making access to these services easier, faster and available in remote areas. However, this imposes new communication and data-interchange challenges. In this paper, a new XML-based format for storing cardiac signals and related information is presented. The proposed structure encompasses data acquisition devices, patient information, data description, pathological diagnosis and waveform annotation. Compared with formats of similar purpose, several advantages arise. Besides the fully integrated data model, noteworthy features include geographical references for e-diagnosis, multi-stream data description, the ability to handle several simultaneous devices, the possibility of independent waveform annotation, and an HL7-compliant structure for common contents. These features provide enhanced integration with existing systems and improved flexibility for cardiac data representation.
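To make the idea concrete, the sketch below builds the kind of structure the paper describes. The element and attribute names here are hypothetical illustrations, not the proposed format's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML layout covering the features listed above: patient
# info with a geographical reference, acquisition device, a waveform
# stream, and an independent annotation.
record = ET.Element("cardiacRecord")
patient = ET.SubElement(record, "patient", id="P001")
ET.SubElement(patient, "location", lat="41.15", lon="-8.61")  # geo reference
ET.SubElement(record, "device", model="ECG-12L")
stream = ET.SubElement(record, "stream", lead="II", rateHz="500")
stream.text = "0.12 0.10 0.09"                 # waveform samples (excerpt)
ET.SubElement(record, "annotation", t="0.40", label="QRS onset")
print(ET.tostring(record, encoding="unicode"))
```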
Abstract:
Associations between socio-demographic factors, water contact patterns and Schistosoma mansoni infection were investigated in 506 individuals (87% of inhabitants over 1 year of age) in an endemic area of Brazil (Divino), with the aim of determining priorities for public health measures to prevent the infection. Those who eliminated S. mansoni eggs (n = 198) were compared with those without eggs in the stools (n = 308). The following explanatory variables were considered: age, sex, color, previous treatment with a schistosomicide, place of birth, quality of the house, water supply for the household, distance from the house to the stream, and frequency of and reasons for water contact. Factors found to be independently associated with the infection were age (10-19 and > 20 years old) and water contact for agricultural activities, fishing, and swimming or bathing (adjusted relative odds = 5.0, 2.4, 3.2, 2.1 and 2.0, respectively). This suggests the need for public health measures to prevent the infection, with emphasis on water contact for leisure and agricultural activities in this endemic area.
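Adjusted relative odds of this kind are typically obtained from a multivariate logistic regression, exponentiating the fitted coefficients. A minimal sketch with synthetic data (variable names and values are illustrative, not the Divino data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical binary outcome and exposures for 200 synthetic subjects.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "infected":    rng.integers(0, 2, 200),
    "age_10_19":   rng.integers(0, 2, 200),
    "agriculture": rng.integers(0, 2, 200),
    "fishing":     rng.integers(0, 2, 200),
})
X = sm.add_constant(df[["age_10_19", "agriculture", "fishing"]])
fit = sm.Logit(df["infected"], X).fit(disp=0)
print(np.exp(fit.params))  # exponentiated coefficients = adjusted odds ratios
```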
Abstract:
The mineral content (phosphorus (P), potassium (K), sodium (Na), calcium (Ca), magnesium (Mg), iron (Fe), manganese (Mn), zinc (Zn) and copper (Cu)) of eight ready-to-eat baby leaf vegetables was determined. The samples were subjected to microwave-assisted digestion, and the minerals were quantified by high-resolution continuum source atomic absorption spectrometry (HR-CS-AAS) with flame and electrothermal atomisation. The methods were optimised and validated, producing low LOQs, good repeatability and linearity, and recoveries ranging from 91% to 110% for the minerals analysed. Phosphorus was determined by a standard colorimetric method. The accuracy of the method was checked by analysing a certified reference material; the results were in agreement with the certified value. The samples had a high content of potassium and calcium, but the principal mineral was iron. The mineral content was stable during storage, and baby leaf vegetables could represent a good source of minerals in a balanced diet. A linear discriminant analysis was performed to compare the mineral profiles obtained; it showed, as expected, that the mineral content was similar between samples from the same family, and it was able to discriminate different samples based on their mineral profile.
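A minimal sketch of a linear discriminant analysis on mineral profiles of this shape, using synthetic data (sample counts, family labels and values are illustrative, not the paper's data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical mineral profiles: rows are samples, columns are the nine
# quantified minerals (P, K, Na, Ca, Mg, Fe, Mn, Zn, Cu).
X = np.random.default_rng(1).random((24, 9))
y = np.repeat(["Brassicaceae", "Asteraceae", "Amaranthaceae"], 8)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
scores = lda.transform(X)   # samples projected onto the discriminant axes
print(lda.score(X, y))      # resubstitution classification accuracy
```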
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting pixels to play the role of mixed sources is not straightforward.
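Under the linear mixing model referred to above, an observed spectral vector is commonly written as follows (standard notation, not quoted verbatim from the chapter):

```latex
\mathbf{y} \;=\; \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
\qquad \alpha_k \ge 0 \;\;(k = 1,\dots,p),
\qquad \sum_{k=1}^{p} \alpha_k = 1
```

where \mathbf{y} is the observed L-band spectrum, \mathbf{M} is the L x p matrix whose columns are the endmember signatures, \boldsymbol{\alpha} collects the abundance fractions, and \mathbf{n} is noise. The constant-sum (full additivity) constraint is precisely the source dependence that, as discussed next, compromises ICA and IFA.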
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA proceeds in two steps. First, the source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold for some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of the mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
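As a concrete instance of the constrained least-squares unmixing mentioned above, the following is a minimal fully constrained sketch (a generic formulation solved with a general-purpose optimizer, not the specific algorithm of reference 20):

```python
import numpy as np
from scipy.optimize import minimize

def fcls(y, M):
    """Fully constrained least-squares unmixing sketch: estimate
    abundances a minimizing ||y - M a||^2 subject to a >= 0 and
    sum(a) == 1 (nonnegativity and full additivity)."""
    p = M.shape[1]
    a0 = np.full(p, 1.0 / p)  # start at the simplex barycenter
    res = minimize(
        lambda a: np.sum((y - M @ a) ** 2), a0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * p,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    )
    return res.x

# Hypothetical 5-band pixel mixed from 3 endmembers.
rng = np.random.default_rng(2)
M = rng.random((5, 3))
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.01 * rng.standard_normal(5)
print(fcls(y, M).round(2))  # close to [0.6, 0.3, 0.1]
```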
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix that minimizes the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 gives a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents ICA results based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
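A minimal sketch of why a Dirichlet abundance prior is attractive here: Dirichlet draws satisfy positivity and full additivity by construction. The parameter values and scene dimensions below are hypothetical:

```python
import numpy as np

# Simulate linearly mixed pixels with Dirichlet-distributed abundances;
# every abundance row is nonnegative and sums to 1 by construction.
rng = np.random.default_rng(3)
p, n_pixels, n_bands = 3, 1000, 5

M = rng.random((n_bands, p))                        # endmember signatures
alphas = rng.dirichlet([2.0, 5.0, 3.0], n_pixels)   # abundance fractions
Y = alphas @ M.T + 0.01 * rng.standard_normal((n_pixels, n_bands))
print(alphas.sum(axis=1)[:5])                       # each row sums to 1
```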
Abstract:
This study aims to analyze which determinants predict frailty in general and in each frailty domain (physical, psychological, and social), considering the integral conceptual model of frailty, and particularly to examine the contribution of medication to this prediction. A cross-sectional study was designed using a non-probabilistic sample of 252 community-dwelling elderly people from three Portuguese cities. Frailty and determinants of frailty were assessed with the Tilburg Frailty Indicator. The amount and type of different daily-consumed medications were also examined. Hierarchical regression analyses were conducted. The mean age of the participants was 79.2 years (±7.3); most were women (75.8%), widowed (55.6%) and with a low educational level (0-4 years: 63.9%). In this study, the determinants explained 46% of the variance of total frailty, and 39.8%, 25.3%, and 27.7% of physical, psychological, and social frailty, respectively. Age, gender, income, death of a loved one in the past year, lifestyle, satisfaction with the living environment and self-reported comorbidity predicted total frailty, while each frailty domain was associated with a different set of determinants. The number of daily-consumed drugs was independently associated with physical frailty, and the consumption of medication for the cardiovascular system and for the blood and blood-forming organs explained part of the variance of total and physical frailty. The adverse effects of polymedication and its direct link with the level of comorbidity could explain the independent contribution of the amount of prescribed drugs to frailty prediction. On the other hand, the findings regarding medication type provide further evidence of the association of frailty with cardiovascular risk. In the present study, a significant part of frailty was predicted, and the different contributions of each determinant to the frailty domains highlight the relevance of the integral model of frailty. The added value of a simple assessment of medication was considerable, and it should be taken into account for the effective identification of frailty.
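Hierarchical regression of this kind enters predictor blocks in sequence and inspects the gain in explained variance. A minimal sketch with synthetic data (variable names and values are illustrative, not the study's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical blockwise regression: block 1 enters socio-demographic
# determinants, block 2 adds medication; delta R^2 measures its added value.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "frailty": rng.random(252),
    "age":     rng.integers(65, 95, 252),
    "income":  rng.random(252),
    "n_drugs": rng.integers(0, 12, 252),
})
m1 = smf.ols("frailty ~ age + income", df).fit()
m2 = smf.ols("frailty ~ age + income + n_drugs", df).fit()
print(m1.rsquared, m2.rsquared - m1.rsquared)  # R^2 and delta R^2
```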
Abstract:
Evolution: the act or effect of evolving, a sequence of transformations, progressive development. If everything around us is changing, industry must keep pace with this evolutionary process, making it essential to alter or improve production processes when they no longer fit reality, whether because the market changes, because needs change, or simply because it becomes more profitable. Galp Energia, a company always at the forefront of technological evolution, finds in the Chemical Engineering Department of the Instituto Superior de Engenharia do Porto an ally in the search for the best way to valorize its products. The Matosinhos Refinery currently has two light gasoline streams and one raffinate stream with great upgrading potential. Part of these streams is currently incorporated into the refinery's chemical naphtha pool, which is sold to Repsol Polímeros. The proposed challenge is to upgrade these streams through isomerization, increasing their RON so that they can be routed to the gasoline pool. Taking advantage of the technology available for this purpose, four candidate scenarios are presented. The first two are excluded for violating imposed constraints; the third and fourth were analyzed from an economic standpoint. The third scenario routes the light gasoline from the Aromatics Plant to the gasoline pool without any treatment, while the light gasoline from the Fuels Plant continues to feed the chemical naphtha pool. The raffinate from the Aromatics Plant is sent to a splitter: the overhead stream goes to the chemical naphtha pool, and the bottoms stream is fed to an isomerization reactor, Isomalk-4SM, first passing through a clay tower to ensure that the reactor's olefin-content constraint is not violated. The effluent, with a higher RON, also joins the gasoline pool. In the fourth scenario, the raffinate stream from the Aromatics Plant receives no treatment and continues to feed the solvents unit, the light gasoline from the Aromatics Plant goes directly to the gasoline pool, and the light gasoline from the Fuels Plant passes through the Isomalk-2SM to raise its octane number, thus ensuring it qualifies for the gasoline pool. The third scenario yields an annual revenue increase of €4,576,773, while the fourth reaches €11,333,982 per year. The total initial investment for the third scenario is €28,821,608, whereas the fourth scenario requires an initial investment of only €18,028,349. The costs associated with implementing the unit are high: €23,133,429 for the third scenario and €13,998,797 for the fourth. The fourth scenario is therefore the most profitable solution for the objective of this dissertation.
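As a rough cross-check of the figures quoted above, a simple payback comparison can be sketched (assuming the quoted annual revenue increases can be treated as net gains, and ignoring operating costs, discounting, and taxes; this is an illustration, not the dissertation's economic analysis):

```python
# Simple payback = initial investment / annual revenue increase,
# using the figures quoted in the abstract above.
scenarios = {
    "scenario 3": (28_821_608, 4_576_773),
    "scenario 4": (18_028_349, 11_333_982),
}
for name, (investment, annual_gain) in scenarios.items():
    print(f"{name}: simple payback ~ {investment / annual_gain:.1f} years")
# scenario 3: ~ 6.3 years; scenario 4: ~ 1.6 years
```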
Abstract:
Presented at the 23rd International Conference on Real-Time Networks and Systems (RTNS 2015), Main Track, 4-6 November 2015, Lille, France.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.