970 results for Univariate Analysis box-jenkins methodology
Abstract:
Industrial activity is inevitably associated with a certain degradation of environmental quality, because it is not possible to guarantee that a manufacturing process is totally innocuous. The eco-efficiency concept is globally accepted as a philosophy of enterprise management that encourages companies to become more competitive, innovative, and environmentally responsible by promoting the link between their objectives of business excellence and their objectives of environmental excellence. This link requires the creation of an organizational methodology in which the performance of the company is consistent with sustainable development. The main purpose of this project is to apply the concept of eco-efficiency to the particular case of the metallurgical and metalworking industries, through the development of the specific indicators needed, and to produce a manual of procedures for implementing the appropriate solution.
Abstract:
This paper analyzes Knowledge Management (KM) as a political activity carried out by the great political leaders of the world. We try to determine whether KM is practiced at the macro-political level, and how. The research is interesting because, given that we live in a knowledge society, in the Information Era, it is more or less obvious that political leaders should also practice KM. However, we know of no previous study on KM and world leaders, and this paper aims to be a first step towards filling that gap. As a methodology we use literature review: given that this is a first, preliminary study, we use data found on the Internet and in databases such as EBSCO. We divide the analysis into two main parts: theoretical ideas first, and an application second. The second part is itself divided into two segments: the past and the present. We find, rather unsurprisingly, that KM always was and still is pervasive in the activity of world leaders, and that it has become more and more diverse as power itself has become more and more disseminated across the world. The study has the limitation of relying on insights and texts rather than interviews. But we believe this kind of analysis is well worth making, and such studies may help improve the democracies of the world.
Abstract:
Background: Demographic aging and the expansion of life expectancy create conditions for an increased occurrence of degenerative illnesses. Several critical aspects surround medication in the elderly, such as frequent polypharmacy, with an increased risk of adverse drug reactions related to drug interactions and inappropriate prescribing, in which the benefits can be inferior to the risks. These aspects are particularly critical in the hospitalized elderly. Aim: This study aimed to estimate the prevalence of polypharmacy in hospitalized elderly patients and to analyze the medication considered inappropriate in this population. Participants and methodology: A retrospective, descriptive, cross-sectional design was followed, with data relating to a period of a year and a half and focusing on the last hospitalization. The nature of the medication was analysed according to the National Therapeutic Formulary, the Summary of Product Characteristics, and the Beers criteria (2002). The study covered 100 elderly patients (>65 years) hospitalized at Hospital Cuf Descobertas. The personal and clinical data and the corresponding pharmacotherapeutic records were entered into a database created for this study in Access 2003 SP2. Descriptive statistics were calculated with SPSS 13.0. The exploratory analysis consisted of measures of central tendency and spread for all relevant variables, together with univariate and bivariate analyses to quantify the prevalence of polypharmacy by sex and age group and to relate polypharmacy to inappropriate medication. Results: Of the patients studied (65-98 years), the majority were women; 7 presented 4 simultaneous pathologies, 13 presented 3, 27 presented 2, and 30 presented 1. In 23 patients no chronic pathology was found. Hypertension (n=49; 27.5%) and cardiovascular disease (n=41; 23%) were the most frequent conditions, while the least frequent were rheumatic disease (n=1; 0.56%) and osteoporosis and psychiatric disorders (n=2; 1.12% each). The prevalence of polypharmacy was 84%, and the number of medicines prescribed simultaneously ranged from 2 to 23. No association was observed between polypharmacy and age or gender. In only one case was a medicine identified as inappropriate for the diagnosis (metoclopramide in Parkinson's disease); independently of diagnosis, amiodarone was the most frequent (25%), followed by hydroxyzine (22%), ticlopidine (2%), and ketorolac (1%). Conclusions: Polypharmacy is very prevalent among elderly people admitted to hospital; the number of medicines involved can be high, and the prevalence of medicines requiring a risk/benefit assessment in the elderly points to the value of medication review and of implementing informative strategies in the hospital setting.
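The bivariate step described above (prevalence of polypharmacy by sex and age group) can be illustrated with a minimal pandas sketch; the records and the five-or-more-medicines cutoff below are invented for illustration, since the abstract does not state the cutoff used.

```python
import pandas as pd

# invented records: one row per patient (the study's cutoff is not stated;
# >=5 simultaneous medicines is assumed here purely for illustration)
df = pd.DataFrame({
    "sex": ["F", "F", "M", "F", "M", "M"],
    "age_group": ["65-74", "75-84", "65-74", "85+", "75-84", "85+"],
    "n_medicines": [7, 3, 12, 5, 2, 9],
})
df["polypharmacy"] = df["n_medicines"] >= 5

# prevalence of polypharmacy by sex and by age group (the bivariate analysis)
print(df.groupby("sex")["polypharmacy"].mean())
print(df.groupby("age_group")["polypharmacy"].mean())
```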
Abstract:
This paper reports on the analysis of tidal breathing patterns measured during noninvasive forced oscillation lung function tests in six individual groups. The three adult groups were healthy, with prediagnosed chronic obstructive pulmonary disease, and with prediagnosed kyphoscoliosis, respectively. The three children's groups were healthy, with prediagnosed asthma, and with prediagnosed cystic fibrosis, respectively. The analysis is applied to the pressure-volume curves and the pseudophase-plane loop by means of the box-counting method, which gives a measure of the area within each loop. The objective was to verify whether there exists a link between the area of the loops, power-law patterns, and alterations in the respiratory structure with disease. We obtained statistically significant variations between the data sets corresponding to the six groups of patients, also showing the existence of power-law patterns. Our findings support the idea that the respiratory system changes with disease in terms of airway geometry and tissue parameters, leading, in turn, to variations in the fractal dimension of the respiratory tree and its dynamics.
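The box-counting method named above is standard; a minimal sketch of a generic box-counting estimate on a synthetic loop (the clinical pressure-volume loops themselves are not reproduced here):

```python
import numpy as np

def box_count(points, eps):
    """Count boxes of side `eps` occupied by a 2-D point set."""
    boxes = np.unique(np.floor(points / eps), axis=0)
    return len(boxes)

def box_counting_dimension(points, scales):
    counts = [box_count(points, e) for e in scales]
    # slope of log N(eps) versus log(1/eps) estimates the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# hypothetical loop sampled over one breathing cycle (a Lissajous-like curve)
t = np.linspace(0, 2 * np.pi, 2000)
pv_loop = np.column_stack([np.cos(t), np.sin(t + 0.6)])
scales = np.logspace(-2, -0.5, 10)
print(box_counting_dimension(pv_loop, scales))  # close to 1 for a smooth curve
```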
Abstract:
Fractal geometry is used to model a naturally fractured reservoir, and the concept of the fractional derivative is applied to the diffusion equation to incorporate the history of fluid flow in naturally fractured reservoirs. The resulting fractally fractional diffusion (FFD) equation is solved analytically in Laplace space for three outer boundary conditions. The analytical solutions are used to analyze the response of a naturally fractured reservoir, considering the anomalous behavior of oil production. Several synthetic examples are provided to illustrate the methodology proposed in this work and to explain the diffusion process in fractally fractured systems.
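The abstract reports solutions in Laplace space; recovering time-domain responses from such solutions is commonly done with a numerical inverter. A minimal sketch of the Gaver-Stehfest algorithm, a usual choice in well-test analysis (the FFD transform itself is not reproduced, so a textbook transform pair is used as a check):

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-space function F(s).

    N must be even; N = 10-14 is a typical compromise between accuracy
    and round-off error.
    """
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * math.factorial(2 * j)
                   / (math.factorial(N // 2 - j) * math.factorial(j)
                      * math.factorial(j - 1) * math.factorial(k - j)
                      * math.factorial(2 * j - k)))
        Vk *= (-1) ** (k + N // 2)
        total += Vk * F(k * ln2 / t)
    return ln2 / t * total

# sanity check with a known pair: L{exp(-t)} = 1/(s+1)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))  # ~0.3679 = exp(-1)
```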
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located within that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20] (illustrated in the sketch below), spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
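As an illustration of the supervised linear problem above, a minimal sketch of fully constrained (nonnegative, sum-to-one) least-squares unmixing of a single pixel; the weighted row-of-ones device is one standard way to impose the sum-to-one constraint, and all data here are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(M, y, delta=1e3):
    """Fully constrained least-squares abundances for one pixel.

    M : (bands, p) endmember signature matrix; y : (bands,) pixel spectrum.
    The sum-to-one constraint is enforced by appending a heavily weighted
    row of ones; nonnegativity comes from NNLS.
    """
    M_aug = np.vstack([M, delta * np.ones(M.shape[1])])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a

# synthetic example: 3 endmembers, 50 bands
rng = np.random.default_rng(0)
M = rng.random((50, 3))
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(50)
print(fcls_unmix(M, y))  # approximately a_true
```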
In the second of these approaches, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises IFA performance, as in the ICA case.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often the processing of hyperspectral data, unmixing included, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method of reference 49 exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]; a close stand-in is sketched below.
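A minimal stand-in for the MOG fitting step, using BIC (a criterion closely related to MDL) for order selection; this is not the algorithm of reference 55, only a sketch of the idea on synthetic one-dimensional data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic 1-D data drawn from two Gaussian components
x = np.concatenate([rng.normal(-2, 0.5, 400),
                    rng.normal(1, 1.0, 600)]).reshape(-1, 1)

# fit MOGs of increasing order; BIC plays the role of MDL for model selection
models = [GaussianMixture(n_components=k, random_state=0).fit(x)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(x))
print(best.n_components, best.means_.ravel())  # should recover 2 components
```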
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme with some illustrative examples. Section 6.8 concludes with some remarks.
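To make the abundance constraints concrete, a minimal generative sketch under the linear mixing model, with synthetic signatures and noise level (the EM inference itself is not reproduced); Dirichlet-distributed abundances are nonnegative and sum to one by construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands, p = 1000, 50, 3

M = rng.random((n_bands, p))                              # endmember signatures
A = rng.dirichlet(alpha=[2.0, 3.0, 4.0], size=n_pixels)   # abundances: >=0, sum to 1
Y = A @ M.T + 0.005 * rng.standard_normal((n_pixels, n_bands))  # mixtures + noise

# both constraints hold by construction of the Dirichlet draw
print(A.min() >= 0, np.allclose(A.sum(axis=1), 1.0))
```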
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa in partial fulfilment of the requirements for the degree of Master in Computer Science
Abstract:
The characteristics of school furniture are strongly associated with the back and neck pain reported by school-aged children. In Portugal, about 60% of the adolescents involved in a recent study reported having felt back pain at least once in the previous three months. The aim of this study was to compare the dimensions of the two furniture types specified for primary schools, across 9 schools, with the anthropometric characteristics of Portuguese students, in order to evaluate the mismatch between them. The sample consisted of 432 volunteer students. Regarding the methodology, 5 anthropometric measures were gathered, as well as 5 dimensions of the school furniture. For the evaluation of classroom furniture, a (mis)match criterion equation was defined. Results indicated that there is a significant mismatch between the furniture dimensions and the anthropometric characteristics of the students.
Abstract:
Most distributed generation and smart grid research works are dedicated to the study of network operation parameters, reliability, and related topics. However, many of these works use traditional test systems such as the IEEE test systems. This work proposes a voltage magnitude study in the presence of fault conditions, considering the realistic specifications found in countries like Brazil. The methodology considers a hybrid method of fuzzy sets and Monte Carlo simulation based on fuzzy-probabilistic models, together with a remedial action algorithm based on optimal power flow. To illustrate the application of the proposed method, the paper includes a case study that considers a real 12-bus sub-transmission network.
Abstract:
Electric power networks, namely distribution networks, have been undergoing several changes in recent years due to changes in power systems operation, towards the implementation of smart grids. Several approaches to the operation of resources have been introduced, such as demand response, making use of the new capabilities of smart grids. In the initial stages of smart grid implementation, reduced amounts of data are generated, namely consumption data. The methodology proposed in the present paper makes use of demand response consumers' performance evaluation methods to determine the expected consumption for a given consumer. Potential commercial losses are then identified using monthly historical consumption data. Real consumption data are used in the case study to demonstrate the application of the proposed method.
Abstract:
Thesis presented at Faculdade de Ciências e Tecnologias, Universidade de Lisboa, to obtain the Master's degree in Conservation and Restoration of Textiles
Abstract:
Most distributed generation and smart grid research works are dedicated to studies of network operation parameters, reliability, etc. However, many of these works use traditional test systems, for instance, the IEEE test systems. This paper proposes voltage magnitude and reliability studies in the presence of fault conditions, considering realistic conditions found in countries like Brazil. The methodology considers a hybrid method of fuzzy sets and Monte Carlo simulation based on fuzzy-probabilistic models, together with a remedial action algorithm based on optimal power flow. To illustrate the application of the proposed method, the paper includes a case study that considers a real 12-bus sub-transmission network.
Abstract:
The current economic crisis has further intensified economists' concerns to identify new directions for the sustainable development of society. In this context, human capital has crystallised as the key variable of the creative economy and of the knowledge-based society. As such, we have directed the research underlying this paper towards identifying the most eloquent indicators of human capital for meeting the demands of the knowledge-based society and sustainable development, as well as towards a comprehensive analysis of human capital in the EU countries and, in particular, a comparative analysis of Romania and Portugal. The methodology used in this paper is based on interdisciplinary triangulation, involving approaches from the perspectives of human resource management, economics, and economic statistics. The research techniques used consist of content analysis and the investigation of secondary data from international organisations accredited in this field, such as the United Nations Development Programme's Human Development Reports, the World Bank's World Development Reports, the International Labour Organisation, Eurostat, and the European Commission's Eurobarometer surveys and reports on human capital. The research results emphasise both similarities and differences between the two countries under comparative analysis, as well as the main directions in which one has to invest for the development of human capital.
Abstract:
20th International Conference on Reliable Software Technologies (Ada-Europe 2015), 22-26 June 2015, Madrid, Spain.
Abstract:
Waves of globalization reflect historical technical progress and modern economic growth. The dynamics of this process are approached here using the multidimensional scaling (MDS) methodology to analyze the evolution of GDP per capita, international trade openness, life expectancy, and tertiary education enrollment in 14 countries. MDS provides the appropriate theoretical concepts and the exact mathematical tools to describe the joint evolution of these indicators of economic growth, globalization, welfare, and human development of the world economy from 1977 up to 2012. The polarization dance of the countries sheds light on the convergence paths, potential warfare, and present-day rivalries in the global geopolitical scene.
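A minimal sketch of an MDS embedding of country-by-indicator data with scikit-learn; the indicator matrix here is random, standing in for the standardized values of the four indicators named above in a single year:

```python
import numpy as np
from sklearn.manifold import MDS

# hypothetical matrix: rows = 14 countries, columns = the four standardized
# indicators (GDP per capita, trade openness, life expectancy, tertiary enrollment)
rng = np.random.default_rng(4)
X = rng.standard_normal((14, 4))

# embed the countries in 2-D so that inter-country dissimilarities are preserved;
# repeating this per year traces each country's trajectory over time
mds = MDS(n_components=2, dissimilarity="euclidean", random_state=0)
coords = mds.fit_transform(X)
print(coords.shape)  # (14, 2)
```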