939 results for process model collection
Abstract:
Dissertation submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in partial fulfillment of the requirements for the Integrated Master's degree in Industrial Management Engineering
Abstract:
We propose a 3-D gravity model for the volcanic structure of the island of Maio (Cape Verde archipelago) with the objective of answering some open questions concerning the geometry and depth of the intrusive Central Igneous Complex. A gravity survey was made covering almost the entire surface of the island. The gravity data were inverted through a non-linear 3-D approach which provided a model constructed by a random growth process. The residual Bouguer gravity field shows a single positive anomaly with an elliptic shape and a NW-SE-trending long axis. This Bouguer gravity anomaly is slightly off-centred with respect to the island, but its outline is concordant with the surface exposure of the Central Igneous Complex. The gravimetric modelling shows a high-density volume whose centre of mass is about 4500 m deep. With increasing depth, and despite the restricted gravimetric resolution, the horizontal sections of the model suggest the presence of two distinct bodies, whose relative position accounts for the elongated shape of the high positive Bouguer gravity anomaly. These bodies are interpreted as magma chambers whose coeval volcanic counterparts are no longer preserved. The orientation defined by the two bodies is similar to that of other structures known in the southern group of the Cape Verde islands, thus suggesting a possible structural control constraining the location of the plutonic intrusions.
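To make the forward problem behind this kind of inversion concrete, the short sketch below computes the vertical gravity attraction of a buried point mass, a first-order stand-in for a dense intrusive body, along a surface profile. It is a minimal illustration only: the depth, excess mass and profile are hypothetical values, and the study's actual inversion uses a non-linear random-growth parameterization that is not reproduced here.

    import numpy as np

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def point_mass_gz(x_obs, x_src=0.0, depth=4500.0, excess_mass=5e13):
        """Vertical gravity anomaly (mGal) of a buried point mass.

        x_obs : observation coordinates along a surface profile (m)
        x_src : horizontal position of the source (m)
        depth : depth of the centre of mass (m); 4500 m echoes the paper's estimate
        excess_mass : anomalous (excess) mass (kg); purely illustrative value
        """
        dx = x_obs - x_src
        r2 = dx**2 + depth**2
        gz = G * excess_mass * depth / r2**1.5  # vertical component, m/s^2
        return gz * 1e5  # convert to mGal

    profile = np.linspace(-10_000, 10_000, 201)
    anomaly = point_mass_gz(profile)
    print(f"peak anomaly ~ {anomaly.max():.2f} mGal at x = {profile[anomaly.argmax()]:.0f} m")

A real inversion repeats this kind of forward calculation for many candidate density cells and adjusts them until the predicted field matches the observed residual Bouguer anomaly.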
Abstract:
One of the most important measures to prevent wild forest fires is the use of prescribed and controlled burning, as it reduces the availability of fuel mass. The impact of these management activities on soil physical and chemical properties varies according to the type of both soil and vegetation. Decisions in forest management plans are often based on the results obtained from soil-monitoring campaigns. Those campaigns are often labour-intensive and expensive. In this paper we have successfully used the multivariate statistical technique Robust Principal Component Analysis (ROBPCA) to investigate the effectiveness of the sampling procedure for two different methodologies, in order to reflect on the possibility of simplifying and reducing the sample collection process and its auxiliary laboratory analysis work towards a cost-effective and competent forest soil characterization.
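ROBPCA itself is usually run in R (for example through the rrcov package); as a rough stand-in, the sketch below builds a robust PCA by combining a Minimum Covariance Determinant estimate of the covariance with an eigendecomposition, using scikit-learn and NumPy. The synthetic soil-property matrix and all variable names are hypothetical; this is not the authors' pipeline, only an illustration of the idea of a PCA that resists outlying samples.

    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(0)
    # hypothetical soil-property matrix: 60 samples x 5 measured properties
    X = rng.normal(size=(60, 5))
    X[:3] += 8.0  # a few outlying samples, e.g. atypical or mislabelled plots

    # robust location/scatter via the Minimum Covariance Determinant estimator
    mcd = MinCovDet(random_state=0).fit(X)
    eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # robust scores and explained-variance ratio of the first two components
    scores = (X - mcd.location_) @ eigvecs[:, :2]
    print("explained variance ratio:", eigvals[:2] / eigvals.sum())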
Abstract:
In this paper, we introduce a course that is innovative in the Portuguese context, the Master's Course in “Integrated Didactics in Mother Tongue, Maths, Natural and Social Sciences”, offered at the Lisbon School of Education, and we discuss in particular the results of the evaluation made by the students who attended the Curricular Unit - Integrated Didactics (CU-ID). This course was designed for in-service teachers of the first six years of schooling and intends to improve connections between different curriculum areas. In this paper, we begin by presenting a few general ideas about curriculum development; we then discuss the concept of integration; present the principles and objectives of the course as well as its structure; and describe the methodology used in the evaluation of the above-mentioned CU-ID. The results allow us to state that the students recognized, as positive features of the CU-ID, the simultaneous presence in all sessions of two teachers from different scientific areas, as well as the invitations issued to specialists on the subject of integration and to other teachers who already promote forms of integration in schools. As negative features, students noted the lack of an integrated purpose applying the four scientific areas of the course simultaneously, and also indicated the need to become familiar with more models of integrated education. Consequently, the suggestions for improvement derived from these negative features. The students also considered that their evaluation process was appropriate, since it focused on the design of an integrated project for one of the school years already mentioned.
Abstract:
This work deals with the numerical simulation of the air stripping process for the pre-treatment of groundwater used for human consumption. The steady-state model has an exponential solution, which is used, together with the Tau Method, to obtain a spectral approximation of the solution of the system of partial differential equations associated with the transient model.
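As a hedged illustration of why a steady-state stripping model admits an exponential solution, consider a simplified one-dimensional mass balance for the dissolved contaminant concentration C(z) along the column, with an overall mass-transfer coefficient K and equilibrium concentration C*. The symbols and the simplified form are assumptions for illustration, not the exact system solved in the paper.

    \frac{dC}{dz} = -K\,\bigl(C(z) - C^{*}\bigr), \qquad C(0) = C_{0}
    \quad\Longrightarrow\quad
    C(z) = C^{*} + \bigl(C_{0} - C^{*}\bigr)\,e^{-K z}.

The transient counterpart adds a time-derivative term, and it is that system of partial differential equations that the Tau Method approximates spectrally.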
Abstract:
Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behavior of soil-related processes and properties as well as to generate new hypotheses for future experimentation. A good model and analysis of soil property variation, allowing us to extract suitable conclusions and to estimate spatially correlated variables at unsampled locations, clearly depends on the amount and quality of data and on the robustness of the techniques and estimators. The quality of data, in turn, obviously depends on a competent data collection procedure and on capable laboratory analytical work. Following the standard soil sampling protocols available, soil samples should be collected taking into account key factors such as an appropriate spatial scale, landscape homogeneity (or heterogeneity), land colour, soil texture, slope and solar exposure. Obtaining good quality data from forest soils is predictably expensive, as it is labour-intensive and demands considerable manpower and equipment both in field work and in laboratory analysis. Moreover, the sampling scheme to be used in a forest-field data collection campaign is not simple to design, as the sampling strategies chosen depend strongly on soil taxonomy. In fact, a sampling grid cannot be followed if rocks are found at the intended collection depth, if no soil is found at all, or if large trees prevent soil collection. Considering this, a proficient design of a soil sampling campaign in a forest field is not always a simple process and sometimes represents a truly huge challenge. In this work, we present some difficulties that occurred during two experiments on forest soil conducted in order to study the spatial variation of some soil physical-chemical properties. Two different sampling protocols were considered for monitoring two types of forest soils located in NW Portugal: Umbric Regosol and Lithosol. Two different sampling tools were also used: a manual auger and a shovel. Both scenarios were analyzed, and the results achieved lead us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with established protocols, but a pre-defined grid assumption often fails when the variability of the soil property is not uniform in space. In this case, the sampling grid should be conveniently adapted from one part of the landscape to another, and this fact should be taken into consideration in any mathematical procedure.
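Purely as an illustration of the "estimating spatially correlated variables at unsampled locations" step mentioned above, the sketch below applies inverse-distance weighting to a handful of hypothetical sample points. The coordinates, values and power parameter are invented, and a study such as this one would more likely rely on geostatistical (variogram-based) estimators such as kriging.

    import numpy as np

    # hypothetical sampled locations (x, y) and a measured soil property (e.g. pH)
    pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 3.0]])
    vals = np.array([5.1, 5.6, 4.9, 5.4, 5.2])

    def idw(target, pts, vals, power=2.0, eps=1e-12):
        """Inverse-distance-weighted estimate at an unsampled location."""
        d = np.linalg.norm(pts - target, axis=1)
        if d.min() < eps:            # target coincides with a sample point
            return float(vals[d.argmin()])
        w = 1.0 / d**power
        return float(np.sum(w * vals) / np.sum(w))

    print(idw(np.array([4.0, 6.0]), pts, vals))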
Abstract:
Solution enthalpies of 18-crown-6 have been obtained for a set of 14 protic and aprotic solvents at 298.15 K. The complementary use of Solomonov's methodology and a QSPR-based approach allowed the identification of the most significant solvent descriptors that model the interaction enthalpy contribution of the solution process, ΔH_int(A/S). Results were compared with data previously obtained for 1,4-dioxane. Although the interaction enthalpies of 18-crown-6 correlate well with those of 1,4-dioxane, the magnitude of the most relevant parameters, π* and β, is almost three times higher for 18-crown-6. This is rationalized in terms of the impact of the solute's volume on the solution processes of both compounds.
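The QSPR step described above amounts to a multiple linear regression of the interaction enthalpy on solvent descriptors. The sketch below shows such a fit with NumPy for a small, entirely hypothetical data set using Kamlet-Taft-style descriptors (π*, β, α); the numbers are invented and the descriptor set is an assumption, so it only illustrates how coefficient magnitudes such as those of π* and β are obtained.

    import numpy as np

    # hypothetical table: one row per solvent -> [pi*, beta, alpha]; target = ΔH_int (kJ/mol)
    descriptors = np.array([
        [0.54, 0.00, 0.00],
        [0.75, 0.37, 0.00],
        [0.60, 0.45, 0.83],
        [1.00, 0.76, 0.00],
        [0.71, 0.40, 0.62],
        [0.87, 0.69, 0.19],
    ])
    dH_int = np.array([-12.1, -20.5, -24.8, -28.9, -23.0, -27.5])

    # least-squares fit: ΔH_int ≈ c0 + c1*pi* + c2*beta + c3*alpha
    X = np.column_stack([np.ones(len(dH_int)), descriptors])
    coef, *_ = np.linalg.lstsq(X, dH_int, rcond=None)
    print("intercept and coefficients (pi*, beta, alpha):", np.round(coef, 2))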
Abstract:
Fractal geometry is used to model a naturally fractured reservoir, and the concept of the fractional derivative is applied to the diffusion equation to incorporate the history of fluid flow in naturally fractured reservoirs. The resulting fractally fractional diffusion (FFD) equation is solved analytically in the Laplace space for three outer boundary conditions. The analytical solutions are used to analyze the response of a naturally fractured reservoir considering the anomalous behavior of oil production. Several synthetic examples are provided to illustrate the methodology proposed in this work and to explain the diffusion process in fractally fractured systems.
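For readers unfamiliar with the fractional-derivative ingredient, the Caputo definition commonly used in this context, together with a generic time-fractional diffusion equation built from it, is written below. The specific FFD equation of the paper additionally involves the fractal (non-integer) dimension of the fracture network and is not reproduced here.

    D_{t}^{\gamma} p(t) \;=\; \frac{1}{\Gamma(1-\gamma)} \int_{0}^{t} \frac{p'(\tau)}{(t-\tau)^{\gamma}}\, d\tau, \qquad 0 < \gamma < 1,

    \frac{\partial^{\gamma} p}{\partial t^{\gamma}} \;=\; \eta\, \nabla^{2} p .

The memory kernel (t - τ)^{-γ} is what encodes the history of fluid flow mentioned in the abstract: the pressure change at time t depends on the whole past of the flow, not only on its current state.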
Abstract:
Presentation given at the 8º Congresso Nacional de Administração Pública - Desafios e Soluções, held in Carcavelos, 21-22 November 2011.
Abstract:
Master's degree in Civil Engineering - Structures branch
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4-8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Approaches based on the maximum a posteriori probability (MAP) framework [25] and on projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31-38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40-43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of the purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52-54], abundance constraints, topography modulation, and system noise. The computation of the mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
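As a concrete, hedged illustration of the linear mixing model and of constrained least-squares unmixing discussed in this chapter, the sketch below simulates pixels as nonnegative, sum-to-one (Dirichlet) combinations of known endmember signatures plus noise, and recovers the abundances with nonnegative least squares followed by renormalization. The signatures, noise level and sizes are synthetic placeholders; this is not the chapter's own algorithm.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    bands, endmembers, pixels = 50, 3, 200

    # synthetic endmember signatures (bands x endmembers), strictly nonnegative
    M = np.abs(rng.normal(1.0, 0.3, size=(bands, endmembers)))
    # abundances drawn from a Dirichlet density: nonnegative and summing to one
    A = rng.dirichlet(alpha=np.ones(endmembers), size=pixels)      # pixels x endmembers
    Y = A @ M.T + rng.normal(0.0, 0.01, size=(pixels, bands))      # observed spectra

    # unmix each pixel: nonnegative least squares, then renormalize to sum to one
    A_hat = np.array([nnls(M, y)[0] for y in Y])
    A_hat /= A_hat.sum(axis=1, keepdims=True)

    print("mean abs abundance error:", np.mean(np.abs(A_hat - A)))

When the number of endmembers or the signatures in M are unknown, this supervised step is no longer available, which is precisely the situation that motivates the ICA/IFA analysis and the Dirichlet-mixture approach sketched in the chapter.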
Abstract:
Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image. This paper introduces a new unmixing method termed dependent component analysis (DECA). This method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
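For reference, the Dirichlet density and the finite mixture of Dirichlet densities used to model an abundance vector a = (a_1, ..., a_p), with a_j >= 0 and sum a_j = 1, can be written as below; the symbols follow common usage rather than the paper's exact notation.

    \mathcal{D}(\mathbf{a}\mid\boldsymbol{\theta}) \;=\; \frac{\Gamma\!\bigl(\sum_{j=1}^{p}\theta_{j}\bigr)}{\prod_{j=1}^{p}\Gamma(\theta_{j})}\;\prod_{j=1}^{p} a_{j}^{\,\theta_{j}-1},
    \qquad
    p(\mathbf{a}) \;=\; \sum_{k=1}^{K}\epsilon_{k}\,\mathcal{D}(\mathbf{a}\mid\boldsymbol{\theta}_{k}),
    \qquad \epsilon_{k}\ge 0,\;\; \sum_{k=1}^{K}\epsilon_{k}=1 .

Because the Dirichlet density is supported on the probability simplex, the non-negativity and constant-sum constraints mentioned in the abstract are built into the model rather than imposed afterwards.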
Abstract:
RESUMO - This research seeks to describe and understand how strategy influences leadership and how leadership, in turn, interacts with the processes of innovation and change in health organizations. No previous studies on this research problem and its theoretical framing are known in Portugal. This is an exploratory and descriptive study involving 5 health organizations, 4 Portuguese and 1 Spanish, comprising 4 hospitals (two of them private) and one local health unit. A mixed research approach (qualitative and quantitative) was used, which made it possible to understand, through case study, how strategy, leadership and innovation are articulated in these five health organizations. The results of the empirical study came from data collected through direct and structured observation, interviews with key actors, paper and digital documents, and a self-administered questionnaire applied to a sample (n=165) of line and staff actors (Administrators, Service/Department Directors, Head Nurses and Technical Coordinators) of the five health organizations. Both the Miles & Snow model (organizational strategy) and Quinn's competing values model (organizational culture and leadership), suitably adapted, prove heuristic and show that they can be applied to health organizations despite their complexity and specificity. Public sector organizations, private sector organizations and concessioned public organizations (public-private partnerships) can all be followed and monitored in their innovation and change processes, associated with the types of organizational culture, leadership or strategy adopted. Health organizations coexist on a continuum in which the environment (both internal and external) and time are decisive factors conditioning the strategy to adopt. Here too, given the dynamic and complex reality in which the organization moves, there are no pure typologies; rather, there is great organizational plasticity and flexibility. Leaders usually exercise formal authority through normative circulars. They are not peers (nor primi inter pares) and sometimes place themselves in a position of superiority, when the most appropriate course would be a relationship of partnership, cooperation and consensus-seeking with all collaborators, so that these become the true protagonists and facilitators of change and innovation. As factors facilitating innovation and change, we found the following in the health organizations studied: ease of learning; an adequate vision/mission; and absence of fear of failure. As inhibiting factors: lack of articulation between services/departments; organizational structure (highly vertical in the public sector and more horizontal in the private sector); resistance to change; lack of time; and failures in reaction time (the window for decision-making is sometimes exceeded). -------- ABSTRACT - The present research seeks to describe and understand how strategy influences leadership and how leadership in turn interacts with the processes of innovation and change in health organizations. To our knowledge, there are no previous studies in Portugal on this research problem and its theoretical framing. This is an exploratory and descriptive study that involved 5 health organizations, 4 Portuguese and 1 Spanish.
We used a mixed research approach (qualitative and quantitative), which enabled us to understand, through case study, how strategy and leadership were articulated with innovation in these five health organizations. The results of the empirical study came from data collection through direct observation, interviews with key actors, documents, and a survey questionnaire answered by 165 line and staff participants (Administrators, Service/Department Directors, Head Nurses and Technical Coordinators) of the five health organizations. Despite their complexity and specificity, both the model of Miles & Snow (organizational strategy) and Quinn's Competing Values Framework (organizational culture and leadership), suitably adapted, have proven their heuristic power and can be applied to healthcare organizations. Public sector, private sector and concessioned public organizations (public-private partnerships) can all be tracked and monitored in their processes of innovation and change, in relation to the type of culture, leadership or organizational strategy adopted. Health organizations coexist on a continuum, where the environment (internal and external) and time are key factors that determine the strategy to adopt. Here too, given the dynamic and complex reality in which the organization moves, there are no pure types. There is, rather, great organizational plasticity and flexibility. Leaders usually exercise formal authority through normative circulars. They are not peers (nor primi inter pares); instead, they sometimes place themselves in a position of superiority, when partnership, cooperation and consensus-building with all stakeholders would be more appropriate, so that the latter become the real protagonists and facilitators of change and innovation. As factors that facilitate innovation and change, we found the following in the health organizations studied: ease of learning; an appropriate vision/mission; and absence of fear of failure. As inhibiting factors: lack of coordination between services/departments; organizational structure (highly vertical in the public sector and more horizontal in the private sector); resistance to change; lack of time; and failure in reaction time (the window for decision-making is sometimes exceeded).
Abstract:
Order picking consists of retrieving products from storage locations to satisfy independent orders from multiple customers. It is generally recognized as one of the most significant activities in a warehouse (Koster et al, 2007). In fact, order picking accounts for up to 50% (Frazelle, 2001) or even 80% (Van den Berg, 1999) of the total warehouse operating costs. The critical issue in today's business environment is to simultaneously reduce the cost and increase the speed of order picking. In this paper, we address the order picking process in one of the largest Portuguese companies in the grocery business. This problem was proposed at the 92nd European Study Group with Industry (ESGI92). In this setting, each operator steers a trolley on the shop floor in order to select items for multiple customers. The objective is to improve the company's grocery e-commerce and bring it up to the level of the best international practices. In particular, the company wants to improve the routing tasks in order to decrease travelled distances. For this purpose, a mathematical model for faster open-shop picking was developed. In this paper, we describe the problem and our proposed solution, as well as some preliminary results and conclusions.
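The paper's own mathematical model is not reproduced here; as a hedged illustration of the kind of routing improvement at stake, the sketch below compares a naive pick sequence with a nearest-neighbour ordering of pick locations on a shop floor, using made-up coordinates and Euclidean distances (real warehouses would use aisle-constrained distances and a proper optimization model).

    import numpy as np

    rng = np.random.default_rng(2)
    depot = np.array([0.0, 0.0])
    picks = rng.uniform(0, 100, size=(12, 2))   # hypothetical pick locations (m)

    def route_length(order):
        """Total walked distance: depot -> picks in the given order -> depot."""
        path = np.vstack([depot, picks[order], depot])
        return float(np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1)))

    def nearest_neighbour():
        """Greedy tour: always walk to the closest remaining pick location."""
        remaining, order, current = list(range(len(picks))), [], depot
        while remaining:
            nxt = min(remaining, key=lambda i: np.linalg.norm(picks[i] - current))
            order.append(nxt)
            remaining.remove(nxt)
            current = picks[nxt]
        return order

    naive = list(range(len(picks)))              # visit picks in list order
    print("naive route length:            ", round(route_length(naive), 1), "m")
    print("nearest-neighbour route length:", round(route_length(nearest_neighbour()), 1), "m")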
Abstract:
Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science