980 results for Source wavelet estimation
Abstract:
This paper addresses the estimation of surfaces from a set of 3D points using the unified framework described in [1]. This framework proposes the use of competitive learning for curve estimation, i.e., a set of points is defined on a deformable curve and they all compete to represent the available data. This paper extends the unified framework to surface estimation. It is shown that competitive learning performs better than snakes, improving model performance in the presence of concavities and allowing close surfaces to be discriminated. The proposed model is evaluated using synthetic data and medical images (MRI and ultrasound).
Abstract:
Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean-squared-error-based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices; it then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.
Abstract:
As is widely known, in structural dynamic applications, ranging from structural coupling to model updating, the incompatibility between measured and simulated data is inevitable, due to the problem of coordinate incompleteness. Usually, the experimental data from conventional vibration testing are collected at a few translational degrees of freedom (DOF), due to forces applied with hammer or shaker exciters, over a limited frequency range. Hence, one can only measure a portion of the receptance matrix: a few columns, related to the forced DOFs, and a few rows, related to the measured DOFs. In contrast, by finite element modeling one can obtain a full data set, both in terms of DOFs and identified modes. Over the years, several model reduction techniques have been proposed, as well as data expansion ones. However, the latter are significantly fewer, and the demand for efficient techniques remains. In this work, a technique is proposed for expanding measured frequency response functions (FRFs) over the entire set of DOFs. This technique is based upon a modified Kidder's method and the principle of reciprocity, and it avoids the need for modal identification, as it uses the measured FRFs directly. In order to illustrate the performance of the proposed technique, a set of simulated experimental translational FRFs is taken as reference to estimate rotational FRFs, including those due to applied moments.
Abstract:
Given a hyperspectral image, determining the number of endmembers and the subspace in which they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum-mean-squared-error-based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squared-error sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
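The subspace-identification idea described above can be sketched in a few lines of NumPy. This is not the authors' implementation: the function names are mine, the regression-based noise estimate is simplified, and the selection rule (keep an eigen-direction when its signal power exceeds twice the noise power along it) is a simplified stand-in for the minimum-error criterion.

```python
import numpy as np

def estimate_noise(Y):
    """Estimate the noise in each band by multiple regression of that
    band on all remaining bands (a simplified HySime-style step)."""
    L, N = Y.shape
    W = np.zeros_like(Y)
    for i in range(L):
        rest = np.delete(Y, i, axis=0)                 # all bands but i
        beta, *_ = np.linalg.lstsq(rest.T, Y[i], rcond=None)
        W[i] = Y[i] - rest.T @ beta                    # regression residual
    return W

def signal_subspace(Y, W):
    """Keep the eigenvectors of the signal correlation matrix whose
    signal power exceeds the noise power along them."""
    N = Y.shape[1]
    Rn = W @ W.T / N                                   # noise correlation
    Rs = Y @ Y.T / N - Rn                              # signal correlation (approx.)
    vals, vecs = np.linalg.eigh(Rs)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    noise_power = np.einsum('ji,jk,ki->i', vecs, Rn, vecs)  # diag(V^T Rn V)
    keep = vals > 2 * noise_power                      # MSE-style trade-off
    return vecs[:, keep], int(keep.sum())

# Synthetic check: 3 endmembers in 10 bands plus low additive noise.
rng = np.random.default_rng(0)
M = rng.random((10, 3))                                # endmember signatures
A = rng.dirichlet(np.ones(3), size=2000).T             # abundance fractions
Y = M @ A + 0.01 * rng.standard_normal((10, 2000))
E, k_hat = signal_subspace(Y, estimate_noise(Y))
```

On such well-conditioned synthetic data the estimated dimension `k_hat` matches the number of endmembers used to generate the mixture.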
Abstract:
In hyperspectral imagery, a pixel typically consists of a mixture of the spectral signatures of reference substances, also called endmembers. Linear spectral mixture analysis, or linear unmixing, aims at estimating the number of endmembers, their spectral signatures, and their abundance fractions. This paper proposes a framework for hyperspectral unmixing. A blind method (SISAL) is used for the estimation of the unknown endmember signatures and their abundance fractions. This method solves a non-convex problem by a sequence of augmented Lagrangian optimizations, where the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The proposed framework simultaneously estimates the number of endmembers present in the hyperspectral image by an algorithm based on the minimum description length (MDL) principle. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed algorithm.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
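To make the "known endmembers" case above concrete, here is a small sketch of constrained least-squares unmixing. The function name `fcls` and the setup are mine; the sum-to-one constraint is handled with the common row-augmentation trick (append a heavily weighted all-ones row to the endmember matrix) on top of SciPy's nonnegative least squares.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(y, M, delta=1e3):
    """Fully constrained least squares: nonnegative abundances that
    (approximately) sum to one, via the usual row-augmentation trick."""
    p = M.shape[1]
    M_aug = np.vstack([delta * np.ones((1, p)), M])   # enforces sum(a) = 1
    y_aug = np.concatenate([[delta], y])
    a, _ = nnls(M_aug, y_aug)                         # enforces a >= 0
    return a

# Noiseless sanity check with known endmember signatures.
rng = np.random.default_rng(0)
M = rng.random((8, 3))                    # 8 bands, 3 endmembers
a_true = np.array([0.2, 0.5, 0.3])
a_hat = fcls(M @ a_true, M)
```

With noiseless data the recovered abundances match the true ones; with noisy pixels the same call returns the constrained least-squares estimate.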
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
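The skewer-counting step of PPI can be sketched as follows. This is a toy illustration with names of my own choosing, and it omits the MNF preprocessing mentioned above: pixels are projected onto random directions, and the extremes along each direction accumulate purity scores.

```python
import numpy as np

def ppi_scores(Y, n_skewers=500, seed=0):
    """PPI sketch: project all pixels onto random 'skewers' and count
    how often each pixel is an extreme of the projection."""
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    scores = np.zeros(N, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(L)
        proj = skewer @ Y                  # one projected value per pixel
        scores[proj.argmin()] += 1         # extremes along this skewer
        scores[proj.argmax()] += 1
    return scores

# Data whose first three pixels are pure; the rest are strict mixtures.
rng = np.random.default_rng(1)
M = rng.random((5, 3))                     # 3 endmembers in 5 bands
mix = M @ rng.dirichlet(5 * np.ones(3), size=200).T
Y = np.hstack([M, mix])
scores = ppi_scores(Y)
```

Because every strict mixture projects strictly inside the segment spanned by the vertices, only the three pure pixels ever collect scores in this example.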
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
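The iterative projection loop just described can be sketched as follows. This is a toy version under my own naming, not the actual VCA implementation (which uses a more careful SNR-dependent projection): each iteration draws a direction orthogonal to the span of the endmembers found so far and takes the pixel with the extreme projection as the next endmember.

```python
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Toy projection loop: project the data onto a random direction
    orthogonal to the span of the endmembers found so far and take the
    pixel with the largest |projection| as the next endmember."""
    rng = np.random.default_rng(seed)
    L = Y.shape[0]
    idx = []
    for _ in range(p):
        f = rng.standard_normal(L)
        if idx:
            Q, _ = np.linalg.qr(Y[:, idx])   # orthonormal basis of found span
            f = f - Q @ (Q.T @ f)            # make f orthogonal to that span
        idx.append(int(np.abs(f @ Y).argmax()))
    return idx

# Same kind of setup: the first three pixels are the pure endmembers.
rng = np.random.default_rng(2)
M = rng.random((5, 3))                       # 3 endmembers in 5 bands
mix = M @ rng.dirichlet(5 * np.ones(3), size=200).T
Y = np.hstack([M, mix])
found = extract_endmembers(Y, 3)
```

Since the extreme of a linear functional over a simplex is attained at a vertex, the loop recovers exactly the three pure pixels here.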
Abstract:
In order to assess the importance of chickens as a natural reservoir of Campylobacter lari in Iquitos, Peru, samples were obtained by cloacal swabs from 200 chickens and immediately placed into a semisolid enrichment medium; these were then streaked on modified Skirrow agar. The organism was isolated from 21 (10.5%) samples, 58.8% corresponding to biovar I and 41.2% to biovar II (Lior scheme). The results provide evidence that chickens appear to be prominent reservoirs of Campylobacter lari in Iquitos.
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems
Abstract:
Apart from cryptococcosis and histoplasmosis, which are mycoses contained by T cell-mediated mechanisms of host defense, fungemia is rarely found in AIDS patients. The frequency of fungemia due to Candida spp. has been reported to be as low as 1%. We report a non-neutropenic AIDS patient who presented with candidemia that probably arose from her gastrointestinal tract.
Abstract:
ABSTRACT - Exposure to formaldehyde is recognized as one of the most important risk factors present in hospital anatomy and pathology laboratories. In this occupational setting, formaldehyde is used in solution, commonly called formalin. This is a commercial formaldehyde solution, usually diluted to 10%; it is inexpensive and, for that reason, the choice for routine work in anatomic pathology. The solution is used as a fixative and preservative of biological material, so the anatomic specimens to be processed are impregnated with it beforehand. Regarding the health effects of formaldehyde, local effects appear to play a more important role than systemic effects, owing to its reactivity and rapid metabolism in the cells of the skin, gastrointestinal tract, and lungs. Likewise, the location of lesions corresponds mainly to the areas exposed to the highest doses of this chemical agent; that is, the development of toxic effects depends more on the intensity of the external dose than on the duration of exposure. The most easily detectable effect of formaldehyde on the human body is its transient and reversible irritant action on the mucous membranes of the eyes and upper respiratory tract (naso- and oropharynx), which generally occurs with frequent exposures above 1 ppm. High doses are cytotoxic and can lead to degeneration and necrosis of mucous membranes and epithelia. Regarding carcinogenic effects, the first assessment by the International Agency for Research on Cancer dates from 1981, updated in 1982, 1987, 1995, and 2004, classifying formaldehyde as a Group 2A agent (probably carcinogenic to humans). However, the most recent evaluation, in 2006, places formaldehyde in Group 1 (carcinogenic to humans), based on evidence that exposure to this agent is likely to cause nasopharyngeal cancer in humans.
The main objective of this study was to characterize occupational exposure to formaldehyde in Portuguese hospital anatomy and pathology laboratories. It also aimed to describe the phenomena of environmental contamination by formaldehyde and to explore possible associations between variables. A sample of 10 hospital anatomy and pathology laboratories was considered; the exposure of the three professional groups was assessed against the two exposure references, and the ceiling concentration values were determined in 83 activities. Two distinct environmental assessment methods were applied simultaneously: one method (Method 1) used direct-reading equipment based on photoionization detection, with an 11.7 eV lamp, while the activity was recorded at the same time. This method provided data for the ceiling-concentration exposure reference. The other method (Method 2) applied NIOSH method 2541, using low-flow electric sampling pumps with subsequent analytical processing of the samples by gas chromatography. This method, in turn, provided data for the time-weighted average concentration exposure reference. The measurement strategies of each method and the definition of the exposure groups in this occupational setting, namely the pathology technicians, the pathologists, and the auxiliary staff, were made possible by the information provided by the activity-observation techniques of (ergonomic) work analysis. Several independent variables were studied, namely ambient temperature and relative humidity, the formaldehyde solution used, the existing ventilation conditions, and the mean number of specimens processed per day in each laboratory.
To collect information on these variables, an observation and recording grid was completed during the stays in the laboratories studied. Three indicators of environmental contamination were selected as dependent variables, namely the mean value of concentrations above 0.3 ppm in each laboratory, the time-weighted average concentration obtained for each exposure group, and the regeneration-time index of each laboratory. The indicators were calculated and defined from the data obtained by the two environmental assessment methods applied. Based on an approach outlined by the University of Queensland, a methodology for assessing the risk of nasopharyngeal cancer was also applied to the 83 activities studied in order to define semi-quantitative risk-estimation levels. For the severity level, information available in the scientific literature was considered, defining adverse biological events related to the chemical agent's mode of action and associating them with environmental formaldehyde concentrations. For the probability level, the information provided by the (ergonomic) work analysis was used, which made it possible to know the frequency of each of the activities studied. The simultaneous application of the two environmental assessment methods yielded distinct but not contradictory results regarding the assessment of occupational exposure to formaldehyde. For the activities studied (n=83), about 93% of the values were above the exposure limit value defined in Portugal for the ceiling concentration (VLE-CM=0.3 ppm). The "macroscopic examination" was the most studied activity and showed the highest prevalence of results above the limit value (92.8%). The highest mean ceiling concentration (2.04 ppm) was found in the pathology technicians exposure group.
However, the widest range of results was observed in the pathologists group (0.21 ppm to 5.02 ppm). Regarding the time-weighted average concentration reference, all values obtained in the 10 laboratories studied for the three exposure groups were below the exposure limit value defined by the Occupational Safety and Health Administration (TLV-TWA=0.75 ppm). A statistically significant association was found between the mean number of specimens processed per laboratory and two of the three environmental contamination indicators used, namely the mean value of concentrations above 0.3 ppm (p=0.009) and the regeneration-time index (p=0.001). No statistically significant association was observed between ambient temperature and any of the environmental contamination indicators used. Relative humidity showed a statistically significant association only with the time-weighted average concentration indicator of two exposure groups, namely the pathologists (p=0.02) and the pathology technicians (p=0.04). The application of the risk assessment methodology to the 83 activities studied showed that in 35% of them the risk was classified as (at least) high, and that 70% of the laboratories had at least one activity classified as high risk. From the application of the two environmental assessment methods and the information obtained for the two exposure references, it can be concluded that the most appropriate reference is the ceiling concentration, because it is associated with the chemical agent's mode of action.
Furthermore, an environmental assessment method such as Method 1, which allows formaldehyde concentrations to be studied while the activity is simultaneously recorded, provides information relevant to preventive intervention, since it identifies the activities with the highest exposure as well as the variables that condition it. Anatomic specimens proved to be the main source of environmental formaldehyde contamination in this occupational setting. This is of particular interest because the work carried out in this setting, especially in the specimen reception room, is centered on the processing of anatomic specimens. Since the elimination of formaldehyde is not foreseeable in the short term, given the large number of activities that still involve the use of its commercial solution (formalin), it can be concluded that exposure to this agent in this specific occupational setting is worrying, requiring prompt intervention in order to minimize exposure and prevent potential health effects in the exposed workers.
Abstract:
We present a case of prenatal diagnosis of congenital rubella. After birth, in addition to traditional serologic and clinical examinations to confirm the infection, we could identify the virus in the "first fluid aspirated from the oropharynx of the newborn", using the polymerase chain reaction (PCR). We propose that this first oropharyngeal fluid (collected routinely immediately after birth) could be used as a source for the identification of various congenital infection agents, which may not always be easily identified by current methods.
Abstract:
Presented at Faculdade de Ciências e Tecnologias, Universidade de Lisboa, to obtain the Master's Degree in Conservation and Restoration of Textiles
Abstract:
The mathematical model of a real system provides knowledge of its dynamic behavior and is commonly used in engineering problems. Sometimes the parameters used by the model are unknown or imprecise. Aging and material wear are factors to take into account, as they can cause changes in the behavior of the real system, and a new estimation of its parameters may become necessary. To solve this problem, software developed by MathWorks, namely Matlab and Simulink, is used together with the Arduino platform, whose hardware is open source. From data obtained from the real system, curve fitting by the least squares method is applied in order to bring the simulated model closer to the model of the real system. The developed system allows new parameter values to be obtained, in a simple and effective way, with a view to a better approximation of the real system under study. The solution found is validated using different input signals applied to the system, and its results are compared with the results of the newly obtained model. The performance of the solution is evaluated through the sum of squared errors between results obtained by simulation and results obtained experimentally from the real system.
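The least-squares parameter re-estimation described above can be illustrated with a short Python sketch. The thesis uses Matlab/Simulink with an Arduino, which is not reproduced here; the first-order model, its parameter values, and the simulated measurements below are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, K, tau):
    """Step response of a first-order system K / (tau*s + 1)."""
    return K * (1.0 - np.exp(-t / tau))

# Hypothetical measured data: true K=2.0, tau=0.5, plus sensor noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 3, 200)
y_meas = step_response(t, 2.0, 0.5) + 0.01 * rng.standard_normal(t.size)

# Least-squares fit of the model parameters to the measurements.
(K_hat, tau_hat), _ = curve_fit(step_response, t, y_meas, p0=(1.0, 1.0))

# Sum of squared errors between model and measurements, used here
# (as in the text) to evaluate the quality of the identified model.
sse = np.sum((step_response(t, K_hat, tau_hat) - y_meas) ** 2)
```

Re-running the fit on fresh measurements is exactly the kind of parameter re-estimation needed when aging or wear changes the real system's behavior.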
Abstract:
Dissertation for obtaining the degree of Master in Mathematics and Applications, specialization in Actuarial Science, Statistics and Operations Research
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Doctor in Information Management