931 results for Module average case analysis
Abstract:
OBJECTIVE: To evaluate the Brazilian version of the WHOQOL-OLD Module and to test potential changes to the instrument to increase its psychometric adequacy. METHODS: A total of 424 older adults living in a city in Southern Brazil completed the WHOQOL-OLD instrument in 2005. Rasch analysis, as implemented in the RUMM2020 software, was used to explore the psychometric performance of the scale. Item-trait interaction, disordered thresholds, the presence of differential item functioning, and item fit were analyzed. RESULTS: Two ("death and dying" and "sensory abilities") of the six domains showed inadequate item-trait interactions. Rescoring the response scale and deleting the worst-performing items improved the scale. The evaluation of domains and items individually showed that the "intimacy" domain performs well, in contrast to the findings obtained using the classical approach. In addition, the "sensory abilities" domain does not yield an interval measure in its current format. CONCLUSIONS: Unidimensionality and local independence were seen in all domains. Changes in the response scale and deletion of problematic items improved the scale's performance.
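As background only (not part of the study's abstract), the dichotomous Rasch model that this family of analyses builds on is sketched below; RUMM2020 actually fits polytomous extensions of it (rating-scale/partial-credit models) for Likert-type items such as those in the WHOQOL-OLD.

```latex
% Dichotomous Rasch model: probability that person n, with ability \theta_n,
% scores 1 on item i, with difficulty \delta_i.
P(X_{ni} = 1 \mid \theta_n, \delta_i)
  = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}}
```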
Abstract:
"Many-core” systems based on the Network-on- Chip (NoC) architecture have brought into the fore-front various opportunities and challenges for the deployment of real-time systems. Such real-time systems need timing guarantees to be fulfilled. Therefore, calculating upper-bounds on the end-to-end communication delay between system components is of primary interest. In this work, we identify the limitations of an existing approach proposed by [1] and propose different techniques to overcome these limitations.
Abstract:
The clothing sector in Portugal is still seen, in many respects, as a traditional sector with some characteristic features: a low level of qualifications, less flexible labour legislation and stronger unionisation, very low salaries, and a low capacity for investment in innovation and new technology. It is, nevertheless, a very important sector in terms of the labour market, with increasing weight in the export structure. Globalisation and delocalisation are having a strong impact on the organisation of work and on occupational careers in the sector. Under the pressure of global competitiveness regarding time and prices, very few companies are able to keep a position in the market without changes in the organisation of work and of the workforce, and those that respond well to such challenges achieve better economic stability. Companies have found different ways to face this reality according to their size, capital and market position. Two main paths can be identified: in one, companies outsource part or all of the production to another territory (for example, several manufacturing tasks), close down and/or dismiss workers; in the other, companies upgrade their capabilities by investing, for example, in design, worker training, and the conception and introduction of new or original products. This paper presents some results from the European project WORKS – Work organisation and restructuring in the knowledge society (6th Framework Programme), focusing on the Portuguese case studies of several clothing companies and on the implications of the global context for the companies in general and for the workers in particular, in a comparative analysis with other European countries.
Abstract:
OBJECTIVE To analyze the effect of air pollution and temperature on mortality due to cardiovascular and respiratory diseases. METHODS We evaluated the isolated and synergistic effects of temperature and particulate matter with aerodynamic diameter < 10 µm (PM10) on the mortality of individuals > 40 years old due to cardiovascular disease and of individuals > 60 years old due to respiratory diseases in Sao Paulo, SP, Southeastern Brazil, between 1998 and 2008. Three methodologies were used to evaluate the isolated associations: time-series analysis using a Poisson regression model, bidirectional case-crossover analysis matched by period, and case-crossover analysis matched by the confounding factor, i.e., average temperature or pollutant concentration. The synergistic effect of the risk factors was evaluated by interpreting the graphical representation of the response surface generated by the interaction term between these factors added to the Poisson regression model. RESULTS No differences were observed between the results of the case-crossover and time-series analyses. The percentage change in the relative risk of cardiovascular and respiratory mortality was 0.85% (0.45;1.25) and 1.60% (0.74;2.46), respectively, for an increase of 10 μg/m³ in the PM10 concentration. The correlation of temperature with cardiovascular mortality was U-shaped and with respiratory mortality was J-shaped, indicating an increased relative risk at high temperatures. The values of the interaction term indicated a higher relative risk of cardiovascular mortality at low temperatures and of respiratory mortality at high temperatures when the pollution levels reached approximately 60 μg/m³. CONCLUSIONS The positive association estimated in the Poisson regression model for pollutant concentration is not confounded by temperature, and the effect of temperature is not confounded by the pollutant levels in the time-series analysis. Simultaneous exposure to different levels of environmental factors can create synergistic effects that are as disturbing as those caused by extreme concentrations.
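A minimal sketch of the kind of Poisson time-series model with a PM10-temperature interaction term described above. The data file and column names (deaths, pm10, temp, dow, time) are hypothetical placeholders, and this is not the authors' exact specification:

```python
# Hypothetical sketch: Poisson time-series regression of daily deaths on PM10,
# temperature, and their interaction, with simple calendar controls.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

daily = pd.read_csv("daily_mortality.csv")   # assumed file: one row per day

model = smf.glm(
    "deaths ~ pm10 + temp + pm10:temp + C(dow) + time",  # pm10:temp is the interaction term
    data=daily,
    family=sm.families.Poisson(),
).fit()

# Percentage change in relative risk per 10 ug/m3 increase in PM10
# (at the reference temperature), derived from the fitted coefficient.
rr_10 = np.exp(model.params["pm10"] * 10)
print(f"% change in RR per 10 ug/m3 PM10: {(rr_10 - 1) * 100:.2f}%")
```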
Abstract:
OBJECTIVE To evaluate whether temperature and humidity influenced the etiology of bloodstream infections in a hospital from 2005 to 2010. METHODS The study had a case-referent design. Individual cases of bloodstream infections caused by specific groups or pathogens were compared with several references. In the first analysis, the average temperature and humidity values for the seven days preceding the collection of blood cultures were compared with an overall "seven-day moving average" for the study period. The second analysis included only patients with bloodstream infections. Several logistic regression models were used to compare different pathogens and groups with respect to the immediate weather parameters, adjusting for demographics, time, and unit of admission. RESULTS Higher temperatures and humidity were related to the recovery of bacteria as a whole (versus fungi) and of gram-negative bacilli. In the multivariable models, temperature was positively associated with the recovery of gram-negative bacilli (OR = 1.14; 95%CI 1.10;1.19) or Acinetobacter baumannii (OR = 1.26; 95%CI 1.16;1.37), even after adjustment for demographic and admission data. An inverse association was identified for humidity. CONCLUSIONS The study documented the impact of temperature and humidity on the incidence and etiology of bloodstream infections. The results agree with those from ecological studies, indicating a higher incidence of gram-negative bacilli during warm seasons. These findings should guide policies directed at preventing and controlling healthcare-associated infections.
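Similarly, a minimal sketch of one of the logistic models described above (pathogen group regressed on the weather of the preceding seven days, adjusted for covariates); the file name, column names, and adjustment set are illustrative, not the study's exact models:

```python
# Hypothetical sketch: logistic regression of gram-negative recovery on the
# mean temperature and humidity of the preceding seven days, adjusted for
# age, sex and admission unit. Column names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

bsi = pd.read_csv("bloodstream_infections.csv")   # assumed: one row per episode

model = smf.logit(
    "gram_negative ~ temp_7d + humidity_7d + age + C(sex) + C(unit)",
    data=bsi,
).fit()

# Odds ratio per 1 degree C increase in the 7-day mean temperature
print("OR per 1 degree C:", np.exp(model.params["temp_7d"]))
```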
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis of a lower-dimensional subspace using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers have been found. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
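A compact sketch of the projection idea described above: iteratively project the data onto a direction orthogonal to the subspace spanned by the endmembers already found, and take the pixel at the extreme of that projection as the next endmember. This is a simplified illustration under the pure-pixel assumption, not the full VCA algorithm (no dimensionality reduction and no SNR-dependent projective step):

```python
# Simplified endmember extraction by iterative orthogonal projection,
# illustrating the geometric idea behind VCA (not the full algorithm).
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Y: (bands, pixels) matrix of spectra; p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    bands, n = Y.shape
    E = np.zeros((bands, p))              # columns will hold the endmember signatures
    for i in range(p):
        w = rng.standard_normal(bands)    # random test direction
        if i > 0:
            # Project w onto the orthogonal complement of the endmembers found so far
            Q, _ = np.linalg.qr(E[:, :i])
            w = w - Q @ (Q.T @ w)
        # The pixel with the extreme projection onto w becomes the next endmember
        k = np.argmax(np.abs(w @ Y))
        E[:, i] = Y[:, k]
    return E

# Toy usage: 3 known endmembers mixed with abundances drawn on the simplex
rng = np.random.default_rng(1)
M = rng.random((50, 3))                       # 50 bands, 3 endmember signatures
S = rng.dirichlet(np.ones(3), size=1000).T    # abundance fractions sum to one
Y = M @ S
Y[:, :3] = M                                  # guarantee pure pixels are present
print(extract_endmembers(Y, 3).shape)         # -> (50, 3)
```

In this noiseless toy example the extracted columns should coincide, up to ordering, with the three pure pixels inserted into Y.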
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Software tools in education have become popular since the widespread adoption of personal computers. Engineering courses led the way in this development, and these tools have become almost a standard. Engineering graduates are familiar with numerical analysis tools but also with simulators (e.g., of electronic circuits), computer-assisted design tools and others, depending on the degree. One of the main problems with these tools is when and how to start using them so that they are beneficial to students and not mere substitutes for potentially difficult calculations or design. In this paper a software tool to be used by first-year students in electronics/electricity courses is presented. The growing acknowledgement and acceptance of open source software led to the choice of an open source tool – Scilab, a numerical analysis tool – to develop a toolbox. The toolbox was developed to be used standalone or integrated into an e-learning platform; the e-learning platform used was Moodle. The first step was to assess the mathematical skills necessary to solve the problems found in electronics and electricity courses. Analysing existing circuit-simulation software tools, it is clear that even though they are very helpful in showing the end result, they are not as effective for studying and self-learning, since they show results but not the intermediate steps, which are crucial in problems that involve derivatives or integrals. They are also not very effective in producing graphical results that could be used to prepare reports and to better understand the results overall. The developed tool is a toolbox based on the numerical analysis software Scilab that gives its users not only the end results of a circuit analysis but also the expressions obtained in derivative and integral calculations, the ability to plot signals, obtain vector diagrams, etc. The toolbox runs entirely in the Moodle web platform and provides the same results as the standalone application. Students can use the toolbox through the web platform (on computers where they do not have installation privileges) or on their personal computers by installing both the Scilab software and the toolbox. This approach was designed for first-year students of all engineering degrees that have electronics/electricity courses in their curricula.
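The toolbox itself is a Scilab toolbox and its code is not reproduced here; purely to illustrate the point about exposing intermediate steps, the sketch below (in Python with SymPy, an assumed stand-in) solves a first-order RC transient symbolically and prints the general solution before the initial condition is applied:

```python
# Hypothetical illustration (not the Scilab toolbox itself): expose the
# intermediate symbolic steps of a series RC charging circuit.
import sympy as sp

t = sp.symbols('t', positive=True)
R, C, Vs = sp.symbols('R C V_s', positive=True)   # illustrative component symbols
v = sp.Function('v')                               # capacitor voltage

# KVL for a series RC circuit driven by a DC source Vs:  R*C*dv/dt + v = Vs
ode = sp.Eq(R * C * sp.diff(v(t), t) + v(t), Vs)

# Intermediate step: the general solution before applying initial conditions
general = sp.dsolve(ode, v(t))
print("general solution:", general)

# Final step: apply v(0) = 0 to obtain the charging curve Vs*(1 - exp(-t/(R*C)))
particular = sp.dsolve(ode, v(t), ics={v(0): 0})
print("with v(0) = 0:", sp.simplify(particular.rhs))
```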
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Software has become an important part of any company, covering several functional areas such as manufacturing, sales, or human resources. Having software capable of connecting all or most of a company's functional areas and of accommodating its business rules gives the company access to real-time data on which it can base decisions. This type of software can be categorized as Enterprise Resource Planning (ERP). Given that such software plays an important role within a company, its acquisition should be studied carefully. Large companies usually opt for commercial solutions, since these tend to offer more functionality, better support, and certifications. Commercial ERPs, however, require a considerable investment to acquire, which limits their adoption by small and medium-sized companies. Nevertheless, as with most types of software, open-source alternatives exist. If we put ourselves in the position of a small company trying to start its business in Portugal, what kind of ERP would be sufficient for our requirements? Would we have to buy a commercial solution, or would an open-source solution be enough? And what if we chose to develop a custom solution? This thesis answers these questions by focusing on just one of the base components of any ERP: entity management. The entity management component is responsible for managing all the entities the company interacts with, including employees, customers, suppliers, etc. In terms of functionality, a comparison will be made between a commercial ERP and an open-source ERP. Because ERPs tend to be very generic solutions, they commonly do not implement every requirement of a particular business, so they need to be extensible and adaptable. To understand to what extent the open-source solution is extensible, a technical analysis of its source code will be carried out, along with a partial implementation of the audit file generator required by Portuguese law, the SAF-T (PT). By studying and adapting the open-source solution, we can specify what would have to be developed in order to create a custom solution from scratch.
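As a rough illustration of what an entity-management core and its extension point might look like, a minimal hypothetical sketch follows (the class and field names are invented for illustration and are not taken from either ERP studied in the thesis):

```python
# Hypothetical sketch of an entity-management core: one generic Entity record
# shared by customers, suppliers and employees, plus a simple extension-field
# mechanism so business-specific attributes (e.g. a tax number needed for a
# SAF-T (PT) export) can be added without changing the core model.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Entity:
    entity_id: str
    name: str
    roles: set = field(default_factory=set)               # e.g. {"customer", "supplier"}
    extra: Dict[str, str] = field(default_factory=dict)   # extension point for custom fields

class EntityRegistry:
    """In-memory registry standing in for an ERP's entity-management component."""
    def __init__(self) -> None:
        self._entities: Dict[str, Entity] = {}

    def add(self, entity: Entity) -> None:
        self._entities[entity.entity_id] = entity

    def by_role(self, role: str):
        return [e for e in self._entities.values() if role in e.roles]

registry = EntityRegistry()
registry.add(Entity("C001", "Acme Lda", roles={"customer"},
                    extra={"tax_registration_number": "PT999999990"}))
print([e.name for e in registry.by_role("customer")])
```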
Abstract:
SUMMARY Cestodes of the genus Bertiella are parasites of non-human primates found in Africa, Asia, Oceania and the Americas. The species Bertiella studeri and Bertiella mucronata can accidentally infect human beings. The infection occurs through the ingestion of mites of the order Oribatida containing cysticercoid larvae of the parasite. The objective of this report is to register the first case of human infection by Bertiella studeri in Brazil. Proglottids of the parasite, found in the stool sample of a two-and-a-half-year-old child, were fixed, stained and observed microscopically to evaluate their morphological characteristics. Eggs obtained from the proglottids were also studied. The gravid proglottids examined matched the description of the genus Bertiella. The eggs presented a round shape, with an average diameter of 43.7 µm, clearly showing the typical pyriform apparatus of B. studeri. The authors concluded that the child was infected with Bertiella studeri, based on Stunkard's (1940) description of the species. This is the fifth case of human bertiellosis described in Brazil through morphometric analysis of the parasite, the third in Minas Gerais State, and the first diagnosed case of Bertiella studeri in Brazil.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Project work presented as a partial requirement for obtaining a Master's degree in Information Management.