877 results for Acceleration data structure
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. 
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. 
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. 
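The projection-and-count procedure of PPI described above can be sketched in a few lines. This is a minimal pure-Python illustration, not the published implementation: the MNF preprocessing step is omitted and the function and parameter names are invented for the example.

```python
import math
import random

def ppi_scores(spectra, n_skewers=500, seed=0):
    """PPI sketch: project every spectral vector onto random unit
    vectors ("skewers") and count how often each pixel is an extreme
    (min or max) of a projection. Highest scores = purest pixels."""
    rng = random.Random(seed)
    d = len(spectra[0])
    scores = [0] * len(spectra)
    for _ in range(n_skewers):
        # Draw a random direction and normalise it.
        skewer = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in skewer))
        skewer = [c / norm for c in skewer]
        # Project all pixels onto the skewer and record the extremes.
        proj = [sum(p * s for p, s in zip(pix, skewer)) for pix in spectra]
        scores[proj.index(max(proj))] += 1
        scores[proj.index(min(proj))] += 1
    return scores
```

Because each projection is linear, its extremes over the data cloud are attained at vertices of the simplex, so pure pixels accumulate counts while strictly interior (mixed) pixels accumulate none.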
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smallest convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. 
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
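The iterative orthogonal-projection idea can be illustrated with a toy sketch. This is a simplified illustration with invented names, not the published VCA, which additionally handles noise and operates on an identified signal subspace.

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def extract_endmembers(spectra, p, seed=1):
    """Toy sketch: repeatedly draw a direction orthogonal to the
    endmembers found so far and take the pixel whose projection is
    extreme as the next endmember signature."""
    rng = random.Random(seed)
    d = len(spectra[0])
    basis = []        # orthonormal basis of the span of found endmembers
    endmembers = []
    for _ in range(p):
        # Random direction, made orthogonal to the current basis.
        f = [rng.gauss(0.0, 1.0) for _ in range(d)]
        for b in basis:
            c = dot(f, b)
            f = [fi - c * bi for fi, bi in zip(f, b)]
        norm = math.sqrt(dot(f, f))
        f = [fi / norm for fi in f]
        # The extreme of the projection gives the new endmember.
        proj = [abs(dot(x, f)) for x in spectra]
        e = spectra[proj.index(max(proj))]
        endmembers.append(e)
        # Extend the orthonormal basis with the new endmember.
        g = list(e)
        for b in basis:
            c = dot(g, b)
            g = [gi - c * bi for gi, bi in zip(g, b)]
        n = math.sqrt(dot(g, g))
        if n > 1e-12:
            basis.append([gi / n for gi in g])
    return endmembers
```

Since projection is linear, its extreme over a simplex is attained at a vertex, so each iteration recovers one pure pixel while already-found endmembers project to zero.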
Abstract:
Nowadays, data centers are large energy consumers, and their consumption is expected to increase further in the coming years, given the growth of cloud services. A large portion of this power consumption is due to the control of physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, and even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved in the cloud infrastructure hosted in the data center. Therefore, managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at very high temporal and spatial resolution of the sensor measurements. We think this is an important characteristic to enable more accurate heat-flow models of the data center and, with them, to find opportunities to optimize energy consumption. A high-resolution picture of data center conditions also enables minimizing local hot-spots, performing more accurate predictive maintenance (failures in infrastructure equipment can be detected more promptly), and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
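As an illustration of the kind of message such a collection layer might carry, here is a hedged sketch of a sensor-reading schema and its JSON serialisation. All field names, identifiers, and units are invented for the example; the paper's actual message structure is not reproduced here.

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class SensorReading:
    """Hypothetical message: one physical measurement tagged with its
    position in the data center."""
    sensor_id: str       # e.g. "rack12-inlet-3" (illustrative naming)
    kind: str            # "temperature" or "humidity"
    value: float         # measured value, in the unit below
    unit: str            # "C" or "%RH"
    timestamp: float     # UNIX epoch seconds
    rack: int            # rack index
    height_u: int        # vertical position in rack units

def encode(reading: SensorReading) -> str:
    """Serialise a reading for the pub/sub transport."""
    return json.dumps(asdict(reading))

def decode(payload: str) -> SensorReading:
    """Reconstruct a reading on the consumer side."""
    return SensorReading(**json.loads(payload))
```

A dataclass keeps the schema explicit and the JSON round trip trivial, which matters when thousands of sensors publish at a high rate.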
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
J Biol Inorg Chem (2006) 11: 548–558 DOI 10.1007/s00775-006-0104-y
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
ABSTRACT: Mental health disorders are common, universal, and associated with a heavy personal, family, social and economic burden. Mental health services should be aimed at adequately addressing patients' and families' needs at the clinical and social level. The current study was carried out at a time of great transformation in the health and mental health systems in Portugal, in a Psychiatric Department developed taking into consideration WHO principles. The objectives included characterizing: 1) the Psychiatric Department's different units; 2) the patients admitted for the first time to the inpatient unit; 3) their use of community mental health services after discharge; and 4) assessing some of the department's quality indicators, using Donabedian's Structure-Process-Outcome model. Methodology: A retrospective cohort design was chosen. 
All patients admitted for the first time between 2008 and 2010 were included in the study. Their clinical records and the hospital's database, which registers all contacts the patients had with mental health professionals during the study period, were reviewed to retrieve sociodemographic and clinical data and information on follow-up. The instruments used were the WHO International Classification of Mental Health Care (ICMHC) to characterize the department, the Initial Nurses' Assessment in Mental Health and Psychiatry (AIESMP) for patients' sociodemographic data, and the Verona Service Satisfaction Scale (VSSS) to assess patients' satisfaction with the care received. Statistical analysis included descriptive, quantitative and qualitative analysis of the data. Results: The Department's functional units revealed high levels of articulation and were consistent with patients' needs for psychiatric care and psychosocial rehabilitation. The 543 patients admitted for the first time were mostly female (56.9%), Caucasian (81.2%), diagnosed with mood disorders (66.3%), voluntarily admitted (59.7%), and had a mean age of 45.1 years. Female patients were significantly older, more frequently employed, married/cohabiting, and had a higher prevalence of mood disorders when compared to males. Involuntary admission was significantly more frequent in males (54.7%). Dropout rates during follow-up (4.2%) and readmission rates (2.9%) in the fortnight following discharge were lower than standards in the international literature. Overall, patients' satisfaction with mental health care was positive. Conclusions: The care delivered was effective, adapted and based on the patients' specific needs and problems. Continuity and comprehensiveness of care were endorsed and maintained throughout the care process. This department may be considered an example of both humane and effective treatment, and a reference for future psychiatric care.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master’s Double Degree in Finance from Maastricht University and NOVA – School of Business and Economics
Abstract:
Within the civil engineering field, the use of the Finite Element Method has acquired significant importance, since numerical simulations have been employed in a broad range of applications, encompassing the design, analysis and prediction of the structural behaviour of constructions and infrastructures. Nevertheless, these mathematical simulations can only be useful if all the mechanical properties of the materials, boundary conditions and damage are properly modelled. Therefore, not only experimental data (static and/or dynamic tests) are required to provide reference parameters, but also robust calibration methods able to model damage or other special structural conditions. The present paper addresses the model calibration of a footbridge tested with static loads and ambient vibrations. Damage assessment was also carried out based on a hybrid numerical procedure, which combines discrete damage functions with sets of piecewise linear damage functions. Results from the model calibration show that the model reproduces the experimental behaviour of the bridge with good accuracy.
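To make the idea of piecewise linear damage functions concrete, here is a small sketch. The names and values are illustrative, and only the piecewise linear ingredient is shown; the paper's hybrid procedure combining discrete and piecewise linear functions is not reproduced.

```python
from bisect import bisect_right

def piecewise_linear(xs, ys):
    """Return a piecewise linear function through the points (xs[i], ys[i]).
    Hypothetical damage profile: xs are positions along the deck, ys are
    stiffness-reduction factors (1.0 = undamaged)."""
    def f(x):
        # Clamp outside the sampled range.
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        # Locate the segment containing x and interpolate linearly.
        i = bisect_right(xs, x) - 1
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])
    return f
```

For example, `piecewise_linear([0, 10, 20], [1.0, 0.6, 1.0])` describes a stiffness reduction localised around mid-span; a calibration routine would adjust the ys values to minimise the misfit with measured static and dynamic responses.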
Abstract:
Over the last few years, many research efforts have been made to improve the design of ETL (Extract-Transform-Load) systems. ETL systems are considered very time-consuming, error-prone and complex, involving several participants from different knowledge domains. ETL processes are among the most important components of a data warehousing system, and they are strongly influenced by the complexity of business requirements and by their change and evolution. These aspects influence not only the structure of a data warehouse but also the structures of the data sources involved. To minimize the negative impact of such variables, we propose the use of ETL patterns to build specific ETL packages. In this paper, we formalize this approach using BPMN (Business Process Model and Notation) for modelling more conceptual ETL workflows, mapping them to real execution primitives through the use of a domain-specific language that allows for the generation of specific instances that can be executed in a commercial ETL tool.
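The pattern idea, one reusable skeleton instantiated into concrete packages, can be sketched as follows. This is a minimal Python illustration with invented names; the paper itself targets BPMN models mapped to a commercial ETL tool, not Python.

```python
from typing import Callable, Iterable, List

Row = dict

def etl_pattern(extract: Callable[[], Iterable[Row]],
                transforms: List[Callable[[Row], Row]],
                load: Callable[[Row], None]) -> int:
    """Generic ETL skeleton: one extract source, a pipeline of
    row-level transforms, one load target. Returns rows loaded."""
    n = 0
    for row in extract():
        for transform in transforms:
            row = transform(row)
        load(row)
        n += 1
    return n

# Instantiating the pattern into a specific package:
source = [{"name": "ada"}, {"name": "grace"}]
sink = []
loaded = etl_pattern(lambda: source,
                     [lambda r: {**r, "name": r["name"].upper()}],
                     sink.append)
```

Separating the skeleton from its extract/transform/load parameters is what lets one pattern generate many concrete packages.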
Abstract:
Doctoral Thesis in Science and Engineering of Polymers and Composites
Abstract:
The study of the interaction between hair filaments and formulations or peptides is of utmost importance in fields like cosmetic research. The structure of keratin intermediate filaments is not fully described, limiting molecular dynamics (MD) studies in this field despite their high potential to improve the area. We developed a computational model of a truncated protofibril and simulated its behavior in alcohol-based formulations and with one peptide. The simulations showed a strong interaction between the benzyl alcohol molecules of the formulations and the model, leading to a disorganization of the keratin chains that regresses upon removal of the alcohol molecules. This behavior can explain the increased peptide uptake in hair shafts evidenced in fluorescence microscopy images. The model developed is valid for computationally reproducing the interaction between hair and alcoholic formulations, and it provides a robust base for new MD studies of hair properties. It is shown that MD simulations can improve hair cosmetic research by helping to improve the uptake of a compound of interest.
Abstract:
Tantalum oxynitride thin films were produced by magnetron sputtering. The films were deposited using a pure Ta target and a working atmosphere with a constant N2/O2 ratio. The choice of this constant ratio limits the study concerning the influence of each reactive gas, but allows a deeper understanding of the aspects related to the affinity of Ta to the non-metallic elements, and it is economically advantageous. This work begins by analysing the data obtained directly from the film deposition stage, followed by the analysis of the morphology, composition and structure. For a better understanding of the influence of the deposition parameters, the analyses are presented using the following criterion: the films were divided into two sets, one produced with a grounded substrate holder and the other with a polarization of −50 V. Each of these sets was produced with different partial pressures of the reactive gases, P(N2 + O2). All the films exhibited an O/N ratio higher than the N/O ratio in the deposition chamber atmosphere. In the case of the films produced with a grounded substrate holder, a strong increase of the O content is observed, associated with a strong decrease of the N content, when P(N2 + O2) is higher than 0.13 Pa. The higher Ta affinity for O strongly influences the structural evolution of the films. Grazing incidence X-ray diffraction showed that the lower partial pressure films were crystalline, while X-ray reflectivity studies found that the density of the films depended on the deposition conditions: the higher the gas pressure, the lower the density. Firstly, a dominant β-Ta structure is observed, for low P(N2 + O2); secondly, an fcc-Ta(N,O) structure, for intermediate P(N2 + O2); thirdly, the films are amorphous for the highest partial pressures. The comparison of the characteristics of both sets of produced TaNxOy films is explained in detail in the text.
Abstract:
We investigate the strain hardening behavior of various gelatin networks, namely physical gelatin gel, chemically cross-linked gelatin gel, and a hybrid gel made of a combination of the former two, under large shear deformations using the pre-stress, strain ramp, and large amplitude oscillatory shear protocols. Further, the internal structures of physical gelatin gels and chemically cross-linked gelatin gels were characterized by small angle neutron scattering (SANS) to enable their internal structures to be correlated with their nonlinear rheology. The Kratky plots of the SANS data demonstrate the presence of small cross-linked aggregates within the chemically cross-linked network whereas, in the physical gelatin gels, a relatively homogeneous structure is observed. Through model fitting to the scattering data, we were able to obtain structural parameters such as the correlation length (ξ), the cross-sectional polymer chain radius (Rc), and the fractal dimension (df) of the gel networks. The fractal dimensions df obtained from the SANS data of the physical and chemically cross-linked gels are 1.31 and 1.53, respectively. These values are in excellent agreement with the ones obtained from a generalized nonlinear elastic theory used to fit the stress-strain curves. The chemical cross-linking that generates coils and aggregates hinders the free stretching of the triple helix bundles in the physical gels.
Abstract:
Here we focus on factor analysis from a best-practices point of view, by investigating the factor structure of neuropsychological tests and using the results obtained to illustrate how to choose a reasonable solution. The sample (n = 1051 individuals) was randomly divided into two groups: one for exploratory factor analysis (EFA) and principal component analysis (PCA), to investigate the number of factors underlying the neurocognitive variables; the second to test the "best fit" model via confirmatory factor analysis (CFA). For the exploratory step, three extraction methods (maximum likelihood, principal axis factoring and principal components) and two rotation methods (orthogonal and oblique) were used. The analysis methodology allowed exploring how different cognitive/psychological tests correlated with, and discriminated between, dimensions, indicating that, to capture latent structures in similar sample sizes and measures with approximately normal data distributions, reflective models with oblimin rotation might prove the most adequate.
Abstract:
The aim of this study is to analyze and relate the spatial-temporal variability of macrozoobenthic assemblages to bottom characteristics and salinity fluctuations in an estuarine shallow-water region of Patos Lagoon. Monthly samples were taken between September 2002 and August 2003 at six sampling stations (90 m apart). Three biological samples taken with a 10 cm diameter corer, one sample for sediment analysis, fortnightly bottom topography measurements, and daily temperature and salinity data were collected at each station. Two biotic and environmental conditions were identified: the first, corresponding to the spring and summer months, with low macrozoobenthos densities, low salinity values, small variations in bottom topographic level and weak hydrodynamic activity; a second situation occurred in the fall and winter months, which showed increased salinity, hydrodynamics and macrobenthos abundance. These results, which contrast with previous studies carried out in the area, were attributed to failure of macrozoobenthos recruitment during the summer period, especially of the bivalve Erodona mactroides Bosc, 1802 and the tanaid Kalliapseudes schubartii Mañé-Garzón, 1949. These results showed that recruitment of the dominant species was influenced by salinity and hydrodynamic conditions.