44 results for R-Statistical computing
Abstract:
Introduction – Functional magnetic resonance imaging (fMRI) is today a fundamental tool in the functional investigation of the human brain, both in healthy individuals and in patients with various pathologies. It is a complex technique that requires careful and rigorous application, as well as an understanding of the underlying biophysical mechanisms, in order to obtain reliable results with better clinical acceptance. The BOLD (Blood Oxygenation Level Dependent) effect, based on the magnetic properties of hemoglobin, is the most widely used method for measuring brain activity with fMRI. Objectives – To optimize a BOLD fMRI protocol in healthy volunteers for mapping the motor cortex, so that it can later be applied to patients with various pathologies. Methodology – 34 healthy volunteers were studied, divided into 2 study groups: BOLD 1 and BOLD 2. For optimization, different paradigms were tested in the BOLD 1 subgroup, while the influence of the echo time (TE) was studied in the BOLD 2 subgroup. The volumes of the activated region and the activation levels obtained were compared across the various conditions. Results/Discussion – The motor cortex was identified in all the volunteers studied. No statistically significant differences were detected when comparing the results obtained with the different acquisition parameters. Conclusion – The protocol was optimized taking into account the comfort level reported by the volunteers. Since this same protocol is intended for the study of patients, this factor is particularly relevant.
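Since one arm of the optimization concerns the echo time (TE), the standard gradient-echo BOLD signal model helps explain why TE matters (generic textbook form; $S_0$ and $R_2^*$ are generic quantities, not values from this study):

$$S(TE) = S_0\, e^{-TE \cdot R_2^*}, \qquad \Delta S(TE) \approx -S_0\, TE\, e^{-TE \cdot R_2^*}\, \Delta R_2^*,$$

so the activation-induced signal change $|\Delta S|$ is maximal when $TE = 1/R_2^* = T_2^*$ of the tissue, which is the usual rationale for comparing acquisitions at different TE values.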
Abstract:
Intensity Modulated Radiotherapy (IMRT) is a technique introduced to shape dose distributions more precisely to the tumour, providing higher dose escalation in the volume to be irradiated while simultaneously decreasing the dose to the organs at risk, which consequently reduces treatment toxicity. This technique is widely used in prostate and head and neck (H&N) tumours. Given the complexity of this technique and the high doses it delivers, it is necessary to ensure safe and secure administration of the treatment through the use of quality control programmes for IMRT. The purpose of this study was to statistically evaluate the quality control measurements made for the IMRT plans of prostate and H&N patients before the beginning of treatment, analysing their variations, the percentage of rejected and repeated measurements, the averages, the standard deviations and the proportion relations.
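A minimal sketch of this kind of descriptive analysis in Python, assuming per-plan percentage deviations between measured and planned dose and an illustrative ±3% acceptance tolerance (the data, the tolerance and all names are hypothetical, not taken from the study):

import statistics

# Per-plan dose deviations (%), measured vs. planned -- illustrative values only.
deviations = [1.2, -0.8, 2.5, 3.4, -1.1, 0.3, -3.6, 1.9]
TOLERANCE = 3.0  # hypothetical acceptance threshold (%)

rejected = [d for d in deviations if abs(d) > TOLERANCE]
print(f"mean deviation:    {statistics.mean(deviations):+.2f}%")
print(f"std deviation:     {statistics.stdev(deviations):.2f}%")
print(f"rejected/repeated: {len(rejected)}/{len(deviations)} "
      f"({100 * len(rejected) / len(deviations):.1f}%)")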
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Introduction – Cutaneous malignant melanoma (CMM) is considered one of the most lethal neoplasms. Its follow-up relies, in addition to clinical examination and the analysis of tumour markers, on several imaging methods, such as Positron Emission Tomography/Computed Tomography (PET/CT) with 18F-fluorodeoxyglucose (18F-FDG). The present study aims to evaluate the usefulness of PET/CT in assessing the extent of CMM and suspected recurrence, comparing the imaging findings with those described in CT studies. Methodology – Retrospective study of 62 PET/CT studies performed in 50 patients diagnosed with CMM. One study with an equivocal result (pulmonary nodule) was excluded. Information on the results of the anatomopathological studies and imaging examinations was obtained from the clinical history and from the medical reports of the CT and PET/CT studies. A database was built from the collected data in Excel, and a descriptive statistical analysis was performed. Results – Of the PET/CT studies analysed, 31 were considered true positives (TP), 28 true negatives (TN), one false positive (FP) and one false negative (FN). The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy of PET/CT for staging and evaluating suspected recurrence of CMM were, respectively, 96.9%, 96.6%, 96.9%, 96.6% and 96.7%. Of the CT results considered in the statistical analysis, 14 corresponded to TP, 12 to TN, three to FP and five to FN. The sensitivity, specificity, PPV, NPV and accuracy of CT for staging and evaluating suspected recurrence of CMM were, respectively, 73.7%, 80.0%, 82.4%, 70.6% and 76.5%. Compared with the CT results, PET/CT led to a change in therapeutic management in 23% of the studies. Conclusion – PET/CT is a useful examination in the evaluation of CMM, offering higher diagnostic accuracy in staging and in the evaluation of suspected recurrence of CMM than CT alone.
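These figures follow from the standard definitions of the diagnostic performance measures; as a check, for the PET/CT counts (TP = 31, TN = 28, FP = 1, FN = 1):

$$\mathrm{sensitivity} = \frac{TP}{TP+FN} = \frac{31}{32} \approx 96.9\%, \qquad \mathrm{specificity} = \frac{TN}{TN+FP} = \frac{28}{29} \approx 96.6\%,$$
$$\mathrm{PPV} = \frac{TP}{TP+FP} = \frac{31}{32} \approx 96.9\%, \qquad \mathrm{NPV} = \frac{TN}{TN+FN} = \frac{28}{29} \approx 96.6\%,$$
$$\mathrm{accuracy} = \frac{TP+TN}{TP+TN+FP+FN} = \frac{59}{61} \approx 96.7\%.$$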
Abstract:
Although the computational power of mobile devices has been increasing, it is still not enough for some classes of applications. At present, these applications delegate the computing burden to servers located on the Internet. This model assumes always-on Internet connectivity and implies non-negligible latency. The thesis addresses the challenges posed by, and the contributions required for, applying the concept of a mobile collaborative computing environment to wireless networks. The goal is to define a reference architecture for high performance mobile applications. Current work is focused on efficient data dissemination in a highly transitive environment, suitable for many mobile applications, and on the reputation and incentive system available in this mobile collaborative computing environment. To this end, we are improving our previously published reputation/incentive algorithm with knowledge of the usage patterns of the eduroam wireless network in the Lisbon area.
Abstract:
Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGA). General-Purpose Graphics Processing Units (GPGPU) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms belonging to different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that the gap between FPGAs and both GPUs and many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs remain competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems and for parallel map-reduce problems. © 2014 Technical University of Munich (TUM).
Abstract:
Treatment of a dichloromethane solution of trans-[Mo(NCN){NCNC(O)R}(dppe)2]Cl [R = Me (1a), Et (1b)] (dppe = Ph2PCH2CH2PPh2) with HBF4, [Et3O][BF4] or EtC(O)Cl gives trans-[Mo(NCN)Cl(dppe)2]X [X = BF4 (2a) or Cl (2b)] and the corresponding acylcyanamides NCN(R')C(O)Et (R' = H, Et or C(O)Et). X-ray diffraction analysis of 2a (X = BF4) reveals a multiple-bond coordination of the cyanoimide ligand. Compounds 1 convert to the bis(cyanoimide) complex trans-[Mo(NCN)2(dppe)2] upon reaction with an excess of NaOMe (with formation of the respective ester). In an aprotic medium and at a Pt electrode, compounds 1 (R = Me, Et or Ph) undergo a cathodically induced isomerization. A full quantitative kinetic analysis of the voltammetric behaviour is presented, allowing the determination of the first-order rate constants and the equilibrium constant of the trans to cis isomerization reaction. The mechanisms of electrophilic addition (protonation) to complexes 1 and to the precursor trans-[Mo(NCN)2(dppe)2], as well as the electronic structures, the nature of the coordination bonds and the electrochemical behaviour of these species, are investigated in detail by theoretical methods, which indicate that the most probable sites of proton attack are the oxygen atom of the acyl group and the terminal nitrogen atom, respectively.
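The quantities extracted by such a kinetic analysis correspond to the standard reversible first-order scheme for trans ⇌ cis isomerization (generic textbook form, not equations quoted from the paper):

$$\frac{d[\mathrm{cis}]}{dt} = k_1[\mathrm{trans}] - k_{-1}[\mathrm{cis}], \qquad k_{\mathrm{obs}} = k_1 + k_{-1}, \qquad K = \frac{k_1}{k_{-1}} = \frac{[\mathrm{cis}]_{\mathrm{eq}}}{[\mathrm{trans}]_{\mathrm{eq}}},$$

so fitting the observed exponential relaxation yields $k_{\mathrm{obs}}$, and the equilibrium composition fixes $K$, from which the individual first-order rate constants follow.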
Abstract:
Density-dependent effects, whether positive or negative, can have an important impact on the population dynamics of species by modifying their per-capita population growth rates. An important type of such density-dependent factors is given by the so-called Allee effects, widely studied in theoretical and field population biology. In this study, we analyze two discrete single-population models with overcompensating density dependence and Allee effects due to predator saturation and mating limitation, using symbolic dynamics theory. We focus on the scenarios of persistence and bistability, in which the species dynamics can be chaotic. For the chaotic regimes, we compute the topological entropy as well as the Lyapunov exponent under key ecological parameters and different initial conditions. We also provide co-dimension-two bifurcation diagrams for both systems, computing the periods of the orbits and characterizing the period-ordering routes toward the boundary crisis responsible for species extinction via transient chaos. Our results show that the topological entropy increases as we approach the parametric regions involving transient chaos, being maximal when the full shift $RL^{\infty}$ occurs and the system enters the essential extinction regime. Finally, we characterize, analytically (using a complex-variable approach) and numerically, the inverse square-root scaling law arising in the vicinity of the saddle-node bifurcation responsible for the extinction scenario in the two studied models. The results are discussed in the context of species fragility under differential Allee effects. (C) 2011 Elsevier Ltd. All rights reserved.
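The inverse square-root scaling law mentioned here is the generic saddle-node result (standard form, with $\mu$ a generic bifurcation parameter rather than the paper's notation): near the bifurcation, the average lifetime $\tau$ of the chaotic transient preceding extinction diverges as

$$\tau \propto (\mu - \mu_c)^{-1/2}, \qquad \mu \to \mu_c^{+},$$

where $\mu_c$ is the parameter value at which the saddle-node bifurcation occurs.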
Abstract:
Portugal joined the effort to create the EPOS infrastructure in 2008, and it became immediately apparent that a national network of Earth Sciences infrastructures was required to participate in the initiative. At that time, FCT was promoting the creation of a national infrastructure called RNG - Rede Nacional de Geofísica (National Geophysics Network). A memorandum of understanding had been agreed upon, and it therefore seemed straightforward to use RNG (enlarged to include relevant participants that were not RNG members) as the Portuguese partner to EPOS-PP. However, at the time of signature of the EPOS-PP contract with the European Commission (November 2010), RNG had not yet gained formal identity, and IST (one of the participants) signed the grant agreement on behalf of the Portuguese consortium. During 2011 no progress was made towards the formal creation of RNG, and the composition of the network, based on proposals submitted to a call issued in 2002, had by then become obsolete. In February 2012, the EPOS national contact point was mandated by the representatives of the participating national infrastructures to request from FCT the recognition of a new consortium, C3G - Collaboratory for Geology, Geodesy and Geophysics, as the Portuguese partner to EPOS-PP. This request was supported by formal letters from the following institutions:
- LNEG, Laboratório Nacional de Energia e Geologia (National Geological Survey);
- IGP, Instituto Geográfico Português (National Geographic Institute);
- IDL, Instituto Dom Luiz - Laboratório Associado;
- CGE, Centro de Geofísica de Évora;
- FCTUC, Faculdade de Ciências e Tecnologia da Universidade de Coimbra;
- Instituto Superior de Engenharia de Lisboa;
- Instituto Superior Técnico;
- Universidade da Beira Interior.
While Instituto de Meteorologia (Meteorological Institute, in charge of the national seismographic network) actively supports the national participation in EPOS, a letter of support was not feasible in view of the organic changes underway at the time. C3G aims at the integration and coordination, at the national level, of existing Earth Sciences infrastructures, namely:
- seismic and geodetic networks (IM, IST, IDL, CGE);
- rock physics laboratories (ISEL);
- geophysical laboratories dedicated to natural resources and environmental studies;
- geological and geophysical data repositories;
- facilities for data storage and computing resources.
The C3G - Collaboratory for Geology, Geodesy and Geophysics - will be coordinated by Universidade da Beira Interior, whose Department of Informatics will host the C3G infrastructure.
Abstract:
This paper describes preliminary work on the generation of synthesis gas from water electrolysis using graphite electrodes, without separation of the generated gases. This is an innovative process for which no similar work has been reported. Preliminary tests made it possible to establish correlations between the current applied to the electrolyser and the flow rate and composition of the generated syngas, as well as to characterise the generated carbon nanoparticles. The obtained syngas can further be used to produce synthetic liquid fuels, for example methane, methanol or DME (dimethyl ether), in a catalytic reactor in later stages of the present ongoing project, using the ELECTROFUEL® concept. The main competitive advantage of this project lies in building an innovative technology product from RE (renewable energy) power in remote locations, for example islands or mountain villages, as an alternative for energy storage under mobility constraints.
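The correlation between applied current and gas flow rate has a simple theoretical baseline in Faraday's law of electrolysis (a generic upper-bound estimate, not the paper's measured values): for an electrode reaction transferring $z$ electrons per molecule of gas,

$$\dot{n} = \frac{I}{zF}, \qquad F \approx 96485\ \mathrm{C\ mol^{-1}},$$

so 1 A of electrolysis current yields at most about $5.2\ \mu\mathrm{mol\ s^{-1}}$ of hydrogen ($z = 2$), roughly 7 mL/min at standard conditions; measured flow rates below this bound reflect the current efficiency of the cell.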
Abstract:
Single-processor architectures are unable to provide the required performance for high-performance embedded systems. Parallel processing based on general-purpose processors can achieve these performances, at the cost of a considerable increase in required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA in the execution of several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.
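A minimal sketch of how the per-core and array-level configuration space described here might be modelled (the parameter names, default values and topology choice are our illustrative assumptions, not the paper's actual configuration interface):

from dataclasses import dataclass, field

@dataclass
class CoreConfig:
    # Per-core parameters the abstract says are configurable.
    internal_mem_words: int = 1024            # size of the core's internal memory
    supported_ops: tuple = ("fadd", "fmul")   # supported floating-point operations
    ports: int = 2                            # number of interfacing ports

@dataclass
class ManyCoreConfig:
    cores: list = field(default_factory=list)
    interconnect: str = "mesh"                # configurable interconnection network

# Up to 32 floating-point cores reportedly fit a ZYNQ-7020.
fabric = ManyCoreConfig(cores=[CoreConfig() for _ in range(32)])
print(len(fabric.cores), "cores,", fabric.interconnect, "interconnect")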
Abstract:
The rapidly increasing computing power, available storage and communication capabilities of mobile devices make it possible to start processing and storing data locally, rather than offloading it to remote servers, allowing scenarios of mobile clouds without infrastructure dependency. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services over locally generated data. In this paper, we give an early overview of a distributed mobile system that allows accessing and processing of data distributed across mobile devices without an external communication infrastructure. Copyright © 2015 ICST.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of $O(n^{\lfloor d/2 \rfloor + 1})$, where $\lfloor x \rfloor$ is the largest integer less than or equal to $x$ and $n$ is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a $\log(\cdot)$ law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis of a lower-dimensional subspace using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
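Under the linear mixing model described above, each observed pixel is $x = Ma + n$, where the columns of $M$ are the endmember signatures, $a \geq 0$ with $\mathbf{1}^{T}a = 1$ is the abundance vector, and $n$ is noise. The following Python sketch illustrates the iterative orthogonal-projection step used for endmember extraction under the pure-pixel assumption; it is our illustrative reconstruction of that one step, not the authors' reference VCA implementation (all names are ours):

import numpy as np

def extract_endmembers(X, p, seed=0):
    """Pick p candidate endmembers from the pixel matrix X (bands x pixels)."""
    rng = np.random.default_rng(seed)
    bands, _ = X.shape
    E = np.zeros((bands, 0))                 # endmembers found so far (columns)
    indices = []
    for _ in range(p):
        w = rng.standard_normal(bands)       # random direction
        if E.shape[1] > 0:
            Q, _r = np.linalg.qr(E)          # orthonormal basis of span(E)
            w -= Q @ (Q.T @ w)               # keep only the part orthogonal to span(E)
        f = w / np.linalg.norm(w)
        proj = f @ X                         # project every pixel onto f
        j = int(np.argmax(np.abs(proj)))     # extreme of the projection
        E = np.column_stack([E, X[:, j]])    # its pixel is the next endmember
        indices.append(j)
    return E, indices

# Tiny synthetic check: mix 3 made-up signatures and append the pure pixels
# so that the pure-pixel assumption holds.
rng = np.random.default_rng(1)
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.2, 0.9]])              # 4 bands x 3 endmembers (illustrative)
A = rng.dirichlet(np.ones(3), size=200).T    # abundances: nonnegative, sum to 1
X = np.column_stack([M @ A, M])
E, idx = extract_endmembers(X, 3)
print(idx)                                   # indices of the recovered pure pixels

Because a linear functional over a convex data cloud attains its extremes at the vertices, each projection extreme is a vertex of the simplex, i.e., a pure pixel; forcing each new direction orthogonal to the endmembers already found prevents re-selecting them.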