959 results for geometric average
Abstract:
In this contribution, we investigate the low-temperature, low-density behaviour of dipolar hard-sphere (DHS) particles, i.e., hard spheres with dipoles embedded at their centres. We aim at describing the DHS fluid in terms of a network of chains and rings (the fundamental clusters) held together by branching points (defects) of different nature. We first introduce a systematic way of classifying inter-cluster connections according to their topology, and then employ this classification to analyse the geometric and thermodynamic properties of each class of defects, as extracted from state-of-the-art equilibrium Monte Carlo simulations. By computing the average density and energetic cost of each defect class, we find that the relevant contribution to inter-cluster interactions is indeed provided by (rare) three-way junctions and by four-way junctions arising from parallel or anti-parallel locally linear aggregates. All other (numerous) defects are either intra-cluster or associated with low cluster-cluster interaction energies, suggesting that these defects do not play a significant part in the thermodynamic description of the self-assembly processes of dipolar hard spheres. (C) 2013 AIP Publishing LLC.
Abstract:
OBJECTIVE: To analyze the effect of air pollution and temperature on mortality due to cardiovascular and respiratory diseases. METHODS: We evaluated the isolated and synergistic effects of temperature and particulate matter with aerodynamic diameter < 10 µm (PM10) on the mortality of individuals > 40 years old due to cardiovascular disease and of individuals > 60 years old due to respiratory diseases in São Paulo, SP, Southeastern Brazil, between 1998 and 2008. Three methodologies were used to evaluate the isolated associations: time-series analysis using a Poisson regression model, bidirectional case-crossover analysis matched by period, and case-crossover analysis matched by the confounding factor, i.e., average temperature or pollutant concentration. The graphical representation of the response surface, generated by the interaction term between these factors added to the Poisson regression model, was interpreted to evaluate the synergistic effect of the risk factors. RESULTS: No differences were observed between the results of the case-crossover and time-series analyses. The percentage change in the relative risk of cardiovascular and respiratory mortality was 0.85% (0.45;1.25) and 1.60% (0.74;2.46), respectively, for an increase of 10 μg/m3 in the PM10 concentration. The pattern of correlation of temperature with cardiovascular mortality was U-shaped and that with respiratory mortality was J-shaped, indicating an increased relative risk at high temperatures. The values of the interaction term indicated a higher relative risk for cardiovascular mortality at low temperatures and for respiratory mortality at high temperatures, when the pollution levels reached approximately 60 μg/m3. CONCLUSIONS: The positive association estimated in the Poisson regression model for pollutant concentration is not confounded by temperature, and the effect of temperature is not confounded by the pollutant levels in the time-series analysis. Simultaneous exposure to different levels of environmental factors can create synergistic effects that are as concerning as those caused by extreme concentrations.
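As a minimal, hedged illustration of the time-series approach described in this abstract, the following Python sketch fits a Poisson regression with a pollutant-temperature interaction term to synthetic daily data; the variable names, the day-of-week control and all numbers are illustrative assumptions, not the authors' actual model.

    # Poisson time-series sketch: daily mortality vs PM10 and temperature,
    # with an interaction term (synthetic data; illustrative specification).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_days = 365
    df = pd.DataFrame({
        "deaths": rng.poisson(30, n_days),      # daily cardiovascular deaths
        "pm10": rng.normal(50, 15, n_days),     # daily mean PM10 (ug/m3)
        "temp": rng.normal(20, 5, n_days),      # daily mean temperature (C)
    })
    df["dow"] = np.arange(n_days) % 7           # day-of-week control

    model = smf.glm("deaths ~ pm10 * temp + C(dow)",
                    data=df, family=sm.families.Poisson()).fit()

    # Percentage change in relative risk for a 10 ug/m3 increase in PM10,
    # evaluated at the mean temperature (analogous to the reported 0.85%).
    beta = model.params["pm10"] + model.params["pm10:temp"] * df["temp"].mean()
    print(100 * (np.exp(10 * beta) - 1))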
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
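The linear mixing model and the constrained least-squares solution cited above can be illustrated with a short, hedged Python sketch; the synthetic endmember matrix, the sum-to-one augmentation weight and the use of SciPy's nnls are assumptions for illustration only, not the chapter's method.

    # Linear mixing model y = M a + noise, with abundances recovered by a
    # fully constrained least-squares (nonnegative, sum-to-one) inversion.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(7)
    n_bands, n_end = 50, 3
    M = rng.random((n_bands, n_end))             # known endmember signatures
    a_true = np.array([0.6, 0.3, 0.1])           # abundance fractions (sum to 1)
    y = M @ a_true + 0.001 * rng.standard_normal(n_bands)   # mixed pixel

    # Enforce the sum-to-one constraint by appending a weighted row of ones,
    # then solve the nonnegative least-squares problem.
    delta = 10.0                                 # weight of the sum-to-one row
    M_aug = np.vstack([M, delta * np.ones((1, n_end))])
    y_aug = np.append(y, delta)
    a_hat, _ = nnls(M_aug, y_aug)
    print(a_hat)                                 # close to [0.6, 0.3, 0.1]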
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented towards real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
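The skewer-projection step of the PPI algorithm described above can be sketched in a few lines of Python; the data, the number of skewers and the omission of the MNF preprocessing are illustrative assumptions.

    # PPI-style pure-pixel scoring: project every spectral vector onto many
    # random directions and count how often each pixel is an extreme.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pixels, n_bands, n_skewers = 1000, 50, 500
    X = rng.random((n_pixels, n_bands))          # spectral vectors (rows)

    scores = np.zeros(n_pixels, dtype=int)
    for s in rng.normal(size=(n_skewers, n_bands)):
        proj = X @ s                             # projection onto one skewer
        scores[proj.argmin()] += 1               # extreme at one end
        scores[proj.argmax()] += 1               # extreme at the other end

    # Pixels with the highest counts are taken as the purest candidates.
    print(np.argsort(scores)[::-1][:10])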
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
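A minimal, hedged sketch of the iterative orthogonal-projection idea attributed to VCA above; it omits the noise estimation, the SNR-dependent projection and the signal-subspace identification of the actual algorithm, and all names and data are illustrative.

    # Simplified endmember extraction: repeatedly project the data onto a
    # direction orthogonal to the endmembers found so far and keep the
    # pixel at the extreme of the projection.
    import numpy as np

    def extract_endmembers(X, p, seed=0):
        # X: (n_pixels, n_bands) spectra; p: number of endmembers.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        E = np.zeros((p, d))                     # extracted signatures
        A = np.zeros((d, p))                     # columns span found endmembers
        for i in range(p):
            w = rng.normal(size=d)
            if i > 0:
                Q, _ = np.linalg.qr(A[:, :i])    # orthonormal basis of span
                w = w - Q @ (Q.T @ w)            # make w orthogonal to it
            proj = X @ w
            idx = int(np.argmax(np.abs(proj)))   # extreme of the projection
            E[i] = X[idx]
            A[:, i] = X[idx]
        return E

    X = np.random.default_rng(2).random((500, 30))
    print(extract_endmembers(X, p=4).shape)      # (4, 30)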
Abstract:
The concepts and instruments required for the teaching and learning of geometric optics are introduced in the didactic process without a proper didactic transposition. This claim is supported by ample evidence of both widespread and deep-rooted alternative concepts on the topic. Didactic transposition is a theory that originated in reflections on the teaching and learning process in mathematics but has been used in other disciplinary fields. It is used in this work to clear up the main obstacles in the teaching-learning process of geometric optics. We proceed to argue that, since Newton's approach to optics in Book I of his Opticks is independent of the corpuscular or undulatory nature of light, it is the most suitable for a constructivist learning environment. However, Newton's theory must be subjected to a proper didactic transposition to help overcome the aforementioned alternative concepts. We then describe our didactic transposition, which creates the knowledge to be taught through a dialogical process between students' previous knowledge, the history of optics, and the desired outcomes on geometrical optics in an elementary pre-service teacher training course. Finally, we use the scheme-facet structure of knowledge both to analyse and discuss our results and to illuminate shortcomings that must be addressed in the next stage of the inquiry.
Abstract:
The aim of this work was to compare the evolution of chronic chagasic untreated patients (UTPs) with that of benznidazole- or nifurtimox-treated patients (TPs). A longitudinal study in a low-endemicity area (Santa Fe city, Argentina) was performed over an average period of 14 years. Serological and parasitological analyses, together with clinical exams, ECG and chest X-ray, were carried out. At the onset, 19/198 infected patients showed chagasic cardiomyopathy (CrChM) while 179 were asymptomatic. In this latter group, the frequency of CrChM during the follow-up was lower in TPs than in UTPs (3.2% vs 7%). Within the CrChM group, 2/5 TPs showed aggravated myopathy, whereas this happened in 9/14 UTPs. Comparing the clinical evolution of all patients, 5.9% of TPs and 13% of UTPs had an unfavourable evolution, but the difference is not statistically significant. Serological titers were assessed by IIF. Titers equal to or lower than 1/64 were obtained in 86% of the TPs, but in only 38% of UTPs. The differences were statistically significant (geometric mean: 49.36 vs. 98.2). Antiparasitic assessment of the drugs (by xenodiagnosis) proved to be effective; its low sensitivity in chronic chagasic patients must, however, be borne in mind. Although treated patients showed a better clinical evolution and lower antibody levels than untreated ones, further research is needed to improve therapeutic guidelines, according to the risk/benefit equation and based on scientific and ethical principles.
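For reference, the geometric mean quoted for the serological titers is simply the exponential of the mean log titer; the short Python computation below uses made-up titer values, not the study's data.

    # Geometric mean of reciprocal IIF titers (illustrative values only).
    import numpy as np
    titers = np.array([32, 64, 64, 128, 32, 64])
    print(np.exp(np.log(titers).mean()))         # geometric mean of the titers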
Abstract:
Dissertation presented to obtain the degree of Master in Chemical and Biochemical Engineering.
Abstract:
The oldest Portuguese share index still being calculated is the BVL/PSI-General, which started its daily series on 5/Jan/1988 with a base value of 1000 points. Every day, a single value is computed based on the closing prices of all the shares included in the sample. In addition, all corporate events affecting the price of any share beyond market sentiment are taken into account through proper adjustments, either in the numerator or the denominator of the formula. However, for dates before January 1988 there is nothing comparable to this index, since the two different series known either never disclosed the methodology adopted to calculate them or followed solutions not compatible with the above index. The present paper explains the solutions adopted to replicate, as closely as possible, the methodology of the BVL-General index on the main market of the Lisbon Exchange for the period 1978-1987. This is the first estimate of the historical Equity Risk Premium in Portugal above the short-term risk-free rate, from the re-opening of the market following the Carnation Revolution (and the accompanying nationalizations) to the present. In showing a value of the same order of magnitude as those found in other countries, the paper invites further studies on the effects of political decisions such as privatizations and joining the European Union.
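A minimal, hedged sketch of a divisor-adjusted index of the kind described above (base value 1000, adjustments in the denominator for corporate events); the prices, share counts and the adjustment event are invented and do not reproduce the BVL/PSI-General methodology in detail.

    # Divisor-adjusted capitalization index: the divisor is rescaled on a
    # corporate event so that only market movements change the index level.
    import numpy as np

    prices = np.array([10.0, 20.0, 5.0])         # closing prices
    shares = np.array([1e6, 5e5, 2e6])           # shares outstanding

    base_cap = float(prices @ shares)
    divisor = base_cap / 1000.0                  # index starts at 1000 points

    def index_level(p, s, d):
        return float(p @ s) / d

    print(index_level(prices, shares, divisor))  # 1000.0 at the base date

    # Corporate event (e.g., a capital increase) adds market capitalization
    # without a price move: adjust the divisor so the index is unchanged.
    new_shares = shares.copy()
    new_shares[0] += 2e5
    divisor *= float(prices @ new_shares) / float(prices @ shares)
    print(index_level(prices, new_shares, divisor))  # still 1000.0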
Abstract:
The Portuguese Navy is responsible for managing Portugal's Exclusive Economic Zone, ensuring its security against criminal activities. To support this task, the Navy uses the Oversee system, which monitors the position of every vessel in the area of responsibility, allowing rapid intervention when and where necessary. However, the system depends on constant periodic transmissions from the vessels to operate correctly: if the transmissions are interrupted, whether deliberately or accidentally, the system can no longer locate the vessels, hampering the Navy's intervention. To address this gap, it is proposed to extend the Oversee system with the ability to predict a vessel's future positions based on its trajectory up to the moment the transmissions ceased. Given the large volumes of data generated by the system (position histories), Artificial Intelligence offers a possible solution to this problem. Considering the fast-response requirements of the problem, the Geometric Semantic Genetic Programming algorithm based on the work of Vanneschi et al. is a promising candidate, having already produced good results on similar problems. This thesis aims to integrate the developed Geometric Semantic Genetic Programming algorithm with the Oversee system in order to provide it with predictive capabilities. Additionally, a performance analysis will be carried out to determine the ideal parametrisation of the algorithm. The goal of this thesis is to provide the Portuguese Navy with a tool capable of supporting the control of the Portuguese Exclusive Economic Zone, allowing correct intervention by the Navy in cases where the current system could not determine the correct position of the vessel in question.
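As a hedged illustration of the Geometric Semantic Genetic Programming operators mentioned above, the sketch below applies a simplified geometric semantic crossover and mutation directly to semantics (output vectors); the scalar random weight replaces the random trees of the original operators, and all data and names are illustrative, not the implementation integrated with Oversee.

    # Simplified geometric semantic operators acting on semantics vectors.
    import numpy as np

    rng = np.random.default_rng(3)

    def gs_crossover(sem1, sem2, rng):
        # Offspring semantics lie on the segment between the parents'.
        r = rng.random()                         # random weight in [0, 1]
        return r * sem1 + (1.0 - r) * sem2

    def gs_mutation(sem, rng, step=0.1):
        # Offspring semantics are a bounded perturbation of the parent's.
        r1 = rng.random(sem.shape)
        r2 = rng.random(sem.shape)
        return sem + step * (r1 - r2)

    target = rng.random(20)                      # e.g., positions to predict
    p1, p2 = rng.random(20), rng.random(20)      # semantics of two parents
    child = gs_mutation(gs_crossover(p1, p2, rng), rng)
    print(np.abs(child - target).mean())         # offspring error (fitness)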
Abstract:
This paper aims at developing a collision prediction model for three-leg junctions located on national roads (NR) in Northern Portugal. The focus is to identify factors that contribute to collision-type crashes at those locations, mainly factors related to road geometric consistency, since the literature on these is scarce, and to investigate the impact of three modeling methods (generalized estimating equations, random-effects negative binomial models and random-parameters negative binomial models) on the factors of those models. The database used included data from 2008 to 2010 for 177 three-leg junctions. It was split into three groups of contributing factors which were tested sequentially for each of the adopted models: first, traffic only; then, traffic and the geometric characteristics of the junctions within their area of influence; and, lastly, factors describing the difference between the geometric characteristics of the segments bordering the junctions' area of influence and the segment included in that area. The choice of the best modeling technique was supported by the result of a cross-validation carried out to ascertain the best model for each of the three sets of contributing factors. The models fitted with random-parameters negative binomial regression had the best performance in this process. In the best models obtained for every modeling technique, the characteristics of the road environment, including proxy measures for geometric consistency, along with traffic volume, contribute significantly to the number of collisions. Both the variables concerning the junctions and the national road segments within their area of influence, as well as the variations of those characteristics relative to the roadway segments bordering that area, proved relevant; there is therefore a clear need to incorporate the effect of geometric consistency in safety studies of three-leg junctions.
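A minimal, hedged sketch of a negative binomial collision-frequency model of the kind compared in the paper, fitted with statsmodels; the covariates (log AADT, lane width, a curvature-change consistency proxy) and all data are illustrative placeholders, and the GEE, random-effects and random-parameters variants are not shown.

    # Plain negative binomial collision-frequency model on synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 177
    df = pd.DataFrame({
        "collisions": rng.poisson(2, n),              # collision counts
        "log_aadt": np.log(rng.uniform(500, 20000, n)),
        "lane_width": rng.normal(3.5, 0.3, n),
        "curv_change": rng.normal(0, 1, n),           # consistency proxy
    })

    nb = smf.glm("collisions ~ log_aadt + lane_width + curv_change",
                 data=df,
                 family=sm.families.NegativeBinomial(alpha=1.0)).fit()
    print(nb.summary())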
Abstract:
OBJECTIVE: To identify left ventricular geometric patterns in hypertensive patients on echocardiography, and to correlate those patterns with casual blood pressure measurements and with the parameters obtained on 24-hour ambulatory blood pressure monitoring. METHODS: We studied sixty hypertensive patients, grouped according to the Joint National Committee stages of hypertension. Using single- and two-dimensional Doppler echocardiography, we analyzed the left ventricular mass and the geometric patterns through the correlation of the left ventricular mass index and relative wall thickness. On ambulatory blood pressure monitoring, we assessed the means and pressure loads in the different geometric patterns detected on echocardiography. RESULTS: We identified three left ventricular geometric patterns: 1) concentric hypertrophy, in 25% of the patients; 2) concentric remodeling, in 25%; and 3) normal geometry, in 50%. Casual systolic blood pressure was higher in the group with concentric hypertrophy than in the other groups (p=0.001). Mean systolic pressure in the 24-hour, daytime and nighttime periods was also higher in patients with concentric hypertrophy, as compared to the other groups (p=0.003, p=0.004 and p=0.007). Daytime systolic load and nighttime diastolic load were higher in patients with concentric hypertrophy (p=0.004 and p=0.01, respectively). CONCLUSIONS: Left ventricular geometric patterns show significant correlation with casual systolic blood pressure, and with means and pressure loads on ambulatory blood pressure monitoring.
Abstract:
PURPOSE: To evaluate two left ventricular mass index (LVMI) normality criteria for the prevalence of left ventricular geometric patterns in a hypertensive population (HT). METHODS: 544 essential hypertensive patients were evaluated by echocardiography, and two different left ventricular hypertrophy criteria were applied: 1) classic: men, 134 g/m² and women, 110 g/m²; 2) obtained from the 95th percentile of LVMI in a normotensive population (NT). RESULTS: The prevalence of the four left ventricular geometric patterns, respectively for criteria 1 and 2, was: normal geometry, 47.7% and 39.3%; concentric remodeling, 25.4% and 14.3%; concentric hypertrophy, 18.4% and 27.7%; and eccentric hypertrophy, 8.8% and 16.7%, which conferred abnormal geometry on 52.6% and 60.7% of the hypertensive patients. The comparative analysis between the NT group and the normal-geometry hypertensive group according to criterion 1 detected significant structural differences (*p < 0.05): LVMI, 78.4 ± 1.50 vs 85.9 ± 0.95 g/m²*; posterior wall thickness, 8.5 ± 0.1 vs 8.9 ± 0.05 mm*; left atrium, 33.3 ± 0.41 vs 34.7 ± 0.30 mm*. With criterion 2, significant structural differences between the two groups were not observed. CONCLUSION: The use of a reference-population-based criterion increased the prevalence of abnormal left ventricular geometry in hypertensive patients and seemed more appropriate for left ventricular hypertrophy detection and risk stratification.
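The classification underlying the four geometric patterns can be written as a short rule on LVMI and relative wall thickness (RWT); the LVMI cutoffs below are the classic criterion quoted in the abstract (134 g/m² for men, 110 g/m² for women), while the RWT cutoff of 0.45 is an assumed conventional value not taken from the text.

    # Illustrative left ventricular geometric pattern classification.
    def lv_geometry(lvmi, rwt, sex, rwt_cutoff=0.45):
        lvh = lvmi > (134 if sex == "M" else 110)     # hypertrophy by LVMI
        if lvh and rwt >= rwt_cutoff:
            return "concentric hypertrophy"
        if lvh:
            return "eccentric hypertrophy"
        if rwt >= rwt_cutoff:
            return "concentric remodeling"
        return "normal geometry"

    print(lv_geometry(lvmi=140, rwt=0.50, sex="M"))   # concentric hypertrophy
    print(lv_geometry(lvmi=95, rwt=0.40, sex="F"))    # normal geometry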
Abstract:
Magdeburg, Univ., Faculty of Natural Sciences, Diss., 2012
Abstract:
Crustacean growth studies typically use modal analysis rather than focusing on the growth of individuals. In the present work, we use geometric morphometrics to determine how organism shape and size vary during the life of the freshwater crab Aegla uruguayana Schmitt, 1942. A total of 66 individuals from diverse life cycle stages were examined daily and each exuvia was recorded. Digital images of the dorsal region of the cephalothorax were obtained for each exuvia and subsequently used to record landmark configurations. Moult increment and intermoult period were estimated for each crab. Differences in shape between crabs of different sizes (allometry) and sexes (sexual dimorphism; SD) were observed. Allometry was registered among specimens; however, SD was not statistically significant between crabs of a given size. The intermoult period increased as size increased, but the moult frequency was similar between the sexes. Regarding ontogeny, juveniles had a short, blunt rostrum, a robust forehead region, and a narrow cephalothorax. Unlike juvenile crabs, adults presented well-defined anterior and posterior cephalothorax regions, a long, stylised rostrum and a narrow forehead. Geometric morphometric methods were highly effective for the analysis of individual growth in aeglids and, by relying on exuvia analysis, avoided excessive handling of individuals.
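The landmark-based shape comparison at the core of geometric morphometrics can be hinted at with a two-configuration Procrustes superimposition; the 2-D landmark coordinates below are synthetic, not the study's cephalothorax data.

    # Procrustes superimposition removes translation, scale and rotation,
    # leaving only shape differences (disparity ~ 0 for a similarity copy).
    import numpy as np
    from scipy.spatial import procrustes

    rng = np.random.default_rng(5)
    ref = rng.random((8, 2))                          # 8 landmarks, 2-D
    R = np.array([[0.8, -0.6], [0.6, 0.8]])           # rotation matrix
    target = 1.5 * ref @ R + 0.3                      # scaled, rotated, shifted

    m1, m2, disparity = procrustes(ref, target)
    print(disparity)                                  # ~0: same shape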
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This relatively recent domain has seen very successful developments. Nevertheless, several empirical studies seem to show that the performance of such models is not always appropriate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as kurtosis and symmetry, are studied, as are two estimators (method of moments and maximum likelihood). Two statistical tests are presented: the first tests for homoskedasticity and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented in order to illustrate some of the theoretical results. An empirical study is undertaken for the DM-US exchange rate.
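For readers unfamiliar with this model family, a minimal simulation of the baseline ARCH(1) process (to which the proposed QMACH specification is an alternative) shows the excess kurtosis mentioned among the studied properties; the parameter values are illustrative.

    # ARCH(1) simulation: conditional variance depends on the last squared
    # innovation; the unconditional distribution is leptokurtic.
    import numpy as np

    rng = np.random.default_rng(6)
    T, omega, alpha = 2000, 0.1, 0.5
    eps = np.zeros(T)
    sigma2 = np.zeros(T)
    sigma2[0] = omega / (1 - alpha)               # unconditional variance
    for t in range(1, T):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

    x = eps[100:]                                 # drop burn-in
    excess_kurtosis = ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3
    print(excess_kurtosis)                        # > 0 even with Gaussian shocks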
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."