10 results for Paterson Region (N.J.)--Maps, Outline and base.

in Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

100.00%

Publisher:

Abstract:

The study of economic systems has generated deep interest in exploring the complexity of chaotic motions in economics. Due to important developments in nonlinear dynamics, the last two decades have witnessed a strong revival of interest in nonlinear endogenous chaotic business models. The inability to predict the behavior of dynamical systems in the presence of chaos suggests the application of chaos control methods when regular behavior is desired. In the present article, we study a specific economic model from the literature. More precisely, a system of three ordinary differential equations gathers the variables of profits, reinvestments, and financial flow of borrowings in the structure of a firm. Firstly, using results from symbolic dynamics, we characterize the topological entropy and the parameter-space ordering of kneading sequences associated with one-dimensional maps that reproduce significant aspects of the model dynamics. The analysis of the variation of this numerical invariant, in a realistic region of the system's parameter space, allows us to quantify and distinguish different chaotic regimes. Finally, we show that the complicated behavior arising from the chaotic firm model can be controlled without changing its original properties and that the dynamics can be turned into a desired attracting time-periodic motion (a stable steady state or a regular cycle). The orbit stabilization is illustrated by the application of a feedback control technique initially developed by Romeiras et al. [1992]. This work provides another illustration of how our understanding of economic models can be enhanced by the theoretical and numerical investigation of nonlinear dynamical systems modeled by ordinary differential equations.
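As a rough illustration of the symbolic-dynamics step described above, the sketch below estimates the topological entropy of a one-dimensional unimodal map as the growth rate of the lap numbers of its iterates (the Misiurewicz-Szlenk characterization). The logistic map is used here only as a hypothetical stand-in for the one-dimensional maps derived from the firm model; the map, parameter value and grid size are illustrative assumptions, not the article's actual reductions.

import numpy as np

def lap_count(f, n, num_points=200_000):
    # Count monotone laps of the n-th iterate of f on [0, 1] by detecting
    # sign changes of successive differences on a fine grid.
    y = np.linspace(0.0, 1.0, num_points)
    for _ in range(n):
        y = f(y)
    d = np.diff(y)
    sign_changes = np.count_nonzero(np.sign(d[1:]) != np.sign(d[:-1]))
    return sign_changes + 1  # number of monotone pieces

def topological_entropy(f, n_max=10):
    # h_top is estimated as the slope of log(lap number) versus iterate order.
    n = np.arange(1, n_max + 1)
    laps = np.array([lap_count(f, k) for k in n])
    return np.polyfit(n, np.log(laps), 1)[0]

# Illustrative unimodal map; for r = 4 the estimate approaches log 2 ~ 0.693.
logistic = lambda x, r=4.0: r * x * (1.0 - x)
print(topological_entropy(logistic))

Comparing this numerical invariant across a parameter region is what allows the different chaotic regimes of the model to be quantified and distinguished, as the abstract notes.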

Relevance:

100.00%

Publisher:

Abstract:

We describe the Lorenz links generated by renormalizable Lorenz maps with reducible kneading invariant (K(f)-, K(f)+) = (X, Y) * (S, W) in terms of the links corresponding to each factor. This yields a new kind of operation that permits us to generate new knots and links from those corresponding to the factors of the *-product. Using this result, we obtain explicit formulas for the genus and the braid index of these renormalizable Lorenz knots and links. We then obtain explicit formulas for sequences of these invariants, associated with sequences of renormalizable Lorenz maps with kneading invariant (X, Y) * (S, W)*n (the n-fold *-product), concluding that both grow exponentially. This is especially relevant, since it is known that topological entropy is constant on the archipelagoes of renormalization.

Relevance:

100.00%

Publisher:

Abstract:

Over the past decade, scientists have been called upon to participate more actively in public education and outreach (E&O). This is particularly true in fields of significant societal impact, such as earthquake science. The local earthquake risk culture plays a role in the way the public engages with educational efforts. In this article, we describe an adapted E&O program for earthquake science and risk. The program is tailored for a region of slow tectonic deformation, where large earthquakes are extreme events that occur with long return periods. The adapted program has two main goals: (1) to increase the awareness and preparedness of the population regarding earthquakes and related risks (tsunamis, liquefaction, fires, etc.), and (2) to increase the quality of earthquake science education, so as to attract talented students to the geosciences. Our integrated program relies on activities tuned to different population groups with different interests and abilities, namely young children, teenagers, young adults, and professionals.

Relevance:

100.00%

Publisher:

Abstract:

The effects of the Miocene to Present compression in the Tagus Abyssal Plain are mapped using the most up-to-date multi-channel seismic reflection and refraction data available to the scientific community. Correlation of the rift-basin fault pattern with the deep crustal structure is presented along seismic line IAM-5. Four structural domains were recognized. In the oceanic realm, mild deformation concentrates in Domain 1, adjacent to the Tore-Madeira Rise. Domain 2 is characterized by the absence of shortening structures, except near the ocean-continent transition (OCT), implying that Miocene deformation did not propagate into the Abyssal Plain. In Domain 3 we distinguish three sub-domains: Sub-domain 3A, which coincides with the OCT; Sub-domain 3B, a highly deformed adjacent continental segment; and Sub-domain 3C. The Miocene tectonic inversion is mainly accommodated in Domain 3 by oceanward-directed thrusting at the ocean-continent transition and continentward-directed thrusting on the continental slope. Domain 4 corresponds to the non-rifted continental margin, where only minor extensional and shortening deformation structures are observed. Finite-element numerical models address the response of the various domains to the Miocene compression, emphasizing the long-wavelength differential vertical movements and the role of possible rheological contrasts. The concentration of the Miocene deformation in the transitional zone (TC), which comprises Sub-domain 3A and part of Sub-domain 3B, is a result of two main factors: (1) focusing of compression in an already stressed region due to plate curvature and sediment loading; and (2) rheological weakening. We estimate that the frictional strength in the TC is reduced by 30% relative to the surrounding regions. A model of compressive deformation propagation by means of horizontal impingement of the middle continental crust rift wedge and horizontal shearing on serpentinized mantle in the oceanic realm is presented. This model is consistent with both the geological interpretation of the seismic data and the results of the numerical modelling.

Relevance:

100.00%

Publisher:

Abstract:

Throughout the world, epidemiological studies have been established to examine the relationship between air pollution and mortality rates and adverse respiratory health effects. However, despite years of discussion, the correlation between adverse health effects and atmospheric pollution remains controversial, partly because these studies are frequently restricted to small and well-monitored areas. Monitoring air pollution is complex due to the large spatial and temporal variations of pollution phenomena, the high costs of recording instruments, and the low sampling density of a purely instrumental approach. Therefore, together with traditional instrumental monitoring, bioindication techniques allow the mapping of pollution effects over wide areas with a high sampling density. In this study, instrumental and biomonitoring techniques were integrated to support an epidemiological study to be developed in an industrial area located in Gijon, on the coast of central Asturias, Spain. Three main objectives were proposed: (i) to analyze temporal patterns of PM10 concentrations in order to apportion emission sources, (ii) to investigate spatial patterns of lichen conductivity to identify the impact of the studied industrial area on air quality, and (iii) to establish relationships between lichen conductivity and site-specific characteristics. Samples of the epiphytic lichen Parmelia sulcata were transplanted in a grid of 18 by 20 km with the industrial area at its center. Lichens were exposed for a 5-month period starting in April 2010. After exposure, lichen samples were soaked in 18-MΩ water to determine water electrical conductivity and, consequently, lichen vitality and cell damage. A marked decreasing gradient of lichen conductivity with distance from the emitting sources was observed. Transplants from a sampling site close to the industrial area reached values 10-fold higher than levels far from it. This finding showed that lichens reacted physiologically to the polluted industrial area, as evidenced by increased conductivity correlated with contamination level. The integration of temporal PM10 measurements and analysis of wind direction corroborated the importance of this industrialized region for air quality and identified the relevance of traffic for the urban area.
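For orientation, a minimal sketch of how such a conductivity-versus-distance gradient can be summarized by fitting a simple decay curve; the data values, the exponential form and the parameter names are illustrative assumptions, not the measurements or model of this study.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical transplant data: distance from the industrial source (km) and
# water electrical conductivity of the soaked lichen samples (uS/cm).
distance_km = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
conductivity = np.array([210.0, 160.0, 110.0, 60.0, 40.0, 30.0, 25.0])

def decay(d, background, amplitude, length_scale):
    # Background conductivity plus an exponentially decaying source contribution.
    return background + amplitude * np.exp(-d / length_scale)

params, _ = curve_fit(decay, distance_km, conductivity, p0=(20.0, 200.0, 3.0))
background, amplitude, length_scale = params
print(f"background ~ {background:.0f} uS/cm, decay length ~ {length_scale:.1f} km")

A fitted decay length of a few kilometres, for example, would quantify how quickly the physiological impact on the transplants falls off away from the emitting sources.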

Relevance:

100.00%

Publisher:

Abstract:

We have identified a common region of allelic deletion in the q26 region of chromosome 10 in endometrial carcinomas, which has previously been reported as a potential target of genetic alterations related to this neoplasia. An allelotyping analysis of 19 pairs of tumoral and non-tumoral samples was performed using seven polymorphic microsatellite markers mapping to the 10q26 chromosomal region. Loss of heterozygosity for one or more loci was detected in 29% of the endometrial carcinoma samples. The observed pattern of loss enabled the identification of a 3.5 Mb common deleted region located between the D10S587 and D10S186 markers. An additional result from an endometrial sample with evidence of an RER phenotype may suggest a more centromeric region of loss within the above-mentioned interval. This 401.84 kb interval, flanked by the D10S587 and D10S216 markers, may be a plausible location for a putative suppressor gene involved in early-stage endometrial carcinogenesis.
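As background on how loss of heterozygosity (LOH) is typically scored from microsatellite data, the sketch below computes the usual allelic-imbalance ratio for an informative (heterozygous) marker in a tumor/normal pair; the peak intensities and the 0.5/2.0 threshold reflect a common convention and are illustrative assumptions, not values taken from this study.

def loh_ratio(tumor_a1, tumor_a2, normal_a1, normal_a2):
    # Tumor allele-intensity ratio normalized by the matched normal ratio.
    return (tumor_a1 / tumor_a2) / (normal_a1 / normal_a2)

def has_loh(ratio, threshold=0.5):
    # A commonly used rule: call LOH when one allele's signal drops to less
    # than about half of its expected level (ratio <= 0.5 or >= 2.0).
    return ratio <= threshold or ratio >= 1.0 / threshold

# Hypothetical peak intensities for one marker (e.g., D10S587) in one pair.
print(has_loh(loh_ratio(800.0, 2100.0, 1500.0, 1400.0)))  # True: allele 1 reduced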

Relevance:

100.00%

Publisher:

Abstract:

Master's degree in Cardiovascular Diagnostic and Intervention Technology - Specialization branch: Cardiovascular Intervention

Relevance:

100.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
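To make the pure-pixel idea behind PPI, N-FINDR and VCA concrete, the following sketch extracts candidate endmembers by repeatedly projecting the data onto a direction orthogonal to the endmembers already found and keeping the most extreme pixel. It is a deliberately simplified, illustrative version under the pure-pixel assumption, with toy synthetic data; it omits the subspace identification, noise estimation and exact projection choices of the actual VCA algorithm.

import numpy as np

def extract_endmembers(Y, p, seed=0):
    # Y: (bands x pixels) spectral data; p: number of endmembers to extract.
    bands, _ = Y.shape
    E = np.zeros((bands, p))                 # extracted endmember signatures
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(bands)           # initial projection direction
    for k in range(p):
        idx = np.argmax(np.abs(f @ Y))       # pixel most extreme along f
        E[:, k] = Y[:, idx]
        # Next direction: a random vector projected onto the orthogonal
        # complement of the subspace spanned by the endmembers found so far.
        A = E[:, :k + 1]
        proj = A @ np.linalg.pinv(A)
        f = rng.standard_normal(bands)
        f = f - proj @ f
    return E

# Toy usage: 3 synthetic signatures mixed with abundances on the simplex.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                      # true signatures (50 bands)
S = rng.dirichlet(np.ones(3), size=1000).T   # abundances sum to one
Y = M @ S
print(extract_endmembers(Y, 3).shape)        # (50, 3)

Because the toy abundances rarely contain exactly pure pixels, the extracted columns are only the most extreme observed mixtures; when pure pixels are present, as the pure-pixel algorithms assume, they coincide with the endmember signatures.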

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this thesis is to obtain a direct relationship between the composition of the liquefied petroleum gases (LPG), propane, n-butane and isobutane, used as aerosol propellants in a one-component polyurethane can, and the properties of the sprayed foams. The main requirement for the resulting foams is good performance at low temperatures (-10 °C), which is why they are called winter foams. A foam is considered to perform well if, at -10/-10 °C (can/spray temperature), it shows no glass bubbles, base holes or cell collapse. The foams should also have sprayed densities in the mould at +23/+23 °C below 30 g/L, a yield above 30 L, good dimensional stability and a foam output rate at +5/+5 °C above 5 g/s.

The experimental trials were carried out at +23/+23 °C, +5/+5 °C and -10/-10 °C. At each temperature, the developed foams were subjected to tests to determine their quality. These include the so-called Quick Tests (QT): spraying the foams on paper and in the mould at the above temperatures. The paper and mould samples are analysed in particular for glass bubbles, cell collapse, base holes, cell structure and cutting shrinkage, among other properties. The QT also include the analysis of the density in the mould (ODM) and the study of the foam output rate. In addition to the QT, tests were performed on the dimensional stability of the foams, physical compression and adhesion tests, foam expansion tests after spraying, and foam yield per can. In all trials, an adapter tube fitted to the can valve was used as the spraying method, and the proportion of raw materials was kept constant (except for the gases under study).

The experiments began with the study of LPGs available on the aerosol market. These showed that the LPG propane/n-butane/isobutane (30/0/70 w/w%) produces the best winter foams at -10/-10 °C, reducing the glass bubbles, base holes and cell collapse produced by the other LPGs used as aerosols in polyurethane cans. Subsequent tests aimed to study the direct influence of each gas (propane, n-butane and isobutane) on the foams. For this purpose, two reference formulations from the study with commercial LPGs were used: 7396 (30/0/70 w/w%) and 7442 (0/0/100 w/w%). From these results it was concluded that n-butane gives the foams poor properties at -10/-10 °C, forming large amounts of glass bubbles, base holes and cell collapse. The use of propane reduces the glass bubbles but, in turn, causes cell collapse. Isobutane, on the other hand, reduces cell collapse but not glass bubbles. The experimental results show that the output rate at +5/+5 °C and the foam density at +23/+23 °C are influenced by the LPG composition. Propane and n-butane increase the foam output rate of the cans and the foam density, in contrast to isobutane. Nevertheless, according to the results obtained, isobutane provides the best foam yields per can. We can conclude that LPGs containing about 30 w/w% propane (good output rates at +5/+5 °C and fewer glass bubbles at -10/-10 °C) and about 70 w/w% isobutane (good foam yields as well as less cell collapse at -10/-10 °C) produced the best foams.

Tests were also carried out on the influence of the amount of LPG present in a can. The analysis of the LPG volume used was based on the best foam obtained in the previous studies, 7396, with an LPG of (30/0/70 w/w%), and changes were made to the volume of LPG gas present in the prepolymer. The study concluded that increasing the volume can decrease the foam density, while decreasing it increases the density. It also indicated that a poorly adjusted volume can cause poor foam properties. The economic analysis concluded that the cost of the foams with more LPG in their formulations falls by about 3% when the LPG volume in the prepolymer is increased by about 8%. This cost reduction is due to the fact that an increase in gas volume implies a decrease in the amount of the other, more expensive raw materials, since the total usable volume of the can must always be kept at 750 mL.

In order to improve the quality of foam 7396 (30/0/70 w/w%) obtained in the previous trials, HFC-152a (1,1-difluoroethane) was added to formulation 7396. The results show that foams with poor properties are formed, especially at -10/-10 °C, although it provided an excellent can shaking rate. A brief cost analysis indicates that its use is not advisable given the results obtained, as it does not provide a favourable cost/benefit balance. The three best foams from all the studies were compared with a winter foam available on the market. Foams 7396 and 7638, with a volume of 27% in the prepolymer and LPG compositions of (30/0/70 w/w%) and (13.7/0/86.3 w/w%) respectively, and 7690, with 37% volume in the prepolymer and LPG (30/0/70 w/w%), generally showed better results than the benchmark foam. However, their shaking rates at -10/-10 °C were considerably lower than those of the benchmark composition.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents an IEEE 802.11p full-stack prototype implementation for data exchange among vehicles and between vehicles and roadside infrastructure. The prototype architecture is based on FPGAs for Intermediate Frequency (IF) and baseband processing, using 802.11a-based transceivers for the RF interfaces. Power amplification was also addressed, using both commercial and in-house solutions. This implementation aims to provide technical solutions for the Intelligent Transportation Systems (ITS) field, namely for tolling and traffic-management related services, in order to promote safety, mobility and driving comfort through dynamic, real-time cooperation among vehicles and/or between vehicles and infrastructure. The performance of the proposed scheme was tested under realistic urban and suburban driving conditions. Preliminary results are promising, since they comply with most of the IEEE 802.11p standard requirements.