973 results for One-point Quadrature
Abstract:
Field observations of new particle formation and the subsequent particle growth are typically only possible at a fixed measurement location, and hence do not follow the temporal evolution of an air parcel in a Lagrangian sense. Standard analysis for determining formation and growth rates requires that the time-dependent formation rate and growth rate of the particles are spatially invariant; air parcel advection means that the observed temporal evolution of the particle size distribution at a fixed measurement location may not represent the true evolution if there are spatial variations in the formation and growth rates. Here we present a zero-dimensional aerosol box model coupled with one-dimensional atmospheric flow to describe the impact of advection on the evolution of simulated new particle formation events. Wind speed, particle formation rate and growth rate are input parameters that can vary as functions of time and location, with wind speed connecting location to time. The output simulates measurements at a fixed location; formation and growth rates of the particle mode can then be calculated from the simulated observations at a stationary point for different scenarios and compared with the ‘true’ input parameters. Hence, we can investigate how spatial variations in the formation and growth rates of new particles would appear in observations of particle number size distributions at a fixed measurement site. We show that the particle size distribution and growth rate at a fixed location depend on the formation and growth parameters upwind, even if local conditions do not vary. We also show that different input parameters may result in very similar simulated measurements. Erroneous interpretation of observations in terms of particle formation and growth rates, and of the time span and areal extent of new particle formation, is possible if these spatial effects are not accounted for.
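As an illustration of the kind of set-up described above, here is a minimal, hypothetical sketch (not the authors' model) of a zero-dimensional box model advected past a fixed station: the formation rate J(x, t), growth rate GR(x, t), wind speed U and all numerical values below are assumptions chosen only to show the structure.

```python
# Hypothetical sketch: a 0-D box model advected past a fixed station.
# Formation rate J(x, t) [cm^-3 s^-1] and growth rate GR(x, t) [nm h^-1]
# vary in space; wind speed U maps upwind distance to arrival time.
import numpy as np

U = 5.0                                # assumed wind speed [m s^-1]
dt = 60.0                              # time step [s]
t_grid = np.arange(0, 6 * 3600, dt)    # 6 h simulation window

def J(x, t):                           # assumed spatially varying formation rate
    return 1.0 if x < 10e3 else 0.1    # stronger formation within 10 km upwind

def GR(x, t):                          # assumed spatially varying growth rate
    return 3.0 if x < 10e3 else 1.0

def parcel_at_station(t_arrival):
    """Track one air parcel that arrives at the station (x = 0) at t_arrival."""
    n, dp = 0.0, 1.5                   # number concentration [cm^-3], diameter [nm]
    for t in t_grid[t_grid <= t_arrival]:
        x = U * (t_arrival - t)        # parcel position upwind of the station
        n += J(x, t) * dt              # new particle formation
        dp += GR(x, t) * dt / 3600.0   # condensational growth
    return n, dp

# "Measurements" at the fixed site: each arrival time samples a different parcel,
# so the apparent evolution at the station mixes spatial and temporal variation.
for t_arr in (3600, 2 * 3600, 4 * 3600):
    n, dp = parcel_at_station(t_arr)
    print(f"t = {t_arr / 3600:.0f} h: N = {n:.0f} cm^-3, Dp = {dp:.1f} nm")
```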
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M × 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
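A minimal sketch of the interface idea under assumed simplifications: two mock "black-box" sub-networks reduced to lumped resistive relations, coupled at a single point through the two interface unknowns (flow rate Q and pressure P), with the resulting non-linear system solved by a matrix-free Broyden-type method. All names and values are illustrative and are not the paper's implementation.

```python
# Hypothetical sketch: strong coupling of two "black-box" sub-networks at one
# coupling point. Unknowns at the interface are flow rate Q and pressure P;
# each sub-network contributes one residual equation.
import numpy as np
from scipy.optimize import broyden1   # matrix-free Broyden-type solver

# Mock sub-network solvers: in the real setting these would be full 1-D
# wave-propagation solves per time step, queried only through their residuals.
def subnet_left(Q, P):
    P_in, R_left = 100.0, 2.0          # assumed inlet pressure and lumped resistance
    return P_in - R_left * Q - P       # left network's interface equation

def subnet_right(Q, P):
    P_out, R_right = 10.0, 3.0         # assumed outlet pressure and lumped resistance
    return P - R_right * Q - P_out     # right network's interface equation

def residual(u):
    Q, P = u
    return np.array([subnet_left(Q, P), subnet_right(Q, P)])

# Solve the 2x2 (in general 2M x 2M) non-linear interface system to convergence,
# which reproduces the monolithic solution of the coupled network.
sol = broyden1(residual, np.array([1.0, 50.0]), f_tol=1e-10)
Q, P = sol
print(f"interface flow Q = {Q:.4f}, pressure P = {P:.4f}")
```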
Abstract:
Localization and Mapping are two of the most important capabilities for autonomous mobile robots and have been receiving considerable attention from the scientific computing community over the last 10 years. One of the most efficient methods to address these problems is based on the use of the Extended Kalman Filter (EKF). The EKF simultaneously estimates a model of the environment (map) and the position of the robot based on odometric and exteroceptive sensor information. As this algorithm demands a considerable amount of computation, it is usually executed on high-end PCs coupled to the robot. In this work we present an FPGA-based architecture for the EKF algorithm that is capable of processing two-dimensional maps containing up to 1.8k features in real time (14 Hz), a three-fold improvement over a Pentium M 1.6 GHz, and a 13-fold improvement over an ARM920T 200 MHz. The proposed architecture also consumes only 1.3% of the Pentium's and 12.3% of the ARM's energy per feature.
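For context, a brief hypothetical sketch of the EKF predict/update cycle that such an architecture accelerates, using standard textbook localization equations with a range-bearing observation of a known landmark; it does not reflect the FPGA design or the paper's map representation.

```python
# Illustrative EKF predict/update cycle for a planar robot pose (x, y, theta).
import numpy as np

def ekf_predict(x, P, u, Q):
    """Odometry motion model: u = (dx, dy, dtheta) in the robot frame."""
    c, s = np.cos(x[2]), np.sin(x[2])
    x_new = x + np.array([c * u[0] - s * u[1],
                          s * u[0] + c * u[1],
                          u[2]])
    F = np.array([[1, 0, -s * u[0] - c * u[1]],
                  [0, 1,  c * u[0] - s * u[1]],
                  [0, 0, 1]])
    return x_new, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Range-bearing observation of a landmark with known position."""
    dx, dy = landmark - x[:2]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                  [ dy / q,          -dx / q,         -1]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing innovation
    return x + K @ y, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.1
x, P = ekf_predict(x, P, u=np.array([1.0, 0.0, 0.05]), Q=np.eye(3) * 0.01)
x, P = ekf_update(x, P, z=np.array([5.0, 0.3]),
                  landmark=np.array([4.0, 3.0]), R=np.diag([0.1, 0.01]))
print(x)
```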
Abstract:
We investigate the combined influence of quenched randomness and dissipation on a quantum critical point with O(N) order-parameter symmetry. Utilizing a strong-disorder renormalization group, we determine the critical behavior in one space dimension exactly. For super-ohmic dissipation, we find a Kosterlitz-Thouless type transition with conventional (power-law) dynamical scaling. The dynamical critical exponent depends on the spectral density of the dissipative baths. We also discuss the Griffiths singularities, and we determine observables.
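For reference only (standard definitions, not taken from the abstract): conventional power-law dynamical scaling relates correlation time and correlation length through the dynamical exponent z, and dissipative baths are commonly parameterized by a spectral exponent s, with s > 1 in the super-ohmic case.

```latex
\xi_\tau \sim \xi^{\,z}, \qquad J(\omega) \propto \omega^{s}, \quad s > 1 \ \text{(super-ohmic dissipation)}
```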
Abstract:
In this work an efficient third-order non-linear finite difference scheme for solving adaptively hyperbolic systems of one-dimensional conservation laws is developed. The method is based on applying to the solution of the differential equation an interpolating wavelet transform at each time step, generating a multilevel representation for the solution, which is thresholded so that a sparse point representation is generated. The numerical fluxes obtained by a Lax-Friedrichs flux splitting are evaluated on the sparse grid by an essentially non-oscillatory (ENO) approximation, which chooses the locally smoothest stencil among all the possibilities for each point of the sparse grid. The time evolution of the differential operator is done on this sparse representation by a total variation diminishing (TVD) Runge-Kutta method. Four classical examples of initial value problems for the Euler equations of gas dynamics are accurately solved and their sparse solutions are analyzed with respect to the threshold parameters, confirming the efficiency of the wavelet transform as an adaptive grid generation technique.
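A much-reduced sketch of the underlying time stepping, under assumed simplifications: global Lax-Friedrichs flux splitting with first-order upwinding and the third-order TVD (SSP) Runge-Kutta scheme of Shu and Osher, applied to Burgers' equation on a uniform grid. The interpolating-wavelet sparse grid and the ENO stencil selection of the actual scheme are omitted here.

```python
# Simplified sketch: Lax-Friedrichs flux splitting + third-order TVD Runge-Kutta
# for Burgers' equation with periodic boundaries on a uniform grid.
import numpy as np

N, L = 400, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
dx = L / N
u = np.sin(x) + 0.5                      # initial condition

f = lambda u: 0.5 * u**2                 # Burgers flux

def rhs(u):
    alpha = np.max(np.abs(u))            # global Lax-Friedrichs speed
    fp = 0.5 * (f(u) + alpha * u)        # positive flux component
    fm = 0.5 * (f(u) - alpha * u)        # negative flux component
    # First-order upwind interface fluxes (ENO reconstruction would go here).
    flux = fp + np.roll(fm, -1)
    return -(flux - np.roll(flux, 1)) / dx

t, T = 0.0, 0.5
while t < T:
    dt = min(0.4 * dx / max(np.max(np.abs(u)), 1e-12), T - t)
    # Third-order TVD (SSP) Runge-Kutta of Shu & Osher.
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
    t += dt

print(u[:5])
```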
Abstract:
Let M → B, N → B be fibrations and f₁, f₂ : M → N be a pair of fibre-preserving maps. Using normal bordism techniques we define an invariant which is an obstruction to deforming the pair f₁, f₂ over B to a coincidence-free pair of maps. In the special case where the two fibrations are the same and one of the maps is the identity, a weak version of our ω-invariant turns out to equal Dold's fixed point index of fibre-preserving maps. The concepts of Reidemeister classes and Nielsen coincidence classes over B are developed. As an illustration we compute, e.g., the minimal number of coincidence components for all homotopy classes of maps between S¹-bundles over S¹, as well as their Nielsen and Reidemeister numbers.
Abstract:
The extracellular hemoglobin from Glossoscolex paulistus (HbGp) has a molecular mass of 3.6 MDa. It has a high oligomeric stability at pH 7.0 and low autoxidation rates, as compared to vertebrate hemoglobins. In this work, fluorescence and light scattering experiments were performed with the three oxidation forms of HbGp exposed to acidic pH. Our focus is on the HbGp stability at acidic pH and also on the determination of the isoelectric point (pI) of the protein. Our results show that the protein in the cyanomet form is more stable than in the other two forms over the whole pH range. Our zeta-potential data are consistent with the light scattering results. Average pI values obtained by the different techniques were 5.6 ± 0.5, 5.4 ± 0.2 and 5.2 ± 0.5 for the oxy, met, and cyanomet forms, respectively. Dynamic light scattering (DLS) experiments have shown that, at pH 6.0, the aggregation (oligomeric) state of oxy-, met- and cyanomet-HbGp remains the same as that at pH 7.0. The interaction between oxy-HbGp and ionic surfactants at pH 5.0 and 6.0 was also monitored in the present study. At pH 5.0, below the protein pI, the effects of sodium dodecyl sulfate (SDS) and cetyltrimethylammonium chloride (CTAC) are inverted when compared to pH 7.0. For CTAC, at acidic pH 5.0, no precipitation is observed, while for SDS an intense light scattering appears due to a precipitation process. HbGp interacts strongly with the cationic surfactant at pH 7.0 and with the anionic one at pH 5.0. This effect is due to the predominance, on the protein surface, of residues presenting charges opposite to those of the surfactant headgroups. This information can be relevant for the development of extracellular hemoglobin-based artificial blood substitutes.
Abstract:
Point pattern matching in Euclidean Spaces is one of the fundamental problems in Pattern Recognition, with applications ranging from Computer Vision to Computational Chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a single approach to both exact and inexact point set matching in Euclidean Spaces of arbitrary dimension. In the case of exact matching, the approach is guaranteed to find an optimal solution. For inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem. We then formulate the weighted graph matching problem as one of Bayesian inference in a probabilistic graphical model. By exploiting the existence of fundamental constraints in patterns embedded in Euclidean Spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. It is possible to show that exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that, for exact matching, provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean Space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show significant performance improvement for inexact matching. The proposed approach is significantly more robust under augmentation of the sizes of the involved patterns. In the absence of noise, the results are always perfect.
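A toy illustration of noise-free point pattern matching (not the authors' graphical-model algorithm): it recovers the correspondence between two point sets related by an unknown rigid motion and permutation by comparing each point's sorted pairwise-distance profile, which is invariant under rigid transformations. All data below are synthetic.

```python
# Toy exact matching via rigid-motion-invariant distance profiles.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
P = rng.normal(size=(8, 2))                       # pattern in R^2

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
perm = rng.permutation(len(P))
Q = (P @ R.T + np.array([3.0, -1.0]))[perm]       # rotated, translated, shuffled copy

def profiles(X):
    """Sorted distances from each point to all others (rigid-motion invariant)."""
    return np.sort(cdist(X, X), axis=1)

# Match each point of P to the point of Q with the most similar distance profile.
cost = cdist(profiles(P), profiles(Q))
match = cost.argmin(axis=1)

print("recovered correspondence:", match)
print("ground truth:            ", np.argsort(perm))
```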
Abstract:
Since the 1970s, the world has observed the fragmentation, hybridity, plurality and miscegenation that have been taking over the performing arts. Contemporary poetry feels free of the classical rules; theater no longer obeys the requirements of the poetic "manuals"; the rigid boundaries between genres disappear; artists cease to perform for the public in order to talk with it. In the last decades of the twentieth century and in the twenty-first century, the comic phenomenon of the One-man Show, the object of this research, emerges on the Brazilian scene as a result of this evolution of the performing arts. It is a form of theater that emerged in the Brazilian context, capturing public attention in alternative spaces, theaters and, as might be expected, also on the Internet, and it is often confused with Stand-up Comedy. Research that delimits and seeks to identify the essential characteristics of the Brazilian One-man Show is necessary, not only because of the absence of theoretical references on the subject, but also in order to understand some aspects of the Brazilian scene and the situation of laughter and comedy within it. In the first chapter, a discussion about comedy and laughter in classical antiquity is presented, using the writings of Plato and Aristotle as a starting point; in the second, some of the main classical theories of laughter are reviewed, attempting to identify the general characteristics that make it possible to understand the construction of comedy; the third chapter discusses the moment of the Brazilian theatrical scene in which the One-man Show emerges; and in the fourth chapter, there is an explanation of this phenomenon and a description of the practical exercise titled Experimento One-person Show: Damas.
Abstract:
This paper presents an interior point method for the long-term generation scheduling of large-scale hydrothermal systems. The problem is formulated as a nonlinear programming problem due to the nonlinear representation of hydropower production and thermal fuel cost functions. Sparsity exploitation techniques and a heuristic procedure for computing the interior point method search directions have been developed. Numerical tests in case studies with systems of different dimensions and inflow scenarios have been carried out in order to evaluate the proposed method. Three systems were tested, the largest being the Brazilian hydropower system with 74 hydro plants distributed in several cascades. Results show that the proposed method is an efficient and robust tool for solving the long-term generation scheduling problem.
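A toy sketch of the interior-point (log-barrier) idea on a deliberately tiny dispatch problem, not the paper's scheduling model or solver: one hydro and one thermal plant, a quadratic fuel cost, a bound on hydro generation and a demand balance, with the barrier parameter driven towards zero. All names and numbers are assumed.

```python
# Toy log-barrier (interior-point) illustration on a single-period dispatch.
import numpy as np
from scipy.optimize import minimize_scalar

demand = 100.0                 # assumed load to be met [MW]
h_max = 60.0                   # assumed hydro generation limit [MW]
cost = lambda g_t: 0.01 * g_t**2 + 2.0 * g_t     # assumed thermal fuel cost

def barrier_objective(h, mu):
    g_t = demand - h                             # thermal covers the residual load
    if not (0 < h < h_max and g_t > 0):
        return np.inf                            # stay strictly inside the feasible set
    return cost(g_t) - mu * (np.log(h) + np.log(h_max - h) + np.log(g_t))

h = 30.0                                         # strictly feasible starting point
for mu in (10.0, 1.0, 0.1, 0.01, 0.001):         # decreasing barrier parameter path
    res = minimize_scalar(lambda v: barrier_objective(v, mu),
                          bounds=(1e-6, h_max - 1e-6), method="bounded")
    h = float(res.x)

print(f"hydro = {h:.2f} MW, thermal = {demand - h:.2f} MW, "
      f"fuel cost = {cost(demand - h):.2f}")
```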
Abstract:
The parameterized fast decoupled power flow (PFDPF), versions XB and BX, using either θ or V as a parameter, have been proposed by the authors in Part I of this paper. The use of the reactive power injection of a selected PV bus (Q_PV) as the continuation parameter for the computation of the maximum loading point (MLP) was also investigated. In this paper, the proposed versions, obtained with only small modifications of the conventional one, are used for the computation of the MLP of IEEE test systems (14, 30, 57 and 118 buses). These new versions are compared to each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approaches are presented and discussed. They show that the characteristics of the conventional FDPF method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that these versions can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, θ, or μ) as an estimate for the next solution, is used for the predictor step.
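A minimal, assumed example of the continuation procedure with the zero-order (trivial secant) predictor described above, applied to a 2-bus system rather than the IEEE test systems, and using a generic Newton-type corrector instead of the parameterized fast decoupled formulation.

```python
# Continuation sketch: trace part of a PV curve with a zero-order predictor
# (reuse the current solution, step the loading parameter by a fixed increment)
# followed by a Newton-type corrector (scipy's fsolve).
import numpy as np
from scipy.optimize import fsolve

V1 = 1.0 + 0j            # slack bus voltage
X = 0.1                  # assumed line reactance [p.u.]
P0, Q0 = 1.0, 0.5        # assumed base-case load [p.u.]

def mismatch(u, lam):
    v, th = u                                  # load-bus voltage magnitude and angle
    V2 = v * np.exp(1j * th)
    S2 = V2 * np.conj((V2 - V1) / (1j * X))    # complex power injected at bus 2
    target = -lam * (P0 + 1j * Q0)             # load modelled as negative injection
    return [S2.real - target.real, S2.imag - target.imag]

u, lam, dlam = np.array([1.0, 0.0]), 0.0, 0.2
while True:
    lam_next = lam + dlam                      # zero-order predictor: keep u, step lambda
    u_next, info, ok, msg = fsolve(mismatch, u, args=(lam_next,), full_output=True)
    if ok != 1:
        # Corrector failed near the nose point; here one would switch the
        # continuation parameter (e.g. to V) to keep tracing the curve.
        break
    u, lam = u_next, lam_next
    print(f"lambda = {lam:.2f}  V2 = {u[0]:.3f}  theta2 = {np.degrees(u[1]):.2f} deg")
```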
Abstract:
This study presents the Environmental Sensitivity Mapping for oil spills on the Potengi estuary (RN) and the neighboring coastline, based on remote sensing data and on the collection, treatment and integration of geomorphological, oceanographic (temperature, salinity, density, current direction and intensity), meteorological (wind speed and direction) and high-resolution seismic (bathymetry and sonography) data. The Potengi river estuary is located on the eastern coastline of the Rio Grande do Norte State, within the geological context of the coastal Pernambuco-Paraíba basin, and extends over 18 km; it shelters the Natal harbor zone and an oil terminal, thus centralizing important oil transport operations that can cause accidental spills. From the oceanographic point of view, the Potengi estuary is characterized by the absence of any expressive thermal stratification, being classified as partially mixed, type B according to Pritchard (1955) and type 2 according to the stratification-circulation diagram of Hansen & Rattray (1966). Two main wind systems are responsible for the formation of the wave sets that occur in the area. The dynamic tide at the Natal Harbor presents mean amplitudes at spring and neap tides of around 2.8 m and 2 m, respectively. The saline tide mixing mechanism was defined through salinity, the main parameter for its identification. The mean values of salinity (36.32 psu), temperature (28.11 °C) and density (22.96 kg/m³) of the estuarine waters present features typical of low-latitude regions. The water temperature follows the air temperature variations in the region, with expressive daily amplitudes. In this study, the identification of the estuarine bed morphology through bathymetric and sonographic analysis had the purpose of evaluating the influence of the surface and bottom currents on the shaping of the bottom. The side scan sonar proved to be very useful in the identification of the bottom morphology and of its relationship with the predominant action of the tidal currents in the Potengi estuary; it also showed how the sonograms can support the comparison of the several patterns derived from the local hydrodynamic variations. The Holocene sediments that fill the estuarine channel are predominantly sandy, variably sorted and sometimes silty. The sedimentation is controlled by the environmental hydrodynamic conditions, with two important textural facies being recognized: the Muddy Facies and the Sandy Facies. The distribution of these textural facies apparently oscillates with the tidal cycle and flow intensity. Each of the above-mentioned data sets was integrated in a Geographic Information System (GIS), from which the Environmental Sensitivity Map for oil spills, with the Coastal Sensitivity Index (CSI), was produced for the Potengi estuary. The integrated analysis of these data is essential for oil spill contingency plans, in order to reduce the environmental consequences of spills and to make the containment and clean-up/removal efforts at the Natal Harbor more efficient. This study aims to increase the information available about the estuarine environment and to contribute to a better management of the environment/pollutant-load issue.