964 results for Two-point
Abstract:
We give a comprehensive analysis of the Euler-Jacobi problem of motion in the field of two fixed centers with arbitrary relative strength and for positive values of the energy. These systems represent nontrivial examples of integrable dynamics and are analysed from the point of view of the energy-momentum mapping from the phase space to the space of the integration constants. In this setting, we describe the structure of the scattering trajectories in phase space and derive an explicit description of the bifurcation diagram, i.e., the set of critical values of the energy-momentum map.
Abstract:
Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, the differences between the spectral energy distribution of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry.
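As a rough illustration of the colour dependence this abstract describes (not the paper's three-component PSF model), the sketch below computes an SED-weighted, diffraction-limited PSF width through a broad-band filter; the aperture, filter range, and toy SEDs are invented for illustration only.

```python
import numpy as np

# For a diffraction-limited telescope the PSF width scales as lambda / D,
# so the effective PSF seen through a broad-band filter is (approximately)
# the SED-weighted average over wavelength. All numbers are illustrative.

D = 1.2                                           # aperture in metres (assumed)
wavelengths = np.linspace(550e-9, 900e-9, 200)    # assumed broad-band filter range

def effective_fwhm(sed, wavelengths, D):
    """SED-weighted diffraction-limited FWHM (radians); weights = relative flux."""
    fwhm = 1.025 * wavelengths / D                # per-wavelength diffraction FWHM
    weights = sed / sed.sum()
    return np.sum(weights * fwhm)

# Toy SEDs: a red galaxy rises with wavelength, a blue star falls with it.
red = wavelengths / wavelengths.min()
blue = wavelengths.max() / wavelengths
fwhm_red = effective_fwhm(red, wavelengths, D)
fwhm_blue = effective_fwhm(blue, wavelengths, D)
assert fwhm_red > fwhm_blue   # redder objects see a larger effective PSF
```

The mismatch between the two effective widths is precisely why a stellar (blue-weighted) PSF, applied uncorrected to redder galaxies, biases shape measurement.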
Abstract:
The paper studies a class of systems of linear retarded differential-difference equations with several parameters. It presents sufficient conditions under which no stability change occurs for an equilibrium point. An application of these results is given.
Abstract:
Localization and mapping are two of the most important capabilities for autonomous mobile robots and have received considerable attention from the scientific computing community over the last 10 years. One of the most efficient methods to address these problems is based on the Extended Kalman Filter (EKF). The EKF simultaneously estimates a model of the environment (map) and the position of the robot based on odometric and exteroceptive sensor information. As this algorithm demands a considerable amount of computation, it is usually executed on high-end PCs coupled to the robot. In this work we present an FPGA-based architecture for the EKF algorithm that is capable of processing two-dimensional maps containing up to 1.8 k features in real time (14 Hz), a three-fold improvement over a Pentium M 1.6 GHz and a 13-fold improvement over an ARM920T 200 MHz. The proposed architecture also consumes only 1.3% of the Pentium's and 12.3% of the ARM's energy per feature.
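For readers unfamiliar with the EKF, a minimal one-dimensional predict/update step is sketched below. This is a toy illustration of how the filter fuses odometry with an exteroceptive range measurement, not the paper's FPGA architecture or its two-dimensional map formulation; the motion model, measurement model, and noise values are all invented.

```python
# Toy EKF step: state x = robot position estimate, P = its variance.
# Predict with an odometry input u, then update with a range measurement z
# to a single landmark at a known position (1-D world, all values assumed).

def ekf_step(x, P, u, z, landmark, Q=0.1, R=0.05):
    # Predict: linear motion model x' = x + u; process noise Q inflates P.
    x_pred = x + u
    P_pred = P + Q
    # Update: measurement model h(x) = landmark - x, so the Jacobian H = -1.
    H = -1.0
    innov = z - (landmark - x_pred)     # innovation: measured minus predicted range
    S = H * P_pred * H + R              # innovation covariance
    K = P_pred * H / S                  # Kalman gain
    x_new = x_pred + K * innov
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
x, P = ekf_step(x, P, u=1.0, z=4.1, landmark=5.0)
assert P < 1.0   # the measurement update shrinks the uncertainty
```

Full EKF-SLAM stacks the landmark positions into the state vector alongside the pose, which is what makes the per-step cost grow with the number of features and motivates hardware acceleration.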
Abstract:
A continuous version of the hierarchical spherical model at dimension d = 4 is investigated. Two limit distributions of the block spin variable X_gamma, normalized with exponents gamma = d + 2 and gamma = d at and above the critical temperature, are established. These results are proven by solving certain evolution equations corresponding to the renormalization group (RG) transformation of the O(N) hierarchical spin model of block size L^d in the limit L ↓ 1 and N → ∞. Starting far away from the stationary Gaussian fixed point, the trajectories of this dynamical system pass through two different regimes with distinguishable crossover behavior. An interpretation of these trajectories is given by the geometric theory of functions, which describes precisely the motion of the Lee-Yang zeroes. The large-N limit of the RG transformation with L^d fixed equal to 2, at criticality, has recently been investigated in both the weak and strong coupling regimes by Watanabe (J. Stat. Phys. 115:1669-1713, 2004). Although our analysis deals only with the N = ∞ case, it complements various aspects of that work.
Abstract:
We present a variable time step, fully adaptive in space, hybrid method for the accurate simulation of incompressible two-phase flows in the presence of surface tension in two dimensions. The method is based on the hybrid level set/front-tracking approach proposed in [H. D. Ceniceros and A. M. Roma, J. Comput. Phys., 205, 391-400, 2005]. Geometric, interfacial quantities are computed from front-tracking via the immersed-boundary setting, while the signed distance (level set) function, which is evaluated fast and to machine precision, is used as a fluid indicator. The surface tension force is obtained by employing the mixed Eulerian/Lagrangian representation introduced in [S. Shin, S. I. Abdel-Khalik, V. Daru and D. Juric, J. Comput. Phys., 203, 493-516, 2005], whose success in greatly reducing parasitic currents has been demonstrated. The use of our accurate fluid indicator together with effective Lagrangian marker control enhances this parasitic current reduction by several orders of magnitude. To resolve sharp gradients and salient flow features accurately and efficiently, we employ dynamic, adaptive mesh refinements. This spatial adaption is used in concert with a dynamic control of the distribution of the Lagrangian nodes along the fluid interface and a variable time step, linearly implicit time integration scheme. We present numerical examples designed to test the capabilities and performance of the proposed approach as well as three applications: the long-time evolution of a fluid interface undergoing Rayleigh-Taylor instability, an example of bubble ascending dynamics, and a drop impacting on a free interface whose dynamics we compare with both existing numerical and experimental data.
Abstract:
Let M -> B, N -> B be fibrations and f_1, f_2: M -> N be a pair of fibre-preserving maps. Using normal bordism techniques we define an invariant which is an obstruction to deforming the pair f_1, f_2 over B to a coincidence-free pair of maps. In the special case where the two fibrations are the same and one of the maps is the identity, a weak version of our omega-invariant turns out to equal Dold's fixed point index of fibre-preserving maps. The concepts of Reidemeister classes and Nielsen coincidence classes over B are developed. As an illustration we compute, e.g., the minimal number of coincidence components for all homotopy classes of maps between S^1-bundles over S^1, as well as their Nielsen and Reidemeister numbers.
Abstract:
The extracellular hemoglobin from Glossoscolex paulistus (HbGp) has a molecular mass of 3.6 MDa. It has a high oligomeric stability at pH 7.0 and low autoxidation rates, as compared to vertebrate hemoglobins. In this work, fluorescence and light scattering experiments were performed with the three oxidation forms of HbGp exposed to acidic pH. Our focus is on the HbGp stability at acidic pH and also on the determination of the isoelectric point (pI) of the protein. Our results show that the protein in the cyanomet form is more stable than in the other two forms over the whole pH range. Our zeta-potential data are consistent with the light scattering results. Average pI values obtained by different techniques were 5.6 +/- 0.5, 5.4 +/- 0.2 and 5.2 +/- 0.5 for the oxy, met, and cyanomet forms, respectively. Dynamic light scattering (DLS) experiments have shown that, at pH 6.0, the aggregation (oligomeric) state of oxy-, met- and cyanomet-HbGp remains the same as that at pH 7.0. The interaction between oxy-HbGp and ionic surfactants at pH 5.0 and 6.0 was also monitored in the present study. At pH 5.0, below the protein pI, the effects of sodium dodecyl sulfate (SDS) and cetyltrimethylammonium chloride (CTAC) are inverted when compared to pH 7.0. For CTAC, at acidic pH 5.0, no precipitation is observed, while for SDS intense light scattering appears due to a precipitation process. HbGp interacts strongly with the cationic surfactant at pH 7.0 and with the anionic one at pH 5.0. This effect is due to the predominance, on the protein surface, of residues presenting charges opposite to the surfactant headgroups. This information can be relevant for the development of extracellular hemoglobin-based artificial blood substitutes.
Abstract:
BACKGROUND: International organisations, e.g. the WHO, stress the importance of competent registered nurses (RNs) for the safety and quality of healthcare systems. Low competence among RNs has been shown to increase the morbidity and mortality of inpatients. OBJECTIVES: To investigate self-reported competence among nursing students on the point of graduation (NSPGs), using the Nurse Professional Competence (NPC) Scale, and to relate the findings to background factors. METHODS AND PARTICIPANTS: The NPC Scale consists of 88 items within eight competence areas (CAs) and two overarching themes. Questions about socio-economic background and perceived overall quality of the degree programme were added. In total, 1086 NSPGs (mean age 28.1 [20-56] years, 87.3% women) from 11 universities/university colleges participated. RESULTS: NSPGs reported significantly higher scores for Theme I "Patient-Related Nursing" than for Theme II "Organisation and Development of Nursing Care". Younger NSPGs (20-27 years) reported significantly higher scores for the CAs "Medical and Technical Care" and "Documentation and Information Technology". Female NSPGs scored significantly higher for "Value-Based Nursing". Those who had taken the nursing care programme at upper secondary school before the Bachelor of Science in Nursing (BSN) programme scored significantly higher on "Nursing Care", "Medical and Technical Care", "Teaching/Learning and Support", "Legislation in Nursing and Safety Planning", and on Theme I. Working extra paid hours in healthcare alongside the BSN programme contributed to significantly higher self-reported scores for four CAs and both themes. Clinical courses within the BSN programme contributed to perceived competence to a significantly higher degree than theoretical courses (93.2% vs 87.5% of NSPGs).
SUMMARY AND CONCLUSION: Mean scores reported by NSPGs were highest for the four CAs connected with patient-related nursing and lowest for CAs relating to organisation and development of nursing care. We conclude that the NPC Scale can be used to identify and measure aspects of self-reported competence among NSPGs.
Abstract:
Point pattern matching in Euclidean spaces is one of the fundamental problems in Pattern Recognition, with applications ranging from Computer Vision to Computational Chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a single approach to both exact and inexact point set matching in Euclidean spaces of arbitrary dimension. In the case of exact matching, the method is guaranteed to find an optimal solution. For inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem. We then formulate the weighted graph matching problem as one of Bayesian inference in a probabilistic graphical model. By exploiting the existence of fundamental constraints in patterns embedded in Euclidean spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. Exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that, for exact matching, provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show significant performance improvement for inexact matching. The proposed approach is significantly more robust under augmentation of the sizes of the involved patterns. In the absence of noise, the results are always perfect.
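The geometric constraint this abstract exploits, that a correct assignment must preserve all pairwise Euclidean distances, can be illustrated with a naive brute-force matcher. This sketch is exponential in the number of points, unlike the paper's polynomial-time inference, and the point sets are invented for illustration.

```python
import itertools
import numpy as np

# Exact point-set matching by brute force: try every permutation and keep
# the one under which the two pairwise-distance matrices agree.

def exact_match(P, Q, tol=1e-9):
    """Return a permutation pi with ||P[i]-P[j]|| == ||Q[pi[i]]-Q[pi[j]]|| for all i, j."""
    dP = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    dQ = np.linalg.norm(Q[:, None] - Q[None, :], axis=-1)
    for pi in itertools.permutations(range(len(P))):
        if np.allclose(dP, dQ[np.ix_(pi, pi)], atol=tol):
            return pi
    return None

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
Q = (P @ R.T)[[2, 0, 1]]                  # rotate P, then shuffle the rows
assert exact_match(P, Q) == (1, 2, 0)     # the shuffle is recovered
```

The paper's contribution is showing that, for exact matching, these distance constraints let a simple graphical model replace this exhaustive search with polynomial-time exact inference.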
Abstract:
We characterize optimal policy in a two-sector growth model with fixed coefficients and with no discounting. The model is a specialization to a single type of machine of a general vintage capital model originally formulated by Robinson, Solow and Srinivasan; its simplicity is not mirrored in its rich dynamics, which seem to have been missed in earlier work. Our results are obtained by viewing the model as a specific instance of the general theory of resource allocation as initiated originally by Ramsey and von Neumann and brought to completion by McKenzie. In addition to the more recent literature on chaotic dynamics, we relate our results to the older literature on optimal growth with one state variable: specifically, to the one-sector setting of Ramsey, Cass and Koopmans, as well as to the two-sector setting of Srinivasan and Uzawa. The analysis is purely geometric and, from a methodological point of view, our work can be seen as an argument, at least in part, for the rehabilitation of geometric methods as an engine of analysis.
Abstract:
Latin America has recently experienced three cycles of capital inflows, the first two ending in major financial crises. The first took place between 1973 and the 1982 ‘debt-crisis’. The second took place between the 1989 ‘Brady bonds’ agreement (and the beginning of the economic reforms and financial liberalisation that followed) and the Argentinian 2001/2002 crisis, and ended up with four major crises (as well as the 1997 one in East Asia) — Mexico (1994), Brazil (1999), and two in Argentina (1995 and 2001/2). Finally, the third inflow-cycle began in 2003 as soon as international financial markets felt reassured by the surprisingly neo-liberal orientation of President Lula’s government; this cycle intensified in 2004 with the beginning of a (purely speculative) commodity price-boom, and actually strengthened after a brief interlude following the 2008 global financial crash — and at the time of writing (mid-2011) this cycle is still unfolding, although already showing considerable signs of distress. The main aim of this paper is to analyse the financial crises resulting from this second cycle (both in LA and in East Asia) from the perspective of Keynesian/ Minskyian/ Kindlebergian financial economics. I will attempt to show that no matter how diversely these newly financially liberalised Developing Countries tried to deal with the absorption problem created by the subsequent surges of inflow (and they did follow different routes), they invariably ended up in a major crisis. As a result (and despite the insistence of mainstream analysis), these financial crises took place mostly due to factors that were intrinsic (or inherent) to the workings of over-liquid and under-regulated financial markets — and as such, they were both fully deserved and fairly predictable. 
Furthermore, these crises point not just to major market failures, but to a systemic market failure: evidence suggests that these crises were the spontaneous outcome of actions by utility-maximising agents, freely operating in friendly (‘light-touch’) regulated, over-liquid financial markets. That is, these crises are clear examples that financial markets can be driven by buyers who take little notice of underlying values — i.e., by investors who have incentives to interpret information in a biased fashion in a systematic way. Thus, ‘fat tails’ also occurred because under these circumstances there is a high likelihood of self-made disastrous events. In other words, markets are not always right — indeed, in the case of financial markets they can be seriously wrong as a whole. Also, as the recent collapse of ‘MF Global’ indicates, the capacity of ‘utility-maximising’ agents operating in (excessively) ‘friendly-regulated’ and over-liquid financial market to learn from previous mistakes seems rather limited.
Abstract:
We study curves of genus 3 over algebraically closed fields of characteristic 2 with the canonical theta characteristic totally supported in one point. We compute the moduli dimension of such curves and focus on some of them which have two Weierstrass points with Weierstrass directions towards the support of the theta characteristic. We answer questions related to the order sequence and Weierstrass weight of Weierstrass points, and the existence of other Weierstrass points with similar properties.