874 results for finance-based schemes
Abstract:
The goal of this thesis is to apply the computational approach to motor learning: describe the constraints that enable performance improvement with experience and that must be satisfied by a motor learning system, describe what is being computed in order to achieve learning, and explain why it is being computed. The particular tasks used to assess motor learning are loaded and unloaded free arm movements, and the thesis includes work on rigid-body load estimation, arm model estimation, optimal filtering for model parameter estimation, and trajectory learning from practice. Learning algorithms have been developed and implemented in the context of robot arm control. The thesis demonstrates some of the roles of knowledge in learning. Powerful generalizations can be made on the basis of knowledge of system structure, as demonstrated in the load and arm model estimation algorithms. Improving the performance of the parameter estimation algorithms used in learning requires knowledge of the measurement noise characteristics, as shown in the derivation of optimal filters. Using trajectory errors to correct commands requires knowledge of how command errors are transformed into performance errors, i.e., an accurate model of the dynamics of the controlled system, as demonstrated in the trajectory learning work. The performance of the algorithms developed in this thesis should be compared with that of algorithms that use less knowledge, such as table-based schemes for learning arm dynamics, previous single-trajectory learning algorithms, and much of traditional adaptive control.
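The trajectory learning idea lends itself to a compact illustration. The sketch below is not the thesis code; under assumed names and a toy linear plant, it shows how knowledge of the dynamics (an approximate inverse model) turns trajectory errors into command corrections across practice trials.

```python
import numpy as np

def trajectory_learning_step(u, y_desired, y_measured, inverse_model):
    """One practice trial: map the performance error back into a command
    correction through (approximate) knowledge of the system dynamics."""
    error = y_desired - y_measured
    return u + inverse_model @ error

rng = np.random.default_rng(0)
P = np.tril(rng.uniform(0.5, 1.0, (20, 20)))           # "true" causal dynamics
P_hat = P + 0.05 * np.tril(rng.normal(size=(20, 20)))  # imperfect learned model
y_desired = np.sin(np.linspace(0.0, np.pi, 20))        # target trajectory

u = np.zeros(20)
for trial in range(15):
    u = trajectory_learning_step(u, y_desired, P @ u, np.linalg.inv(P_hat))
print("final RMS tracking error:", np.sqrt(np.mean((y_desired - P @ u) ** 2)))
```

Because the correction uses an approximate inverse of the true dynamics, the residual error contracts from trial to trial; with no dynamics knowledge at all, the same error signal gives no principled command update.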
Abstract:
We show that the use of probabilistic noiseless amplification in entangled-coherent-state-based schemes for testing quantum nonlocality provides substantial advantages. The threshold amplitude required to violate the Bell-CHSH inequality is, in fact, significantly reduced when amplification is embedded into the test itself. This beneficial effect also holds in the presence of detection inefficiency. Our study helps establish noiseless amplification as a valuable tool for coherent information processing and for the generation of strongly nonclassical states of bosonic systems.
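For reference, the Bell-CHSH quantity at the heart of such tests combines four correlation functions measured for two pairs of local settings; local realistic theories bound it by 2, while quantum mechanics allows violations up to Tsirelson's bound:

```latex
\[
  S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
  \qquad |S| \le 2 \ \text{(local realism)}, \qquad
  |S| \le 2\sqrt{2} \ \text{(quantum mechanics)}.
\]
```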
Abstract:
Financing health systems in developing countries through insurance schemes faces the structural challenge of informal labor markets. Neither community-based financing nor supply-side subsidies appear to guarantee access for the most vulnerable groups, but extending subsidized insurance schemes also puts greater pressure on social spending. This article is a review of the literature on the topic: it surveys international experiences of the types mentioned and analyzes their relevance for Colombia.
Abstract:
Financial protection is one of the objectives of health systems: it protects poor households from falling into poverty as a result of health-care-related expenses. Expanding prepayment schemes to the poor is difficult in developing countries because labor is largely informal. Providing health care free at the point of service does not adequately target spending on the poorest, but occupation- or community-based schemes also have inherent limitations to achieving universal coverage. Colombia adopted a government-subsidized health insurance scheme (SHI) strategy. The political debate about increasing SHI enrollment needs evidence about the effectiveness of this scheme with regard to financial protection. This study fits a four-part model to estimate the effect of SHI on the out-of-pocket expenses of the currently uninsured poor if they were enrolled in the SHI. The results show a 43% and a 50% reduction in expenses at the Bogotá and national levels, respectively, which confirms the effectiveness of SHI as a financial protection tool.
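To make the modeling strategy concrete, here is a minimal sketch of the two-part core of such expenditure models (the study's actual four-part specification adds further stages): a probit for whether any spending occurs, and a regression for log spending conditional on spending. The data, variable names, and coefficients below are synthetic and purely illustrative, and statsmodels is assumed.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
insured = rng.integers(0, 2, n).astype(float)   # 1 if enrolled in SHI (toy)
z = rng.normal(size=n)                          # stand-in household covariate
X = np.column_stack([np.ones(n), insured, z])

# Part 1: probability of any out-of-pocket spending (probit).
any_spend = (0.5 - 0.4 * insured + rng.normal(size=n) > 0).astype(int)
part1 = sm.Probit(any_spend, X).fit(disp=0)

# Part 2: log spending conditional on any spending (OLS).
mask = any_spend == 1
log_spend = 3.0 - 0.5 * insured[mask] + rng.normal(size=mask.sum())
part2 = sm.OLS(log_spend, X[mask]).fit()

# Expected spending combines both parts (lognormal retransformation).
for flag in (0.0, 1.0):
    Xg = np.column_stack([np.ones(n), np.full(n, flag), z])
    e = part1.predict(Xg) * np.exp(part2.predict(Xg) + part2.mse_resid / 2)
    print(f"enrolled={int(flag)}: mean predicted OOP spending {e.mean():.2f}")
```

Comparing the two predictions simulates enrolling the currently uninsured, which is exactly the counterfactual the study's reported 43% and 50% reductions refer to.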
Abstract:
Two wavelet-based control variable transform schemes are described and used to model some important features of forecast error statistics for use in variational data assimilation. The first is a conventional wavelet scheme and the other is an approximation of it. Their ability to capture the position- and scale-dependent aspects of covariance structures is tested in a two-dimensional latitude-height context. This is done by comparing the covariance structures implied by the wavelet schemes with those found from the explicit forecast error covariance matrix, and with a non-wavelet-based covariance scheme currently used in an operational assimilation system. Qualitatively, the wavelet-based schemes show potential for modeling forecast error statistics well without giving preference to either position- or scale-dependent aspects. The degree of spectral representation can be controlled by changing the number of spectral bands in the schemes, and the smallest number of bands that achieves adequate results is determined for the model domain used. Evidence is found of a trade-off between the localization of features in positional and spectral spaces when the number of bands is changed. By examining implied covariance diagnostics, the wavelet-based schemes are found, on the whole, to give results that are closer to the diagnostics found from the explicit matrix than those from the non-wavelet scheme. Even though the nature of the covariances has the right qualities in spectral space, variances are found to be too low at some wavenumbers, and vertical correlation length scales are found to be too long at most scales. The wavelet schemes are found to be good at resolving variations in position- and scale-dependent horizontal length scales, although the length scales reproduced are usually too short. The second of the wavelet-based schemes is often found to be better than the first in some important respects, but, unlike the first, it has no exact inverse transform.
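The notion of an "implied covariance" can be illustrated independently of the paper's specific schemes: if a background-error model is defined as B = U Uᵀ, with U built from a wavelet transform and per-band variances, the covariance it implies can be estimated by pushing random coefficients through the inverse transform. The sketch below assumes PyWavelets and a simple one-dimensional setting; everything about it is illustrative.

```python
import numpy as np
import pywt

n, wavelet, level = 64, "db2", 3
template, slices = pywt.coeffs_to_array(
    pywt.wavedec(np.zeros(n), wavelet, level=level))

def implied_covariance(band_std, samples=5000, seed=0):
    """Monte-Carlo estimate of the covariance implied by independent wavelet
    coefficients whose standard deviation is constant within each band."""
    rng = np.random.default_rng(seed)
    fields = np.empty((samples, n))
    for i in range(samples):
        flat = rng.normal(size=template.shape)
        coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
        coeffs = [s * c for s, c in zip(band_std, coeffs)]
        fields[i] = pywt.waverec(coeffs, wavelet)[:n]
    return np.cov(fields, rowvar=False)

# More variance at coarse scales, as is typical of forecast error spectra.
B = implied_covariance(band_std=[2.0, 1.0, 0.5, 0.25])
print("implied variances:", B.diagonal().round(2))
```

Diagnostics such as variances and correlation length scales can then be read off B and compared with those of the explicit matrix, which is the kind of comparison the paper performs.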
Abstract:
The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC) and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000-2006. HadGEM simulations of aerosol optical depth using GLOMAP-mode compare better against a data-assimilated aerosol re-analysis and ground-based aerosol observations than those using CLASSIC. Because of differences in wet deposition rates, the residence time of GLOMAP-mode sulphate aerosol is two days longer than that of CLASSIC sulphate aerosol, whereas the black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m⁻² on a global average, 72% stronger than the corresponding forcing from CLASSIC. This difference is compensated by changes in the first indirect aerosol forcing: the forcing of −1.17 W m⁻² obtained with GLOMAP-mode is 20% weaker than with CLASSIC. The results suggest that mass-based schemes such as CLASSIC lack the sophistication needed to provide realistic input to aerosol-cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. These findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
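As a quick consistency check on the quoted numbers (reading "X% stronger/weaker" relative to the CLASSIC values), the implied CLASSIC forcings can be backed out directly:

```python
# Back out the CLASSIC forcings implied by the stated percentages (W m^-2).
glomap_direct = -0.49                      # stated 72% stronger than CLASSIC
classic_direct = glomap_direct / 1.72      # ~ -0.28 W m^-2
glomap_indirect = -1.17                    # stated 20% weaker than CLASSIC
classic_indirect = glomap_indirect / 0.80  # ~ -1.46 W m^-2
print(f"implied CLASSIC direct forcing:   {classic_direct:.2f}")
print(f"implied CLASSIC indirect forcing: {classic_indirect:.2f}")
```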
Abstract:
Graduate program in International Relations (UNESP - UNICAMP - PUC-SP) - FFC
Abstract:
In Computer-Aided Diagnosis (CAD) schemes for mammography analysis, each module is interconnected, which directly affects the operation of the system as a whole. Identifying mammograms with and without masses is essential to reduce false positive rates in the automatic selection of regions of interest for further image segmentation. This study evaluates the performance of three techniques in classifying regions of interest as containing masses or not (without clinical findings); the main contribution of this work is to introduce the Optimum-Path Forest (OPF) classifier in this context, which has not been done before. We compared OPF against two types of neural networks, Radial Basis Function (RBF) and Multilayer Perceptron (MLP), on a private dataset of 120 images. Texture features were used for this purpose, and the experiments demonstrated that MLP networks were slightly more accurate than OPF, but the latter is much faster, which makes it a suitable tool for real-time recognition systems.
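A minimal sketch of such a pipeline is shown below, assuming scikit-image for GLCM texture descriptors and scikit-learn for the MLP; the OPF classifier is not part of scikit-learn and would come from a third-party implementation. The synthetic 120-image dataset stands in for the private one.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def texture_features(roi):
    """Haralick-style GLCM descriptors for one region of interest."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-in for the 120-image private dataset.
rng = np.random.default_rng(0)
rois = rng.integers(0, 256, size=(120, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=120)          # 1 = mass, 0 = no findings

X = np.array([texture_features(r) for r in rois])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```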
Abstract:
High-throughput assays, such as the yeast two-hybrid system, have generated a huge amount of protein-protein interaction (PPI) data in the past decade. This tremendously increases the need for reliable methods to systematically and automatically suggest protein functions and the relationships between proteins. With the available PPI data, it is now possible to study functions and relationships in the context of a large-scale network. To date, several network-based schemes have been proposed to effectively annotate protein functions on a large scale. However, because of the noise inherent in high-throughput data generation, new methods and algorithms are needed to increase the reliability of functional annotations. Previous work on a yeast PPI network (Samanta and Liang, 2003) has shown that the local connection topology, particularly for two proteins sharing an unusually large number of neighbors, can predict functional associations between proteins and hence suggest their functions. One advantage of that work is that the algorithm is not sensitive to noise (false positives) in high-throughput PPI data. In this study, we improved their prediction scheme by developing a new algorithm and new methods, which we applied to a human PPI network to make a genome-wide functional inference. We used the new algorithm to measure and reduce the influence of hub proteins on the detection of functionally associated proteins. We used the annotations of the Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) as independent and unbiased benchmarks to evaluate our algorithms and methods within the human PPI network. We showed that, compared with the previous work of Samanta and Liang, the algorithm and methods developed in this study improved the overall quality of functional inferences for human proteins. By applying the algorithms to the human PPI network, we obtained 4,233 significant functional associations among 1,754 proteins. Further comparison of their KEGG and GO annotations allowed us to assign 466 KEGG pathway annotations to 274 proteins and 123 GO annotations to 114 proteins, with estimated false discovery rates of <21% for KEGG and <30% for GO. We clustered 1,729 proteins by their functional associations and performed pathway analysis to identify several subclusters that are highly enriched in certain signaling pathways. In particular, we performed a detailed analysis of a subcluster enriched in the transforming growth factor β signaling pathway (P < 10⁻⁵⁰), which is important in cell proliferation and tumorigenesis. Analysis of another four subclusters also suggested potential new players in six signaling pathways worthy of further experimental investigation. Our study gives clear insight into the common-neighbor-based prediction scheme and provides a reliable method for large-scale functional annotation in this post-genomic era.
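The common-neighbor statistic underlying such schemes (in the spirit of Samanta and Liang, 2003, not the exact algorithm of this study) can be written down directly: the chance that two proteins share at least m neighbors by luck alone follows a hypergeometric tail.

```python
from scipy.stats import hypergeom

def shared_neighbor_pvalue(N, n1, n2, m):
    """P(two proteins with n1 and n2 neighbors, drawn from N proteins,
    share >= m neighbors by chance) -- a hypergeometric tail."""
    return hypergeom.sf(m - 1, N, n1, n2)

# Example: in a network of 7,000 proteins, two proteins with 40 and 60
# neighbors would share ~0.3 by chance, so sharing 8 is highly significant.
print(shared_neighbor_pvalue(N=7000, n1=40, n2=60, m=8))
```

Ranking protein pairs by this p-value, and then down-weighting contributions from hubs (whose large neighborhoods inflate overlaps), is the kind of refinement the study describes.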
Abstract:
Arctic permafrost may be adversely affected by climate change in a number of ways, so establishing a worldwide monitoring program seems imperative. This thesis evaluates possibilities for permafrost monitoring using the example of a permafrost site on Svalbard, Norway. An energy balance model for permafrost temperatures is developed that evaluates the different components of the surface energy budget in analogy to climate models. The surface energy budget, consisting of the radiation components, the sensible and latent heat fluxes, and the ground heat flux, is measured over the course of one year, which had not previously been accomplished for Arctic land areas. Long-term measurements with a thermal imaging system reveal considerable small-scale heterogeneity of the summer surface temperature, which can be reproduced in the energy balance model. The model can also simulate the impact of different snow depths on the soil temperature, an effect documented in the field measurements. Furthermore, time series of terrestrial surface temperature measurements are compared to satellite-borne measurements, for which a significant cold bias is observed during winter. Finally, different possibilities for a worldwide monitoring scheme are assessed. Energy budget models can incorporate different satellite data sets as training data for parameter estimation, so they may constitute an alternative to purely satellite-based schemes.
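For reference, the surface energy budget measured in the thesis takes the standard form below (sign conventions vary between studies): net shortwave and longwave radiation is balanced by the sensible heat flux Q_H, the latent heat flux Q_E, and the ground heat flux Q_G.

```latex
\[
  SW_{\downarrow} - SW_{\uparrow} + LW_{\downarrow} - LW_{\uparrow}
  = Q_H + Q_E + Q_G
\]
```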
Abstract:
In this paper we present a FEC scheme based on simple Low-Density Generator Matrix (LDGM) codes to protect packetized multimedia streams. We demonstrate that simple LDGM codes working with a limited number of packets (small values of k) achieve recovery capabilities against bursty packet losses that are similar to those of other, more complex FEC-based schemes designed for this type of channel.
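The simplicity of LDGM codes is easy to see in a sketch: each repair packet is just the XOR of a few randomly chosen source packets, so the generator matrix stays sparse and encoding is cheap even for small k. The construction below is illustrative rather than the paper's exact code.

```python
import numpy as np

def ldgm_encode(source_packets, n_repair, degree=3, seed=0):
    """Return repair packets; each XORs `degree` random source packets."""
    rng = np.random.default_rng(seed)
    k = len(source_packets)
    repairs, rows = [], []
    for _ in range(n_repair):
        picks = rng.choice(k, size=degree, replace=False)
        repairs.append(np.bitwise_xor.reduce(source_packets[picks], axis=0))
        rows.append(picks)               # sparse row of the generator matrix
    return np.array(repairs), rows

k = 8                                    # small k, as in the paper's setting
source = np.random.default_rng(1).integers(0, 256, (k, 1200), dtype=np.uint8)
repair, rows = ldgm_encode(source, n_repair=4)

# A lost source packet is recoverable from a repair packet covering it once
# the other packets entering that XOR are available.
lost = rows[0][0]
others = [p for p in rows[0] if p != lost]
recovered = np.bitwise_xor.reduce(source[others], axis=0) ^ repair[0]
assert np.array_equal(recovered, source[lost])
```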
Abstract:
The conventional control schemes applied to Shunt Active Power Filters (SAPFs) are harmonic-extractor-based strategies (HEBSs), because their effectiveness depends on how quickly and accurately the harmonic components of the nonlinear loads are identified. The SAPF can also be implemented without load harmonic extractors; in this case, the harmonic compensating term is obtained from the system's active power balance. Such systems can be considered balanced-energy-based schemes (BEBSs), and their performance depends on how fast the system reaches the equilibrium state. Here, the phase currents of the power grid are indirectly regulated by double-sequence controllers (DSCs) with two degrees of freedom, where the internal model principle is employed to avoid reference frame transformations. The DSC also provides robustness when the SAPF operates under unbalanced conditions. Furthermore, SAPFs implemented without harmonic detection schemes compensate harmonic distortion and the reactive power of the load simultaneously. Their compensation capabilities, however, are limited by the rating of the SAPF power converter. This restriction can be mitigated if the level of reactive power correction is managed. In this work, an estimation scheme for determining the filter currents is introduced to manage the compensation of reactive power. Experimental results demonstrate the performance of the proposed SAPF system.
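The balanced-energy idea can be caricatured in a few lines: a PI controller on the DC-link voltage error sets the amplitude of a clean, in-phase sinusoidal grid-current reference, so the grid supplies the load's average active power without any harmonic extraction. All gains and plant values below are assumed, toy numbers.

```python
import math

kp, ki, dt = 0.05, 5.0, 1e-4          # assumed controller gains and time step
C, v_ref, v_dc = 4.7e-3, 400.0, 380.0  # DC-link capacitance and voltages
v_grid_amp, p_load = 325.0, 500.0      # assumed grid amplitude and load power
integral = 0.0

for k in range(20000):                 # simulate 2 s of the energy balance
    error = v_ref - v_dc
    integral += ki * error * dt
    i_amp = kp * error + integral                        # amplitude reference
    i_ref = i_amp * math.sin(2 * math.pi * 50 * k * dt)  # in phase with grid
    p_in = 0.5 * v_grid_amp * i_amp                      # average power drawn
    v_dc += (p_in - p_load) * dt / (C * v_dc)            # C v dv/dt = Pin-Pload

print(f"steady-state DC-link voltage: {v_dc:.1f} V, amplitude: {i_amp:.2f} A")
```

When the DC link settles at its reference, the drawn power equals the load's average active power, which is exactly the equilibrium the abstract refers to.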
Abstract:
This paper studies the impact of in-phase and quadrature-phase imbalance (IQI) in two-way amplify-and-forward (AF) relaying systems. In particular, the effective signal-to-interference-plus-noise ratio (SINR) is derived for each source node under four different linear detection schemes, namely the uncompensated (Uncomp), maximal-ratio combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) based schemes. For each proposed scheme, the outage probability (OP) is investigated over independent, non-identically distributed Nakagami-m fading channels, and exact closed-form expressions are derived for the first three schemes. Based on the closed-form OP expressions, an adaptive detection-mode switching scheme is designed to minimize the OP of both sources. An important observation is that, regardless of the channel conditions and transmit powers, the ZF-based scheme should always be selected if the target SINR is larger than 3 (4.77 dB), while the MRC-based scheme should be avoided if the target SINR is larger than 0.38 (−4.20 dB).
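The stated switching rule is simple enough to write out directly; the thresholds are the linear-scale values from the abstract, while the choice of MMSE in the intermediate region and MRC below it is an assumption for illustration only.

```python
def select_detection_scheme(target_sinr):
    """Pick a linear detection scheme from the target SINR (linear scale)."""
    if target_sinr > 3.0:      # > 4.77 dB: ZF should always be selected
        return "ZF"
    if target_sinr > 0.38:     # > -4.20 dB: avoid MRC; MMSE assumed here
        return "MMSE"
    return "MRC"               # low-SINR regime, where MRC is not ruled out

for sinr in (0.2, 1.0, 5.0):
    print(sinr, "->", select_detection_scheme(sinr))
```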
Abstract:
A strong designated verifier signature scheme makes it possible for a signer to convince a designated verifier that she has signed a message, in such a way that the designated verifier cannot transfer the signature to a third party and no third party can even verify the validity of a designated verifier signature. We show that anyone who intercepts one signature can verify subsequent signatures in the Zhang-Mao ID-based designated verifier signature scheme and the Lal-Verma ID-based designated verifier proxy signature scheme. We propose a new and efficient ID-based designated verifier signature scheme that is both strong and unforgeable. As a direct corollary, we also obtain a new, efficient ID-based designated verifier proxy signature scheme.