893 results for Group theoretical based techniques
Abstract:
Exchange rate misalignment assessment has become more relevant in the recent period, particularly after the financial crisis of 2008. There are different methodologies to address real exchange rate misalignment. Real exchange rate misalignment is defined as the difference between the actual real effective exchange rate and some equilibrium norm. Different norms are available in the literature. Our paper aims to contribute to the literature by showing that the Behavioral Equilibrium Exchange Rate (BEER) approach adopted by Clark & MacDonald (1999), Ubide et al. (1999), Faruqee (1994), Aguirre & Calderón (2005) and Kubota (2009), among others, can be improved in the following two ways. The first consists of jointly modeling the real effective exchange rate, the trade balance and the net foreign asset position. The second has to do with the possibility of explicitly testing the over-identifying restrictions implied by economic theory, allowing the analyst to show that these restrictions are not falsified by the empirical evidence. If the economics-based identifying restrictions are not rejected, it is also possible to decompose exchange rate misalignment into two pieces, one related to the long-run fundamentals of the exchange rate and the other related to external account imbalances. We also discuss some necessary conditions that should be satisfied for discarding trade balance information without compromising the exchange rate misalignment assessment. A statistical (but not theoretical) identifying strategy for calculating exchange rate misalignment is also discussed. We illustrate the advantages of our approach by analyzing the Brazilian case. We show that the traditional approach disregards important information on external accounts equilibrium for this economy.
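In the notation we use here purely for illustration (the symbols are ours, not the paper's), with $q_t$ the actual real effective exchange rate and $\bar{q}_t$ the equilibrium norm implied by the chosen fundamentals, the misalignment and its proposed decomposition read:

```latex
m_t \;=\; q_t - \bar{q}_t, \qquad\qquad m_t \;=\; m_t^{\mathrm{fund}} + m_t^{\mathrm{ext}},
```

where $m_t^{\mathrm{fund}}$ is the piece attributable to the long-run fundamentals of the exchange rate and $m_t^{\mathrm{ext}}$ the piece attributable to external-account (trade balance and net foreign asset) imbalances; the decomposition is only available once the theory-based over-identifying restrictions are not rejected.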
Abstract:
Observability measures the support of computer systems for accurately capturing, analyzing, and presenting (collectively, observing) internal information about the systems. Observability frameworks play important roles in program understanding, troubleshooting, performance diagnosis, and optimization. However, traditional solutions are either expensive or coarse-grained, consequently compromising their utility in accommodating today's increasingly complex software systems. New solutions are emerging for VM-based languages due to the full control language VMs have over program executions. Existing solutions of this kind, nonetheless, still lack flexibility, have high overhead, or provide limited context information for developing powerful dynamic analyses. In this thesis, we present a VM-based infrastructure, called the marker tracing framework (MTF), to address the deficiencies in existing solutions and provide better observability for VM-based languages. MTF serves as a solid foundation for implementing fine-grained, low-overhead program instrumentation. Specifically, MTF allows analysis clients to: 1) define custom events with rich semantics; 2) specify precisely the program locations where the events should trigger; and 3) adaptively enable/disable the instrumentation at runtime. In addition, MTF-based analysis clients are more powerful by having access to all information available to the VM. To demonstrate the utility and effectiveness of MTF, we present two analysis clients: 1) dynamic typestate analysis with adaptive online program analysis (AOPA); and 2) selective probabilistic calling context analysis (SPCC). In addition, we evaluate the runtime performance of MTF and the typestate client with the DaCapo benchmarks. The results show that: 1) MTF has acceptable runtime overhead when tracing moderate numbers of marker events; 2) AOPA is highly effective in reducing the event frequency for dynamic typestate analysis; and 3) language VMs can be exploited to offer greater observability.
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
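As a minimal illustration of the quantity being profiled, the sketch below computes exact reuse distances from a complete address trace (the naive software route, not the approximate hardware-sampled profile proposed in the report); the trace format and function name are ours:

```python
def reuse_distances(trace):
    """For each access, return the number of distinct addresses touched
    since the previous access to the same address (None on first use).
    Naive O(n^2) version, for illustration only."""
    last_pos = {}          # address -> index of its previous access
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            # Count distinct addresses seen strictly between the two accesses.
            window = trace[last_pos[addr] + 1 : i]
            distances.append(len(set(window)))
        else:
            distances.append(None)
        last_pos[addr] = i
    return distances

if __name__ == "__main__":
    # 'a' is reused after touching {b, c}: distance 2; 'b' after {c, a}: distance 2.
    print(reuse_distances(["a", "b", "c", "a", "b"]))
    # -> [None, None, None, 2, 2]
```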
Novel Imaging-Based Techniques Reveal a Role for PD-1/PD-L1 in Tumor Immune Surveillance in the Lung
Abstract:
The binding of the immune inhibitory receptor Programmed Death 1 (PD-1) on T cells to its ligand PD-L1 has been implicated as a major contributor to tumor-induced immune suppression. Clinical trials of PD-L1 blockade have proven effective in unleashing therapeutic anti-tumor immune responses in a subset of patients with advanced melanoma, yet current response rates are low for reasons that remain unclear. Hypothesizing that the PD-1/PD-L1 pathway regulates T cell surveillance within the tumor microenvironment, we employed intravital microscopy to investigate the in vivo impact of PD-L1 blocking antibody upon tumor-associated immune cell migration. However, current analytical methods for intravital dynamic microscopy data lack the ability to identify the cellular targets of T cell interactions in vivo, a crucial means for discovering which interactions are modulated by therapeutic intervention. By developing novel imaging techniques that allowed us to better analyze tumor progression and T cell dynamics in the microenvironment, we were able to explore the impact of PD-L1 blockade upon the migratory properties of tumor-associated immune cells, including T cells and antigen-presenting cells, in lung tumor progression. Our results demonstrate that early changes in tumor morphology may be indicative of responsiveness to anti-PD-L1 therapy. We show that immune cells in the tumor microenvironment, as well as tumors themselves, express PD-L1, but immune phenotype alone is not a predictive marker of effective anti-tumor responses. Through a novel method in which we quantify T cell interactions, we show that T cells are largely engaged in interactions with dendritic cells in the tumor microenvironment. Additionally, we show that during PD-L1 blockade, non-activated T cells are recruited in greater numbers into the tumor microenvironment and engage more preferentially with dendritic cells. We further show that during PD-L1 blockade, activated T cells engage in more confined, immune synapse-like interactions with dendritic cells, as opposed to the more dynamic, kinapse-like interactions with dendritic cells seen when PD-L1 is free to bind its receptor. By advancing the contextual analysis of anti-tumor immune surveillance in vivo, this study implicates the interaction between T cells and tumor-associated dendritic cells as a possible modulator in targeting PD-L1 for anti-tumor immunotherapy.
Abstract:
One of the main causes of noise in our cities is road traffic. The noise generated by vehicles is not only due to the engine: there are several other noise sources, among which tyre/road noise stands out. Coherence techniques and array-based techniques have been used in several studies to locate the causes of noise and identify its main sources. However, it is not common in the existing literature to find these techniques applied in the automotive sector. This thesis starts from the premise that such measurement techniques can be used on cars, and demonstrates their feasibility and their value for evaluating noise sources under two different conditions: when the car is stationary and when it is moving. Selective Intensity is chosen as the coherence technique, and is used to evaluate the coherence between the noise reaching the driver's ears and the intensity radiated by different points of the engine. For noise source localization, array-based techniques give the best results. Statistically Optimized Near-field Acoustical Holography (SONAH) is the technique chosen for localizing and characterizing the engine noise sources at low frequency, whereas Beamforming is the technique selected for the medium-to-high frequency range and for evaluating the noise sources when the car is moving. The proposed techniques can not only be used in real measurements, but also provide abundant information and great versatility for characterizing noise sources.
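As a minimal illustration of the delay-and-sum principle that underlies Beamforming-style source localization (free-field, far-field assumptions; the array geometry, sampling rate, and function names are ours and unrelated to the actual measurement setups used in the thesis):

```python
import numpy as np

def delay_and_sum_map(signals, mic_xy, fs, angles_deg, c=343.0):
    """Steered-response power of a microphone array over candidate far-field
    directions, computed by frequency-domain delay-and-sum beamforming."""
    n_mics, n_samples = signals.shape
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    power = []
    for ang in np.deg2rad(angles_deg):
        direction = np.array([np.cos(ang), np.sin(ang)])
        delays = mic_xy @ direction / c                      # arrival lead per microphone, seconds
        steering = np.exp(-2j * np.pi * np.outer(delays, freqs))
        aligned_sum = np.sum(spectra * steering, axis=0)     # phase-align and sum channels
        power.append(np.sum(np.abs(aligned_sum) ** 2))
    return np.array(power)

if __name__ == "__main__":
    fs, n = 8000, 1024
    mic_xy = np.column_stack([np.linspace(-0.15, 0.15, 8), np.zeros(8)])  # 8-mic line array
    t = np.arange(n) / fs
    # Simulate a 1 kHz plane wave arriving from 60 degrees.
    true_delays = mic_xy @ np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)]) / 343.0
    signals = np.array([np.sin(2 * np.pi * 1000.0 * (t + d)) for d in true_delays])
    angles = np.arange(0, 181, 2)
    est = angles[np.argmax(delay_and_sum_map(signals, mic_xy, fs, angles))]
    print("estimated source direction:", est, "deg")         # should be close to 60
```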
Abstract:
The influence of the sample introduction system on the signals obtained with different tin compounds in inductively coupled plasma (ICP) based techniques, i.e., ICP atomic emission spectrometry (ICP–AES) and ICP mass spectrometry (ICP–MS), has been studied. Signals for test solutions prepared from four different tin compounds (i.e., tin tetrachloride, monobutyltin, dibutyltin and di-tert-butyltin) in different solvents (methanol 0.8% (w/w), i-propanol 0.8% (w/w) and various acid matrices) have been measured by ICP–AES and ICP–MS. The results demonstrate a noticeable influence of the volatility of the tin compounds on their signals measured with both techniques. Thus, in agreement with the compound volatility, the highest signals are obtained for tin tetrachloride, followed by di-tert-butyltin/monobutyltin and dibutyltin. The sample introduction system exerts an important effect on the amount of solution loading the plasma and, hence, on the relative signals afforded by the tin compounds in ICP-based techniques. Thus, when working with a pneumatic concentric nebulizer, the use of spray chambers affording high solvent transport efficiency to the plasma (such as cyclonic and single pass) or high spray chamber temperatures is recommended to minimize the influence of the tin chemical compound. Nevertheless, even when using the conventional pneumatic nebulizer coupled to the best spray chamber design (i.e., a single pass spray chamber), signals obtained for di-tert-butyltin/monobutyltin and dibutyltin are still around 10% and 30% lower than the corresponding signal for tin tetrachloride, respectively. When operating with a pneumatic microconcentric nebulizer coupled to a 50 °C-thermostated cinnabar spray chamber, all studied organotin compounds provided similar emission signals, although about 60% lower than those obtained for tin tetrachloride. The use of an ultrasonic nebulizer coupled to a desolvation device provides the largest differences in the emission signals among all tested systems.
Abstract:
The elemental analysis of Spanish palm dates by inductively coupled plasma atomic emission spectrometry and inductively coupled plasma mass spectrometry is reported for the first time. To complete the information about the mineral composition of the samples, C, H, and N are determined by elemental analysis. Dates from Israel, Tunisia, Saudi Arabia, Algeria and Iran have also been analyzed. The elemental composition has been used in multivariate statistical analysis to discriminate the dates according to their geographical origin. A total of 23 elements (As, Ba, C, Ca, Cd, Co, Cr, Cu, Fe, H, In, K, Li, Mg, Mn, N, Na, Ni, Pb, Se, Sr, V, and Zn) at concentrations from major to ultra-trace levels have been determined in 13 date samples (flesh and seeds). A careful inspection of the results indicates that Spanish samples show higher concentrations of Cd, Co, Cr, and Ni than the remaining ones. Multivariate statistical analysis of the obtained results, both in flesh and seed, indicates that the proposed approach can be successfully applied to discriminate the Spanish date samples from the rest of the samples tested.
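A hedged sketch of how such an origin discrimination could be set up (synthetic stand-in data and scikit-learn; the element panel, class sizes, and model choice are our assumptions, not the chemometric procedure actually used):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, LeaveOneOut

# Hypothetical element-concentration matrix: rows = date samples,
# columns = a few elements (e.g. Cd, Co, Cr, Ni); labels = geographical origin.
rng = np.random.default_rng(0)
X_spain = rng.normal(loc=[0.8, 0.5, 1.2, 0.9], scale=0.1, size=(6, 4))
X_other = rng.normal(loc=[0.3, 0.2, 0.6, 0.4], scale=0.1, size=(7, 4))
X = np.vstack([X_spain, X_other])
y = np.array(["Spain"] * 6 + ["other"] * 7)

# Standardize the concentrations, then discriminate origin with a linear discriminant model.
model = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```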
Abstract:
"September 1987."
Abstract:
We present a group theoretical analysis of several classes of organic superconductor. We predict that highly frustrated organic superconductors, such as κ-(ET)₂Cu₂(CN)₃ (where ET is BEDT-TTF, bis(ethylenedithio)tetrathiafulvalene) and β′-[Pd(dmit)₂]₂X, undergo two superconducting phase transitions, the first from the normal state to a d-wave superconductor and the second to a d + id state. We show that the monoclinic distortion of κ-(ET)₂Cu(NCS)₂ means that the symmetry of its superconducting order parameter is different from that of orthorhombic κ-(ET)₂Cu[N(CN)₂]Br. We propose that β″- and θ-phase organic superconductors have d_xy + s order parameters.
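For orientation only (this is a standard square-lattice basis-function parameterization, given here as our illustration rather than taken from the paper), a d + id order parameter can be written as

```latex
\Delta(\mathbf{k}) \;=\; \Delta_{x^2-y^2}\,(\cos k_x - \cos k_y) \;+\; i\,\Delta_{xy}\,\sin k_x \sin k_y ,
```

with the imaginary d_xy component condensing only at the lower of the two transitions described above.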
Abstract:
In this chapter we present the relevant mathematical background to address two well-defined signal and image processing problems, namely the problem of structured noise filtering and the problem of interpolation of missing data. The former is addressed by recourse to oblique projection based techniques, whilst the latter, which can be considered equivalent to impulsive noise filtering, is tackled by appropriate interpolation methods.
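A minimal numerical sketch of the oblique-projection idea, under our own assumptions (a known signal subspace spanned by A and a structured-noise subspace spanned by B; this is the generic oblique projector construction, not necessarily the chapter's specific formulation):

```python
import numpy as np

def oblique_projector(A, B):
    """Projector E with range(E) = range(A) and E @ B = 0: applying E to an
    observation removes the structured-noise component lying in range(B)
    while leaving the signal component in range(A) untouched."""
    Q, _ = np.linalg.qr(B)                    # orthonormal basis of the noise subspace
    P_perp = np.eye(A.shape[0]) - Q @ Q.T     # orthogonal projector onto its complement
    return A @ np.linalg.solve(A.T @ P_perp @ A, A.T @ P_perp)

if __name__ == "__main__":
    n = 64
    t = np.arange(n)
    A = np.column_stack([np.sin(2 * np.pi * 3 * t / n),   # known signal subspace
                         np.cos(2 * np.pi * 3 * t / n)])
    B = np.column_stack([np.ones(n), t / n])              # structured noise: offset + linear drift
    signal = A @ np.array([1.0, -0.5])
    observed = signal + B @ np.array([2.0, 3.0])          # signal corrupted by structured noise
    E = oblique_projector(A, B)
    print("recovery error:", np.linalg.norm(E @ observed - signal))  # ~1e-13
```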
Abstract:
Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we proposed two methods based on machine learning to address the spectral distortion issue and to improve the material decomposition. The first approach is to model the distortions using an artificial neural network (ANN) and compensate for the distortion in a statistical reconstruction. The second approach is to directly correct for the distortion in the projections. Both techniques can be implemented as a calibration process in which the neural network is trained on data from 3D-printed phantoms to learn either the distortion model or the correction model for the spectral distortion. This replaces the need for the synchrotron measurements required in the conventional technique to derive the distortion model parametrically, which can be costly and time-consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
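A hedged sketch of the second approach described above (correcting the distortion directly in the projections), with a synthetic toy distortion and randomly generated stand-in calibration data in place of the 3D-printed phantom measurements; the network size and training setup are our assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_bins = 8

# Stand-in for calibration data: each row holds the (normalized) counts in the
# swept energy bins of one detector element; the targets are the ideal,
# undistorted bin counts known from the composition of the calibration phantom.
ideal = rng.uniform(0.1, 1.0, size=(5000, n_bins))

def distort(counts):
    """Toy spectral distortion: neighbor-bin blurring (charge sharing)
    plus a mild count-dependent loss (pulse pile-up)."""
    kernel = np.array([0.15, 0.7, 0.15])
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, counts)
    return blurred * (1.0 - 0.1 * blurred)

measured = distort(ideal) + rng.normal(0.0, 0.01, size=ideal.shape)  # photon-noise stand-in

# Small fully connected network trained to map distorted spectra back to ideal ones.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
net.fit(measured, ideal)

test = rng.uniform(0.1, 1.0, size=(200, n_bins))
corrected = net.predict(distort(test))
print("mean absolute error, distorted :", np.mean(np.abs(distort(test) - test)))
print("mean absolute error, corrected :", np.mean(np.abs(corrected - test)))
```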
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in a way that the participants do not see each other's data; they only see the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving the potential for security holes that can compromise the privacy of the parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC domain-specific language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs; as far as we know, Wys* is the first language to provide verification capabilities for MPC programs; (b) it provides a partially verified toolchain to run MPC programs; and finally (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing privacy guarantees similar to those of the monolithic versions.
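The sketch below is not Wysteria or Wys* code; it is a toy Python illustration (the party names and the modulus are ours) of the additive secret sharing that the "secure mode" abstractions of such frameworks hide from the programmer, used here to compute a joint sum without any party revealing its input:

```python
import secrets

P = 2**61 - 1  # all arithmetic is done modulo a public prime

def share(value, n_parties):
    """Split a private value into n additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reveal(shares):
    return sum(shares) % P

# Three parties jointly compute the sum of their private inputs.
inputs = {"alice": 42, "bob": 17, "carol": 99}
per_party = {name: share(v, 3) for name, v in inputs.items()}

# Each party locally adds the share it received from every input ("secure mode"),
# and only the final per-party sums are combined and revealed.
local_sums = [sum(per_party[name][i] for name in inputs) % P for i in range(3)]
print("joint sum:", reveal(local_sums))  # -> 158, with no party seeing another's input
```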
Abstract:
The electrochemical behavior of the interaction of amodiaquine with DNA on a carbon paste electrode was studied using voltammetric techniques. In an acid medium, an electroactive adduct is formed when amodiaquine interacts with DNA. The anodic peak is dependent on pH, scan rate and the concentration of the pharmaceutical. Adduct formation is irreversible in nature, and preferentially occurs through interaction of amodiaquine with the guanine group. Theoretical calculations for geometry optimization, DFT analyses, and the electrostatic potential map (EPM) were used to investigate adduct formation between amodiaquine and DNA.
Abstract:
Recently, interest in developing applications with autonomous underwater vehicles (AUVs) has grown considerably. AUVs are attractive because of their size and the fact that they do not need a human operator to pilot them. Even so, it is impossible to compare, in terms of efficiency and flexibility, the skill of a human pilot with the limited operational capabilities offered by current AUVs. Using AUVs to cover large areas involves solving complex problems, especially if the robot is expected to react in real time to sudden changes in working conditions. For these reasons, the development of autonomous control systems aimed at improving these capabilities has become a priority. This thesis addresses the problem of decision making with AUVs. The work presented focuses on the study, design and application of behaviors for AUVs using reinforcement learning (RL) techniques. The main contribution of this thesis is the application of several RL techniques to improve the autonomy of underwater robots, with the final goal of demonstrating the feasibility of these algorithms for learning autonomous underwater tasks in real time. In RL, the robot tries to maximize a scalar reward obtained as a consequence of its interaction with the environment. The goal is to find an optimal policy that maps every possible state to the action to be executed in that state so as to maximize the total sum of rewards. Accordingly, this thesis mainly investigates two families of RL-based algorithms: value function (VF) methods and policy gradient (PG) methods. The final experimental results show the underwater robot Ictineu performing a real autonomous underwater cable tracking task. To carry it out, an algorithm called the Actor-Critic (AC) method was designed, resulting from the fusion of VF methods with PG techniques.
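A minimal, generic one-step actor-critic sketch on a toy discrete task, to illustrate how a value-function critic is combined with a policy-gradient actor (this is our illustration, not the thesis's cable-tracking controller; the environment, rewards, and hyperparameters are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
alpha_critic, alpha_actor, gamma = 0.1, 0.05, 0.95

V = np.zeros(n_states)                    # critic: state-value estimates (VF part)
prefs = np.zeros((n_states, n_actions))   # actor: action preferences / policy logits (PG part)

def step(state, action):
    """Toy chain environment: action 1 moves right, action 0 moves left;
    reward is given for being at the rightmost state."""
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return nxt, 1.0 if nxt == n_states - 1 else 0.0

for episode in range(500):
    s = 0
    for _ in range(20):
        # Sample an action from the softmax policy defined by the actor's preferences.
        p = np.exp(prefs[s] - prefs[s].max())
        p /= p.sum()
        a = rng.choice(n_actions, p=p)
        s_next, r = step(s, a)
        td_error = r + gamma * V[s_next] - V[s]            # critic's one-step TD error
        V[s] += alpha_critic * td_error                    # critic update
        grad_log_pi = -p
        grad_log_pi[a] += 1.0                              # gradient of log pi(a|s) w.r.t. prefs[s]
        prefs[s] += alpha_actor * td_error * grad_log_pi   # actor (policy-gradient) update
        s = s_next

print("greedy policy per state (1 = move right):", prefs.argmax(axis=1))
```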
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)