992 results for Free markets
Abstract:
The debate about Free Will has occupied the human mind for centuries, but it has become even more intense with recent scientific findings shedding new light on the problem. This interdisciplinary explosion of interest in the topic has produced many insights, but also a great deal of epistemological problems. We think that those epistemological problems are deeply related to the very definition of Free Will and to how this definition interacts with the interpretation of experimental results. We will thus outline a few of these problems and then propose a definition of Free Will that takes those epistemological pitfalls into account.
Abstract:
A simple expression for the Gibbs free energy of formation of a pure-component or eutectic-alloy glass, relative to the stable crystalline phase (or phases) at the same temperature, is deduced from thermodynamic arguments. The expression obtained is expected to apply to both monocomponent and multicomponent liquid alloys that can form glasses from the supercooled liquid state, irrespective of the critical cooling rate needed to avoid crystallization.
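The deduced expression itself is not given in this summary; for orientation only, the classical Turnbull estimate, a standard first approximation for the same quantity, relates the liquid-crystal free-energy difference to the enthalpy of fusion and the undercooling:

```latex
% Classical Turnbull estimate, shown only as a standard point of reference;
% it is not necessarily the expression deduced in the work summarized above.
% \Delta H_m: enthalpy of fusion, T_m: melting (liquidus) temperature,
% T: temperature of the supercooled liquid or glass.
\Delta G_{\ell\text{-}c}(T) \approx \frac{\Delta H_m \,(T_m - T)}{T_m}
```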
Abstract:
Caprine and ovine IgA were identified by cross-reaction with anti-human and anti-bovine IgA sera in colostrum, mature milk, saliva, urine and serum. Secretory component (SC) was shown in the free form and associated with polymeric serum IgA in secretions. Mean molecular weights were determined for the IgA and the free secretory components. The high IgA content of saliva suggested that it was a major secretory immunoglobulin in these species. Traces of secretory IgA were also found in normal sera but most of the serum IgA had no secretory determinant. Secretory IgA, serum IgA and free secretory component were purified. Levels of the sheep and goat immunoglobulins were measured in various fluids.
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics, first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to only a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e., the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those of virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
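The two comparisons described in the preceding abstract, a Kolmogorov-Smirnov test for distributional differences together with first- and second-order stochastic dominance checks via empirical CDFs and absolute (generalized) Lorenz curves, can be sketched as follows. This is a minimal illustration with simulated returns and hypothetical variable names, not the thesis code.

```python
# Minimal sketch (not the authors' code) of the checks described above:
# a Kolmogorov-Smirnov test for equality of distributions, a first-order
# stochastic dominance check via empirical CDFs, and a second-order check
# via pointwise comparison of absolute Lorenz curves (cumulative expected
# shortfalls over a range of quantiles). All names and data are illustrative.
import numpy as np
from scipy import stats

def empirical_cdf(sample, grid):
    sample = np.sort(sample)
    return np.searchsorted(sample, grid, side="right") / len(sample)

def first_order_dominates(x, y, grid=None):
    """True if x weakly first-order stochastically dominates y: F_x <= F_y everywhere."""
    if grid is None:
        grid = np.union1d(x, y)
    return bool(np.all(empirical_cdf(x, grid) <= empirical_cdf(y, grid) + 1e-12))

def absolute_lorenz(sample, quantiles):
    """Absolute Lorenz curve: cumulative sum of the lowest q-fraction of returns, divided by n."""
    sample = np.sort(sample)
    n = len(sample)
    return np.array([sample[: max(1, int(np.ceil(q * n)))].sum() / n for q in quantiles])

def second_order_dominates(x, y, quantiles=np.linspace(0.01, 1.0, 100)):
    """True if the absolute Lorenz curve of x lies weakly above that of y at every quantile."""
    return bool(np.all(absolute_lorenz(x, quantiles) >= absolute_lorenz(y, quantiles) - 1e-12))

# Illustrative usage with simulated realized returns:
rng = np.random.default_rng(0)
r_aggregated = rng.normal(0.006, 0.04, size=500)   # returns from the aggregated measures
r_single = rng.normal(0.004, 0.05, size=500)       # returns from a single performance measure
ks_stat, ks_p = stats.ks_2samp(r_aggregated, r_single)
print(ks_stat, ks_p,
      first_order_dominates(r_aggregated, r_single),
      second_order_dominates(r_aggregated, r_single))
```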
Abstract:
Background: The anti-angiogenic drug bevacizumab (Bv) is currently used in the treatment of different malignancies, including breast cancer. Many angiogenesis-associated molecules are found in the circulation of cancer patients. To date, no prognostic or predictive factors have been identified in breast cancer patients treated with Bv. We present here the first results of the prospective monitoring of 6 angiogenesis-related molecules in the peripheral blood of breast cancer patients treated with a combination of Bv and pegylated liposomal doxorubicin (PLD) in the phase II trial SAKK 24/06. Methods: Patients were treated with PLD (20 mg/m2) and Bv (10 mg/kg) on days 1 and 15 of each 4-week cycle for a maximum of 6 cycles, followed by Bv maintenance monotherapy (10 mg/kg q2 weeks) until progression or severe toxicity. Plasma and serum samples were collected at baseline, after 2 months of therapy, then every 3 months, and at treatment discontinuation. Enzyme-linked immunosorbent assays (Quantikine, R&D Systems and Reliatech) were used to measure the levels of human vascular endothelial growth factor (hVEGF), placental growth factor (hPlGF), matrix metalloproteinase 9 (hMMP9) and the soluble VEGF receptors hsVEGFR-1, hsVEGFR-2 and hsVEGFR-3. The log-transformed data (to reduce skewness) for each marker were analyzed using an analysis of variance (ANOVA) model to determine whether there was a difference between the means of the subgroups of interest (α = 0.05). The untransformed data were also analyzed in the same manner as a sensitivity check. Results: 132 blood samples were collected from 41 of 43 enrolled patients. Baseline levels of the molecules were compared with disease status according to RECIST. There was a statistically significant difference in the mean log-transformed level of hMMP9 between responders [CR+PR] and patients with PD (p-value=0.0004, log fold change=0.7536), and between patients with disease control [CR+PR+SD] and those with PD (p-value<0.0001, log fold change=0.81559), with the log-transformed level of hMMP9 being higher in the responder group. The mean log-transformed level of hsVEGFR-1 was statistically significantly different between patients with disease control [CR+PR+SD] and those with PD (p-value=0.0068, log fold change=-0.6089), the log-transformed level of hsVEGFR-1 being lower for the responder group. The log-transformed level of hMMP9 at baseline was identified as a significant prognostic factor for progression-free survival (PFS): p-value=0.0417, hazard ratio (HR)=0.574, 95% confidence interval (0.336-0.979). No strong correlation was found between the log-transformed levels of hVEGF, hPlGF, hsVEGFR-2 or hsVEGFR-3 and clinical response or the occurrence of severe toxicity, nor between the levels of the different molecules. Conclusions: Our results suggest that the baseline plasma level of the matrix metalloproteinase hMMP9 could predict tumor response and PFS in patients treated with a combination of Bv and PLD. These data justify further investigation in breast cancer patients treated with anti-angiogenic therapy.
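A minimal sketch of the statistical model described in the Methods, assuming the analysis reduces to a one-way ANOVA on log-transformed marker levels across response subgroups, with the untransformed data reused as a sensitivity check. The subgroup labels and simulated values are illustrative only, not trial data.

```python
# Minimal sketch (not the trial's analysis code): log-transform the marker
# levels to reduce skewness, run a one-way ANOVA across response subgroups
# at alpha = 0.05, and repeat the same model on the untransformed data as a
# sensitivity check. Group names and values below are illustrative.
import numpy as np
from scipy import stats

def anova_by_response(levels_by_group, alpha=0.05, log_transform=True):
    """levels_by_group: dict mapping subgroup name (e.g. 'CR+PR', 'SD', 'PD')
    to an array of marker levels (e.g. baseline hMMP9)."""
    groups = [np.asarray(v, dtype=float) for v in levels_by_group.values()]
    if log_transform:
        groups = [np.log(g) for g in groups]
    f_stat, p_value = stats.f_oneway(*groups)
    return f_stat, p_value, p_value < alpha

# Illustrative usage with simulated marker levels:
rng = np.random.default_rng(1)
fake_levels = {
    "CR+PR": rng.lognormal(mean=5.5, sigma=0.6, size=20),
    "SD":    rng.lognormal(mean=5.3, sigma=0.6, size=12),
    "PD":    rng.lognormal(mean=4.8, sigma=0.6, size=9),
}
print(anova_by_response(fake_levels))                       # primary analysis
print(anova_by_response(fake_levels, log_transform=False))  # sensitivity check
```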
Abstract:
Non-Hodgkin's lymphoma (NHL) comprises both indolent forms, including follicular lymphoma (FL) and marginal zone lymphoma (MZL), and aggressive forms, including diffuse large B-cell lymphoma (DLBCL) and mantle cell lymphoma (MCL). FL and DLBCL are the most common subtypes of indolent and aggressive NHL, respectively. Although these lymphomas exhibit different clinical behaviors and outcomes, the prognosis is negatively affected in both DLBCL and FL by the lack of a complete response (CR) with standard treatment options. The aim of therapy should therefore be achievement of a CR, which is not only associated with longer progression-free survival (PFS) and overall survival times, but is also a prerequisite for a cure, particularly in DLBCL. Consolidation treatment with radioimmunotherapy (RIT) is an innovative treatment approach to increase CR rates. Phase II studies have indicated promising results with yttrium-90 ((90)Y)-ibritumomab tiuxetan and iodine-131 ((131)I)-tositumomab as consolidation following induction therapy for previously untreated patients with advanced FL. More recently, investigators reported a marked increase in CR rates and significant improvements in PFS using standard chemotherapy regimens followed by (90)Y-ibritumomab tiuxetan in a phase III randomized trial in patients with previously untreated FL. Data also suggest that RIT may play a role in the treatment of high-risk DLBCL, with encouraging PFS results from a phase II trial of (90)Y-ibritumomab tiuxetan consolidation following induction with rituximab plus chemotherapy in elderly patients with previously untreated DLBCL. With the higher CR rates and longer PFS times observed in patients with FL and DLBCL, as well as encouraging early data from MZL and MCL consolidation trials, RIT appears to have an important role in the treatment of patients with NHL.
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning on the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to the analysis of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
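A minimal sketch, under simplifying assumptions, of the kind of computation described above: pairwise Transfer Entropy between two discretized fluorescence traces, with the conditioning on the network's dynamic state approximated by restricting the estimate to frames of low global mean activity. Bin counts, the history length (one frame), and the activity threshold are illustrative choices, not the authors' exact algorithm.

```python
# Minimal sketch (not the authors' exact algorithm) of pairwise Transfer
# Entropy between two discretized fluorescence traces, restricted to frames
# where the network-average activity is below a threshold, approximating the
# conditioning on the global dynamic state described above.
import numpy as np

def discretize(x, n_bins=3):
    """Map a continuous trace onto n_bins states via its empirical quantiles."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def transfer_entropy(source, target, mask, n_bins=3):
    """TE(source -> target) in bits, with history length 1, over frames where mask is True."""
    s = discretize(source, n_bins)
    t = discretize(target, n_bins)
    idx = np.where(mask[:-1] & mask[1:])[0]           # consecutive valid frames
    t_next, t_past, s_past = t[idx + 1], t[idx], s[idx]
    te = 0.0
    for tn in range(n_bins):
        for tp in range(n_bins):
            for sp in range(n_bins):
                p_joint = np.mean((t_next == tn) & (t_past == tp) & (s_past == sp))
                if p_joint == 0:
                    continue
                p_cond_full = p_joint / np.mean((t_past == tp) & (s_past == sp))
                p_cond_self = (np.mean((t_next == tn) & (t_past == tp))
                               / np.mean(t_past == tp))
                te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Illustrative usage on surrogate fluorescence traces:
rng = np.random.default_rng(2)
fluor = rng.normal(size=(50, 5000))                   # 50 neurons x 5000 frames
global_activity = fluor.mean(axis=0)
low_activity = global_activity < np.quantile(global_activity, 0.9)
print(transfer_entropy(fluor[0], fluor[1], low_activity))
```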
Abstract:
The objective of this study was to investigate the relationship between physical activity (PA) and its related variables measured under confinement and in free-living conditions in Asian individuals, for whom no such information presently exists. The subjects were eighty-six Japanese individuals with a mean age of 38 ± 12 years. Under confinement in a large respiratory chamber, energy expenditure (EE) was measured for 24 h. In addition, two moderate 30-min walking exercises on a horizontal treadmill were assigned. Free-living measurements were also performed over 7 days using a validated accelerometer. The PA level in the chamber (1.47 ± 0.11), expressed as a multiple of the basal EE, was lower than that in free-living conditions (1.53 ± 0.12) (p<0.001). However, the two values were closely correlated (r=0.744, p<0.001). Conversely, a residual analysis showed a wide variation in the mean difference between the two conditions and revealed a significant systematic error (r=-0.548, p<0.001), indicating a widening gap at higher PA levels in free-living conditions. Similar results were obtained after excluding the imposed exercise sessions. In contrast, the daily step counts under the two conditions showed no correlation. The PA level in the chamber (both including and excluding the imposed walking exercises) is thus compatible with the PA level in free-living conditions at the group level, although the daily step counts are unrelated. The PA level in the chamber may therefore provide valuable information toward a better understanding of human PA in daily life as it relates to behavioral research.
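A minimal sketch of the agreement analysis described above, assuming it amounts to a Pearson correlation between chamber and free-living physical activity levels (PAL) plus a Bland-Altman-style residual analysis that correlates the paired differences with the free-living values to detect a systematic error. The data below are simulated for illustration, not the study's measurements.

```python
# Minimal sketch (not the study's analysis code): correlation between chamber
# and free-living PAL, the mean paired difference, and a check for a
# systematic (proportional) error via the correlation of the differences
# with the free-living values. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 86
pal_free = rng.normal(1.53, 0.12, size=n)                       # free-living PAL
pal_chamber = 0.6 * pal_free + 0.55 + rng.normal(0, 0.04, n)    # chamber PAL (simulated bias)

r_corr, p_corr = stats.pearsonr(pal_chamber, pal_free)          # group-level agreement
diff = pal_chamber - pal_free                                   # paired differences
r_resid, p_resid = stats.pearsonr(pal_free, diff)               # systematic error check

print(f"correlation: r={r_corr:.3f}, p={p_corr:.3g}")
print(f"mean difference: {diff.mean():.3f} +/- {diff.std(ddof=1):.3f}")
print(f"systematic error: r={r_resid:.3f}, p={p_resid:.3g}")
```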
Abstract:
A laser-based technique for printing transparent and weakly absorbing liquids is developed. Its principle of operation relies on tightly focusing short laser pulses inside the liquid, close to its free surface, so that the laser radiation is absorbed in a tiny volume around the beam waist, with practically no absorption anywhere else along the beam path. If the absorbed energy exceeds the optical breakdown threshold, a cavitation bubble is generated, and its expansion propels a small fraction of the liquid, which can be collected on a substrate, printing a microdroplet for each laser pulse. The technique does not require preparing the liquid as a thin film, and its forward mode of operation imposes no restriction on the optical properties of the substrate. These characteristics make it well suited for printing a wide variety of materials of interest in diverse applications. We demonstrate that this film-free laser forward printing technique can print microdroplets with good resolution, reproducibility and control, and we analyze the influence of the main process parameter, the laser pulse energy. The mechanisms of liquid printing are also investigated: time-resolved imaging provides a clear picture of the dynamics of liquid transfer, which allows the main features observed in the printed droplets to be understood.
Abstract:
RATIONALE AND OBJECTIVES: The purpose of this study was to investigate the impact of real-time adaptive motion correction on image quality in navigator-gated, free-breathing, double-oblique three-dimensional (3D) submillimeter right coronary magnetic resonance angiography (MRA). MATERIALS AND METHODS: Free-breathing 3D right coronary MRA with real-time navigator technology was performed in 10 healthy adult subjects with an in-plane spatial resolution of 700 × 700 µm. Identical double-oblique coronary MR angiograms were acquired with navigator gating alone and with combined navigator gating and real-time adaptive motion correction. Quantitative objective parameters of contrast-to-noise ratio (CNR) and vessel sharpness, as well as subjective image quality scores, were compared. RESULTS: Superior vessel sharpness, increased CNR, and superior image quality scores were found with combined navigator gating and real-time adaptive motion correction (vs. navigator gating alone; P < 0.01 for all comparisons). CONCLUSION: Real-time adaptive motion correction objectively and subjectively improves image quality in 3D navigator-gated, free-breathing, double-oblique submillimeter right coronary MRA.
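For illustration, a minimal sketch of how a contrast-to-noise ratio comparison of this kind is typically computed, assuming CNR is defined as the difference between mean vessel and mean background signal divided by the noise standard deviation; the ROI intensities below are simulated and this is not the study's measurement protocol.

```python
# Minimal illustrative sketch: CNR = (mean vessel signal - mean background
# signal) / standard deviation of a noise region, computed separately for
# the two acquisitions being compared. ROI values are simulated.
import numpy as np

def cnr(vessel_roi, background_roi, noise_roi):
    return (np.mean(vessel_roi) - np.mean(background_roi)) / np.std(noise_roi, ddof=1)

# Illustrative usage with simulated ROI intensities for the two acquisitions:
rng = np.random.default_rng(4)
cnr_gating_only = cnr(rng.normal(220, 15, 200), rng.normal(120, 15, 200), rng.normal(0, 12, 200))
cnr_gating_plus_correction = cnr(rng.normal(240, 12, 200), rng.normal(118, 12, 200), rng.normal(0, 10, 200))
print(cnr_gating_only, cnr_gating_plus_correction)
```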
Abstract:
We develop a method to obtain first-passage-time statistics for non-Markovian processes driven by dichotomous fluctuations. The fluctuations themselves need not be Markovian. We calculate analytic first-passage-time distributions and mean first-passage times for exponential, rectangular, and long-tail temporal distributions of the fluctuations.
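The analytic results are not reproduced here, but a minimal Monte Carlo sketch of the setup can help fix intuition: a process driven by dichotomous fluctuations with exponentially distributed sojourn times (one of the temporal distributions mentioned above), with the first-passage time to a threshold recorded for each realization. The parameter values and the asymmetric noise amplitudes are illustrative assumptions, not taken from the paper.

```python
# Minimal Monte Carlo sketch (not the paper's analytic calculation) of a
# process dx/dt = xi(t) driven by dichotomous fluctuations xi in {+a, -b}
# with exponentially distributed sojourn times; the first-passage time to a
# threshold is recorded for each realization. Parameters are illustrative.
import numpy as np

def first_passage_time(rng, a=1.5, b=0.5, rate=1.0, threshold=1.0, t_max=1000.0):
    """Simulate x(t) driven by dichotomous noise until x reaches the threshold."""
    x, t = 0.0, 0.0
    state = rng.choice([a, -b])                      # current noise value
    while t < t_max:
        tau = rng.exponential(1.0 / rate)            # sojourn time in current state
        if state > 0 and x + state * tau >= threshold:
            return t + (threshold - x) / state       # crossing happens mid-sojourn
        x += state * tau
        t += tau
        state = a if state < 0 else -b               # switch to the other state
    return np.nan                                    # no crossing within t_max

rng = np.random.default_rng(5)
fpts = np.array([first_passage_time(rng) for _ in range(5000)])
fpts = fpts[~np.isnan(fpts)]
print(f"mean first-passage time: {fpts.mean():.3f} (n={len(fpts)})")
```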
Abstract:
We show that the solution published by Senovilla [Phys. Rev. Lett. 64, 2219 (1990)] is geodesically complete and singularity-free. We also prove that the solution satisfies strong energy and causality conditions, including global hyperbolicity, the strong energy condition, causal symmetry, and causal stability. A detailed discussion of which assumptions of the singularity theorems are not satisfied is presented, and we show explicitly that the solution is in accordance with those theorems. A brief discussion of the results is given.