978 results for Optimal Linear Codes
Abstract:
Optimal and finite positive operator-valued measurements on a finite number N of identically prepared systems have recently been presented. With physical realization in mind, we propose here optimal and minimal generalized quantum measurements for two-level systems. We explicitly construct them up to N = 7 and verify that they are minimal up to N = 5.
Abstract:
Quantum states can be used to encode the information contained in a direction, i.e., in a unit vector. We present the best encoding procedure when the quantum state is made up of N spins (qubits). We find that the quality of this optimal procedure, which we quantify in terms of the fidelity, depends solely on the dimension of the encoding space. We also investigate the use of spatial rotations on a quantum state, which provide a natural and less demanding encoding. In this case we prove that the fidelity is directly related to the largest zeros of the Legendre and Jacobi polynomials. We also discuss our results in terms of the information gain.
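As a rough numerical companion to the statement above, the sketch below computes the largest zeros of Legendre and Jacobi polynomials with SciPy. It does not reproduce the paper's fidelity formula, only the quantities the abstract says the fidelity is related to; the polynomial orders and Jacobi parameters used here are arbitrary choices.

```python
# Sketch: numerically computing the largest zeros of Legendre and Jacobi
# polynomials, the quantities the abstract relates to the encoding fidelity.
# The exact fidelity relation from the paper is not reproduced here.
import numpy as np
from scipy.special import roots_legendre, roots_jacobi

def largest_legendre_zero(n: int) -> float:
    """Largest zero of the Legendre polynomial P_n."""
    zeros, _ = roots_legendre(n)   # Gauss-Legendre nodes are the zeros of P_n
    return float(np.max(zeros))

def largest_jacobi_zero(n: int, alpha: float, beta: float) -> float:
    """Largest zero of the Jacobi polynomial P_n^(alpha, beta)."""
    zeros, _ = roots_jacobi(n, alpha, beta)
    return float(np.max(zeros))

if __name__ == "__main__":
    for n in range(1, 8):          # small N, as in the abstract
        print(n, largest_legendre_zero(n), largest_jacobi_zero(n, 1.0, 0.0))
```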
Abstract:
Technologies adopted in agricultural production systems aim mainly at increasing productivity, reducing costs, and improving the soil's physical, chemical, and biological condition, so as to promote sustainable growth of production. In the 2006 growing season, on the Fazenda Bonança in Pereira Barreto (SP), the productivity of autumn corn forage (FDM) in an irrigated no-tillage system and the soil physical properties were analyzed. The purpose was to study the variability and the linear and spatial correlations between the plant and soil properties, in order to select an indicator of soil physical quality related to corn forage yield. A geostatistical grid with 125 sampling points was installed to collect soil and plant data in an area of 2,500 m². The results show that the studied properties did not vary randomly and that data variability was low to very high, with well-defined spatial patterns whose ranges varied from 7.8 to 38.0 m. The linear correlations between the plant and the soil properties were low, although highly significant. The pairs forage dry matter versus microporosity and stem diameter versus bulk density were best correlated in the 0-0.10 m layer, while the other pairs - forage dry matter versus macroporosity and versus total porosity - were inversely correlated in the same layer. From the spatial point of view, however, there was a high inverse correlation between forage dry matter and microporosity, so that microporosity in the 0-0.10 m layer can be considered a good indicator of soil physical quality with respect to corn forage yield.
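The abstract names two kinds of analysis, linear correlation and spatial (geostatistical) correlation. The sketch below illustrates both on synthetic stand-in data, with a Pearson correlation and a classical empirical semivariogram; the variables, grid, and values are placeholders, not the study's measurements.

```python
# Sketch of the two analyses named in the abstract: a linear (Pearson)
# correlation between a plant and a soil attribute, and an empirical
# semivariogram to inspect spatial structure. Data are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(125, 2))         # 125 points in a 50 m x 50 m area
micro = rng.normal(0.35, 0.05, 125)                # microporosity (placeholder)
fdm = 8.0 - 6.0 * micro + rng.normal(0, 0.3, 125)  # forage dry matter (placeholder)

r, p = pearsonr(fdm, micro)                        # linear correlation
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

def empirical_semivariogram(coords, values, lags):
    """Classical estimator: gamma(h) = mean of (z_i - z_j)^2 / 2 over point
    pairs whose separation distance falls in each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        gamma.append(sq[mask].mean() / 2.0 if mask.any() else np.nan)
    return np.array(gamma)

lags = np.arange(0, 40, 5)
print(empirical_semivariogram(coords, fdm, lags))
```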
Abstract:
Kinematic functional evaluation with body-worn sensors provides discriminative and responsive scores after shoulder surgery, but the optimal combination of movements has not yet been investigated scientifically. The aim of this study was to develop a simplified shoulder-function kinematic score including only essential movements. The P Score, a seven-movement kinematic score developed on 31 healthy participants and 35 patients before surgery and at 3, 6 and 12 months after shoulder surgery, served as a reference. Principal component analysis and multiple regression were used to create simplified scoring models, and the candidate models were compared to the reference score. The ROC curve for shoulder pathology detection and correlations with clinical questionnaires were calculated. The B-B Score (hand to the Back and hand upwards as if to change a Bulb) showed no difference from the P Score in the time*score interaction (P > .05), and its relation with the reference score was highly linear (R² > .97). Absolute values of the correlations with clinical questionnaires ranged from 0.51 to 0.77. Sensitivity was 97% and specificity 94%. The B-B and reference scores are therefore equivalent for the measurement of group responses. The validated simplified scoring model presents practical advantages that facilitate the objective evaluation of shoulder function in clinical practice.
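A minimal sketch of the kind of ROC analysis the abstract reports for pathology detection, using synthetic kinematic scores and a Youden-index cut-off. The sample sizes mirror the abstract, but the scores and the cut-off rule are assumptions, not the study's procedure.

```python
# Sketch: ROC analysis of a kinematic score for shoulder pathology detection,
# of the kind the abstract reports (97% sensitivity, 94% specificity).
# Scores and labels are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
labels = np.concatenate([np.zeros(31), np.ones(35)])   # 0 = healthy, 1 = patient
scores = np.concatenate([rng.normal(80, 10, 31),       # healthy kinematic scores
                         rng.normal(50, 12, 35)])      # patient kinematic scores

# Lower score = worse function, so flip the sign for roc_curve's convention.
fpr, tpr, thresholds = roc_curve(labels, -scores)
best = np.argmax(tpr - fpr)                            # Youden's J optimal cut-off
print(f"AUC = {roc_auc_score(labels, -scores):.2f}")
print(f"Sensitivity = {tpr[best]:.2f}, Specificity = {1 - fpr[best]:.2f}, "
      f"cut-off = {-thresholds[best]:.1f}")
```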
Abstract:
PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: two findings-based algorithms (one following a linear pathway, one a branched pathway), a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immune-compromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from literature and expert opinion. The validity of the diagnostic strategies was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56-, 1.44-, and 1.17-times higher harm, respectively. Findings were corroborated by sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach might, however, not perform as well in other disease settings.
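The contrast between a parallel and a serial workup can be illustrated with a small Monte Carlo sketch. The prevalences, sensitivities, and specificities below are invented placeholders, not the values elicited from the literature and experts in the study, and harm weighting is omitted.

```python
# Monte Carlo sketch contrasting a parallel workup (evaluate every candidate
# disease) with a serial workup (stop at the first positive test).
# All numerical inputs are placeholders.
import numpy as np

rng = np.random.default_rng(2)
diseases = ["crypto_meningitis", "toxoplasmosis", "tb_meningitis",
            "bact_meningitis", "malaria"]
prev = np.array([0.15, 0.20, 0.10, 0.05, 0.25])   # placeholder prevalences
sens = np.array([0.95, 0.90, 0.70, 0.90, 0.95])   # placeholder test sensitivities
spec = np.array([0.95, 0.85, 0.95, 0.98, 0.90])   # placeholder test specificities

n = 100_000
truth = rng.random((n, len(diseases))) < prev                  # true disease status
pos_if_present = rng.random((n, len(diseases))) < sens
pos_if_absent = rng.random((n, len(diseases))) < (1 - spec)
test_pos = np.where(truth, pos_if_present, pos_if_absent)      # observed test results

# Parallel workup: every disease is tested, so a present disease is missed
# only if its own test is falsely negative.
parallel_miss = (truth & ~test_pos).any(axis=1)

# Serial workup: tests run in a fixed order and stop at the first positive,
# so true diseases later in the sequence can go undetected.
first_pos = np.argmax(test_pos, axis=1)
any_pos = test_pos.any(axis=1)
detected = np.zeros_like(test_pos)
detected[np.arange(n), first_pos] = any_pos
serial_miss = (truth & ~detected).any(axis=1)

# Per-patient sensitivity: share of diseased patients with no missed disease.
has_disease = truth.any(axis=1)
print("parallel sensitivity:", 1 - parallel_miss[has_disease].mean())
print("serial sensitivity:  ", 1 - serial_miss[has_disease].mean())
```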
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics, first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and of broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns feature better distributional characteristics than those of portfolio strategies optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to show that the proposed way of aggregating performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure such as the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e., the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were inclined to conclude that the algorithm we propose leads to a distribution of portfolio returns that second-order stochastically dominates those obtained from virtually all of the individual performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
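The two dominance checks described above can be sketched as follows. The return series are synthetic placeholders and the functions are a generic implementation of the empirical-CDF and absolute-Lorenz-curve comparisons, not the thesis's code.

```python
# Sketch of the two dominance checks described above: first-order stochastic
# dominance via empirical CDFs, and second-order dominance via the absolute
# Lorenz curve (cumulative means of sorted returns, i.e. expected shortfalls
# across quantiles). Return series are synthetic placeholders.
import numpy as np

def first_order_dominates(x, y, grid_size=200):
    """True if x's empirical CDF lies at or below y's everywhere on a grid
    (the empirical counterpart of first-order stochastic dominance)."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / x.size
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / y.size
    return np.all(cdf_x <= cdf_y)

def absolute_lorenz(x):
    """Cumulative means of sorted returns: the 'absolute Lorenz curve' used
    for the pointwise second-order dominance check."""
    return np.cumsum(np.sort(x)) / x.size

def second_order_dominates(x, y):
    return np.all(absolute_lorenz(x) >= absolute_lorenz(y))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    aggregated = rng.normal(0.010, 0.04, 1000)   # placeholder realized returns
    single = rng.normal(0.005, 0.05, 1000)
    print("FSD:", first_order_dominates(aggregated, single))
    print("SSD:", second_order_dominates(aggregated, single))
```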
Abstract:
The correct use of closed field chambers to determine N2O emissions requires defining the time of day that best represents the daily mean N2O flux. A short-term field experiment was carried out on a Mollisol under annual crops and no-till management in the Pampa Ondulada of Argentina. The N2O emission rates were measured every 3 h for three consecutive days. Fluxes ranged from 62.58 to 145.99 µg N-N2O m⁻² h⁻¹ (average of five field chambers) and were negatively related (R² = 0.34, p < 0.01) to topsoil temperature (14-20 °C). N2O emission rates measured between 9:00 and 12:00 in the morning were closely related to the daily mean N2O flux (R² = 0.87, p < 0.01), showing that, in the study region, morning sampling is preferable for GHG measurements.
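A minimal sketch of the flux-temperature regression reported above (slope sign, R², p-value) using scipy.stats.linregress; the temperature and flux values are placeholders, not the experiment's data.

```python
# Sketch of the linear regression between N2O flux and topsoil temperature
# (the abstract reports a negative relationship with R² = 0.34, p < 0.01).
# The measurements below are placeholders.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)
soil_temp = rng.uniform(14, 20, 24)                       # °C, 3-hourly samples
flux = 220 - 7.5 * soil_temp + rng.normal(0, 15, 24)      # µg N-N2O per m² per h

res = linregress(soil_temp, flux)
print(f"slope = {res.slope:.1f}, R² = {res.rvalue**2:.2f}, p = {res.pvalue:.4f}")
```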
Abstract:
Laser-induced forward transfer (LIFT) is a laser direct-write technique that offers the possibility of printing patterns with a high spatial resolution from a wide range of materials in a solid or liquid state, such as conductors, dielectrics, and biomolecules in solution. This versatility has made LIFT a very promising alternative to lithography-based processes for the rapid prototyping of biomolecule microarrays. Here, we study the transfer process through the LIFT of droplets of a solution suitable for microarray preparation. The laser pulse energy and beam size were systematically varied, and the effect on the transferred droplets was evaluated. Controlled transfers in which the deposited droplets displayed optimal features could be obtained by varying these parameters. In addition, the transferred droplet volume displayed a linear dependence on the laser pulse energy. This dependence made it possible to determine a threshold energy density, independent of the laser focusing conditions, which acted as a necessary condition for the transfer to occur. The corresponding sufficient condition was given by a different total energy threshold for each laser beam dimension. The threshold energy density was found to be the parameter that determined the amount of liquid transferred per laser pulse, and there was no substantial loss of material due to liquid vaporization during the transfer.
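A sketch of how a linear volume-energy fit yields a threshold energy (the x-intercept) and, after dividing by the beam spot area, a threshold energy density; the energies, volumes, and spot size below are placeholders, not the measured values.

```python
# Sketch: extracting a transfer threshold from the linear dependence of
# transferred droplet volume on laser pulse energy, then converting it to an
# energy density by dividing by the beam spot area. Numbers are placeholders.
import numpy as np

pulse_energy_uJ = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])     # µJ
droplet_volume_pL = np.array([0.3, 1.8, 4.1, 6.0, 7.9, 10.2])  # pL (placeholder)

slope, intercept = np.polyfit(pulse_energy_uJ, droplet_volume_pL, 1)
threshold_energy_uJ = -intercept / slope          # x-intercept: volume extrapolates to 0

spot_diameter_um = 40.0                           # placeholder beam spot size
spot_area_cm2 = np.pi * (spot_diameter_um * 1e-4 / 2) ** 2
threshold_fluence = threshold_energy_uJ * 1e-6 / spot_area_cm2   # J/cm²

print(f"threshold energy ≈ {threshold_energy_uJ:.2f} µJ, "
      f"threshold energy density ≈ {threshold_fluence:.2f} J/cm²")
```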
Abstract:
The problem of searchability in decentralized complex networks is of great importance in computer science, economy, and sociology. We present a formalism that is able to cope simultaneously with the problem of search and the congestion effects that arise when parallel searches are performed, and we obtain expressions for the average search cost both in the presence and the absence of congestion. This formalism is used to obtain optimal network structures for a system using a local search algorithm. It is found that only two classes of networks can be optimal: starlike configurations, when the number of parallel searches is small, and homogeneous-isotropic configurations, when it is large.
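A toy sketch of how the average cost of a local search can be estimated on the two network classes named above, a star versus a homogeneous random graph. Congestion is not modeled, and the graph sizes and search rule are illustrative assumptions rather than the paper's formalism.

```python
# Toy sketch: average cost of a simple local search on the two network classes
# the abstract identifies as optimal, a star and a homogeneous random graph.
# Congestion effects are not modeled; this only illustrates search cost.
import random
import networkx as nx

def local_search_cost(g, source, target, max_steps=10_000):
    """Local search: a node knows only its neighbors; it jumps straight to the
    target if the target is adjacent, otherwise forwards to a random neighbor."""
    node, steps = source, 0
    while node != target and steps < max_steps:
        nbrs = list(g.neighbors(node))
        node = target if target in nbrs else random.choice(nbrs)
        steps += 1
    return steps

def average_cost(g, trials=2000):
    nodes = list(g.nodes)
    total = 0
    for _ in range(trials):
        s, t = random.sample(nodes, 2)
        total += local_search_cost(g, s, t)
    return total / trials

random.seed(5)
star = nx.star_graph(49)                                          # 50 nodes, hub-and-spoke
homogeneous = nx.connected_watts_strogatz_graph(50, k=4, p=1.0, seed=5)
print("star:       ", average_cost(star))
print("homogeneous:", average_cost(homogeneous))
```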
Abstract:
BACKGROUND AND OBJECTIVES: The SBP values to be achieved by antihypertensive therapy in order to maximize the reduction of cardiovascular outcomes are unknown; nor is it clear whether, in patients with a previous cardiovascular event, the optimal values are lower than in low-to-moderate-risk hypertensive patients, or whether a more cautious blood pressure (BP) reduction should be pursued. Because of the uncertainty over whether 'the lower the better' or the 'J-curve' hypothesis is correct, the European Society of Hypertension and the Chinese Hypertension League have promoted a randomized trial comparing antihypertensive treatment strategies aiming at three different SBP targets in hypertensive patients with a recent stroke or transient ischaemic attack. As the optimal level of low-density lipoprotein cholesterol (LDL-C) is also unknown in these patients, LDL-C lowering has been included in the design. PROTOCOL DESIGN: The European Society of Hypertension-Chinese Hypertension League Stroke in Hypertension Optimal Treatment trial is a prospective, multinational, randomized trial with a 3 × 2 factorial design comparing three different SBP targets (1, <145-135; 2, <135-125; 3, <125 mmHg) and two different LDL-C targets (target A, 2.8-1.8; target B, <1.8 mmol/l). The trial is to be conducted on 7500 patients aged at least 65 years (2500 in Europe, 5000 in China) with hypertension and a stroke or transient ischaemic attack 1-6 months before randomization. Antihypertensive and statin treatments will be initiated or modified using suitable registered agents chosen by the investigators, in order to maintain patients within the randomized SBP and LDL-C windows. All patients will be followed up every 3 months for BP and every 6 months for LDL-C. Ambulatory BP will be measured yearly. OUTCOMES: The primary outcome is time to stroke (fatal and non-fatal). Important secondary outcomes are time to first major cardiovascular event, cognitive decline (Montreal Cognitive Assessment), and dementia. All major outcomes will be adjudicated by committees blind to randomized allocation. A Data and Safety Monitoring Board has open access to the data and can recommend trial interruption for safety. SAMPLE SIZE CALCULATION: It has been calculated that 925 patients will reach the primary outcome after a mean 4-year follow-up, which should provide at least 80% power to detect a 25% difference in stroke incidence between SBP targets and a 20% difference between LDL-C targets.
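For context, the sketch below shows a generic event-driven power calculation (Schoenfeld's approximation for a two-group comparison of hazard rates). It only illustrates this type of calculation; it does not reproduce the trial's actual three-arm factorial computation, so its numbers are not expected to match the 925 events quoted above.

```python
# Generic illustration of an event-driven power calculation of the kind cited
# in the sample size section (80% power, ~25% risk reduction). This is the
# standard two-group Schoenfeld approximation, not the trial's own computation.
import math
from scipy.stats import norm

def required_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld's formula: number of events needed to detect a hazard ratio."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

print("events for HR 0.75:", math.ceil(required_events(0.75)))   # ~25% reduction
print("events for HR 0.80:", math.ceil(required_events(0.80)))   # ~20% reduction
```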