996 results for Uniform 3-factorization


Relevance: 30.00%

Abstract:

Uniform-price assignment games are introduced as those assignment markets whose core reduces to a segment. In these games, competitive prices are uniform for all active agents, although products may be non-homogeneous. A characterization in terms of the assignment matrix is given. The only assignment markets in which every submarket is uniform are the Böhm-Bawerk horse markets. We prove that for uniform-price assignment games the kernel, or set of symmetrically pairwise-bargained allocations, either coincides with the core or reduces to the nucleolus.
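
As a hedged illustration of the Böhm-Bawerk horse market mentioned above (a sketch with invented valuations and costs, not an example from the paper), the following Python snippet builds the assignment matrix a_ij = max(h_i - c_j, 0), computes an optimal matching with SciPy, and recovers the segment of uniform competitive prices from the marginal pairs.

    # Bohm-Bawerk horse market sketch; the valuations h and costs c are
    # hypothetical and only illustrate the construction.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    h = np.array([30.0, 28.0, 26.0, 21.0])        # buyer valuations (invented)
    c = np.array([10.0, 11.0, 15.0, 17.0, 20.0])  # seller costs (invented)

    # Assignment matrix: joint surplus of each buyer-seller pair.
    a = np.maximum(h[:, None] - c[None, :], 0.0)

    # Optimal matching maximizes total surplus; SciPy minimizes, so negate.
    rows, cols = linear_sum_assignment(-a)
    print("optimal matching value:", a[rows, cols].sum())

    # Uniform competitive price interval from the marginal pairs.
    hs, cs = np.sort(h)[::-1], np.sort(c)
    m = min(len(hs), len(cs))
    k = int(np.sum(hs[:m] >= cs[:m]))             # number of trades
    lo = max(cs[k - 1], hs[k]) if k < len(hs) else cs[k - 1]
    hi = min(hs[k - 1], cs[k]) if k < len(cs) else hs[k - 1]
    print(f"{k} trades, competitive price in [{lo}, {hi}]")

With these numbers four pairs trade and any uniform price in [17, 20] clears the market, which is the segment structure the abstract refers to.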

Relevance: 30.00%

Abstract:

Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. METHODS: This study was conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with ¹³¹I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with ¹⁷⁷Lu-peptides; and case 3, hepatocellular carcinoma treated with ⁹⁰Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD) and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D(VD), calculated assuming uniform density, was corrected for density, giving D(VDd). The average 3D-RD absorbed dose values, D(3DRD), were compared with D(VD) and D(VDd) using the relative difference Δ(VD/3DRD). At the voxel level, density-binned Δ(VD/3DRD) and Δ(VDd/3DRD) were plotted against density ρ and fitted with a linear regression. RESULTS: The D(VD) calculations showed good agreement with D(3DRD). Δ(VD/3DRD) was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ(VD/3DRD) range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ(VD/3DRD) and ρ: case 1 (Δ = -0.56ρ + 0.62, R² = 0.93), case 2 (Δ = -0.91ρ + 0.96, R² = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R² = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ(VDd/3DRD) < 1.1%), although to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ(VDd/3DRD) range decreased for the 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; case 3, -1.5% to 2%). The linear relationship disappeared for cases 2 and 3 but persisted, with a less pronounced slope, for case 1 (Δ = 0.41ρ - 0.38, R² = 0.88). CONCLUSION: This study shows a small influence of TDH in the abdominal region for 3 representative clinical cases. A simple density-correction method was proposed, improving the agreement of the absorbed dose calculations with Monte Carlo when using our voxel S value implementation.
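
The following is a minimal sketch of the dose-kernel convolution idea described above, with an invented isotropic kernel and a naive voxelwise correction of the form D/ρ; both the kernel and the exact form of the correction are assumptions for illustration, not the VoxelDose implementation evaluated in the study.

    # Minimal sketch of a dose-kernel (voxel S value) convolution with a
    # naive density correction. The kernel shape and the D/rho form of the
    # correction are illustrative assumptions, not the VoxelDose method.
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(0)
    activity = rng.random((32, 32, 32))        # cumulated activity (arbitrary units)
    density = np.ones((32, 32, 32))            # relative tissue density (water = 1)
    density[:, :, 16:] = 0.3                   # hypothetical low-density region

    # Toy isotropic kernel standing in for tabulated voxel S values.
    z, y, x = np.mgrid[-4:5, -4:5, -4:5]
    kernel = 1.0 / (1.0 + x**2 + y**2 + z**2)
    kernel /= kernel.sum()

    d_vd = fftconvolve(activity, kernel, mode="same")  # uniform-density dose
    d_vdd = d_vd / density                             # voxelwise density correction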

Relevance: 30.00%

Abstract:

In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ the method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. A trial-and-error procedure is therefore no longer needed to obtain a desired PSNR and/or definition script, which reduces computational cost. First, we establish a relationship between a Laplacian source and its uniform quantization error, and apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint together with a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach and find a relationship between the image quality measured at a given stage of the coding process and a quantization matrix; the definition script construction problem thus reduces to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts may be generated avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image quality improvement during the decoding process.
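
As a hedged sketch of the quantization-matrix selection problem (using the crude high-rate approximation MSE ≈ q²/12 per coefficient rather than the paper's Laplacian model), one can bisect on a global scale factor applied to the JPEG default luminance matrix until the predicted MSE matches a target PSNR:

    # Hedged sketch: choose a global scale for the JPEG default luminance
    # quantization matrix so that the predicted MSE matches a target PSNR.
    # Uses the high-rate approximation (a uniform quantizer with step q has
    # error variance ~ q^2/12), not the paper's Laplacian-based model.
    import numpy as np

    Q50 = np.array([
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99],
    ], dtype=float)

    def predicted_mse(Q):
        # The DCT is orthonormal, so the pixel-domain MSE is the mean of
        # the per-coefficient quantization error variances.
        return np.mean(Q ** 2 / 12.0)

    def matrix_for_psnr(target_psnr, base=Q50):
        target_mse = 255.0 ** 2 / 10.0 ** (target_psnr / 10.0)
        lo, hi = 1e-3, 1e3
        for _ in range(60):                   # bisection on the scale factor
            mid = 0.5 * (lo + hi)
            if predicted_mse(mid * base) > target_mse:
                hi = mid
            else:
                lo = mid
        return np.clip(np.round(0.5 * (lo + hi) * base), 1, 255)

    print(matrix_for_psnr(35.0))

The paper's contribution is a more accurate coefficient model plus per-coefficient constraints; the bisection-on-scale step above is the part shared with the classical scaling approach it improves on.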

Relevance: 30.00%

Abstract:

Vertebral fracture assessments (VFAs) using dual-energy X-ray absorptiometry increase vertebral fracture detection in clinical practice and are highly reproducible. Measures of reproducibility depend on the frequency and distribution of the event. The aim of this study was to compare 2 reproducibility measures, reliability and agreement, in VFA readings in both a population-based and a clinical cohort. We measured agreement and reliability by the uniform kappa and Cohen's kappa for vertebral reading and fracture identification: 360 VFAs from a population-based cohort and 85 from a clinical cohort. In the population-based cohort, 12% of vertebrae were unreadable, and vertebral fracture prevalence ranged from 3% to 4%. Inter-reader and intrareader reliability by Cohen's kappa was fair to good (0.35-0.71 and 0.36-0.74, respectively), with good inter-reader and intrareader agreement by the uniform kappa (0.74-0.98 and 0.76-0.99, respectively). In the clinical cohort, 15% of vertebrae were unreadable, and vertebral fracture prevalence ranged from 7.6% to 8.1%. Inter-reader reliability was moderate to good (0.43-0.71), and agreement was good (0.68-0.91). In clinical situations, the levels of reproducibility measured by the 2 kappa statistics are concordant, so either could be used to measure agreement and reliability. However, if events are rare, as in a population-based cohort, we recommend evaluating reproducibility using the uniform kappa, as Cohen's kappa may be less accurate.
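
For illustration, the two statistics compared above can be computed as below; the uniform kappa is taken here in its Brennan-Prediger/Bennett's S form, κ_u = (p_o - 1/k)/(1 - 1/k), which is an assumption about the exact variant used, and the ratings are simulated rather than study data.

    # Sketch comparing Cohen's kappa with a uniform kappa; the latter is
    # assumed here to be the Brennan-Prediger / Bennett's S form,
    # kappa_u = (p_o - 1/k) / (1 - 1/k). Ratings are simulated to mimic a
    # rare-event (population-based) setting, not real VFA readings.
    import numpy as np

    def cohens_kappa(a, b, k):
        a, b = np.asarray(a), np.asarray(b)
        po = np.mean(a == b)                          # observed agreement
        pe = sum(np.mean(a == c) * np.mean(b == c) for c in range(k))
        return (po - pe) / (1.0 - pe)                 # chance-corrected by marginals

    def uniform_kappa(a, b, k):
        po = np.mean(np.asarray(a) == np.asarray(b))
        return (po - 1.0 / k) / (1.0 - 1.0 / k)       # chance = uniform over k categories

    rng = np.random.default_rng(1)
    reader1 = (rng.random(360) < 0.04).astype(int)    # ~4% fracture prevalence
    disagree = rng.random(360) < 0.02                 # ~2% reading disagreements
    reader2 = np.where(disagree, 1 - reader1, reader1)

    print("Cohen's kappa:", cohens_kappa(reader1, reader2, 2))
    print("uniform kappa:", uniform_kappa(reader1, reader2, 2))

With rare events the observed agreement is high, so the uniform kappa stays near 1 while Cohen's kappa drops well below it, which is the discordance the abstract warns about.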

Relevance: 30.00%

Abstract:

BACKGROUND: In a high proportion of patients with a favorable outcome after aneurysmal subarachnoid hemorrhage (aSAH), neuropsychological deficits, depression, anxiety, and fatigue are responsible for the inability to return to their regular premorbid life and pursue their professional careers. These problems often remain unrecognized, as no recommendations concerning a standardized comprehensive assessment have yet found entry into clinical routines. METHODS: To establish a nationwide standard for comprehensive assessment after aSAH, representatives of all neuropsychological and neurosurgical departments of the eight Swiss centers treating acute aSAH agreed on a common protocol. In addition, a battery of questionnaires and neuropsychological tests was selected that is optimally suited to the deficits found most prevalent in aSAH patients, standardized, and available in different languages. RESULTS: We propose a baseline inpatient neuropsychological screening using the Montreal Cognitive Assessment (MoCA) between days 14 and 28 after aSAH. In an outpatient setting at 3 and 12 months after the bleeding, we recommend a neuropsychological examination testing all relevant domains, including attention, speed of information processing, executive functions, verbal and visual learning/memory, language, visuo-perceptual abilities, and premorbid intelligence. In addition, a detailed assessment capturing anxiety, depression, fatigue, symptoms of frontal lobe affection, and quality of life should be performed. CONCLUSIONS: This standardized neuropsychological assessment will lead to a more comprehensive assessment of the patient, facilitate the detection and subsequent treatment of previously unrecognized but relevant impairments, and help to determine the incidence, characteristics, modifiable risk factors, and clinical course of these impairments after aSAH.

Relevance: 30.00%

Abstract:

This paper describes the development and validation of a simple and selective analytical method for the determination of 3,4-methylenedioxymethamphetamine (MDMA) in Ecstasy tablets, using high performance liquid chromatography with fluorescence detection. Analysis was performed on a reversed-phase column (LiChrospher 100 C18, 150 × 4.6 mm, 5 µm) with isocratic elution using phosphate buffer 25 mmol/L pH 3.0 and acetonitrile (95:5, v/v). The method presented adequate linearity, selectivity, precision, and accuracy. The MDMA concentration in the analyzed tablets showed remarkable variability (from 8.5 to 59.5 mg/tablet) although the tablet weights were uniform, indicating poor manufacturing control and thus imposing additional health risks on users.

Relevance: 30.00%

Abstract:

Exercises and solutions in LaTeX

Relevance: 30.00%

Abstract:

Exercises and solutions in PDF

Relevance: 30.00%

Abstract:

Laboratory-determined mineral weathering rates need to be normalised to allow their extrapolation to natural systems. The principal normalisation terms used in the literature are mass, and geometric- and BET-specific surface area (SSA). The purpose of this study was to determine how dissolution rates normalised to these terms vary with grain size. Different size fractions of anorthite and biotite, ranging from 180-150 to 20-10 μm, were dissolved in pH 3 HCl at 25 °C in flow-through reactors under far-from-equilibrium conditions. Steady-state dissolution rates after 5376 h (anorthite) and 4992 h (biotite) were calculated from Si concentrations and normalised to initial- and final-mass and geometric-, geometric edge- (biotite), and BET SSA. For anorthite, rates normalised to initial- and final-BET SSA ranged from 0.33 to 2.77 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹, rates normalised to initial- and final-geometric SSA ranged from 5.74 to 8.88 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹, and rates normalised to initial- and final-mass ranged from 0.11 to 1.65 mol(feldspar) g⁻¹ s⁻¹. For biotite, rates normalised to initial- and final-BET SSA ranged from 1.02 to 2.03 × 10⁻¹² mol(biotite) m⁻² s⁻¹, rates normalised to initial- and final-geometric SSA ranged from 3.26 to 16.21 × 10⁻¹² mol(biotite) m⁻² s⁻¹, rates normalised to initial- and final-geometric edge SSA ranged from 59.46 to 111.32 × 10⁻¹² mol(biotite) m⁻² s⁻¹, and rates normalised to initial- and final-mass ranged from 0.81 to 6.93 × 10⁻¹² mol(biotite) g⁻¹ s⁻¹. For all normalising terms, rates varied significantly (p ≤ 0.05) with grain size. The normalising terms giving the least variation in dissolution rate between grain sizes for anorthite were initial BET SSA and initial- and final-geometric SSA. This is consistent with: (1) dissolution being dominated by the slower-dissolving but area-dominant non-etched surfaces of the grains, and (2) the walls of etch pits and other dissolution features being relatively unreactive. These steady-state normalised dissolution rates are likely to be constant with time. Normalisation to final BET SSA did not give constant rates across grain sizes due to a non-uniform distribution of dissolution features: after dissolution, coarser grains had a greater density of dissolution features with BET-measurable but unreactive wall surface area than the finer grains. The normalising term giving the least variation in dissolution rates between grain sizes for biotite was initial BET SSA; initial- and final-geometric edge SSA and final BET SSA gave the next least-varied rates. The basal surfaces dissolved sufficiently rapidly to influence the bulk dissolution rate and prevent geometric edge SSA-normalised dissolution rates from showing the least variation. Simple modelling indicated that biotite grain edges dissolved 71-132 times faster than basal surfaces. In this experiment, initial BET SSA best integrated the different areas and reactivities of the edge and basal surfaces of biotite. Steady-state dissolution rates are likely to vary with time as dissolution alters the ratio of edge to basal surface area; they would therefore be more properly termed pseudo-steady-state rates, appearing constant only because the time period over which they were measured (1512 h) was shorter than the time period over which they would change significantly.
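
A small sketch of the normalisation arithmetic described above: the steady-state rate follows from the outlet Si concentration and flow rate of the flow-through reactor, divided by the Si stoichiometry of the mineral and by mass or SSA. All numbers below are placeholders, not data from this study.

    # Sketch of flow-through dissolution rate normalisation; the numbers
    # are placeholders, not measurements from the study.
    SI_PER_ANORTHITE = 2          # mol Si per mol CaAl2Si2O8

    def dissolution_rate(c_si_mol_per_l, flow_l_per_s, mass_g,
                         ssa_m2_per_g=None, n_si=SI_PER_ANORTHITE):
        """Steady-state rate in mol(mineral) per gram (or per m^2) per second."""
        mineral_flux = c_si_mol_per_l * flow_l_per_s / n_si   # mol mineral / s
        rate_mass = mineral_flux / mass_g                     # mass-normalised
        if ssa_m2_per_g is None:
            return rate_mass
        return rate_mass / ssa_m2_per_g                       # SSA-normalised

    # Example: 2 umol/L Si at 3 mL/min through 1.5 g of grains, BET SSA 0.4 m2/g.
    r = dissolution_rate(2e-6, 3e-3 / 60, 1.5, ssa_m2_per_g=0.4)
    print(f"{r:.2e} mol m-2 s-1")

Whether the SSA in the denominator is the initial or final, geometric or BET value is exactly the choice the study shows to matter.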

Relevance: 30.00%

Abstract:

Dense deployments of wireless local area networks (WLANs) are fast becoming a permanent feature of all developed cities around the world. While this increases capacity and coverage, the problem of increased interference, exacerbated by the limited number of channels available, can severely degrade the performance of WLANs if an effective channel assignment scheme is not employed. In an earlier work, an asynchronous, distributed and dynamic channel assignment scheme was proposed that (1) is simple to implement, (2) does not require any knowledge of the throughput function, and (3) allows asynchronous channel switching by each access point (AP). In this paper, we present an extensive performance evaluation of this scheme when it is deployed in the more practical non-uniform and dynamic topology scenarios. Specifically, we investigate its effectiveness (1) when APs are deployed in a non-uniform fashion, so that some APs suffer from higher levels of interference than others, and (2) when APs are effectively switched 'on/off' by the availability or lack of traffic at different times, creating a dynamically changing network topology. Simulation results based on actual WLAN topologies show that robust performance gains over other channel assignment schemes can still be achieved even in these realistic scenarios.
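
The abstract does not spell the scheme out; as a hedged illustration of the general idea only (each AP asynchronously and greedily moves to the channel on which it measures the least interference, with no throughput model), consider the following sketch.

    # Generic sketch of asynchronous, distributed channel selection. This
    # illustrates the idea of uncoordinated greedy switching only; it is
    # not the specific scheme evaluated in the paper.
    import random

    CHANNELS = [1, 6, 11]            # non-overlapping 2.4 GHz channels

    class AccessPoint:
        def __init__(self):
            self.channel = random.choice(CHANNELS)

        def interference(self, channel, neighbours):
            # Proxy measurement: number of neighbouring APs on the channel.
            return sum(1 for ap in neighbours if ap.channel == channel)

        def update(self, neighbours):
            # Asynchronous greedy step: switch to the least-interfered
            # channel, with no coordination and no throughput function.
            self.channel = min(CHANNELS,
                               key=lambda c: self.interference(c, neighbours))

    aps = [AccessPoint() for _ in range(8)]
    for _ in range(50):              # APs wake up and adapt one at a time
        ap = random.choice(aps)
        ap.update([a for a in aps if a is not ap])
    print([ap.channel for ap in aps])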

Relevance: 30.00%

Abstract:

We present a finite difference scheme with the TVD (total variation diminishing) property for scalar conservation laws. The scheme applies to non-uniform meshes, allowing for variable mesh spacing, and is without upstream weighting. When applied to systems of conservation laws, no scalar decomposition is required, nor are any artificial tuning parameters, and this leads to an efficient, robust algorithm.
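
As a generic illustration of TVD behaviour on variable mesh spacing (a minmod-limited upwind sketch for linear advection, which is not the paper's scheme, since the scheme above explicitly avoids upstream weighting):

    # Generic TVD-style sketch on a non-uniform finite-volume mesh:
    # minmod-limited upwind update for u_t + a u_x = 0 with a > 0.
    # NOT the paper's scheme; it only shows limited reconstruction on a
    # grid with variable cell widths.
    import numpy as np

    def minmod(p, q):
        same_sign = p * q > 0.0
        return np.where(same_sign, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

    def tvd_step(u, xe, a, dt):
        """One forward-Euler step; xe are cell edges, len(xe) == len(u) + 1."""
        h = np.diff(xe)                              # variable cell widths
        xc = 0.5 * (xe[:-1] + xe[1:])                # cell centres
        dl = np.zeros_like(u); dr = np.zeros_like(u)
        dl[1:] = (u[1:] - u[:-1]) / (xc[1:] - xc[:-1])
        dr[:-1] = dl[1:]
        s = minmod(dl, dr)                           # limited slopes (0 at ends)
        uface = u + 0.5 * h * s                      # right-face value, from the left cell
        flux = a * uface
        unew = u.copy()                              # inflow cell u[0] held fixed
        unew[1:] -= dt / h[1:] * (flux[1:] - flux[:-1])
        return unew

    # Geometrically stretched mesh on [0, 1], smooth pulse advected at a = 1.
    xe = np.linspace(0.0, 1.0, 101) ** 1.5
    xc = 0.5 * (xe[:-1] + xe[1:])
    u = np.exp(-200.0 * (xc - 0.2) ** 2)
    dt = 0.4 * np.diff(xe).min()                     # CFL-limited time step
    for _ in range(200):
        u = tvd_step(u, xe, 1.0, dt)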

Relevance: 30.00%

Abstract:

We analyze the non-cooperative interaction between two exporting countries producing differentiated products and one importing country when governments use optimal policies to maximize welfare. The analysis includes product differentiation, asymmetric costs, and Bertrand competition. For identical exporting countries, we demonstrate that the importing country always prefers a uniform tariff regime, while both exporting countries prefer a discriminatory tariff regime, for any degree of product differentiation. If countries are asymmetric in production cost, then the higher-cost exporter always prefers the discriminatory regime, but the lower-cost exporter prefers the uniform regime if there is a significant cost differential. With cost asymmetry, the announcement of a uniform tariff regime by the importer is not a credible strategy, since there is an incentive to deviate to discrimination. This implies that an international body can play a role in ensuring that tariff agreements are respected.

Relevance: 30.00%

Abstract:

For a Lévy process ξ = (ξ_t)_{t≥0} drifting to −∞, we define the so-called exponential functional as I_ξ = ∫_0^∞ exp(ξ_t) dt. Under mild conditions on ξ, we show that the following factorization of exponential functionals holds: I_ξ is equal in distribution to I_{H⁻} × I_Y, where × stands for the product of independent random variables, H⁻ is the descending ladder height process of ξ, and Y is a spectrally positive Lévy process with a negative mean constructed from the ascending ladder height process of ξ. As a by-product, we generate an integral or power series representation for the law of I_ξ for a large class of Lévy processes with two-sided jumps and also derive some new distributional properties. The proof of our main result relies on a fine Markovian study of a class of generalized Ornstein–Uhlenbeck processes, which is itself of independent interest. We use and refine an alternative approach to studying the stationary measure of a Markov process which avoids some technicalities and difficulties that appear in the classical method of employing the generator of the dual Markov process.
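
As a hedged numerical aside, the exponential functional has a fully explicit law in at least one classical special case, Dufresne's identity for Brownian motion with negative drift: ∫_0^∞ exp(2(B_t − μt)) dt is distributed as 1/(2γ_μ), with γ_μ a Gamma(μ, 1) variable. The sketch below checks this by Monte Carlo; it illustrates the object I_ξ only and is unrelated to the factorization proved in the paper.

    # Monte Carlo check of Dufresne's identity for xi_t = 2*B_t - 2*mu*t:
    #     integral_0^inf exp(xi_t) dt  =(d)  1 / (2 * Gamma(mu, 1)).
    # For mu = 1.5 both sides have mean 1/(2*(mu - 1)) = 1.
    import numpy as np

    rng = np.random.default_rng(42)
    mu, dt, T, n_paths = 1.5, 1e-3, 20.0, 1000
    n = int(T / dt)

    samples = np.empty(n_paths)
    for i in range(n_paths):
        # Simulate xi on a grid; truncation at T is harmless since xi
        # drifts to -inf and the integrand decays exponentially in mean.
        incr = 2.0 * rng.normal(0.0, np.sqrt(dt), n) - 2.0 * mu * dt
        xi = np.cumsum(incr)
        samples[i] = dt * (1.0 + np.exp(xi[:-1]).sum())  # left Riemann sum, xi_0 = 0

    reference = 1.0 / (2.0 * rng.gamma(mu, 1.0, n_paths))
    print("MC mean:", samples.mean())
    print("reference-sample mean:", reference.mean(), "theory:", 1.0 / (2.0 * (mu - 1.0)))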

Relevance: 30.00%

Abstract:

The quantitative effects of uniform strain and background rotation on the stability of a strip of constant vorticity (a simple shear layer) are examined. The thickness of the strip decreases in time under the strain, so it is necessary to formulate the linear stability analysis for a time-dependent basic flow. The results show that even a strain rate γ (scaled with the vorticity of the strip) as small as 0.25 suppresses the conventional Rayleigh shear instability mechanism, in the sense that the r.m.s. wave steepness cannot amplify by more than a certain factor and must eventually decay. For γ < 0.25 the amplification factor increases as γ decreases; however, it is only 3 when γ = 0.065. Numerical simulations confirm the predictions of linear theory at small steepness and predict a threshold value necessary for the formation of coherent vortices. The results help to explain the impression, from numerous simulations of two-dimensional turbulence reported in the literature, that filaments of vorticity infrequently roll up into vortices. The stabilization effect may be expected to extend to two- and three-dimensional quasi-geostrophic flows.

Relevance: 30.00%

Abstract:

Vector field formulation based on the Poisson theorem allows an automatic determination of rock physical properties (the magnetization-to-density ratio, MDR, and the magnetization inclination, MI) from combined processing of gravity and magnetic geophysical data. The basic assumptions (i.e., the Poisson conditions) are that gravity and magnetic fields share common sources and that these sources have a uniform magnetization direction and MDR. In addition, the previously existing formulation was restricted to profile data and assumed sufficiently elongated (2-D) sources. For sources that violate the Poisson conditions or have a 3-D geometry, the apparent MDR and MI values generated in this way have an unclear relationship to the actual properties in the subsurface. We present Fortran programs that estimate MDR and MI values for 3-D sources through processing of gridded gravity and magnetic data. Tests with simple geophysical models indicate that magnetization polarity can be successfully recovered by MDR-MI processing, even in cases where juxtaposed bodies cannot be clearly distinguished on the basis of anomaly data. These results may be useful in crustal studies, especially in mapping magnetization polarity from marine-based gravity and magnetic data.
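
As a hedged sketch of the Poisson-relation principle underlying MDR-MI processing (not the Fortran implementation distributed with the paper): with common sources, a uniform magnetization direction and, for simplicity, vertical magnetization and field, the magnetic anomaly is proportional to the vertical derivative of gravity with slope J/(Gρ), so a window-wise least-squares slope between the two grids estimates a quantity proportional to MDR, up to unit conventions.

    # Hedged sketch of Poisson-relation MDR estimation on gridded data:
    # regress the magnetic anomaly against the vertical derivative of
    # gravity in moving windows. Assumes vertical magnetization and field
    # and consistent SI units; an illustration of the principle only.
    import numpy as np

    G = 6.674e-11  # gravitational constant, SI

    def vertical_derivative(grav, dx):
        # Vertical derivative of a potential field via the Fourier-domain
        # relation d/dz <-> |k| for data on a horizontal plane.
        ky = np.fft.fftfreq(grav.shape[0], dx) * 2 * np.pi
        kx = np.fft.fftfreq(grav.shape[1], dx) * 2 * np.pi
        kk = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
        return np.real(np.fft.ifft2(np.fft.fft2(grav) * kk))

    def mdr_map(grav, mag, dx, win=16):
        dgdz = vertical_derivative(grav, dx)
        ny, nx = grav.shape
        out = np.full((ny // win, nx // win), np.nan)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                x = dgdz[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
                y = mag[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
                slope = np.polyfit(x, y, 1)[0]   # least-squares slope
                out[i, j] = G * slope            # proportional to MDR; sign gives polarity
        return out

The sign of the recovered slope is what carries the magnetization polarity information emphasized in the abstract.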