984 results for Prescribed mean-curvature problem


Relevance: 20.00%

Publisher:

Abstract:

Introduction: Smuggling dissolved drugs, especially cocaine, in bottled liquids is a current problem at borders. Common fluoroscopy of packages at the border cannot detect contaminated liquids. To find a dissolved drug, an immunological test using a drug-test panel has to be performed, which means that a control sample of the cargo must be opened. As it is not possible to open all boxes, and as smugglers hide the drug-containing boxes among regularly filled ones, contaminated cargoes can be overlooked. Investigators sometimes cannot perform the drug-test panel because they try not to arouse the smugglers' suspicion in order to follow the cargo and find the recipient. Aims: The objective of our studies was to define non-invasive examination techniques for investigating cargoes suspected to contain dissolved cocaine, without leaving traces on the samples. We examined vessels containing cocaine using radiological cross-sectional techniques, namely multidetector computed tomography (MDCT) and magnetic resonance spectroscopy (MRS). Methods: In a previous study, we examined bottles of wine containing dissolved cocaine in different quantities using an MDCT unit. To distinguish between bottles containing plain red wine and those in which cocaine had been dissolved in the wine, cross-sectional 2D images were reconstructed and the absorption of X-rays was quantified by measuring the mean density of the liquid inside the bottles. In our new study, we investigated phantoms containing cocaine dissolved in water with or without ethanol, as well as cocaine dissolved in different sorts of commercially available wine, using a clinical 3-tesla magnetic resonance unit. To find out whether dissolved cocaine could be detected, proton magnetic resonance spectroscopy (1H MRS) was performed. Results: Using an MDCT unit and measuring the mean attenuation of X-rays, it is possible to determine whether substances are dissolved in a liquid, provided a comparison liquid without any dissolved substances is available. An increase in the mean density indicates the presence of dissolved substances but does not identify them. Using magnetic resonance spectroscopy, dissolved cocaine can be clearly identified because it produces distinctive resonances in the spectrum. In contrast to MDCT, this technique shows high sensitivity (detection of 1 mM cocaine in wine). Conclusions: Cross-sectional imaging techniques such as MDCT and MRS are appropriate for examining cargoes suspected to contain dissolved cocaine. They allow non-invasive investigations that leave no trace on the cargo. While an MDCT scan can detect dissolved substances in liquids, identification of cocaine can be obtained by MR spectroscopy. Acknowledgment: This work was supported by the Centre d'Imagerie BioMédicale (CIBM) of the University of Lausanne (UNIL), the Swiss Federal Institute of Technology Lausanne (EPFL), the University of Geneva (UniGe), the Centre Hospitalier Universitaire Vaudois (CHUV), the Hôpitaux Universitaire de Genève (HUG) and the Leenaards and Jeantet Foundations.

Relevance: 20.00%

Publisher:

Abstract:

Introduction: Swallowing difficulties, or dysphagia, can occur in any age group, although they are most common among elderly people. They can affect patients' ability to take solid oral dosage forms, thus compromising medication adherence. Although the literature is sparse, available data show that prevalence in the general population ranges from 25 to 60%. Prevalence in community pharmacies needs to be explored. Materials & Methods: Community pharmacies were recruited from a random selection in three Swiss states: Basel-Stadt (BS), Basel-Landschaft (BL) and Lausanne (LA). Patients' ability to swallow solid oral medications was assessed with a semi-structured interview; the interviewer spent 4 h in each included pharmacy. Each consecutive patient (18 years and older) entering the pharmacy with a prescription for at least 3 different solid oral forms was enrolled. The study was approved by the Lausanne ethics committee. Results: Sixty pharmacies took part in the study (20 in BS, 10 in BL, 30 in LA) between March and May 2010. The patient inclusion rate was 77.8% (410/527). The prevalence of swallowing disorders was 22.4% (92/410). Patients with swallowing disorders were older (mean age: 67.5 ± 16 years vs. 63.0 ± 14 years, range 19-96; p = 0.03) and more often women (69.6% vs. 59.1%; Chi2 = 3.3, p = 0.04) than patients without swallowing disorders. They had on average 4.6 ± 2.7 drugs with a mean number of 5.5 ± 3.3 tablets or capsules to take daily, which did not differ from the number of drugs taken by patients without swallowing difficulties (4.9 ± 2.5 drugs and 5.9 ± 3.5 tablets; n.s.). The difficulty was mainly related to large tablet size (63%) or the quality of the pill coating (rough, sticky; 14%). Twenty-one patients (37.5%) stated that their swallowing disorders resulted in non-adherence, rated as rarely (12 patients), sometimes (6 patients), very often (1 patient) or always (2 patients). According to patients, no pharmacist and only 2 physicians had enquired about their swallowing issue. Discussion & Conclusion: Swallowing difficulties are frequent among patients in community pharmacies in Switzerland, with an estimated prevalence of 22%. The problem resulted in non-adherence or partial adherence in at least 35% of these patients. However, pharmacists and physicians did not routinely inquire about the disorder. Guidelines should be developed to promote systematic approaches to patients in community pharmacies.

Relevance: 20.00%

Publisher:

Abstract:

We study theoretical and empirical aspects of the mean exit time (MET) of financial time series. The theoretical modeling is done within the framework of the continuous-time random walk. We empirically verify that the mean exit time follows a quadratic scaling law and has an associated prefactor that is specific to the analyzed stock. We perform a series of statistical tests to determine which kind of correlations are responsible for this specificity; the main contribution comes from the autocorrelation of stock returns. We introduce and solve analytically both two-state and three-state Markov chain models. The analytical results obtained with the two-state Markov chain model allow us to collapse the 20 measured MET profiles onto a single master curve.
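The quadratic scaling law can be illustrated with a minimal Monte Carlo sketch (an illustration under simplifying assumptions, not the paper's code or data): for a symmetric random walk, the mean exit time from an interval of half-width L grows as L², so the ratio MET/L² should be roughly constant. The function name and parameters below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_exit_time(L, n_paths=2000):
    """Mean number of unit steps a symmetric random walk started at 0
    needs to leave the interval (-L, L)."""
    times = np.empty(n_paths)
    for i in range(n_paths):
        x, t = 0, 0
        while abs(x) < L:
            x += rng.choice((-1, 1))
            t += 1
        times[i] = t
    return times.mean()

# The ratio MET / L**2 should be roughly constant, mirroring the
# quadratic scaling law reported for the mean exit time of returns.
for L in (5, 10, 20):
    print(L, mean_exit_time(L) / L**2)
```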

Relevance: 20.00%

Publisher:

Abstract:

We consider mean first-passage times (MFPTs) for systems driven by non-Markov gamma and McFadden dichotomous noises. A simplified derivation of the underlying integral equations is given, and the theory for ordinary renewal processes is extended to modified and equilibrium renewal processes. The exact results are compared with the MFPT for Markov dichotomous noise and with the results of Monte Carlo simulations.
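Since the exact results are benchmarked against Monte Carlo simulations, a minimal simulation sketch may help fix ideas. The snippet below (an assumption-laden illustration, not the paper's setup) estimates the MFPT out of a symmetric interval for a free particle driven by Markov dichotomous (telegraph) noise; the amplitude, switching rate and boundaries are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(1)

def mfpt_interval(a=1.0, lam=2.0, b=1.0, dt=1e-3, n_paths=2000):
    """Monte Carlo mean first-passage time out of (-b, b) for dx/dt = xi(t),
    where xi(t) is Markov dichotomous (telegraph) noise of amplitude +/- a
    that switches sign at rate lam."""
    times = np.empty(n_paths)
    for i in range(n_paths):
        x, t = 0.0, 0.0
        state = a if rng.random() < 0.5 else -a   # symmetric initial state
        while abs(x) < b:
            if rng.random() < lam * dt:           # switching probability per time step
                state = -state
            x += state * dt
            t += dt
        times[i] = t
    return times.mean()

print(f"estimated MFPT: {mfpt_interval():.3f}")
```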

Relevance: 20.00%

Publisher:

Abstract:

In a recent paper [J. M. Porrà, J. Masoliver, and K. Lindenberg, Phys. Rev. E 48, 951 (1993)], we derived the equations for the mean first-passage time to either of two boundaries for systems driven by the coin-toss square wave, a particular type of dichotomous noisy signal. The coin-toss square wave, which we here call periodic-persistent dichotomous noise, is a random signal that can change its value only at specified time points, where it changes its value with probability q or retains its previous value with probability p = 1 - q. These time points occur periodically, separated by a time interval t. Here we consider the stationary version of this signal, that is, equilibrium periodic-persistent noise. We show that the mean first-passage time for systems driven by this stationary noise exhibits neither the discontinuities nor the oscillations found in the case of nonstationary noise. We also discuss the existence of discontinuities in the mean first-passage time for random one-dimensional stochastic maps.
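For concreteness, the following sketch (an illustration, not the paper's code) generates a sample path of the periodic-persistent dichotomous noise described above: the signal takes the values ±amplitude and may flip only at periodic time points, where it changes sign with probability q and persists with probability p = 1 - q. The function name and defaults are illustrative.

```python
import numpy as np

def periodic_persistent_noise(n_points, q, period=1.0, amplitude=1.0, rng=None):
    """Sample path of periodic-persistent dichotomous noise: the value +/- amplitude
    may change sign only at the times k*period, flipping there with probability q
    and keeping its value with probability 1 - q."""
    rng = rng or np.random.default_rng()
    values = np.empty(n_points)
    state = amplitude if rng.random() < 0.5 else -amplitude  # equilibrium (symmetric) start
    for k in range(n_points):
        if k > 0 and rng.random() < q:   # a flip is allowed only at the periodic points
            state = -state
        values[k] = state
    times = np.arange(n_points) * period
    return times, values

times, xi = periodic_persistent_noise(20, q=0.3)
print(list(zip(times, xi)))
```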

Relevance: 20.00%

Publisher:

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for the characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps; it therefore became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance process. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is a closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is which jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates coincide with the true parameters of the models.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models from asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure becomes. In practice, however, this relationship is not so straightforward because of the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
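To make the characteristic-function matching concrete, here is a minimal, hedged sketch (an illustration, not the thesis code): the estimator minimizes a squared distance between the empirical characteristic function of the observations and a model characteristic function over a grid of arguments. A simple normal model stands in for the closed-form stochastic volatility jump-diffusion characteristic function derived in the thesis; all names and values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cf(u, x):
    """Empirical characteristic function of the sample x at the arguments u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def model_cf(u, mu, sigma):
    """Characteristic function of a normal model, standing in for the
    closed-form SV jump-diffusion characteristic function."""
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

def ecf_objective(params, u, x):
    """Squared distance between empirical and model CFs summed over the grid u."""
    mu, sigma = params
    diff = empirical_cf(u, x) - model_cf(u, mu, sigma)
    return np.sum(np.abs(diff) ** 2)

# Simulate "returns" with known parameters, then recover them by CF matching.
rng = np.random.default_rng(0)
x = rng.normal(0.05, 0.2, size=5000)
u = np.linspace(-10.0, 10.0, 81)              # grid of characteristic function arguments
fit = minimize(ecf_objective, x0=[0.0, 0.1], args=(u, x),
               bounds=[(-1.0, 1.0), (1e-4, 2.0)])
print(fit.x)                                  # should be close to (0.05, 0.2)
```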

Relevance: 20.00%

Publisher:

Abstract:

In the Hamiltonian formulation of predictive relativistic systems, the canonical coordinates cannot be the physical positions. The relation between them is given by the individuality differential equations. However, due to the arbitrariness in the choice of Cauchy data, there is a wide family of solutions of these equations. In general, those solutions do not satisfy the condition that the moduli of the velocities be constant, and therefore the world lines have to be reparametrized to proper time. We derive here a condition on the Cauchy data for the individuality equations which ensures the constancy of the velocity moduli and makes the reparametrization unnecessary.

Relevance: 20.00%

Publisher:

Abstract:

We consider the effects of quantum fluctuations in mean-field quantum spin-glass models with pairwise interactions. We examine the nature of the quantum glass transition at zero temperature in a transverse field. In models (such as the random orthogonal model) where the classical phase transition is discontinuous, an analysis using the static approximation reveals that the transition becomes continuous at zero temperature.

Relevance: 20.00%

Publisher:

Abstract:

We study numerically the out-of-equilibrium dynamics of the hypercubic-cell spin glass in high dimensionalities. We obtain evidence of aging effects qualitatively similar both to experiments and to simulations of low-dimensional models. This suggests that the Sherrington-Kirkpatrick model, as well as other mean-field finite-connectivity lattices, can be used to study these effects analytically.

Relevance: 20.00%

Publisher:

Abstract:

This investigation was initiated to determine the causes of a rutting problem that occurred on Interstate 80 in Adair County. I-80 from Iowa 25 to the Dallas County line was opened to traffic in November 1960. The original pavement consisted of 4-1/2" of asphalt cement concrete over 12" of rolled stone base and 12" of granular subbase. A 5-1/2" overlay of asphalt cement concrete was placed in 1964. In 1970-1972, the roadway was resurfaced with 3" of asphalt cement concrete. In 1982, an asphalt cement concrete inlay, designed for a 10-year life, was placed in the eastbound lane. The mix designs for all courses met or exceeded all current criteria being used to formulate job mixes. Field construction reports indicate that asphalt usage, densities, field voids and filler-bitumen determinations were well within specification limits on a very consistent basis. Field laboratory reports indicate that laboratory voids were within the prescribed limits for the base courses and below the prescribed limits for the surface course. Instructional memorandums do indicate that extreme caution should be exercised when the voids are at or near the lower limits and traffic is not minimal. There is also a provision specifying that field voids control when there is a conflict between laboratory voids and field voids. It appears that contract documents do not adequately address the direction that must be taken when this conflict arises, since it can readily be shown that laboratory voids must be in the very low or dangerous range if field voids are to be kept below the maximum limit under the current density specifications. A rut depth survey of January 1983 identified little or no rutting on this section of roadway. Cross sections obtained in October 1983 identified rutting ranging from 0 to 0.9", with a general trend of the rutting increasing from approximately 0.3" at MP 88 to 0.7" at MP 98. No areas of significant rutting were identified in the inside lane. Structural evaluation with the Road Rater indicated adequate structural capacity and also indicated that the longitudinal subdrains were functioning properly to provide adequate soil support values. Two pavement sections taken from the driving lane indicated very little distortion in the lower 7" base course. Essentially all of the distortion had occurred in the upper 2" base course and the 1-1/2" surface course. Analysis of cores taken from this section of Interstate 80 indicated very little densification of either the surface or the upper or lower base courses. The asphalt cement content of both the Type B base courses and the Type A surface course was substantially higher than the intended asphalt cement content. The only explanation for this is that the salvaged material contained a greater percentage of asphalt cement than the initial extractions indicated. The penetration and viscosity of the blend of new asphalt cement and the asphalt cement recovered from the salvaged material were relatively close to those intended for this project. The 1983 ambient temperatures were extremely high from June 20 through September 10. The rutting is a result of a combination of adverse factors, including (1) high asphalt content, (2) the difference between laboratory and field voids, (3) a lack of intermediate-sized crushed particles, and (4) high ambient temperatures.
The high asphalt content in the 2" upper base course produced an asphalt concrete mix that did not exhibit satisfactory resistance to deformation under heavy loading. The majority of the rutting resulted from distortion of the 2" upper base lift. Heater planing is recommended as an interim corrective action. A further recommendation is to design for a 20-year alternative by removing 2-1/2" of material from the driving lane by milling and replacing it with 2-1/2" of asphalt concrete with improved stability. This would be followed by placing 1-1/2" of high-quality resurfacing on the entire roadway. Other recommendations include improved density and stability requirements for asphalt concrete on high-traffic roadways.

Relevance: 20.00%

Publisher:

Abstract:

A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multiclass problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest codeword. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions, and nonlinear classifiers also fail to handle some types of decision surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when the training set is sufficiently large.
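As a minimal illustration of ECOC coding and decoding (a generic sketch, not the subclass-based design proposed in the paper), the snippet below fixes a binary code matrix with one codeword per class, trains one linear base classifier per code column, and assigns a sample to the class whose codeword is closest in Hamming distance to the predicted bits. The data set and base classifier are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy 4-class problem standing in for any multiclass data set.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Hand-picked 4 x 8 code matrix: one codeword (row) per class, no constant columns.
code = np.array([[0, 0, 0, 1, 1, 1, 0, 1],
                 [0, 1, 1, 0, 0, 1, 1, 0],
                 [1, 0, 1, 0, 1, 0, 1, 1],
                 [1, 1, 0, 1, 0, 0, 0, 0]])
n_bits = code.shape[1]

# One binary base classifier (dichotomizer) per code position.
dichotomizers = []
for j in range(n_bits):
    bit_labels = code[y, j]                   # relabel every sample as 0/1 for this column
    dichotomizers.append(LogisticRegression(max_iter=1000).fit(X, bit_labels))

def ecoc_predict(X_new):
    """Decode by choosing the class whose codeword is closest in Hamming distance."""
    bits = np.column_stack([clf.predict(X_new) for clf in dichotomizers])
    dists = np.abs(bits[:, None, :] - code[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

print("training accuracy:", (ecoc_predict(X) == y).mean())
```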

Relevance: 20.00%

Publisher:

Abstract:

The spectrum of charged and neutral pions is investigated in pure neutron matter by letting the pions interact with a neutron Fermi sea in a self-consistent scheme that simultaneously renormalizes the mesons, considered to be the source of the interaction, and the nucleons. The possibility of obtaining different kinds of pion condensates is investigated, with the result that they cannot be reached even for values of the spin-spin correlation parameter g' far below the commonly accepted range.

Relevance: 20.00%

Publisher:

Abstract:

In this article, the Markowitz mean-variance model is obtained from the inverse of the variance-covariance matrix, following a shorter and mathematically rigorous path. The equilibrium equation of Sharpe's CAPM is also derived.
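As a hedged illustration of how the inverse of the variance-covariance matrix enters the mean-variance solution (a textbook sketch under standard assumptions, not the article's derivation), the snippet below computes the global minimum-variance portfolio w = Σ⁻¹1 / (1ᵀΣ⁻¹1) and a tangency portfolio proportional to Σ⁻¹(μ - r_f·1). The return vector, covariance matrix and risk-free rate are invented for the example.

```python
import numpy as np

# Illustrative inputs: expected returns, covariance matrix, risk-free rate.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
rf = 0.03
ones = np.ones(len(mu))

Sigma_inv = np.linalg.inv(Sigma)              # inverse of the variance-covariance matrix

# Global minimum-variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
w_mv = Sigma_inv @ ones / (ones @ Sigma_inv @ ones)

# Tangency (maximum Sharpe ratio) portfolio: proportional to Sigma^{-1} (mu - rf * 1)
w_tan = Sigma_inv @ (mu - rf * ones)
w_tan /= w_tan.sum()

print("minimum-variance weights:", np.round(w_mv, 4))
print("tangency weights:       ", np.round(w_tan, 4))
```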