941 results for "Network scale-up method"
Abstract:
The work carried out in this project is based on the implementation of a wireless demonstrator and, more specifically, on the study of the network coding and virtualization techniques. Network coding is a new data transmission method based on coding packets to increase the throughput obtained so far with conventional transmission methods. Virtualization is a technique that consists of sharing the resources of a system more efficiently. In our case, virtualization will be used to divide a wireless interface among several virtual users transmitting and receiving data simultaneously. The objective of the project is to carry out a series of tests and studies to assess the advantages of these two techniques.
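As a minimal, generic illustration of the idea behind network coding (the classic two-way relay example, not the demonstrator built in this project), a relay can XOR two packets and broadcast the result once, letting each end node recover the other's packet with its own copy:

```python
# Textbook two-way relay network-coding example (not this project's implementation).
# Nodes A and B each send a packet to a relay; the relay broadcasts the XOR of the
# two packets, and each node decodes using the copy of its own transmission.

def xor_packets(p1: bytes, p2: bytes) -> bytes:
    """XOR two equal-length packets byte by byte."""
    return bytes(a ^ b for a, b in zip(p1, p2))

packet_a = b"HELLO_B!"          # packet from node A, destined for B
packet_b = b"HELLO_A!"          # packet from node B, destined for A

coded = xor_packets(packet_a, packet_b)   # single broadcast from the relay

assert xor_packets(coded, packet_a) == packet_b   # node A recovers B's packet
assert xor_packets(coded, packet_b) == packet_a   # node B recovers A's packet
```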
Abstract:
In contrast to fragmental rockfall runout assessment, only a few robust methods exist to quantify rock-mass-failure susceptibilities at regional scale. A detailed slope angle analysis of recent Digital Elevation Models (DEM) can be used to detect potential rockfall source areas, thanks to the Slope Angle Distribution procedure. However, this method does not provide any information on block-release frequencies inside the identified areas. The present paper adds to the Slope Angle Distribution of the cliff unit its normalized cumulative distribution function. This improvement amounts to a quantitative weighting of slope angles, introducing rock-mass-failure susceptibilities inside the rockfall source areas previously detected. Rockfall runout assessment is then performed using the GIS- and process-based software Flow-R, providing relative frequencies for runout. Thus, taking both susceptibility results into consideration, this approach can be used to establish, after calibration, hazard and risk maps at regional scale. As an example, a risk analysis of vehicle traffic exposed to rockfalls is performed along the main roads of the Swiss alpine valley of Bagnes.
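A minimal sketch of the weighting idea described above, assuming slope angles extracted from a DEM; the synthetic numbers and the NumPy implementation are illustrative and are not the paper's Flow-R workflow:

```python
import numpy as np

# Weight slope angles inside detected rockfall source areas by the normalized
# cumulative distribution of the cliff unit's slope-angle population, so that
# steeper cells receive higher susceptibility (illustrative sketch only).

def susceptibility_weights(cliff_slopes_deg, source_slopes_deg):
    """Return a weight in [0, 1] for each source-area cell from the cliff-unit CDF."""
    sorted_slopes = np.sort(cliff_slopes_deg)
    # Empirical, normalized cumulative distribution function of the cliff unit.
    cdf = np.arange(1, sorted_slopes.size + 1) / sorted_slopes.size
    # Map each source-area slope angle onto that CDF (linear interpolation).
    return np.interp(source_slopes_deg, sorted_slopes, cdf)

cliff_unit = np.random.normal(55, 8, 10_000)       # synthetic DEM slope angles (deg)
source_cells = np.array([48.0, 57.5, 63.0, 70.0])  # cells flagged as source areas
print(susceptibility_weights(cliff_unit, source_cells))
```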
Abstract:
Pulsed-field gel electrophoresis (PFGE) is widely used for epidemic investigations of methicillin-resistant Staphylococcus aureus (MRSA). In the present study, we evaluated its use in a long-term epidemiological setting (years to a few decades, country to continent level). The clustering obtained from PFGE patterns after SmaI digestion of the DNA of 20 strains was compared to that obtained using a phylogenetic typing method (multiprimer RAPD). The results showed that the analysis of small PFGE bands (10-85 kb) correlates better with multiprimer RAPD than the analysis of large PFGE bands (>85-700 kb), suggesting that the analysis of small bands would be more suitable for the investigation of long-term epidemiological settings. However, given the technical difficulty of obtaining good resolution of these bands and the putative presence of plasmids among them, PFGE does not appear to be a method of choice for the long-term epidemiological analysis of MRSA.
Abstract:
Background: CMR has recently emerged as a robust and reliable technique to assess coronary artery disease (CAD). A negative perfusion CMR test predicts low event rates of 0.3-0.5%/year. Invasive coronary angiography (CA) remains the "gold standard" for the evaluation of CAD in many countries. Objective: To assess, from a health care payer perspective, the costs of two strategies in the European CMR registry for the work-up of known or suspected CAD: strategy 1) CA for all patients, or 2) CA only for patients diagnosed positive for ischemia in a prior CMR. Method and results: Using data from the European CMR registry (20 hospitals, 11'040 consecutive patients), we calculated the proportion of patients with known or suspected CAD (n=2'717) who were diagnosed positive (20.6%), uncertain (6.5%), and negative (72.9%) after the CMR test. No other medical test was performed on patients who were negative for ischemia. Patients diagnosed positive had a coronary angiography. Those with an uncertain diagnosis had additional tests (84.7% stress echocardiography, 13.1% CCT, 2.3% SPECT), and these costs were added to the CMR strategy costs. Cost information for tests in Germany and Switzerland was used, and a sensitivity analysis was performed for inpatient CA (for costs, see figure). Discussion: The CMR strategy costs less than the CA strategy for the health insurance systems in both Germany and Switzerland. While lower in cost, the CMR strategy is non-invasive, does not expose patients to radiation, and yields additional information on cardiac function, viability, valves, and great vessels. Developing the use of CMR instead of CA might therefore imply some reduction in costs together with superior patient safety and comfort, and better utilization of resources at the hospital level.
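A small sketch of the per-patient expected-cost comparison implied above; the outcome proportions are taken from the abstract, but the unit costs are hypothetical placeholders rather than the German or Swiss tariffs used in the study:

```python
# Illustrative expected-cost comparison per patient (sketch only).
# Proportions come from the abstract; the unit costs below are placeholders.

p_pos, p_uncertain, p_neg = 0.206, 0.065, 0.729        # CMR outcome proportions
mix_uncertain = {"stress_echo": 0.847, "cct": 0.131, "spect": 0.023}

cost = {"cmr": 500.0, "ca": 1500.0,                     # placeholder unit costs
        "stress_echo": 300.0, "cct": 400.0, "spect": 700.0}

# Strategy 1: invasive coronary angiography (CA) for every patient.
cost_ca_strategy = cost["ca"]

# Strategy 2: CMR first; CA only if positive, an extra non-invasive test if uncertain.
cost_uncertain_followup = sum(share * cost[test] for test, share in mix_uncertain.items())
cost_cmr_strategy = (cost["cmr"]
                     + p_pos * cost["ca"]
                     + p_uncertain * cost_uncertain_followup)

print(f"CA-first strategy:  {cost_ca_strategy:.0f} per patient")
print(f"CMR-first strategy: {cost_cmr_strategy:.0f} per patient")
```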
Abstract:
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for networks of neurons can be written as Fokker-Planck-Kolmogorov equations for the probability density of neurons, the main parameters in the model being the connectivity of the network and the noise. We analyse several aspects of the NNLIF model: the number of steady states, a priori estimates, blow-up issues and convergence toward equilibrium in the linear case. In particular, for excitatory networks, blow-up always occurs for initial data concentrated close to the firing potential. These results show how critically the model's behaviour depends on the balance between the noise and the excitatory/inhibitory interactions set by the connectivity parameter.
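For reference, a commonly used form of the NNLIF Fokker-Planck equation (notation may differ from the paper's):

```latex
% A commonly used form of the NNLIF Fokker--Planck equation (notation may differ
% from the paper's). p(v,t) is the probability density of neurons at membrane
% potential v, N(t) the mean firing rate, b the connectivity parameter
% (b > 0 excitatory, b < 0 inhibitory), and a(N) > 0 the noise intensity.
\begin{align}
  \partial_t p(v,t)
  + \partial_v\!\bigl[(-v + b\,N(t))\,p(v,t)\bigr]
  - a\bigl(N(t)\bigr)\,\partial_{vv} p(v,t)
  &= \delta(v - V_R)\,N(t), \\
  N(t) &= -\,a\bigl(N(t)\bigr)\,\partial_v p(V_F,t) \;\ge\; 0,
\end{align}
% with the boundary condition p(V_F,t) = 0 at the firing potential V_F and
% reset potential V_R < V_F.
```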
Abstract:
One of the main problems in combating tuberculosis is the poor penetration of drugs into mycobacterial cells. A prodrug approach, with activation inside mycobacterial cells, is a possible strategy to overcome this hurdle and achieve efficient drug uptake. Esters are attractive candidates for such a strategy, and we and others have previously reported the activity of esters of weak organic acids against mycobacteria. However, very little is known about ester hydrolysis by mycobacteria, and no biological model is available to study the activation of prodrugs by these microorganisms. To begin filling this gap, we have embarked on a project to develop an in vitro method to study prodrug activation by mycobacteria using Mycobacterium smegmatis homogenates. The model ester substrates were ethyl nicotinate and ethyl benzoate, whose hydrolysis was monitored and characterized kinetically. Our studies showed that in M. smegmatis most esterase activity is associated with the soluble fraction (cytosol) and is preserved by storage at 5°C or at room temperature for one hour, or by storage at -80°C for up to one year. In the range of homogenate concentrations studied (5-80% in buffer), k(obs) varied linearly with homogenate concentration for both substrates. We also found that the homogenates showed Michaelis-Menten kinetics with both prodrugs. Since ethyl benzoate is a good substrate for the mycobacterial esterases, this compound can be used to standardize the esterase activity of homogenates, allowing results of incubations of prodrugs with homogenates from different batches to be readily compared.
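A brief sketch of a Michaelis-Menten fit of the kind mentioned above, using invented substrate concentrations and rates rather than the M. smegmatis homogenate data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative Michaelis-Menten fit (sketch only; the numbers are made up).

def michaelis_menten(s, vmax, km):
    """Initial hydrolysis rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

substrate_mM = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])   # [S]
rate = np.array([0.9, 1.6, 3.0, 4.2, 5.3, 6.1, 6.6])             # v (a.u./min)

(vmax, km), _ = curve_fit(michaelis_menten, substrate_mM, rate, p0=(7.0, 0.5))
print(f"Vmax ~ {vmax:.2f} a.u./min, Km ~ {km:.2f} mM")
```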
Abstract:
Diagnostic information on children is typically elicited from both children and their parents. The aims of the present paper were to: (1) compare prevalence estimates according to maternal reports, paternal reports and direct interviews of children [major depressive disorder (MDD), anxiety and attention-deficit and disruptive behavioural disorders]; (2) assess mother-child, father-child and inter-parental agreement for these disorders; (3) determine the association between several child, parent and familial characteristics and the degree of diagnostic agreement or the likelihood of parental reporting; (4) determine the predictive validity of diagnostic information provided by parents and children. Analyses were based on 235 mother-offspring, 189 father-offspring and 128 mother-father pairs. Diagnostic assessment included the Kiddie Schedule for Affective Disorders and Schizophrenia (K-SADS) interview for offspring and the Diagnostic Interview for Genetic Studies (DIGS) for parents and for offspring at follow-up. Parental reports were collected using the Family History - Research Diagnostic Criteria (FH-RDC). Analyses revealed that: (1) prevalence estimates for internalizing disorders were generally lower according to parental information than according to the K-SADS; (2) mother-child and father-child agreement was poor and within similar ranges; (3) parents with a history of MDD or attention deficit hyperactivity disorder (ADHD) reported these disorders in their children more frequently; (4) in a sub-sample followed up into adulthood, diagnoses of MDD, separation anxiety and conduct disorder at baseline concurred with the corresponding lifetime diagnosis at age 19 according to the child rather than according to the parents. In conclusion, our findings reveal large discrepancies between diagnostic information provided by parents and children, with generally lower reporting of internalizing disorders by parents and differential reporting of depression and ADHD according to parental disease status. Follow-up data also support the validity of information provided by adolescent offspring.
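As an illustration of how parent-child diagnostic agreement can be quantified, a short Cohen's kappa sketch with invented 2x2 counts (the abstract does not state which agreement statistic was used):

```python
import numpy as np

# Illustrative Cohen's kappa for parent-child diagnostic agreement (sketch only;
# the counts below are invented, not the study's data).

def cohens_kappa(table):
    """table[i][j] = number of pairs with rater-1 category i and rater-2 category j."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n
    p_expected = (table.sum(axis=1) @ table.sum(axis=0)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Rows: parent reports disorder yes/no; columns: child interview yes/no.
agreement_table = [[12, 18],
                   [25, 180]]
print(f"kappa = {cohens_kappa(agreement_table):.2f}")
```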
Abstract:
Aim: The study aims to describe the activities of the Swiss Early Psychosis Project (SWEPP), which was founded in 1999 as a national network to further and disseminate knowledge on early psychosis (EP) and to enhance collaboration between healthcare groups. Methods: The present paper is a detailed account of the initiation and development of the Swiss network. We describe all activities, such as the several educational campaigns addressed to primary and secondary care groups since the early days. We also provide an overview of the current status of EP services throughout the country. Results: Today, most regions in Switzerland provide specialized EP inpatient and/or outpatient services with a clinical or combined clinical-research approach that targets at-risk and/or first-episode populations. Some more recently initiated EP services have been launched as collaborative models between several local or regional psychiatric services. Conclusions: The increasing number of EP services and experts in Switzerland may mirror the catalyzing contribution of the Swiss Early Psychosis Project in this important field of health care. The country's small size and the increasing density of specialized services provide an excellent basis for larger-scale networking activities in the future, both in clinical and research areas.
Abstract:
Analysis of mycolic acids by thin-layer chromatography (TLC) has been employed by several laboratories worldwide as a method for fast identification of mycobacteria. This method was introduced in Brazil by our laboratory in 1992 as a routine identification technique. To date, 861 isolated strains have been identified by mycolic acid TLC and by standard biochemical tests; 61% of these strains came from clinical samples, 4% were isolated from frogs and 35% came from environmental samples. Mycobacterium tuberculosis strains identified by classical methods were confirmed by their mycolic acid contents (I, III and IV). The method allowed earlier differentiation of the M. avium complex - MAC (mycolic acids I, IV and VI) from M. simiae (acids I, II and IV), both of which have similar biochemical properties. The method also made it possible to distinguish M. fortuitum (acids I and V) from M. chelonae (acids I and II), and to detect cases of mixed mycobacterial infection, such as M. tuberculosis with MAC and M. fortuitum with MAC. In conclusion, four years of experience show that mycolic acid TLC is an easy, reliable, fast and inexpensive method and an important tool to complement conventional mycobacterial identification methods.
Abstract:
We present a novel spatiotemporal-adaptive Multiscale Finite Volume (MsFV) method, which is based on the natural idea that the global coarse-scale problem has a longer characteristic time than the local fine-scale problems. As a consequence, the global problem can be solved with larger time steps than the local problems. In contrast to the pressure-transport splitting usually employed in the standard MsFV approach, we propose to start directly with a local-global splitting that allows the original degree of coupling to be retained locally. This is crucial for highly non-linear systems or in the presence of physical instabilities. To obtain an accurate and efficient algorithm, we devise new adaptive criteria for the global update that are based on changes of coarse-scale quantities rather than on fine-scale quantities, as is routinely done in the adaptive MsFV method. By means of a complexity analysis we show that the adaptive approach gives a noticeable speed-up with respect to the standard MsFV algorithm. In particular, it is efficient in the case of large upscaling factors, which is important for multiphysics problems. Based on the observation that local time stepping acts as a smoother, we devise a self-correcting algorithm which incorporates information from previous times to improve the quality of the multiscale approximation. We present results of multiphase flow simulations both for Darcy-scale and multiphysics (hybrid) problems, in which a local pore-scale description is combined with a global Darcy-like description. The novel spatiotemporal-adaptive multiscale method based on the local-global splitting is not limited to porous media flow problems, but can be extended to any system described by a set of conservation equations.
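A toy, runnable sketch of the adaptive idea described above: local fine-scale steps are always taken, while a (here trivial) global coarse update is triggered only when coarse-scale averages change beyond a tolerance. The grid sizes, criterion and threshold are illustrative assumptions, not the authors' MsFV algorithm:

```python
import numpy as np

# 1-D diffusion toy: many small "local" steps, and a stand-in "global" update that
# fires only when coarse-cell averages have changed by more than a tolerance.

n_fine, n_coarse = 64, 8                 # fine cells, coarse cells (upscaling factor 8)
u = np.exp(-((np.linspace(0, 1, n_fine) - 0.3) ** 2) / 0.01)   # initial profile
dt, diff = 1e-4, 1.0
coarse_ref = u.reshape(n_coarse, -1).mean(axis=1)              # state at last global solve
global_updates = 0

for step in range(500):
    # Local fine-scale step (explicit diffusion, periodic boundaries).
    u += dt * diff * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / (1 / n_fine) ** 2
    coarse_now = u.reshape(n_coarse, -1).mean(axis=1)
    # Adaptive criterion based on coarse-scale (not fine-scale) changes.
    if np.max(np.abs(coarse_now - coarse_ref)) > 0.02:
        coarse_ref = coarse_now          # stand-in for a global coarse solve
        global_updates += 1

print(f"global updates triggered: {global_updates} out of 500 fine steps")
```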
Abstract:
Among the various determinants of treatment response, the achievement of sufficient blood levels is essential for curing malaria. To help improve our current understanding of the pharmacokinetics, efficacy and toxicity of antimalarial drugs, we have developed a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method requiring 200 µl of plasma for the simultaneous determination of 14 antimalarial drugs and their metabolites, which are the components of the current first-line combination treatments for malaria (artemether, artesunate, dihydroartemisinin, amodiaquine, N-desethyl-amodiaquine, lumefantrine, desbutyl-lumefantrine, piperaquine, pyronaridine, mefloquine, chloroquine, quinine, pyrimethamine and sulfadoxine). Plasma is purified by a combination of protein precipitation, evaporation and reconstitution in methanol/ammonium formate 20 mM (pH 4.0) 1:1. Reverse-phase chromatographic separation of the antimalarial drugs is obtained using a gradient elution of 20 mM ammonium formate and acetonitrile, both containing 0.5% formic acid, followed by rinsing and re-equilibration to the initial solvent composition up to 21 min. Analyte quantification, using matrix-matched calibration samples, is performed by electrospray ionization triple-quadrupole mass spectrometry with selected reaction monitoring detection in the positive mode. The method was validated according to FDA recommendations, including assessment of extraction yield, matrix effect variability, overall process efficiency, standard addition experiments, as well as short- and long-term stability of antimalarials in plasma. The reactivity of endoperoxide-containing antimalarials in the presence of hemolysis was tested both in vitro and on samples from malaria patients. With this method, the signal intensity of artemisinin decreased by about 20% in the presence of 0.2% hemolysed red blood cells in plasma, whereas its derivatives were essentially not affected. The method is precise (inter-day CV%: 3.1-12.6%) and sensitive (lower limits of quantification 0.15-3.0 ng/ml for basic/neutral antimalarials and 0.75-5 ng/ml for artemisinin derivatives). This is the first broad-range LC-MS/MS assay covering the antimalarials currently in use. It is an improvement over previous methods in terms of convenience (a single extraction procedure for 14 major antimalarials and metabolites, significantly reducing analysis time), sensitivity, selectivity and throughput. While its main limitation is the investment cost of the equipment, plasma samples can be collected in the field and kept at 4°C for up to 48 h before storage at -80°C. It is suited to detecting the presence of drug in subjects for screening purposes and to quantifying drug exposure after treatment. It may contribute to filling the current knowledge gaps in the pharmacokinetic/pharmacodynamic relationships of antimalarials and to better defining the therapeutic dose ranges in different patient populations.
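A minimal sketch of quantification against a matrix-matched calibration curve, as mentioned above; the calibrator concentrations, response ratios and unknown response are invented, not data from this assay:

```python
import numpy as np

# Illustrative quantification of an unknown sample against a calibration curve
# built from spiked blank plasma (matrix-matched calibrators). Numbers are made up.

cal_conc = np.array([1, 5, 10, 50, 100, 500])                    # ng/ml
cal_response = np.array([0.021, 0.10, 0.21, 1.02, 2.05, 10.3])   # analyte/IS area ratio

slope, intercept = np.polyfit(cal_conc, cal_response, 1)  # simple unweighted fit
unknown_response = 0.64
concentration = (unknown_response - intercept) / slope
print(f"estimated concentration ~ {concentration:.1f} ng/ml")
```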
Abstract:
Report for the scientific sojourn at UC Berkeley, USA, from March until July 2008. This document starts by surveying the literature on economic federalism and relating it to network industries. The insights, together with some new developments (which focus on the role of interjurisdictional externalities, multiple objectives and investment incentives), are used to analyze regulatory arrangements in telecommunications and energy in the EU and the US. In the long history of vertically integrated monopolies in telecommunications and energy, there was a historical trend to move regulation up in the vertical structure of government, at least from the local level to the state or nation-state level. This move alleviated the pressure on regulators to renege on the commitment not to expropriate sunk investments, although it did not eliminate the practice of taxation by regulation that resulted from the action of multiple interest groups. Although central or federal policy making is more focused and specialized and makes it more difficult for interest groups to organize, it is not clear that central powers will avoid being associated with underinvestment under all conditions. When technology makes the introduction of competition in some segments possible, the possibilities for organizing the institutional architecture of regulation expand. The central level may focus on structural regulation, and the location of behavioral regulation of the remaining monopolists may be resolved in a cooperative way or concentrated at the level where the relevant spillovers are internalized.
Abstract:
We present a novel hybrid (or multiphysics) algorithm, which couples pore-scale and Darcy descriptions of two-phase flow in porous media. The flow at the pore-scale is described by the Navier-Stokes equations, and the Volume of Fluid (VOF) method is used to model the evolution of the fluid-fluid interface. An extension of the Multiscale Finite Volume (MsFV) method is employed to construct the Darcy-scale problem. First, a set of local interpolators for pressure and velocity is constructed by solving the Navier-Stokes equations; then, a coarse mass-conservation problem is constructed by averaging the pore-scale velocity over the cells of a coarse grid, which act as control volumes; finally, a conservative pore-scale velocity field is reconstructed and used to advect the fluid-fluid interface. The method relies on the localization assumptions used to compute the interpolators (which are quite straightforward extensions of the standard MsFV) and on the postulate that the coarse-scale fluxes are proportional to the coarse-pressure differences. By numerical simulations of two-phase problems, we demonstrate that these assumptions provide hybrid solutions that are in good agreement with reference pore-scale solutions and are able to model the transition from stable to unstable flow regimes. Our hybrid method can naturally take advantage of several adaptive strategies and allows considering pore-scale fluxes only in some regions, while Darcy fluxes are used in the rest of the domain. Moreover, since the method relies on the assumption that the relationship between coarse-scale fluxes and pressure differences is local, it can be used as a numerical tool to investigate the limits of validity of Darcy's law and to understand the link between pore-scale quantities and their corresponding Darcy-scale variables.
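A toy sketch of the coarse-scale averaging step described above, in which a synthetic pore-scale velocity field is averaged over coarse-grid cells acting as control volumes; the grid sizes and the field are illustrative assumptions, not the authors' MsFV construction:

```python
import numpy as np

# Average a synthetic fine-scale ("pore-scale") velocity field over coarse control
# volumes, i.e. the coarsening step used to build the coarse-scale problem.

n_fine, ratio = 64, 16                  # fine cells per direction, upscaling factor
n_coarse = n_fine // ratio
rng = np.random.default_rng(0)
u_pore = 1.0 + 0.2 * rng.standard_normal((n_fine, n_fine))   # synthetic pore-scale u

# Average the pore-scale velocity over each coarse control volume.
u_coarse = u_pore.reshape(n_coarse, ratio, n_coarse, ratio).mean(axis=(1, 3))
print(np.round(u_coarse, 3))
```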
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges.
In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proved highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
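A brief sketch of a gradual-deformation proposal of the kind described above, for a field with a standard-normal prior; this illustrates the general technique, not the thesis's implementation:

```python
import numpy as np

# Gradual-deformation proposal: combine the current realization with an independent
# draw from the prior so the proposal remains a realization of the same Gaussian
# prior, with the angle theta controlling the perturbation strength.

rng = np.random.default_rng(1)

def gradual_deformation_proposal(m_current, theta, rng):
    """Propose m_new = m_current*cos(theta) + m_independent*sin(theta)."""
    m_independent = rng.standard_normal(m_current.shape)   # new draw from the prior
    return np.cos(theta) * m_current + np.sin(theta) * m_independent

m = rng.standard_normal(1000)        # current model (standard-normal prior assumed)
m_new = gradual_deformation_proposal(m, theta=0.1, rng=rng)
print(f"correlation with current model: {np.corrcoef(m, m_new)[0, 1]:.3f}")
```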
Abstract:
Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in Pamphlet No. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for the relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count-rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation in a way that takes into account the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
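A minimal sketch of an iterative dead-time correction akin to the one described above, assuming a paralyzable dead-time model; the model choice, the Newton iteration and the numerical values are illustrative assumptions, not the authors' time-dependent algorithm:

```python
import numpy as np

# Invert the paralyzable dead-time model  m = n * exp(-n * tau)  for the true count
# rate n given the observed rate m, using Newton's method (illustrative sketch only).

def true_rate(observed, tau, tol=1e-9, max_iter=100):
    """Solve m = n*exp(-n*tau) for n on the lower branch (n*tau < 1)."""
    n = observed                      # starting guess: no saturation
    for _ in range(max_iter):
        f = n * np.exp(-n * tau) - observed
        df = (1.0 - n * tau) * np.exp(-n * tau)
        step = f / df
        n -= step
        if abs(step) < tol:
            break
    return n

tau = 2e-6                            # dead time per event [s], illustrative value
observed_rate = 1.2e5                 # counts per second seen by the camera
print(f"dead-time-corrected rate ~ {true_rate(observed_rate, tau):.0f} cps")
```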