971 results for semi-implicit projection method
Abstract:
Quantitatively assessing the importance, or criticality, of each link in a network is of practical value to operators, as it can help them increase the network's resilience, provide more efficient services, or improve other aspects of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimations because it does not take into account aspects relevant to networking, such as heterogeneity in link capacity or differences between node pairs in their contribution to the total traffic. This paper proposes a new algorithm for discovering link centrality in transport networks. It requires only static or semi-static network and topology attributes, yet produces estimations of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application, in which the simple shortest-path routing algorithm is improved so that it outperforms other, more advanced algorithms in terms of blocking ratio.
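The capacity- and traffic-aware refinement of betweenness described above can be illustrated with a toy routing sketch. This is not the paper's algorithm: the graph, the `demand` matrix, and the `capacity` values below are hypothetical. Each node pair's demand is routed on a shortest path, and the load carried by each link is normalized by that link's capacity, so heterogeneous capacities change the importance ranking relative to plain hop-count betweenness.

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path (hop count) on an unweighted graph.
    Assumes dst is reachable from src."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def traffic_aware_link_load(adj, demand, capacity):
    """Route each pair's demand on a shortest path; report per-link
    utilization = carried traffic / link capacity (links keyed by a
    sorted endpoint tuple)."""
    load = {e: 0.0 for e in capacity}
    for (s, t), vol in demand.items():
        p = shortest_path(adj, s, t)
        for u, v in zip(p, p[1:]):
            load[tuple(sorted((u, v)))] += vol
    return {e: load[e] / capacity[e] for e in capacity}
```

With a low-capacity link, a lightly used edge can still be the most critical one, which is the effect the abstract says plain betweenness misses.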
Abstract:
Objectives. The goal of this study was to evaluate a T2-mapping sequence by: (i) measuring the intra- and inter-observer variability in healthy volunteers across two separate scanning sessions with a T2 reference phantom; (ii) measuring the mean T2 relaxation times by T2-mapping in infarcted myocardium in patients with subacute MI and comparing them with the patients' gold-standard X-ray coronary angiography and with results from healthy volunteers. Background. Myocardial edema is a consequence of tissue inflammation, as seen in myocardial infarction (MI). It can be visualized by cardiovascular magnetic resonance (CMR) imaging using the T2 relaxation time. T2-mapping is a quantitative methodology that has the potential to address the limitations of conventional T2-weighted (T2W) imaging. Methods. The T2-mapping protocol used for all MRI scans consisted of a radial gradient echo acquisition with a lung-liver navigator for free-breathing acquisition and affine image registration. Mid-basal short-axis slices were acquired. T2-map analysis: two observers semi-automatically segmented the left ventricle into 6 segments according to the AHA standards. 8 healthy volunteers (age: 27 ± 4 years; 62.5% male) were scanned in 2 separate sessions. 17 patients (age: 61.9 ± 13.9 years; 82.4% male) with subacute STEMI (70.6%) or NSTEMI underwent a T2-mapping scanning session. Results. In healthy volunteers, the mean inter- and intra-observer variability over the entire short-axis slice (segments 1 to 6) was 0.1 ms (95% confidence interval (CI): -0.4 to 0.5, p = 0.62) and 0.2 ms (95% CI: -2.8 to 3.2, p = 0.94), respectively. T2 relaxation time measurements with and without the phantom correction yielded an average difference of 3.0 ± 1.1% and 3.1 ± 2.1% (p = 0.828), respectively. In patients, the inter-observer variability over the entire short-axis slice (S1-S6) was 0.3 ms (95% CI: -1.8 to 2.4, p = 0.85).
Edema location as determined by T2-mapping and the coronary artery occlusion as determined on X-ray coronary angiography correlated in 78.6% of cases, but in only 60% of apical infarcts. All but one of the maximal T2 values in infarct patients were greater than the upper limit of the 95% confidence interval for normal myocardium. Conclusions. The T2-mapping methodology is accurate in detecting infarcted, i.e. edematous, tissue in patients with subacute infarcts. This study further demonstrated that this T2-mapping technique is reproducible and robust enough to be used on a segmental basis for edema detection without needing a phantom to yield a T2 correction factor. This new quantitative T2-mapping technique is promising and is likely to allow serial follow-up studies in patients to improve our knowledge of infarct pathophysiology and infarct healing, and to support the assessment of novel treatment strategies for acute infarctions.
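The T2 value behind such maps is typically obtained by fitting a mono-exponential decay S(TE) = S0 * exp(-TE / T2) per pixel. The sequence details above (radial acquisition, navigator, registration) are not modeled here; the echo times below are hypothetical. A minimal log-linear least-squares fit:

```python
import math

def fit_t2(echo_times_ms, signals):
    """Estimate T2 (ms) from the mono-exponential decay model
    S(TE) = S0 * exp(-TE / T2) via a log-linear least-squares fit:
    log S is linear in TE with slope -1/T2."""
    xs = echo_times_ms
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope
```

In practice a nonlinear fit is often preferred at low SNR, since the log transform distorts the noise; the log-linear version keeps the sketch short.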
Abstract:
Abstract OBJECTIVE To assess the nursing workload (NW) in a semi-intensive care unit specialized in the care of children with craniofacial anomalies and associated syndromes, and to compare the amount of workforce required according to the Nursing Activities Score (NAS) and to COFEN Resolution 293/04. METHOD Cross-sectional study with a sample of 72 patients. Nursing workload was assessed through retrospective application of the NAS. RESULTS The mean NAS was 49.5%. Nursing workload on the last day of hospitalization was lower in patients discharged home (p < 0.001) and higher on the first day of hospitalization than on the last (p < 0.001). The number of professionals required according to the NAS was higher than that required by COFEN Resolution 293/04 (17 and 14, respectively). CONCLUSION The nursing workload corresponded to approximately 50% of nursing professionals' working time and was influenced by the day and outcome of hospitalization. The number of professionals required was greater than that determined by the existing legislation.
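The NAS is constructed so that each point corresponds to roughly 1% of one nursing professional's time over 24 hours, so a staffing figure follows from the summed scores. A minimal sketch of that arithmetic (shift rotation and the specific COFEN ratio rules are omitted, and the scores below are illustrative, not the study's data):

```python
import math

def nurses_required(nas_scores):
    """Staff needed to cover a set of patients, taking each NAS point
    as ~1% of one professional's time over 24 h: total = sum / 100,
    rounded up to a whole person."""
    return math.ceil(sum(nas_scores) / 100.0)
```

Usage: ten patients each scoring the study's mean of 49.5 would occupy 4.95 full-time equivalents, i.e. five professionals for that period.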
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and perturbation methods. The basic idea is to first solve for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks. Afterwards one computes a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. If those shocks are operative, it is shown that a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed to reduce the dimension of the state space so that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
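The first step of such a method, solving the model without aggregate shocks, involves finding the stationary cross-sectional distribution implied by the idiosyncratic shock process. A minimal sketch for a discretized process (the two-state transition matrix below is hypothetical, and the full method's policy-function projection and perturbation steps are not shown):

```python
def stationary_distribution(P, tol=1e-12, max_iter=100000):
    """Stationary distribution of a row-stochastic transition matrix P,
    found by repeatedly applying P to an initial distribution
    (power iteration) until the update is below tol."""
    n = len(P)
    dist = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, dist)) < tol:
            return new
        dist = new
    return dist
```

For a chain with P = [[0.9, 0.1], [0.5, 0.5]], solving pi = pi P by hand gives pi = (5/6, 1/6), which the iteration reproduces.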
Abstract:
A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
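One common recipe for such cluster kernels blends a standard kernel with a cluster co-membership kernel computed from the unlabeled data; whether this is the exact construction of the paper is not stated in the abstract. In the sketch below the RBF base kernel, the mixing weight `lam`, and the cluster labels (which could come from any unsupervised method) are illustrative assumptions:

```python
import math

def rbf_kernel(X, gamma=1.0):
    """Dense RBF (Gaussian) kernel matrix over a list of feature vectors."""
    n = len(X)
    return [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
             for j in range(n)] for i in range(n)]

def cluster_kernel(labels):
    """Co-membership kernel: 1 if two samples share a cluster, else 0."""
    n = len(labels)
    return [[1.0 if labels[i] == labels[j] else 0.0 for j in range(n)]
            for i in range(n)]

def combined_kernel(X, labels, lam=0.5, gamma=1.0):
    """Convex combination of the base kernel and the cluster kernel,
    so unlabeled cluster structure regularizes the similarity locally."""
    Kr, Kc = rbf_kernel(X, gamma), cluster_kernel(labels)
    n = len(X)
    return [[(1 - lam) * Kr[i][j] + lam * Kc[i][j] for j in range(n)]
            for i in range(n)]
```

The combined matrix remains a valid (positive semidefinite) kernel because it is a nonnegative sum of two kernels, so it can be plugged directly into a kernel SVM.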
Abstract:
The physical disector is a method of choice for estimating unbiased neuron numbers; nevertheless, calibration is needed to evaluate each counting method. The validity of this method can be assessed by comparing the estimated cell number with the true number determined by direct counting in serial sections. We reconstructed one fifth of rat lumbar dorsal root ganglia taken from two experimental conditions. From each ganglion, images of 200 adjacent semi-thin sections were used to reconstruct a volumetric dataset (a stack of voxels). On these stacks the number of sensory neurons was estimated by the physical disector method and counted directly. In addition, using the nucleus coordinates from the direct counting, we simulated, with a Matlab program, disector pairs separated by increasing distances in a ganglion model. The comparison between the results of these approaches clearly demonstrates that the physical disector method provides a valid and reliable estimate of the number of sensory neurons only when the distance between consecutive disector pairs is 60 µm or smaller. Under these conditions the error between the results of the physical disector and direct counting does not exceed 6%. In contrast, when the distance between two pairs is larger than 60 µm (70-200 µm), the error increases rapidly to 27%. We conclude that the physical disector method provides a reliable estimate of the number of rat sensory neurons only when the separating distance between consecutive disector pairs is no larger than 60 µm.
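The systematic-sampling logic of the disector can be sketched in one dimension. This is a simplified "tops" count, not the authors' Matlab simulation: a nucleus is counted when its top falls inside a sampled slab, and the count is scaled by the inverse sampling fraction. Depth, section thickness, and spacing values are illustrative.

```python
def disector_estimate(z_tops, depth, section_t, spacing):
    """Disector-style estimate of object number: count nucleus 'tops'
    falling in slabs of thickness section_t placed every `spacing`
    through `depth`, then scale by the inverse sampling fraction
    spacing / section_t."""
    starts = range(0, int(depth), int(spacing))
    q = sum(1 for z in z_tops
            for s in starts if s <= z < s + section_t)
    return q * (spacing / section_t)
```

The abstract's finding corresponds to the spacing parameter: as `spacing` grows relative to the spatial scale of the true density, fewer slabs sample the volume and the estimate's error grows.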
Abstract:
Background: The b-value is the parameter characterizing the intensity of the diffusion weighting during image acquisition. Data acquisition is usually performed with a low b-value (b ~ 1000 s/mm2). Evidence shows that high b-values (b > 2000 s/mm2) are more sensitive to the slow diffusion compartment (SDC) and may be more sensitive in detecting white matter (WM) anomalies in schizophrenia. Methods: 12 male patients with schizophrenia (mean age 35 ± 3 years) and 16 healthy male controls matched for age were scanned with a low b-value (1000 s/mm2) and a high b-value (4000 s/mm2) protocol. The apparent diffusion coefficient (ADC) is a measure of the average diffusion distance of water molecules per unit time (mm2/s). ADC maps were generated for all individuals. 8 regions of interest (frontal and parietal regions bilaterally, centrum semi-ovale bilaterally, and anterior and posterior corpus callosum) were manually traced blind to diagnosis. Results: ADC measures acquired with high b-value imaging were more sensitive in detecting differences between schizophrenia patients and healthy controls than low b-value imaging, with a gain in significance by a factor of 20-100 despite the lower image signal-to-noise ratio (SNR). Increased ADC was identified in patients' WM (p = 0.00015), with major contributions from the left and right centrum semi-ovale and, to a lesser extent, the right parietal region. Conclusions: Our results may be related to the sensitivity of high b-value imaging to the SDC, believed to reflect mainly the intra-axonal and myelin-bound water pool. High b-value imaging might be more sensitive and specific to WM anomalies in schizophrenia than low b-value imaging.
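For a two-point acquisition, the ADC follows directly from the mono-exponential model S(b) = S0 * exp(-b * ADC). A minimal per-voxel sketch (the multi-directional averaging and multi-compartment fitting used at high b-values are omitted; the signal values in the usage note are hypothetical):

```python
import math

def adc(s_b0, s_b, b):
    """Apparent diffusion coefficient (mm^2/s) from two signal samples,
    assuming the mono-exponential model S(b) = S0 * exp(-b * ADC):
    ADC = ln(S0 / S(b)) / b."""
    return math.log(s_b0 / s_b) / b
```

For example, a voxel whose signal falls from 1000 at b = 0 to about 449 at b = 1000 s/mm2 has an ADC of about 0.8e-3 mm2/s, a typical white-matter magnitude.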
Abstract:
In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups.
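In constant-pH (semi-grand canonical) simulations of this kind, a titration move is typically accepted with a Metropolis probability combining the electrostatic energy change with a chemical term ln(10) * (pH - pKa). A minimal sketch of that acceptance rule, with the electrostatic energy evaluation and the counterion/coion electroneutrality bookkeeping left out (so `dU_el_kT` is simply passed in):

```python
import math

def deprotonation_accept(pH, pKa, dU_el_kT):
    """Metropolis acceptance probability for a deprotonation move in a
    constant-pH Monte Carlo step:
        p = min(1, exp(-dU_el/kT + ln(10) * (pH - pKa)))
    where dU_el_kT is the electrostatic energy change in units of kT."""
    return min(1.0, math.exp(-dU_el_kT + math.log(10.0) * (pH - pKa)))
```

At pH = pKa with no electrostatic penalty the move is always accepted, while one pH unit below pKa it is accepted only 10% of the time, reproducing the ideal Henderson-Hasselbalch limit.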
Abstract:
There are many known examples of multiple semi-independent associations at individual loci; such associations might arise either from true allelic heterogeneity or from imperfect tagging of an unobserved causal variant. This phenomenon is of great importance in monogenic traits but has not yet been systematically investigated and quantified in complex-trait genome-wide association studies (GWASs). Here, we describe a multi-SNP association method that estimates the effect of loci harboring multiple association signals by using GWAS summary statistics. Applying the method to a large anthropometric GWAS meta-analysis (from the Genetic Investigation of Anthropometric Traits consortium study), we show that for height, body mass index (BMI), and waist-to-hip ratio (WHR), an additional 3%, 2%, and 1% of phenotypic variance, respectively, can be explained on top of the previously reported 10% (height), 1.5% (BMI), and 1% (WHR). The method also permitted a substantial increase (by up to 50%) in the number of loci that replicate in a discovery-validation design. Specifically, we identified 74 loci at which the multi-SNP, a linear combination of SNPs, explains significantly more variance than does the best individual SNP. A detailed analysis of multi-SNPs shows that most of the additional variability explained derives from SNPs that are not in linkage disequilibrium with the lead SNP, suggesting a major contribution of allelic heterogeneity to the missing heritability.
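A standard way to estimate joint multi-SNP effects from summary statistics is to rescale the marginal effects by the inverse of the local LD correlation matrix, beta_joint ≈ R^-1 * beta_marginal; whether this matches the authors' exact estimator is not stated in the abstract. A minimal sketch using Gauss-Jordan elimination (the R and beta values in the test are made up):

```python
def joint_effects(R, beta_marginal):
    """Approximate joint (multi-SNP) effects from marginal GWAS effect
    sizes and the local LD correlation matrix R, by solving
    R * beta_joint = beta_marginal with Gauss-Jordan elimination
    and partial pivoting."""
    n = len(R)
    A = [row[:] + [b] for row, b in zip(R, beta_marginal)]  # augmented
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(n):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [A[i][n] / A[i][i] for i in range(n)]
```

When a second SNP's marginal effect is fully explained by LD with the lead SNP, its joint effect collapses to zero; a genuinely independent signal keeps a nonzero joint effect, which is the allelic-heterogeneity case the paper quantifies.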
Abstract:
Background and aim of the study: The formation of implicit memory during general anaesthesia is still debated. Perceptual learning is the ability to learn to perceive. In this study, an auditory perceptual learning paradigm using frequency discrimination was employed to investigate implicit memory. It was hypothesized that auditory stimulation would successfully induce perceptual learning; thus, initial thresholds on the postoperative frequency discrimination task should be lower for the stimulated group (group S) than for the control group (group C). Material and method: Eighty-seven ASA I-III patients undergoing visceral and orthopaedic surgery under general anaesthesia lasting more than 60 minutes were recruited. The anaesthesia procedure was standardized (BIS monitoring included). Group S received auditory stimulation (2000 pure tones applied for 45 minutes) during surgery. Twenty-four hours after the operation, both groups performed ten blocks of the frequency discrimination task. The mean thresholds for the first three blocks (T1) were compared between groups. Results: The mean age and BIS value of group S and group C were respectively 40 ± 11 vs 42 ± 11 years (p = 0.49) and 42 ± 6 vs 41 ± 8 (p = 0.87). T1 was respectively 31 ± 33 vs 28 ± 34 (p = 0.72) in groups S and C. Conclusion: In our study, no implicit memory during general anaesthesia was demonstrated. This may be explained by a modulation of the auditory evoked potentials caused by the anaesthesia, or by a duration of repetitive stimulation insufficient to induce perceptual learning.
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth, or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) of the solution, where the generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles that carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
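The semi-Lagrangian particle idea can be sketched in 1-D for pure linear convection. This is an illustration, not the thesis' adaptive method: the grid, the constant velocity, and the nearest-node averaging below are simplified stand-ins for its spatial adaptivity and fast monotone projection.

```python
def advect_and_project(xs, vals, u, dt, grid):
    """Semi-Lagrangian step for pure linear convection dx/dt = u:
    move particles exactly along characteristics, then project the
    particle values onto a fixed Eulerian grid by nearest-node
    averaging (a simple projection that creates no new extrema)."""
    moved = [x + u * dt for x in xs]
    out = [0.0] * len(grid)
    counts = [0] * len(grid)
    for x, v in zip(moved, vals):
        i = min(range(len(grid)), key=lambda k: abs(grid[k] - x))
        out[i] += v
        counts[i] += 1
    return [o / c if c else 0.0 for o, c in zip(out, counts)]
```

Because particles follow the characteristics exactly, steep fronts translate without the smearing a grid-based upwind scheme would introduce; averaging during projection keeps grid values bounded by the particle values.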
Abstract:
The aim of this work was to develop characterization methods for predicting the attrition of limestone and fuel ash in the furnace of a circulating fluidized bed boiler. Characterizing and modelling solids behaviour makes it possible to refine predictions of furnace heat transfer and ash split. The partly experimental characterization methods are based on limestone attrition in a laboratory-scale fluidized quartz tube reactor and on ash grinding in a rotary mill. The characterization methods take into account the varied operating conditions of commercial-scale circulating fluidized bed boilers. The methods were validated against balances measured from commercial-scale circulating fluidized bed boilers and modelled with a fraction-based solids model. Despite the scarcity of validation balances, the characterization methods were judged reasonable on the basis of error analyses. The development and refinement of the characterization methods will continue.
Abstract:
The results of semiempirical molecular orbital calculations performed on aziridinone and diaziridinone employing the MNDO, AM1, and PM3 molecular models are presented. The AM1 method, which best reproduces ground-state molecular properties, is used to calculate electronic parameters, and the use of these parameters for the evaluation of reactivity is discussed.