86 results for nonlinear least-square fit
Abstract:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least-squares inversions typically suffer from relatively poor mass recovery, overestimation of plume spread, and a limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
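The sampling strategy summarized above can be illustrated with a toy example. The snippet below is a minimal random-walk Metropolis sketch, assuming a made-up low-dimensional plume parameterization (centroid, spread, amplitude standing in for low-order moments), a placeholder travel-time forward model, and a Gaussian data likelihood; it is not the paper's actual implementation.

```python
# Minimal random-walk Metropolis sketch for sampling a low-dimensional plume
# parameterization from travel-time data. The forward model, prior bounds and
# noise level are illustrative placeholders, not the published implementation.
import numpy as np

rng = np.random.default_rng(0)

def forward_traveltimes(m):
    """Placeholder forward model: maps plume parameters m (e.g. centroid,
    spread, amplitude standing in for low-order moments) to predicted
    travel times along a hypothetical set of rays."""
    centers = np.linspace(0.0, 1.0, 20)            # hypothetical ray coverage
    x0, sx, amp = m
    anomaly = amp * np.exp(-0.5 * ((centers - x0) / sx) ** 2)
    return 10.0 + anomaly                          # background + anomaly

def log_likelihood(m, d_obs, sigma=0.5):
    r = d_obs - forward_traveltimes(m)
    return -0.5 * np.sum((r / sigma) ** 2)

def in_prior(m):
    x0, sx, amp = m
    return 0.0 <= x0 <= 1.0 and 0.01 <= sx <= 0.5 and 0.0 <= amp <= 5.0

# Synthetic "observed" data from a known plume plus noise
m_true = np.array([0.4, 0.1, 2.0])
d_obs = forward_traveltimes(m_true) + rng.normal(0, 0.5, 20)

m = np.array([0.5, 0.2, 1.0])                      # starting model
logL = log_likelihood(m, d_obs)
samples = []
for _ in range(20000):
    m_prop = m + rng.normal(0, [0.02, 0.01, 0.1])  # random-walk proposal
    if in_prior(m_prop):
        logL_prop = log_likelihood(m_prop, d_obs)
        if np.log(rng.uniform()) < logL_prop - logL:
            m, logL = m_prop, logL_prop
    samples.append(m.copy())

posterior = np.array(samples[5000:])               # discard burn-in
print("posterior mean:", posterior.mean(axis=0))
print("posterior std :", posterior.std(axis=0))
```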
Abstract:
1. Identifying the boundary of a species' niche from observational and environmental data is a common problem in ecology and conservation biology, and a variety of techniques have been developed or applied to model niches and predict distributions. Here, we examine the performance of some pattern-recognition methods as ecological niche models (ENMs). In particular, one-class pattern recognition is a flexible and seldom-used methodology for modelling ecological niches and distributions from presence-only data. The development of one-class methods that perform comparably to two-class methods (for presence/absence data) would remove modelling decisions about sampling pseudo-absences or background data points when absence points are unavailable. 2. We studied nine methods for one-class classification and seven methods for two-class classification (five common to both), all primarily used in pattern recognition and therefore not common in species distribution and ecological niche modelling, across a set of 106 mountain plant species for which presence-absence data were available. We assessed accuracy using standard metrics and compared trade-offs in omission and commission errors between classification groups, as well as effects of prevalence and spatial autocorrelation on accuracy. 3. One-class models fit to presence-only data were comparable to two-class models fit to presence-absence data when performance was evaluated with a measure weighting omission and commission errors equally. One-class models were superior for reducing omission errors (i.e. yielding higher sensitivity), and two-class models were superior for reducing commission errors (i.e. yielding higher specificity). For these methods, spatial autocorrelation was only influential when prevalence was low. 4. These results differ from previous efforts to evaluate alternative modelling approaches for building ENMs and are particularly noteworthy because the data are from exhaustively sampled populations, minimizing false-absence records. Accurate, transferable models of species' ecological niches and distributions are needed to advance ecological research and are crucial for effective environmental planning and conservation; the pattern-recognition approaches studied here show good potential for future modelling studies. This study also provides an introduction to promising methods for ecological modelling inherited from the pattern-recognition discipline.
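As a hedged illustration of the one-class versus two-class distinction discussed above, the sketch below fits a one-class SVM to presence records only and a random forest to presences plus absences, then compares sensitivity and specificity. The environmental predictors, species data and hyper-parameters are invented for the example and are not from the study.

```python
# One-class model on presences only vs. two-class model on presences + absences.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)

# Toy environmental predictors: presences cluster in one region of niche space
X_pres = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
X_abs = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(200, 2))
X = np.vstack([X_pres, X_abs])
y = np.hstack([np.ones(200), np.zeros(200)])        # 1 = presence, 0 = absence

# One-class model: trained on presence records only
oc = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_pres[:150])
pred_oc = (oc.predict(X) == 1).astype(int)           # +1 -> predicted presence

# Two-class model: trained on presences and absences
tc = RandomForestClassifier(n_estimators=200, random_state=0)
tc.fit(np.vstack([X_pres[:150], X_abs[:150]]),
       np.hstack([np.ones(150), np.zeros(150)]))
pred_tc = tc.predict(X)

for name, pred in [("one-class", pred_oc), ("two-class", pred_tc)]:
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)      # omission vs. commission
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```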
Abstract:
BACKGROUND: Combination highly active antiretroviral therapy (HAART) has significantly decreased HIV-1-related morbidity and mortality globally, transforming HIV into a controllable condition. HAART nevertheless has a number of limitations, including limited access in resource-constrained countries, which have driven the search for simpler, affordable HIV-1 treatment modalities. Therapeutic HIV-1 vaccines aim to provide immunological support to slow disease progression and decrease transmission. We evaluated the safety, immunogenicity and clinical effect of a novel recombinant plasmid DNA therapeutic HIV-1 vaccine, GTU®-multi-HIVB, containing 6 different genes derived from an HIV-1 subtype B isolate. METHODS: 63 untreated, healthy, HIV-1-infected adults between 18 and 40 years were enrolled in a single-blinded, placebo-controlled Phase II trial in South Africa. Subjects were HIV-1 subtype C infected, had never received antiretrovirals, and had CD4 ≥ 350 cells/mm³ and pHIV-RNA ≥ 50 copies/mL at screening. Subjects were allocated to vaccine or placebo groups in a 2:1 ratio, administered either intradermally (ID) (0.5 mg/dose) or intramuscularly (IM) (1 mg/dose) at 0, 4 and 12 weeks, and boosted at 76 and 80 weeks with 1 mg/dose (ID) and 2 mg/dose (IM), respectively. Safety was assessed by adverse event monitoring and immunogenicity by HIV-1-specific CD4+ and CD8+ T-cell responses using intracellular cytokine staining (ICS), pHIV-RNA and CD4 counts. RESULTS: The vaccine was safe and well tolerated, with no vaccine-related serious adverse events. Significant declines in log pHIV-RNA (p = 0.012) and increases in CD4+ T cell counts (p = 0.066) were observed in the vaccine group compared to placebo, were more pronounced after IM administration and in some HLA haplotypes (B*5703), and were maintained for 17 months after the final immunisation. CONCLUSIONS: The GTU®-multi-HIVB plasmid recombinant DNA therapeutic HIV-1 vaccine is safe, well tolerated and favourably affects pHIV-RNA and CD4 counts in untreated HIV-1 infected individuals after IM administration in subjects with HLA B*57, B*8101 and B*5801 haplotypes.
Abstract:
INTRODUCTION. A two-step assessment (readiness to wean (RW) followed by a spontaneous breathing trial (SBT)) of predefined criteria is recommended before planned extubation (PE) [1].
OBJECTIVES. We aimed to evaluate whether compliance with all guideline criteria was associated with better respiratory outcome within 48 h after PE.
METHODS. The data (extracted from our clinical information system) of 458 consecutive patients who underwent PE after ≥ 48 h of invasive ventilation in our medico-surgical ICU were analyzed. We evaluated compliance with guidelines [1] regarding respiratory rate, tidal volume, PaO2, FiO2, PEEP, PaCO2, pH, heart rate, systolic arterial pressure and arrhythmia during RW and SBT assessment (RW and SBT within 2 h). A patient was classified as RW+ if all RW criteria were fulfilled and RW- if at least 1 criterion was violated. The same approach was used to define SBT+ and SBT- patients. During the 48 h following PE, we assessed the occurrence of post-PE respiratory failure (PRF) (defined as the presence of at least 1 consensus criterion of respiratory failure [1]), reintubation (after NIV failure or because of immediate intubation criteria) and cumulative duration of post-PE ventilation (PPEV = post-PE invasive + non-invasive ventilation). ICU mortality was recorded. Comparisons for the various outcomes were performed by chi-square and t tests.
RESULTS. All consensus criteria were fulfilled in 77.3% of the patients during RW and in 68.1% of the patients during SBT.

[Compliance with weaning criteria and outcome]
N = 458        PRF (%)   Reintubation (%)   PPEV (min)      ICU mortality (%)
All patients   53.5      10.0               542 ± 664       6.1
RW+            50.0      9.3                490 ± 626       5.4
RW-            65.4*     12.5               718 ± 757**     8.7
SBT+           52.6      8.0                498 ± 594       6.7
SBT-           55.5      14.4***            637 ± 788****   4.8

Occurrence of PRF alone was not associated with increased ICU mortality: 4.2 versus 7.8%, p = 0.11. By contrast, ICU mortality was significantly increased in patients requiring reintubation: 21.7 versus 4.4%, p < 0.001; * p = 0.006 RW+ versus RW-; ** p = 0.003 RW+ versus RW-; *** p = 0.035 SBT+ versus SBT-; **** p = 0.030 SBT+ versus SBT-.
CONCLUSIONS. In our ICU, compliance with all criteria of the published two-step approach to respiratory weaning was not optimal, but the reintubation rate was comparable to published data. Compliance with consensus conference guidelines was associated with a lower reintubation rate and shorter PPEV but not with ICU mortality. As mortality was increased by reintubation, more sensitive and specific criteria to predict the risk of reintubation are probably needed.
REFERENCE. [1] Boles JM, et al. Eur Respir J 2007;29:1033-56.
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether performance was affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods performed more poorly or at least did not behave in the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set size and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
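For readers unfamiliar with the best-performing family of methods above, the sketch below shows the standard matrix representation with parsimony (MRP) coding step applied to two hypothetical input trees described only by their clades; a real analysis would parse Newick trees and run a parsimony search on the resulting matrix.

```python
# Minimal MRP (Baum/Ragan) coding sketch: each non-trivial clade of each input
# tree becomes one binary character. The input trees below are hypothetical.
input_trees = [
    {"taxa": {"A", "B", "C", "D"}, "clades": [{"A", "B"}, {"A", "B", "C"}]},
    {"taxa": {"B", "C", "D", "E"}, "clades": [{"C", "D"}, {"B", "C", "D"}]},
]

all_taxa = sorted(set().union(*(t["taxa"] for t in input_trees)))

matrix = {taxon: [] for taxon in all_taxa}
for tree in input_trees:
    for clade in tree["clades"]:
        for taxon in all_taxa:
            if taxon not in tree["taxa"]:
                matrix[taxon].append("?")      # taxon missing from this tree
            elif taxon in clade:
                matrix[taxon].append("1")      # inside the clade
            else:
                matrix[taxon].append("0")      # in the tree, outside the clade

for taxon in all_taxa:
    print(taxon, "".join(matrix[taxon]))       # the MRP matrix, row per taxon
```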
Abstract:
Objectives: A particularity of brain acetate metabolism is that it occurs specifically in glial cells. Labeling studies, using acetate labeled either with 13C (NMR) or 11C (PET), are governed by the same biochemical reactions and thus follow the same mathematical principles. In this study, the objective was to adapt an NMR acetate brain metabolism model to analyse [1-11C]acetate infusion in rats. Methods: Brain acetate infusion experiments were modeled using a two-compartment model approach used in NMR [1-3]. The [1-11C]acetate labeling study was done using a beta scintillator [4]. The measured radioactive signal represents the time evolution of the sum of all labeled metabolites in the brain. Using a coincidence counter in parallel, an arterial input curve was measured. The 11C at position C-1 of acetate is metabolized in the first turn of the TCA cycle to position 5 of glutamate (Figure 1A). Through the neurotransmission process, it is further transported to position 5 of glutamine and position 5 of neuronal glutamate. After the second turn of the TCA cycle, tracer from [1-11C]acetate (and also a part from glial [5-11C]glutamate) is transferred to glial [1-11C]glutamate and further to [1-11C]glutamine and neuronal glutamate through the neurotransmission cycle. Results: The standard acetate two-pool PET model describes the system by a plasma pool and a tissue pool linked by rate constants. Experimental data are not fully described with only one tissue compartment (Figure 1B). The modified NMR model was fitted successfully to tissue time-activity curves from 6 individual animals, by varying the glial mitochondrial fluxes and the neurotransmission flux Vnt. A glial composite rate constant Kgtg = Vgtg/[Ace]plasma was extracted. Considering an average acetate concentration in plasma of 1 mmol/g [5] and the negligible additional amount injected, we found an average Vgtg = 0.08 ± 0.02 (n = 6), in agreement with previous NMR measurements [1]. The tissue time-activity curve is dominated by glial glutamate and later by glutamine (Figure 1B). Labeling of neuronal pools has a low influence, at least for the 20 min of beta-probe acquisition. Owing to the high diffusivity of CO2 across the blood-brain barrier, 11CO2 is not predominant in the total tissue curve, even though the brain CO2 pool is large compared with other metabolites, because it is strongly diluted by unlabeled CO2 from neuronal metabolism and by diffusion from plasma. Conclusion: The two-compartment model presented here is also able to fit data from positron emission experiments and to extract specific glial metabolic fluxes. 11C-labeled acetate presents an alternative for faster measurements of glial oxidative metabolism compared to NMR, potentially applicable to human PET imaging. However, to quantify the relative value of the TCA cycle flux compared to the transmitochondrial flux, the chemical sensitivity of NMR is required. PET and NMR are thus complementary.
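To make the fitting step concrete, the sketch below fits a generic two-tissue-compartment model to a synthetic tissue time-activity curve given an arterial input function, using nonlinear least squares. The rate constants, input function and noise are invented, and the paper's NMR-derived model has additional pools (glial and neuronal glutamate, glutamine) not represented here.

```python
# Hedged sketch: nonlinear least-squares fit of a two-compartment model to a
# synthetic tissue time-activity curve. All numbers are illustrative.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 20, 200)                       # minutes
Ca = 5.0 * t * np.exp(-t / 2.0)                   # hypothetical arterial input

def tissue_curve(params, t, Ca):
    """Euler integration of dC1/dt = K1*Ca - (k2+k3)*C1, dC2/dt = k3*C1."""
    K1, k2, k3 = params
    dt = t[1] - t[0]
    C1 = np.zeros_like(t)
    C2 = np.zeros_like(t)
    for i in range(1, len(t)):
        C1[i] = C1[i-1] + dt * (K1 * Ca[i-1] - (k2 + k3) * C1[i-1])
        C2[i] = C2[i-1] + dt * (k3 * C1[i-1])
    return C1 + C2                                 # total measured signal

# Synthetic "measured" data from known rate constants plus noise
true = (0.10, 0.15, 0.05)
rng = np.random.default_rng(2)
data = tissue_curve(true, t, Ca) + rng.normal(0, 0.02, t.size)

fit = least_squares(lambda p: tissue_curve(p, t, Ca) - data,
                    x0=[0.05, 0.1, 0.1], bounds=(0, 1))
print("estimated K1, k2, k3:", fit.x)
```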
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow us to obtain remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
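The core petrophysical step, relating hydraulic to electrical conductivity through a non-parametric kernel density, can be sketched as below, assuming made-up collocated borehole data and a placeholder nonlinear relation; the full algorithm additionally performs Bayesian sequential simulation with spatial conditioning, which this sketch omits.

```python
# Sketch: learn a joint kernel density of collocated log K and log sigma from
# synthetic "borehole" data, then draw log K conditional on an ERT-derived
# log sigma. Data and relation are placeholders, not the study's values.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical collocated borehole samples with a noisy nonlinear relation
log_sigma_bh = rng.uniform(-3.0, -1.0, 300)
log_k_bh = -4.0 + 1.5 * np.tanh(2.0 * (log_sigma_bh + 2.0)) + rng.normal(0, 0.2, 300)

kde = gaussian_kde(np.vstack([log_sigma_bh, log_k_bh]))   # joint density

def sample_logk_given_logsigma(log_sigma, n=1, grid=np.linspace(-7, -1, 400)):
    """Sample log K from the KDE-based conditional p(log K | log sigma)."""
    pts = np.vstack([np.full(grid.size, log_sigma), grid])
    cond = kde(pts)
    cond /= cond.sum()
    return rng.choice(grid, size=n, p=cond)

# "Regional-scale" cells where only the electrical conductivity is known
log_sigma_ert = rng.uniform(-3.0, -1.0, 5)
for s in log_sigma_ert:
    draws = sample_logk_given_logsigma(s, n=200)
    print(f"log sigma={s:.2f} -> log K mean={draws.mean():.2f}, std={draws.std():.2f}")
```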
Abstract:
The reliable and objective assessment of chronic disease state has been, and still is, a very significant challenge in clinical medicine. Physical activity during daily life is an essential feature of human behavior related to health status, functional capacity, and quality of life. A common way to assess physical activity is to measure the quantity of body movement. Since human activity is controlled by various factors both extrinsic and intrinsic to the body, quantitative parameters provide only a partial assessment and do not allow for a clear distinction between normal and abnormal activity. In this paper, we propose a methodology for the analysis of human activity patterns based on the definition of different physical activity time series together with appropriate analysis methods. The temporal pattern of postures, movements, and transitions between postures was quantified using fractal analysis and symbolic dynamics statistics. The derived nonlinear metrics were able to discriminate patterns of daily activity generated from healthy and chronic pain states.
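As a hedged illustration of the symbolic-dynamics part of such an analysis, the sketch below maps a simulated posture sequence to symbols, counts short symbol "words", and uses the normalized Shannon entropy of the word distribution as a pattern metric; the posture data and word length are invented, and the paper's actual metrics may differ.

```python
# Symbolic-dynamics sketch on a simulated posture sequence.
from collections import Counter
import numpy as np

rng = np.random.default_rng(4)

# Simulated posture sequence: 0 = lying, 1 = sitting, 2 = standing, 3 = walking
postures = rng.choice([0, 1, 2, 3], size=2000, p=[0.3, 0.4, 0.2, 0.1])

def word_entropy(sequence, word_len=3):
    """Normalized Shannon entropy of overlapping symbol words."""
    words = [tuple(sequence[i:i + word_len])
             for i in range(len(sequence) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    h = -np.sum(p * np.log2(p))
    n_symbols = len(set(sequence))
    return h / (word_len * np.log2(n_symbols))      # 1 = maximally irregular

print("normalized word entropy:", round(word_entropy(postures.tolist()), 3))
```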
Abstract:
Relaxation rates provide important information about tissue microstructure. Multi-parameter mapping (MPM) estimates multiple relaxation parameters from multi-echo FLASH acquisitions with different basic contrasts, i.e., proton density (PD), T1 or magnetization transfer (MT) weighting. Motion can particularly affect maps of the apparent transverse relaxation rate R2*, which are derived from the signal of PD-weighted images acquired at different echo times. To address these motion artifacts, we introduce ESTATICS, which robustly estimates R2* from images even when they are acquired with different basic contrasts. ESTATICS extends the fitted signal model to account for the inherent contrast differences between the PDw, T1w and MTw images. The fit was implemented as a conventional ordinary least squares optimization and as a robust fit with a small or large confidence interval. These three implementations of ESTATICS were tested on data affected by severe motion artifacts and on data with no prominent motion artifacts, as determined by visual assessment or fast optical motion tracking. ESTATICS improved the quality of the R2* maps and reduced the coefficient of variation for both types of data, with average reductions of 30% when severe motion artifacts were present. ESTATICS can be applied to any protocol comprising multiple 2D/3D multi-echo FLASH acquisitions, as used in general research and clinical settings.
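The joint-fit idea can be sketched for a single voxel as follows: the log signals of all contrasts are fit with one shared R2* decay rate and one intercept (ln S0) per contrast using ordinary least squares. The echo times, signal levels and noise below are invented, and the published method additionally includes the robust variants that down-weight motion-corrupted echoes.

```python
# Single-voxel sketch of a joint log-linear R2* fit across contrasts (OLS).
import numpy as np

rng = np.random.default_rng(5)
TE = np.array([2.3, 4.6, 6.9, 9.2, 11.5, 13.8]) * 1e-3   # echo times (s)

true_r2s = 40.0                                           # shared R2* (1/s)
true_s0 = {"PDw": 1000.0, "T1w": 600.0, "MTw": 700.0}     # per-contrast S0

# Simulated noisy multi-echo signals for each contrast
signals = {c: s0 * np.exp(-true_r2s * TE) * (1 + rng.normal(0, 0.01, TE.size))
           for c, s0 in true_s0.items()}

contrasts = list(signals)
rows, y = [], []
for ci, c in enumerate(contrasts):
    for te, s in zip(TE, signals[c]):
        row = np.zeros(len(contrasts) + 1)
        row[ci] = 1.0            # intercept ln S0 for this contrast
        row[-1] = -te            # shared decay column
        rows.append(row)
        y.append(np.log(s))

beta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
print("estimated R2* (1/s):", round(beta[-1], 1))
print("estimated ln S0 per contrast:", dict(zip(contrasts, np.round(beta[:-1], 2))))
```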
Abstract:
Object The goal of this study was to establish whether clear patterns of initial pain freedom could be identified when treating patients with classic trigeminal neuralgia (TN) by using Gamma Knife surgery (GKS). The authors compared hypesthesia and pain recurrence rates to see if statistically significant differences could be found. Methods Between July 1992 and November 2010, 737 patients presenting with TN underwent GKS and prospective evaluation at Timone University Hospital in Marseille, France. In this study the authors analyzed the cases of 497 of these patients, who had follow-up longer than 1 year, did not have megadolichobasilar artery- or multiple sclerosis-related TN, and underwent GKS only once; in other words, the focus was on cases of classic TN with a single radiosurgical treatment. Radiosurgery was performed with a Leksell Gamma Knife (model B, C, or Perfexion) using both MR and CT imaging for targeting. A single 4-mm isocenter was positioned in the cisternal portion of the trigeminal nerve at a median distance of 7.8 mm (range 4.5-14 mm) anterior to the emergence of the nerve. A median maximum dose of 85 Gy (range 70-90 Gy) was delivered. Using empirical methods, assisted by a chart with clear cut-off periods in the distribution of pain freedom, the authors divided patients who experienced freedom from pain into 3 separate groups: patients who became pain free within the first 48 hours post-GKS; those who became pain free between 48 hours and 30 days post-GKS; and those who became pain free more than 30 days after GKS. Results The median age of the 497 patients was 68.3 years (range 28.1-93.2 years). The median follow-up period was 43.75 months (range 12-174.41 months). Four hundred fifty-four patients (91.34%) were initially pain free within a median time of 10 days (range 1-459 days) after GKS. One hundred sixty-nine patients (37.2%) became pain free within the first 48 hours (Group PF(≤48 hours)), 194 patients (42.8%) between posttreatment Day 3 and Day 30 (Group PF(>48 hours, ≤30 days)), and 91 patients (20%) after 30 days post-GKS (Group PF(>30 days)). Differences in postoperative hypesthesia were found: in Group PF(≤48 hours) 18 patients (13.7%) developed postoperative hypesthesia, compared with 30 patients (19%) in Group PF(>48 hours, ≤30 days) and 22 patients (30.6%) in Group PF(>30 days) (p = 0.014). One hundred fifty-seven patients (34.4%) who initially became free from pain experienced a recurrence of pain, with a median delay of 24 months (range 0.62-150.06 months). There were no statistically significant differences between the patient groups with respect to pain recurrence: 66 patients (39%) in Group PF(≤48 hours) experienced pain recurrence, compared with 71 patients (36.6%) in Group PF(>48 hours, ≤30 days) and 27 patients (29.7%) in Group PF(>30 days) (p = 0.515). Conclusions A substantial number of patients (169 cases, 37.2%) became pain free within the first 48 hours. The rate of hypesthesia was higher in patients who became pain free more than 30 days after GKS, with a statistically significant difference between patient groups (p = 0.014).
Abstract:
Rhythmic activity plays a central role in neural computations and brain functions ranging from homeostasis to attention, as well as in neurological and neuropsychiatric disorders. Despite this pervasiveness, little is known about the mechanisms whereby the frequency and power of oscillatory activity are modulated, and how they reflect the inputs received by neurons. Numerous studies have reported input-dependent fluctuations in peak frequency and power (as well as couplings across these features). However, it remains unresolved what mediates these spectral shifts among neural populations. Extending previous findings regarding stochastic nonlinear systems and experimental observations, we provide analytical insights regarding oscillatory responses of neural populations to stimulation from either endogenous or exogenous origins. Using a deceptively simple yet sparse and randomly connected network of neurons, we show how spiking inputs can reliably modulate the peak frequency and power expressed by synchronous neural populations without any changes in circuitry. Our results reveal that a generic, nonlinear and input-induced mechanism can robustly mediate these spectral fluctuations, and thus provide a framework in which inputs to the neurons bidirectionally regulate both the frequency and power expressed by synchronous populations. In theoretical and computational analyses, the ensuing spectral fluctuations were found to reflect the underlying dynamics of the input stimuli driving the neurons. Our results provide insights regarding a generic mechanism supporting spectral transitions observed across cortical networks and spanning multiple frequency bands.
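As a toy illustration only (not the authors' spiking network), the sketch below simulates a noise-driven excitatory-inhibitory rate model at two external input levels and estimates the peak frequency and power of the population activity with Welch's method; all parameters are arbitrary and chosen purely to demonstrate the kind of spectral read-out involved.

```python
# Toy noise-driven E-I rate model plus spectral peak estimation.
import numpy as np
from scipy.signal import welch

def simulate(drive, T=20.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    E, I = 0.1, 0.1
    tau_e, tau_i = 0.010, 0.020                    # time constants (s)
    f = lambda x: 1.0 / (1.0 + np.exp(-x))         # population gain function
    trace = np.empty(n)
    for k in range(n):
        noise = rng.normal(0, 1.5)                 # fluctuating input
        dE = (-E + f(12 * E - 10 * I + drive + noise)) / tau_e
        dI = (-I + f(10 * E - 2 * I)) / tau_i
        E, I = E + dt * dE, I + dt * dI
        trace[k] = E
    return trace[int(2.0 / dt):]                   # drop the initial transient

for drive in (0.5, 2.0):                           # weak vs. strong input
    x = simulate(drive)
    freqs, psd = welch(x - x.mean(), fs=1000.0, nperseg=2048)
    peak = freqs[np.argmax(psd)]
    print(f"drive={drive}: spectral peak {peak:.1f} Hz, power {psd.max():.2e}")
```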
Abstract:
BACKGROUND: We have developed a nonviral gene therapy method based on the electrotransfer of plasmid DNA into the ciliary muscle. These easily accessible smooth muscle cells could be turned into a biofactory for any therapeutic protein to be secreted in a sustained manner into the ocular media. METHODS: Electrical conditions, electrode design, plasmid formulation, and the method and number of injections were optimized in vivo in the rat by localizing β-galactosidase expression and quantifying secretion of reporter (luciferase) and therapeutic (anti-tumor necrosis factor) proteins in the ocular media. Anatomical measurements were performed on human magnetic resonance images to design a human eye-sized prototype that was tested in the rabbit. RESULTS: In the rat, transscleral injection of 30 µg of plasmid diluted in half saline (77 mM NaCl), followed by application of eight square-wave electrical pulses (15 V, 10 ms, 5.3 Hz) using two platinum/iridium electrodes, an internal wire and an external sheet, delivered plasmid efficiently to the ciliary muscle fibers. Gene transfer resulted in a long-lasting (at least 5 months) and plasmid dose- and injection number-dependent secretion of proteins of different molecular weights, mainly into the vitreous, without any systemic exposure. Because ciliary muscle anatomical measurements remained constant across ages in adult humans, an integrated device comprising needle electrodes was designed and manufactured. Its usefulness was validated in the rabbit. CONCLUSIONS: Plasmid electrotransfer to the ciliary muscle with a suitable medical device represents a promising local and sustained protein delivery system for treating posterior segment diseases, avoiding repeated intraocular injections.
Abstract:
Structure of the thesis: In the first article, I focus on the context in which the Homo Economicus was constructed, i.e., the conception of economic actors as fully rational, informed, egocentric, and profit-maximizing. I argue that the Homo Economicus theory was developed in a specific societal context with specific (partly tacit) values and norms. These norms have implicitly influenced the behavior of economic actors and have framed the interpretation of the Homo Economicus. Different factors, however, have weakened this implicit influence of the broader societal values and norms on economic actors. The result is an unbridled interpretation and application of the values and norms of the Homo Economicus in the business environment, and perhaps also in the broader society. In the second article, I show that the morality of many economic actors relies on isomorphism, i.e., the attempt to fit into the group by adopting the moral norms surrounding them. In consequence, if the norms prevailing in a specific group or context (such as a specific region or a specific industry) change, it can be expected that actors with an 'isomorphism morality' will also adapt their ethical thinking and their behavior, for the 'better' or for the 'worse'. The article further describes the process through which corporations could emancipate themselves from the ethical norms prevailing in the broader society, and therefore develop an institution with specific norms and values. These norms mainly rely on mainstream business theories praising the economic actor's self-interest and neglecting moral reasoning. Moreover, because of isomorphism morality, many economic actors have changed their perception of ethics and have abandoned the values prevailing in the broader society in order to adopt those of economic theory. Finally, isomorphism morality also implies that these economic actors will change their morality again if the institutional context changes. The third article highlights the role and responsibility of business scholars in promoting a systematic reflection and self-critique of the business system, and develops alternative models to fill the moral void of the business institution and address its inherent legitimacy crisis. Indeed, the current business institution relies on assumptions such as scientific neutrality and specialization, which seem at least partly challenged by two factors. First, self-fulfilling prophecy provides scholars with an important (even if sometimes undesired) normative influence over practical life. Second, the increasing complexity of today's (socio-political) world and the interactions between the different elements constituting our society call into question the strong specialization of science. For instance, economic theories are not unrelated to psychology or sociology, and economic actors influence socio-political structures and processes, e.g., through lobbying (Dobbs, 2006; Rondinelli, 2002), or through marketing, which changes not only the way we consume but, more generally, tries to instill a specific lifestyle (Cova, 2004; M. K. Hogg & Michell, 1996; McCracken, 1988; Muniz & O'Guinn, 2001). In consequence, business scholars are key actors in shaping both tomorrow's economic world and its broader context. A greater awareness of this influence might be a first step toward an increased feeling of civic responsibility and accountability for the models and theories developed or taught in business schools.