971 results for non-linear optical phenomena
Abstract:
This paper deals with the problem of spatial data mapping. A new method based on wavelet interpolation and geostatistical prediction (kriging) is proposed. The method - wavelet analysis residual kriging (WARK) - is developed to address the problems arising for highly variable data in the presence of spatial trends. In such cases stationary prediction models have very limited application. Wavelet analysis is used to model large-scale structures, and kriging of the remaining residuals focuses on small-scale peculiarities. WARK is thus able to model spatial patterns that feature multiscale structure. In the present work WARK is applied to rainfall data, and the validation results are compared with those obtained from neural network residual kriging (NNRK). NNRK is also a residual-based method, which uses an artificial neural network to model large-scale non-linear trends. The comparison of the results demonstrates the high-quality performance of WARK in predicting hot spots and in reproducing the global statistical characteristics of the distribution and the spatial correlation structure.
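A minimal sketch of the two-stage WARK idea follows, assuming the rainfall field has been rasterised to a regular grid and using a Gaussian process with an RBF kernel as a stand-in for ordinary kriging of the residuals; the wavelet basis (db4), decomposition level, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
import pywt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def wark_predict(grid, obs_xy, obs_z, query_xy, wavelet="db4", level=3):
    # Stage 1: large-scale trend = wavelet approximation of the gridded field
    # (all detail coefficients zeroed, only the coarse approximation kept).
    coeffs = pywt.wavedec2(grid, wavelet, level=level)
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in det) for det in coeffs[1:]]
    trend = pywt.waverec2(coeffs, wavelet)[: grid.shape[0], : grid.shape[1]]

    def trend_at(xy):
        # Nearest-cell lookup of the trend surface at (row, col) coordinates.
        ij = np.clip(np.round(xy).astype(int), 0, np.array(grid.shape) - 1)
        return trend[ij[:, 0], ij[:, 1]]

    # Stage 2: "krige" the residuals at the observation sites; a GP with an
    # RBF kernel plays the role of the kriging model here.
    residuals = obs_z - trend_at(obs_xy)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), alpha=1e-3)
    gp.fit(obs_xy, residuals)

    # Final prediction = wavelet trend + kriged residual.
    return trend_at(query_xy) + gp.predict(query_xy)
```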
Abstract:
The paper proposes an approach aimed at detecting the optimal combinations of model parameters that achieve the most representative description of uncertainty in the model performance. A classification problem is posed to find the regions of good-fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide whether a forward model simulation should be computed for a particular generated model. SVM is specifically designed to tackle classification problems in high-dimensional spaces in a non-parametric and non-linear way. The SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost-function classification and therefore provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features a highly variable response due to the combination of parameter values.
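The screening idea can be sketched as follows, with a made-up cost function standing in for the expensive forward model; the 0.5 ± 0.1 probability band used to flag "ambiguous" candidates is an illustrative choice.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))            # sampled model parameter vectors
cost = ((X - 0.5) ** 2).sum(axis=1)       # stand-in for the real cost function
y = (cost < np.median(cost)).astype(int)  # 1 = "good fitting" region

clf = SVC(kernel="rbf", probability=True).fit(X, y)

# For new candidate parameters, run the forward model only where the class
# probability is ambiguous, i.e. near the SVM decision boundary.
candidates = rng.uniform(size=(1000, 5))
p_good = clf.predict_proba(candidates)[:, 1]
to_simulate = candidates[np.abs(p_good - 0.5) < 0.1]
print(f"{len(to_simulate)} of {len(candidates)} candidates need a forward run")
```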
Abstract:
The vast territories that were radioactively contaminated during the 1986 Chernobyl accident provide a substantial set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for such a purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ("hard" data) are combined with secondary information on local uncertainties (treated as "soft" data) to generate a science-based uncertainty assessment of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of posterior probability distributions generated across space, while no assumption about the underlying distribution is made and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps using the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data. Finally, a separate uncertainty analysis was conducted to evaluate the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available in certain populated sites. The analysis illustrates the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well both in terms of estimation accuracy and in terms of estimation error assessment, both of which are useful features for the Chernobyl fallout study.
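The hard/soft-data combination can be illustrated in a deliberately simplified form (this is not the full BME machinery): a Gaussian prior at an unsampled site, e.g. from kriging of the hard data, is reshaped into a non-Gaussian posterior by an interval-type soft datum.

```python
import numpy as np
from scipy import stats

# Gaussian prior at the estimation point (e.g. from kriging of hard data).
prior_mean, prior_sd = 100.0, 30.0       # 137Cs activity, arbitrary units
z = np.linspace(0.0, 300.0, 2001)
dz = z[1] - z[0]
prior = stats.norm.pdf(z, prior_mean, prior_sd)

# Soft datum: the local value is only known to lie in the interval [80, 140].
likelihood = ((z >= 80.0) & (z <= 140.0)).astype(float)

posterior = prior * likelihood
posterior /= posterior.sum() * dz        # renormalise on the uniform grid

post_mean = (z * posterior).sum() * dz
print(f"prior mean {prior_mean:.1f} -> posterior mean {post_mean:.1f}")
```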
Abstract:
Introduction: Imatinib, a first-line drug for chronic myeloid leukaemia (CML), has been increasingly proposed for therapeutic drug monitoring (TDM), as trough concentrations (Cmin) >= 1000 ng/ml have been associated with improved molecular and complete cytogenetic response (CCyR). The pharmacological monitoring project of EUTOS (European Treatment and Outcome Study) was launched to retrospectively validate the correlation between Cmin and response in a large population of patients followed by central TDM in Bordeaux.
Methods: 1898 CML patients with a first TDM sample 0-9 years after imatinib initiation, providing cytogenetic data along with demographic and comedication (37%) information, were included. Individual Cmin, estimated by non-linear regression (NONMEM), was adjusted to the initial standard dose (400 mg/day) and stratified at 1000 ng/ml. Kaplan-Meier estimates of overall cumulative CCyR rates (stratified by sex, age, comedication and Cmin) were compared using an asymptotic logrank k-sample test for interval-censored data. Differences in Cmin were assessed by the Wilcoxon test.
Results: There were no significant differences in overall cumulative CCyR rates between Cmin strata, sexes, or comedication with P-glycoprotein inhibitors/inducers or CYP3A4 inhibitors (p > 0.05). Lower rates were observed in 113 young patients <30 years (p = 0.037; 1-year rates: 43% vs 60% in older patients), as well as in 29 patients on CYP3A4 inducers (p = 0.001; 1-year rates: 40% vs 66% without). Higher rates were observed in 108 patients on organic-cation-transporter-1 (hOCT-1) inhibitors (p = 0.034; 1-year rates: 83% vs 56% without). Considering 1-year CCyR rates, a trend towards better response for Cmin above 1000 ng/ml was observed: 64% (95% CI: 60-69%) vs 59% (95% CI: 56-61%). Median Cmin (400 mg/day) was significantly reduced in male patients (732 vs 899 ng/ml, p < 0.001), young patients <30 years (734 vs 802 ng/ml, p = 0.037) and under CYP3A4 inducers (758 vs 859 ng/ml, p = 0.022). Under hOCT-1 inhibitors, Cmin was increased (939 vs 827 ng/ml, p = 0.038).
Conclusion: Based on observational TDM data, the impact of imatinib Cmin > 1000 ng/ml on CCyR was not salient. Young CML patients (<30 years) and patients taking CYP3A4 inducers probably need close monitoring and possibly higher imatinib doses, owing to lower Cmin along with lower CCyR rates. Patients taking hOCT-1 inhibitors, in contrast, seem to have improved CCyR rates. The precise role of imatinib TDM remains to be established prospectively.
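For readers reproducing this kind of stratified comparison, a sketch with the lifelines library is given below; it uses ordinary right-censored Kaplan-Meier curves and a standard log-rank test as a simplification (the study used an interval-censored variant), and the file and column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("eutos_tdm.csv")          # hypothetical data extract
high = df["cmin_ng_ml"] >= 1000            # stratify at the 1000 ng/ml cut-off

km = KaplanMeierFitter()
for label, grp in [("Cmin >= 1000", df[high]), ("Cmin < 1000", df[~high])]:
    km.fit(grp["months_to_ccyr"], event_observed=grp["ccyr_reached"], label=label)
    # Cumulative CCyR rate at 12 months = 1 - "survival" without response.
    print(label, "1-year CCyR rate:", 1 - km.predict(12))

res = logrank_test(df[high]["months_to_ccyr"], df[~high]["months_to_ccyr"],
                   df[high]["ccyr_reached"], df[~high]["ccyr_reached"])
print("log-rank p =", res.p_value)
```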
Abstract:
This comprehensive study aimed at understanding the reflections of, and contrasts between, personal time and medical therapy protocol time in the life of a young woman with breast cancer. Approached as a situational study and grounded in Beth's life story of falling ill and dying of cancer at age 34, the data collection process employed interviews, observation and medical record analysis. The construction of an analytic-synthetic chart based on the chronology of Beth's clinical progression, treatment phases and temporal perception of occurrences enabled us to identify a linear medical therapy protocol time defined by the sequencing of diagnosis and treatment. Beth's experienced time, on the other hand, was marked by simultaneous and non-linear events that generated suffering resulting from the disease. This understanding highlights the need for healthcare professionals to take into account the time experienced by the patient, thus giving the indispensable cancer therapy protocol a personal character.
Abstract:
We discuss some practical issues related to the use of the Parameterized Expectations Approach (PEA) for solving non-linear stochastic dynamic models with rational expectations. This approach has been applied in models of macroeconomics, financial economics, economic growth, contract theory, etc. It turns out to be a convenient algorithm, especially when there is a large number of state variables and stochastic shocks in the conditional expectations. We discuss practical issues concerning the application of the algorithm, and we describe a Fortran program for implementing it that is available through the internet. We illustrate these issues in a battery of six examples.
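A compact illustration of the PEA fixed-point iteration on a textbook stochastic growth model is sketched below; the model, parameter values, exp-log-linear expectation function and damping scheme are all illustrative assumptions (the Fortran program mentioned above is far more general), and convergence is not guaranteed for every setting.

```python
import numpy as np

# One-sector stochastic growth model: k_{t+1} = theta_t k_t^alpha
#   + (1 - delta) k_t - c_t, with log theta following an AR(1).
alpha, beta, gamma, delta, rho = 0.33, 0.95, 1.0, 0.10, 0.90
T, damp = 2000, 0.5
rng = np.random.default_rng(1)
eps = 0.01 * rng.standard_normal(T)

a = np.zeros(3)                              # coefficients of the E_t approximation
for it in range(100):
    k = np.empty(T + 1); th = np.empty(T); c = np.empty(T)
    k[0] = th[0] = 1.0
    for t in range(T):
        if t > 0:
            th[t] = th[t - 1] ** rho * np.exp(eps[t])
        # Parameterized expectation: E_t[.] ~ exp(a0 + a1 ln k_t + a2 ln theta_t)
        E = np.exp(a[0] + a[1] * np.log(k[t]) + a[2] * np.log(th[t]))
        c[t] = (beta * E) ** (-1.0 / gamma)  # Euler equation solved for c_t
        resources = th[t] * k[t] ** alpha + (1.0 - delta) * k[t]
        c[t] = min(c[t], 0.95 * resources)   # crude bound: keep capital positive
        k[t + 1] = resources - c[t]
    # Realised value of the term inside the conditional expectation at t+1.
    inside = c[1:] ** (-gamma) * (alpha * th[1:] * k[1:T] ** (alpha - 1.0) + 1.0 - delta)
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), np.log(th[:T - 1])])
    a_new, *_ = np.linalg.lstsq(X, np.log(inside), rcond=None)
    if np.max(np.abs(a_new - a)) < 1e-6:     # fixed point reached
        break
    a = damp * a_new + (1.0 - damp) * a      # damped update
print("PEA coefficients:", a)
```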
Abstract:
In this paper, we study how access pricing affects network competition when subscription demand is elastic and each network uses non-linear prices and can apply termination-based price discrimination. In the case of a fixed per-minute termination charge, we find that a reduction of the termination charge below cost has two opposing effects: it softens competition but helps to internalize network externalities. The former reduces mobile penetration while the latter boosts it. We find that firms always prefer a termination charge below cost for either motive, while the regulator prefers termination below cost only when this boosts penetration. Next, we consider the retail benchmarking approach (Jeon and Hurkens, 2008), which determines termination charges as a function of retail prices, and show that this approach allows the regulator to increase penetration without distorting call volumes.
Abstract:
This paper provides a method to estimate time-varying coefficient structural VARs which are non-recursive and potentially overidentified. The procedure allows for linear and non-linear restrictions on the parameters, maintains the multi-move structure of standard algorithms, and can be used to estimate structural models with different identification restrictions. We study the transmission of monetary policy shocks and compare the results with those obtained with traditional methods.
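As a toy illustration of what "time-varying coefficients" means in practice, the sketch below filters a single random-walk coefficient in a one-variable regression with a scalar Kalman filter; the multi-move sampler for full structural VARs with identification restrictions described in the paper is much richer than this.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
x = rng.standard_normal(T)
beta_true = np.cumsum(0.05 * rng.standard_normal(T)) + 1.0   # random-walk coefficient
y = beta_true * x + 0.3 * rng.standard_normal(T)

# Kalman filter for y_t = beta_t x_t + e_t,  beta_t = beta_{t-1} + u_t.
b, P = 0.0, 1.0                        # prior mean/variance of beta_0
q, r = 0.05 ** 2, 0.3 ** 2             # state and measurement noise variances
b_filt = np.empty(T)
for t in range(T):
    P += q                             # predict step
    K = P * x[t] / (x[t] ** 2 * P + r) # Kalman gain
    b += K * (y[t] - b * x[t])         # update step
    P *= 1 - K * x[t]
    b_filt[t] = b
print("final filtered beta:", b_filt[-1], "true:", beta_true[-1])
```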
Abstract:
In order to have references for discussing mathematical menus in political science, I review the most common types of mathematical formulae used in physics and chemistry, as well as some mathematical advances in economics. Several issues appear relevant: variables should be well defined and measurable; the relationships between variables may be non-linear; the direction of causality should be clearly identified and not assumed on a priori grounds. On these bases, theoretically-driven equations on political matters can be validated by empirical tests and can predict observable phenomena.
Abstract:
This study explored the links between having older siblings who get drunk, satisfaction with the parent-adolescent relationship, parental monitoring, and adolescents' risky drinking. Regression models were estimated on a nationally representative sample of 3725 8th to 10th graders in Switzerland (mean age 15.0, SD = 0.93) who indicated having older siblings. Results showed that both parental factors and older siblings' drinking behaviour shape younger siblings' frequency of risky drinking. Parental monitoring showed a linear dose-response relationship, and siblings' influence had an additive effect. There was a non-linear interaction effect between the parent-adolescent relationship and the older sibling's drunkenness. The findings suggest that, apart from avoiding an increasingly unsatisfactory relationship with their children, parental monitoring appears to be important in preventing risky drinking by younger children, even when an older sibling engages in such drinking. However, a satisfying relationship with parents does not seem to be sufficient to counterbalance older siblings' influence.
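A hedged sketch of the kind of moderation model described, using hypothetical file and variable names (the study's exact specification and estimator are not given in the abstract); the quadratic-by-group term is one way to encode a non-linear interaction.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hbsc_siblings.csv")      # hypothetical data extract

# Risky-drinking frequency regressed on monitoring, relationship
# satisfaction, older sibling's drunkenness, and their interaction,
# with a quadratic term allowing the interaction to be non-linear.
model = smf.ols(
    "risky_drinking ~ monitoring + satisfaction * sibling_drunk"
    " + I(satisfaction ** 2):sibling_drunk",
    data=df,
).fit()
print(model.summary())
```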
Abstract:
This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function in which pure inertia and pure variety-seeking behaviours can be explained as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or variety-seeking, where the consumer shifts among varied products. We run simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to variety-seeking behaviour as the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine crackers, and catsup. Non-linear specifications provide the best fit of the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual tendency to seek variety as the level of familiarity with the purchased items increases.
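The dynamic programming characterization can be illustrated with a toy two-product value iteration in which utility depends on a familiarity state; the hybrid utility shape and the transition rule below are illustrative assumptions, not the paper's estimated specification.

```python
import numpy as np

beta = 0.9
s_grid = np.linspace(0.0, 1.0, 51)          # familiarity with product 0

def utility(s, choice):
    # Hybrid shape: utility rises with familiarity at first (inertia),
    # then satiates at high familiarity (variety seeking).
    f = s if choice == 0 else 1.0 - s
    return f - 0.8 * f ** 2

def next_state(s, choice, lam=0.3):
    # Familiarity drifts toward the chosen product.
    target = 1.0 if choice == 0 else 0.0
    return (1.0 - lam) * s + lam * target

V = np.zeros_like(s_grid)
for _ in range(500):
    Q = np.empty((2, len(s_grid)))
    for a in (0, 1):
        s_next = np.array([next_state(s, a) for s in s_grid])
        Q[a] = np.array([utility(s, a) for s in s_grid]) + beta * np.interp(s_next, s_grid, V)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:     # value function converged
        break
    V = V_new
policy = Q.argmax(axis=0)
print("optimal choice by familiarity state:", policy)
```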
Abstract:
This paper presents a comparative analysis of linear and mixed models for short-term forecasting of a real data series with a high percentage of missing data. The data are the series of significant wave heights registered at regular three-hour intervals by a buoy placed in the Bay of Biscay. The series is interpolated with a linear predictor which minimizes the forecast mean square error. The linear models are seasonal ARIMA models, and the mixed models have a linear component and a non-linear seasonal component. The non-linear component is estimated by a non-parametric regression of data versus time. Short-term forecasts, no more than two days ahead, are of interest because they can be used by the port authorities to notify the fleet. Several models are fitted and compared by their forecasting behaviour.
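A sketch of the mixed-model recipe follows: a non-parametric (lowess) seasonal component of height versus day of year, plus a linear ARMA model on the deseasonalised residual, whose state-space form tolerates the missing observations; the file name, column names and model orders are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess
from statsmodels.tsa.arima.model import ARIMA

wave = pd.read_csv("biscay_hs.csv", parse_dates=["time"], index_col="time")
h = wave["hs"].astype(float).asfreq("3h")    # significant wave height, 3-hourly

# Non-linear seasonal component: lowess of height versus day of year.
doy = h.index.dayofyear.to_numpy(float)
mask = h.notna().to_numpy()
season = np.full(len(h), np.nan)
season[mask] = lowess(h.to_numpy()[mask], doy[mask], frac=0.2, return_sorted=False)
resid = h - season

# Linear component on the residual; the Kalman filter behind the ARIMA
# state-space model treats the remaining NaNs as missing data.
fit = ARIMA(resid, order=(2, 0, 1)).fit()
fcast = fit.forecast(steps=16)               # 16 x 3 h = two days ahead

# Add the seasonal component back at the forecast dates.
fdoy = fcast.index.dayofyear.to_numpy(float)
season_f = lowess(h.to_numpy()[mask], doy[mask], frac=0.2, xvals=fdoy)
print(fcast + season_f)
```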
Abstract:
We introduce a microscopic method that determines quantitative optical properties beyond the optical diffraction limit and allows direct imaging of unstained living biological specimens. In established holographic microscopy, complex fields are measured using interferometric detection, allowing diffraction-limited phase measurements. Here, we show that non-invasive optical nanoscopy can achieve a lateral resolution of 90 nm by using a quasi-2π holographic detection scheme and complex deconvolution. We record holograms from different illumination directions on the sample plane and observe subwavelength tomographic variations of the specimen. Nanoscale apertures serve to calibrate the tomographic reconstruction and to characterize the imaging system by means of the coherent transfer function. This gives rise to realistic inverse filtering and guarantees true complex field reconstruction. The observations are shown for nanoscopic porous cell frustules (diatoms), for the direct study of bacteria (Escherichia coli), and for a time-lapse approach to explore the dynamics of living dendritic spines (neurones).
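The complex deconvolution step amounts to inverse filtering of the measured complex field by the coherent transfer function (CTF). A bare-bones numerical sketch with a synthetic ideal pupil (not the aperture-calibrated CTF of the paper) and Tikhonov regularisation:

```python
import numpy as np

n = 256
fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
ctf = (np.hypot(FX, FY) < 0.2).astype(complex)   # ideal low-pass pupil

# Synthetic complex object field, low-pass filtered to mimic the measurement.
rng = np.random.default_rng(3)
field_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
measured = np.fft.ifft2(np.fft.fft2(field_true) * ctf)

# Regularised inverse filter: divide by the CTF where it has support.
eps = 1e-2                                        # Tikhonov regularisation strength
spectrum = np.fft.fft2(measured) * np.conj(ctf) / (np.abs(ctf) ** 2 + eps)
recovered = np.fft.ifft2(spectrum)
```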
Abstract:
Context: There are no evidence syntheses available to guide clinicians on when to titrate antihypertensive medication after initiation. Objective: To model the blood pressure (BP) response after initiating antihypertensive medication. Data sources: Electronic databases including Medline, Embase, the Cochrane Register and reference lists up to December 2009. Study selection: Trials that initiated antihypertensive medication as single therapy in hypertensive patients who were either drug naive or had a placebo washout from previous drugs. Data extraction: Office BP measurements at a minimum of two-weekly intervals for a minimum of 4 weeks. An asymptotic approach model of BP response was assumed, and non-linear mixed-effects modelling was used to calculate the model parameters. Results: Eighteen trials that recruited 4168 patients met the inclusion criteria. The time to reach 50% of the maximum estimated BP-lowering effect was 1 week (systolic 0.91 weeks, 95% CI 0.74 to 1.10; diastolic 0.95, 0.75 to 1.15). Models incorporating drug class as a source of variability did not improve the fit of the data. Incorporating the presence of a titration schedule improved the model fit for both systolic and diastolic pressure. Titration increased both the predicted maximum effect and the time taken to reach 50% of the maximum (systolic 1.2 vs 0.7 weeks; diastolic 1.4 vs 0.7 weeks). Conclusions: Estimates of the maximum efficacy of antihypertensive agents can be made early after starting therapy. This knowledge will guide clinicians in deciding whether a newly started antihypertensive agent is likely to be effective at controlling BP.
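The asymptotic-approach model can be illustrated with a simple curve fit; the form E(t) = Emax(1 - e^(-kt)) is one common choice consistent with a "time to 50% of maximum" summary, the data points are made up, and the fit below omits the mixed-effects structure used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptotic(t, emax, k):
    # BP fall approaches its maximum emax at rate k.
    return emax * (1.0 - np.exp(-k * t))

weeks = np.array([1, 2, 4, 6, 8, 12], float)
drop = np.array([6.1, 8.9, 10.8, 11.5, 11.9, 12.1])   # hypothetical mmHg fall

(emax, k), _ = curve_fit(asymptotic, weeks, drop, p0=(12.0, 0.7))
t50 = np.log(2) / k            # time to 50% of the maximum effect
print(f"Emax = {emax:.1f} mmHg, time to half-maximal effect = {t50:.2f} weeks")
```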
Abstract:
In recent years there has been explosive growth in the development of adaptive and data-driven methods. One efficient data-driven approach is based on statistical learning theory (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error - to fit the available data with a model - but also to reduce the complexity of the model and the generalisation error. Many non-linear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present statistical learning theory is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and non-linear data models with excellent generalisation abilities, which is very important for both monitoring and forecasting. SVM are extremely good when the input space is high-dimensional and the training data set is not big enough to develop a corresponding non-linear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
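A small example of the kind of SVM modelling described, using scikit-learn's SVR for a non-linear regression on 2-D spatial coordinates; the synthetic surface and hyperparameters are arbitrary. Note that only the support vectors end up defining the fitted function, which is what enables the sampling-optimisation uses mentioned above.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, size=(300, 2))                  # monitoring sites
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1]) + 0.1 * rng.standard_normal(300)

# Non-linear spatial model with an RBF kernel; epsilon sets the tube width.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(xy, z)
print(f"{len(svr.support_)} of {len(xy)} samples are support vectors")
```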