141 results for Exponential Splines
Abstract:
The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in delivering promising results that derive value from the large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing data sets and analytical techniques in software applications that are so large and complex, owing to its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. These have resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance the sharing of information subject to the security requirements expected by stakeholders. Compared with other sectors, the health sector is still in the early stages of big data analysis. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, applying Information Accountability measures to healthcare big data may be a practical way to support information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation and, finally, liability [2].
According to current studies [3], Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often operate as an integrated service bundle. Although a critical requirement for healthcare services and analytics, a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements is difficult to find. As a remedy, this research work therefore focuses on a systematic approach containing comprehensive guidelines on the data that must be provided to apply and evaluate big data analysis until the decision-making requirements for improving the quality of healthcare services are fulfilled. We believe that this approach would, in turn, improve quality of life.
Abstract:
The measurement of radon (²²²Rn) activity flux using activated charcoal canisters was examined to investigate the distribution of the adsorbed ²²²Rn in the charcoal bed and the relationship between ²²²Rn activity flux and exposure time. The activity flux of ²²²Rn from five sources of varying strengths was measured for exposure times of 1, 2, 3, 5, 7, 10, and 14 days. The distribution of the adsorbed ²²²Rn in the charcoal bed was obtained by dividing the bed into six layers and counting each layer separately after the exposure. ²²²Rn activity decreased in the layers further from the exposed surface. Nevertheless, the results demonstrated that only a small correction might be required in the actual application of charcoal canisters for activity flux measurement, where calibration standards are often prepared by the uniform mixing of radium (²²⁶Ra) in the matrix. This is because the diffusion of ²²²Rn in the charcoal bed and the detection efficiency as a function of charcoal depth tend to counterbalance each other. The influence of exposure time on the measured ²²²Rn activity flux was observed in two canister exposure layouts: (a) canister sealed to an open bed of the material and (b) canister sealed over a jar containing the material. The measured ²²²Rn activity flux decreased as the exposure time increased. In the former layout the change was significant, with an exponential decrease as exposure time increased; in the latter, a smaller reduction in the observed activity flux with exposure time was noticed. This reduction might be related to factors such as adsorption site saturation or the back diffusion of ²²²Rn gas at the canister-soil interface.
Abstract:
Based on protein molecular dynamics, we investigate the fractal properties of energy, pressure and volume time series using the multifractal detrended fluctuation analysis (MF-DFA) and the topological and fractal properties of their converted horizontal visibility graphs (HVGs). The energy parameters of protein dynamics we considered are bonded potential, angle potential, dihedral potential, improper potential, kinetic energy, Van der Waals potential, electrostatic potential, total energy and potential energy. The shape of the h(q) curves from MF-DFA indicates that these time series are multifractal. The numerical values of the exponent h(2) of MF-DFA show that the series of total energy and potential energy are non-stationary and anti-persistent; the other time series are stationary and persistent apart from the series of pressure (with H ≈ 0.5 indicating the absence of long-range correlation). The degree distributions of their converted HVGs show that these networks are exponential. The results of fractal analysis show that fractality exists in these converted HVGs. For each energy, pressure or volume parameter, it is found that the values of h(2) of MF-DFA on the time series, the exponent λ of the exponential degree distribution and the fractal dimension dB of their converted HVGs do not change much for different proteins (indicating some universality). We also found that after taking the average over all proteins, there is a linear relationship between 〈h(2)〉 (from MF-DFA on time series) and 〈dB〉 of the converted HVGs for the different energy, pressure and volume parameters.
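The HVG construction underlying the abstract above is simple enough to sketch. The following illustrative Python (not the authors' code; function names are ours) links two points of a series whenever every intermediate value lies below both, and tabulates the degree distribution whose exponential form P(k) ~ e^(−λk) is the property being tested:

```python
import numpy as np

def hvg_edges(x):
    """Edges of the horizontal visibility graph (HVG) of a series:
    nodes i < j are linked iff every value strictly between them lies
    below both x[i] and x[j] (the horizontal visibility criterion)."""
    n = len(x)
    return [(i, j) for i in range(n - 1) for j in range(i + 1, n)
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j))]

def degree_distribution(edges, n):
    """Empirical degree distribution P(k), which can then be checked
    against the exponential form P(k) ~ exp(-lambda * k)."""
    deg = np.zeros(n, int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    ks, counts = np.unique(deg, return_counts=True)
    return ks, counts / n
```

A log-linear fit of P(k) versus k then estimates the degree exponent λ discussed in the abstract.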
Abstract:
Many studies have shown that we can gain additional information on time series by investigating their accompanying complex networks. In this work, we investigate the fundamental topological and fractal properties of recurrence networks constructed from fractional Brownian motions (FBMs). First, our results indicate that the constructed recurrence networks have exponential degree distributions; the average degree exponent 〈λ〉 increases first and then decreases with increasing Hurst index H of the associated FBMs; the relationship between H and 〈λ〉 can be represented by a cubic polynomial function. We next focus on the motif rank distribution of recurrence networks, so that we can better understand networks at the local structure level. We find an interesting superfamily phenomenon, i.e., recurrence networks with the same motif rank pattern are grouped into two superfamilies. Last, we numerically analyze the fractal and multifractal properties of recurrence networks. We find that the average fractal dimension 〈dB〉 of recurrence networks decreases with the Hurst index H of the associated FBMs, and their dependence approximately satisfies the linear formula 〈dB〉 ≈ 2 − H, which means that the fractal dimension of the associated recurrence network is close to that of the graph of the FBM. Moreover, our numerical results of multifractal analysis show that multifractality exists in these recurrence networks, and that it becomes stronger at first and then weaker as the Hurst index of the associated time series increases from 0.4 to 0.95. In particular, the recurrence network with Hurst index H = 0.5 possesses the strongest multifractality. In addition, the dependence of the average information dimension 〈D(1)〉 and the average correlation dimension 〈D(2)〉 on the Hurst index H can also be fitted well with linear functions. Our results strongly suggest that the recurrence network inherits the basic characteristics and the fractal nature of the associated FBM series.
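A recurrence network of the kind analyzed above can be sketched in a few lines. The snippet below is illustrative only: it uses ordinary Brownian motion (the H = 0.5 case of FBM, generated as a cumulative sum of Gaussian increments) because exact FBM synthesis (e.g. by the Davies-Harte method) is beyond a sketch, and the embedding dimension, delay and recurrence rate are arbitrary choices:

```python
import numpy as np

def recurrence_network(x, dim=3, tau=1, rate=0.05):
    """Adjacency matrix of an epsilon-recurrence network: time-delay
    embed the series, then link two state vectors whenever their
    Euclidean distance falls below the threshold epsilon fixed by the
    target recurrence rate."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau
    states = np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])
    # pairwise distances between embedded states
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    eps = np.quantile(d[np.triu_indices(n, k=1)], rate)
    A = (d < eps).astype(int)
    np.fill_diagonal(A, 0)   # no self-loops
    return A
```

The degree distribution and box-counting dimension of A are then the quantities whose H-dependence the abstract reports.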
Abstract:
The focus of this paper is two-dimensional computational modelling of water flow in unsaturated soils consisting of weakly conductive disconnected inclusions embedded in a highly conductive connected matrix. When the inclusions are small, a two-scale Richards’ equation-based model has been proposed in the literature taking the form of an equation with effective parameters governing the macroscopic flow coupled with a microscopic equation, defined at each point in the macroscopic domain, governing the flow in the inclusions. This paper is devoted to a number of advances in the numerical implementation of this model. Namely, by treating the micro-scale as a two-dimensional problem, our solution approach based on a control volume finite element method can be applied to irregular inclusion geometries, and, if necessary, modified to account for additional phenomena (e.g. imposing the macroscopic gradient on the micro-scale via a linear approximation of the macroscopic variable along the microscopic boundary). This is achieved with the help of an exponential integrator for advancing the solution in time. This time integration method completely avoids generation of the Jacobian matrix of the system and hence eases the computation when solving the two-scale model in a completely coupled manner. Numerical simulations are presented for a two-dimensional infiltration problem.
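The exponential-integrator idea mentioned above can be illustrated on a small semilinear system u' = Au + g(u). This sketch (not the authors' control-volume code) implements exponential Euler for a symmetric A via eigendecomposition, so no Jacobian of the full nonlinear system is ever assembled; large-scale codes would instead approximate the matrix functions with Krylov methods:

```python
import numpy as np

def exp_euler(A, g, u0, h, steps):
    """Exponential Euler for u'(t) = A u + g(u):
        u_{n+1} = e^{hA} u_n + h * phi1(hA) g(u_n),  phi1(z) = (e^z - 1)/z.
    A is assumed symmetric with nonzero eigenvalues (e.g. a discrete
    diffusion operator), so the matrix functions come from eigh."""
    w, V = np.linalg.eigh(h * np.asarray(A, float))
    E = (V * np.exp(w)) @ V.T            # e^{hA}
    P = (V * (np.expm1(w) / w)) @ V.T    # phi1(hA)
    u = np.asarray(u0, float).copy()
    for _ in range(steps):
        u = E @ u + h * (P @ g(u))
    return u
```

Exponential Euler is exact whenever g is constant, which is what makes it attractive for stiff diffusion-dominated problems such as Richards' equation.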
Abstract:
Background: Unlike other indicators of cardiac function, such as ejection fraction and transmitral early diastolic velocity, myocardial strain is promising for capturing the subtle alterations that result from early diseases of the myocardium. In order to extract the left ventricle (LV) myocardial strain and strain rate from cardiac cine-MRI, a modified hierarchical transformation model was proposed. Methods: A hierarchical transformation model including the global and local LV deformations was employed to analyze the strain and strain rate of the left ventricle by cine-MRI image registration. The endocardial and epicardial contour information was introduced to enhance registration accuracy by combining the original hierarchical algorithm with an Iterative Closest Points using Invariant Features algorithm. The hierarchical model was first validated on a normal volunteer and then applied to two clinical cases (the normal volunteer and a diabetic patient) to evaluate their respective cardiac function. Results: In the two clinical cases, comparison of the displacement fields of two selected landmarks in the normal volunteer showed that the proposed method performed better than the original, unmodified model. Meanwhile, the comparison of radial strain between the volunteer and the patient demonstrated their apparent functional difference. Conclusions: The present method can be used to estimate the LV myocardial strain and strain rate during a cardiac cycle and thus to quantitatively analyze LV motion function.
Abstract:
Introduction: Ultrasmall superparamagnetic iron oxide (USPIO)-enhanced MRI has been shown to be a useful modality to image activated macrophages in vivo, which are principally responsible for plaque inflammation. This study determined the optimum imaging time-window to detect maximal signal change post-USPIO infusion using T1-weighted (T1w), T2*-weighted (T2*w) and quantitative T2* (qT2*) imaging. Methods: Six patients with an asymptomatic carotid stenosis underwent high resolution T1w, T2*w and qT2* MR imaging of their carotid arteries at 1.5 T. Imaging was performed before and at 24, 36, 48, 72 and 96 h after USPIO (Sinerem™, Guerbet, France) infusion. Each slice showing atherosclerotic plaque was manually segmented into quadrants and the signal changes in each quadrant were fitted to an exponential power function to model the optimum time for post-infusion imaging. Results: The power function determining the mean time to convergence for all patients was 46, 41 and 39 h for the T1w, T2*w and qT2* sequences, respectively. When modelling each patient individually, 90% of the maximum signal intensity change was observed at 36 h for three, four and six patients on T1w, T2*w and qT2*, respectively. The rate of signal change decreased after this period, but signal change was still evident up to 96 h. Conclusion: This study showed that a suitable imaging window for T1w, T2*w and qT2* signal changes post-USPIO infusion was between 36 and 48 h. Logistically, this would be convenient in bringing patients back for one post-contrast MRI, but validation is required in a larger cohort of patients.
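The abstract does not give the exact form of its "exponential power function", so the sketch below fits a generic exponential rise to a plateau, S(t) = S_max(1 − e^(−t/τ)), to synthetic signal-change data by profiling the closed-form S_max over a grid of τ, and derives the time to 90% of the maximum change (t90 = τ ln 10). All numbers are illustrative, not patient data:

```python
import numpy as np

def rise(t, s_max, tau):
    """Exponential approach to a plateau (an assumed stand-in model)."""
    return s_max * (1.0 - np.exp(-t / tau))

def fit_rise(t, s):
    """Least-squares fit of rise(): for each tau on a grid, the optimal
    S_max has a closed form, so only tau needs a 1-D search."""
    best = None
    for tau in np.linspace(1.0, 60.0, 2000):
        b = 1.0 - np.exp(-t / tau)
        s_max = (b @ s) / (b @ b)          # profiled linear coefficient
        sse = np.sum((s - s_max * b) ** 2)
        if best is None or sse < best[0]:
            best = (sse, s_max, tau)
    return best[1], best[2]
```

With fitted τ, the 90% point follows from 1 − e^(−t/τ) = 0.9, i.e. t90 = τ ln 10.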
Abstract:
Spatial data analysis has become increasingly important in ecological and economic studies over the last decade. One focus of spatial data analysis is how to select predictors, variance functions and correlation functions. In general, however, the true covariance function is unknown and the working covariance structure is often misspecified. In this paper, our target is to find a good strategy for identifying the best model from a candidate set using model selection criteria. We evaluate the ability of several information criteria (the corrected Akaike information criterion, the Bayesian information criterion (BIC) and the residual information criterion (RIC)) to choose the optimal model when the working correlation function, the working variance function and the working mean function are correct or misspecified. Simulations are carried out for small to moderate sample sizes. Four candidate covariance functions (exponential, Gaussian, Matérn and rational quadratic) are used in the simulation studies. Summarizing the simulation results, we find that a misspecified working correlation structure can still capture some spatial correlation information in model fitting. When the sample size is large enough, BIC and RIC perform well even if the working covariance is misspecified. Moreover, the performance of these information criteria is related to the average level of model fit, as indicated by the average adjusted R-squared, and overall RIC performs well.
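The four candidate covariance families and a BIC computation for a Gaussian spatial model can be sketched as follows. This is illustrative Python, not the paper's simulation code: the Matérn family is represented by its ν = 3/2 member, the rational quadratic by a simple one-parameter form, and the range parameter phi is taken as known rather than estimated:

```python
import numpy as np

# Candidate spatial correlation functions of distance h, range phi.
corr = {
    "exponential": lambda h, phi: np.exp(-h / phi),
    "gaussian":    lambda h, phi: np.exp(-(h / phi) ** 2),
    "matern32":    lambda h, phi: (1 + np.sqrt(3) * h / phi)
                                  * np.exp(-np.sqrt(3) * h / phi),
    "rat_quad":    lambda h, phi: 1.0 / (1 + (h / phi) ** 2),
}

def gaussian_bic(y, X, R):
    """BIC for a linear mean with correlated Gaussian errors, with the
    scale sigma^2 profiled out; p mean parameters plus the variance and
    range parameters are counted in the penalty."""
    n, p = X.shape
    Ri = np.linalg.inv(R)
    beta = np.linalg.solve(X.T @ Ri @ X, X.T @ Ri @ y)  # GLS estimate
    r = y - X @ beta
    sigma2 = (r @ Ri @ r) / n
    loglik = -0.5 * (n * np.log(2 * np.pi * sigma2)
                     + np.linalg.slogdet(R)[1] + n)
    return -2 * loglik + (p + 2) * np.log(n)
```

Evaluating `gaussian_bic` under each candidate correlation and picking the minimum is the selection strategy whose reliability the simulations probe.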
Abstract:
The Bernoulli/exponential target process is considered. Such processes have been found useful in modelling the search for active compounds in pharmaceutical research. An inequality is presented which improves a result of Gittins (1989), thus providing a better approximation to the Gittins indices which define the optimal search policy.
Abstract:
We propose an iterative estimating equations procedure for analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iterative reweighted least squares. Finite sample performance of the procedure is studied by simulations, and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
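In the linear special case mentioned above, the procedure is iteratively reweighted least squares. The sketch below is illustrative only: the working variance model Var(y_i) ∝ μ_i² is our assumption for the example, not a specification from the paper. It alternates between solving the weighted normal equations for β and refreshing the weights from the current fit:

```python
import numpy as np

def irls(y, X, n_iter=25):
    """Iteratively reweighted least squares for a linear model whose
    (assumed) working variance is proportional to the squared mean."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting value
    for _ in range(n_iter):
        mu = X @ beta
        w = 1.0 / np.maximum(mu ** 2, 1e-12)      # working weights
        XtW = X.T * w                             # X^T W
        beta = np.linalg.solve(XtW @ X, XtW @ y)  # weighted normal eqns
    return beta
```

Each pass solves an estimating equation with the current weights; iterating to convergence gives the limiting estimator whose consistency and efficiency the paper establishes.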
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows for the specification of a working correlation matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated, and (iii) the 'design', by which we refer to both the structure of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
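The AR(1) working structure recommended above is easy to write down explicitly. The following sketch (illustrative, not the paper's code) builds the working correlation matrix for one cluster and shows one simple moment-based way of estimating its parameter from residuals:

```python
import numpy as np

def ar1_working_correlation(n, alpha):
    """AR(1) working correlation for a cluster of n repeated measures:
    Corr(y_ij, y_ik) = alpha^|j - k|."""
    idx = np.arange(n)
    return alpha ** np.abs(np.subtract.outer(idx, idx))

def estimate_alpha_lag1(resid):
    """Lag-1 moment estimator of alpha from the standardized residuals of
    one cluster; one of several estimation methods whose efficiency
    consequences can be compared."""
    r = np.asarray(resid, float)
    return (r[:-1] * r[1:]).mean() / (r * r).mean()
```

In a GEE fit, this matrix enters the working covariance of each cluster as V_i = A_i^{1/2} R(α) A_i^{1/2}, with A_i the diagonal variance matrix.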
Abstract:
The paper studies stochastic approximation as a technique for bias reduction. The proposed method does not require approximating the bias explicitly, nor does it rely on having independent identically distributed (i.i.d.) data. The method always removes the leading bias term, under very mild conditions, as long as auxiliary samples from distributions with given parameters are available. Expectation and variance of the bias-corrected estimate are given. Examples in sequential clinical trials (non-i.i.d. case), curved exponential models (i.i.d. case) and length-biased sampling (where the estimates are inconsistent) are used to illustrate the applications of the proposed method and its small sample properties.
Abstract:
For a wide class of semi-Markov decision processes the optimal policies are expressible in terms of the Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on finite-horizon dynamic programming. This paper provides some results on the accuracy of such approximations and, in particular, gives error bounds for some well-known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
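The calibration described above can be sketched for the Bernoulli reward process: bisect on the known rate lam of a standard arm until the decision maker is indifferent between it and the Bernoulli arm, valuing each choice by finite-horizon dynamic programming. Everything below is illustrative (Beta(1,1) prior, discount 0.9, horizon 60 are our choices); the finite horizon is precisely the truncation whose approximation error the paper bounds:

```python
def value(s, f, lam, a, horizon, memo):
    """Optimal finite-horizon value when choosing, at each step, between
    a Bernoulli arm (Beta(1,1) prior; record s successes / f failures)
    and retiring to a standard arm paying lam per pull."""
    m = horizon - (s + f)              # pulls remaining
    if m <= 0:
        return 0.0
    if (s, f) not in memo:
        stand = lam * (1 - a ** m) / (1 - a)      # retire: geometric sum
        p = (s + 1) / (s + f + 2)                 # posterior success prob
        risky = p * (1 + a * value(s + 1, f, lam, a, horizon, memo)) \
            + (1 - p) * a * value(s, f + 1, lam, a, horizon, memo)
        memo[(s, f)] = max(stand, risky)
    return memo[(s, f)]

def gittins_index(s, f, a=0.9, horizon=60, tol=1e-4):
    """Calibration: the index is the standard-arm rate lam at which the
    decision maker is indifferent; located by bisection on lam."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        v = value(s, f, lam, a, horizon, {})
        stand = lam * (1 - a ** (horizon - s - f)) / (1 - a)
        if v > stand + 1e-12:    # risky arm preferred: index exceeds lam
            lo = lam
        else:
            hi = lam
    return 0.5 * (lo + hi)
```

The index always sits above the posterior mean reward, the gap being the "learning component" of exploring an uncertain arm.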
Abstract:
For a multiarmed bandit problem with exponential discounting, the optimal allocation rule is defined by a dynamic allocation index defined for each arm on its state space. The index for an arm is equal to the expected immediate reward from the arm, with an upward adjustment reflecting any uncertainty about the prospects of obtaining rewards from the arm, and the possibility of resolving those uncertainties by selecting that arm. Thus the learning component of the index is defined as the difference between the index and the expected immediate reward. For two arms with the same expected immediate reward, the learning component should be larger for the arm whose reward rate is more uncertain. This is shown to be true for arms based on independent samples from a fixed distribution with an unknown parameter in the cases of Bernoulli and normal distributions, and similar results are obtained in other cases.
Lost in space: The place of the architectural milieu in the aetiology and treatment of schizophrenia
Abstract:
Purpose – Psychological and epidemiological literature suggests that the built environment plays both causal and therapeutic roles in schizophrenia, but what are the implications for designers? The purpose of this paper is to focus on the role the built environment plays in psycho‐environmental dynamics, so that negative effects can be avoided and beneficial effects emphasised in architectural design. Design/methodology/approach – The approach taken is a translational exploration of the dynamics between the built environment and psychotic illness, using primary research from disciplines as diverse as epidemiology, neurology and psychology. Findings – The built environment is conceived as being both an agonist and an antagonist for the underlying processes that present as psychosis. The built environment is implicated in several ways, chiefly through the opportunities it provides, which may be physical, narrative, emotional, hedonic or personal; some opportunities may be negative, others positive. The built environment is also an important source of unexpected aesthetic stimulation, yet in psychotic illnesses aesthetic sensibilities characteristically deteriorate. Research limitations/implications – The findings presented are based on research that is largely translated from very different fields of enquiry. Whilst the findings are cogent and logical, much of the support is correlational rather than empirical. Social implications – The WHO claims that schizophrenia destroys 24 million lives worldwide, with an exponential effect on human and financial capital. Because evidence implicates the built environment, architectural and urban designers may have a role to play in reducing the human costs wrought by the illness. Originality/value – Never before has architecture been so explicitly implicated as a cause of mental illness.
This paper was presented to the Symposium of Mental Health Facility Design, and is essential reading for anyone involved in designing for improved mental health.