925 results for Generalized Convexity
Abstract:
Delays are an important feature in temporal models of genetic regulation due to slow biochemical processes, such as transcription and translation. In this paper, we show how to model intrinsic noise effects in a delayed setting by either using a delay stochastic simulation algorithm (DSSA) or, for larger and more complex systems, a generalized binomial τ-leap method (Bτ-DSSA). As a particular application, we apply these ideas to modeling somite segmentation in zebrafish across a number of cells in which two linked oscillatory genes (her1 and her7) are synchronized via Notch signaling between the cells.
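The core of a delay stochastic simulation algorithm can be sketched as follows. This is a minimal illustration for a single species with a fixed production delay, not the Bτ-DSSA of the paper; the rates and the rejection-style handling of scheduled completions are our illustrative assumptions.

```python
import heapq
import math
import random

def delay_ssa(rate_produce, rate_degrade, tau, t_end, seed=0):
    """Minimal delay-SSA sketch for one species: production initiates
    stochastically but only completes after a fixed delay tau
    (e.g. transcription + translation time); degradation is immediate.
    All rates here are illustrative, not values from the paper."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    pending = []  # min-heap of completion times of delayed productions
    while t < t_end:
        a1 = rate_produce          # propensity: initiate production
        a2 = rate_degrade * x      # propensity: degrade one molecule
        a0 = a1 + a2
        dt = -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        # If a scheduled delayed product completes first, apply it instead;
        # redrawing the wait afterwards is valid because exponential
        # waiting times are memoryless.
        if pending and pending[0] <= t + dt:
            t = heapq.heappop(pending)
            x += 1
            continue
        t += dt
        if rng.random() * a0 < a1:
            heapq.heappush(pending, t + tau)  # product appears at t + tau
        else:
            x -= 1
    return x
```

The heap keeps pending delayed completions ordered so the next one is always visible before committing an ordinary reaction step.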
Abstract:
High levels of sitting have been linked with poor health outcomes. Previously a pragmatic MTI accelerometer cut-point (100 counts·min-1) has been used to estimate sitting. Data on the accuracy of this cut-point are unavailable. PURPOSE: To ascertain whether the 100 counts·min-1 cut-point accurately isolates sitting from standing activities. METHODS: Participants fitted with an MTI accelerometer were observed performing a range of sitting, standing, light and moderate activities. 1-min epoch MTI data were matched to observed activities, then re-categorized as either sitting or not using the 100 counts·min-1 cut-point. Self-reported demographics and current physical activity were collected. Generalized estimating equation (GEE) analyses for repeated measures with a binary logistic model, corrected for age, gender and BMI, were conducted to ascertain the odds of the MTI data being misclassified. RESULTS: Data were from 26 healthy subjects (8 men; 50% aged <25 years; mean (SD) BMI 22.7 (3.8) kg/m2). The mode of both the MTI sitting and standing data was 0 counts·min-1, with 46% of sitting activities and 21% of standing activities recording 0 counts·min-1. The GEE was unable to accurately isolate sitting from standing activities using the 100 counts·min-1 cut-point, since all sitting activities were incorrectly predicted as standing (p=0.05). To further explore the sensitivity of MTI data to delineate sitting from standing, the upper 95% confidence limit of the mean for the sitting activities (46 counts·min-1) was used to re-categorize the data; this resulted in the GEE correctly classifying 49% of sitting and 69% of standing activities. Using the 100 counts·min-1 cut-point the data were re-categorized into a combined ‘sit/stand’ category and tested against other light activities: 88% of sit/stand and 87% of light activities were accurately predicted. Using Freedson’s moderate cut-point of 1952 counts·min-1 the GEE accurately predicted 97% of light vs. 90% of moderate activities.
CONCLUSION: The distributions of the MTI-recorded sitting and standing data overlap considerably; as such, the 100 counts·min-1 cut-point did not accurately isolate sitting from other static standing activities. The 100 counts·min-1 cut-point more accurately predicted sit/stand vs. other movement-oriented activities.
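The cut-point logic discussed above amounts to thresholding each 1-min epoch. A minimal sketch, using the 100 and 1952 count cut-points stated in the abstract (the three-way category names are our own labels, not the study's):

```python
def classify_epoch(counts_per_min, sit_cut=100, moderate_cut=1952):
    """Classify a 1-min MTI accelerometer epoch by count cut-points.
    100 (sedentary) and 1952 (Freedson moderate) come from the abstract;
    the category labels are illustrative."""
    if counts_per_min < sit_cut:
        return "sedentary (sit/stand)"
    if counts_per_min < moderate_cut:
        return "light"
    return "moderate+"
```

The abstract's finding is precisely that the first branch cannot separate sitting from standing, since both frequently record 0 counts.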
Abstract:
OBJECTIVE: To assess the psychometric properties and health correlates of the Geriatric Anxiety Inventory (GAI) in a cohort of Australian community-residing older women. METHOD: Cross-sectional study of a population-based cohort of women aged 60 years and over (N = 286). RESULTS: The GAI exhibited sound internal consistency and demonstrated good concurrent validity against the state half of the Spielberger State-Trait Anxiety Inventory and the neuroticism domain of the NEO five-factor inventory. GAI score was significantly associated with self-reported sleep difficulties and perceived memory impairment, but not with age or cognitive function. Women with current DSM-IV Generalized Anxiety Disorder (GAD) had significantly higher GAI scores than women without the disorder. In this cohort, the optimal cut-point to detect current GAD was 8/9. Although the GAI was designed to have few somatic items, women with a greater number of general medical problems or who rated their general health as worse had higher GAI scores. CONCLUSION: The GAI is a new scale designed specifically to measure anxiety in older people. In this Australian cohort of older women, the instrument had sound psychometric properties.
Abstract:
We develop a new analytical solution for a reactive transport model that describes the steady-state distribution of oxygen subject to diffusive transport and nonlinear uptake in a sphere. This model was originally reported by Lin (Journal of Theoretical Biology, 1976, 60, pp. 449–457) to represent the distribution of oxygen inside a cell and has since been studied extensively by both the numerical analysis and formal analysis communities. Here we extend these previous studies by deriving an analytical solution to a generalized reaction-diffusion equation that encompasses Lin’s model as a particular case. We evaluate the solution for the parameter combinations presented by Lin and show that the new solutions are identical to a grid-independent numerical approximation.
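For readers who want a quick numerical cross-check of this kind of model, the sketch below solves a Michaelis-Menten-type reaction-diffusion problem in a sphere by finite differences with Picard iteration. The equation form, parameter names (phi2, kappa) and values are illustrative assumptions, not Lin's exact model or the paper's analytical solution.

```python
def solve_oxygen_sphere(phi2=1.0, kappa=0.5, n=200, iters=60):
    """Sketch of a grid-based approximation for steady diffusion with
    Michaelis-Menten-type uptake in a sphere:
        c'' + (2/r) c' = phi2 * c / (c + kappa),  c'(0)=0,  c(1)=1.
    phi2 and kappa are illustrative parameters, not Lin's values."""
    h = 1.0 / n
    c = [1.0] * (n + 1)  # initial guess
    for _ in range(iters):
        # Assemble tridiagonal system; uptake frozen at old c (Picard).
        lo = [0.0] * (n + 1)
        di = [0.0] * (n + 1)
        up = [0.0] * (n + 1)
        rhs = [0.0] * (n + 1)
        # r = 0: symmetry gives c'' + (2/r)c' -> 3c'', so 6(c1-c0)/h^2 = f(c0)
        di[0], up[0] = -6.0 / h**2, 6.0 / h**2
        rhs[0] = phi2 * c[0] / (c[0] + kappa)
        for i in range(1, n):
            r = i * h
            lo[i] = 1.0 / h**2 - 1.0 / (r * h)
            di[i] = -2.0 / h**2
            up[i] = 1.0 / h**2 + 1.0 / (r * h)
            rhs[i] = phi2 * c[i] / (c[i] + kappa)
        di[n], rhs[n] = 1.0, 1.0  # boundary condition c(1) = 1
        # Thomas algorithm (forward elimination, back substitution)
        for i in range(1, n + 1):
            w = lo[i] / di[i - 1]
            di[i] -= w * up[i - 1]
            rhs[i] -= w * rhs[i - 1]
        c[n] = rhs[n] / di[n]
        for i in range(n - 1, -1, -1):
            c[i] = (rhs[i] - up[i] * c[i + 1]) / di[i]
    return c
```

A grid-independence check of the kind mentioned in the abstract would repeat this for increasing n and compare the profiles.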
Abstract:
Objective: Older driver research has mostly focused on identifying the small proportion of older drivers who are unsafe. Little is known about how normal cognitive changes in aging affect driving in the wider population of adults who drive regularly. We evaluated the associations of cognitive function and age with driving errors. Method: A sample of 266 drivers aged 70 to 88 years was assessed on abilities that decline in normal aging (visual attention, processing speed, inhibition, reaction time, task switching) and on the UFOV®, a validated screening instrument for older drivers. Participants completed an on-road driving test. Generalized linear models were used to estimate the associations of the cognitive factors with specific driving errors and with the number of errors in self-directed and instructor-navigated conditions. Results: All error types increased with chronological age. Reaction time was not associated with driving errors in multivariate analyses. A cognitive factor measuring Speeded Selective Attention and Switching was uniquely associated with the most error types. The UFOV predicted blindspot errors and errors on dual carriageways. After adjusting for age, education and gender, the cognitive factors explained 7% of the variance in the total number of errors in the instructor-navigated condition and 4% of the variance in the self-navigated condition. Conclusion: We conclude that among older drivers errors increase with age and are associated with speeded selective attention (particularly when it requires attending to stimuli in the periphery of the visual field), task switching, response inhibition and visual discrimination. These abilities should be the target of cognitive training.
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton, so for a fixed box size more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
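A minimal version of the random sequential box-covering idea used here can be sketched as follows; the adjacency-dict graph representation and the random shuffle for centre selection are our illustrative choices.

```python
import random
from collections import deque

def box_count(adj, radius, seed=0):
    """Random sequential box-covering sketch: repeatedly pick a random
    uncovered node as a box centre and cover every node within `radius`
    hops of it; how the box count scales with radius is what a fractal
    dimension estimate is built on.  `adj` maps node -> neighbour list."""
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)
    covered = set()
    boxes = 0
    for centre in order:
        if centre in covered:
            continue
        boxes += 1
        # breadth-first search out to `radius` hops from the new centre
        dist = {centre: 0}
        queue = deque([centre])
        while queue:
            u = queue.popleft()
            covered.add(u)
            if dist[u] < radius:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
    return boxes
```

A full dimension estimate would repeat this for several radii and fit the slope of log(boxes) against log(radius).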
By using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
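The HVG construction mentioned above has a simple definition: two time points are connected iff every value strictly between them lies below both. A sketch (the loop structure with early termination is our implementation choice):

```python
def horizontal_visibility_graph(series):
    """Horizontal visibility graph: points i < j are linked iff every
    intermediate value is strictly below min(series[i], series[j])."""
    n = len(series)
    edges = set()
    for i in range(n - 1):
        bar = -float("inf")  # running max of intermediate values
        for j in range(i + 1, n):
            if bar < min(series[i], series[j]):
                edges.add((i, j))
            bar = max(bar, series[j])
            if series[j] >= series[i]:
                break  # later points can no longer see i
    return edges
```

Adjacent points are always mutually visible, so any series of length n yields at least n-1 edges; the degree distribution of the resulting graph is what the resilience analysis below examines.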
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph (HVG). Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
Abstract:
Since the availability of 3D full-body scanners and the associated software systems for operations with large point clouds, 3D anthropometry has been marketed as a breakthrough and milestone in ergonomic design. The assumptions made by the representatives of the 3D paradigm need to be critically reviewed, though. 3D anthropometry has advantages as well as shortfalls, which need to be carefully considered. While it is apparent that the measurement of a full-body point cloud allows for easier storage of raw data and improves quality control, the difficulties in calculating standardized measurements from the point cloud are widely underestimated. Early studies that used 3D point clouds to derive anthropometric dimensions showed unacceptable deviations from the standardized results measured manually. While 3D human point clouds provide a valuable tool to replicate specific single persons for further virtual studies, or to personalize garments, their use in ergonomic design must be critically assessed. Ergonomic, volumetric problems are defined by their two-dimensional boundaries or one-dimensional sections. A 1D/2D approach is therefore sufficient to solve an ergonomic design problem. As a consequence, all modern 3D human manikins are defined by the underlying anthropometric girths (2D) and lengths/widths (1D), which can be measured efficiently using manual techniques. Traditionally, ergonomists have taken a statistical approach to design for generalized percentiles of the population rather than for a single user. The underlying method is based on the distribution function of meaningful one- and two-dimensional anthropometric variables. Compared to these variables, the distribution of human volume has no ergonomic relevance. On the other hand, if volume is to be seen as a two-dimensional integral or distribution function of length and girth, the calculation of combined percentiles – a common ergonomic requirement – is undefined.
Consequently, we suggest critically reviewing the cost and use of 3D anthropometry. We also recommend making proper use of the widely available one- and two-dimensional anthropometric data in ergonomic design.
Abstract:
In the field of process mining, the use of event logs for the purpose of root cause analysis is increasingly studied. In such an analysis, the availability of attributes/features that may explain the root cause of some phenomenon is crucial. Currently, the process of obtaining these attributes from raw event logs is performed more or less on a case-by-case basis: there is still a lack of a generalized, systematic approach that captures this process. This paper proposes a systematic approach to enrich and transform event logs in order to obtain the attributes required for root cause analysis using classical data mining techniques, in particular classification techniques. The approach is formalized and its applicability has been validated using both self-generated and publicly available logs.
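As an illustration of the kind of enrichment step described above (not the paper's formalized approach), one can aggregate case-level attributes from a raw event log before handing them to a classifier. The (case, activity, timestamp) event shape and the chosen attributes are our assumptions:

```python
def enrich_log(event_log):
    """Sketch of event-log enrichment for root cause analysis: derive
    per-case attributes (throughput time, activity counts) from raw
    (case_id, activity, timestamp) events.  The attribute choice is
    illustrative; a classifier would consume the resulting table."""
    cases = {}
    for case, activity, ts in event_log:
        c = cases.setdefault(case, {"start": ts, "end": ts, "counts": {}})
        c["start"] = min(c["start"], ts)
        c["end"] = max(c["end"], ts)
        c["counts"][activity] = c["counts"].get(activity, 0) + 1
    return {
        case: {
            "throughput": c["end"] - c["start"],
            **{f"n_{a}": n for a, n in c["counts"].items()},
        }
        for case, c in cases.items()
    }
```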
Abstract:
This paper investigates the relationship between traffic conditions and crash occurrence likelihood (COL) using the I-880 data. To remedy the data limitations and the methodological shortcomings of previous studies, a multiresolution data processing method is proposed and implemented, upon which binary logistic models were developed. The major findings of this paper are: 1) traffic conditions have significant impacts on COL at the study site; specifically, COL in a congested (transitioning) traffic flow is about 6 (1.6) times that in a free-flow condition; 2) speed variance alone is not sufficient to capture the impact of traffic dynamics on COL; a traffic chaos indicator that integrates speed, speed variance, and flow is proposed and shows promising performance; 3) models based on aggregated data should be interpreted with caution. Generally, conclusions obtained from such models should not be generalized to individual vehicles (drivers) without further evidence from high-resolution data, and it is dubious to either claim or disclaim that ‘speed kills’ on the basis of aggregated data.
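The abstract does not give the formula of the proposed traffic chaos indicator, so the following is only a hypothetical stand-in showing how speed, speed variance and flow might be combined into one score:

```python
from statistics import mean, pstdev

def chaos_indicator(speeds, flow, free_flow_speed=65.0):
    """Hypothetical stand-in for the paper's traffic chaos indicator
    (the actual formula is not stated in the abstract): combine a
    congestion proxy, relative speed variability, and flow."""
    v = mean(speeds)
    cv = pstdev(speeds) / v if v else 0.0           # speed variability
    deficit = max(0.0, 1.0 - v / free_flow_speed)   # congestion proxy
    return deficit * cv * flow
```

By construction the score is zero in steady free flow and grows when traffic is both slow and highly variable at high flow.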
Abstract:
Traditional crash prediction models, such as generalized linear regression models, are incapable of taking into account the multilevel data structure that extensively exists in crash data. Disregarding the possible within-group correlations can lead to models giving unreliable and biased estimates of unknowns. This study proposes a multilevel hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) × Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specify the multilevel structure and correctly yield parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using the Singapore crash records. This study demonstrates the importance of accounting for within-group correlations and the flexibility and effectiveness of the Bayesian hierarchical method in modeling the multilevel structure of traffic crash data.
Abstract:
Objective: Preclinical and clinical data suggest that lipid biology is integral to brain development and neurodegeneration. Both aspects are proposed as being important in the pathogenesis of schizophrenia. The purpose of this paper is to examine the implications of lipid biology, in particular the role of essential fatty acids (EFA), for schizophrenia. Methods: Medline databases were searched from 1966 to 2001, followed by cross-checking of references. Results: Most studies investigating lipids in schizophrenia described reduced EFA, altered glycerophospholipids and an increased activity of a calcium-independent phospholipase A2 in blood cells and in post-mortem brain tissue. Additionally, in vivo brain phosphorus-31 magnetic resonance spectroscopy (31P-MRS) demonstrated lower phosphomonoesters (implying reduced membrane precursors) in first- and multi-episode patients. In contrast, phosphodiesters were elevated mainly in first-episode patients (implying increased membrane breakdown products), whereas inconclusive results were found in chronic patients. EFA supplementation trials in chronic patient populations with residual symptoms have demonstrated conflicting results. More consistent results were observed in the early and symptomatic stages of illness, especially if EFA with a high proportion of eicosapentaenoic acid were used. Conclusion: Peripheral blood cell, brain necropsy and 31P-MRS analyses reveal a disturbed lipid biology, suggesting generalized membrane alterations in schizophrenia. 31P-MRS data suggest increased membrane turnover at illness onset and persisting membrane abnormalities in established schizophrenia. Cellular processes regulating membrane lipid metabolism are potential new targets for antipsychotic drugs and might explain the mechanism of action of treatments such as eicosapentaenoic acid.
Abstract:
Generalized fractional partial differential equations have now found wide application in describing important physical phenomena, such as subdiffusive and superdiffusive processes. However, studies of generalized multi-term time and space fractional partial differential equations are still under development. In this paper, the multi-term time-space Caputo-Riesz fractional advection-diffusion equations (MT-TSCR-FADE) with nonhomogeneous Dirichlet boundary conditions are considered. The multi-term time-fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0, 1], [1, 2] and [0, 2], respectively; these cases are called the multi-term time-fractional diffusion terms, the multi-term time-fractional wave terms and the multi-term time-fractional mixed diffusion-wave terms. The space fractional derivatives are defined as Riesz fractional derivatives. Analytical solutions of the three types of the MT-TSCR-FADE are derived with Dirichlet boundary conditions. Using Luchko's theorem (Acta Math. Vietnam., 1999), we propose some new techniques, such as a spectral representation of the fractional Laplacian operator and the equivalence between the fractional Laplacian operator and the Riesz fractional derivative, which enable the derivation of the analytical solutions for the multi-term time-space Caputo-Riesz fractional advection-diffusion equations.
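In our notation (the abstract does not reproduce the equation), a multi-term time-space Caputo-Riesz fractional advection-diffusion equation of the kind described can be written as:

```latex
\sum_{i=1}^{m} a_i \, {}^{C}_{0}D_{t}^{\alpha_i} u(x,t)
  = K_\beta \, \frac{\partial^{\beta} u(x,t)}{\partial |x|^{\beta}}
  + K_\gamma \, \frac{\partial^{\gamma} u(x,t)}{\partial |x|^{\gamma}},
\qquad 0 < x < L,\; t > 0,
```

where the ${}^{C}_{0}D_{t}^{\alpha_i}$ are Caputo time derivatives whose orders $\alpha_i$ lie in $[0,1]$ (diffusion terms), $[1,2]$ (wave terms) or $[0,2]$ (mixed diffusion-wave terms), $\partial^{\beta}/\partial|x|^{\beta}$ and $\partial^{\gamma}/\partial|x|^{\gamma}$ are Riesz space-fractional derivatives, and $u(0,t)$, $u(L,t)$ carry the nonhomogeneous Dirichlet data. The coefficients $a_i$, $K_\beta$, $K_\gamma$ and the particular orders are problem-specific; the symbols here are our own, not necessarily those of the paper.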
Abstract:
This paper formulates an edge-based smoothed conforming point interpolation method (ES-CPIM) for solid mechanics using triangular background cells. In the ES-CPIM, a technique for obtaining conforming PIM shape functions (CPIM) is used to create a continuous and piecewise-quadratic displacement field over the whole problem domain. The smoothed strain field is then obtained through a smoothing operation over each smoothing domain associated with the edges of the triangular background cells. The generalized smoothed Galerkin weak form is then used to create the discretized system equations. Numerical studies have demonstrated that the ES-CPIM possesses the following good properties: (1) ES-CPIM creates conforming quadratic PIM shape functions and can always pass the standard patch test; (2) ES-CPIM produces a quadratic displacement field without introducing any additional degrees of freedom; (3) the results of ES-CPIM are generally of very high accuracy.
Abstract:
This paper presents two novel concepts to enhance the accuracy of damage detection using the Modal Strain Energy based Damage Index (MSEDI) in the presence of noise in the mode shape data. Firstly, the paper presents a sequential curve-fitting technique that reduces the effect of noise on the calculation of the MSEDI more effectively than the two commonly used curve-fitting techniques, namely polynomial and Fourier series fitting. Secondly, a probability-based Generalized Damage Localization Index (GDLI) is proposed as a viable improvement to the damage detection process. The study uses a validated ABAQUS finite-element model of a reinforced concrete beam to obtain mode shape data in the undamaged and damaged states. Noise is simulated by adding three levels of random noise (1%, 3%, and 5%) to the mode shape data. Results show that damage detection is enhanced as the number of modes and samples used with the GDLI increases.
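A standard normalization step used with modal-strain-energy damage indices (the paper's probability-based GDLI is a refinement of this idea, not reproduced here) converts per-element indices into z-scores so that outlying elements can be flagged as damage candidates:

```python
from statistics import mean, pstdev

def damage_index_z(beta):
    """Normalize per-element damage indices beta_j (damaged/undamaged
    modal strain energy ratios) into z-scores; elements with large
    positive z are damage candidates.  This is the classic
    normalization step, not the paper's GDLI itself."""
    m, s = mean(beta), pstdev(beta)
    return [(b - m) / s if s else 0.0 for b in beta]
```

A typical rule then flags elements whose z-score exceeds a chosen threshold (often around 2).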
Abstract:
Road traffic accidents can be reduced by providing early warning to drivers through wireless ad hoc networks. When a vehicle detects an event that may lead to an imminent accident, the vehicle disseminates emergency messages to alert other vehicles that may be endangered by the accident. In many existing broadcast-based dissemination schemes, emergency messages may be sent to a large number of vehicles in the area and can be propagated in only one direction. This paper presents a more efficient context-aware multicast protocol that disseminates messages only to the endangered vehicles that may be affected by the emergency event. The endangered vehicles can be identified by calculating the interaction among vehicles based on their motion properties. To ensure fast delivery, the dissemination follows a routing path obtained by computing a minimum delay tree. The multicast protocol uses a generalized approach that can support any arbitrary road topology. The performance of the multicast protocol is compared with existing broadcast protocols by simulating chain-collision accidents on a typical highway. Simulation results show that the multicast protocol outperforms the other protocols in terms of reliability, efficiency, and latency.
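The minimum delay tree mentioned above can be obtained with Dijkstra's algorithm over per-link delays; the parent pointers of the shortest-delay paths then define the multicast forwarding tree. A sketch (the adjacency-dict input format is our assumption, not the paper's data model):

```python
import heapq

def min_delay_tree(delays, source):
    """Dijkstra over per-link delays from the alerting vehicle.
    `delays` maps node -> {neighbour: link_delay}.  Returns parent
    pointers (the dissemination tree) and total delay to each node."""
    dist = {source: 0.0}
    parent = {source: None}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in delays[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return parent, dist
```

Restricting `delays` to the endangered vehicles identified by the interaction computation yields a tree that reaches only the vehicles that need the warning.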