Abstract:
Purpose Contrast adaptation has been speculated to be an error signal for emmetropization. Myopic children exhibit higher contrast adaptation than emmetropic children. This study aimed to determine whether contrast adaptation varies with the type of text viewed by emmetropic and myopic young adults. Methods Baseline contrast sensitivity was determined in 25 emmetropic and 25 spectacle-corrected myopic young adults for 0.5, 1.2, 2.7, 4.4, and 6.2 cycles per degree (cpd) horizontal sine wave gratings. The adults spent periods looking at a 6.2 cpd high-contrast horizontal grating and reading lines of English and Chinese text (these texts comprised 1.2 cpd row and 6 cpd stroke frequencies). The effects of these near tasks on contrast sensitivity were determined, with decreases in sensitivity indicating contrast adaptation. Results Contrast adaptation was affected by the near task (F2,672 = 43.0; P < 0.001). Adaptation was greater for the grating task (0.13 ± 0.17 log unit, averaged across all frequencies) than reading tasks, but there was no significant difference between the two reading tasks (English 0.05 ± 0.13 log unit versus Chinese 0.04 ± 0.13 log unit). The myopic group showed significantly greater adaptation (by 0.04, 0.04, and 0.05 log units for English, Chinese, and grating tasks, respectively) than the emmetropic group (F1,48 = 5.0; P = 0.03). Conclusions In young adults, reading Chinese text induced similar contrast adaptation as reading English text. Myopes exhibited greater contrast adaptation than emmetropes. Contrast adaptation, independent of text type, might be associated with myopia development.
Abstract:
An important aspect of decision support systems is applying sophisticated and flexible statistical models to real datasets and communicating the results to decision makers in interpretable ways. An important class of problem is the modelling of incidence, such as fire or disease. Models of incidence known as point processes or Cox processes are particularly challenging because they are 'doubly stochastic': obtaining the probability mass function of incidents requires two integrals to be evaluated. Existing approaches either use simple models that obtain predictions from plug-in point estimates and do not distinguish between Cox processes and density estimation, but do use sophisticated 3D visualization for interpretation; or they employ sophisticated non-parametric Bayesian Cox process models but do not use visualization to render complex spatio-temporal forecasts interpretable. The contribution here is to fill this gap by inferring predictive distributions of log-Gaussian Cox processes and rendering them using state-of-the-art 3D visualization techniques. This requires performing inference on an approximation of the model on a large-scale discretized grid and adapting an existing spatial-diurnal kernel to the log-Gaussian Cox process context.
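The 'doubly stochastic' structure described above can be made concrete with a small simulation on a discretized grid: a latent Gaussian field is exponentiated to give the intensity, and cell counts are Poisson given that intensity. This is a minimal illustrative sketch, not the paper's inference code; the 1-D grid, squared-exponential covariance, and function name are assumptions.

```python
import numpy as np

def simulate_lgcp_grid(n_cells=50, length_scale=5.0, variance=1.0, mean=0.0, seed=0):
    """Simulate a log-Gaussian Cox process on a 1-D discretized grid.

    The latent field is a Gaussian process with squared-exponential
    covariance; exponentiating it gives the (doubly stochastic)
    intensity, and counts per cell are Poisson given that intensity.
    """
    rng = np.random.default_rng(seed)
    x = np.arange(n_cells, dtype=float)
    # Squared-exponential covariance between grid cells.
    cov = variance * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / length_scale ** 2)
    cov += 1e-8 * np.eye(n_cells)            # jitter for numerical stability
    gp = rng.multivariate_normal(mean * np.ones(n_cells), cov)
    intensity = np.exp(gp)                   # log-Gaussian intensity (always positive)
    counts = rng.poisson(intensity)          # Poisson counts given the intensity
    return intensity, counts
```

Inference in the paper works with a predictive distribution over this intensity rather than a single draw; the simulation only illustrates the generative model.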
Abstract:
Process compliance measurement is receiving increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. In order to judge the compliance of business processing, the degree of behavioural deviation of a case, i.e., an observed execution sequence, is quantified with respect to a process model (referred to as fitness, or recall). Recently, different compliance measures have been proposed. Still, nearly all of them are grounded in state-based techniques, and the trace equivalence criterion in particular. As a consequence, these approaches have to deal with the state explosion problem. In this paper, we argue that a behavioural abstraction may be leveraged to measure the compliance of a process log – a collection of cases. To this end, we utilise causal behavioural profiles, which capture the behavioural characteristics of process models and cases and can be computed efficiently. We propose different compliance measures based on these profiles, discuss the impact of noise in process logs on our measures, and show how diagnostic information on non-compliance is derived. As a validation, we report on the findings of applying our approach in a case study with an international service provider.
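As an illustration of behavioural (rather than state-based) compliance measurement, a toy measure can compare the direct-follows pairs of a case against those allowed by a model. This is a simplified sketch in the spirit of the abstract, not the authors' causal behavioural profiles; all function names are assumptions.

```python
def direct_follows(trace):
    """Set of direct-follows pairs occurring in one case (execution sequence)."""
    return {(trace[i], trace[i + 1]) for i in range(len(trace) - 1)}

def trace_compliance(trace, allowed):
    """Fraction of a case's direct-follows pairs permitted by the model."""
    pairs = direct_follows(trace)
    if not pairs:
        return 1.0  # a trivial case cannot deviate
    return len(pairs & allowed) / len(pairs)

def log_compliance(log, allowed):
    """Average compliance over all cases in a process log."""
    return sum(trace_compliance(t, allowed) for t in log) / len(log)
```

Note that such a behavioural abstraction never enumerates process states, which is what lets profile-based measures sidestep the state explosion problem.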
Abstract:
Accurate prediction of incident duration is not only important information for a Traffic Incident Management System but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data were obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are derived for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fit for crash, stationary vehicle, and hazard incidents, respectively. The significant influencing factors are identified for crash clearance time and arrival time, and the quantitative influences for crash and hazard incidents are presented for both clearance and arrival. Finally, the accuracy of the models is analyzed.
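The distribution-selection step described above can be sketched with SciPy by fitting the candidate duration distributions and ranking them by log-likelihood (SciPy's `fisk` is its name for the log-logistic). This is an illustrative sketch, not the paper's estimation procedure; the ranking criterion and function name are assumptions.

```python
import numpy as np
from scipy import stats

def best_fit_distribution(durations):
    """Fit candidate incident-duration distributions by maximum likelihood
    and return the name of the best fit plus all results.

    `fisk` is SciPy's name for the log-logistic distribution.
    """
    candidates = {
        "gamma": stats.gamma,
        "log-logistic": stats.fisk,
        "weibull": stats.weibull_min,
    }
    results = {}
    for name, dist in candidates.items():
        params = dist.fit(durations, floc=0)          # durations are positive
        loglik = np.sum(dist.logpdf(durations, *params))
        results[name] = (loglik, params)
    best = max(results, key=lambda name: results[name][0])
    return best, results
```

In practice one would compare fits with AIC/BIC or a goodness-of-fit test rather than raw log-likelihood alone, and then build the hazard-based regression on the selected family.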
Abstract:
In the commercial food industry, demonstration of microbiological safety and thermal process equivalence often involves a mathematical framework that assumes log-linear inactivation kinetics and invokes concepts of decimal reduction time (DT), z values, and accumulated lethality. However, many microbes, particularly spores, exhibit inactivation kinetics that are not log linear. This has led to alternative modeling approaches, such as the biphasic and Weibull models, that relax strong log-linear assumptions. Using a statistical framework, we developed a novel log-quadratic model, which approximates the biphasic and Weibull models and provides additional physiological interpretability. As a statistical linear model, the log-quadratic model is relatively simple to fit and straightforwardly provides confidence intervals for its fitted values. It allows a DT-like value to be derived, even from data that exhibit obvious "tailing." We also showed how existing models of non-log-linear microbial inactivation, such as the Weibull model, can fit into a statistical linear model framework that dramatically simplifies their solution. We applied the log-quadratic model to thermal inactivation data for the spore-forming bacterium Clostridium botulinum and evaluated its merits compared with those of popular previously described approaches. The log-quadratic model was used as the basis of a secondary model that can capture the dependence of microbial inactivation kinetics on temperature. This model, in turn, was linked to models of spore inactivation of Sapru et al. and Rodriguez et al. that posit different physiological states for spores within a population. We believe that the log-quadratic model provides a useful framework in which to test vitalistic and mechanistic hypotheses of inactivation by thermal and other processes. Copyright © 2009, American Society for Microbiology. All Rights Reserved.
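Because the log-quadratic model is linear in its coefficients, fitting it reduces to ordinary least squares. The sketch below, assuming log10 survivor counts over time, fits log10 N(t) = a + b t + c t^2 and derives a DT-like value from the local slope; it illustrates the idea rather than reproducing the paper's statistical framework, and the function names are assumptions.

```python
import numpy as np

def fit_log_quadratic(t, log10_counts):
    """Fit log10 N(t) = a + b*t + c*t**2 by least squares.

    The model is linear in (a, b, c), so standard linear-model machinery
    (and hence confidence intervals) applies directly.
    """
    c, b, a = np.polyfit(t, log10_counts, deg=2)  # polyfit returns highest degree first
    return a, b, c

def dt_like_value(b, c, t0=0.0):
    """DT-like value: time for one further log10 reduction, taken from
    the local slope b + 2*c*t of the fitted curve at time t0.

    Unlike a classical D value, this varies with t0, which is how the
    model accommodates 'tailing' in the survival curve.
    """
    slope = b + 2.0 * c * t0
    return -1.0 / slope
```

For synthetic data with a = 8, b = -0.5, c = 0.01, the fit recovers the coefficients exactly and the DT-like value at t = 0 is 1/0.5 = 2.0 time units.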
Abstract:
Study region: The Galilee and Eromanga basins are located in central Queensland, Australia. Both basins are components of the Great Artesian Basin, which hosts some of the most significant groundwater resources in Australia. Study focus: This study evaluates the influence of regional faults on groundwater flow in an interbedded aquifer/aquitard succession that forms one of the largest artesian basins in the world. In order to assess the significance of regional faults as potential barriers or conduits to groundwater flow, the vertical displacements of the major aquifers and aquitards were studied at each major fault, and the general hydraulic relationships of the units juxtaposed by the faults were considered. A three-dimensional (3D) geological model of the Galilee and Eromanga basins was developed by integrating well log data, seismic surfaces, surface geology and elevation data. Geological structures were mapped in detail and major faults were characterised. New hydrological insights for the region: Major faults described in previous studies have been confirmed within the 3D geological model domain, and a preliminary assessment of their hydraulic significance has been conducted. Previously unknown faults, such as the Thomson River Fault (herein named), have also been identified in this study.
Abstract:
This paper presents a technique for the automated removal of noise from process execution logs. Noise is the result of data quality issues such as logging errors and manifests itself in the form of infrequent process behavior. The proposed technique generates an abstract representation of an event log as an automaton capturing the direct-follows relations between event labels. This automaton is then pruned of arcs with low relative frequency and used to remove from the log those events that do not fit the automaton, which are identified as outliers. The technique has been extensively evaluated on top of various automated process discovery algorithms using both artificial logs with different levels of noise and a variety of real-life logs. The results show that the technique significantly improves the quality of the discovered process model in terms of fitness, appropriateness and simplicity, without negative effects on generalization. Further, the technique scales well to large and complex logs.
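The filtering idea can be sketched in three steps: build a direct-follows automaton with arc frequencies, prune arcs that are infrequent relative to the strongest arc leaving the same state, and drop events whose incoming arc was pruned. This is a simplified toy version of the technique; the relative-frequency rule and function names are assumptions, not the paper's exact algorithm.

```python
from collections import Counter

def build_dfg(log):
    """Direct-follows counts over all traces, with artificial start/end labels."""
    dfg = Counter()
    for trace in log:
        seq = ["<start>"] + list(trace) + ["<end>"]
        for a, b in zip(seq, seq[1:]):
            dfg[(a, b)] += 1
    return dfg

def prune_dfg(dfg, threshold=0.3):
    """Keep arcs whose frequency, relative to the strongest outgoing arc
    of the same source label, is at least `threshold`."""
    max_out = {}
    for (a, _), n in dfg.items():
        max_out[a] = max(max_out.get(a, 0), n)
    return {(a, b) for (a, b), n in dfg.items() if n / max_out[a] >= threshold}

def filter_log(log, kept_arcs):
    """Drop events whose incoming direct-follows arc was pruned (outliers)."""
    filtered = []
    for trace in log:
        out, prev = [], "<start>"
        for event in trace:
            if (prev, event) in kept_arcs:
                out.append(event)
                prev = event
        filtered.append(out)
    return filtered
```

On a log of nine traces a→b→c plus one noisy trace a→x→c, the arc (a, x) is pruned and the outlier events are removed, leaving the frequent behavior intact.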
Abstract:
This paper addresses the problem of identifying and explaining behavioral differences between two business process event logs. The paper presents a method that, given two event logs, returns a set of statements in natural language capturing behavior that is present or frequent in one log, while absent or infrequent in the other. This log delta analysis method allows users to diagnose differences between normal and deviant executions of a process or between two versions or variants of a process. The method relies on a novel approach to losslessly encode an event log as an event structure, combined with a frequency-enhanced technique for differencing pairs of event structures. A validation of the proposed method shows that it accurately diagnoses typical change patterns and can explain differences between normal and deviant cases in a real-life log, more compactly and precisely than previously proposed methods.
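A drastically simplified form of log delta analysis can be sketched using direct-follows frequencies rather than the paper's lossless event structures: compare how often each behavioral relation holds in each log and emit a natural-language statement when the gap is large. The threshold and function names below are assumptions for illustration.

```python
from collections import Counter

def relation_frequencies(log):
    """Fraction of traces in which each direct-follows pair occurs."""
    counts = Counter()
    for trace in log:
        for pair in {(trace[i], trace[i + 1]) for i in range(len(trace) - 1)}:
            counts[pair] += 1
    return {pair: n / len(log) for pair, n in counts.items()}

def delta_statements(log_a, log_b, gap=0.5):
    """Statements about relations frequent in one log but rare in the other."""
    fa, fb = relation_frequencies(log_a), relation_frequencies(log_b)
    statements = []
    for pair in sorted(set(fa) | set(fb)):
        da, db = fa.get(pair, 0.0), fb.get(pair, 0.0)
        if da - db >= gap:
            statements.append(f"'{pair[0]}' is directly followed by '{pair[1]}' "
                              f"in {da:.0%} of cases in log A but only {db:.0%} in log B")
        elif db - da >= gap:
            statements.append(f"'{pair[0]}' is directly followed by '{pair[1]}' "
                              f"in {db:.0%} of cases in log B but only {da:.0%} in log A")
    return statements
```

The event-structure encoding in the paper captures concurrency and causality losslessly, which this direct-follows toy cannot; it only conveys the flavor of frequency-enhanced differencing.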
Abstract:
Information from the full diffusion tensor (DT) was used to compute voxel-wise genetic contributions to brain fiber microstructure. First, we designed a new multivariate intraclass correlation formula in the log-Euclidean framework. We then used the full multivariate structure of the tensor in a multivariate version of a voxel-wise maximum-likelihood structural equation model (SEM) that computes the variance contributions in the DTs from genetic (A), common environmental (C) and unique environmental (E) factors. Our algorithm was tested on DT images from 25 identical and 25 fraternal twin pairs. After linear and fluid registration to a mean template, we computed the intraclass correlation and Falconer's heritability statistic for several scalar DT-derived measures and for the full multivariate tensors. Covariance matrices were computed from the DTs and input into the SEM. Analyzing the full DT enhanced the detection of A and C effects. This approach should empower imaging genetics studies that use DTI.
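The log-Euclidean framework mentioned above treats symmetric positive-definite tensors by mapping them to their matrix logarithms, doing ordinary (Euclidean) statistics in that space, and mapping back. A minimal NumPy sketch of the mean in this framework (function names assumed; the paper's intraclass correlation formula builds on the same log-domain operations):

```python
import numpy as np

def spd_log(m):
    """Matrix logarithm of a symmetric positive-definite tensor,
    computed via its eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.log(w)) @ v.T

def spd_exp(m):
    """Matrix exponential of a symmetric matrix (inverse of spd_log)."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.exp(w)) @ v.T

def log_euclidean_mean(tensors):
    """Log-Euclidean mean: average the matrix logs, then exponentiate.
    Unlike the Euclidean mean, this stays within the SPD cone and does
    not suffer from determinant 'swelling'."""
    return spd_exp(np.mean([spd_log(t) for t in tensors], axis=0))
```

For example, the log-Euclidean mean of diag(1, 2, 4) and diag(4, 2, 1) is 2·I, the geometric mean per eigenvalue, rather than the arithmetic mean diag(2.5, 2, 2.5).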
Abstract:
For Adorno writing in 1953, Hollywood cinema was a medium of “regression” based on infantile wish fulfillment manufactured by the industrial repetition of the filmic image that he called a modern “hieroglyphics”—like the archaic language of pictures in Ancient Egypt, which guaranteed immortality after death in Egyptian burial rites. From that 1953 essay Prolog zum Fernsehen to Das Schema der Massenkultur in 1981, Adorno likened film frames to cultural ideograms: what he called the filmic “language of images” (Bildersprache) constituted a Hieroglyphenschrift that visualised forbidden sexual impulses and ideations of death and domination in the unconscious of the mass spectator. In a famous passage he writes, “As image, the image-writing (Bilderschrift) is a medium of regression, where the producer and consumer coincide; as writing, film resurrects the archaic images of modernity.” In other words, cinema takes the spectator on a journey into his unconscious in order to control him from within. It works because the spectator begins to believe the film is speaking to him in his very own image-language (the unconscious), making him do and buy whatever capitalism demands. Modernity for Adorno is precisely the instrumentalisation of the collective unconscious through the mediatic images of the culture industry.
Abstract:
Let $G = (V, E)$ be a finite, simple and undirected graph. For $S \subseteq V$, let $\delta(S, G) = \{(u, v) \in E : u \in S,\ v \in V \setminus S\}$ be the edge boundary of $S$. Given an integer $i$, $1 \le i \le |V|$, let the edge isoperimetric value of $G$ at $i$ be defined as $b_e(i, G) = \min_{S \subseteq V,\, |S| = i} |\delta(S, G)|$. The edge isoperimetric peak of $G$ is defined as $b_e(G) = \max_{1 \le j \le |V|} b_e(j, G)$. Let $b_v(G)$ denote the vertex isoperimetric peak, defined in a corresponding way. The problem of determining a lower bound for the vertex isoperimetric peak in complete $t$-ary trees was recently considered in [Y. Otachi, K. Yamazaki, A lower bound for the vertex boundary-width of complete $k$-ary trees, Discrete Mathematics, in press (doi: 10.1016/j.disc.2007.05.014)]. In this paper we provide bounds which improve those in the above cited paper. Our results can be generalized to arbitrary (rooted) trees. The depth $d$ of a tree is the number of nodes on the longest path starting from the root and ending at a leaf. In this paper we show that for a complete binary tree of depth $d$ (denoted $T_d^2$), $c_1 d \le b_e(T_d^2) \le d$ and $c_2 d \le b_v(T_d^2) \le d$, where $c_1, c_2$ are constants. For a complete $t$-ary tree of depth $d$ (denoted $T_d^t$) with $d \ge c \log t$, where $c$ is a constant, we show that $c_1 \sqrt{t}\, d \le b_e(T_d^t) \le t d$ and $c_2 d / \sqrt{t} \le b_v(T_d^t) \le d$, where $c_1, c_2$ are constants. At the heart of our proof is the following theorem, which works for an arbitrary rooted tree and not just a complete $t$-ary tree. Let $T = (V, E, r)$ be a finite, connected and rooted tree, the root being the vertex $r$. Define a weight function $w : V \to \mathbb{N}$, where the weight $w(u)$ of a vertex $u$ is the number of its successors (including itself), and let the weight index $\eta(T)$ be defined as the number of distinct weights in the tree, i.e. $\eta(T) = |\{w(u) : u \in V\}|$. For a positive integer $k$, let $\ell(k) = |\{i \in \mathbb{N} : 1 \le i \le |V|,\ b_e(i, T) \le k\}|$. We show that $\ell(k) \le 2\binom{2\eta + k}{k}$.
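For small trees the quantities defined above can be checked by brute force. The sketch below enumerates all vertex subsets of a complete binary tree; it is exponential-time and intended only to build intuition for the definitions, not part of the paper's proof. Function names and the heap-style node numbering are assumptions.

```python
from itertools import combinations

def complete_binary_tree_edges(depth):
    """Complete binary tree with `depth` levels of nodes, numbered
    1..2**depth - 1 heap-style (children of v are 2v and 2v + 1)."""
    n = 2 ** depth - 1
    edges = [(v, 2 * v) for v in range(1, n + 1) if 2 * v <= n]
    edges += [(v, 2 * v + 1) for v in range(1, n + 1) if 2 * v + 1 <= n]
    return n, edges

def edge_isoperimetric_value(n, edges, i):
    """b_e(i, G): minimum edge boundary over all vertex subsets of size i."""
    best = None
    for subset in combinations(range(1, n + 1), i):
        s = set(subset)
        boundary = sum(1 for u, v in edges if (u in s) != (v in s))
        best = boundary if best is None else min(best, boundary)
    return best

def edge_isoperimetric_peak(n, edges):
    """b_e(G): maximum of b_e(i, G) over all subset sizes i."""
    return max(edge_isoperimetric_value(n, edges, i) for i in range(1, n + 1))
```

For the depth-3 complete binary tree (7 nodes) the peak works out to 2, consistent with the upper bound $b_e(T_d^2) \le d$.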
Abstract:
A $k$-cube (or "unit cube in $k$ dimensions") is defined as the Cartesian product $R_1 \times \cdots \times R_k$, where $R_i$ (for $1 \le i \le k$) is an interval of the form $[a_i, a_i + 1]$ on the real line. The $k$-cube representation of a graph $G$ is a mapping of the vertices of $G$ to $k$-cubes such that the $k$-cubes corresponding to two vertices in $G$ have a non-empty intersection if and only if the vertices are adjacent. The cubicity of a graph $G$, denoted $\mathrm{cub}(G)$, is defined as the minimum dimension $k$ such that $G$ has a $k$-cube representation. An interval graph is a graph that can be represented as the intersection of intervals on the real line, i.e., the vertices of an interval graph can be mapped to intervals on the real line such that two vertices are adjacent if and only if their corresponding intervals overlap. We show that for any interval graph $G$ with maximum degree $\Delta$, $\mathrm{cub}(G) \le \lceil \log_2 \Delta \rceil + 4$. This upper bound is shown to be tight up to an additive constant of 4 by demonstrating interval graphs whose cubicity is equal to $\lceil \log_2 \Delta \rceil$.
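The two representations discussed, intervals on the line and $k$-cubes, use the same overlap test, applied once overall for intervals and once per dimension for cubes. A small illustrative sketch (function names and the dictionary input format are assumptions):

```python
def interval_graph_edges(intervals):
    """Edges of the interval graph: two vertices are adjacent iff their
    intervals overlap. `intervals` maps vertex name -> (left, right)."""
    edges = set()
    names = sorted(intervals)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            a, b = intervals[u]
            c, d = intervals[v]
            if a <= d and c <= b:  # standard interval-overlap test
                edges.add((u, v))
    return edges

def cubes_intersect(cube_u, cube_v):
    """k-cubes (lists of k intervals) intersect iff their intervals
    overlap in every dimension."""
    return all(a <= d and c <= b for (a, b), (c, d) in zip(cube_u, cube_v))
```

A $k$-cube representation of a graph is exactly an assignment of cubes under which `cubes_intersect` reproduces the adjacency relation; cubicity is the smallest $k$ for which that is possible.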
Abstract:
A mature Caribbean pine (Pinus caribaea var. hondurensis) silviculture experiment provided initial square spacing treatments of 1.8 m, 2.4 m, 3.0 m and 3.6 m (equal to 3088, 1737, 1111 and 772 stems/ha) that were thinned at age 10 years to 600, 400 and 200 stems/ha, retaining an unthinned control for each initial spacing. The trial was destructively sampled at age 28 years, and discs taken at eight stem heights were analysed for variation in basic density and SilviScan wood properties. In addition, logs from ten stocking × thinning treatments were processed in a sawing study. Results indicate that thinning effects were generally more pronounced than initial spacing effects. Fast-growing trees produced wood with significantly higher average density and stiffness. Detailed SilviScan densitometry results obtained radially and at various stem heights enabled the construction of tree maps of wood properties, providing insights into the variation in the juvenile-to-mature wood proportion across the initial and post-thinning stocking treatments studied. Dried dressed recovery was strongly related to tree size, and log value decreased consistently from butt to top logs across all treatments. The estimated value per hectare was highest in unthinned plots because values were multiplied by high stem numbers per hectare. However, a complete economic analysis considering all cost structures is required to investigate the optimal silviculture for maximising economic returns to growers and processors. An improved understanding of the relationship between initial spacing, post-thinning stocking and wood and end-product quality should help customise future forest management strategies to produce better quality wood and wood products.
Abstract:
BACKGROUND: Endometriosis is a common disease with a heritable component. The collaborative International Endogene Study consists of two data sets (Oxford and Australia) comprising 1176 families with multiple affected members. The aim was to investigate whether the apparent concentration of cases in a proportion of families could be explained by one or more rare variants with (near-)Mendelian autosomal inheritance. METHODS AND RESULTS: Linkage analyses (aimed at finding chromosomal regions harbouring disease-predisposing genes) were conducted in families with three or more affected members (Oxford: n = 52; Australia: n = 196). In the Oxford data set, a non-parametric linkage score (Kong & Cox (K&C) logarithm of odds, LOD) of 3.52 was observed on chromosome 7p (genome-wide significance P = 0.011). A parametric MOD score (equal to the maximum LOD maximized over 357 possible inheritance models) of 3.89 was found at 65.72 cM (D7S510) for a dominant model with reduced penetrance. After including the Australian data set, the non-parametric K&C LOD of the combined data set was 1.46 at 57.3 cM; the parametric analysis found a MOD score of 3.30 at D7S484 (empirical significance: P = 0.035) for a recessive model with high penetrance. Critical recombinant analysis narrowed the probable region of linkage down to overlapping 6.4 Mb and 11 Mb intervals containing 48 and 96 genes, respectively. CONCLUSIONS: This is the first report to suggest that there may be one or more high-penetrance susceptibility loci for endometriosis with (near-)Mendelian inheritance.
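The LOD scores reported above are base-10 logarithms of likelihood ratios. As a hedged sketch of the idea only, the textbook two-point case with fully informative meioses compares the maximum-likelihood recombination fraction against free recombination (theta = 0.5); the study itself used multipoint non-parametric and MOD-score methods, which this toy does not reproduce.

```python
import math

def lod_score(n_meioses, n_recombinants):
    """Two-point LOD score: log10 likelihood ratio of the maximum-likelihood
    recombination fraction theta against free recombination (theta = 0.5),
    assuming fully informative meioses."""
    k, n = n_recombinants, n_meioses

    def loglik(theta):
        # Binomial log10-likelihood of k recombinants in n meioses.
        return k * math.log10(theta) + (n - k) * math.log10(1.0 - theta)

    # MLE of theta, constrained to [0, 0.5] (linkage) and guarded against log(0).
    theta_hat = min(max(k / n, 1e-12), 0.5)
    return loglik(theta_hat) - loglik(0.5)
```

With 10 informative meioses and no recombinants the score is 10·log10(2) ≈ 3.01, which is why a LOD above 3 is the conventional threshold for significant linkage.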