82 results for polygon fault
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can take the form of text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
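As a concrete illustration of the example-based methods listed above, here is a minimal k-nearest-neighbours classifier; the toy feature matrix, labels and choice of k are hypothetical and serve only to show the majority-vote idea.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Minimal k-nearest-neighbours classifier (majority vote).

    X_train : (n_samples, n_features) array of training points
    y_train : (n_samples,) array of class labels
    x_query : (n_features,) point to classify
    """
    # Euclidean distance from the query point to every training point
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # Majority vote among the k nearest labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical toy data: two 2-D clusters labelled 0 and 1
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.95, 1.05])))  # expected: 1
```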
Abstract:
This paper discusses a multi-layer feedforward (MLF) neural network incident detection model that was developed and evaluated using field data. In contrast to published neural network incident detection models which relied on simulated or limited field data for model development and testing, the model described in this paper was trained and tested on a real-world data set of 100 incidents. The model uses speed, flow and occupancy data measured at dual stations, averaged across all lanes and only from time interval t. The off-line performance of the model is reported under both incident and non-incident conditions. The incident detection performance of the model is reported based on a validation-test data set of 40 incidents that were independent of the 60 incidents used for training. The false alarm rates of the model are evaluated based on non-incident data that were collected from a freeway section which was video-taped for a period of 33 days. A comparative evaluation between the neural network model and the incident detection model in operation on Melbourne's freeways is also presented. The results of the comparative performance evaluation clearly demonstrate the substantial improvement in incident detection performance obtained by the neural network model. The paper also presents additional results that demonstrate how improvements in model performance can be achieved using variable decision thresholds. Finally, the model's fault-tolerance under conditions of corrupt or missing data is investigated and the impact of loop detector failure/malfunction on the performance of the trained model is evaluated and discussed. The results presented in this paper provide a comprehensive evaluation of the developed model and confirm that neural network models can provide fast and reliable incident detection on freeways. (C) 1997 Elsevier Science Ltd. All rights reserved.
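To make the kind of model described above concrete, the sketch below implements a forward pass of a single-hidden-layer feedforward classifier over normalized speed, flow and occupancy features, with a variable decision threshold; the layer sizes, weights and input values are illustrative assumptions, not the architecture or data reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlf_forward(x, W1, b1, w2, b2):
    """One forward pass of a single-hidden-layer feedforward network.

    x : feature vector, e.g. normalized speed, flow and occupancy at the
        upstream and downstream stations for the current interval t.
    Returns a value in (0, 1) interpreted as an incident likelihood.
    """
    h = sigmoid(W1 @ x + b1)      # hidden-layer activations
    return sigmoid(w2 @ h + b2)   # scalar output unit

# Hypothetical, untrained weights: 6 inputs, 8 hidden units, 1 output.
W1, b1 = rng.normal(size=(8, 6)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0

# Illustrative normalized measurements (in practice raw detector data
# would be scaled before being fed to the network).
x = np.array([0.64, 0.72, 0.35, 0.41, 0.45, 0.88])
score = mlf_forward(x, W1, b1, w2, b2)

# A variable decision threshold trades detection rate against false alarm rate.
threshold = 0.8
print("incident alarm" if score > threshold else "no alarm", score)
```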
Abstract:
OBJECTIVE To describe heterogeneity of HIV prevalence among pregnant women in Hlabisa health district, South Africa, and to correlate this with proximity of homestead to roads. METHODS HIV prevalence was measured through anonymous surveillance among pregnant women and stratified by local village clinic. Polygons were created around each clinic, assuming women attend the clinic nearest their home. A geographical information system (GIS) calculated the mean distance from homesteads in each clinic catchment to the nearest primary (1°) and to the nearest primary or secondary (2°) road. RESULTS We found marked HIV heterogeneity by clinic catchment (range 19-31%; P < 0.001). A polygon plot demonstrated lower HIV prevalence in catchments remote from 1° roads. Mean distance from homesteads to the nearest 1° or 2° road varied by clinic catchment from 1623 to 7569 m. The mean distance from homesteads to a 1° or 2° road for each clinic catchment was strongly correlated with HIV prevalence (r = 0.66; P = 0.002). CONCLUSIONS The substantial HIV heterogeneity in this district is closely correlated with proximity to a 1° or 2° road. GIS is a powerful tool to demonstrate and begin to analyse this observation. Further research is needed to better understand this relationship at both the ecological and individual levels, and to develop interventions to reduce the spread of HIV infection.
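The ecological correlation step described above amounts to a Pearson correlation between per-catchment mean road distance and clinic-level HIV prevalence; the sketch below shows that calculation on invented catchment summaries (the actual study derived distances from homestead and road layers in a GIS).

```python
import numpy as np

# Hypothetical per-catchment summaries: mean distance (m) from homesteads
# to the nearest primary/secondary road, and antenatal HIV prevalence (%).
mean_dist_m = np.array([1623, 2400, 3100, 4200, 5500, 6300, 7569], dtype=float)
prevalence = np.array([31.0, 29.0, 27.0, 24.0, 22.0, 20.0, 19.0])

# Pearson correlation between the distance measure and prevalence;
# the study reports a strong correlation (r = 0.66, P = 0.002) for its
# own catchment data, not the invented numbers used here.
r = np.corrcoef(mean_dist_m, prevalence)[0, 1]
print(f"r = {r:.2f}")
```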
Abstract:
Examples from the Murray-Darling basin in Australia are used to illustrate different methods of disaggregation of reconnaissance-scale maps. One approach to disaggregation revolves around de-convolution of the soil-landscape paradigm elaborated during a soil survey. The descriptions of soil map units and block diagrams in a soil survey report detail soil-landscape relationships, or soil toposequences, that can be used to disaggregate map units into component landscape elements. Toposequences can be visualised on a computer by combining soil maps with digital elevation data. Expert knowledge or statistics can be used to implement the disaggregation; use of a restructuring element and k-means clustering are illustrated. Another approach to disaggregation uses training areas to develop rules to extrapolate detailed mapping into other, larger areas where detailed mapping is unavailable. A two-level decision tree example is presented: at one level, the decision tree method is used to capture mapping rules from the training area; at the other, it is used to define the domain over which those rules can be extrapolated. (C) 2001 Elsevier Science B.V. All rights reserved.
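Since the abstract mentions k-means clustering as one way to implement the disaggregation, the following is a minimal k-means sketch applied to hypothetical DEM-derived terrain attributes within a single map unit; the attribute values and the number of clusters are assumptions for illustration, not data from the Murray-Darling study.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means: partition the points in X into k clusters by Euclidean distance."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid
        labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical terrain attributes for cells of one reconnaissance map unit:
# columns are elevation (m) and slope (%) taken from a DEM.
X = np.array([[310.0, 1.2], [305.0, 0.8], [402.0, 6.5],
              [398.0, 7.1], [355.0, 3.0], [360.0, 2.7]])
labels, centroids = kmeans(X, k=2)
print(labels)  # candidate landscape elements (e.g. plain vs. rise) within the map unit
```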
Abstract:
This study provided information about how individual workers perceive, describe and interpret episodes of problematic communication. Sixteen full-time workers (5 males, 11 females) were interviewed in depth about specific incidents of problematic communication within their workplace. Their descriptions of the attributed causes of the incidents were coded using a categorisation scheme developed from Coupland, Wiemann, and Giles' (1991) model of sources of problematic communication. Communication problems were most commonly attributed to individual deficiency and group membership, although there were differences depending on the direction of communication. The most negative attributions (to personality flaws, to lack of skills, and to negative stereotypes of the outgroup) were most commonly applied by individuals to their supervisors, whilst attributions applied to co-workers and subordinates tended to be less negative, or even positive in some instances (where individuals attributed the fault to themselves). Overall, results highlighted distinctions between the perceptions of communication problems with supervisors and with subordinates, and are interpreted with reference to social identity theory.
Abstract:
In order to investigate the effect of material anisotropy on convective instability of three-dimensional fluid-saturated faults, an exact analytical solution for the critical Rayleigh number of three-dimensional convective flow has been obtained. Using this critical Rayleigh number, the effects of different permeability ratios and thermal conductivity ratios on convective instability of a vertically oriented three-dimensional fault have been examined in detail. It has been recognized that (1) if the fault material is isotropic in the horizontal direction, the horizontal-to-vertical permeability ratio has a significant effect on the critical Rayleigh number of the three-dimensional fault system, but the horizontal-to-vertical thermal conductivity ratio has little influence on the convective instability of the system, and (2) if the fault material is isotropic in the fault plane, the fault-normal to in-plane thermal conductivity ratio has a considerable effect on the critical Rayleigh number of the three-dimensional fault system, but the effect of the fault-normal to in-plane permeability ratio on the critical Rayleigh number of three-dimensional convective flow is negligible.
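For context, convective instability of a fluid-saturated porous layer is conventionally judged against a critical value of the porous-medium (Rayleigh-Darcy) number; the standard isotropic definition is given below. The paper's contribution is the exact critical value for the anisotropic three-dimensional fault, which is not reproduced here.

```latex
% Standard Rayleigh number for a fluid-saturated porous layer (isotropic case);
% convection sets in when Ra exceeds a critical value that, in the paper,
% depends on the permeability and thermal-conductivity anisotropy ratios.
\[
  Ra \;=\; \frac{\rho_{0}\, g\, \beta\, K\, \Delta T\, H}{\mu\, \kappa_{m}},
\]
% where $\rho_{0}$ is the reference fluid density, $g$ gravity, $\beta$ the fluid
% thermal expansion coefficient, $K$ the permeability, $\Delta T$ the temperature
% difference across a layer of thickness $H$, $\mu$ the fluid dynamic viscosity,
% and $\kappa_{m}$ the effective thermal diffusivity of the saturated medium.
```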
Abstract:
Extension of overthickened continental crust is commonly characterized by an early core complex stage of extension followed by a later stage of crustal-scale rigid block faulting. These two stages are clearly recognized during the extensional destruction of the Alpine orogen in northeast Corsica, where rigid block faulting overprinting core complex formation eventually led to crustal separation and the formation of a new oceanic backarc basin (the Ligurian Sea). Here we investigate the geodynamic evolution of continental extension by using a novel, fully coupled thermomechanical numerical model of the continental crust. We consider that the dynamic evolution is governed by fault weakening, which is generated by the evolution of the natural-state variables (i.e., pressure, deviatoric stress, temperature, and strain rate) and their associated energy fluxes. Our results show the appearance of a detachment layer that controls the initial separation of the brittle crust on characteristic listric faults, and core complex formation that exhumes strongly deformed rocks of the detachment zone together with relatively undeformed crustal cores. This process is followed by a transitional period, characterized by apparent tectonic quiescence, in which deformation is not localized and energy stored in the upper crust is transferred downward, causing self-organized mobilization of the lower crust. Eventually, the entire crust ruptures on major crosscutting faults, shifting the tectonic regime from core complex formation to wholesale rigid block faulting.
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, then what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum-based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models, the rate of seismic energy release is correlated with the total root-mean-squared stress and with the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce an accelerating release of seismic energy with time prior to a large earthquake.
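As an illustration of the Burridge-Knopoff-style block-slider models referred to above, here is a minimal one-dimensional, quasi-static sketch; the friction thresholds, stress-transfer fraction and loading step are simplified assumptions and do not reproduce the models analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100                       # number of blocks along the fault
f_static = 1.0                # static friction threshold
f_residual = 0.2              # stress left on a block after it slips
alpha = 0.4                   # fraction of the stress drop passed to each neighbour
stress = rng.uniform(0.0, f_static, N)   # initial heterogeneous stress

def load_and_relax(stress, d_load=1e-3):
    """Advance the driving plate by one loading step and relax any resulting cascade.

    Returns the total stress released, a proxy for the seismic energy of the event.
    """
    stress += d_load                       # uniform tectonic loading via leaf springs
    released = 0.0
    unstable = np.flatnonzero(stress >= f_static)
    while unstable.size:                   # cascade of block slips counts as one "event"
        for i in unstable:
            drop = stress[i] - f_residual
            released += drop
            stress[i] = f_residual
            # coil springs transfer part of the drop to the nearest neighbours
            if i > 0:
                stress[i - 1] += alpha * drop
            if i < N - 1:
                stress[i + 1] += alpha * drop
        unstable = np.flatnonzero(stress >= f_static)
    return released

# Drive the system and record the energy released at every loading step;
# the cumulative release can then be inspected for acceleration before large events.
energies = np.array([load_and_relax(stress) for _ in range(20000)])
print("largest event:", energies.max(), "rms stress:", np.sqrt((stress ** 2).mean()))
```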