817 results for Traffic clustering
Abstract:
No abstract
Abstract:
Wheel traffic can lead to compaction and degradation of soil physical properties. This study, part of a broader investigation of controlled traffic farming, assessed the impact of compaction from wheel traffic on soil that had not been trafficked for 5 years. A tractor with a 40 kN rear axle weight was used to apply traffic at varying wheelslip on a clay soil with varying residue cover, to simulate the effects of traffic typical of grain production operations in the northern Australian grain belt. A rainfall simulator was used to determine infiltration characteristics. Wheel traffic significantly reduced time to ponding, steady infiltration rate, and total infiltration compared with non-wheeled soil, with or without residue cover. Non-wheeled soil had a 4-5 times greater steady infiltration rate than wheeled soil, irrespective of residue cover. Wheelslip greater than 10% further reduced steady infiltration rate and total infiltration compared with that measured for self-propulsion wheeling (3% wheelslip) under residue-protected conditions. Where there was no compaction from wheel traffic, residue cover had a greater effect on infiltration capacity, with steady infiltration rate increasing proportionally with residue cover (R² = 0.98). Residue cover, however, had much less effect on infiltration when wheeling was imposed. These results demonstrated that the infiltration rate for the non-wheeled soil under a controlled traffic zero-till system was similar to that of virgin soil. However, when the soil was wheeled by a medium tractor wheel, the infiltration rate was reduced to that of long-term cropped soil. These results suggest that wheel traffic, rather than tillage and cropping, might be the major factor governing infiltration.
The exclusion of wheel traffic under a controlled traffic farming system, combined with conservation tillage, provides a way to enhance the sustainability of cropping this soil for improved infiltration, increased plant-available water, and reduced runoff-driven soil erosion.
Abstract:
Traffic and tillage effects on runoff and crop performance on a heavy clay soil were investigated over a period of 4 years. Tillage treatments and the cropping program were representative of broadacre grain production practice in northern Australia, and a split-plot design was used to isolate traffic effects. Treatments subject to zero, minimum, and stubble mulch tillage each comprised pairs of 90-m² plots, from which runoff was recorded. A 3-m-wide controlled traffic system allowed one of each pair to be maintained as a non-wheeled plot, while the total surface area of the other received a single annual wheeling treatment from a working 100-kW tractor. Rainfall/runoff hydrographs demonstrate that wheeling produced a large and consistent increase in runoff, whereas tillage produced a smaller increase. Treatment effects were greater on dry soil, but were still maintained in large and intense rainfall events on wet soil. Mean annual runoff from wheeled plots was 63 mm (44%) greater than that from controlled traffic plots, whereas runoff from stubble mulch tillage plots was 38 mm (24%) greater than that from zero tillage plots. Traffic and tillage effects appeared to be cumulative, so the mean annual runoff from wheeled stubble mulch tilled plots, representing conventional cropping practice, was more than 100 mm greater than that from controlled traffic zero tilled plots, representing best practice. This increased infiltration was reflected in a 16% greater yield compared with wheeled stubble mulch. Minimum tilled plots exhibited characteristics midway between those of zero and stubble mulch tillage. The results confirm that the unnecessary energy dissipation in the soil during the traction process that normally accompanies tillage has a major negative effect on infiltration and crop productivity. Controlled traffic farming systems appear to be the only practicable solution to this problem.
Abstract:
Motivation: This paper introduces the software EMMIX-GENE, developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples: mixtures of t distributions are fitted to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so mixtures of factor analyzers are used to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, either consistent with the external classification of the tissues or with background biological knowledge of these sets.
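The gene-screening step described in this abstract can be sketched briefly. The code below is a minimal illustration, not the EMMIX-GENE software: it substitutes Gaussian components for the t components the authors use, and the function name and threshold value are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def screen_genes(X, lam_threshold=8.0, seed=0):
    """Rank genes by the likelihood-ratio statistic for one versus two
    mixture components fitted over the tissue samples of each gene.
    X has shape (n_tissues, n_genes); Gaussian components stand in for
    the t components used by EMMIX-GENE."""
    n_tissues = X.shape[0]
    stats = []
    for j in range(X.shape[1]):
        x = X[:, [j]]                       # one gene across all tissues
        g1 = GaussianMixture(1, random_state=seed).fit(x)
        g2 = GaussianMixture(2, n_init=3, random_state=seed).fit(x)
        # -2 log likelihood ratio for H0 (one component) vs H1 (two);
        # .score() is the mean log-likelihood per sample.
        stats.append(2 * n_tissues * (g2.score(x) - g1.score(x)))
    stats = np.asarray(stats)
    order = np.argsort(stats)[::-1]         # most "bimodal" genes first
    return order[stats[order] > lam_threshold], stats
```

A gene whose expression splits the tissues into two groups yields a large statistic and survives the threshold; unimodal genes are filtered out before any mixture model is fitted to the tissues themselves.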
Abstract:
In microarray studies, clustering techniques are often applied to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed to perform this task, and they have been applied mainly heuristically to these cluster analysis problems. Further, a major limitation of these methods is their inability to determine the number of clusters. Thus there is a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data as studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be useful in reducing the initial set of genes to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes. These results also shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.
Abstract:
Activation of macrophages with lipopolysaccharide (LPS) induces the rapid synthesis and secretion of proinflammatory cytokines, such as tumor necrosis factor (TNFalpha), for priming the immune response [1, 2]. TNFalpha plays a key role in inflammatory disease [3]; yet, little is known of the intracellular trafficking events leading to its secretion. In order to identify molecules involved in this secretory pathway, we asked whether any of the known trafficking proteins are regulated by LPS. We found that the levels of SNARE proteins were rapidly and significantly up- or downregulated during macrophage activation. A subset of t-SNAREs (Syntaxin 4/SNAP23/Munc18c) known to control regulated exocytosis in other cell types [4, 5] was substantially increased by LPS in a temporal pattern coinciding with peak TNFalpha secretion. Syntaxin 4 formed a complex with Munc18c at the cell surface of macrophages. Functional studies involving the introduction of Syntaxin 4 cDNA or peptides into macrophages implicate this t-SNARE in a rate-limiting step of TNFalpha secretion and in membrane ruffling during macrophage activation. We conclude that in macrophages, SNAREs are regulated in order to accommodate the rapid onset of cytokine secretion and for membrane traffic associated with the phenotypic changes of immune activation. This represents a novel regulatory role for SNAREs in regulated secretion and in macrophage-mediated host defense.
Abstract:
In this study, chemical concentrations inside and outside a Lisbon (Portugal) traffic tunnel were compared over one week. Concentrations were obtained by Instrumental Neutron Activation Analysis (INAA). The tunnel values largely exceed the legislated ambient air values, and the Pearson correlation coefficients point to soil re-suspension/dispersed road dust (As, Ce, Eu, Hf, Fe, Mo, Sc, Zn), traffic markers (Ba, Cr), tire wear (Cr, Zn), brake wear (Fe, Zn, Ba, Cu, Sb), exhaust and motor oil (Zn), and sea spray (Br, Na). On all days these elements were more enriched inside the tunnel than outside; statistically significant differences were found for Co (p=0.005), Br (p=0.008), Zn (p=0.01), and Sb (p=0.005), while the enrichment factors of As and Sc were statistically identical. The highest values were found for As, Br, Zn, and Sb, both inside and outside the tunnel.
Abstract:
OBJECTIVE: To estimate the incidence rate of type 1 diabetes in the urban area of Santiago, Chile, from March 21, 1997 to March 20, 1998, and to assess the spatio-temporal clustering of cases during that period. METHODS: All sixty-one incident cases were located temporally (day of diagnosis) and spatially (place of residence) in the study area. Knox's method was used to assess spatio-temporal clustering of incident cases. RESULTS: The overall incidence rate of type 1 diabetes was 4.11 cases per 100,000 children aged less than 15 years per year (95% confidence interval: 3.06–5.14). The incidence rate appears to have increased since the last estimate, calculated for the years 1986–1992 in the metropolitan region of Santiago. Different combinations of space-time intervals were evaluated to assess spatio-temporal clustering. The smallest p-value was found for the combination of critical distances of 750 meters and 60 days (uncorrected p-value = 0.048). CONCLUSIONS: Although these are preliminary results regarding space-time clustering in Santiago, exploratory analysis of the data suggests a possible aggregation of incident cases in space-time coordinates.
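Knox's method, as applied above, amounts to counting case pairs that are close in both space and time and comparing that count with a null distribution. A minimal sketch (the function name and the Monte Carlo permutation p-value are illustrative choices, not the exact procedure of the paper):

```python
import numpy as np

def knox_test(coords, days, d_crit, t_crit, n_perm=999, seed=0):
    """Knox's space-time clustering test: count case pairs that are
    close in both space (< d_crit) and time (< t_crit), then compare
    against a permutation null obtained by shuffling diagnosis dates."""
    coords = np.asarray(coords, dtype=float)
    days = np.asarray(days, dtype=float)
    # Pairwise spatial and temporal distances (upper triangle only).
    diff = coords[:, None, :] - coords[None, :, :]
    d_space = np.sqrt((diff ** 2).sum(-1))
    d_time = np.abs(days[:, None] - days[None, :])
    iu = np.triu_indices(len(days), k=1)
    close_space = d_space[iu] < d_crit
    stat = int(np.sum(close_space & (d_time[iu] < t_crit)))
    # Monte Carlo p-value: shuffle the dates among the fixed locations.
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(days)
        dt = np.abs(perm[:, None] - perm[None, :])[iu]
        if int(np.sum(close_space & (dt < t_crit))) >= stat:
            exceed += 1
    return stat, (exceed + 1) / (n_perm + 1)
```

With the abstract's critical values this would be called as `knox_test(coords, days, d_crit=750, t_crit=60)`; a small p-value indicates more space-time close pairs than expected if dates and locations were independent.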
Abstract:
A definition of medium voltage (MV) load diagrams was made, based on a knowledge discovery in databases (KDD) process. Clustering techniques were used to support agents in the electric power retail markets in obtaining specific knowledge of their customers' consumption habits. Each customer class resulting from the clustering operation is represented by its load diagram. The Two-step clustering algorithm and the WEACS approach based on evidence accumulation (EAC) were applied to electricity consumption data from a utility client's database in order to form the customer classes and to find a set of representative consumption patterns. The WEACS approach is a clustering ensemble combination approach that uses subsampling and weights the partitions differently in the co-association matrix. As a complementary step to the WEACS approach, all the final data partitions produced by the different variations of the method are combined, and the Ward Link algorithm is used to obtain the final data partition. Experimental results showed that the WEACS approach led to better accuracy than many other clustering approaches. In this paper, the WEACS approach separates the customer population better than the Two-step clustering algorithm.
Abstract:
With the electricity market liberalization, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity consumers. A clear insight into consumers' behavior will permit the definition of specific contract aspects based on the different consumption patterns. In order to form the different consumer classes and find a set of representative consumption patterns, we use electricity consumption data from a utility client's database and two approaches: the Two-step clustering algorithm and the WEACS approach based on evidence accumulation (EAC) for combining partitions in a clustering ensemble. While EAC uses a voting mechanism to produce a co-association matrix based on the pairwise associations obtained from N partitions, where each partition has equal weight in the combination process, the WEACS approach uses subsampling and weights the partitions differently. As a complementary step to the WEACS approach, we combine the partitions obtained in the WEACS approach with the ALL clustering ensemble construction method and use the Ward Link algorithm to obtain the final data partition. The characterization of the obtained consumer clusters was performed using the C5.0 classification algorithm. Experimental results showed that the WEACS approach leads to better results than many other clustering approaches.
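The evidence-accumulation step underlying both EAC and WEACS can be sketched as follows. This shows plain, equally weighted EAC with a Ward-link consensus, not the weighted, subsampled WEACS variant; all names and parameter values are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def eac_consensus(X, n_clusters, n_partitions=30, k_range=(2, 8), seed=0):
    """Evidence accumulation: run k-means many times with random k,
    accumulate how often each pair of samples lands in the same
    cluster (the co-association matrix), then cut a Ward-link tree
    built on 1 - co-association to get the consensus partition."""
    rng = np.random.default_rng(seed)
    n = len(X)
    coassoc = np.zeros((n, n))
    for i in range(n_partitions):
        k = int(rng.integers(k_range[0], k_range[1] + 1))
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=seed + i).fit_predict(X)
        coassoc += labels[:, None] == labels[None, :]
    coassoc /= n_partitions            # pairwise co-association in [0, 1]
    # Co-association is a similarity; convert to a condensed distance.
    dist = squareform(1.0 - coassoc, checks=False)
    Z = linkage(dist, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust") - 1
```

WEACS would additionally subsample the data for each base partition and weight each partition's contribution to `coassoc` instead of adding them equally.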
Abstract:
The present research paper presents five different clustering methods to identify typical load profiles of medium voltage (MV) electricity consumers. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behaviour. The obtained knowledge can be used to support a decision tool, not only for utilities but also for consumers. Load profiles can be used by utilities to identify the aspects that cause system load peaks and to enable the development of specific contracts with their customers. The framework presented throughout the paper consists of several steps, namely the data pre-processing phase, the application of clustering algorithms, and the evaluation of the quality of the partition, which is supported by cluster validity indices. The process ends with the analysis of the discovered knowledge. To validate the proposed framework, a case study with a real database of 208 MV consumers is used.
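The clustering-plus-validity-index portion of such a framework can be sketched minimally. Here k-means and the silhouette index are stand-ins for the five methods and the validity indices the paper actually compares, and all names are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import normalize

def profile_clusters(load_curves, k_range=range(2, 9), seed=0):
    """Pre-process daily load curves (row-wise normalisation, so curves
    are compared by shape rather than magnitude), cluster for several
    candidate k, and keep the partition with the best validity index."""
    X = normalize(load_curves)          # one row = one consumer's curve
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(X)
        s = silhouette_score(X, labels)  # higher = better-separated clusters
        if best is None or s > best[0]:
            best = (s, k, labels)
    return best                          # (silhouette, chosen k, labels)
```

Each resulting cluster's centroid would then serve as the typical load profile for that consumer class.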
Abstract:
The growing importance and influence of new resources connected to power systems has caused many changes in their operation. Environmental policies and several well-known advantages have made renewable-based energy resources widely disseminated. These resources, including Distributed Generation (DG), are being connected at lower voltage levels, where Demand Response (DR) must also be considered. These changes increase the complexity of system operation due to both new operational constraints and the amounts of data to be processed. Virtual Power Players (VPP) are entities able to manage these resources. Addressing these issues, this paper proposes a methodology to support VPP actions when acting as a Curtailment Service Provider (CSP) that provides DR capacity to a DR program declared by the Independent System Operator (ISO) or by the VPP itself. The amount of DR capacity that the CSP can assure is determined using data mining techniques applied to a database obtained from a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 33-bus distribution network.
Abstract:
Mathematical Programs with Complementarity Constraints (MPCC) find many applications in fields such as engineering design, economic equilibrium, and mathematical programming theory itself. A queueing system model resulting from a single signalized intersection regulated by pre-timed control in a traffic network is considered. The model is formulated as an MPCC problem. A MATLAB implementation based on a hyperbolic penalty function is used to solve this practical problem, computing the total average waiting time of the vehicles in all queues and the green split allocation. The problem was coded in AMPL.
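As a rough illustration of the penalty idea (not the paper's MATLAB/AMPL implementation): one common form of the hyperbolic penalty for a constraint g(x) ≥ 0 is P(g) = -λg + sqrt(λ²g² + τ²), which is nearly zero for feasible g and grows linearly with violations. Below it is applied to a toy complementarity-constrained problem; the objective and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def hyperbolic_penalty(g, lam, tau):
    """P(g) = -lam*g + sqrt(lam^2 g^2 + tau^2): ~0 when g >= 0 holds,
    ~ -2*lam*g (a steep linear penalty) when it is violated."""
    return -lam * g + np.sqrt((lam * g) ** 2 + tau ** 2)

def solve_toy_mpcc(x0, outer_iters=5, lam=10.0, tau=1.0):
    """Minimise (x1-1)^2 + (x2-1)^2 subject to the complementarity
    constraints x1 >= 0, x2 >= 0, x1*x2 = 0, by penalising
    g1 = x1, g2 = x2 and g3 = -x1*x2 (all required to be >= 0)."""
    x = np.asarray(x0, float)
    for _ in range(outer_iters):
        def penalised(z):
            f = (z[0] - 1) ** 2 + (z[1] - 1) ** 2
            for g in (z[0], z[1], -z[0] * z[1]):
                f += hyperbolic_penalty(g, lam, tau)
            return f
        x = minimize(penalised, x, method="Nelder-Mead").x
        lam *= 10.0    # tighten the penalty between outer iterations
        tau *= 0.2     # sharpen the smoothing toward an exact penalty
    return x
```

The outer loop drives the smoothed penalty toward an exact one, so the iterates approach a point satisfying the complementarity condition (here, one of the two coordinates is forced to zero).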
Abstract:
This paper presents a novel moving target indicator that is selective with respect to a direction of interest. Preliminary results indicate that the obtained selectivity may be of considerable interest for civil traffic monitoring using single-channel SAR data.