253 results for Subsistence Minimum


Relevance:

10.00%

Publisher:

Abstract:

Misperception of speed under low-contrast conditions has been identified as a possible contributor to motor vehicle crashes in fog. To test this hypothesis, we investigated the effects of reduced contrast on drivers’ perception and control of speed while driving under real-world conditions. Fourteen participants drove around a 2.85 km closed road course under three visual conditions: clear view and with two levels of reduced contrast created by diffusing filters on the windscreen and side windows. Three dependent measures were obtained, without view of the speedometer, on separate laps around the road course: verbal estimates of speed; adjustment of speed to instructed levels (25 to 70 km h⁻¹); and estimation of minimum stopping distance. The results showed that drivers traveled more slowly under low-contrast conditions. Reduced contrast had little or no effect on either verbal judgments of speed or estimates of minimum stopping distance. Speed adjustments were significantly slower under low-contrast than clear conditions, indicating that, contrary to studies of object motion, drivers perceived themselves to be traveling faster under conditions of reduced contrast. Under real-world driving conditions, drivers’ ability to perceive and control their speed was not adversely affected by large variations in the contrast of their surroundings. These findings suggest that perceptions of self-motion and object motion involve neural processes that are differentially affected by variations in stimulus contrast as encountered in fog.

Relevance:

10.00%

Publisher:

Abstract:

In total, 782 Escherichia coli strains originating from various host sources have been analyzed in this study by using a highly discriminatory single-nucleotide polymorphism (SNP) approach. A set of eight SNPs, with a discrimination value (Simpson's index of diversity [D]) of 0.96, was determined using the Minimum SNPs software, based on sequences of housekeeping genes from the E. coli multilocus sequence typing (MLST) database. Allele-specific real-time PCR was used to screen 114 E. coli isolates from various fecal sources in Southeast Queensland (SEQ). The combined analysis of both the MLST database and SEQ E. coli isolates using eight high-D SNPs resolved the isolates into 74 SNP profiles. The data obtained suggest that SNP typing is a promising approach for the discrimination of host-specific groups and allows for the identification of human-specific E. coli in environmental samples. However, a more diverse E. coli collection is required to determine animal- and environment-specific E. coli SNP profiles due to the abundance of human E. coli strains (56%) in the MLST database.
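The discrimination value quoted above is Simpson's index of diversity, which can be computed directly from the number of isolates assigned to each type. A minimal sketch (the profile labels and counts below are hypothetical, not the study's data):

```python
from collections import Counter

def simpsons_diversity(type_labels):
    """Simpson's index of diversity, D = 1 - sum n_i(n_i - 1) / (N(N - 1)),
    where n_i is the number of isolates of type i and N the total."""
    counts = Counter(type_labels)
    n_total = sum(counts.values())
    if n_total < 2:
        raise ValueError("need at least two isolates")
    return 1.0 - sum(n * (n - 1) for n in counts.values()) / (n_total * (n_total - 1))

# Hypothetical SNP profiles assigned to 20 isolates:
profiles = ["P1"] * 10 + ["P2"] * 5 + ["P3"] * 5
print(round(simpsons_diversity(profiles), 3))
```

D approaches 1 when isolates are spread evenly across many profiles and 0 when all isolates share a single profile.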

Relevance:

10.00%

Publisher:

Abstract:

Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can pose serious consequences for continuity of electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provisioning of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model the operation of substation and sub-transmission equipment in detail using network flow evaluation and to consider multiple levels of component failure. In this thesis a new model for aging equipment is developed that combines standard random-failure models with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the reliability of bulk supply loads and distribution-network consumers over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints.
The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contains valuable information for utilities to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach was also developed that supports early planning decisions on the replacement of non-repairable aging components, in order to maintain economically acceptable system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
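The combination of random and aging failure modes can be sketched with a constant-hazard (exponential) term for random failures and a Weibull term for end-of-life failures acting independently; the thesis's actual models are more detailed, and every parameter value below is hypothetical:

```python
import math

def survival_with_aging(t, lam_random, beta, eta):
    """Probability a non-repairable component survives to age t (years) when
    random failures (constant hazard lam_random) and aging failures
    (Weibull hazard with shape beta, scale eta) act independently:
    R(t) = exp(-lam_random * t) * exp(-(t / eta) ** beta)."""
    return math.exp(-lam_random * t) * math.exp(-(t / eta) ** beta)

def conditional_failure_prob(t, dt, lam_random, beta, eta):
    """Probability the component fails in (t, t + dt] given survival to t --
    the quantity a planner compares across future planning years."""
    r_t = survival_with_aging(t, lam_random, beta, eta)
    r_next = survival_with_aging(t + dt, lam_random, beta, eta)
    return (r_t - r_next) / r_t

# Hypothetical transformer: one random failure per 100 years on average,
# Weibull aging with shape 4 and characteristic life 40 years.
for age in (10, 30, 50):
    p = conditional_failure_prob(age, 1.0, lam_random=0.01, beta=4.0, eta=40.0)
    print(f"age {age}: P(fail next year | survived) = {p:.3f}")
```

The increasing conditional failure probability with age is what distinguishes an aging component from one subject to random failures alone, whose conditional probability would stay constant.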

Relevance:

10.00%

Publisher:

Abstract:

Intrinsically photosensitive retinal ganglion cells (ipRGC) signal environmental light level to the central circadian clock and contribute to the pupil light reflex. It is unknown if ipRGC activity is subject to extrinsic (central) or intrinsic (retinal) network-mediated circadian modulation during light entrainment and phase shifting. Eleven younger participants (18–30 years) with no ophthalmological, medical or sleep disorders took part. The activity of the inner (ipRGC) and outer retina (cone photoreceptors) was assessed hourly using the pupil light reflex during a 24 h period of constant environmental illumination (10 lux). Exogenous circadian cues of activity, sleep, posture, caffeine, ambient temperature, caloric intake and ambient illumination were controlled. Dim-light melatonin onset (DLMO) was determined from salivary melatonin assay at hourly intervals, and participant melatonin onset values were set to 14 h to adjust clock time to circadian time. Here we demonstrate in humans that the ipRGC-controlled post-illumination pupil response has a circadian rhythm independent of external light cues. This circadian variation precedes melatonin onset, and the minimum ipRGC-driven pupil response occurs after melatonin onset. Outer retinal photoreceptor contributions to the inner retinal, ipRGC-driven post-illumination pupil response also show circadian variation, whereas direct outer retinal cone inputs to the pupil light reflex do not, indicating that intrinsically photosensitive (melanopsin) retinal ganglion cells mediate this circadian variation.

Relevance:

10.00%

Publisher:

Abstract:

Developments in school education in Australia over the past decade have witnessed the rise of national efforts to reform curriculum, assessment and reporting. Constitutionally the power to decide on curriculum matters still resides with the States. Higher stakes in assessment, brought about by national testing and international comparative analyses of student achievement data, have challenged State efforts to maintain the emphasis on assessment to promote learning while fulfilling accountability demands. In this article lessons from the Queensland experience indicate that it is important to build teachers' assessment capacity and their assessment literacy for the promotion of student learning. It is argued that teacher assessment can be a source of dependable results through moderation practice. The Queensland Studies Authority has recognised and supported the development of teacher assessment and moderation practice in the context of standards-driven, national reform. Recent research findings explain how the focus on learning can be maintained by avoiding an over-interpretation of test results in terms of innate ability and limitations and by encouraging teachers to adopt more tailored diagnosis of assessment data to address equity through focus on achievement for all. Such efforts are challenged as political pressures related to the Australian government’s implementation of national testing and national partnership funding arrangements tied to the performance of students at or below minimum standards become increasingly apparent.

Relevance:

10.00%

Publisher:

Abstract:

To obtain minimum time or minimum energy trajectories for robots it is necessary to employ planning methods which adequately consider the platform’s dynamic properties. A variety of sampling, graph-based or local receding-horizon optimisation methods have previously been proposed. These typically use simplified kino-dynamic models to avoid the significant computational burden of solving this problem in a high dimensional state-space. In this paper we investigate solutions from the class of pseudospectral optimisation methods which have grown in favour amongst the optimal control community in recent years. These methods have high computational efficiency and rapid convergence properties. We present a practical application of such an approach to the robot path planning problem to provide a trajectory considering the robot’s dynamic properties. We extend the existing literature by augmenting the path constraints with sensed obstacles rather than predefined analytical functions to enable real world application.
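The computational appeal of pseudospectral methods comes from collocating the trajectory at orthogonal-polynomial nodes and replacing time derivatives with a dense differentiation matrix, so the dynamics constraints become algebraic. A minimal sketch of the standard Chebyshev differentiation matrix (following Trefethen's well-known construction), not the planner itself:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1]
    (Trefethen, Spectral Methods in MATLAB, ch. 6)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal: rows sum to 0
    return D, x

# Spectral differentiation is exact (to round-off) for low-degree polynomials:
D, x = cheb(8)
f = x ** 3 - 2.0 * x                                   # test function
df_exact = 3.0 * x ** 2 - 2.0
print(np.max(np.abs(D @ f - df_exact)))               # close to machine precision
```

In a pseudospectral planner, a constraint such as x'(t) = v(t) becomes the linear system D @ x = v at the collocation nodes, which is what gives these methods their rapid convergence for smooth trajectories.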

Relevance:

10.00%

Publisher:

Abstract:

Planning the utilization of train-sets is one of the key tasks in transport organization for dedicated passenger railways in China. It also interacts strongly with timetable scheduling and station operation plans. To execute this task in a railway hub serving multiple railway lines, the characteristics of multiple routing for train-sets are discussed in terms of the semicircle of train-set turnover. The problem is formulated with minimum dwell time as the objective, subject to constraints on train-set dispatch, the connecting conditions, the principle of uniqueness for train-sets, and a first-priority rule for connections in the same direction based on a time tolerance σ. A compact connection algorithm based on this time tolerance is then designed. The feasibility of the model and the algorithm is demonstrated by a case study. The results indicate that the circulation model and algorithm for multiple routing can handle connections between train-sets from multiple directions and reduce the impact of trains arriving at or departing from the station throat.
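The paper's compact connection algorithm is not reproduced here, but its underlying rule, connecting each arriving train-set to the earliest feasible same-direction departure whose dwell time lies between the minimum dwell and the minimum dwell plus the tolerance σ, can be sketched greedily. All times and directions below are invented:

```python
def connect_train_sets(arrivals, departures, min_dwell, sigma):
    """Greedy connection: each arrival (time, direction) is matched to the
    earliest unmatched departure in the same direction whose dwell time lies
    in [min_dwell, min_dwell + sigma]. Returns a list of (arrival, departure)."""
    free = sorted(departures)            # (time, direction) pairs
    plan = []
    for arr_time, direction in sorted(arrivals):
        for dep in free:
            dep_time, dep_dir = dep
            dwell = dep_time - arr_time
            if dep_dir == direction and min_dwell <= dwell <= min_dwell + sigma:
                plan.append(((arr_time, direction), dep))
                free.remove(dep)
                break
    return plan

# Hypothetical hub: times in minutes after midnight, directions "up"/"down".
arrivals = [(480, "up"), (490, "down"), (500, "up")]
departures = [(505, "up"), (512, "down"), (540, "up"), (600, "up")]
print(connect_train_sets(arrivals, departures, min_dwell=20, sigma=30))
```

The upper bound min_dwell + sigma prevents a train-set from idling in the station throat waiting for a much later departure, which is the role the time tolerance plays in the paper's formulation.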

Relevance:

10.00%

Publisher:

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be able to be calibrated using data acquired at these locations. The output of the models needed to be able to be validated with data acquired at these sites. Therefore, the outputs should be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than empiricism, which is the case for the macroscopic models currently used. And the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model which is applicable to all facilities and locations, in this single study, however the scene has been set for the application of the models to a much broader range of operating conditions. 
Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled, as some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb-lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb-lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps downstream of signalised intersections and those downstream of unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delay, which reach infinity at capacity. Minor stream delays were shown to be less when unsignalised intersections are located upstream of on-ramps than signalised intersections, and less still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration. From these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and provide further insight into the nature of operations.
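The gap-acceptance mechanism described above can be illustrated with a Monte Carlo sketch: major-stream headways are drawn from Cowan's M3 model (a proportion of vehicles bunched at the minimum headway, the rest free with shifted-exponential headways), and the minor stream absorbs one vehicle per headway at least the critical gap, plus one more per follow-on time. The follow-on time (1.1 s) and 1 s minimum headway follow the values in the text; the flows, proportion of free vehicles and critical gap are hypothetical:

```python
import random

def m3_headways(n, q, alpha, delta, rng):
    """Cowan M3 headways: a proportion (1 - alpha) of vehicles are bunched at
    the minimum headway delta; free vehicles have shifted-exponential headways
    with rate lam chosen so the mean headway equals 1/q."""
    lam = alpha * q / (1.0 - delta * q)   # from E[h] = delta + alpha/lam = 1/q
    return [delta if rng.random() > alpha
            else delta + rng.expovariate(lam) for _ in range(n)]

def merge_capacity(q, alpha=0.7, delta=1.0, t_c=1.8, t_f=1.1,
                   n=200_000, seed=1):
    """Vehicles absorbed per hour by the minor (on-ramp) stream: one merge per
    major-stream headway h >= t_c, plus one more for each further t_f."""
    rng = random.Random(seed)
    headways = m3_headways(n, q, alpha, delta, rng)
    merges = sum(int((h - t_c) / t_f) + 1 for h in headways if h >= t_c)
    return 3600.0 * merges / sum(headways)

# Hypothetical kerb-lane flow of 1200 veh/h (q expressed in veh/s):
print(round(merge_capacity(q=1200 / 3600.0)))
```

Raising the major-stream flow q shrinks the supply of usable gaps, so the estimated minor-stream capacity falls, which is the qualitative behaviour the thesis's analytical and simulation models quantify.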

Relevance:

10.00%

Publisher:

Abstract:

Tangible programming elements offer the dynamic and programmable properties of a computer without the complexity introduced by the keyboard, mouse and screen. This paper explores the extent to which programming skills are used by children during interactions with a set of tangible programming elements: the Electronic Blocks. An evaluation of the Electronic Blocks indicates that children become heavily engaged with the blocks, and learn simple programming with a minimum of adult support.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we identify the origins of stop-and-go (or slow-and-go) driving and measure microscopic features of their propagation by analyzing vehicle trajectories via the wavelet transform. Based on 53 oscillation cases analyzed, we find that oscillations can originate from either lane-changing maneuvers (LCMs) or car-following (CF) behavior. LCMs were predominantly responsible for oscillation formation in the absence of considerable horizontal or vertical curves, whereas oscillations formed spontaneously near roadside work on an uphill segment. Regardless of the trigger, the features of oscillation propagation were similar in terms of propagation speed, oscillation duration, and amplitude. All observed cases initially exhibited a precursor phase, in which slow-and-go motions were localized. Some of them eventually transitioned into a well-developed phase, in which oscillations propagated upstream in queue. LCMs were primarily responsible for the transition, although some transitions occurred without LCMs. Our findings also suggest that an oscillation has a regressive effect on car-following behavior: the deceleration wave of an oscillation causes a timid driver (with larger response time and minimum spacing) to become less timid and an aggressive driver to become less aggressive, although this change may be short-lived. An extended framework of Newell's CF model is able to describe these regressive effects with two additional parameters with reasonable accuracy, as verified using vehicle trajectory data.

Relevance:

10.00%

Publisher:

Abstract:

Soil organic carbon (C) sequestration rates based on the Intergovernmental Panel on Climate Change (IPCC) methodology were combined with local economic data to simulate the economic potential for C sequestration in response to conservation tillage in the six agro-ecological zones within the Southern Region of the Australian grains industry. The net C sequestration rate over 20 years for the Southern Region (which includes discounting for associated greenhouse gases) is estimated to be 3.6 or 6.3 Mg C/ha after converting to minimum or no-tillage practices, respectively, with no-till practices estimated to return 75% more carbon on average than minimum tillage. The highest net gains in C per ha are realised when converting from conventional to no-tillage practices in the high-activity clay soils of the High Rainfall and Wimmera agro-ecological zones. On the basis of total area available for change, the Slopes agro-ecological zone offers the highest net returns, potentially sequestering an additional 7.1 Mt C under a no-tillage scenario over 20 years. The economic analysis was summarised as C supply curves for each of the six zones, expressing the total additional C accumulated over 20 years for a price per tonne of C sequestered ranging from zero to AU$200. For a price of $50/Mg C, a total of 427 000 Mg C would be sequestered over 20 years across the Southern Region, <5% of the simulated C sequestration potential of 9.1 Mt for the region. The Wimmera and Mid-North offer the largest gains in C under minimum tillage over 20 years of all zones for all C prices. For the no-tillage scenario, at a price of $50/Mg C, 1.74 Mt C would be sequestered over 20 years across the Southern Region, <10% of the simulated C sequestration potential of 18.6 Mt for the region over 20 years. The Slopes agro-ecological zone offers the best return in C over 20 years under no-tillage for all C prices. The Mallee offers the least return under both minimum and no-tillage scenarios.
At a price of $200/Mg C, the transition from conventional tillage to minimum or no-tillage practices will only realise 19% and 33%, respectively, of the total biogeochemical sequestration potential of crop and pasture systems of the Southern Region over a 20-year period.
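The percentage shares quoted can be reproduced from the figures in the abstract; a small arithmetic check:

```python
# Quantities sequestered at $50/Mg C versus simulated potential (Mt C, 20 years):
sequestered_min_till_at_50 = 427_000 / 1_000_000   # 427 000 Mg C -> 0.427 Mt
potential_min_till = 9.1
sequestered_no_till_at_50 = 1.74
potential_no_till = 18.6

share_min = 100.0 * sequestered_min_till_at_50 / potential_min_till
share_no = 100.0 * sequestered_no_till_at_50 / potential_no_till
print(f"minimum tillage at $50/Mg C: {share_min:.1f}% of potential")
print(f"no-tillage at $50/Mg C: {share_no:.1f}% of potential")
```

Both values land just under the <5% and <10% thresholds stated in the abstract, consistent with the reported supply curves.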

Relevance:

10.00%

Publisher:

Abstract:

In the structure of the title compound, [Cs2(C7H5N2O4)2(H2O)2]n, the asymmetric unit comprises two independent and different Cs centres, one nine-coordinate, the other seven-coordinate, both having irregular stereochemistry. The CsO9 coordination comprises oxygen donors from three bridging water molecules, one of which is doubly bridging, three from carboxylate groups, and three from nitro groups, of which two are bidentate chelate bridging. The CsO6N coordination comprises the two bridging water molecules, one amine N donor, one carboxyl O donor and four O donors from nitro groups (two from the chelate bridges). The extension of the dimeric unit gives a two-dimensional polymeric structure which is stabilized by both intra- and intermolecular amine N-H...O and water O-H...O hydrogen bonds to carboxyl O acceptors, as well as inter-ring pi-pi interactions [minimum ring centroid separation 3.4172(15) Å].

Relevance:

10.00%

Publisher:

Abstract:

Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
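The starting point of this kind of analysis is that each convex surrogate dominates the 0–1 loss pointwise as a function of the margin z = y·f(x), so the surrogate risk bounds the 0–1 risk from above. A minimal numerical check for the hinge (SVM) and exponential (boosting) surrogates:

```python
import numpy as np

def zero_one(margin):
    """0-1 loss as a function of the margin z = y * f(x)."""
    return (margin <= 0).astype(float)

def hinge(margin):
    """Hinge surrogate, as in the SVM: convex and pointwise >= 0-1 loss."""
    return np.maximum(0.0, 1.0 - margin)

def exp_loss(margin):
    """Exponential surrogate, as in boosting."""
    return np.exp(-margin)

z = np.linspace(-3.0, 3.0, 601)
for phi in (hinge, exp_loss):
    assert np.all(phi(z) >= zero_one(z))   # surrogate dominates 0-1 loss
print("both surrogates upper-bound the 0-1 loss on [-3, 3]")
```

Pointwise domination alone gives only a crude bound; the quantitative relationship in the paper is tighter, converting *excess* surrogate risk into excess 0–1 risk through a variational transform of the loss.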

Relevance:

10.00%

Publisher:

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·C(n−1, ≤ d−1)/C(n, ≤ d) < d, where C(m, ≤ k) denotes the sum of the binomial coefficients C(m, 0) + … + C(m, k). This positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic-topological property of maximum classes of VC-dimension d: they are d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
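The improved density bound can be evaluated directly from sums of binomial coefficients; a small numerical check that it indeed falls strictly below the classical bound d:

```python
from math import comb

def binom_sum(m, k):
    """C(m, <= k): sum of binomial coefficients C(m, 0) + ... + C(m, k)."""
    return sum(comb(m, i) for i in range(k + 1))

def density_bound(n, d):
    """The improved one-inclusion density bound n*C(n-1, <= d-1)/C(n, <= d),
    shown in the paper to be strictly less than the classical bound d."""
    return n * binom_sum(n - 1, d - 1) / binom_sum(n, d)

for n, d in [(5, 2), (10, 3), (50, 5)]:
    b = density_bound(n, d)
    print(f"n={n}, d={d}: improved bound {b:.3f} < classical bound {d}")
```

For example, n = 5 and d = 2 gives 5·(1 + 4)/(1 + 5 + 10) = 25/16 ≈ 1.56, already well below 2.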

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
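The trade-off between abstaining and classifying can be sketched with the plug-in rule that rejects when the conditional probability is close to 1/2. It is a known result that the optimal rule abstains exactly when min(η, 1−η) exceeds the reject cost d; the grid of η values and the cost below are illustrative, not from the paper:

```python
import numpy as np

def reject_rule_risk(eta, reject_cost, threshold):
    """Expected 0-1-d risk of the plug-in rule that predicts sign(2*eta - 1)
    when |2*eta - 1| > threshold and abstains (paying reject_cost) otherwise.
    eta is an array of conditional probabilities P(Y=1|X=x) over sample points."""
    margin = np.abs(2.0 * eta - 1.0)
    err = np.minimum(eta, 1.0 - eta)          # error probability if we classify
    return np.mean(np.where(margin > threshold, err, reject_cost))

# Hypothetical eta values uniform on [0, 1]; with reject cost d, the optimal
# rule abstains when min(eta, 1 - eta) > d, i.e. margin threshold = 1 - 2*d.
eta = np.linspace(0.0, 1.0, 1001)
d = 0.2
bayes = reject_rule_risk(eta, d, threshold=1.0 - 2.0 * d)
never = reject_rule_risk(eta, d, threshold=0.0)      # (almost) never abstain
print(round(float(bayes), 4), round(float(never), 4))
```

Abstaining at the optimal threshold strictly lowers the expected cost relative to always classifying, and the critical values of P(Y=1|X) mentioned in the abstract are precisely the points where classifying and rejecting cost the same.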