955 results for Limit theorems


Relevance: 10.00%

Abstract:

During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resorting to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis in the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling.
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.
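The error statistics quoted above (average conservative error, maximum unconservative error) follow a simple convention that can be sketched as follows. The predicted and benchmark strengths below are hypothetical values chosen only to illustrate the computation, not results from the thesis:

```python
def strength_errors(predicted, benchmark):
    """Percentage errors of analysis predictions against benchmark ultimate
    strengths: positive = conservative (prediction below benchmark),
    negative = unconservative (prediction above benchmark)."""
    errs = [100.0 * (b - p) / b for p, b in zip(predicted, benchmark)]
    mean_error = sum(errs) / len(errs)
    # Largest unconservative (negative) error, reported as a positive magnitude
    max_unconservative = -min(min(errs), 0.0)
    return mean_error, max_unconservative

# Hypothetical predicted vs benchmark frame strengths (kN)
pred = [98.0, 101.5, 195.0, 302.0]
bench = [100.0, 100.0, 200.0, 300.0]
mean_err, max_uncons = strength_errors(pred, bench)
```

With these invented numbers the mean error is mildly conservative while the worst single prediction overestimates a benchmark strength by 1.5 percent, illustrating why both statistics are reported.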

Relevance: 10.00%

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The HT can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered.
This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main concern of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the IF adaptive algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
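The central idea of Part-I, replacing the Hilbert transformer's fixed 90-degree shift with a time-delay, can be sketched in a few lines. The function below is an illustrative reconstruction rather than the thesis's loop equations: assuming an ideal noise-free sinusoid and a delay producing a known phase shift psi, it recovers the instantaneous phase by arctan from the current and delayed samples:

```python
import math

def phase_from_delay(x_now, x_delayed, psi):
    """Recover the instantaneous phase of A*sin(phase) from the current
    sample and a time-delayed sample, where psi is the (known) phase shift
    produced by the delay. This is the signal-dependent phase-shift idea
    behind the TDTL: the delay stands in for the Hilbert transformer of
    the conventional DTL. Ideal, noise-free sketch."""
    # x_delayed = A*sin(phase - psi)
    #           = A*sin(phase)*cos(psi) - A*cos(phase)*sin(psi)
    # so A*cos(phase) can be isolated from the two samples:
    a_cos = (x_now * math.cos(psi) - x_delayed) / math.sin(psi)
    return math.atan2(x_now, a_cos) % (2 * math.pi)

A, phase, psi = 1.7, 2.1, math.pi / 3
x_now = A * math.sin(phase)
x_del = A * math.sin(phase - psi)
est = phase_from_delay(x_now, x_del, psi)
```

Note that, unlike the Hilbert transformer, the effective phase shift psi depends on the input frequency, which is exactly the signal-dependent behavior the fixed point analysis of the TDTL has to account for.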

Relevance: 10.00%

Abstract:

The LiteSteel Beam (LSB) is a new hollow flange channel section developed by OneSteel Australian Tube Mills using a patented Dual Electric Resistance Welding technique. The LSB has a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. It is commonly used as rafters, floor joists, bearers and roof beams in residential, industrial and commercial buildings. It is on average 40% lighter than traditional hot-rolled steel beams of equivalent performance. LSB flexural members are subject to a relatively new lateral distortional buckling mode, which reduces the member moment capacity. Unlike the commonly observed lateral torsional buckling of steel beams, lateral distortional buckling of LSBs is characterised by simultaneous lateral deflection, twist and web distortion. Current member moment capacity design rules for lateral distortional buckling in AS/NZS 4600 (SA, 2005) do not include the effect of section geometry of hollow flange beams, although its effect is considered to be important. Therefore detailed experimental and finite element analyses (FEA) were carried out to investigate the lateral distortional buckling behaviour of LSBs, including the effect of section geometry. The results showed that the current design rules in AS/NZS 4600 (SA, 2005) are over-conservative in the inelastic lateral buckling region. New improved design rules were therefore developed for LSBs based on both FEA and experimental results. A geometrical parameter (K), defined as the ratio of the flange torsional rigidity to the major axis flexural rigidity of the web (GJf/EIxweb), was identified as the critical parameter affecting the lateral distortional buckling of hollow flange beams. The effect of section geometry was then included in the new design rules using the new parameter (K).
The new design rule developed by including this parameter was found to be accurate in calculating the member moment capacities of not only LSBs, but also other types of hollow flange steel beams such as Hollow Flange Beams (HFBs), Monosymmetric Hollow Flange Beams (MHFBs) and Rectangular Hollow Flange Beams (RHFBs). The inelastic reserve bending capacity of LSBs had not been investigated previously, although past section moment capacity tests of LSBs revealed that inelastic reserve bending capacity is present in LSBs. However, the Australian and American cold-formed steel design codes limit the section moment capacity to the first yield moment. Therefore both experiments and FEA were carried out to investigate the section moment capacity behaviour of LSBs. A comparison of the section moment capacity results from FEA, experiments and current cold-formed steel design codes showed that compact and non-compact LSB sections classified based on AS 4100 (SA, 1998) have some inelastic reserve capacity, while slender LSBs do not have any inelastic reserve capacity beyond their first yield moment. It was found that Shifferaw and Schafer’s (2008) proposed equations and Eurocode 3 Part 1.3 (ECS, 2006) design equations can be used to include the inelastic bending capacities of compact and non-compact LSBs in design. As a simple design approach, the section moment capacity of compact LSB sections can be taken as 1.10 times their first yield moment, while for non-compact sections it is the first yield moment. For slender LSB sections, current cold-formed steel codes can be used to predict their section moment capacities. It was believed that the use of transverse web stiffeners could improve the lateral distortional buckling moment capacities of LSBs. However, there are currently no design equations to predict the elastic lateral distortional buckling and member moment capacities of LSBs with web stiffeners under uniform moment conditions.
Therefore, a detailed study was conducted using FEA to simulate both experimental and ideal conditions of LSB flexural members. It was shown that the use of 3 to 5 mm steel plate stiffeners welded or screwed to the inner faces of the top and bottom flanges of LSBs at third span points and supports provided an optimum web stiffener arrangement. Suitable design rules were developed to calculate the improved elastic buckling and ultimate moment capacities of LSBs with these optimum web stiffeners. A design rule using the geometrical parameter K was also developed to improve the accuracy of ultimate moment capacity predictions. This thesis presents the details and results of the experimental and numerical studies of the section and member moment capacities of LSBs conducted in this research. It includes the recommendations made regarding the accuracy of current design rules as well as the new design rules for lateral distortional buckling. The new design rules include the effects of section geometry of hollow flange steel beams. This thesis also developed a method of using web stiffeners to reduce the lateral distortional buckling effects, and associated design rules to calculate the improved moment capacities.
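The geometrical parameter K identified above is a plain ratio of rigidities and is straightforward to evaluate for a given section. The sketch below computes it for a hypothetical section; all numerical properties are invented for illustration and do not correspond to any actual LSB designation or to the design rules themselves:

```python
def geometry_parameter_k(G, J_f, E, I_x_web):
    """Geometrical parameter K = G*J_f / (E*I_x_web): the ratio of the
    flange torsional rigidity to the major-axis flexural rigidity of the
    web, identified in this research as the critical parameter for
    lateral distortional buckling of hollow flange beams."""
    return (G * J_f) / (E * I_x_web)

# Hypothetical section properties (illustrative only):
G = 80e3          # shear modulus of steel, MPa
E = 200e3         # elastic modulus of steel, MPa
J_f = 40e3        # torsional constant of one hollow flange, mm^4
I_x_web = 1.5e6   # major-axis second moment of area of the web, mm^4

K = geometry_parameter_k(G, J_f, E, I_x_web)
```

A torsionally stiffer flange (larger J_f) raises K, reflecting the greater restraint against web distortion that the hollow flanges provide.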

Relevance: 10.00%

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used to improve the accuracy of bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained.
This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique. These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements,
2. demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique,
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral, and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
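The two-component decomposition underlying DEXA can be illustrated as a 2x2 linear solve: the log-attenuation measured at each beam energy is a weighted sum of the bone and soft-tissue areal densities. The attenuation coefficients below are rough illustrative values only, not measured data, and the energies named in the comments are assumptions:

```python
# Illustrative (not measured) mass attenuation coefficients, cm^2/g
MU = {
    "low":  {"bone": 0.60, "soft": 0.25},   # e.g. a ~40 keV beam (assumed)
    "high": {"bone": 0.27, "soft": 0.20},   # e.g. a ~70 keV beam (assumed)
}

def dexa_decompose(A_low, A_high):
    """Solve the 2x2 system A_E = mu_bone(E)*m_bone + mu_soft(E)*m_soft
    for the two areal densities (g/cm^2), given the log-attenuations
    A_E = ln(I0/I) measured at the two beam energies (Cramer's rule)."""
    a, b = MU["low"]["bone"], MU["low"]["soft"]
    c, d = MU["high"]["bone"], MU["high"]["soft"]
    det = a * d - b * c
    m_bone = (A_low * d - b * A_high) / det
    m_soft = (a * A_high - A_low * c) / det
    return m_bone, m_soft

# Forward-simulate a ray through 1.2 g/cm^2 bone and 18 g/cm^2 soft tissue,
# then recover the composition from the two log-attenuations.
true_bone, true_soft = 1.2, 18.0
A_lo = MU["low"]["bone"] * true_bone + MU["low"]["soft"] * true_soft
A_hi = MU["high"]["bone"] * true_bone + MU["high"]["soft"] * true_soft
bone, soft = dexa_decompose(A_lo, A_hi)
```

The DPA(+) technique extends this to a 3x3 system by adding the measured path length as a third equation, which is why a third tissue component (fat vs lean soft tissue) becomes resolvable.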

Relevance: 10.00%

Abstract:

In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty, and interpret ‘desirable’ as reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds (for hypotheses and for data) is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.
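Counting mind changes in identification in the limit can be made concrete with a toy example (not the paper's model): a learner for the class of initial segments {0, ..., n} that conjectures the largest element seen so far, revising its hypothesis only when the data force it to:

```python
def learn_initial_segment(stream):
    """Toy identification-in-the-limit learner for the class
    L_n = {0, 1, ..., n}: conjecture the largest element seen so far,
    counting mind changes (hypothesis revisions after the first guess)."""
    hypothesis, mind_changes = None, 0
    for datum in stream:
        guess = datum if hypothesis is None else max(hypothesis, datum)
        if guess != hypothesis:
            if hypothesis is not None:
                mind_changes += 1   # a genuine revision, not the first guess
            hypothesis = guess
    return hypothesis, mind_changes

# A text (data sequence) for L_5: the learner converges to 5
hyp, changes = learn_initial_segment([2, 0, 5, 1, 5])
```

Note how little this learner memorizes: only its current hypothesis, which is the kind of trade-off between mind changes and data memorization the paper formalizes.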

Relevance: 10.00%

Abstract:

The topic of the present work is the relationship between the power of learning algorithms on the one hand, and the expressive power of the logical language used to represent the problems to be learned on the other hand. The central question is whether enriching the language results in more learning power. In order to make the question relevant and nontrivial, it is required that both texts (sequences of data) and hypotheses (guesses) be translatable from the “rich” language into the “poor” one. The issue is considered for several logical languages suitable for describing structures whose domain is the set of natural numbers. It is shown that enriching the language does not give any advantage for those languages which define a monadic second-order language that is decidable in the following sense: there is a fixed interpretation in the structure of natural numbers such that the set of sentences of this extended language true in that structure is decidable. But enriching the original language even by only one constant gives an advantage if this language contains a binary function symbol (which will be interpreted as addition). Furthermore, it is shown that behaviourally correct learning has exactly the same power as learning in the limit for those languages which define a monadic second-order language with the property given above, but has more power in the case of languages containing a binary function symbol. Adding the natural requirement that the set of all structures to be learned is recursively enumerable, it is shown that it pays off to enrich the language of arithmetic for both finite learning and learning in the limit, but it does not pay off to enrich the language for behaviourally correct learning.

Relevance: 10.00%

Abstract:

The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin’s notion of finite thickness and Wright’s work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara’s notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let Omega be a notation for the first limit ordinal. Then, it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length <= m:
• is identifiable in the limit from positive data with a mind change bound of Omega^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Omega × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro’s linear programs, Arimura and Shinohara’s depth-bounded linearly covering programs, and Krishna Rao’s depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
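The difference between a plain integer bound and an ordinal bound like Omega × m can be made concrete. In the sketch below (an illustrative reconstruction, not the paper's formalism), a counter (q, r) stands for the ordinal Omega·q + r; each mind change must strictly decrease it lexicographically, and spending one copy of Omega lets the learner reset the finite part to any natural number chosen on the fly, which is the extra power an ordinal bound provides:

```python
class OrdinalCounter:
    """Mind-change counter bounded by the ordinal Omega*m, represented
    lexicographically as (q, r) ~ Omega*q + r. Every mind change must
    strictly decrease the counter; when r is exhausted, one copy of
    Omega is spent and r may be reset to ANY natural number (e.g. one
    determined by the data seen so far)."""

    def __init__(self, m):
        self.q, self.r = m, 0

    def mind_change(self, new_r=0):
        if self.r > 0:
            self.r -= 1                 # ordinary decrement of the finite part
        elif self.q > 0:
            self.q -= 1                 # spend one copy of Omega...
            self.r = new_r              # ...and reset r freely
        else:
            raise RuntimeError("mind change bound exhausted")

    def value(self):
        return (self.q, self.r)

c = OrdinalCounter(2)      # bound Omega*2
c.mind_change(new_r=3)     # spend one Omega, reset r to 3 -> (1, 3)
c.mind_change()            # ordinary decrement -> (1, 2)
```

Since (q, r) decreases lexicographically at every mind change and there are no infinite descending sequences of ordinals, any learner respecting such a counter makes only finitely many mind changes on any text.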

Relevance: 10.00%

Abstract:

Although current assessments of agricultural management practices on soil organic C (SOC) dynamics are usually conducted without any explicit consideration of limits to soil C storage, it has been hypothesized that the SOC pool has an upper, or saturation, limit with respect to C input levels at steady state. Agricultural management practices that increase C input levels over time produce a new equilibrium soil C content. However, multiple C input level treatments that produce no increase in SOC stocks at equilibrium show that soils have become saturated with respect to C inputs. SOC storage of added C input is a function of how far a soil is from its saturation level (saturation deficit) as well as of the C input level. We tested experimentally whether C saturation deficit and varying C input levels influenced soil C stabilization of added 13C in soils varying in SOC content and physicochemical characteristics. We incubated soil samples from seven agricultural sites for 2.5 years; the soils were closer to (i.e., A-horizon) or further from (i.e., C-horizon) their C saturation limit. At the initiation of the incubations, samples received low or high C input levels of 13C-labeled wheat straw. We also tested the effect of Ca addition and residue quality on a subset of these soils. We hypothesized that the proportion of C stabilized would be greater in samples with larger C saturation deficits (i.e., the C- versus A-horizon samples) and that the relative stabilization efficiency (i.e., ΔSOC/ΔC input) would decrease as C input level increased. We found that C saturation deficit influenced the stabilization of added residue at six out of the seven sites and C addition level affected the stabilization of added residue at four sites, corroborating both hypotheses. Increasing Ca availability or decreasing residue quality had no effect on the stabilization of added residue.
The amount of new C stabilized was significantly related to C saturation deficit, supporting the hypothesis that C saturation influenced C stabilization at all our sites. Our results suggest that soils with low C contents and degraded lands may have the greatest potential and efficiency to store added C because they are further from their saturation level.

Relevance: 10.00%

Abstract:

The soil C saturation concept suggests a limit to whole soil organic carbon (SOC) accumulation determined by inherent physicochemical characteristics of four soil C pools: unprotected, physically protected, chemically protected, and biochemically protected. Previous attempts to quantify soil C sequestration capacity have focused primarily on silt and clay protection and largely ignored the effects of soil structural protection and biochemical protection. We assessed two contrasting models of SOC accumulation, one with no saturation limit (i.e., linear first-order model) and one with an explicit soil C saturation limit (i.e., C saturation model). We isolated soil fractions corresponding to the C pools (i.e., free particulate organic matter (POM), microaggregate-associated C, silt- and clay-associated C, and non-hydrolyzable C) from eight long-term agroecosystem experiments across the United States and Canada. Due to the composite nature of the physically protected C pool, we fractionated it into mineral- vs. POM-associated C. Within each site, the number of fractions fitting the C saturation model was directly related to maximum SOC content, suggesting that a broad range in SOC content is necessary to evaluate fraction C saturation. The two sites with the greatest SOC range showed C saturation behavior in the chemically, biochemically, and some mineral-associated fractions of the physically protected pool. The unprotected pool and the aggregate-protected POM showed linear, nonsaturating behavior. Evidence of C saturation of chemically and biochemically protected SOC pools was observed at sites far from their theoretical C saturation level, while saturation of aggregate-protected fractions occurred in soils closer to their C saturation level.

Relevance: 10.00%

Abstract:

Current estimates of soil C storage potential are based on models or factors that assume linearity between C input levels and C stocks at steady state, implying that SOC stocks could increase without limit as C input levels increase. However, some soils show little or no increase in steady-state SOC stock with increasing C input levels, suggesting that SOC can become saturated with respect to C input. We used long-term field experiment data to assess alternative hypotheses of soil carbon storage with three simple models: a linear model (no saturation), a one-pool whole-soil C saturation model, and a two-pool mixed model with C saturation of a single C pool, but not the whole soil. The one-pool C saturation model best fit the combined data from 14 sites; four individual sites were best fit by the linear model, and no sites were best fit by the mixed model. These results indicate that existing agricultural field experiments generally have too small a range in C input levels to show saturation behavior, and verify the accepted linear relationship between soil C and C input used to model SOM dynamics. However, all sites combined and the site with the widest range in C input levels were best fit with the C saturation model. Nevertheless, the same site produced distinct effective stabilization capacity curves rather than an absolute C saturation level. We conclude that saturation of soil C does occur, and therefore the greatest efficiency in soil C sequestration will be in soils further from C saturation.
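The contrast between the first two models above can be sketched with toy steady-state forms. The parameter values and the particular saturation form are illustrative assumptions, not fits to the paper's data: a linear model whose steady-state C stock grows without limit in the input rate i, and a one-pool saturation model taken here as the steady state of dC/dt = i(1 - C/c_max) - k·C, which approaches c_max as i grows:

```python
def linear_model(i, a=10.0):
    """Linear (no-saturation) steady state: C stock proportional to input i."""
    return a * i

def saturation_model(i, c_max=60.0, k=0.02):
    """Steady state C_ss = i*c_max / (i + k*c_max) of the one-pool model
    dC/dt = i*(1 - C/c_max) - k*C; illustrative parameters."""
    return i * c_max / (i + k * c_max)

inputs = [1, 2, 4, 8, 16]                 # C input rates (arbitrary units)
lin = [linear_model(i) for i in inputs]
sat = [saturation_model(i) for i in inputs]
```

Under the saturation model the marginal gain per unit of extra input shrinks as the stock approaches c_max, which is exactly why the paper concludes that sequestration is most efficient in soils far from saturation.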

Relevance: 10.00%

Abstract:

Excessive grazing pressure is detrimental to plant productivity and may lead to declines in soil organic matter. Soil organic matter is an important source of plant nutrients, can enhance soil aggregation, limit soil erosion, and increase cation exchange and water holding capacities, and is therefore a key regulator of grassland ecosystem processes. Changes in grassland management which reverse the process of declining productivity can potentially lead to increased soil C. Thus, rehabilitation of areas degraded by overgrazing can potentially sequester atmospheric C. We compiled data from the literature to evaluate the influence of grazing intensity on soil C. Based on data contained within these studies, we ascertained a positive linear relationship between potential C sequestration and mean annual precipitation (MAP), which we extrapolated to estimate the global C sequestration potential from rehabilitation of overgrazed grassland. The GLASOD and IGBP DISCover data sets were integrated to generate a map of overgrazed grassland area for each of four severity classes on each continent. Our regression model predicted losses of soil C with decreased grazing intensity in drier areas (precipitation less than 333 mm yr^-1), but substantial sequestration in wetter areas. Most (93%) of the C sequestration potential occurred in areas with MAP less than 1800 mm. Universal rehabilitation of overgrazed grasslands could sequester approximately 45 Tg C yr^-1, most of which can be achieved simply by cessation of overgrazing and implementation of moderate grazing intensity. Institutional-level investments by governments may be required to sequester additional C.
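The regression-and-extrapolation step described above can be sketched with an ordinary least-squares fit. The precipitation and sequestration values below are hypothetical and only illustrate how a fitted line yields a crossover precipitation (below which the model predicts soil C losses) comparable to the 333 mm yr^-1 threshold reported:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x (minimal implementation)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical data (illustrative, not the paper's compilation):
# potential C sequestration (Mg C ha^-1 yr^-1) vs MAP (mm yr^-1)
map_mm = [200, 400, 800, 1200, 1600]
seq = [-0.05, 0.02, 0.18, 0.33, 0.50]

a, b = fit_line(map_mm, seq)
crossover = -a / b   # precipitation at which predicted sequestration is zero
```

Extrapolating such a line across the mapped overgrazed areas, weighted by each area's MAP, is what produces a global sequestration estimate of the kind quoted above.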

Relevance: 10.00%

Abstract:

A nutrient amendment experiment was conducted for two growing seasons in two alpine tundra communities to test the hypotheses that: (1) primary production is limited by nutrient availability, and (2) physiological and developmental constraints act to limit the responses of plants from a nutrient-poor community more than plants from a more nutrient-rich community to increases in nutrient availability. Experimental treatments consisted of N, P, and N+P amendments applied to plots in two physiognomically similar communities, dry and wet meadows. Extractable N and P from soils in nonfertilized control plots indicated that the wet meadow had higher N and P availability. Photosynthetic, nutrient uptake, and growth responses of the dominants in the two communities showed little difference in the relative capacity of these plants to respond to the nutrient additions. Aboveground production responses of the communities to the treatments indicated N availability was limiting to production in the dry meadow community while N and P availability colimited production in the wet meadow community. There was a greater production response to the N and N+P amendments in the dry meadow relative to the wet meadow, despite equivalent functional responses of the dominant species of both communities. The greater production response in the dry meadow was in part related to changes in community structure, with an increase in the proportion of graminoid and forb biomass, and a decrease in the proportion of community biomass made up by the dominant sedge Kobresia myosuroides. Species richness increased significantly in response to the N+P treatment in the dry meadow. Graminoid biomass increased significantly in the wet meadow N and N+P plots, while forb biomass decreased significantly, suggesting a competitive interaction for light. 
Thus, the difference in community response to nutrient amendments was not the result of functional changes at the leaf level of the dominant species, but rather was related to changes in community structure in the dry meadow, and to a shift from a nutrient to a light limitation of production in the wet meadow.

Relevância:

10.00%

Publicador:

Resumo:

There is a severe tendency in cyberlaw theory to delegitimize state intervention in the governance of virtual communities. Much of the existing theory makes one of two fundamentally flawed assumptions: that communities will always be best governed without the intervention of the state; or that the territorial state can best encourage the development of communities by creating enforceable property rights and allowing the market to resolve any disputes. These assumptions do not ascribe sufficient weight to the value-laden support that the territorial state always provides to private governance regimes, the inefficiencies that will tend to limit the development of utopian communities, and the continued role of the territorial state in limiting autonomy in accordance with communal values. In order to overcome these deterministic assumptions, this article provides a framework based upon the values of the rule of law through which to conceptualise the legitimacy of the private exercise of power in virtual communities. The rule of law provides a constitutional discourse that assists in considering appropriate limits on the exercise of private power. I argue that the private contractual framework that is used to govern relations in virtual communities ought to be informed by the values of the rule of law in order to more appropriately address the governance tensions that permeate these spaces. These values suggest three main limits to the exercise of private power: that governance is limited by community rules and that the scope of autonomy is limited by the substantive values of the territorial state; that private contractual rules should be general, equal, and certain; and that, most importantly, internal norms should be predicated upon the consent of participants.

Relevância:

10.00%

Publicador:

Resumo:

Several sets of changes have been made to motorcycle licensing in Queensland since 2007, with the aim of improving the safety of novice riders. These include a requirement that a motorcycle learner licence applicant must have held a provisional or open car licence for 12 months, and a 3-year limit on learner licence renewal. Additionally, a requirement to hold an RE (250 cc limited) class licence for 12 months before progressing to an R class licence was introduced for Q-RIDE. This paper presents analyses of licensing transaction data that examine the effects of the licensing changes on the duration for which the learner licence was held, the factors affecting this duration, and the extent to which the demographic characteristics of learner licence holders changed. The likely safety implications of the observed changes are discussed.

Relevância:

10.00%

Publicador:

Resumo:

Purpose: To compare subjective blur limits for cylinder and defocus. ---------- Method: Blur was induced with a deformable, adaptive-optics mirror when either the subjects’ own astigmatisms were corrected or when both astigmatisms and higher-order aberrations were corrected. Subjects were cyclopleged and had 5 mm artificial pupils. Black letter targets (0.1, 0.35 and 0.6 logMAR) were presented on white backgrounds. ---------- Results: For ten subjects, blur limits were approximately 50% greater for cylinder than for defocus (in diopters). While axis had considerable effects for some individuals, the overall effect of axis was not strong, with the 0° (or 180°) axis having about 20% greater limits than oblique axes. In a second experiment with text (equivalent in angle to N10 print at a 40 cm distance), cylinder blur limits for 6 subjects were approximately 30% greater than those for defocus; this percentage was slightly smaller than that found for the three letters. Blur limits for the text were intermediate between those for 0.35 logMAR and 0.6 logMAR letters. Extensive blur limit measurements for one subject with single letters did not show the expected interactions between target detail orientation and cylinder axis. ---------- Conclusion: Subjective blur limits for cylinder are 30-50% greater than those for defocus, with the overall influence of cylinder axis being about 20%.