894 results for Non Standard Analysis
Abstract:
This paper uses the large-scale Cranet data to explore the extent of non-standard working time (NSWT) across Europe and to highlight the contrasts and similarities between two different varieties of capitalism (coordinated market economies and liberal market economies). We explore variations in the extent of different forms of NSWT (overtime, shift working and weekend working) within these two forms of capitalism, controlling for firm size, sector and the extent of employee voice. Overall, there was no strong link between the variety of capitalism and the use of overtime or weekend working, though shift working showed a clear distinction between the two varieties. Usage of NSWT in some service sectors was particularly high under both forms of capitalism, and service sector activities had a particularly marked influence on the use of overtime in liberal market economies. Surprisingly, strong employee voice was associated with greater use of NSWT.
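As a rough illustration of the kind of firm-level analysis this abstract describes, the hypothetical sketch below regresses a shift-working indicator on a liberal-market-economy dummy together with the controls named in the abstract. The file and column names are invented placeholders, not Cranet's actual variables.

```python
# A hypothetical sketch, not the paper's analysis: logistic regression of
# shift-working use on a liberal-market-economy (LME) indicator with the
# controls named in the abstract. All file and column names are invented.
import pandas as pd
import statsmodels.formula.api as smf

firms = pd.read_csv("cranet_firms.csv")  # hypothetical firm-level extract

model = smf.logit(
    "uses_shift_work ~ lme + log_firm_size + C(sector) + voice_strength",
    data=firms,
).fit()
print(model.summary())  # the 'lme' coefficient captures the varieties-of-capitalism contrast
```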
Abstract:
Overhead distribution lines are often exposed to lightning overvoltages, whose waveforms vary widely and can differ substantially from the standard impulse voltage waveform (1.2/50 μs). Different models have been proposed for predicting the strength of insulation subjected to impulses with non-standard waveforms. One of the most commonly used is the disruptive effect model, for which there are different methods of estimating the parameters required for its application. This paper evaluates the dielectric behavior of medium voltage insulators subjected to impulses with non-standard waveforms, as well as two methods for predicting their dielectric strength against such impulses. The test results for the critical lightning impulse flashover voltage (U50) and the volt-time characteristics obtained for the positive and negative polarities of different voltage waveforms are presented and discussed.
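For context, a minimal sketch of the disruptive effect model named in the abstract is given below: flashover is predicted when the integral of the voltage excess above an onset level, raised to a power k, reaches a critical value. The waveform and all three parameters (U0, k, DE_c) are illustrative assumptions, not fitted values from the paper.

```python
# Minimal sketch of the disruptive effect (DE) model. Flashover is predicted
# when the accumulated DE = integral of (v(t) - U0)^k, taken only where
# v > U0, reaches a critical value DE_c. All parameters here are assumed.
import numpy as np

def disruptive_effect(t, v, U0, k):
    """Accumulated DE; only the part of v above U0 contributes (trapezoid rule)."""
    excess = np.clip(v - U0, 0.0, None)
    y = excess**k
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

t = np.linspace(0.0, 100e-6, 5001)                      # s
v = 150e3 * (np.exp(-t / 68e-6) - np.exp(-t / 0.4e-6))  # V, illustrative double-exponential surge
U0, k, DE_c = 90e3, 1.0, 0.5                            # assumed onset (V), exponent, critical DE (V*s)

de = disruptive_effect(t, v, U0, k)
print(f"DE = {de:.3g} V*s -> flashover predicted: {de >= DE_c}")
```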
Abstract:
Non-failure analysis aims at inferring that predicate calls in a program will never fail. This type of information has many applications in functional/logic programming. It is essential for determining lower bounds on the computational cost of calls, useful in the context of program parallelization, instrumental in partial evaluation and other program transformations, and has also been used in query optimization. In this paper, we recast the non-failure analysis proposed by Debray et al. as an abstract interpretation, which not only allows us to investigate it within a standard and well-understood theoretical framework, but also has several practical advantages. It allows us to incorporate non-failure analysis into a standard, generic abstract interpretation engine. The analysis thus benefits from the fixpoint propagation algorithm, which leads to improved information propagation. The analysis also takes advantage of the multi-variance of the generic engine, so that it is now able to infer separate non-failure information for different call patterns. Moreover, the implementation is simpler, and allows non-failure and covering analyses to be performed alongside other analyses, such as those for modes and types, in the same framework. Finally, besides the precision improvements and the additional simplicity, our implementation (in the Ciao/CiaoPP multiparadigm programming system) also shows better efficiency.
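To make the fixpoint-propagation idea concrete, here is a toy sketch of Kleene iteration over a two-point abstract domain ("never fails" vs "may fail"). The predicates, their dependencies and the lattice are invented for illustration; the actual CiaoPP engine is far more elaborate (multi-variant, goal-dependent).

```python
# Toy sketch of fixpoint propagation over the two-point lattice
# {never_fails, may_fail}. Program structure is invented for illustration.
NF, MAYF = "never_fails", "may_fail"

def join(a, b):
    """Least upper bound: NF only if both arguments are NF."""
    return NF if a == b == NF else MAYF

# pred -> (local value from its own clauses, callees whose results it joins)
program = {
    "p": (NF, ["q", "r"]),
    "q": (NF, []),
    "r": (MAYF, []),   # e.g. a clause whose tests are not covering
}

result = {p: NF for p in program}   # optimistic start, refined downward
changed = True
while changed:                      # Kleene iteration to a fixpoint
    changed = False
    for p, (base, callees) in program.items():
        new = base
        for c in callees:
            new = join(new, result[c])
        if new != result[p]:
            result[p], changed = new, True

print(result)   # {'p': 'may_fail', 'q': 'never_fails', 'r': 'may_fail'}
```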
Abstract:
Aim A new method of penumbral analysis is implemented which allows an unambiguous determination of field size and of penumbra size and quality for small fields and other non-standard fields. Both source occlusion and lateral electronic disequilibrium affect the size and shape of cross-axis profile penumbrae; each is examined in detail. Method A new method of penumbral analysis is implemented in which the square of the derivative of the cross-axis profile is plotted. The resultant graph displays two peaks in place of the two penumbrae. This allows a strong visualisation of the quality of a field penumbra, as well as a mathematically consistent method of determining field size (the distance between the two peaks' maxima) and penumbra (the full width at tenth maximum of each peak). Cross-axis profiles were simulated in a water phantom at a depth of 5 cm using Monte Carlo modelling, for field sizes between 5 and 30 mm. The field size and penumbra size of each field were calculated using the method above, as well as the traditional definitions set out in IEC976. The effects of source occlusion and lateral electronic disequilibrium on the penumbrae were isolated by repeating the simulations using an electron spot size of 0 mm and with electron transport removed, respectively. Results All field sizes calculated using the traditional and proposed methods agreed within 0.2 mm. The penumbra size measured using the proposed method was systematically 1.8 mm larger than with the traditional method at all field sizes. The size of the source had a larger effect on the size of the penumbra than did lateral electronic disequilibrium, particularly at very small field sizes. Conclusion Traditional methods of calculating field size and penumbra proved to be mathematically adequate for small fields. However, the field size definition proposed in this study would be more robust for other non-standard fields, such as flattening filter free beams. Source occlusion plays a bigger role than lateral electronic disequilibrium in small field penumbra size.
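The proposed metric is easy to reproduce on a synthetic profile. The sketch below squares the derivative of a Gaussian-blurred step profile (a stand-in for the paper's Monte Carlo data), takes field size as the distance between the two peak maxima and penumbra as the full width at tenth maximum of a peak; the 20 mm field and 1.5 mm blur are illustrative choices.

```python
# Sketch of the proposed penumbral analysis on a synthetic profile:
# field size = distance between maxima of (dD/dx)^2; penumbra = FWTM of a peak.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-30.0, 30.0, 6001)                # mm, 0.01 mm grid
profile = (np.abs(x) <= 10.0).astype(float)       # ideal 20 mm field
profile = gaussian_filter1d(profile, sigma=150)   # 150 samples = 1.5 mm blur

d2 = np.gradient(profile, x) ** 2                 # squared derivative
left = d2[x < 0]
i_l = int(np.argmax(left))                        # left-penumbra peak
i_r = int(np.argmax(d2[x >= 0])) + left.size      # right-penumbra peak

def fwtm(idx):
    """Full width at tenth maximum of the d2 peak containing index idx."""
    tenth = d2[idx] / 10.0
    lo = idx
    while lo > 0 and d2[lo] >= tenth:
        lo -= 1
    hi = idx
    while hi < d2.size - 1 and d2[hi] >= tenth:
        hi += 1
    return x[hi] - x[lo]

print(f"field size = {x[i_r] - x[i_l]:.2f} mm")            # ~20 mm
print(f"penumbrae  = {fwtm(i_l):.2f} / {fwtm(i_r):.2f} mm")
```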
Abstract:
Non-standard finite difference methods (NSFDM) introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced a NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy for this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with better accuracy than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods such as the Roe scheme and ENO schemes.
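The exactness property cited for linear hyperbolic laws is easy to demonstrate: for u_t + a u_x = 0, the conservative upwind difference with the non-standard step ratio a*dt/dx = 1 propagates any profile exactly, since each step is a pure grid shift. The sketch below verifies this on a periodic square pulse; the grid and initial data are illustrative, not the paper's test cases.

```python
# Exactness of the upwind NSFDM for linear advection u_t + a u_x = 0 when the
# non-standard choice a*dt/dx = 1 is made: each update reduces to a grid shift.
import numpy as np

a, dx = 1.0, 0.02
dt = dx / a                     # non-standard step: CFL number exactly 1
x = np.arange(0.0, 2.0, dx)
u0 = np.where((x > 0.4) & (x < 0.8), 1.0, 0.0)   # square pulse

u = u0.copy()
for _ in range(25):             # 25 steps -> shift by 25*dx = 0.5 (periodic BC)
    u = u - (a * dt / dx) * (u - np.roll(u, 1))  # conservative upwind form

print("max error vs exact shift:", np.abs(u - np.roll(u0, 25)).max())  # exactly 0.0
```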
Abstract:
Usually referral letters are the only means of communication between general practitioners and specialists in the health area. However, they are inadequate if important basic data are omitted. The aim of this study was to compare the content of standard and non-standard letters. A total of 1956 files from the Oral Medicine Service were consecutively evaluated (March 1996 to September 2000). Key items were considered for analysis and the results were stored in a database using the Epi Info 6.04 program. The χ² test (α = 0.05) was applied to the results. Of the 1956 files examined, 34% (662) had a referral letter, 31% of them being standard letters and 69% non-standard letters. Most standard letters (87%) were from professionals of public health institutions. The largest percentage discrepancies between standard and non-standard letters were observed for patient address (14.90 vs 1.32%), patient age (54.81 vs 9.47%), chief complaint (32.21 vs 8.37%), fundamental lesion (29.33 vs 13.66%), and symptoms (27.81 vs 15.42%). Statistically significant differences were observed for patient age, professional referring the patient, chief complaint, and site of the lesion. The quality and quantity of the information differed significantly between the two types of letters. The standard letters were more complete and contained information commonly absent from the non-standard letters. We suggest the use of standard letters to improve the quality of communication among professionals.
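The χ² comparison reported here can be sketched directly from the abstract's figures. The counts below are an approximate reconstruction (54.81% of roughly 205 standard letters vs 9.47% of roughly 457 non-standard letters recording patient age) and whether the percentages denote presence or omission is an assumption; the point is only the shape of the test.

```python
# Approximate sketch of the reported chi-square comparison for one item
# (patient age). Counts are reconstructed from the abstract's percentages
# and are illustrative, not the study's raw data.
from scipy.stats import chi2_contingency

standard_n, nonstandard_n = 205, 457        # ~31% / ~69% of the 662 letters
recorded = [round(0.5481 * standard_n), round(0.0947 * nonstandard_n)]
table = [
    [recorded[0], standard_n - recorded[0]],        # standard: item recorded / not
    [recorded[1], nonstandard_n - recorded[1]],     # non-standard: item recorded / not
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")   # p << 0.05, as reported
```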
Abstract:
Over the last few years Facebook has become a widespread and continuously expanding medium of communication. Being a new medium of social interaction, Facebook produces its own communication style. My focus of analysis is how Facebook users from the city of Malaga create this style by means of phonic features typical of the Andalusian variety, and how the users reflect on the use of these features. This project is based on a theoretical framework which combines variationist sociolinguistics with computer-mediated communication (CMC) research to study the emergence of a style peculiar to online social networks. In a corpus of texts produced by Facebook users from three zones of Malaga, I have analysed the use of non-standard phonic features and then compared them with the same features in a reference corpus collected on three beaches of Malaga. From this comparison it can be deduced that the social and linguistic factors analysed work differently in real and virtual speech. Owing to these different uses, we can consider the peculiar electronic communication of Facebook a style constrained by the electronic medium. It is a style which serves the users to create social meaning and to express their linguistic identities.
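The corpus comparison described can be caricatured in a few lines: count non-standard phonic spellings in each corpus and normalise per 1000 words. The feature patterns and file names below are invented stand-ins, not the study's actual variables or data.

```python
# Toy sketch of a two-corpus frequency comparison for non-standard phonic
# spellings. Feature regexes and corpus files are invented for illustration.
import re

features = {
    "s-aspiration/elision": r"\be(?:h)?toy\b",  # e.g. "etoy"/"ehtoy" for "estoy"
    "d-elision": r"\bcansao\b",                 # e.g. "cansao" for "cansado"
}

def rate_per_1000(text):
    """Occurrences of all features per 1000 running words."""
    hits = sum(len(re.findall(p, text)) for p in features.values())
    return 1000.0 * hits / max(len(text.split()), 1)

facebook = open("facebook_corpus.txt", encoding="utf-8").read()   # hypothetical file
reference = open("beach_corpus.txt", encoding="utf-8").read()     # hypothetical file
print(f"Facebook:  {rate_per_1000(facebook):.1f} per 1000 words")
print(f"Reference: {rate_per_1000(reference):.1f} per 1000 words")
```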
Abstract:
Across the last four decades, the structure of the Australian labour market has changed profoundly as non-standard forms of employment have become more prevalent. According to many researchers, the growth of non-standard work has been driven by employee preferences, particularly among married women, for greater flexibility to balance paid work with domestic responsibilities and other non-work related pursuits. In contrast, other researchers argue that the increasing prevalence of non-standard employment reflects employer demands for greater staffing flexibility. From this perspective, non-standard forms of employment are considered to have a negative effect on work-family balance. This paper explores whether non-standard employment is associated with reduced or heightened work-to-family conflict, and tests whether experiences vary by gender. It concentrates on three common forms of non-standard employment: part-time employment, casual and fixed-term work contracts, and flexible scheduling practices (such as evening work, weekend work and irregular rostering). Analysis is based on 2299 employed parents from the first wave of the Household, Income and Labour Dynamics in Australia (HILDA) Survey. Results show that few scheduling measures are significant determinants of work-family balance. However, part-time employment is associated with reduced work-to-family strain for both men and women, even after controlling for various other employment and household related characteristics. Casual employment, in contrast, incurs the cost of poorer work-family balance for men. Surprisingly, HILDA data show that overall men experience greater work-to-family strain than women.
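A hypothetical sketch of the kind of model this abstract implies is given below: regressing a work-to-family strain score on employment-form indicators plus controls, separately by gender. All variable and file names are invented; the study's actual specification and estimator may differ.

```python
# Hypothetical sketch, not the paper's model: strain regressed on
# employment-form dummies and controls, estimated separately by gender.
import pandas as pd
import statsmodels.formula.api as smf

parents = pd.read_csv("hilda_wave1_parents.csv")   # hypothetical extract, n = 2299

formula = ("wf_strain ~ part_time + casual + fixed_term + evening_work"
           " + weekend_work + irregular_roster + hours + age + n_children")
for sex, group in parents.groupby("sex"):
    fit = smf.ols(formula, data=group).fit()
    print(sex, fit.params[["part_time", "casual"]])   # signs would mirror the reported findings
```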
Abstract:
This paper presents a higher-order beam-column formulation that can capture the geometrically non-linear behaviour of steel framed structures containing a multiplicity of slender members. Despite advances in computational frame software, analyses of large frames can still be problematic from a numerical standpoint, and so the intent of the paper is to fulfil a need for versatile, reliable and efficient non-linear analysis of general steel framed structures with very many members. Following a comprehensive review of numerical frame analysis techniques, a fourth-order element is derived and implemented in an updated Lagrangian formulation; it is able to predict the flexural buckling, snap-through buckling and large-displacement post-buckling behaviour of typical structures whose responses have been reported by independent researchers. The solutions are shown to be efficacious in terms of a balance of accuracy and computational expediency. The higher-order element forms a basis for augmenting the geometrically non-linear approach with material non-linearity through the refined plastic hinge methodology described in the companion paper.
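For orientation, here is a schematic sketch of the incremental-iterative (Newton-Raphson) driver that a geometrically non-linear element formulation of this kind plugs into. The tangent-stiffness and internal-force callables stand in for the paper's fourth-order element routines; the single-DOF hardening spring is purely a placeholder.

```python
# Schematic incremental-iterative Newton-Raphson solve for non-linear
# structures. tangent/internal-force callables are placeholders for the
# paper's fourth-order updated-Lagrangian element routines.
import numpy as np

def solve_nonlinear(K_t, f_int, f_ext, u0, steps=10, tol=1e-8, max_iter=20):
    """Apply f_ext in increments; Newton-iterate to equilibrium at each level."""
    u = u0.copy()
    for n in range(1, steps + 1):
        target = (n / steps) * f_ext                # current load level
        for _ in range(max_iter):
            residual = target - f_int(u)            # out-of-balance force
            if np.linalg.norm(residual) < tol:
                break
            u = u + np.linalg.solve(K_t(u), residual)   # tangent update
    return u

# Tiny usage example: a 1-DOF hardening spring, f = k*u + c*u**3
k, c = 2.0, 0.5
u = solve_nonlinear(lambda u: np.array([[k + 3 * c * u[0] ** 2]]),
                    lambda u: np.array([k * u[0] + c * u[0] ** 3]),
                    np.array([4.0]), np.array([0.0]))
print(u)   # converged displacement under the full load (~1.37)
```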
Abstract:
In the companion paper, a fourth-order element in an updated Lagrangian formulation was presented to handle geometric non-linearities. The formulation of the present paper extends this to include material non-linearity by proposing a refined plastic hinge approach for analysing large steel framed structures with many members, for which contemporary algorithms based on the plastic zone approach can be computationally problematic. This concept advances conventional plastic hinge approaches, as the refined plastic hinge technique allows for gradual yielding (recognized as distributed plasticity across the element section) and a condition of full plasticity, as well as including strain hardening. It is founded on interaction yield surfaces specified analytically in terms of force resultants, and achieves accurate and rapid convergence for large frames in which geometric and material non-linearity are significant. The solutions are shown to be efficacious in terms of a balance of accuracy and computational expediency. In addition to its numerical efficiency, the present versatile approach is able to capture different kinds of material and geometric non-linearities in general applications of steel structures, and thereby offers an efficacious and accurate means of assessing the non-linear behaviour of structures in engineering practice.
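The gradual-yielding idea can be caricatured as a hinge stiffness scaling that degrades from 1 (elastic) to 0 (fully plastic) as the force point moves between an initial-yield and a full-yield surface. The linear P-M interaction surface and the parabolic degradation law below are illustrative assumptions, not the paper's calibrated surfaces.

```python
# Toy sketch of a refined-plastic-hinge stiffness degradation law. The
# linear P-M surface and parabolic transition are illustrative choices.
def yield_ratio(P, M, Py, Mp):
    """Force-state parameter alpha from a linear P-M interaction surface."""
    return abs(P) / Py + abs(M) / Mp

def hinge_stiffness_factor(alpha, alpha_i=0.5):
    """1.0 while elastic (alpha <= alpha_i); 0.0 at full plasticity
    (alpha >= 1); parabolic transition modelling distributed plasticity."""
    if alpha <= alpha_i:
        return 1.0
    if alpha >= 1.0:
        return 0.0
    t = (alpha - alpha_i) / (1.0 - alpha_i)
    return 1.0 - t * t

for P, M in [(0.2, 0.2), (0.4, 0.3), (0.5, 0.45), (0.6, 0.5)]:
    a = yield_ratio(P, M, Py=1.0, Mp=1.0)
    print(f"alpha = {a:.2f} -> stiffness factor {hinge_stiffness_factor(a):.2f}")
```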
Abstract:
Objectives To investigate whether a sudden temperature change between neighboring days has a significant impact on mortality. Methods A Poisson generalized linear regression model combined with a distributed lag non-linear model was used to estimate the association of temperature change between neighboring days with mortality in a subtropical Chinese city during 2008–2012. Temperature change was calculated as the current day's temperature minus the previous day's temperature. Results A significant effect of temperature change between neighboring days on mortality was observed. Temperature increase was significantly associated with elevated mortality from non-accidental and cardiovascular diseases, while temperature decrease had a protective effect on non-accidental and cardiovascular mortality. Males and people aged 65 years or older appeared to be more vulnerable to the impact of temperature change. Conclusions Temperature increase between neighboring days has a significant adverse impact on mortality. Health mitigation strategies responding to climate change should take temperature variation between neighboring days into account.
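A simplified sketch of this modelling strategy follows: a Poisson GLM for daily deaths with the neighboring-day temperature change entered at a few lags. The full study used a distributed lag non-linear model (DLNM); plain lagged-linear terms here are a deliberate simplification, and the input file and column names are hypothetical.

```python
# Simplified sketch (not a full DLNM): Poisson GLM of daily deaths on
# neighboring-day temperature change at lags 0-3. File/columns hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

daily = pd.read_csv("city_daily.csv", parse_dates=["date"])  # hypothetical data
daily["tchange"] = daily["tmean"].diff()                     # today minus yesterday
for lag in (1, 2, 3):
    daily[f"tchange_l{lag}"] = daily["tchange"].shift(lag)

fit = smf.glm(
    "deaths ~ tchange + tchange_l1 + tchange_l2 + tchange_l3 + tmean",
    data=daily.dropna(),
    family=sm.families.Poisson(),
).fit()
print(fit.params.filter(like="tchange"))   # lag-specific log relative risks
```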
Abstract:
In a search for new phenomena in a signature suppressed in the standard model of elementary particles (SM), we compare the inclusive production of events containing a lepton, a photon, significant transverse momentum imbalance (MET), and a jet identified as containing a b-quark, to SM predictions. The search uses data produced in proton-antiproton collisions at 1.96 TeV corresponding to 1.9 fb⁻¹ of integrated luminosity taken with the CDF detector at the Fermilab Tevatron. We find 28 lepton+photon+MET+b events versus an expectation of 31.0 +4.1/−3.5 events. If we further require events to contain at least three jets and large total transverse energy, simulations predict that the largest SM source is top-quark pair production with an additional radiated photon, ttbar+photon. In the data we observe 16 ttbar+photon candidate events versus an expectation from SM sources of 11.2 +2.3/−2.1. Assuming the difference between the observed number and the predicted non-top-quark total is due to SM top-quark production, we estimate the ttbar+photon cross section to be 0.15 ± 0.08 pb.
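The cross-section arithmetic takes the standard form sigma = (N_obs − N_bkg) / (A·ε·L). In the back-of-the-envelope sketch below, the non-top background and the combined acceptance-times-efficiency are assumed placeholders chosen only to reproduce the quoted order of magnitude; they are not CDF's actual numbers.

```python
# Back-of-the-envelope counting-experiment arithmetic:
# sigma = (N_obs - N_bkg) / (acceptance * efficiency * luminosity).
# N_bkg and acc_eff below are illustrative assumptions, not CDF values.
N_obs = 16          # ttbar+photon candidate events
N_bkg = 8.0         # assumed predicted non-top-quark total (illustrative)
L = 1.9e3           # integrated luminosity in pb^-1 (1.9 fb^-1)
acc_eff = 0.028     # assumed acceptance x efficiency (illustrative)

sigma = (N_obs - N_bkg) / (acc_eff * L)
print(f"sigma(ttbar+photon) ~ {sigma:.2f} pb")   # ~0.15 pb with these inputs
```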