Abstract:
Conventional planning and decision making, with their sectoral and territorial emphasis and flat-map-based processes, are no longer adequate or appropriate for the increased complexity confronting airport/city interfaces. These crowded and often contested governance spaces demand a more iterative and relational planning and decision-making approach. Emergent GIS-based planning and decision-making tools provide a mechanism which integrates and visually displays an array of complex data, frameworks and scenarios/expectations, often in ‘real time’ computations. In so doing, these mechanisms provide a common ground for decision making and facilitate a more ‘joined-up’ approach to airport/city planning. This paper analyses the contribution of the Airport Metropolis Planning Support System (PSS) to sub-regional planning in the Brisbane Airport case environment.
Abstract:
Visual modes of representation have always been very important in science and science education. Interactive computer-based animations and simulations offer new visual resources for chemistry education. Many studies have shown that students enjoy learning with visualisations, but few have explored how learning outcomes compare when teaching with or without visualisations. This study employs a quasi-experimental crossover research design and quantitative methods to measure the educational effectiveness (defined as the level of conceptual development on the part of students) of teaching chemistry with computer-based scientific visualisations versus teaching without them. In addition to finding that teaching with visualisations produced outcomes that were not significantly different from teaching without visualisations, the study also explored differences in outcomes for male and female students, students with different learning styles (visual, aural, kinesthetic) and students of differing levels of academic ability.
Abstract:
A new scaling analysis has been performed for the unsteady natural convection boundary layer under a downward facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages: a start-up stage, a transitional stage and a steady stage, which can be clearly identified in the analytical as well as the numerical results. Earlier scaling shows that the existing scaling laws of the boundary layer thickness, velocity and steady state time scale for the natural convection flow on a heated plate of uniform heat flux provide a very poor prediction of the Prandtl number dependency of the flow. However, those scalings perform very well with respect to the Rayleigh number and aspect ratio dependencies. In this study, a new Prandtl number scaling has been developed using a triple-layer integral approach for Pr > 1. It is seen that, in comparison to the direct numerical simulations, the new scaling performs considerably better than the previous scaling.
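Purely for orientation, and not reproducing the specific scales derived in this study, such start-up/steady-state scaling arguments typically take the following form, assuming the standard conduction-dominated early growth (all symbols here are illustrative):

```latex
% Start-up stage: the thermal layer grows by pure conduction;
% the steady-state time follows once this growth is arrested by
% convection at the steady thickness \delta_{T,s}, which itself
% depends on the Rayleigh and Prandtl numbers.
\[
  \delta_T \sim \sqrt{\kappa\, t},
  \qquad
  t_s \sim \frac{\delta_{T,s}^{2}}{\kappa}
\]
```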
Abstract:
Twitter is now well established as the world’s second most important social media platform, after Facebook. Its 140-character updates are designed for brief messaging, and its network structures are kept relatively flat and simple: messages from users are either public and visible to all (even to unregistered visitors using the Twitter website), or private and visible only to approved ‘followers’ of the sender; there are no more complex definitions of degrees of connection (family, friends, friends of friends) such as are available in other social networks. Over time, Twitter users have developed simple but effective mechanisms for working around these limitations: ‘#hashtags’, which enable the manual or automatic collation of all tweets containing the same #hashtag, as well as allowing users to subscribe to content feeds that contain only those tweets which feature specific #hashtags; and ‘@replies’, which allow senders to direct public messages even to users whom they do not already follow. This paper documents a methodology for extracting public Twitter activity data around specific #hashtags, and for processing these data in order to analyse and visualise the @reply networks existing between participating users, both overall, as a static network, and over time, to highlight the dynamic structure of @reply conversations. Such visualisations enable us to highlight the shifting roles played by individual participants, as well as the response of the overall #hashtag community to new stimuli, such as the entry of new participants or the availability of new information. Over longer timeframes, it is also possible to identify different phases in the overall discussion, or the formation of distinct clusters of preferentially interacting participants.
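As an illustration of the kind of processing described, a minimal sketch that builds a directed @reply/@mention network from a set of hashtag-matched tweets might look as follows. The input format (a list of dictionaries with 'user', 'text' and 'created_at' keys) is a hypothetical assumption, not the paper’s actual toolchain.

```python
import re
import networkx as nx

def build_reply_network(tweets):
    """Build a directed, weighted @reply/@mention graph from tweets.

    `tweets` is assumed to be an iterable of dicts with 'user', 'text'
    and 'created_at' keys (an assumed input format).
    """
    graph = nx.DiGraph()
    for tweet in tweets:
        sender = tweet['user']
        graph.add_node(sender)
        # Each @username in the tweet text becomes a directed, weighted edge
        for target in re.findall(r'@(\w+)', tweet['text']):
            if graph.has_edge(sender, target):
                graph[sender][target]['weight'] += 1
            else:
                graph.add_edge(sender, target, weight=1,
                               first_seen=tweet['created_at'])
    return graph

# Example snapshot metric for the static network:
# g = build_reply_network(tweets)
# in_degree = dict(g.in_degree(weight='weight'))  # who receives the most @replies
```

Time-sliced versions of the same graph (filtering tweets by 'created_at') would then give the dynamic @reply structure discussed above.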
Abstract:
A road bridge containing disused flatbed rail wagons as the primary deck superstructure was performance tested on a low-volume, high-axle-load road in Queensland, Australia; some key results are presented in this paper. A fully laden truck with a total weight of 28.88% of the serviceability design load prescribed in the Australian bridge code was used; its wheel positions were accurately captured using a high-speed camera and synchronised with the real-time deflections and strains measured at the critical members of the flatbed rail wagons. The strains remained well below yield and indicated the existence of composite action between the reinforced concrete slab pavement and the wagon deck. A three-dimensional grillage model was developed and calibrated using the test data, which established the structural adequacy of the rail wagons and the positive contribution of the reinforced concrete slab pavement in resisting high axle traffic loads on a single-lane bridge in the low-volume road network.
Abstract:
Internet and computer addiction has been a popular research area since the 1990s. Studies on Internet and computer addiction have usually been conducted in the US, and the investigation of computer and Internet addiction in different countries is an interesting area of research. This study investigates computer and Internet addiction among teenagers and Internet cafe visitors in Turkey. We administered a survey to 983 visitors in Internet cafes. The results show that Internet cafe visitors are usually teenagers, mostly middle- and high-school students, who are typically occupied with computer and Internet applications such as chat, e-mail, browsing and games. The teenagers come to the Internet cafe to spend time with friends and the computers. In addition, about 30% of cafe visitors admit to having an Internet addiction, and about 20% specifically mention the problems that they are having with the Internet. It is rather alarming to consider the types of activities that the teenagers are performing in an Internet cafe, their reasons for being there, the percentage of self-awareness about Internet addiction, and the lack of control of applications in the cafe.
Abstract:
An improved scaling analysis and direct numerical simulations are performed for the unsteady natural convection boundary layer adjacent to a downward facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages: a start-up stage, a transitional stage and a steady stage, which can be clearly identified in the analytical as well as the numerical results. Previous scaling shows that the existing scaling laws of the boundary layer thickness, velocity and steady state time scale for the natural convection flow on a heated plate of uniform heat flux provide a very poor prediction of the Prandtl number dependency of the flow. However, those scalings perform very well with respect to the Rayleigh number and aspect ratio dependencies. In this study, a modified Prandtl number scaling is developed using a triple-layer integral approach for Pr > 1. It is seen that, in comparison to the direct numerical simulations, the modified scaling performs considerably better than the previous scaling.
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
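The abstract does not state the metric definitions themselves. Purely as an illustration of what a design-level measure in this spirit could look like, the sketch below computes a hypothetical "classified attribute exposure" ratio over a simplified class model; the names and the metric itself are assumptions, not the thesis’ actual metrics.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str
    classified: bool   # attribute holds high-security data
    private: bool      # attribute is encapsulated behind accessors

@dataclass
class ClassDesign:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

def classified_attribute_exposure(design: List[ClassDesign]) -> float:
    """Hypothetical design-level metric: the fraction of classified
    (high-security) attributes that are not private, i.e. directly
    reachable from other, potentially low-security, parts of the design.
    Lower values indicate better encapsulation of sensitive data."""
    classified = [a for cls in design for a in cls.attributes if a.classified]
    if not classified:
        return 0.0
    exposed = sum(1 for a in classified if not a.private)
    return exposed / len(classified)
```

A ratio like this can be evaluated on two functionally equivalent designs (or two revisions of the same system) to compare their relative security in the manner described above.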
Abstract:
At St Thomas' Hospital, we have developed a computer program on a Titan graphics supercomputer to plan the stereotactic implantation of iodine-125 seeds for the palliative treatment of recurrent malignant gliomas. Use of the Gill-Thomas-Cosman relocatable frame allows planning and surgery to be carried out at different hospitals on different days. Stereotactic computed tomography (CT) and positron emission tomography (PET) scans are performed and the images transferred to the planning computer. The head, tumour and frame fiducials are outlined on the relevant images, and a three-dimensional model generated. Structures which could interfere with the surgery or radiotherapy, such as major vessels and shunt tubing, can also be outlined and included in the display. Catheter target and entry points are set using a three-dimensional cursor controlled by a set of dials attached to the computer. The program calculates and displays the radiation dose distribution within the target volume for various catheter and seed arrangements. The CT co-ordinates of the fiducial rods are used to convert catheter co-ordinates from CT space to frame space and to calculate the catheter insertion angles and depths. The surgically implanted catheters are after-loaded the next day and the seeds left in place for between 4 and 6 days, giving a nominal dose of 50 Gy to the edge of the target volume. Twenty-five patients have been treated so far.
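As a sketch of the kind of coordinate conversion described (not the program’s actual implementation), a least-squares rigid transform estimated from paired fiducial coordinates can map catheter target and entry points from CT space into frame space. The function names and array shapes are illustrative assumptions.

```python
import numpy as np

def ct_to_frame_transform(ct_fiducials, frame_fiducials):
    """Least-squares rigid transform (rotation R, translation t) mapping
    CT-space points onto frame-space points, estimated from paired
    fiducial coordinates (each an (N, 3) array, N >= 3)."""
    ct_mean = ct_fiducials.mean(axis=0)
    frame_mean = frame_fiducials.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (ct_fiducials - ct_mean).T @ (frame_fiducials - frame_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = frame_mean - R @ ct_mean
    return R, t

def to_frame_space(ct_points, R, t):
    """Apply the transform to catheter points given in CT space."""
    return ct_points @ R.T + t
```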
Abstract:
The integration of unmanned aircraft into civil airspace is a complex issue. One key question is whether unmanned aircraft can operate just as safely as their manned counterparts. The absence of a human pilot on board points to an obvious deficiency: the lack of an inherent see-and-avoid capability. To date, regulators have mandated that an “equivalent level of safety” be demonstrated before UAVs are permitted to routinely operate in civil airspace. This chapter proposes techniques, methods, and hardware integrations that together constitute a “sense-and-avoid” system designed to address the lack of a see-and-avoid capability in UAVs.
Abstract:
It is found in the literature that the existing scaling results for the boundary layer thickness, velocity and steady state time for the natural convection flow over an evenly heated plate provide a very poor prediction of the Prandtl number dependency of the flow. However, those scalings provide a good prediction of the dependency on the two other governing parameters, the Rayleigh number and the aspect ratio. Therefore, an improved scaling analysis using a triple-layer integral approach and direct numerical simulations have been performed for the natural convection boundary layer along a semi-infinite flat plate with uniform surface heat flux. This heat flux is a ramp function of time: the temperature gradient on the surface increases with time up to some specific ramp time and then remains constant. The growth of the boundary layer strongly depends on the ramp time. If the ramp time is sufficiently long, the boundary layer reaches a quasi-steady mode before the growth of the temperature gradient is completed. In this mode, the thermal boundary layer at first grows in thickness and then contracts with increasing time. However, if the ramp time is sufficiently short, the boundary layer develops differently; after the wall temperature gradient growth is completed, the boundary layer develops as though the start-up had been instantaneous.
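Written out, the ramp heat flux described here is simply the following, where q''_0 denotes the final flux and t_p the ramp time (the symbols are illustrative, chosen only to formalise the verbal description above):

```latex
\[
  q''(t) \;=\; q''_0 \,\min\!\left(\frac{t}{t_p},\, 1\right)
\]
```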
Abstract:
Background: Pre-participation screening is commonly used to measure and assess potential intrinsic injury risk. The single leg squat is one such clinical screening measure used to assess lumbopelvic stability and associated intrinsic injury risk. With the addition of a decline board, the single leg decline squat (SLDS) has been shown to reduce ankle dorsiflexion restrictions and to allow greater sagittal plane movement of the hip and knee. On this basis, the SLDS has been employed in the Cricket Australia physiotherapy screening protocols as a measure of lumbopelvic control in place of the more traditional single leg flat squat (SLFS). Previous research has failed to demonstrate which squatting technique allows for a more comprehensive assessment of lumbopelvic stability. Tenuous links are drawn in the literature between kinematics and hip strength measures for the single leg squat (SLS). Formal evaluation of subjective screening methods has also been suggested within the literature. Purpose: This study had several focal points, namely: 1) to compare the kinematic differences between the two single leg squatting conditions, primarily the five key kinematic variables fundamental to subjectively assessing lumbopelvic stability; 2) to determine the effect ankle dorsiflexion range of motion has on squat kinematics in the two squat techniques; 3) to examine the association between key kinematics and subjective physiotherapists’ assessment; and finally 4) to explore the association between key kinematics and hip strength. Methods: Nineteen (n=19) subjects performed five SLDS and five SLFS on each leg while being filmed by an 8-camera motion analysis system. Four hip strength measures (internal/external rotation and abduction/adduction) and ankle dorsiflexion range of motion were measured using a hand-held dynamometer and a goniometer respectively on 16 of these subjects. The same 16 participants were subjectively assessed by an experienced physiotherapist for lumbopelvic stability. Paired samples t-tests were performed on the five predetermined kinematic variables to assess the differences between squat conditions. A Bonferroni correction for multiple comparisons was used, which adjusted the significance level to p = 0.005 for the paired t-tests. Linear regressions were used to assess the relationship between kinematics, ankle range of motion and hip strength measures. Bivariate correlations between hip strength measures, kinematics and pelvic obliquity were employed to investigate any possible relationships. Results: 1) Significant kinematic differences between squats were observed in dominant (D) and non-dominant (ND) end of range hip external rotation (ND p < 0.001; D p = 0.004) and hip adduction kinematics (ND p < 0.001; D p < 0.001). For the mean angle, significant differences were observed only in the non-dominant leg, for hip adduction (p = 0.001) and hip external rotation (p < 0.001); 2) Significant linear relationships were observed between clinical measures of ankle dorsiflexion and sagittal plane kinematics, namely SLFS dominant ankle (p = 0.006; R2 = 0.429), SLFS non-dominant knee (p = 0.015; R2 = 0.352) and SLFS non-dominant ankle (p = 0.027; R2 = 0.305) kinematics. Only the dominant ankle (p = 0.020; R2 = 0.331) was found to have a relationship with the decline squat. 3) Strength measures had tenuous associations with the subjective assessments of lumbopelvic stability, with no significant relationships being observed.
4) For the non-dominant leg, external rotation strength and abduction strength were found to be significantly correlated with hip rotation kinematics (Newtons: r = 0.458, p = 0.049; normalised for bodyweight: r = 0.469, p = 0.043) and pelvic obliquity (normalised for bodyweight: r = 0.498, p = 0.030) respectively, for the SLFS only. No significant relationships were observed in the dominant leg for either squat condition. Some elements of the hip strength screening protocols had linear relationships with kinematics of the lower limb, particularly the sagittal plane movements of the knee and ankle. Strength measures had tenuous associations with the subjective assessments of lumbopelvic stability, with no significant relationships being observed. Discussion: The key finding of this study was that kinematic differences can occur at the hip without significant kinematic differences at the knee as a result of the introduction of a decline board. Further observations reinforce the role of limited ankle dorsiflexion range of motion in sagittal plane movement of the hip and knee and, in turn, multiplanar kinematics of the lower limb. The kinematic differences between conditions have clinical implications for screening protocols that employ frontal plane movement of the knee as a guide for femoral adduction and rotation. Subjects who returned stronger hip strength measurements also appeared to squat deeper, as characterised by differences in sagittal plane kinematics of the knee and ankle. Despite the aforementioned findings, the relationship between hip strength and lower limb kinematics remains largely tenuous in the assessment of lumbopelvic stability using the SLS. The association between kinematics and the subjective measures of lumbopelvic stability also remains tenuous between and within SLS screening protocols. More functional measures of hip strength are needed to further investigate these relationships. Conclusion: The type of SLS (flat or decline) should be taken into account when screening for lumbopelvic stability. Changes to lower limb kinematics, especially around the hip and pelvis, were observed with the introduction of a decline board despite no difference in frontal plane knee movements. Differences in passive ankle dorsiflexion range of motion yielded variations in knee and ankle kinematics during a self-selected single leg squatting task. Removing posterior ankle restrictions while using the knee as a guide to changes at the hip may therefore result in inaccurate screening of lumbopelvic stability. The relationship between sagittal plane lower limb kinematics and hip strength suggests that self-selected squat depth may be a useful predictor of lumbopelvic stability. Further research in this area is required.
Abstract:
A breaker restrike is an abnormal arcing phenomenon leading to possible breaker failure. Eventually, this failure leads to interruption of the transmission and distribution of the electricity supply system until the breaker is replaced. Before 2008, there was little evidence in the literature of monitoring techniques based on restrike measurement and interpretation produced during switching of capacitor banks and shunt reactor banks in power systems. In 2008 a non-intrusive radiometric restrike measurement method and a restrike hardware detection algorithm were developed by M.S. Ramli and B. Kasztenny. However, the limitations of the radiometric measurement method are a band-limited frequency response as well as limitations in amplitude determination. Current restrike detection methods and algorithms require the use of wide-bandwidth current transformers and high voltage dividers. A restrike switch model using the Alternative Transient Program (ATP) and Wavelet Transforms that supports diagnostics is proposed. Restrike phenomena thereby become the basis of a new diagnostic process using measurements, ATP and Wavelet Transforms for online interrupter monitoring. This research project investigates the restrike switch model parameter ‘A’ (the dielectric voltage gradient) in relation to normal and slowed cases of the contact opening velocity and the escalation voltages, which can be used as a diagnostic tool for a vacuum circuit-breaker (CB) at service voltages between 11 kV and 63 kV. During current interruption of an inductive load at current quenching or chopping, a transient voltage is developed across the contact gap. The dielectric strength of the gap should rise to a point that withstands this transient voltage. If it does not, the gap will flash over, resulting in a restrike. A straight line is fitted through the voltage points at flashover of the contact gap; this is the point at which the gap voltage has reached a value that exceeds the dielectric strength of the gap. This research shows that a change in the opening contact velocity of the vacuum CB produces a corresponding change in the slope of the gap escalation voltage envelope. To investigate the diagnostic process, the ATP restrike switch model was modified with contact opening velocity computation for restrike waveform signature analyses, along with experimental investigations. This also enhanced a mathematical CB model with an empirical dielectric model for SF6 (sulphur hexafluoride) CBs at service voltages above 63 kV and a generalised dielectric curve model for 12 kV CBs. A CB restrike can be predicted if there are similar restrike waveform signatures for measured and simulated waveforms. The restrike switch model applications are used for: computer simulations as virtual experiments, including predicting breaker restrikes; estimating the remaining interrupter life of SF6 puffer CBs; checking system stresses; assessing point-on-wave (POW) operations; and restrike detection algorithm development using Wavelet Transforms. A simulated high-frequency nozzle current magnitude was applied to an equation (derived from the literature) which can calculate the life extension of the interrupter of an SF6 high voltage CB. The restrike waveform signatures for a medium and high voltage CB identify possible failure mechanisms such as delayed opening, degraded dielectric strength and improper contact travel. The simulated and measured restrike waveform signatures are analysed using Matlab software for automatic detection.
Experimental investigation of a 12 kV vacuum CB was carried out for parameter determination, and a passive antenna calibration was also successfully developed with applications for field implementation. The degradation features were also evaluated with a predictive interpretation technique from the experiments, and the subsequent simulation indicates that the drop in voltage related to the slow opening velocity of the mechanism gives a measure of the degree of contact degradation. A predictive interpretation technique is a form of computer modelling for assessing switching device performance which allows one to vary a single parameter at a time; this is often difficult to do experimentally because of the variable contact opening velocity. The significance of this thesis outcome is that it is a non-intrusive method, developed using measurements, ATP and Wavelet Transforms, to predict and interpret breaker restrike risk. The measurements on high voltage circuit-breakers can identify degradation that can interrupt the distribution and transmission of an electricity supply system. It is hoped that the techniques for the monitoring of restrike phenomena developed by this research will form part of a diagnostic process that will be valuable for detecting breaker stresses relating to interrupter lifetime. Suggestions for future research, including a field implementation proposal to validate the restrike switch model for ATP system studies and the hot dielectric strength curve model for SF6 CBs, are given in Appendix A.
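As a rough sketch of the kind of signature analysis described, and not the thesis’ actual detection algorithm, high-frequency wavelet detail coefficients can flag candidate restrike instants in a recorded gap-voltage waveform, and a straight-line fit through the flashover peaks estimates the slope of the escalation-voltage envelope; the wavelet family, threshold and index mapping below are assumptions.

```python
import numpy as np
import pywt

def restrike_signature(voltage, sample_rate):
    """Flag candidate restrike instants and estimate the escalation-voltage
    envelope slope from a sampled gap-voltage waveform (NumPy array)."""
    # Level-1 detail coefficients emphasise the fast restrike transients
    _, detail = pywt.wavedec(voltage, 'db4', level=1)
    threshold = 5.0 * np.median(np.abs(detail))  # assumed detection threshold
    # Detail coefficients are roughly half-length, so map their indices
    # back to sample indices approximately and clip to the record length
    idx = np.nonzero(np.abs(detail) > threshold)[0] * 2
    idx = np.minimum(idx, len(voltage) - 1)
    times = idx / sample_rate
    peaks = np.abs(voltage[idx])
    # Straight line through the flashover peaks: its slope (V/s) changes
    # with the contact opening velocity of the breaker
    slope = np.polyfit(times, peaks, 1)[0] if len(idx) > 1 else 0.0
    return times, slope
```

Comparing the fitted envelope slope of a measured waveform against simulated normal and slowed-opening cases is one way such a signature could support the diagnostic interpretation described above.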