376 results for deep level


Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND OR CONTEXT Thermodynamics is a core concept for mechanical engineers yet notoriously difficult. Evidence suggests students struggle to understand and apply the fundamental concepts of thermodynamics, with analysis indicating a problem with student learning and engagement. A contributing factor is that thermodynamics is a 'science involving concepts based on experiments' (Mayhew 1990), with subject matter that cannot be completely defined a priori. To succeed, students must engage in a deep, holistic approach while taking ownership of their learning. The difficulty in achieving this often manifests itself in students 'not getting' the principles and declaring thermodynamics 'hard'.

PURPOSE OR GOAL Traditionally, students practise and "learn" the application of thermodynamics in their tutorials; however, these do not consider prior conceptions (Holman & Pilling 2004). As 'hands-on' learning is the desired outcome of tutorials, it is pertinent to study methods of improving their efficacy. Within the Australian context, the format of thermodynamics tutorials has remained relatively unchanged over the decades, relying anecdotally on a primarily didactic pedagogical approach. Such approaches are not conducive to deep learning (Ramsden 2003), with students often disengaged from the learning process. Evidence suggests (Haglund & Jeppsson 2012), however, that a deeper level and ownership of learning can be achieved using a more constructivist approach, for example through self-generated analogies. This pilot study aimed to collect data to support the hypothesis that the 'difficulty' of thermodynamics is associated with the pedagogical approach of tutorials rather than actual difficulty of the subject content or a deficiency in students.

APPROACH Successful application of thermodynamic principles requires solid knowledge of the core concepts. Typically, tutorial sessions guide students in this application. However, a lack of deep and comprehensive understanding can lead to student confusion in the applications, resulting in learning the 'process' of application without understanding 'why'. The aim of this study was to gain empirical data on student learning of both concepts and application within thermodynamics tutorials. The approach taken for data collection and analysis was:
1. Four concurrent tutorial streams were timetabled to examine student engagement and learning in traditional 'didactic' (3 weeks) and non-traditional (3 weeks) formats. In each week, two of the selected four sessions were traditional and two non-traditional, providing a control group for each week.
2. The non-traditional tutorials involved activities designed to promote student-centred deep learning. Specific pedagogies employed were: self-generated analogies, constructivism, peer-to-peer learning, inquiry-based learning, ownership of learning, and active learning.
3. After a three-week period, the teaching styles of the selected groups were switched, allowing each group to experience both approaches with the same tutor. This also acted to minimise any influence of tutor personality or style on the data.
4. At the conclusion of the trial, participants completed a 'five-minute essay' on how they liked the sessions, a small questionnaire modelled on the SPQ designed by Biggs (1987) as modified by Christo and Hoang (2013), and a small formative quiz to gauge the level of learning achieved.

DISCUSSION Preliminary results indicate that overall students respond positively to in-class demonstrations (inquiry-based learning) and active learning activities. Within the active learning exercises, the current data suggest students preferred individual rather than group or peer-to-peer activities. Preliminary results from the open-ended questions, such as "What did you like most/least about this tutorial" and "Do you have other comments on how this tutorial could better facilitate your learning", however, indicated polarising views on the non-traditional tutorial. Some students responded that they really liked the format and its emphasis on understanding the concepts, while others were very vocal that they 'hated' the style and just wanted the solutions to be presented by the tutor.

RECOMMENDATIONS/IMPLICATIONS/CONCLUSION Preliminary results indicated a mixed but overall positive response by students to more collaborative tutorials employing tasks that promote inquiry-based, peer-to-peer, active, and ownership-of-learning activities. Preliminary results from student feedback support evidence that students learn differently, and running tutorials focusing on only one pedagogical approach (typically didactic) may not be beneficial to all students. Further, preliminary data suggest that the learning and teaching styles of both students and tutor are important in promoting deep learning. Data collection is still ongoing and scheduled for completion at the end of First Semester (Australian academic calendar). The final paper will examine in more detail the results and analysis of this project.


In this paper, a refined classical noise prediction method, based on VISSIM and the FHWA noise prediction model, is formulated to analyze the sound level contributed by traffic on the Nanjing Lukou airport connecting freeway before and after widening. The aims of this research are to (i) assess the traffic noise impact on the Nanjing University of Aeronautics and Astronautics (NUAA) campus before and after freeway widening, (ii) compare the prediction results with field data to test the accuracy of the method, and (iii) analyze the relationship between traffic characteristics and sound level. The results indicate that the mean difference between model predictions and field measurements is acceptable. The traffic composition study indicates that buses (including mid-sized trucks) and heavy goods vehicles contribute a significant proportion of the total noise power despite their low traffic volume. In addition, speed analysis offers an explanation for the minor differences in noise level across time periods. Future work will aim at reducing model error by focusing on noise barrier analysis using the FEM/BEM method and modifying the vehicle noise emission equation through field experimentation.
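The dominance of a few loud vehicles over many quiet ones follows from how sound levels combine: decibels add on an energy basis, not arithmetically. A minimal sketch of that combination rule; the vehicle counts and per-vehicle levels below are assumed for illustration, not the paper's VISSIM/FHWA values:

```python
import math

def combine_levels(levels_db):
    """Energy-sum individual sound levels (dB) into one total level:
    L_total = 10 * log10(sum(10 ** (L_i / 10)))."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Illustrative per-vehicle levels at a receiver (assumed values):
cars = [55.0] * 100    # many cars, each comparatively quiet
heavy = [75.0] * 5     # few heavy goods vehicles, each loud

total = combine_levels(cars + heavy)
heavy_share = 10 ** (combine_levels(heavy) / 10) / 10 ** (total / 10)
```

With these assumed numbers, the five heavy vehicles account for over 80% of the total sound energy despite being under 5% of the traffic volume, mirroring the composition result reported above.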


Objectives Impaired muscle function is common in knee osteoarthritis (OA). Numerous biochemical molecules have been implicated in the development of OA; however, these have only been identified in the joint and serum. This study compared the expression of interleukin-15 (IL-15) and Forkhead box protein O1 (FoxO1) in the muscle of patients with knee OA and asymptomatic individuals, and examined whether IL-15 was also present in the joint and serum. Method Muscle and blood samples were collected from 19 patients with diagnosed knee OA and 10 age-matched asymptomatic individuals. Synovial fluid and muscle biopsies were collected from the OA group during knee replacement surgery. IL-15 and FoxO1 were measured in the skeletal muscle. IL-15 abundance was also analysed in the serum of both groups and in synovial fluid from the OA group. Knee extensor strength was measured and correlated with IL-15 and FoxO1 in the muscle. Results FoxO1 protein expression was higher (p=0.04), whereas IL-15 expression was lower (p=0.02), in the muscle of the OA group. Strength was also lower in the OA group and was inversely correlated with FoxO1 expression. No correlation was found between IL-15 levels in the joint, muscle or serum. Conclusion Skeletal muscle, particularly the quadriceps, is affected in people with knee OA, where elevated FoxO1 protein expression was associated with reduced muscle strength. While IL-15 protein expression in the muscle was lower in the knee OA group, no correlation was found between the expression of IL-15 protein in the muscle, joint and serum, which suggests that inflammation is regulated differently within these tissues.


This paper introduces a machine-learning-based system for controlling a robotic manipulator using visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images, without any prior knowledge of configuration, is shown for the first time. We build upon the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was demonstrated to perform target reaching after training in simulation. Naively transferring the network to real hardware and real observation failed, but experiments show that the network works when camera images are replaced with synthetic images.
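The core of the DQN approach is the Q-learning update it is built on. The sketch below is a tabular stand-in, not the paper's system: the actual work approximates Q-values with a deep network over raw pixels, and the states, actions, and hyperparameters here are assumed for illustration.

```python
import random

# Tabular stand-in for a deep Q-network: same Bellman update, but
# Q(s, a) is a lookup table instead of a network over raw-pixel images.
GAMMA = 0.9     # discount factor (assumed)
ALPHA = 0.1     # learning rate (assumed)
EPSILON = 0.1   # exploration probability (assumed)
ACTIONS = (-1, 0, 1)   # e.g. decrease / hold / increase one joint angle

Q = {}

def q(s, a):
    return Q.get((s, a), 0.0)

def choose_action(s):
    """Epsilon-greedy policy over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(s, a))

def update(s, a, r, s_next):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = r + GAMMA * max(q(s_next, a2) for a2 in ACTIONS)
    Q[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))
```

In the DQN setting, the table lookup is replaced by a network forward pass and the update becomes a gradient step toward the same target.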


To investigate the threshold level of defocus that induces a measurable objective change in accommodation response to a target at an intermediate distance.


Purification of drinking water is routinely achieved by use of conventional coagulants and disinfection procedures. However, there are instances, such as flood events, when turbidity reaches extreme levels, while natural organic matter (NOM) may be an issue throughout the year. Consequently, there is a need to develop technologies which can effectively treat water of high turbidity during flood events and NOM content year round. Our hypothesis was that pebble matrix filtration potentially offered a relatively cheap, simple and reliable means to clarify such challenging water. Therefore, a laboratory-scale pebble matrix filter (PMF) column was used to evaluate turbidity and NOM pre-treatment performance on 2013 Brisbane River flood water. Since the high turbidity was only a seasonal and short-term problem, the general applicability of pebble matrix filters for NOM removal was also investigated. A 1.0 m deep bed of pebbles (the matrix), partly in-filled with either sand or crushed glass, was tested, upon which was situated a layer of granular activated carbon (GAC). Turbidity was measured as a surrogate for suspended solids (SS), whereas total organic carbon (TOC) and UV absorbance at 254 nm were measured as surrogate parameters for NOM. Experiments using natural flood water showed that, without the addition of any chemical coagulants, PMF columns achieved at least 50% turbidity reduction when the source water contained moderate hardness levels. For harder water samples, above 85% turbidity reduction was obtained. The ability to remove 50% of turbidity without chemical coagulants may represent significant cost savings to water treatment plants, with added environmental benefits accruing from reduced sludge formation. A TOC reduction of 35-47% and a UV-254 nm reduction of 24-38% were also observed.
In addition to turbidity removal during flood periods, the ability to remove NOM using the pebble matrix filter throughout the year may have the benefit of reducing disinfection by-products (DBP) formation potential and coagulant demand at water treatment plants. Final head losses were remarkably low, reaching only 11 cm at a filtration velocity of 0.70 m/h.
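The removal efficiencies quoted above are simple percentage reductions of a surrogate parameter across the filter. A minimal sketch; the inlet/outlet values are assumed, chosen only to fall within the reported ranges:

```python
def percent_removal(inlet, outlet):
    """Percentage reduction of a surrogate parameter across the filter."""
    return 100.0 * (inlet - outlet) / inlet

# Assumed inlet/outlet values (turbidity in NTU, TOC in mg/L):
turbidity_removal = percent_removal(120.0, 55.0)   # about 54%
toc_removal = percent_removal(10.0, 6.0)           # 40%
```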


This paper proposes a new multi-stage mine production timetabling (MMPT) model to optimise open-pit mine production operations, including drilling, blasting and excavating, under real-time mining constraints. The MMPT problem is formulated as a mixed integer programming model and can be solved optimally for small MMPT instances by IBM ILOG CPLEX. Due to NP-hardness, an improved shifting-bottleneck-procedure algorithm based on an extended disjunctive graph is developed to solve large MMPT instances effectively and efficiently. Extensive computational experiments are presented to validate the proposed algorithm, which is able to efficiently obtain near-optimal operational timetables for mining equipment units. Its advantages are indicated by sensitivity analysis under various real-life scenarios. The proposed MMPT methodology is a promising candidate for implementation as a tool for the mining industry because it is straightforwardly modelled as a standard scheduling model, efficiently solved by the heuristic algorithm, and flexibly expanded by adopting additional industrial constraints.
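The multi-stage structure (each block is drilled, then blasted, then excavated, with one equipment unit per stage) is a flow shop at heart. A minimal sketch of exact timetabling by exhaustive search on a toy instance; block names and durations are assumed. Exhaustive enumeration is only viable at small sizes, which is the same scalability limit that leads the paper to a shifting-bottleneck heuristic for large instances:

```python
from itertools import permutations

# proc[block] = (drill, blast, excavate) durations on the three
# equipment units (assumed values).
proc = {"B1": (3, 2, 4), "B2": (2, 4, 3), "B3": (4, 1, 2)}

def makespan(order):
    """Completion time of the last excavation for a given block sequence."""
    finish = [0, 0, 0]  # finish time of the last job on each stage
    for b in order:
        for s in range(3):
            # A stage starts when both the unit is free and the block's
            # previous stage is done.
            start = max(finish[s], finish[s - 1] if s > 0 else 0)
            finish[s] = start + proc[b][s]
    return finish[-1]

# Exact search over all block sequences (small instances only).
best = min(permutations(proc), key=makespan)
```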


Regulated transcription controls the diversity, developmental pathways and spatial organization of the hundreds of cell types that make up a mammal. Using single-molecule cDNA sequencing, we mapped transcription start sites (TSSs) and their usage in human and mouse primary cells, cell lines and tissues to produce a comprehensive overview of mammalian gene expression across the human body. We find that few genes are truly 'housekeeping', whereas many mammalian promoters are composite entities composed of several closely separated TSSs, with independent cell-type-specific expression profiles. TSSs specific to different cell types evolve at different rates, whereas promoters of broadly expressed genes are the most conserved. Promoter-based expression analysis reveals key transcription factors defining cell states and links them to binding-site motifs. The functions of identified novel transcripts can be predicted by coexpression and sample ontology enrichment analyses. The functional annotation of the mammalian genome 5 (FANTOM5) project provides comprehensive expression profiles and functional annotation of mammalian cell-type-specific transcriptomes with wide applications in biomedical research.
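One common way to quantify where a promoter sits on the housekeeping-to-cell-type-specific spectrum is a specificity index such as tau. This is a generic illustration of that idea, not necessarily the FANTOM5 analysis pipeline, and the expression profiles are invented:

```python
def tau(expr):
    """Specificity index tau: 0 for perfectly uniform (housekeeping-like)
    expression, 1 for expression confined to a single cell type."""
    m = max(expr)
    if m == 0:
        return 0.0
    return sum(1 - x / m for x in expr) / (len(expr) - 1)

# Invented TSS expression profiles across five cell types (arbitrary units):
housekeeping_like = [10, 9, 11, 10, 10]
cell_type_specific = [0, 0, 50, 0, 1]
```

Applied per TSS rather than per gene, such a measure captures the paper's observation that one composite promoter can contain both broadly expressed and cell-type-specific start sites.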


This thesis is an examination of how organisational context variables affect the performance of new product development (NPD) teams; specifically, the extent to which team empowerment climate and supervisory support for creativity impact NPD team performance. Moreover, this thesis is a step forward in the ongoing development of work role performance theory, examining Griffin et al.'s (2007) work role performance model in the context of NPD teams. It addresses the lack of research exploring work role performance dimensions in NPD teams.


This chapter provides a critical legal geography of outer Space, charting the topography of the debates and struggles around its definition, management, and possession. As the emerging field of critical legal geography demonstrates, law is not a neutral organiser of space, but is instead a powerful cultural technology of spatial production. Drawing on legal documents such as the Outer Space Treaty and the Moon Treaty, as well as on the analogous and precedent-setting legal geographies of Antarctica and the deep seabed, the chapter addresses key questions about the legal geography of outer Space, questions which are of growing importance as available satellite slots in the geostationary orbit diminish, Space weapons and mining become increasingly viable, Space colonisation and tourism emerge, and questions about Space's legal status grow in intensity. Who owns outer Space? Who, and whose rules, govern what may or may not (literally) take place there? Is the geostationary orbit the sovereign property of the equatorial states beneath it, as these states argued in the 1970s? Or is it part of the res communis, the common property of humanity, which currently legally characterises outer Space? Does Space belong to no one, or to everyone? As challenges to the existing legal spatiality of outer Space emerge from spacefaring states, companies, and non-spacefaring states, it is particularly critical that the current spatiality of Space is understood and considered.


Network Interfaces (NIs) are used in Multiprocessor Systems-on-Chip (MPSoCs) to connect CPUs to a packet-switched Network-on-Chip (NoC). In this work we introduce a new NI architecture for our hierarchical CoreVA-MPSoC. The CoreVA-MPSoC targets streaming applications in embedded systems. The main contribution of this paper is a system-level analysis of different NI configurations, considering both software and hardware costs for NoC communication. Different configurations of the NI are compared using a benchmark suite of 10 streaming applications. The best-performing NI configuration shows an average speedup of 20 for a CoreVA-MPSoC with 32 CPUs compared to a single CPU. Furthermore, we present physical implementation results using a 28 nm FD-SOI standard cell technology. A hierarchical MPSoC with 8 CPU clusters and 4 CPUs in each cluster running at 800 MHz requires an area of 4.56 mm².
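The reported average speedup of 20 on 32 CPUs corresponds to a parallel efficiency of 20/32 = 62.5% of ideal linear scaling. A minimal sketch of that arithmetic; the runtimes are hypothetical, chosen only to reproduce the reported speedup:

```python
def speedup(t_single, t_parallel):
    """Ratio of single-CPU runtime to parallel runtime."""
    return t_single / t_parallel

def efficiency(s, n_cpus):
    """Fraction of ideal linear scaling actually achieved."""
    return s / n_cpus

# Hypothetical runtimes reproducing the reported average speedup of 20
# on the 32-CPU configuration:
s = speedup(320.0, 16.0)   # 20.0
e = efficiency(s, 32)      # 0.625, i.e. 62.5% of linear scaling
```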


Deep packet inspection is a technology which enables the examination of the content of information packets being sent over the Internet. The Internet was originally set up using "end-to-end connectivity" as part of its design, allowing nodes of the network to send packets to all other nodes of the network without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a "dumb" network, with "intelligent" devices (such as personal computers) at the end or "last mile" of the network. The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Yet deep packet inspection allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even manipulate it. Indeed, the existence and implementation of deep packet inspection may profoundly challenge the egalitarian and open character of the Internet. This paper will firstly elaborate on what deep packet inspection is and how it works from a technological perspective, before going on to examine how it is being used in practice by governments and corporations. Legal problems have already been created by the use of deep packet inspection, involving fundamental rights (especially of Internet users) such as freedom of expression and privacy, as well as more economic concerns such as competition and copyright. These issues will be considered, and an assessment will be made of the conformity of the use of deep packet inspection with law. The paper will concentrate on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. It will also incorporate a more fundamental assessment of the values that it is desirable for the Internet to respect and exhibit (such as openness, equality and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.
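The distinction between a "dumb" network element and one performing deep packet inspection can be sketched in a few lines: the former reads only header fields, while the latter also reads the payload. A toy illustration under assumed field names, not a real protocol stack or DPI engine:

```python
# A toy "packet": header fields plus an opaque payload. The field layout
# and the matched pattern are illustrative only.

def shallow_route(packet):
    """'Dumb'-network handling: forward on the destination header field,
    ignoring payload content entirely."""
    return packet["dst"]

def deep_inspect(packet, pattern):
    """DPI-style handling: additionally examine the payload content."""
    return pattern in packet["payload"]

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2",
       "payload": b"GET /index.html HTTP/1.1"}
```

The contrast is the whole point: a shallow router treats all payloads as equal, whereas a DPI-capable element can condition its behaviour on what the payload contains.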


Derailments due to lateral collisions between heavy road vehicles and passenger trains at level crossings (LCs) are a serious safety issue. A variety of countermeasures in terms of traffic laws, communication technology and warning devices are used to minimise LC accidents; however, innovative civil infrastructure solutions are rare. This paper presents a study of the efficacy of a guard rail system (GRS) in minimising the derailment potential of trains laterally struck by heavy road vehicles at LCs. For this purpose, a three-dimensional dynamic model is formulated of a passenger train running on a ballasted track fitted with guard rails and subject to lateral impact by a road truck. This model is capable of predicting lateral collision-induced derailments with and without a GRS. Based on dynamic simulations, the derailment prevention mechanism of the GRS is illustrated. Sensitivities of key parameters of the GRS, such as the flangeway width, the installation height and contact friction, to its efficacy are reported. It is shown that guard rails can enhance derailment safety against lateral impacts at LCs.


Objectives To examine the effects of overall level and timing of physical activity (PA) on changes from a healthy body mass index (BMI) category over 12 years in young adult women. Patients and Methods Participants in the Australian Longitudinal Study on Women's Health (younger cohort, born 1973-1978) completed surveys between 2000 (age 22-27 years) and 2012 (age 34-39 years). Physical activity was measured in 2000, 2003, 2006, and 2009 and was categorized as very low, low, active, or very active at each survey, and a cumulative PA score for this 9-year period was created. Logistic regression was used to examine relationships between PA accumulated across all surveys (cumulative PA model) and PA at each survey (critical periods PA model), with change in BMI category (from healthy to overweight or healthy to obese) from 2000 to 2012. Results In women with a healthy BMI in 2000, there were clear dose-response relationships between accumulated PA and transition to overweight (P=.03) and obesity (P<.01) between 2000 and 2012. The critical periods analysis indicated that very active levels of PA at the 2006 survey (when the women were 28-33 years old) and active or very active PA at the 2009 survey (age 31-36 years) were most protective against transitioning to overweight and obesity. Conclusion These findings confirm that maintenance of very high PA levels throughout young adulthood will significantly reduce the risk of becoming overweight or obese. There seems to be a critical period for maintaining high levels of activity at the life stage when many women face competing demands of caring for infants and young children.


The third edition of Australian Standard AS1742, Manual of Uniform Traffic Control Devices, Part 7 provides a method of calculating the sighting distance required to safely proceed at passive level crossings, based on the physics of moving vehicles. This required distance becomes greater with higher line speeds and slower, heavier vehicles, so the method may return quite a long sighting distance. However, at such distances there are also concerns about whether drivers can reliably identify a train in order to make an informed decision on whether it is safe to proceed across the level crossing. In order to determine whether drivers are able to make reliable judgements to proceed in these circumstances, this study assessed the distance at which a train first becomes identifiable to a driver, as well as their ability to detect the movement of the train. A site was selected in Victoria, and 36 participants with good visual acuity observed 4 trains in the 100-140 km/h range. While most participants could detect the train from a very long distance (2.2 km on average), they could only detect that the train was moving at much shorter distances (1.3 km on average). Large variability was observed between participants, with 4 participants consistently detecting trains later than the others. Participants tended to improve in their capacity to detect the presence of the train with practice, but a similar trend was not observed for detection of the train's movement. Participants were consistently poor at judging the approach speed of trains, with large underestimations at all investigated distances.
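The physics behind the sighting-distance method reduces to a simple requirement: the train must be visible for at least the time a road vehicle needs to react and clear the crossing, and the distance grows with line speed and with slower vehicles. A simplified sketch of that relationship; the reaction time and crossing geometry below are assumed values, not the figures or exact formula of AS1742.7:

```python
def sighting_distance(train_kmh, vehicle_kmh, travel_length_m,
                      reaction_time_s=2.5):
    """Distance (m) the train covers while the road vehicle reacts and
    clears the crossing. Simplified model with assumed parameters."""
    t_clear = reaction_time_s + travel_length_m / (vehicle_kmh / 3.6)
    return (train_kmh / 3.6) * t_clear

# A 140 km/h train versus a heavy vehicle needing 40 m to clear at 10 km/h:
d = sighting_distance(140, 10, 40)   # roughly 660 m
```

Halving the vehicle speed nearly doubles the required distance, which is why slow, heavy vehicles drive the long sighting distances that motivated this study.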