242 results for Filmic approach methods
Abstract:
1. Biodiversity, water quality and ecosystem processes in streams are known to be influenced by the terrestrial landscape over a range of spatial and temporal scales. Lumped attributes (i.e. per cent land use) are often used to characterise the condition of the catchment; however, they are not spatially explicit and do not account for the disproportionate influence of land located near the stream or connected by overland flow. 2. We compared seven landscape representation metrics to determine whether accounting for the spatial proximity and hydrological effects of land use explains additional variability in indicators of stream ecosystem health. The landscape metrics included a lumped metric, four inverse-distance-weighted (IDW) metrics based on distance to the stream or survey site, and two modified IDW metrics that also accounted for the level of hydrologic activity (HA-IDW). Ecosystem health data were obtained from the Ecological Health Monitoring Programme in Southeast Queensland, Australia, and included measures of fish, invertebrates, physicochemistry and nutrients collected during two seasons over 4 years. Linear models were fitted to the stream indicators and landscape metrics, by season, and compared using an information-theoretic approach. 3. Although no single metric was most suitable for modelling all stream indicators, lumped metrics rarely performed as well as other metric types. Metrics based on proximity to the stream (IDW and HA-IDW) were more suitable for modelling fish indicators, while the HA-IDW metric based on proximity to the survey site generally outperformed others for invertebrates, irrespective of season. There was consistent support for metrics based on proximity to the survey site (IDW or HA-IDW) for all physicochemical indicators during the dry season, while a HA-IDW metric based on proximity to the stream was suitable for five of the six physicochemical indicators in the post-wet season. Only one nutrient indicator was tested, and results showed that catchment area had a significant effect on the relationship between land use metrics and algal stable isotope ratios in both seasons. 4. Spatially explicit methods of landscape representation can clearly improve the predictive ability of many empirical models currently used to study the relationship between landscape, habitat and stream condition. A comparison of different metrics may provide clues about causal pathways and mechanistic processes behind correlative relationships and could be used to target restoration efforts strategically.
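A minimal sketch (not the study's actual GIS workflow; the `landuse` flags and `dist_to_stream` distances are hypothetical inputs) of how a lumped per cent land-use metric differs from an inverse-distance-weighted one, in which land near the stream carries disproportionate weight:

```python
import numpy as np

def lumped_metric(landuse):
    """Per cent of catchment cells under the land use (spatially non-explicit)."""
    return landuse.mean() * 100.0

def idw_metric(landuse, dist_to_stream, eps=1.0):
    """Weight each cell by inverse distance to the stream, so near-stream
    land use contributes disproportionately (eps avoids division by zero)."""
    w = 1.0 / (dist_to_stream + eps)
    return 100.0 * np.sum(w * landuse) / np.sum(w)

rng = np.random.default_rng(0)
landuse = rng.integers(0, 2, size=1000)        # hypothetical land-use flags per cell
dist = rng.uniform(0.0, 5000.0, size=1000)     # hypothetical distances to stream (m)
print(lumped_metric(landuse), idw_metric(landuse, dist))
```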
Abstract:
Time series classification has been extensively explored in many fields of study. Most methods are based on the historical or current information extracted from data. However, if interest is in a specific future time period, methods that directly relate to forecasts of time series are much more appropriate. An approach to time series classification is proposed based on a polarization measure of forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, and a stationarity correction is considered when necessary. Kernel estimators are then employed to approximate forecast densities, and discrepancies of forecast densities of pairs of time series are estimated by a polarization measure, which evaluates the extent to which two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct the supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
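As a rough illustration of the core idea (not the paper's exact polarization measure; the overlap coefficient and the simulated forecast replicates below are stand-ins), kernel density estimates of two sets of bootstrap forecast replicates can be compared by how much they overlap:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_overlap(replicates_a, replicates_b, grid_size=512):
    """Overlap coefficient: integral of min(f, g); 1 means identical forecast densities."""
    lo = min(replicates_a.min(), replicates_b.min())
    hi = max(replicates_a.max(), replicates_b.max())
    grid = np.linspace(lo, hi, grid_size)
    f = gaussian_kde(replicates_a)(grid)       # kernel estimate of density A
    g = gaussian_kde(replicates_b)(grid)       # kernel estimate of density B
    return np.sum(np.minimum(f, g)) * (grid[1] - grid[0])

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 500)    # hypothetical forecast replicates, series A
b = rng.normal(0.5, 1.2, 500)    # hypothetical forecast replicates, series B
print(density_overlap(a, b))     # low overlap -> large discrepancy between forecasts
```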
The Hofstedian approach: suggestions for a conceptual development of comparative journalism studies
Abstract:
In recent years there has been a burgeoning amount of research comparing journalistic practices in a wide range of countries around the world. Much of this literature has tended to focus on identifying what the similarities and differences between these journalistic cultures are. Most importantly, research has focused on answering the question of whether, particularly in the age of globalisation, ‘a journalistic culture’ may exist. While there has been some evidence that there may indeed be a convergence of journalistic cultures, studies have at the same time found that important differences still persist. However, most of the literature has so far tended to concentrate purely on the differences and similarities, without examining in detail why they exist. In this context, the author argues that employing a cross-cultural approach rooted in anthropology can at least partially trace the development of the differences in particular by linking them to the wider concept of cultural differences between countries. Specifically, the paper evaluates the usefulness of applying the value systems approach, as designed by Dutch anthropologist Geert Hofstede, to journalism research. By examining some of the few studies that have employed Hofstede’s approach, the paper argues that value systems can provide a classification on a conceptual level for investigating how journalism is practiced around the world. In the light of complaints in the Asia-Pacific region that imported Western models of journalism are not in line with cultural values, this approach can also provide some basis from which to develop future approaches to journalism education.
Abstract:
These lecture notes describe the use and implementation of a framework in which mathematical as well as engineering optimisation problems can be analysed. The foundations of the framework and the algorithms described, Hierarchical Asynchronous Parallel Evolutionary Algorithms (HAPEAs), lie upon traditional evolution strategies and incorporate the concepts of multi-objective optimisation, hierarchical topology, asynchronous evaluation of candidate solutions, parallel computing and game strategies. In a step-by-step approach, the numerical implementation of EAs and HAPEAs for solving multi-criteria optimisation problems is presented, providing the reader with the knowledge to reproduce this hands-on training in his or her academic or industrial environment.
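As a starting point, the following is a minimal sketch of the traditional (mu + lambda) evolution strategy that HAPEAs build upon; the hierarchical, asynchronous, parallel, multi-objective and game-strategy layers are omitted, and the sphere objective is a hypothetical stand-in:

```python
import numpy as np

def sphere(x):
    """Hypothetical test objective: sum of squares, minimum at the origin."""
    return float(np.sum(x ** 2))

def evolution_strategy(objective, dim=5, mu=10, lam=40, sigma=0.3, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    parents = rng.normal(0.0, 1.0, size=(mu, dim))
    for _ in range(generations):
        # Each offspring mutates a randomly chosen parent with Gaussian noise.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + sigma * rng.normal(size=(lam, dim))
        pool = np.vstack([parents, offspring])
        fitness = np.array([objective(ind) for ind in pool])
        parents = pool[np.argsort(fitness)[:mu]]   # (mu + lambda) truncation selection
    return parents[0]

best = evolution_strategy(sphere)
print(best, sphere(best))
```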
Abstract:
Objective: To assess the usability and validity of the Primary Care Practice Improvement Tool (PC-PIT), a practice performance improvement tool based on 13 key elements identified by a systematic review. It was co-created with a range of partners and designed specifically for primary health care. Design: This pilot study examined the PC-PIT using a formative assessment framework and mixed-methods research design. Setting and participants: Six high-functioning general practices in Queensland, Australia, between February and July 2013. A total of 28 staff participated: 10 general practitioners, six practice or community nurses and 12 administrators (four practice managers, one business manager and eight reception or general administrative staff). Main outcome measures: Readability, content validity and staff perceptions of the PC-PIT. Results: The PC-PIT offers an appropriate and acceptable approach to internal quality improvement in general practice. Quantitative assessment scores and qualitative data from all staff identified two areas in which the PC-PIT required modification: a reduction in the indicative reading age, and simplification of governance-related terms and concepts. Conclusion: The PC-PIT provides an innovative approach to address the complexity of organisational improvement in general practice and primary health care. This initial validation will be used to develop a suite of supporting, high-quality and free-to-access resources to enhance the use of the PC-PIT in general practice. Based on these findings, a national trial is now underway.
Abstract:
Much has been written about transferring class materials and teaching techniques to digital platforms, but less has been written about applying heuristic organizing constructs in the same manner. With the transformation of learning ecologies over the past decades, as well as requirements to adjust to constantly shifting digital tools and environments, the challenges for learning facilitators are to adapt and change readily, and to engage a changing learner demographic. Most important, however, is to engage effectively with learners in these online environments. This article reviews the existing literature on the heuristic construct of academagogy [1] and applies a case study methodology to discuss the first application of academagogy to the online delivery of an undergraduate design unit. Through a focus on effective teaching and learning techniques, the transfer from face-to-face (f2f) to the digital realm is explored through four main focal points: tools for teaching, teaching and learning, communicating with students, and effective teaching methods. These four focal points are then used to discuss ways to meet the challenges of teaching online, including how they create new dimensions in teaching practice and how the digital experience changes learning experiences. The article concludes with reflection on, and consolidation of, the similarities and differences between the face-to-face and digital deliveries, and by suggesting changes to the academagogic heuristic to enable its use more easily in a digital space.
Abstract:
Optimisation is a fundamental step in the turbine design process, especially in the development of non-classical designs of radial-inflow turbines working with high-density fluids in low-temperature Organic Rankine Cycles (ORCs). The present work discusses the simultaneous optimisation of the thermodynamic cycle and the one-dimensional design of radial-inflow turbines. In particular, the work describes the integration of a 1D meanline preliminary design code adapted to real gases and a performance estimation approach for radial-inflow turbines into an established ORC cycle analysis procedure. The optimisation approach is split into two distinct loops: the inner loop operates on the 1D design based on the parameters received from the outer loop, which optimises the thermodynamic cycle. The method uses parameters including brine flow rate, temperature and working fluid, shifting assumptions such as head and flow coefficients into the optimisation routine. The design and optimisation method is then validated against published benchmark cases. Finally, using the same conditions, the coupled optimisation procedure is extended to the preliminary design of a radial-inflow turbine with R143a as the working fluid under realistic geothermal conditions and compared against results from the commercially available software RITAL from Concepts-NREC.
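The two-loop structure can be sketched schematically as below; every model in the sketch (the placeholder turbine-efficiency and cycle-power functions, the assumed head and flow coefficient bounds) is hypothetical and merely stands in for the real-gas meanline code and cycle analysis described above:

```python
import numpy as np
from scipy.optimize import minimize

def turbine_efficiency(design, cycle):
    """Placeholder 1D meanline model: penalise departure from assumed near-optimal
    head and flow coefficients (psi, phi); a real code would evaluate losses."""
    psi, phi = design
    return 0.9 - 2.0 * (psi - 0.9) ** 2 - 5.0 * (phi - 0.3) ** 2

def inner_design_loop(cycle):
    """Inner loop: size the turbine for the cycle state handed down by the outer loop."""
    res = minimize(lambda d: -turbine_efficiency(d, cycle), x0=[1.0, 0.4],
                   bounds=[(0.5, 1.5), (0.1, 0.6)])
    return -res.fun                          # best turbine efficiency for this cycle

def cycle_power(cycle):
    """Placeholder cycle model: net power from inlet pressure and delivered efficiency."""
    p_in, = cycle
    return inner_design_loop(cycle) * (p_in - 1.0) * np.exp(-0.1 * p_in)

# Outer loop: optimise the thermodynamic-cycle parameter(s).
outer = minimize(lambda c: -cycle_power(c), x0=[10.0], bounds=[(2.0, 40.0)])
print(outer.x, -outer.fun)
```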
Abstract:
Purpose: To establish whether a passive or an active technique of planning target volume (PTV) definition and treatment for non-small cell lung cancer (NSCLC) delivers the most effective results. This literature review assesses the advantages and disadvantages reported in recent studies of each, while assessing the validity of the two approaches for planning and treatment. Methods: A systematic review of literature focusing on the planning and treatment of radiation therapy for NSCLC tumours. Different approaches published in recent articles are subjected to critical appraisal in order to determine their relative efficacy. Results: Free-breathing (FB) is the optimal method for performing planning scans for patients and departments, as it involves no significant increase in cost, workload or education. Maximum intensity projection (MIP) is the fastest form of delineation; however, it is less accurate than the ten-phase overlap approach for computed tomography (CT). Although gating has been shown to reduce margins and facilitate sparing of organs at risk, treatment times can be longer and planning time can be as much as 15 times higher for intensity modulated radiation therapy (IMRT). This raises issues with patient comfort and stabilisation, increasing the chance of geometric miss. Stereotactic treatments can take up to 3 hours to deliver, along with increases in planning and treatment time, as well as the additional hardware, software and training required. Conclusion: Four-dimensional computed tomography (4DCT) is superior to 3DCT, with the passive FB approach optimal for PTV delineation and treatment. Departments should use a combination of MIP with visual confirmation to ensure coverage for stage 1 disease. Stages 2-3 should be delineated using ten phases overlaid. Stereotactic and gated treatments for early stage disease should be used accordingly; FB-IMRT is optimal for later stage disease.
Abstract:
We consider a discrete agent-based model on a one-dimensional lattice, where each agent occupies L sites and attempts movements over a distance of d lattice sites. Agents obey a strict simple exclusion rule. A discrete-time master equation is derived using a mean-field approximation and careful probability arguments. In the continuum limit, nonlinear diffusion equations that describe the average agent occupancy are obtained. Averaged discrete simulation data are generated and shown to compare very well with the solution to the derived nonlinear diffusion equations. This framework allows us to approach a lattice-free result using all the advantages of lattice methods. Since different cell types have different shapes and speeds of movement, this work offers insight into population-level behavior of collective cellular motion.
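A minimal sketch of such a discrete simulation, under illustrative assumptions (reflecting boundaries, random sequential updates, unbiased moves), shows the exclusion rule for agents of size L attempting jumps of d sites:

```python
import numpy as np

def simulate(num_sites=200, num_agents=40, L=2, d=1, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    occupied = np.zeros(num_sites, dtype=bool)
    # Place agents on non-overlapping blocks of L sites (store left-edge positions).
    lefts = np.arange(num_agents) * (num_sites // num_agents)
    for x in lefts:
        occupied[x:x + L] = True
    for _ in range(steps):
        for i in rng.permutation(num_agents):        # random sequential update
            x = lefts[i]
            new = x + (d if rng.random() < 0.5 else -d)
            if new < 0 or new + L > num_sites:
                continue                             # reflecting boundaries
            target = np.zeros(num_sites, dtype=bool)
            target[new:new + L] = True
            current = np.zeros(num_sites, dtype=bool)
            current[x:x + L] = True
            # Exclusion rule: target sites (other than the agent's own) must be empty.
            if np.any(occupied & target & ~current):
                continue
            occupied = (occupied & ~current) | target
            lefts[i] = new
    return lefts, occupied

lefts, occ = simulate()
print(np.sort(lefts))   # final agent positions for averaging over realisations
```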
Abstract:
Computational models in physiology often integrate functional and structural information from a large range of spatio-temporal scales from the ionic to the whole organ level. Their sophistication raises both expectations and scepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace and refine animal experiments. A fundamental requirement to fulfil these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present study aims at informing strategies for validation by elucidating the complex interrelations between experiments, models and simulations in cardiac electrophysiology. We describe the processes, data and knowledge involved in the construction of whole ventricular multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations, and experiments are intertwined, in an assemblage that is a system itself, namely the model-simulation-experiment (MSE) system. Validation must therefore take into account the complex interplay between models, simulations and experiments. Key points for developing strategies for validation are: 1) understanding sources of bio-variability is crucial to the comparison between simulation and experimental results; 2) robustness of techniques and tools is a pre-requisite to conducting physiological investigations using the MSE system; 3) definition and adoption of standards facilitates interoperability of experiments, models and simulations; 4) physiological validation must be understood as an iterative process that defines the specific aspects of electrophysiology the MSE system targets, and is driven by advancements in experimental and computational methods and the combination of both.
Abstract:
This thesis addresses the topic of real-time decision making by driverless (autonomous) city vehicles, i.e. their ability to make appropriate driving decisions in non-simplified urban traffic conditions. After addressing the state of research and explaining the research question, the thesis presents solutions for the subcomponents which are relevant for decision making with respect to information input (World Model), information output (Driving Maneuvers), and the real-time decision making process. The World Model is a software component developed to collect information from the perception and communication subsystems, maintain an up-to-date view of the vehicle’s environment, and provide the required input information to the Real-Time Decision Making subsystem in a well-defined and structured way. The real-time decision making process consists of two consecutive stages. While the first decision making stage uses a Petri net to model the safety-critical selection of feasible driving maneuvers, the second stage uses Multiple Criteria Decision Making (MCDM) methods to select the most appropriate driving maneuver, focusing on fulfilling objectives related to efficiency and comfort. The complex task of autonomous driving is subdivided into subtasks, called driving maneuvers, which represent the output (i.e. decision alternatives) of the real-time decision making process. Driving maneuvers are considered as implementations of closed-loop control algorithms, each capable of maneuvering the autonomous vehicle in a specific traffic situation. Experimental tests in both 3D simulation and the real world attest that the developed approach is suitable to deal with the complexity of real-world urban traffic situations.
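The second-stage selection can be illustrated with a simple weighted-sum multiple-criteria score; the maneuvers, criteria and weights below are hypothetical, and the thesis's actual MCDM method may differ:

```python
from typing import Dict

CRITERIA_WEIGHTS = {"efficiency": 0.6, "comfort": 0.4}   # assumed weighting

def score(maneuver_scores: Dict[str, float]) -> float:
    """Weighted sum over normalised criterion scores in [0, 1]."""
    return sum(CRITERIA_WEIGHTS[c] * maneuver_scores[c] for c in CRITERIA_WEIGHTS)

feasible_maneuvers = {                                    # output of the safety stage
    "follow_lane": {"efficiency": 0.90, "comfort": 0.8},
    "overtake":    {"efficiency": 0.95, "comfort": 0.5},
    "change_lane": {"efficiency": 0.70, "comfort": 0.7},
}

best = max(feasible_maneuvers, key=lambda m: score(feasible_maneuvers[m]))
print(best)   # maneuver handed to its closed-loop control algorithm
```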
Abstract:
Plant food materials are in very high demand in the consumer market, and therefore improved food products and efficient processing techniques are concurrently being researched in food engineering. In this context, numerical modelling and simulation techniques have a very high potential to reveal fundamentals of the underlying mechanisms involved. However, numerical modelling of plant food materials during drying is quite challenging, mainly due to the complexity of the multiphase microstructure of the material, which undergoes excessive deformations during drying. In this regard, conventional grid-based modelling techniques have limited applicability due to the fundamental limitations of their inflexible grids. As a result, meshfree methods have recently been developed which offer a more adaptable approach to problem domains of this nature, owing to their fundamental grid-free advantages. In this work, a recently developed meshfree two-dimensional plant tissue model is used for a comparative study of microscale morphological changes of several food materials during drying. The model uses Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) to represent the fluid and solid phases of the cellular structure. Simulations are conducted on apple, potato, carrot and grape tissues, and the results are qualitatively and quantitatively compared and related to experimental findings obtained from the literature. The study revealed that cellular deformations are highly sensitive to cell dimensions, cell wall physical and mechanical properties, middle lamella properties and turgor pressure. In particular, the meshfree model is well capable of simulating critically dried tissues at low moisture content and turgor pressure, which lead to cell wall wrinkling. The findings further highlight the potential of the meshfree approach to model large deformations of the plant tissue microstructure during drying, providing a distinct advantage over state-of-the-art grid-based approaches.
Abstract:
This thesis developed a high-performing alternative numerical technique to investigate microscale morphological changes of plant food materials during drying. The technique is based on a novel meshfree method and is more capable of modeling large deformations of multiphase problem domains than conventional grid-based numerical modeling techniques. The developed cellular model can effectively replicate dried-tissue morphological changes such as shrinkage and cell wall wrinkling, as influenced by moisture reduction and turgor loss.
Abstract:
Existing research and best practice were utilised to develop the Project Management, Stakeholder Engagement and Change Facilitation (PSC) approach to road safety infrastructure projects. Two case studies involving Queensland Transport and Main Roads demonstrated that use of the PSC has the potential to create synergies for projects undertaken by multi-disciplinary road safety groups and to complement Safe System projects and philosophy. The case studies were the North West Road Safety Alliance project and the implementation of Road Safety Audit policy; both utilised a mix of qualitative and quantitative methods, including interviews and surveys.
Abstract:
This paper demonstrates a renewed procedure for the quantification of surface-enhanced Raman scattering (SERS) enhancement factors with improved precision. The principle of the method relies on deducting the resonance Raman scattering (RRS) contribution from the surface-enhanced resonance Raman scattering (SERRS) signal to leave the surface enhancement (SERS) effect alone. We employed 1,8,15,22-tetraaminophthalocyanato-cobalt(II) (4α-CoIITAPc), a resonance Raman- and electrochemically redox-active chromophore, as a probe molecule for the RRS and SERRS experiments. The number of 4α-CoIITAPc molecules contributing to the RRS and SERRS phenomena on plasmon-inactive glassy carbon (GC) and plasmon-active GC/Au surfaces, respectively, was precisely estimated by cyclic voltammetry. Furthermore, the SERS substrate enhancement factor (SSEF) quantified by our approach is compared with that obtained by the traditionally employed methods. We also demonstrate that the present approach to SSEF quantification can be applied to any kind of SERS substrate by choosing an appropriate laser line and probe molecule.
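One plausible numerical reading of this deduction principle (all intensities and molecule counts below are hypothetical, and the actual SSEF expression used in the paper may differ) is:

```python
def substrate_enhancement_factor(i_serrs, n_serrs, i_rrs, n_rrs):
    """Per-molecule SERRS intensity minus the per-molecule RRS contribution,
    normalised by the per-molecule RRS intensity (sketch, not the paper's formula)."""
    per_molecule_serrs = i_serrs / n_serrs   # GC/Au surface, molecule count from voltammetry
    per_molecule_rrs = i_rrs / n_rrs         # plasmon-inactive GC, molecule count from voltammetry
    sers_only = per_molecule_serrs - per_molecule_rrs   # remove the resonance part
    return sers_only / per_molecule_rrs

# Hypothetical band intensities (a.u.) and molecule numbers
print(substrate_enhancement_factor(i_serrs=5.0e5, n_serrs=1.0e10,
                                   i_rrs=2.0e3, n_rrs=5.0e11))
```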