Abstract:
The Mount Isa Basin is a new concept used to describe the area of Palaeo- to Mesoproterozoic rocks south of the Murphy Inlier, presently and inappropriately described as the Mount Isa Inlier. The new basin concept presented in this thesis allows for the characterisation of basin-wide structural deformation, correlation of mineralisation with particular lithostratigraphic and seismic stratigraphic packages, and the recognition of areas with petroleum exploration potential. The northern depositional margin of the Mount Isa Basin is the metamorphic, intrusive and volcanic complex here referred to as the Murphy Inlier (not the "Murphy Tectonic Ridge"). The eastern, southern and western boundaries of the basin are obscured by younger basins (the Carpentaria, Eromanga and Georgina Basins). The Murphy Inlier rocks comprise the seismic basement to the Mount Isa Basin sequence. Evidence is presented for the continuity of the Mount Isa Basin with the McArthur Basin to the northwest and the Willyama Block (Basin) at Broken Hill to the south. These areas, combined with several other areas of similar age, are believed to have comprised the Carpentarian Superbasin (new term). The application of seismic exploration within Authority to Prospect (ATP) 423P at the northern margin of the basin was critical to the recognition and definition of the Mount Isa Basin. The Mount Isa Basin is structurally analogous to the Palaeozoic Arkoma Basin of Oklahoma and Arkansas in the southern USA but, as with all basins, it has unique characteristics that are a function of its individual development history. The Mount Isa Basin evolved in a manner similar to many well-described Phanerozoic plate-tectonic-driven basins. A full Wilson Cycle is recognised and a plate tectonic model proposed. The northern Mount Isa Basin is defined as the Proterozoic basin area northwest of the Mount Gordon Fault.
Deposition in the northern Mount Isa Basin began with a rift sequence of volcaniclastic sediments, followed by a passive margin drift phase comprising mostly carbonate rocks. Following the rift and drift phases, major north-south compression produced east-west thrusting in the south of the basin, inverting the older sequences. This compression produced an asymmetric epi- or intra-cratonic, clastic-dominated peripheral foreland basin, provenanced in the south and thinning markedly to a stable platform area (the Murphy Inlier) in the north. The final major deformation comprised east-west compression producing north-south aligned faults that are particularly prominent at Mount Isa. Potential field studies of the northern Mount Isa Basin, principally using magnetic data (and to a lesser extent gravity data, satellite images and aerial photographs), exhibit a remarkable correlation with the reflection seismic data. The potential field data contributed significantly to the unravelling of the northern Mount Isa Basin architecture and deformation. Structurally, the Mount Isa Basin consists of three distinct regions. From north to south they are the Bowthorn Block, the Riversleigh Fold Zone and the Cloncurry Orogen (new names). The Bowthorn Block, which is located between the Elizabeth Creek Thrust Zone and the Murphy Inlier, consists of an asymmetric wedge of volcanic, carbonate and clastic rocks. It ranges from over 10 000 m stratigraphic thickness in the south to less than 2000 m in the north. The Bowthorn Block is relatively undeformed; however, it contains a series of east-west trending reverse faults that are interpreted from seismic data to be down-to-the-north normal faults reactivated as thrusts. The Riversleigh Fold Zone is a folded and faulted region south of the Bowthorn Block, comprising much of the area formerly referred to as the Lawn Hill Platform. The Cloncurry Orogen consists of the area and sequences equivalent to the former Mount Isa Orogen.
The name Cloncurry Orogen clearly distinguishes this area from the wider concept of the Mount Isa Basin. The South Nicholson Group and its probable correlatives, the Pilpah Sandstone and Quamby Conglomerate, comprise a later phase of now largely eroded deposits within the Mount Isa Basin. The name South Nicholson Basin is now outmoded, as this terminology applied only to the South Nicholson Group, unlike the original broader definition in Brown et al. (1968). Cored slimhole stratigraphic and mineral wells drilled by Amoco, Esso, Elf Aquitaine and Carpentaria Exploration prior to 1986 penetrated much of the stratigraphy and intersected minor oil and gas shows as well as excellent potential source rocks. For this study, the raw data were reinterpreted and augmented with seismic stratigraphy and source rock data from resampled mineral and petroleum stratigraphic exploration wells. Since 1986, Comalco Aluminium Limited, as operator of a joint venture with Monument Resources Australia Limited and Bridge Oil Limited, recorded approximately 1000 km of reflection seismic data within the basin and drilled one conventional stratigraphic petroleum well, Beamesbrook-1. This work constituted both the first reflection seismic survey and the first conventional petroleum test of the northern Mount Isa Basin. When incorporated into the newly developed foreland basin and maturity models, a grass-roots petroleum exploration play was recognised, and this led to the present thesis. The Mount Isa Basin was seen to contain excellent source rocks coupled with potential reservoirs and all of the other essential elements of a conventional petroleum exploration play. This play, although high risk, was commensurate with the enormous and totally untested petroleum potential of the basin. The basin was assessed for hydrocarbons in 1992 with three conventional exploration wells, Desert Creek-1, Argyle Creek-1 and Egilabria-1. These wells also tested and confirmed the proposed basin model.
No commercially viable oil or gas was encountered, although evidence of its former existence was found. In addition to the petroleum exploration, and indeed as a consequence of it, the association of the extensive base metal and other mineralisation in the Mount Isa Basin with hydrocarbons could not be overlooked. A comprehensive analysis of the available data suggests a link between the migration, and possible generation or destruction, of hydrocarbons and metal-bearing fluids. Consequently, base metal exploration based on hydrocarbon exploration concepts is probably the most effective technique in such basins. The metal-hydrocarbon-sedimentary basin-plate tectonic association (analogous to Phanerozoic models) is a compelling outcome of this work on the Palaeo- to Mesoproterozoic Mount Isa Basin. Petroleum within the Bowthorn Block was apparently destroyed by the hot brines that produced many ore deposits elsewhere in the basin.
Abstract:
Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to fast-growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides a strong incentive to pursue better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network through the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections are synonymous with jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve Parallel-Machine Job-Shop Scheduling (PMJSS) problems without the blocking conditions. Due to the lack of buffer space, real-life train scheduling must consider blocking or hold-while-wait constraints, which means that a track section cannot release a train, and must hold it, until the next section on the routing becomes available. Consequently, the problem has been treated as BPMJSS with the blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies of the non-classical scheduling problems arising from the various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer) have been carried out.
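The job-shop mapping described above, with the hold-while-wait blocking constraint, can be illustrated by a minimal time-stepped sketch. The routes, section names and traversal times below are invented for illustration, not taken from the thesis:

```python
# Minimal illustration of blocking ("hold-while-wait") in train scheduling:
# trains are jobs, track sections are machines, and a train keeps occupying
# its current section until the next section on its route becomes free.
# Routes and traversal times are invented for illustration.

def simulate_blocking(routes, times):
    """Time-stepped simulation; returns the completion time of each train."""
    pos = {t: 0 for t in routes}                   # index of current section
    remaining = {t: times[t][0] for t in routes}   # time left in that section
    occupied = {routes[t][0]: t for t in routes}   # section -> occupying train
    done = {}
    clock = 0
    while len(done) < len(routes):
        clock += 1
        for t in list(routes):
            if t in done:
                continue
            if remaining[t] > 0:
                remaining[t] -= 1
            if remaining[t] == 0:
                i = pos[t]
                if i + 1 == len(routes[t]):             # route finished
                    del occupied[routes[t][i]]
                    done[t] = clock
                elif routes[t][i + 1] not in occupied:  # next section free
                    del occupied[routes[t][i]]
                    pos[t] = i + 1
                    occupied[routes[t][i + 1]] = t
                    remaining[t] = times[t][i + 1]
                # else: blocked -- the train holds its section (no buffer)
    return done

routes = {"T1": ["S1", "S2", "S3"], "T2": ["S2", "S3", "S4"]}
times  = {"T1": [2, 2, 2],          "T2": [1, 4, 2]}
print(simulate_blocking(routes, times))
```

Here T1 finishes its traversal of S2 while T2 is still in S3, so T1 must hold S2 until S3 frees up; exactly this coupling between trains is what the blocking constraints of the BPMJSS model capture.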
In this procedure, an alternative graph, an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS), and blocking job-shop scheduling (BJSS) problems. By exploring the blocking characteristics based on the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving the non-classical scheduling problems. To demonstrate the merits of the proposed algorithm, we compare it with two known algorithms in the literature (i.e. the Recursive Procedure and the Directed Graph algorithms). Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: the classical flow-shop scheduling (FSS) problem with infinite buffers, the blocking FSS (BFSS) with no buffer, the no-wait FSS (NWFSS) and the limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm, named the LK algorithm, to construct feasible CBFSS schedules. Detailed numerical illustrations for the various cases are presented and analysed. Because only the attributes in the data input need be adjusted, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints. Inspired by the shifting bottleneck procedure algorithm for PMJSS and the characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used to illustrate and compare the PMJSS and BPMJSS models.
Some real-life applications, including considering the train length, upgrading the track sections, accelerating a tardy train and changing the bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive than blocking constraints, arise when considering the prioritised trains, which should traverse continuously without any interruption or unplanned pauses because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the routing becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under the different conditions found in train scheduling environments. To construct a feasible train schedule, the proposed SE algorithm comprises several individual modules, including the feasibility-satisfaction procedure, time-determination procedure, tune-up procedure and conflict-resolve procedure algorithms. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (i.e. the SE algorithm) and the local-search heuristic (i.e. the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints.
The computational results show that the proposed methodology is promising, as it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
Abstract:
This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken because there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of including these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of including electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low-angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy-dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained.
This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components; bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions, indicating the potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and that it has poorer precision (approximately twice the coefficient of variation) than standard DEXA measurements. These factors may limit the usefulness of the technique. These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have: 1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements; 2.
demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique; 3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and 4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system. The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
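The two-component decomposition underlying DEXA can be sketched as a small linear system: two transmission measurements at two energies give two equations in the two areal densities. All mass attenuation coefficients and densities below are invented for illustration, not values from the thesis:

```python
import numpy as np

# Dual-energy decomposition sketch: two log-attenuation measurements give two
# equations in two areal densities (bone mineral, soft tissue). All mass
# attenuation coefficients (cm^2/g) and densities (g/cm^2) are invented.

# Rows: low/high energy; columns: bone mineral, soft tissue.
MU = np.array([[3.00, 0.25],
               [0.60, 0.18]])

true_densities = np.array([1.2, 20.0])    # areal densities, assumed
log_attenuation = MU @ true_densities     # ln(I0/I) at each energy

recovered = np.linalg.solve(MU, log_attenuation)
print(recovered)   # recovers the bone mineral and soft tissue areal densities

# DPA(+) sketch: a third equation from the measured path length L along the
# ray allows a three-component split (bone mineral, fat, lean tissue);
# schematically the system becomes 3x3, with the extra constraint that the
# component thicknesses sum to L.
```

The precision penalty noted in the abstract arises because the third equation makes the system more ill-conditioned, amplifying measurement noise in the solved densities.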
Abstract:
Patients undergoing radiation therapy for cancer face a series of challenges that require support from a multidisciplinary team which includes radiation oncology nurses. However, the specific contribution of nursing, and the models of care that best support the delivery of nursing interventions in the radiotherapy setting, are not well described. In this case study, the Interaction Model of Client Health Behaviour and the associated principles of person-centred care were incorporated into a new model of care that was implemented in one radiation oncology setting in Brisbane, Australia. The new model of care was operationalised through a Primary Nursing/Collaborative Practice framework. To evaluate the impact of the new model on patients and health professionals, multiple sources of data were collected from patients and clinical staff prior to, during, and 18 months following introduction of the practice redesign. One cohort of patients and clinical staff completed surveys incorporating measures of key outcomes immediately prior to implementation of the model, while a second cohort of patients and clinical staff completed these same surveys 18 months following introduction of the model. In-depth interviews were also conducted with nursing, medical and allied health staff throughout the implementation phase to obtain a more comprehensive account of the processes and outcomes associated with implementing such a model. From the patients’ perspectives, this study demonstrated that, although adverse effects of radiotherapy continue to affect patient well-being, patients continue to be satisfied with nursing care in this specialty, and that they generally reported high levels of functioning despite undergoing a curative course of radiotherapy.
From the health professionals’ perspective, there was evidence of attitudinal change by nursing staff within the radiotherapy department, reflecting a greater understanding and appreciation of a more person-centred approach to care. Importantly, this case study has also confirmed that a range of factors needs to be considered when redesigning nursing practice in the radiotherapy setting, given the challenges experienced in changing traditional practices, ensuring multidisciplinary approaches to care, and resourcing a new model. The findings from this study suggest that the move from a relatively functional approach to a person-centred approach in the radiotherapy setting has contributed to some improvements in the provision of individualised and coordinated patient care. However, this study has also highlighted that primary nursing may be limited as a framework for patient care unless it is supported by a whole-team approach, an appropriate supportive governance model, and sufficient resourcing. Introducing such a model thus requires effective education, preparation and ongoing support for the whole team. The challenges of providing care in the context of complex interdisciplinary relationships have been highlighted by this study. Aspects of this study may assist in planning further nursing interventions for patients undergoing radiotherapy for cancer, and continue to enhance the contribution of the radiation oncology nurse to improved patient outcomes.
Abstract:
Background Leisure-time physical activity (LTPA) shows promise for reducing the risk of poor mental health in later life, although gender- and age-specific research is required to clarify this association. This study examined the concurrent and prospective relationships between both LTPA and walking with mental health in older women. Methods Community-dwelling women aged 73–78 years completed mailed surveys in 1999, 2002 and 2005 for the Australian Longitudinal Study on Women's Health. Respondents reported their weekly minutes of walking, moderate LTPA and vigorous LTPA. Mental health was defined as the number of depression and anxiety symptoms, as assessed with the Goldberg Anxiety and Depression Scale (GADS). Multivariable linear mixed models, adjusted for socio-demographic and health-related variables, were used to examine associations between five levels of LTPA (none, very low, low, intermediate and high) and GADS scores. For women who reported walking as their only LTPA, associations between walking and GADS scores were also examined. Women who reported depression or anxiety in 1999 were excluded, resulting in data from 6653 women being included in these analyses. Results Inverse dose–response associations were observed between both LTPA and walking with GADS scores in concurrent and prospective models (p<0.001). Even low levels of LTPA and walking were associated with lowered scores. The lowest scores were observed in women reporting high levels of LTPA or walking. Conclusion The results support an inverse dose–response association between both LTPA and walking with mental health, over 3 years in older women without depression or anxiety.
Abstract:
Sustainability has been increasingly recognised as an integral part of highway infrastructure development. In practice, however, because financial return is still the top priority for many projects, environmental aspects tend to be overlooked or treated as a burden, since they add to project costs. Sustainability and its implications have a far-reaching effect on each project over time. Therefore, given highway infrastructure’s long-term life span and huge capital demand, the consideration of environmental cost/benefit issues is all the more crucial in life-cycle cost analysis (LCCA). To date, the existing literature offers little on viable estimation methods for environmental costs. This situation presents the potential for focused studies on environmental costs and issues in the context of life-cycle cost analysis. This paper discusses a research project which aims to integrate environmental cost elements and issues into a conceptual framework for life-cycle costing analysis for highway projects. Cost elements and issues concerning the environment were first identified through the literature. Through questionnaires, these environmental cost elements will be validated by practitioners before being consolidated into an extension of existing LCCA models. A holistic decision support framework is being developed to assist highway infrastructure stakeholders in evaluating their investment decisions. This will generate financial returns while maximising environmental benefits and sustainability outcomes.
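The mechanics of folding an environmental cost stream into LCCA can be sketched with the standard present-value calculation; all figures and the discount rate below are invented for illustration:

```python
# Life-cycle cost sketch: discount agency and environmental cost streams to
# present value, so environmental costs enter the analysis on the same basis
# as construction and maintenance. All figures and the rate are invented.

def present_value(cash_flows, rate):
    """Discount a list of (year, amount) pairs to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

agency_costs = [(0, 10_000_000), (10, 1_500_000), (20, 1_500_000)]  # build + resurfacing
environmental_costs = [(y, 50_000) for y in range(1, 31)]           # e.g. emissions, runoff

rate = 0.04
lcc = present_value(agency_costs, rate) + present_value(environmental_costs, rate)
print(f"life-cycle cost (PV): ${lcc:,.0f}")
```

Choosing the discount rate is itself a sustainability question: a high rate shrinks distant environmental costs toward zero, which is one reason they tend to be neglected in conventional analyses.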
Abstract:
Crash prediction models are used for a variety of purposes including forecasting the expected future performance of various transportation system segments with similar traits. The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes compared to other segments in the transportation system. The effects of left-turn lanes at intersections in particular have seen mixed results in the literature. Some researchers have found that left-turn lanes are beneficial to safety while others have reported detrimental effects on safety. This inconsistency is not surprising given that the installation of left-turn lanes is often endogenous, that is, influenced by crash counts and/or traffic volumes. Endogeneity creates problems in econometric and statistical models and is likely to account for the inconsistencies reported in the literature. This paper reports on a limited-information maximum likelihood (LIML) estimation approach to compensate for endogeneity between left-turn lane presence and angle crashes. The effects of endogeneity are mitigated using the approach, revealing the unbiased effect of left-turn lanes on crash frequency for a dataset of Georgia intersections. The research shows that without accounting for endogeneity, left-turn lanes ‘appear’ to contribute to crashes; however, when endogeneity is accounted for in the model, left-turn lanes reduce angle crash frequencies as expected by engineering judgment. Other endogenous variables may lurk in crash models as well, suggesting that the method may be used to correct simultaneity problems with other variables and in other transportation modeling contexts.
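The paper's estimator is LIML; the closely related two-stage least squares approach can illustrate on synthetic data how instrumenting the endogenous lane variable removes the bias. Everything below (the data-generating process, the instrument, the coefficient values) is invented for illustration:

```python
import numpy as np

# Endogeneity sketch: lane presence is driven by the same unobservable (u)
# that drives crashes, biasing naive OLS. The paper uses LIML; here the
# related two-stage least squares estimator is sketched on synthetic data
# with an invented instrument z. The true lane effect is -1.0.

rng = np.random.default_rng(0)
n = 20_000
u = rng.normal(size=n)                 # unobserved site risk
z = rng.normal(size=n)                 # instrument: affects lane presence only
lane = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)
crashes = 2.0 - 1.0 * lane + u + rng.normal(size=n)

X = np.column_stack([np.ones(n), lane])

# Naive OLS: biased toward zero/positive, because lane correlates with u --
# lanes get installed exactly where crashes are already high.
beta_ols = np.linalg.lstsq(X, crashes, rcond=None)[0]

# 2SLS: project lane on the instrument, then regress on the fitted values.
Z = np.column_stack([np.ones(n), z])
lane_hat = Z @ np.linalg.lstsq(Z, lane, rcond=None)[0]
X_iv = np.column_stack([np.ones(n), lane_hat])
beta_iv = np.linalg.lstsq(X_iv, crashes, rcond=None)[0]

print(f"OLS lane effect: {beta_ols[1]:+.2f}   IV lane effect: {beta_iv[1]:+.2f}")
```

This reproduces the qualitative finding of the paper: the naive estimate makes left-turn lanes look harmless or harmful, while the instrumented estimate recovers the crash-reducing effect.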
Abstract:
In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that include data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; this sole reliance has led to the development of irrelevant theory and questionable research conclusions ([1], p.199). We outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and limitations and problems with these new algorithms. Organisational limitations and restrictions to these initiatives are also discussed.
Abstract:
Around the world, particularly in North America and Australia, urban sprawl combined with low-density suburban development has caused serious accessibility and mobility problems, especially for those who do not own a motor vehicle or have access to public transportation services. Sustainable urban and transportation development is seen as crucial in solving transportation disadvantage problems in urban settlements. However, current urban and transportation models have not adequately addressed the unsustainable urban transportation problems that transportation disadvantaged groups overwhelmingly encounter, and the negative impacts on the disadvantaged have not been effectively considered. Transportation disadvantage is a multi-dimensional problem that combines demographic, spatial and transportation service dimensions. Nevertheless, most transportation models focusing on transportation disadvantage employ only the demographic and transportation service dimensions and do not take the spatial dimension into account. This paper aims to investigate the link between sustainable urban and transportation development and the spatial dimension of the transportation disadvantage problem. For that purpose, the paper provides a thorough review of the literature and identifies a set of urban, development and policy characteristics to define the spatial dimension of the transportation disadvantage problem. This paper presents an overview of these urban, development and policy characteristics, which have significant relationships with sustainable urban and transportation development and travel inability, and which are also useful in determining transportation disadvantaged populations.
Abstract:
In this work, we investigate and compare the Maxwell–Stefan and Nernst–Planck equations for modeling multicomponent charge transport in liquid electrolytes. Specifically, we consider charge transport in the Li+/I−/I3−/ACN ternary electrolyte originally found in dye-sensitized solar cells. We employ molecular dynamics simulations to obtain the Maxwell–Stefan diffusivities for this electrolyte. These simulated diffusion coefficients are used in a multicomponent charge transport model based on the Maxwell–Stefan equations, and this is compared to a Nernst–Planck based model which employs binary diffusion coefficients sourced from the literature. We show that significant differences between the electrolyte concentrations at electrode interfaces, as predicted by the Maxwell–Stefan and Nernst–Planck models, can occur. We find that these differences are driven by a pressure term that appears in the Maxwell–Stefan equations. We also investigate what effects the Maxwell–Stefan diffusivities have on the simulated charge transport. By incorporating binary diffusivities found in the literature into the Maxwell–Stefan framework, we show that the simulated transient concentration profiles depend on the diffusivities; however, the simulated equilibrium profiles remain unaffected.
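For reference, the two frameworks compared here take the following standard textbook forms (notation and sign conventions may differ from those used in the study). The Nernst–Planck flux treats each species $i$ independently through a binary diffusivity $D_i$, while the Maxwell–Stefan relations couple every pair of species through the diffusivities $\mathcal{D}_{ij}$:

```latex
% Nernst--Planck flux for species i (diffusion, migration, convection):
\mathbf{N}_i = -D_i \nabla c_i
  \;-\; \frac{z_i F}{RT}\, D_i c_i \nabla \phi
  \;+\; c_i \mathbf{v}

% Maxwell--Stefan relations: the driving force on species i (gradient of its
% electrochemical potential \mu_i) is balanced by pairwise friction with all
% other species, characterised by the diffusivities \mathcal{D}_{ij}:
-\frac{c_i}{RT} \nabla \mu_i
  \;=\; \sum_{j \ne i} \frac{c_i c_j}{c_T\, \mathcal{D}_{ij}}
        \left( \mathbf{v}_i - \mathbf{v}_j \right)
```

Because the Maxwell–Stefan driving force is the full electrochemical potential gradient, additional terms (such as the pressure contribution highlighted in the abstract) enter naturally, whereas the Nernst–Planck form omits them.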
Abstract:
Exclusion processes on a regular lattice are used to model many biological and physical systems at a discrete level. The average properties of an exclusion process may be described by a continuum model given by a partial differential equation. We combine a general class of contact interactions with an exclusion process. We determine that many different types of contact interactions at the agent-level always give rise to a nonlinear diffusion equation, with a vast variety of diffusion functions D(C). We find that these functions may be dependent on the chosen lattice and the defined neighborhood of the contact interactions. Mild to moderate contact interaction strength generally results in good agreement between discrete and continuum models, while strong interactions often show discrepancies between the two, particularly when D(C) takes on negative values. We present a measure to predict the goodness of fit between the discrete and continuous model, and thus the validity of the continuum description of a motile, contact-interacting population of agents. This work has implications for modeling cell motility and interpreting cell motility assays, giving the ability to incorporate biologically realistic cell-cell interactions and develop global measures of discrete microscopic data.
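A minimal agent-level exclusion process of the kind described can be sketched as follows (here without contact interactions, on a one-dimensional ring; lattice size and occupancy are invented). The plain process below is the case whose continuum limit is linear diffusion; the contact interactions studied in the abstract modify the move-acceptance rule and yield the nonlinear D(C):

```python
import random

# Minimal simple-exclusion sketch on a 1-D ring: each step, a random agent
# attempts a move to a random neighbour and is blocked if that site is
# occupied. Contact interactions are omitted, so this is the plain exclusion
# process; lattice size and occupancy below are illustrative only.

def step(occ, rng):
    L = len(occ)
    agents = [i for i, o in enumerate(occ) if o]
    i = rng.choice(agents)
    j = (i + rng.choice([-1, 1])) % L   # target neighbour on the ring
    if not occ[j]:                      # exclusion: move only if empty
        occ[i], occ[j] = 0, 1

rng = random.Random(2)
L, N = 100, 40
occ = [1] * N + [0] * (L - N)
rng.shuffle(occ)
for _ in range(10_000):
    step(occ, rng)
print("particles conserved:", sum(occ) == N)
```

Averaging many such realisations over a density gradient is how the discrete model is compared against its continuum partial differential equation description.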
Abstract:
This research assesses the potential impact of weekly weather variability on the incidence of cryptosporidiosis disease using time series zero-inflated Poisson (ZIP) and classification and regression tree (CART) models. Data on weather variables, notified cryptosporidiosis cases and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Both the time series ZIP and CART models show a clear association between weather variables (maximum temperature, relative humidity, rainfall and wind speed) and cryptosporidiosis disease. The time series CART models indicated that, when weekly maximum temperature exceeded 31°C and relative humidity was less than 63%, the relative risk of cryptosporidiosis was 13.64 (expected morbidity: 39.4; 95% confidence interval: 30.9–47.9). These findings may have applications as a decision support tool in planning disease control and risk management programs for cryptosporidiosis disease.
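The zero-inflated Poisson structure underlying the time series models can be sketched as follows; the parameter values are invented for illustration, not those fitted in the study:

```python
import math

# Zero-inflated Poisson sketch: with probability pi a week is a structural
# zero (no cases can occur); otherwise the count is Poisson(lam). This
# mixture handles the excess of zero-count weeks that a plain Poisson model
# underpredicts. Parameter values below are illustrative only.

def zip_pmf(k, pi, lam):
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return (pi + (1 - pi) * poisson) if k == 0 else (1 - pi) * poisson

pi, lam = 0.7, 1.5   # assumed: many weeks with no notified cases
total = sum(zip_pmf(k, pi, lam) for k in range(50))
print(f"P(0 cases) = {zip_pmf(0, pi, lam):.3f}, pmf sums to {total:.6f}")
```

In a time series ZIP regression, both `pi` and `lam` would be modelled as functions of the weather covariates (temperature, humidity, rainfall, wind speed) rather than held fixed.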
Abstract:
The travel and hospitality industry is one which relies especially crucially on word of mouth, both at the level of overall destinations (Australia, Queensland, Brisbane) and at the level of travellers’ individual choices of hotels, restaurants, sights during their trips. The provision of such word-of-mouth information has been revolutionised over the past decade by the rise of community-based Websites which allow their users to share information about their past and future trips and advise one another on what to do or what to avoid during their travels. Indeed, the impact of such user-generated reviews, ratings, and recommendations sites has been such that established commercial travel advisory publishers such as Lonely Planet have experienced a pronounced downturn in sales – unless they have managed to develop their own ways of incorporating user feedback and contributions into their publications. This report examines the overall significance of ratings and recommendation sites to the travel industry, and explores the community, structural, and business models of a selection of relevant ratings and recommendations sites. We identify a range of approaches which are appropriate to the respective target markets and business aims of these organisations, and conclude that there remain significant opportunities for further operators especially if they aim to cater for communities which are not yet appropriately served by specific existing sites. Additionally, we also point to the increasing importance of connecting stand-alone ratings and recommendations sites with general social media spaces like Facebook, Twitter, and LinkedIn, and of providing mobile interfaces which enable users to provide updates and ratings directly from the locations they happen to be visiting.
In this report, we profile the following sites:
* TripAdvisor, the international market leader for travel ratings and recommendations sites, with a membership of some 11 million users;
* IgoUgo, the other leading site in this field, which aims to distinguish itself from the market leader by emphasising the quality of its content;
* Zagat, a long-established publisher of restaurant guides which has translated its crowdsourcing model from the offline to the online world;
* Lonely Planet’s Thorn Tree site, which attempts to respond to the rise of these travel communities by similarly harnessing user-generated content;
* Stayz, which attempts to enhance its accommodation search and booking services by incorporating ratings and reviews functionality;
* BigVillage, an Australian-based site attempting to cater for a particularly discerning niche of travellers;
* Dopplr, which connects travel and social networking in a bid to pursue the lucrative market of frequent and business travellers;
* Foursquare, which builds on its mobile application to generate a steady stream of ‘check-ins’ and recommendations for hospitality and other services around the world;
* Suite 101, which uses a revenue-sharing model to encourage freelance writers to contribute travel writing (amongst other genres of writing); and
* Yelp, the global leader in general user-generated product review and recommendation services.
In combination, these profiles provide an overview of current developments in the travel ratings and recommendations space (and beyond), and offer an outlook for further possibilities. While no doubt affected by the global financial downturn and the reduction in travel that it has caused, travel ratings and recommendations remain important – perhaps even more so if a reduction in disposable income has resulted in consumers becoming more critical and discerning. 
The aggregated word of mouth from many tens of thousands of travellers which these sites provide certainly has a substantial influence on their users. Using these sites to research travel options has now become an activity which has spread well beyond the digerati. The same is true for many other consumer industries, especially where there is a significant variety of different products available – and so, this report may also be read as a case study whose findings can be translated, mutatis mutandis, to purchasing decisions from household goods through consumer electronics to automobiles.
Resumo:
Using sculpture and drawing as my primary methods of investigation, this research explores ways of shifting the emphasis of my creative visual arts practice from object to process whilst still maintaining a primacy of material outcomes. My motivation was to locate ways of developing a sustained practice shaped as much by new works as by a creative flow between works. I imagined a practice where a logic of structure within discrete forms and a logic of the broader practice might be developed as mutually informed processes. Using basic structural components of multiple wooden curves and linear modes of deployment – in both sculptures and drawings – I have identified both emergence theory and the image of rhizomic growth (Deleuze and Guattari, 1987) as theoretically integral to this imagining of a creative practice, both in terms of critiquing and developing works. Whilst I adopt a formalist approach for this exegesis, the emergence and rhizome models allow it to work as a critique of movement, of becoming and changing, rather than merely a formalism of static structure. In these models, therefore, I have identified a formal approach that can be applied not only to objects, but to practice over time. The thorough reading and application of these ontological models (emergence and rhizome) to visual arts practice, in terms of processes, objects and changes, is the primary contribution of this thesis. The works that form the major component of the research develop, reflect and embody these notions of movement and change.
Resumo:
This research shows that gross pollutant traps (GPTs) continue to play an important role in preventing visible street waste—gross pollutants—from contaminating the environment. The demand for these GPTs calls for stringent quality control, and this research provides a foundation for rigorously examining the devices. A novel and comprehensive testing approach was developed to examine a dry sump GPT. The GPT is designed with internal screens to capture gross pollutants—organic matter and anthropogenic litter. This device has not been previously investigated. Apart from the review of GPTs and gross pollutant data, the testing approach includes four additional aspects: field work and an historical overview of street waste/stormwater pollution, calibration of equipment, hydrodynamic studies, and gross pollutant capture/retention investigations. This work is the first comprehensive investigation of its kind and provides valuable practical information for the current research and any future work pertaining to the operation of GPTs and the management of street waste in the urban environment. Gross pollutant traps—including patented and registered designs developed by industry—have specific internal configurations and hydrodynamic separation characteristics which demand individual testing and performance assessments. Stormwater devices are usually evaluated by environmental protection agencies (EPAs), professional bodies and water research centres. In the USA, the American Society of Civil Engineers (ASCE) and the Environmental Water Resource Institute (EWRI) are examples of professional and research organisations actively involved in these evaluation/verification programs. These programs largely rely on field evaluations alone, which are limited in scope, mainly for cost and logistical reasons. In Australia, evaluation/verification programs for new devices in the stormwater industry are not well established. 
The current limitations in the evaluation methodologies of GPTs have been addressed in this research by establishing a new testing approach. This approach uses a combination of physical and theoretical models to examine in detail the hydrodynamic and capture/retention characteristics of the GPT. The physical model consisted of a 50% scale model GPT rig with screen blockages varying from 0 to 100%. This rig was placed in a 20 m flume, and various inlet and outflow operating conditions were modelled on observations made during the field monitoring of GPTs. Due to infrequent cleaning, the retaining screens inside the GPTs were often observed to be blocked with organic matter. Blocked screens can radically change the hydrodynamic and gross pollutant capture/retention characteristics of a GPT, as shown by this research. This research involved the use of equipment, such as acoustic Doppler velocimeters (ADVs) and dye concentration (Komori) probes, which were deployed for the first time in a dry sump GPT. Hence, it was necessary to rigorously evaluate the capability and performance of these devices, particularly in the case of the custom-made Komori probes, about which little was known. The evaluation revealed that the Komori probes have a frequency response of up to 100 Hz (dependent upon fluid velocities), which was adequate to measure the relevant fluctuations of dye introduced into the GPT flow domain. The outcome of this evaluation was the establishment of methodologies for the hydrodynamic measurements and gross pollutant capture/retention experiments. The hydrodynamic measurements consisted of point-based acoustic Doppler velocimeter (ADV) measurements, flow field particle image velocimetry (PIV) capture, head loss experiments and computational fluid dynamics (CFD) simulation. The gross pollutant capture/retention experiments included the use of anthropogenic litter components, tracer dye and custom-modified artificial gross pollutants. 
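The abstract does not state the similarity criterion used to relate the 50% scale rig to the full-size GPT; for free-surface flume studies the conventional choice is Froude similarity, under which velocity and time scale with the square root of the length ratio and discharge with the length ratio to the power 2.5. The sketch below assumes Froude scaling and is illustrative only:

```python
import math

def froude_scale_factors(length_ratio):
    """Model-to-prototype scale factors under Froude similarity, the usual
    basis for free-surface hydraulic models. length_ratio is
    L_model / L_prototype (0.5 for a 50% scale rig)."""
    return {
        "length": length_ratio,
        "velocity": math.sqrt(length_ratio),   # Fr = V / sqrt(g*L) held equal
        "time": math.sqrt(length_ratio),
        "discharge": length_ratio ** 2.5,      # Q ~ V * L^2
    }
```

For a 50% scale rig this gives a velocity factor of about 0.71 and a discharge factor of about 0.18, so a prototype flow of 1 m³/s would be reproduced at roughly 0.18 m³/s in the flume.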
Anthropogenic litter was limited to tin cans, bottle caps and plastic bags, while the artificial pollutants consisted of 40 mm spheres with a range of four buoyancies. The hydrodynamic results led to the definition of global and local flow features. The gross pollutant capture/retention results showed that when the internal retaining screens are fully blocked, the capture/retention performance of the GPT rapidly deteriorates. The overall results showed that the GPT will operate efficiently until at least 70% of the screens are blocked, particularly at high flow rates. This important finding indicates that cleaning operations could be planned more effectively, before the GPT's capture/retention performance deteriorates. At lower flow rates, the capture/retention performance trends were reversed. There is little difference in the poor capture/retention performance between a fully blocked GPT and a partially filled or empty GPT with 100% screen blockages. The results also revealed that the GPT is designed with an efficient high flow bypass system to avoid upstream blockages. The capture/retention performance of the GPT at medium to high inlet flow rates is close to maximum efficiency (100%). With regard to the design appraisal of the GPT, a raised inlet offers better capture/retention performance, particularly at lower flow rates. Further design appraisals of the GPT are recommended.