871 results for Three dimensions


Relevance:

70.00%

Publisher:

Abstract:

The difficulties encountered in implementing large scale CM codes on multiprocessor systems are now fairly well understood. Despite the claims of shared memory architecture manufacturers to provide effective parallelizing compilers, these have not proved to be adequate for large or complex programs. Significant programmer effort is usually required to achieve reasonable parallel efficiencies on significant numbers of processors. The paradigm of Single Program Multiple Data (SPMD) domain decomposition with message passing, where each processor runs the same code on a subdomain of the problem and communicates through the exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability, and portability across both shared and distributed memory systems, without the need to re-author the code into a new language or even to support differing message passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework supporting 3D, unstructured-mesh continuum mechanics modeling. In PHYSICA, six inspectors are used; part of the challenge in automating parallelization is being able to prove the equivalence of inspectors so that they can be merged into as few as possible.
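As a minimal illustration of the SPMD message-passing pattern described above (a sketch, not code from PHYSICA itself), the following uses Python with mpi4py, which is assumed to be available: each rank owns a subdomain of a 1D field, exchanges one ghost cell with each neighbour, and then applies a stencil to its interior independently.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                        # interior cells owned by this rank
u = np.zeros(n_local + 2)          # one ghost cell at each end
u[1:-1] = float(rank)              # dummy field data for illustration

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Halo exchange: send my rightmost interior cell to the right neighbour
# while receiving my left ghost cell from the left neighbour, then mirror.
comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)

# With ghost cells filled, every rank runs the same stencil on its own
# subdomain -- the Single Program Multiple Data paradigm in miniature.
u[1:-1] = 0.5 * (u[:-2] + u[2:])

Run with, e.g., mpiexec -n 4 python halo.py: the same source executes on every processor, with behaviour differentiated only by rank.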

Relevance:

70.00%

Publisher:

Abstract:

Introduction: Prediction of soft tissue changes following orthognathic surgery has been frequently attempted in recent decades. It has gradually progressed from the classic “cut and paste” of photographs, to computer-assisted 2D surgical prediction planning, and finally to comprehensive 3D surgical planning, introduced to help surgeons and patients decide on the magnitude and direction of surgical movements as well as the type of surgery to be considered for the correction of facial dysmorphology. A wealth of experience has been gained, and a substantial published literature is available, which has augmented knowledge of facial soft tissue behaviour and helped to improve the ability to closely simulate facial changes following orthognathic surgery. This was particularly noticeable following the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery. The most common are the finite element model and the mass tensor model, which have been developed into software packages currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips.
Aims: The aim of this project is to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and to introduce a new method for the prediction of facial soft tissue changes.
Methodology: The study was carried out on the pre- and post-operative CBCT images of 100 patients who received their orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK. Three groups of patients were included in the analysis: those who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from each individual patient’s facial image, and principal component analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images, and the predicted results were compared to the actual surgical changes measured from the post-operative 3D facial images. A set of validation studies was conducted, including:
• A comparison between voxel-based registration and surface registration for analysing changes following orthognathic surgery. The results showed no statistically significant difference between the two methods. Voxel-based registration, however, proved more reliable, as it preserved the link between the soft tissue and skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was the method of choice for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.
• Direct DICOM slice landmarking: a novel technique to quantify the direction and magnitude of skeletal surgical movements. This method represents a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions. The technique involves measuring the distance of corresponding landmarks, digitised directly on DICOM image slices, in relation to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of “gold standard” measurements extracted from simulated model surgery. The results confirmed the accuracy of the method to within 0.34 mm, and the method was therefore applied in this study. The results of this validation were published in a peer-refereed journal.
• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the soft tissue dense correspondence analysis. The conformed generic mesh represents the geometrical information of the individual facial mesh onto which it is conformed (elastically deformed); the accuracy of generic mesh conformation is therefore essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean conformation error of 1 mm. The results of this study were accepted for publication in a peer-refereed scientific journal.
Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The soft tissue analysis was novel and produced a comprehensive description of facial changes in response to orthognathic surgery; the results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement at the midface region combined with widening of the paranasal region, upper lip and nostrils; minor changes were noticed at the tip of the nose and the oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions, limited widening of the lower lip, and slight reversion of the lower lip vermilion, combined with minimal backward displacement of the upper lip; minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalised advancement of the middle and lower thirds of the face combined with widening of the paranasal, upper lip and nostril regions. In the Le Fort I cases, the correlation between the facial soft tissue changes and the skeletal surgical movements was assessed using PCA. Leave-one-out cross-validation was applied to the 30 cases that underwent a Le Fort I osteotomy to make effective use of the data for the prediction algorithm. The prediction of soft tissue changes showed mean errors ranging from 0.0006 mm (± 0.582) at the nose region to −0.0316 mm (± 2.1996) across the various facial regions.
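The prediction pipeline lends itself to a compact sketch. The code below is a hypothetical, simplified illustration of PCA-based prediction with leave-one-out cross-validation (using NumPy and scikit-learn, which the thesis does not name); the arrays hard and soft stand in for the skeletal displacement features and the soft tissue changes sampled on the conformed generic mesh, and are filled here with synthetic data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
hard = rng.normal(size=(30, 12))             # placeholder skeletal features
soft = hard @ rng.normal(size=(12, 300)) + 0.1 * rng.normal(size=(30, 300))

errors = []
for train, test in LeaveOneOut().split(hard):
    # Compress the high-dimensional soft tissue field, then regress its
    # principal component scores on the skeletal displacements.
    pca = PCA(n_components=5).fit(soft[train])
    scores = pca.transform(soft[train])
    reg = LinearRegression().fit(hard[train], scores)
    pred = pca.inverse_transform(reg.predict(hard[test]))
    errors.append(pred - soft[test])

errors = np.vstack(errors)
print("mean error:", errors.mean(), "SD:", errors.std())

The signed per-vertex errors collected over the left-out cases correspond to the kind of mean ± SD figures quoted above.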

Relevance:

70.00%

Publisher:

Abstract:

Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more complex from the chemical perspective, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram, which motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) adaptively and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, sparsity is applied to overlapping image patches, favouring local structures; furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select, or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). It can also avoid artifacts introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography using EELS is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
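The dictionary-learning step at the heart of a DLET-style reconstruction can be illustrated with a toy example, using scikit-learn (an assumption; the thesis does not name an implementation). A patch dictionary is learned from a noisy 2D slice and used to compute a sparse, denoised approximation; a full DLET loop would alternate this step with tomographic data-consistency updates against the measured tilt series.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)

rng = np.random.default_rng(1)
slice_2d = np.zeros((64, 64))
slice_2d[16:48, 16:48] = 1.0                      # a blocky "nanoparticle"
noisy = slice_2d + 0.3 * rng.normal(size=slice_2d.shape)

# Learn a dictionary of 8x8 atoms from a random subset of patches.
patches = extract_patches_2d(noisy, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                # remove per-patch DC offset
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4,
                                   random_state=0).fit(X)

# Sparse-code every overlapping patch and stitch the image back together.
all_patches = extract_patches_2d(noisy, (8, 8))
Xa = all_patches.reshape(len(all_patches), -1)
dc = Xa.mean(axis=1, keepdims=True)
code = dico.transform(Xa - dc)
denoised_patches = (code @ dico.components_ + dc).reshape(all_patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)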

Relevance:

70.00%

Publisher:

Abstract:

Experiments with ultracold atoms in optical lattices have become a versatile testing ground for studying diverse quantum many-body Hamiltonians. A single-band Bose-Hubbard (BH) Hamiltonian was first proposed to describe these systems in 1998, and its associated quantum phase transition was subsequently observed in 2002. Over the years, there has been rapid progress in experimental realizations of more complex lattice geometries, leading to more exotic BH Hamiltonians with contributions from excited bands, and modified tunneling and interaction energies. There have also been interesting theoretical insights and experimental studies on “unconventional” Bose-Einstein condensates in optical lattices, and predictions of rich orbital physics in higher bands. In this thesis, I present our results on several multiband BH models and emergent quantum phenomena. In particular, I study optical lattices with two local minima per unit cell and show that the low energy states of a multiband BH Hamiltonian with only pairwise interactions are equivalent to an effective single-band Hamiltonian with strong three-body interactions. I also propose a second method to create three-body interactions in ultracold gases of bosonic atoms in an optical lattice. In this case, this is achieved by a careful cancellation of two contributions in the pairwise interaction between the atoms, one proportional to the zero-energy scattering length and a second proportional to the effective range. I subsequently study the physics of Bose-Einstein condensation in the second band of a double-well 2D lattice and show that the collision-aided decay rate of the condensate to the ground band is smaller than the tunneling rate between neighboring unit cells. Finally, I propose a numerical method using the discrete variable representation for constructing real-valued Wannier functions localized in a unit cell for optical lattices. The developed numerical method is general and can be applied to a wide array of optical lattice geometries in one, two or three dimensions.
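For reference, the single-band BH Hamiltonian referred to above takes the standard textbook form (quoted here for orientation, not from the thesis):

H = -J \sum_{\langle i,j \rangle} \bigl( \hat{b}_i^\dagger \hat{b}_j + \mathrm{h.c.} \bigr)
    + \frac{U}{2} \sum_i \hat{n}_i \bigl( \hat{n}_i - 1 \bigr)
    - \mu \sum_i \hat{n}_i ,

where J is the nearest-neighbor tunneling energy, U the on-site pairwise interaction, \mu the chemical potential, and \hat{n}_i = \hat{b}_i^\dagger \hat{b}_i the number operator; the superfluid-to-Mott-insulator transition observed in 2002 is driven by the ratio U/J. An effective three-body interaction of the kind discussed in the thesis would add a term proportional to \sum_i \hat{n}_i (\hat{n}_i - 1)(\hat{n}_i - 2).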

Relevance:

70.00%

Publisher:

Abstract:

We have studied numerically the effect of quenched site dilution on a weak first-order phase transition in three dimensions. We have simulated the site-diluted three-state Potts model, studying in detail the second-order region of its phase diagram. We have found that the ν exponent is compatible with that of the three-dimensional diluted Ising model, whereas the η exponent is definitely different.
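In the standard notation for site dilution (the abstract itself does not spell out the Hamiltonian), the simulated model is

H = - \sum_{\langle i,j \rangle} \epsilon_i \epsilon_j \, \delta_{\sigma_i \sigma_j} ,
\qquad \sigma_i \in \{1, 2, 3\}, \quad \epsilon_i \in \{0, 1\},

where the \sigma_i are three-state Potts spins and the \epsilon_i are quenched occupation variables equal to 1 with probability p, so sites interact only with occupied nearest neighbours; the pure (p = 1) model has a weak first-order transition in three dimensions, and dilution drives part of the phase diagram second order.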

Relevance:

60.00%

Publisher:

Abstract:

Over the past twenty years, brand loyalty has been an important topic for both marketing practitioners and academics. While practitioners have produced proprietary brand loyalty audit models, there has been little academic research to make transparent the methodology that underpins these audits and to enable practitioners to understand, develop and conduct their own audits. In this paper, we propose a framework for a brand loyalty audit that uses a tri-dimensional approach to brand loyalty, which includes behavioural loyalty and the two components of attitudinal loyalty: emotional and cognitive loyalty. In allowing for different levels and intensities of brand loyalty, this tri-dimensional approach is important from a managerial perspective. It means that loyalty strategies arising from a brand audit can be made more effective by targeting the market segments that demonstrate the most appropriate combination of brand loyalty components. We propose a matrix with three dimensions (emotional, cognitive and behavioural loyalty) and two levels (high and low loyalty) to facilitate a brand loyalty audit. To demonstrate this matrix, we use the example of financial services, in particular a rewards-based credit card.

Relevance:

60.00%

Publisher:

Abstract:

Introduction: Work engagement is a recent application of positive psychology and refers to a positive, fulfilling, work-related state of mind characterized by vigor, dedication and absorption. Despite theoretical assumptions, there is little published research on work engagement, due primarily to its recent emergence as a psychological construct. Examining work engagement in high-stress occupations, such as policing, is particularly useful, as police officers are generally characterized as having a high level of work engagement. Previous research has identified job resources (e.g. social support) as antecedents of work engagement; however, detailed evaluation of job demands as an antecedent of work engagement within high-stress occupations has been scarce. A key aim of this study was therefore to test job demands (i.e. monitoring demands and problem-solving demands) and job resources (i.e. time control, method control, supervisory support, colleague support, and friend and family support) as antecedents of work engagement among police officers. Method: Data were collected via a self-report online survey from one Australian state police service (n = 1,419). Due to the high number of hypothesized antecedent variables, hierarchical multiple regression analysis was employed rather than structural equation modelling. Results: Work engagement reported by police officers was high. Female officers had significantly higher levels of work engagement than male officers, while officers at mid-level ranks (sergeant) reported the lowest levels of work engagement. Job resources (method control, supervisor support and colleague support) were significant antecedents of all three dimensions of work engagement; only monitoring demands were a significant antecedent of absorption. Conclusion: Having healthy and engaged police officers is important for community security and economic growth. This study identified some common factors which influence the work engagement experienced by police officers. We also note, however, that excessive work engagement can yield negative outcomes such as psychological distress.
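The blockwise logic of hierarchical multiple regression can be sketched briefly; the code below is an illustrative toy using statsmodels on synthetic data, with variable names that are placeholders rather than the study's actual measures. Demands are entered first, resources second, and the increment in R-squared is inspected.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
demands = rng.normal(size=(n, 2))     # e.g. monitoring, problem-solving
resources = rng.normal(size=(n, 3))   # e.g. method control, support measures
engagement = (0.2 * demands[:, 0] + resources @ [0.4, 0.3, 0.2]
              + rng.normal(size=n))

# Step 1: demands only. Step 2: demands plus resources.
step1 = sm.OLS(engagement, sm.add_constant(demands)).fit()
X2 = sm.add_constant(np.column_stack([demands, resources]))
step2 = sm.OLS(engagement, X2).fit()

print(f"R2 step 1 (demands only): {step1.rsquared:.3f}")
print(f"R2 step 2 (+ resources):  {step2.rsquared:.3f}")
print(f"Delta R2: {step2.rsquared - step1.rsquared:.3f}")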

Relevance:

60.00%

Publisher:

Abstract:

An interactive installation with holographic 3D projections, satellite imagery, surround sound and intuitive body-driven interactivity. Remnant (v.1) was commissioned by the 2010 TreeLine ecoArt event - an initiative of the Sunshine Coast Council - and presented at a remnant block of subtropical rainforest called ‘Mary Cairncross Scenic Reserve’, located 100 km north of Brisbane near the township of Maleny. V.2 was later commissioned for KickArts Gallery, Cairns, re-presenting the work in a new open format which allowed audiences both to experience the original power of the work and to understand the construction of its powerful illusory visual spaces. This art-science project focused upon the idea of remnant landscapes - isolated blocks of forest (or other vegetation types) typically set within a patchwork quilt of surrounding farmed land. Participants peer into a mysterious, long tunnel of imagery whilst navigating entirely through gentle head movements - allowing them both to 'steer' in three dimensions and to 'alight', as a butterfly might, upon a sector of landscape - which in turn reveals an underlying 'landscape of mind'. The work challenges audiences to re-imagine our conceptions of country in ways that will lead us to better reconnect with and sustain today’s heavily divided landscapes. The research field involved developing new digital image projection methods, and alternate embodied interaction and engagement strategies, for eco-political media arts practice. The context was the creation of improved embodied and improvisational experiences for participants, further informed by ‘eco-philosophical’ and sustainment theories. By engaging with deep conceptions of connectivity between apparently disparate elements, the work considered novel strategies for fostering new desires: for understanding and re-thinking the requisite physical and ecological links between ‘things’ that have been historically shattered. The methodology was primarily practice-led, working in concert with the underlying theories. The work’s knowledge contribution was to question how new media interactive experience and embodied interaction might prompt participants to reflect upon the resources and knowledges required to generate this substantive desire for new approaches to sustainment. This was accentuated through the power of learning implied by the work's strongly visual and kinaesthetic interface (i.e. the tunnel of imagery and the head- and torso-operated navigation). The work was commissioned by the 2010 TreeLine ecoArt event - an initiative of the Sunshine Coast Council - and the second version was commissioned by KickArts Gallery, Cairns, specifically funded by a national optometrist chain. It was also funded in development by Arts Queensland and reviewed in RealTime.

Relevance:

60.00%

Publisher:

Abstract:

The focus of this study is the celebration of Eucharist in Catholic primary schools within the Archdiocese of Brisbane. The context of the contemporary Australian Catholic primary school embodies certain 'problematical realities' in relation to the time-honoured way in which school Eucharistic rituals have been celebrated. These contemporary realities raise a number of issues that impact on school celebrations of Eucharist. The purpose of this study is to explore administrators' differing conceptions of school Eucharistic rituals, in an attempt to investigate some of these issues and assist members of individual school communities as they strive to make celebrations of Eucharist appropriate and meaningful for the group gathered. The phenomenographic research approach was adopted, as it is well suited to the purpose of this study and the nature of the research question. Phenomenography is essentially a study of variation. It attempts to map the 'whole' phenomenon under investigation by describing on equal terms all conceptions of the phenomenon and establishing an ordered relationship among them. The purpose of this study and the nature of the research question necessitate an approach that allows the identification and description of the different ways in which administrators experience school Eucharistic rituals; accordingly, phenomenography was selected. Members of the Administration Team, namely the principal, the APRE (Assistant to the Principal Religious Education) and, in larger primary schools, the APA (Assistant to the Principal Administration), share responsibility for leading change in Catholic primary schools in the Archdiocese of Brisbane. In practice, however, principals delegate the role of leading the development of the school's religion program and providing leadership in the religious life of the school community to the APRE (Brisbane Catholic Education, 1997). Informants in this study are nineteen APREs from a variety of Catholic primary schools in the Archdiocese of Brisbane. These APREs come from schools across the archdiocese, rather than from within one particular region. Several significant findings resulted from this study. Firstly, the data show that there are significant differences in how APREs experience school Eucharistic rituals, although the number of these qualitatively different conceptions is quite limited. The study identifies and describes six distinct yet related conceptions of school Eucharistic rituals. The logical relationship among these conceptions (the outcome space) is presented in the form of a diagram with accompanying explication. The variation among the conceptions is best understood and described in terms of three dimensions of the role of Eucharist in the Catholic primary school, and is represented on the model of the outcome space. Individual transcripts suggest that individual APREs tend to emphasise some conceptions more than others. It is the contention of the present study that change in the practice of school Eucharistic rituals is unlikely to occur until all of a school community's conceptions are brought out into the open and articulated. As leaders of change, APREs need to be alerted to their own biases and become aware of alternative ways of conceiving school Eucharistic ritual.
It is proposed that the different categories of description and dimensions, represented by the model of the outcome space, can be used to help in the process of articulating a school community's conceptions of Eucharist, with the APRE as facilitator of this process. As a result, the school community develops a better understanding of why their particular school does what it does in relation to school Eucharistic rituals.

Relevance:

60.00%

Publisher:

Abstract:

A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator of safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure. Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that, using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results provide a baseline for further refinements to the planning algorithms. A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods for testing path traversability, for losing excess altitude, and for the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is more than a tenfold improvement over Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, without sacrificing performance, and a specific method of supplying the correct turning direction is used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated, in simulation, vertical miss distances of under 2 m in changing winds.
A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly. This algorithm is robust to wind changes and is easily adaptable to any aircraft type; tracking accuracies obtained with it are comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
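As a point of reference for the guidance discussion, the sketch below implements the core of Park, Deyst and How's nonlinear lateral guidance law in Python/NumPy, in 2D and without the wind-aware enhancements the thesis adds: a reference point is chosen a distance L1 ahead on the path, and the commanded lateral acceleration is a_cmd = 2V²/L1 · sin(η), where η is the angle from the velocity vector to the line of sight.

import numpy as np

def lateral_accel(pos, vel, ref_point):
    """Commanded lateral acceleration toward a reference point on the path."""
    to_ref = ref_point - pos
    L1 = np.linalg.norm(to_ref)            # distance to the reference point
    v = np.linalg.norm(vel)
    # Signed angle eta between velocity and line of sight (2D cross product).
    cross = vel[0] * to_ref[1] - vel[1] * to_ref[0]
    eta = np.arctan2(cross, np.dot(vel, to_ref))
    return 2.0 * v ** 2 / L1 * np.sin(eta)

# Aircraft at the origin flying east at 20 m/s; reference point 100 m
# ahead and 20 m left of track. A positive command means "turn left".
pos = np.array([0.0, 0.0])
vel = np.array([20.0, 0.0])
ref = np.array([100.0, 20.0])
print(f"a_cmd = {lateral_accel(pos, vel, ref):.2f} m/s^2")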

Relevance:

60.00%

Publisher:

Abstract:

Six Sigma has proven itself as a major quality initiative over the last two decades. It is a philosophy which provides a systematic approach to applying numerous tools within the framework of several quality improvement methodologies. The most widely used Six Sigma methodology is DMAIC, which is best suited to improving existing processes. In order to build quality into a product or service, a proactive approach like Design for Six Sigma (DFSS) is required. This paper provides an overview of DFSS, product innovation, and service innovation, with the emphasis on comparing how DFSS is applied differently in product and service innovation. The paper contributes by analysing the existing literature on DFSS in product and service innovation. The major finding is that the DFSS approach in services and products can be differentiated along the following three dimensions: methodology, characteristics, and technology.

Relevance:

60.00%

Publisher:

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The aim was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The intention was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure that incorporates all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility; however, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
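A sketch of the neighbourhood structure behind the CAR layered model may help: because sites are neighbours only within the same depth layer, the joint CAR precision matrix is block-diagonal over layers, and each layer can carry its own spatial precision. The construction below (Python/NumPy and SciPy; grid sizes and parameters are purely illustrative, and a full CAR specification would also treat rho and the taus as unknowns) is one way to realise that structure.

import numpy as np
from scipy.linalg import block_diag

def car_precision(nx, ny, tau, rho=0.9):
    """Proper CAR precision Q = tau * (D - rho * W) for an nx-by-ny grid."""
    n = nx * ny
    W = np.zeros((n, n))                 # first-order adjacency within a layer
    for i in range(nx):
        for j in range(ny):
            k = i * ny + j
            if i + 1 < nx:
                W[k, k + ny] = W[k + ny, k] = 1.0
            if j + 1 < ny:
                W[k, k + 1] = W[k + 1, k] = 1.0
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# One block per depth layer, with the spatial precision allowed to differ
# by depth; no entries link sites in different layers.
taus = [1.0, 2.5, 4.0]
Q = block_diag(*[car_precision(10, 10, tau) for tau in taus])
print(Q.shape)   # (300, 300): three decoupled 10x10 layers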

Relevance:

60.00%

Publisher:

Abstract:

In Chapter 10, Adam and Dougherty describe the application of medical image processing to the assessment and treatment of spinal deformity, with a focus on the surgical treatment of idiopathic scoliosis. The natural history of spinal deformity and current approaches to surgical and non-surgical treatment are briefly described, followed by an overview of current clinically used imaging modalities. The key metrics currently used to assess the severity and progression of spinal deformities from medical images are presented, followed by a discussion of the errors and uncertainties involved in manual measurements. This provides the context for an analysis of automated and semi-automated image processing approaches to measure spinal curve shape and severity in two and three dimensions.

Relevance:

60.00%

Publisher:

Abstract:

This research examines why and how brand owners in China adopt and use mobile media in marketing campaigns to deliver co-creation brand experiences and build consumer relationships. China represents an interesting case to study, as it has leapfrogged into the age of the consumer society and of mobile media adoption. As the largest mobile market globally, it has experienced intense diffusion of mobile technology, and with it the rise of mobile consumer culture and participatory culture. Further, rising individualism alongside a socio-cultural heritage of collectivism serves as a structuring tension in how mobile media is leveraged in marketing to cater to consumers' desires for individuality and social interaction. First, through expert interviews guided by the technology-organization-environment (TOE) framework (Tornatzky & Fleischer, 1990) and integrating innovation diffusion theory (E. Rogers, 2003), this research addresses the gap in firm-level theory on mobile marketing adoption in China and unravels the factors shaping brand owners' adoption of mobile marketing. In total, 27 semi-structured interviews were conducted with key industry informants from mobile agencies, traditional agencies, venture capital firms, mobile content and service providers, mobile portals, and marketing management at brand owners. Second, based on case studies in China, this research investigates the use of mobile marketing to facilitate innovative co-creation of brand experience that caters to both the individualistic and the collective tendencies and desires of Chinese consumers. Through multiple case studies of campaigns conducted by Nokia, Clean & Clear, and The North Face, informed by in-depth interviews and document analysis, this research analyses the role of mobile media in marketing campaigns along three dimensions: the role of mobile media in content generation and consumption; the centrality of mobile media as text, tools or platforms; and the interactive environment. Specifically, the cases are organised along the spectrum from user-generated to corporate-generated content, from mobile media in a supplementary role to mobile media in a central role, and from a virtual environment to a hybrid environment. Overall, these cases demonstrate how brand owners adapt mobile media as text, tools, platforms, and environments to deliver co-creation brand experiences that exploit both the individualistic and the collective tendencies and desires of Chinese consumers. This research contributes to the literature on firm adoption of mobile marketing and on the role of mobile media in facilitating co-creation experiences for Chinese consumers. It develops a model of the technological, organizational and environmental factors influencing mobile marketing adoption by firms, and provides a model explaining the role of mobile media in facilitating brand experience co-creation. The findings also demonstrate that mobile media can be leveraged to facilitate co-creation brand experiences that generate added value, while catering to both the rising individualism and the deep-seated collectivism of Chinese consumers. Empirically, the research assists industry practitioners in understanding the adoption of mobile marketing in China, especially those on the supply side seeking to improve their offerings and propositions, and assists brand owners and agencies in designing mobile marketing strategies to build consumer relationships in China.

Relevance:

60.00%

Publisher:

Abstract:

The growth of solid tumours beyond a critical size is dependent upon angiogenesis, the formation of new blood vessels from an existing vasculature. Tumours may remain dormant at microscopic sizes for some years before switching to a mode in which growth of a supportive vasculature is initiated. The new blood vessels supply nutrients, oxygen, and access to routes by which tumour cells may travel to other sites within the host (metastasise). In recent decades an abundance of biological research has focused on tumour-induced angiogenesis in the hope that treatments targeted at the vasculature may result in a stabilisation or regression of the disease: a tantalising prospect. The complex and fascinating process of angiogenesis has also attracted the interest of researchers in the field of mathematical biology, a discipline that is, for mathematics, relatively new. The challenge in mathematical biology is to produce a model that captures the essential elements and critical dependencies of a biological system; such a model may ultimately be used as a predictive tool. In this thesis we examine a number of aspects of tumour-induced angiogenesis, focusing on growth of the neovasculature external to the tumour. Firstly, we present a one-dimensional continuum model of tumour-induced angiogenesis in which elements of the immune system, or other tumour cytotoxins, are delivered via the newly formed vessels. This model, based on observations from experiments by Judah Folkman et al., is able to show regression of the tumour for some parameter regimes, and the modelling highlights a number of interesting aspects of the process that may be characterised further in the laboratory. The next model examines the initiation positions of blood vessel sprouts on an existing vessel, in a two-dimensional domain. This model hypothesises that a simple feedback inhibition mechanism may describe the spacing of these sprouts, with the inhibitor being produced by breakdown of the existing vessel's basement membrane. Finally, we have developed a stochastic model of blood vessel growth and anastomosis in three dimensions. The model has been implemented in C++, includes an OpenGL interface, and uses a novel algorithm for calculating the proximity of the line segments representing a growing vessel. This choice of programming language and graphics interface allows near-simultaneous calculation and visualisation of blood vessel networks on a contemporary personal computer. In addition, the visualised results may be transformed interactively, and drop-down menus facilitate changes in the parameter values. Visualisation of results is of vital importance in the communication of mathematical information to a wide audience, and we aim to incorporate this philosophy in the thesis. As biological research further uncovers the intriguing processes involved in tumour-induced angiogenesis, we conclude with a comment from mathematical biologist Jim Murray: mathematical biology is “... the most exciting modern application of mathematics”.
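The thesis describes a novel proximity algorithm; for orientation only, the sketch below shows a standard minimum-distance computation between two 3D segments (the clamped closest-point formulation found in computational geometry texts, not the thesis's own algorithm), which is the kind of test an anastomosis check requires. Python/NumPy is used here for illustration rather than the thesis's C++.

import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-12):
    """Minimum distance between segments p1-q1 and p2-q2 in 3D."""
    d1, d2 = q1 - p1, q2 - p2
    r = p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    # Parameter s on segment 1, clamped to [0, 1]; near-parallel segments
    # (denom ~ 0) fall back to s = 0 and let t absorb the geometry.
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
    t = (b * s + f) / e if e > eps else 0.0
    if t < 0.0:
        t, s = 0.0, (np.clip(-c / a, 0.0, 1.0) if a > eps else 0.0)
    elif t > 1.0:
        t, s = 1.0, (np.clip((b - c) / a, 0.0, 1.0) if a > eps else 0.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

# Two nearly-touching vessel segments; expected distance 0.1.
print(segment_distance(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                       np.array([0.5, 0.1, 0.0]), np.array([0.5, 1.0, 0.0])))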