333 results for potential models
Abstract:
Mainstream business process modelling techniques promote a design paradigm wherein the activities to be performed within a case, together with their usual execution order, form the backbone of a process model, on top of which other aspects are anchored. This paradigm, while effective in standardised and production-oriented domains, shows some limitations when confronted with processes where case-by-case variations and exceptions are the norm. In this thesis we develop the idea that the effective design of flexible process models calls for an alternative modelling paradigm, one in which process models are modularised along key business objects, rather than along activity decompositions. The research follows a design science method, starting from the formulation of a research problem expressed in terms of requirements, and culminating in a set of artifacts that have been devised to satisfy these requirements. The main contributions of the thesis are: (i) a meta-model for object-centric process modelling incorporating constructs for capturing flexible processes; (ii) a transformation from this meta-model to an existing activity-centric process modelling language, namely YAWL, showing the relation between object-centric and activity-centric process modelling approaches; and (iii) a Coloured Petri Net that captures the semantics of the proposed meta-model. The meta-model has been evaluated using a framework consisting of a set of workflow patterns. Moreover, the meta-model has been embodied in a modelling tool that has been used to capture two industrial scenarios.
Abstract:
Low back pain is an increasing problem in industrialised countries and although it is a major socio-economic problem in terms of medical costs and lost productivity, relatively little is known about the processes underlying the development of the condition. This is in part due to the complex interactions between bone, muscle, nerves and other soft tissues of the spine, and the fact that direct observation and/or measurement of the human spine is not possible using non-invasive techniques. Biomechanical models have been used extensively to estimate the forces and moments experienced by the spine. These models provide a means of estimating the internal parameters which can not be measured directly. However, application of most of the models currently available is restricted to tasks resembling those for which the model was designed due to the simplified representation of the anatomy. The aim of this research was to develop a biomechanical model to investigate the changes in forces and moments which are induced by muscle injury. In order to accurately simulate muscle injuries a detailed quasi-static three dimensional model representing the anatomy of the lumbar spine was developed. This model includes the nine major force generating muscles of the region (erector spinae, comprising the longissimus thoracis and iliocostalis lumborum; multifidus; quadratus lumborum; latissimus dorsi; transverse abdominis; internal oblique and external oblique), as well as the thoracolumbar fascia through which the transverse abdominis and parts of the internal oblique and latissimus dorsi muscles attach to the spine. The muscles included in the model have been represented using 170 muscle fascicles each having their own force generating characteristics and lines of action. Particular attention has been paid to ensuring the muscle lines of action are anatomically realistic, particularly for muscles which have broad attachments (e.g. 
internal and external obliques), muscles which attach to the spine via the thoracolumbar fascia (e.g. transverse abdominis), and muscles whose paths are altered by bony constraints such as the rib cage (e.g. iliocostalis lumborum pars thoracis and parts of the longissimus thoracis pars thoracis). In this endeavour, a separate sub-model which accounts for the shape of the torso by modelling it as a series of ellipses has been developed to model the lines of action of the oblique muscles. Likewise, a separate sub-model of the thoracolumbar fascia has also been developed which accounts for the middle and posterior layers of the fascia, and ensures that the line of action of the posterior layer is related to the size and shape of the erector spinae muscle. Published muscle activation data are used to enable the model to predict the maximum forces and moments that may be generated by the muscles. These predictions are validated against published experimental studies reporting maximum isometric moments for a variety of exertions. The model performs well for flexion, extension and lateral bend exertions, but underpredicts the axial twist moments that may be developed. This discrepancy is most likely the result of differences between the experimental methodology and the modelled task. The application of the model is illustrated using examples of muscle injuries created by surgical procedures. The three examples used represent a posterior surgical approach to the spine, an anterior approach to the spine and uni-lateral total hip replacement surgery. Although the three examples simulate different muscle injuries, all demonstrate the production of significant asymmetrical moments and/or reduced joint compression following surgical intervention. This result has implications for patient rehabilitation and the potential for further injury to the spine. The development and application of the model has highlighted a number of areas where current knowledge is deficient.
These include muscle activation levels for tasks in postures other than upright standing, changes in spinal kinematics following surgical procedures such as spinal fusion or fixation, and a general lack of understanding of how the body adjusts to muscle injuries with respect to muscle activation patterns and levels, rate of recovery from temporary injuries and compensatory actions by other muscles. Thus the comprehensive and innovative anatomical model which has been developed not only provides a tool to predict the forces and moments experienced by the intervertebral joints of the spine, but also highlights areas where further clinical research is required.
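The moment prediction described above reduces, for each fascicle, to the cross product of its attachment position with its force vector, summed over all fascicles. A minimal numpy sketch with made-up fascicle geometry and tensions (the thesis's 170 anatomically derived fascicles and lines of action are not reproduced here; every number below is an invented illustration):

```python
import numpy as np

# Three hypothetical fascicles acting about a lumbar joint centre:
# attachment points (m, relative to the joint) and raw lines of action.
r = np.array([[0.05,  0.02, 0.10],
              [-0.04, 0.03, 0.12],
              [0.00, -0.05, 0.08]])
directions = np.array([[0.0, 0.1, -1.0],
                       [0.1, 0.0, -1.0],
                       [0.0, 0.0, -1.0]])
directions /= np.linalg.norm(directions, axis=1, keepdims=True)  # unit vectors
forces_N = np.array([120.0, 90.0, 150.0])  # fascicle tensions (N)

F = directions * forces_N[:, None]          # force vector per fascicle
net_force = F.sum(axis=0)                   # joint compression/shear load
net_moment = np.cross(r, F).sum(axis=0)     # flexion/extension, lateral bend, axial twist
```

With predominantly downward-pointing lines of action, the resultant axial component of `net_force` is compressive, which is the quantity the model reports alongside the three moment components.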
Abstract:
Osteophytes form through the process of chondroid metamorphosis of fibrous tissue followed by endochondral ossification. Osteophytes have been found to consist of three different mesenchymal tissue regions including endochondral bone formation within cartilage residues, intra-membranous bone formation within fibrous tissue and bone formation within bone marrow spaces. All these features provide evidence of mesenchymal stem cell (MSC) involvement in osteophyte formation; nevertheless, this involvement remains to be characterised. MSC from numerous mesenchymal tissues have been isolated, but bone marrow remains the “ideal” source due to the ease of ex vivo expansion and multilineage potential. However, the bone marrow stroma has a relatively low number of MSC, which necessitates long-term culture and extensive population doublings in order to obtain a sufficient number of cells for therapeutic applications. MSC in vitro have limited proliferative capacity, and extensive passaging compromises differentiation potential. To overcome this barrier, tissue-derived MSC are of strong interest for extensive study and characterisation, with a focus on their potential application in therapeutic tissue regeneration. To date, no MSC-type cell has been isolated from osteophyte tissue, despite this tissue exhibiting all the hallmark features of a regenerative tissue. Therefore, this study aimed to isolate and characterise cells from osteophyte tissues in relation to their phenotype, differentiation potential, immuno-modulatory properties, proliferation, cellular ageing, longevity and chondrogenesis in an in vitro defect model, in comparison to patient-matched bone marrow stromal cells (bMSC). Osteophyte-derived cells were isolated from osteophyte tissue samples collected during knee replacement surgery. These cells were characterised by the expression of cell surface antigens, differentiation potential into mesenchymal lineages, growth kinetics and modulation of allo-immune responses.
Multipotential stem cells were identified from all osteophyte samples, termed osteophyte-derived mesenchymal stem cells (oMSC). Extensively expanded cell cultures (passages 4 and 9 respectively) were used to confirm cytogenetic stability and to study signs of cellular ageing, telomere length and telomerase activity. Cultured cells at passage 4 were used to determine the expression profile of 84 pathway-focused stem cell related genes. Micro-mass pellets were cultured in chondrogenic differentiation media for 21 days for phenotypic and chondrogenic-related gene expression. Secondly, cell pellets differentiated overnight were placed into articular cartilage defects and cultured for a further 21 days in control medium and chondrogenic medium to study chondrogenesis and cell behaviour. The surface antigen expression of oMSC was consistent with that of mesenchymal stem cells, lacking the haematopoietic and common leukocyte markers (CD34, CD45) while expressing those related to adhesion (CD29, CD166, CD44) and stem cells (CD90, CD105, CD73). The proliferation capacity of oMSC in culture was superior to that of bMSC, and they readily differentiated into tissues of the mesenchymal lineages. oMSC also demonstrated the ability to suppress allogeneic T-cell proliferation, which was associated with the expression of the tryptophan-degrading enzyme indoleamine 2,3-dioxygenase (IDO). Cellular ageing was more prominent in late passage bMSC than in oMSC. oMSC had longer telomere length in late passages compared with bMSC, although there was no significant difference in telomere lengths in the early passages in either cell type. Telomerase activity was detectable only in early passage oMSC and not in bMSC. In osteophyte tissues, telomerase-positive cells were found to be located perivascularly and were Stro-1 positive. Eighty-four pathway-focused genes were investigated and only five genes (APC, CCND2, GJB2, NCAM and BMP2) were differentially expressed between bMSC and oMSC.
Chondrogenically induced micro-mass pellets of oMSC showed higher staining intensity for proteoglycans, aggrecan and collagen II. Differential expression of chondrogenic-related genes showed up-regulation of aggrecan and Sox 9 in oMSC and of collagen II in bMSC. The in vitro defect models of oMSC in control medium showed rounded and aggregated cells staining positively for proteoglycan and the presence of some extracellular matrix. In contrast, defects with bMSC showed fragmentation and loss of cells, with fibroblast-like cell morphology staining positively for proteoglycans. For defects maintained in chondrogenic medium, rounded, aggregated and proteoglycan-positive cells were found in both oMSC and bMSC cultures. Extracellular matrix and cellular integration into newly formed matrix were evident only in oMSC defects. For analysis of chondrocyte hypertrophy, strong expression of type X collagen was noted in the pellet cultures and transplanted bMSC. In summary, this study demonstrated that osteophyte-derived cells had similar properties to mesenchymal stem cells in antigen phenotype expression, differentiation potential and suppression of the allo-immune response. Furthermore, when compared to bMSC, oMSC maintained a higher proliferative capacity due to a retained level of telomerase activity in vitro, which may account for the relatively longer telomeres delaying growth arrest by replicative senescence compared with bMSC. oMSC behaviour in defects supported chondrogenesis, which implies that cells derived from regenerative tissue can be an alternative source of stem cells with potential clinical application for therapeutic stem cell based tissue regeneration.
Abstract:
Studies have examined the associations between cancers and circulating 25-hydroxyvitamin D [25(OH)D], but little is known about the impact of different laboratory practices on 25(OH)D concentrations. We examined the potential impact of delayed blood centrifuging, choice of collection tube, and type of assay on 25(OH)D concentrations. Blood samples from 20 healthy volunteers underwent alternative laboratory procedures: four centrifuging times (2, 24, 72, and 96 h after blood draw); three types of collection tubes (red top serum tube, two different plasma anticoagulant tubes containing heparin or EDTA); and two types of assays (DiaSorin radioimmunoassay [RIA] and chemiluminescence immunoassay [CLIA/LIAISON®]). Log-transformed 25(OH)D concentrations were analyzed using generalized estimating equation (GEE) linear regression models. We found no difference in 25(OH)D concentrations by centrifuging time or type of assay. There was some indication of a difference in 25(OH)D concentrations by tube type in CLIA/LIAISON®-assayed samples, with concentrations in heparinized plasma (geometric mean, 16.1 ng ml⁻¹) higher than those in serum (geometric mean, 15.3 ng ml⁻¹) (p = 0.01), but the difference was significant only after substantial centrifuging delays (96 h). Our study suggests that immediate processing of blood samples after collection is not necessary, and that the choice of tube type and assay has little impact on measured 25(OH)D.
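The geometric means reported above are exponentials of mean log concentrations, which is why the analysis is run on log-transformed values. A small sketch of that computation (the paired readings below are hypothetical; only the direction of the plasma-versus-serum difference mirrors the abstract):

```python
import math

def geometric_mean(values):
    """exp of the arithmetic mean of the logs -- the quantity GEE models
    on the log scale effectively compare."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# hypothetical paired 25(OH)D readings (ng/ml), for illustration only
heparin_plasma = [18.0, 14.5, 16.2, 15.9]
serum          = [17.1, 13.9, 15.4, 15.0]

ratio = geometric_mean(heparin_plasma) / geometric_mean(serum)
# a ratio > 1 reproduces the direction of the reported tube-type difference
```

On the log scale a difference in means corresponds to this ratio of geometric means, which is the natural effect measure for right-skewed assay data.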
Abstract:
This article presents a survey of authorisation models and considers their ‘fitness-for-purpose’ in facilitating information sharing. Network-supported information sharing is an important technical capability that underpins collaboration in support of dynamic and unpredictable activities such as emergency response, national security, infrastructure protection, supply chain integration and emerging business models based on the concept of a ‘virtual organisation’. The article argues that present authorisation models are inflexible and poorly scalable in such dynamic environments due to their assumption that the future needs of the system can be predicted, which in turn justifies the use of persistent authorisation policies. The article outlines the motivation and requirement for a new flexible authorisation model that addresses the needs of information sharing. It proposes that a flexible and scalable authorisation model must allow an explicit specification of the objectives of the system and access decisions must be made based on a late trade-off analysis between these explicit objectives. A research agenda for the proposed Objective-based Access Control concept is presented.
Abstract:
The performance of iris recognition systems is significantly affected by segmentation accuracy, especially in non-ideal iris images. This paper proposes an improved method to localise non-circular iris images quickly and accurately. Shrinking and expanding active contour methods are consolidated when localising the inner and outer iris boundaries. First, the pupil region is roughly estimated based on histogram thresholding and morphological operations. Thereafter, a shrinking active contour model is used to precisely locate the inner iris boundary. Finally, the estimated inner iris boundary is used as an initial contour for an expanding active contour scheme to find the outer iris boundary. The proposed scheme is robust in finding the exact iris boundaries of non-circular and off-angle irises. In addition, occlusions of the iris images from eyelids and eyelashes are automatically excluded from the detected iris region. Experimental results on the CASIA v3.0 iris database indicate the accuracy of the proposed technique.
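The first stage described above (rough pupil estimation by intensity thresholding plus morphological clean-up) can be sketched as follows. This is a simplified pure-numpy illustration on a synthetic image, not the authors' implementation; the threshold value and 3×3 structuring element are assumptions:

```python
import numpy as np

def rough_pupil_mask(gray, thresh=60):
    """Roughly estimate the pupil region: the pupil is the darkest blob,
    so keep low intensities, then clean speckle with a 3x3 morphological
    opening (erosion followed by dilation)."""
    binary = (gray < thresh).astype(np.uint8)

    def erode(img):   # pixel survives only if its whole 3x3 neighbourhood is set
        out = np.ones_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out &= np.roll(np.roll(img, dy, 0), dx, 1)
        return out

    def dilate(img):  # pixel is set if anything in its 3x3 neighbourhood is set
        out = np.zeros_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= np.roll(np.roll(img, dy, 0), dx, 1)
        return out

    return dilate(erode(binary))

# synthetic eye image: bright background with a dark "pupil" disc
img = np.full((64, 64), 200, dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] = 20
mask = rough_pupil_mask(img)
cy, cx = np.argwhere(mask).mean(axis=0)  # rough pupil centre
```

The recovered centre then seeds the shrinking active contour; in the paper that contour, not this mask, delivers the precise inner boundary.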
Abstract:
Dr Gillian Hallam is project leader for the Queensland Government Agency Libraries Review. As an initial step in the project, a literature review was commissioned to guide the research activities and inform the development of options for potential future service delivery models for the Government agency libraries. The review presents an environmental scan and review of the professional and academic literature to consider a range of current perspectives on library and information services. Significant in this review is the focus on the specific issues and challenges impacting on contemporary government libraries and their staff. The review incorporates four key areas: current directions in government administration; trends in government library services; issues in contemporary special libraries; and the skills and competencies of special librarians. Rather than representing an exhaustive review, the research has primarily centred on recent journal articles, conference papers, reports and web resources. Commentary prepared by national and international library associations has also played a role in informing this review, as has the relevant State and Federal government documentation and reporting.
Abstract:
The term structure of interest rates is often summarized using a handful of yield factors that capture shifts in the shape of the yield curve. In this paper, we develop a comprehensive model for volatility dynamics in the level, slope, and curvature of the yield curve that simultaneously includes level and GARCH effects along with regime shifts. We show that the level of the short rate is useful in modeling the volatility of the three yield factors and that there are significant GARCH effects present even after including a level effect. Further, we find that allowing for regime shifts in the factor volatilities dramatically improves the model’s fit and strengthens the level effect. We also show that a regime-switching model with level and GARCH effects provides the best out-of-sample forecasting performance of yield volatility. We argue that the auxiliary models often used to estimate term structure models with simulation-based estimation techniques should be consistent with the main features of the yield curve that are identified by our model.
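The combination of level and GARCH effects in the model can be illustrated with a stylised conditional-variance recursion; this is not the paper's estimated specification, and all parameter values (`omega`, `alpha`, `beta`, `gamma`) are invented for illustration:

```python
import numpy as np

def garch_level_variance(eps, r_level, omega=0.01, alpha=0.1, beta=0.85, gamma=0.5):
    """Conditional variance h_t combining a GARCH(1,1) recursion with a
    short-rate 'level' scaling r_{t-1}^gamma -- a stylised sketch of a
    level+GARCH specification, not the paper's estimated model."""
    h = np.empty_like(eps)
    h[0] = eps.var()                       # initialise at the sample variance
    for t in range(1, len(eps)):
        h[t] = (omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]) * r_level[t - 1] ** gamma
    return h

rng = np.random.default_rng(0)
r_level = np.full(500, 0.05)               # constant 5% short rate, for illustration
eps = rng.standard_normal(500) * 0.01      # simulated yield-factor innovations
h = garch_level_variance(eps, r_level)
```

Setting `gamma = 0` switches the level channel off, which is the kind of nested restriction the paper's in-sample comparisons exploit; regime shifts would add a state-dependent set of these parameters.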
Abstract:
This paper firstly presents an extended ambiguity resolution model that deals with an ill-posed problem and constraints among the estimated parameters. In the extended model, the regularization criterion is used instead of the traditional least squares in order to estimate the float ambiguities better. The existing models can be derived from the general model. Secondly, the paper examines the existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints; integer rounding; integer bootstrapping and integer least squares estimations. Finally, this paper systematically addresses the similarities and differences between the generalized TCAR and decorrelation methods from both theoretical and practical aspects.
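Of the integer estimators examined above, rounding and bootstrapping are easy to contrast in code. A textbook-style sketch (not taken from the paper; the example float ambiguities and covariance are invented):

```python
import numpy as np

def integer_rounding(a_float):
    """Round each float ambiguity independently, ignoring correlations."""
    return np.round(a_float)

def integer_bootstrapping(a_float, Q):
    """Sequential conditional rounding: fix the last ambiguity first, then
    correct the remaining floats with the conditional covariance before
    rounding them (standard bootstrapping; Q is the float-ambiguity
    covariance matrix)."""
    a = a_float.astype(float).copy()
    Q = Q.astype(float).copy()
    n = len(a)
    z = np.zeros(n)
    for i in range(n - 1, -1, -1):
        z[i] = np.round(a[i])
        if i > 0:
            # condition the remaining ambiguities on the value just fixed
            a[:i] -= Q[:i, i] / Q[i, i] * (a[i] - z[i])
            Q[:i, :i] -= np.outer(Q[:i, i], Q[i, :i]) / Q[i, i]
    return z

a_float = np.array([1.2, 2.6])
Q = np.array([[1.0, 0.9],
              [0.9, 1.0]])
z_round = integer_rounding(a_float)          # [1., 3.]
z_boot = integer_bootstrapping(a_float, Q)   # [2., 3.] -- correlation pulls the first ambiguity up
```

The example shows why the paper treats these as distinct search strategies: with strongly correlated ambiguities, conditioning changes the fixed integers, while integer least squares would search over all candidates jointly.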
Abstract:
In this paper, the problems of three carrier phase ambiguity resolution (TCAR) and position estimation (PE) are generalized as real-time GNSS data processing problems for a large-scale continuously observing network. In order to describe these problems, a general linear equation system is presented to unify the various geometry-free, geometry-based and geometry-constrained TCAR models, along with state transition equations between observation times. With this general formulation, generalized TCAR solutions are given to cover different real-time GNSS data processing scenarios, together with various simplified integer solutions, such as geometry-free rounding and geometry-based LAMBDA solutions with single- and multiple-epoch measurements. In fact, the various ambiguity resolution (AR) solutions differ in their floating ambiguity estimation and integer ambiguity search processes, but their theoretical equivalence holds under the same observational system models and statistical assumptions. TCAR performance benefits outlined in the data analyses of the recent literature are reviewed, showing profound implications for future GNSS development from both technology and application perspectives.
Abstract:
The focus of this thesis is discretionary work effort, that is, work effort that is voluntary, is above and beyond what is minimally required or normally expected to avoid reprimand or dismissal, and is organisationally functional. Discretionary work effort is an important construct because it is known to affect individual performance as well as organisational efficiency and effectiveness. To optimise organisational performance and ensure their long-term competitiveness and sustainability, firms need to be able to induce their employees to work at or near their peak level. To work at or near their peak level, individuals must be willing to supply discretionary work effort. Thus, managers need to understand the determinants of discretionary work effort. Nonetheless, despite many years of scholarly investigation across multiple disciplines, considerable debate still exists concerning why some individuals supply only minimal work effort whilst others expend effort well above and beyond what is minimally required of them (i.e. they supply discretionary work effort). Even though it is well recognised that discretionary work effort is important for promoting organisational performance and effectiveness, many authors claim that too little is being done by managers to increase the discretionary work effort of their employees. In this research, I have adopted a multi-disciplinary approach towards investigating the role of monetary and non-monetary work environment characteristics in determining discretionary work effort. My central research questions were "What non-monetary work environment characteristics do employees perceive as perks (perquisites) and irks (irksome work environment characteristics)?" and "How do perks, irks and monetary rewards relate to an employee's level of discretionary work effort?" My research took a unique approach in addressing these research questions. 
By bringing together the economics and organisational behaviour (OB) literatures, I identified problems with the current definition and conceptualisations of the discretionary work effort construct. I then developed and empirically tested a more concise and theoretically-based definition and conceptualisation of this construct. In doing so, I disaggregated discretionary work effort to include three facets - time, intensity and direction - and empirically assessed if different classes of work environment characteristics have a differential pattern of relationships with these facets. This analysis involved a new application of a multi-disciplinary framework of human behaviour as a tool for classifying work environment characteristics and the facets of discretionary work effort. To test my model of discretionary work effort, I used a public sector context in which there has been limited systematic empirical research into work motivation. The program of research undertaken involved three separate but interrelated studies using mixed methods. Data on perks, irks, monetary rewards and discretionary work effort were gathered from employees in 12 organisations in the local government sector in Western Australia. Non-monetary work environment characteristics that should be associated with discretionary work effort were initially identified through a review of the literature. Then, a qualitative study explored what work behaviours public sector employees perceive as discretionary and what perks and irks were associated with high and low levels of discretionary work effort. Next, a quantitative study developed measures of these perks and irks. A Q-sort-type procedure and exploratory factor analysis were used to develop the perks and irks measures. Finally, a second quantitative study tested the relationships amongst perks, irks, monetary rewards and discretionary work effort. 
Confirmatory factor analysis was firstly used to confirm the factor structure of the measurement models. Correlation analysis, regression analysis and effect-size correlation analysis were used to test the hypothesised relationships in the proposed model of discretionary work effort. The findings confirmed five hypothesised non-monetary work environment characteristics as common perks and two of three hypothesised non-monetary work environment characteristics as common irks. Importantly, they showed that perks, irks and monetary rewards are differentially related to the different facets of discretionary work effort. The convergent and discriminant validities of the perks and irks constructs as well as the time, intensity and direction facets of discretionary work effort were generally confirmed by the research findings. This research advances the literature in several ways: (i) it draws on the Economics and OB literatures to redefine and reconceptualise the discretionary work effort construct to provide greater definitional clarity and a more complete conceptualisation of this important construct; (ii) it builds on prior research to create a more comprehensive set of perks and irks for which measures are developed; (iii) it develops and empirically tests a new motivational model of discretionary work effort that enhances our understanding of the nature and functioning of perks and irks and advances our ability to predict discretionary work effort; and (iv) it fills a substantial gap in the literature on public sector work motivation by revealing what work behaviours public sector employees perceive as discretionary and what work environment characteristics are associated with their supply of discretionary work effort. Importantly, by disaggregating discretionary work effort this research provides greater detail on how perks, irks and monetary rewards are related to the different facets of discretionary work effort. 
Thus, from a theoretical perspective this research also demonstrates the conceptual meaningfulness and empirical utility of investigating the different facets of discretionary work effort separately. From a practical perspective, identifying work environment factors that are associated with discretionary work effort enhances managers' capacity to tap this valuable resource. This research indicates that to maximise the potential of their human resources, managers need to address perks, irks and monetary rewards. It suggests three different mechanisms through which managers might influence discretionary work effort and points to the importance of training for both managers and non-managers in cultivating positive interpersonal relationships.
What are students' understandings of how digital tools contribute to learning in design disciplines?
Abstract:
Building Information Modelling (BIM) is evolving in the construction industry as a successor to CAD. CAD is mostly a technical tool that conforms to existing industry practices; BIM, however, has the capacity to revolutionise industry practice. Rather than producing representations of design intent, BIM produces an exact virtual prototype of any building that, in an ideal situation, is centrally stored and freely exchanged between the project team, facilitating collaboration and allowing experimentation in design. Exposing design students to this technology through their formal studies allows them to engage with cutting-edge industry practices and to help shape the industry upon their graduation. Since this technology is relatively new to the construction industry, there are no accepted models for how to “teach” BIM effectively at university level. Developing learning models to enable students to make the most of their learning with BIM presents significant challenges to those teaching in the field of design. To date there are also no studies of students' experiences of using this technology. This research reports on the introduction of Building Information Modelling (BIM) software into a second year Bachelor of Design course. This software has the potential to change industry standards through its ability to revolutionise the work practices of those involved in large scale design projects. Students' understandings and experiences of using the software in order to complete design projects as part of their assessment are reported here. In-depth semi-structured interviews with 6 students revealed that students held views of the software that ranged from novice to sophisticated. They had variations in understanding of how the software could be used to complete course requirements, to assist with the design process and in the workplace. They had engaged in limited exploration of the collaborative potential of the software as a design tool. 
Their understanding of the significance of BIM for the workplace was also variable. The results indicate that students are beginning to develop an appreciation for how BIM could aid or constrain the work of designers, but that this appreciation is highly varied and likely to be dependent on the students’ previous experiences of working in a design studio environment. Their range of understandings of the significance of the technology is a reflection of their level of development as designers (they are “novice” designers). The results also indicate that there is a need for subjects in later years of the course that allow students to specialise in the area of digital design and to develop more sophisticated views of the role of technology in the design process. There is also a need to capitalise on the collaborative potential inherent in the software in order to realise its capability to streamline some aspects of the design process. As students become more sophisticated designers we should explore their understanding of the role of technology as a design tool in more depth in order to make recommendations for improvements to teaching and learning practice related to BIM and other digital design tools.
Abstract:
Background: In order to design appropriate environments for performance and learning of movement skills, physical educators need a sound theoretical model of the learner and of processes of learning. In physical education, this type of modelling informs the organization of learning environments and effective and efficient use of practice time. An emerging theoretical framework in motor learning, relevant to physical education, advocates a constraints-led perspective for acquisition of movement skills and game play knowledge. This framework shows how physical educators could use task, performer and environmental constraints to channel acquisition of movement skills and decision making behaviours in learners. From this viewpoint, learners generate specific movement solutions to satisfy the unique combination of constraints imposed on them, a process which can be harnessed during physical education lessons. Purpose: In this paper the aim is to provide an overview of the motor learning approach emanating from the constraints-led perspective, and examine how it can substantiate a platform for a new pedagogical framework in physical education: nonlinear pedagogy. We aim to demonstrate that it is only through theoretically valid and objective empirical work of an applied nature that a conceptually sound nonlinear pedagogy model can continue to evolve and support research in physical education. We present some important implications for designing practices in games lessons, showing how a constraints-led perspective on motor learning could assist physical educators in understanding how to structure learning experiences for learners at different stages, with specific focus on understanding the design of games teaching programmes in physical education, using exemplars from Rugby Union and Cricket. 
Findings: Research evidence from recent studies examining movement models demonstrates that physical education teachers need a strong understanding of sport performance so that task constraints can be manipulated in such a way that information-movement couplings are maintained in a learning environment that is representative of real performance situations. Physical educators should also understand that movement variability may not necessarily be detrimental to learning and could be an important phenomenon prior to the acquisition of a stable and functional movement pattern. We highlight how the nonlinear pedagogical approach is student-centred and empowers individuals to become active learners via a more hands-off approach to learning. Summary: A constraints-based perspective has the potential to provide physical educators with a framework for understanding how performer, task and environmental constraints shape each individual's physical education. Understanding the underlying neurobiological processes present in a constraints-led perspective to skill acquisition and game play can raise awareness among physical educators that teaching is a dynamic 'art' interwoven with the 'science' of motor learning theories.
Resumo:
Research on expertise, talent identification and development has tended to be mono-disciplinary, typically adopting neurogenetic deterministic or environmentalist positions, with an overriding focus on operational issues. In this paper the validity of dualist positions on sport expertise is evaluated. It is argued that, to advance understanding of expertise and talent development, a shift towards a multi-disciplinary and integrative science focus is necessary, along with the development of a comprehensive multi-disciplinary theoretical rationale. Here we elucidate dynamical systems theory as a multi-disciplinary theoretical rationale for capturing how multiple interacting constraints can shape the development of expert performers. This approach suggests that talent development programmes should eschew the notion of common optimal performance models, emphasise the individual nature of pathways to expertise, and identify the range of interacting constraints that impinge on the performance potential of individual athletes, rather than evaluating current performance on physical tests referenced to group norms.
Resumo:
In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work presented includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy by publication, and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis then concludes with a discussion of what has been achieved during the PhD candidature, the potential applications of this research, and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to oscillating drag forces (as would occur in a standing sound wave), and finally their motion on lattice surfaces in the presence of high temperature gradients. We have described in this thesis a number of new models for multi-compartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates, and with what aggregate size distribution. The models are highly analytic: they describe the fragmentation of a link holding multiple bonds using Markov processes chosen to represent different physical situations, and these processes have been analysed using a number of mathematical methods.
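The evaporation of a link holding multiple bonds can be illustrated with a minimal sketch of a pure-death Markov chain. This is not the thesis's actual model: the assumptions here (identical, independent bonds with a single fixed evaporation rate) are illustrative, and the function names are invented for this example.

```python
import random

def mean_link_lifetime(n_bonds, rate):
    """Mean time for a link of n_bonds identical, independent bonds to
    fully evaporate. With i bonds remaining, the next evaporation occurs
    at total rate i*rate, so E[T] = (1/rate) * sum_{i=1..n} 1/i."""
    return sum(1.0 / (i * rate) for i in range(1, n_bonds + 1))

def simulate_link_lifetime(n_bonds, rate, rng=None):
    """Gillespie-style simulation of the same pure-death chain:
    draw an exponential waiting time at the current total rate,
    remove one bond, and repeat until the link is gone."""
    rng = rng or random.Random(0)
    t, n = 0.0, n_bonds
    while n > 0:
        t += rng.expovariate(n * rate)  # waiting time with n bonds active
        n -= 1
    return t
```

Averaging `simulate_link_lifetime` over many runs converges to `mean_link_lifetime`, which is the kind of agreement between analytic and stochastic descriptions the abstract alludes to.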
The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of particle manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that at large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculate the long-term effects of the standing acoustic field on the particles interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields. Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments demonstrating the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface.
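The stated inverse relationship between the critical frequency and the Stokes time can be sketched numerically. The Stokes relaxation time of a solid sphere in a viscous fluid is a standard result; the 1/(2*pi*tau) prefactor below is an assumption made for illustration, since the abstract states only the proportionality, and the function names are invented for this example.

```python
import math

def stokes_time(radius, particle_density, fluid_viscosity):
    """Stokes relaxation time of a solid sphere:
    tau = m / (6*pi*mu*r) = 2 * rho_p * r**2 / (9 * mu)."""
    return 2.0 * particle_density * radius ** 2 / (9.0 * fluid_viscosity)

def critical_frequency(radius, particle_density, fluid_viscosity):
    """Illustrative critical frequency, taken here as 1/(2*pi*tau).
    The abstract states only f_c is proportional to 1/tau, so this
    prefactor is a hypothetical choice."""
    tau = stokes_time(radius, particle_density, fluid_viscosity)
    return 1.0 / (2.0 * math.pi * tau)
```

Because tau scales with the square of the radius, doubling the particle radius quadruples the Stokes time and so quarters this estimate of the critical frequency, capturing the size dependence the abstract describes.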
Typically, theoretical simulations of the effect can be rather time consuming, with supercomputer facilities processing data over days or even weeks. Our alternative approach uses the Fokker-Planck equation to derive a fast numerical method for calculating the effective diffusion constant resulting from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, far faster than was previously achievable.
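The final step described above, solving a diffusion equation with a spatially varying effective diffusion constant by the finite volume method, can be sketched as follows. This is a minimal illustration, assuming a 1D grid, zero-flux boundaries, and an explicit time step; the thesis's actual routine, dimensionality, and boundary conditions are not given in the abstract.

```python
def diffuse_fv(conc, D, dx, dt, steps):
    """Explicit finite-volume update of d(c)/dt = d/dx (D(x) dc/dx) on a
    uniform 1D grid with zero-flux boundaries. conc and D are lists of
    cell-centred values; the face diffusivity is the harmonic mean of
    the two adjacent cells, which conserves mass exactly."""
    c = list(conc)
    n = len(c)
    for _ in range(steps):
        flux = [0.0] * (n + 1)  # flux[i] crosses the face between cells i-1, i
        for i in range(1, n):
            d_face = 2.0 * D[i - 1] * D[i] / (D[i - 1] + D[i])
            flux[i] = -d_face * (c[i] - c[i - 1]) / dx
        # boundary faces flux[0] and flux[n] stay zero (no-flux walls)
        c = [c[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(n)]
    return c
```

Because the update evolves the probability density of particles directly rather than integrating many individual trajectories, a handful of such grid sweeps replaces a large ensemble of stochastic simulations, which is the source of the speed-up the abstract claims.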