959 results for Cellular automata models
Abstract:
Osteocytes are the most abundant cells in human bone tissue. Owing to their unique morphology and location, osteocytes are thought to act as regulators of the bone remodelling process, and are believed to play an important role in the bone mass loss astronauts experience after long-term space missions. There is increasing evidence that an osteocyte’s functions are highly affected by its morphology. However, changes in osteocyte morphology under altered gravity are still not well documented. Several in vitro studies have recently investigated the morphological response of osteocytes to microgravity, in which the cells were cultured on a two-dimensional flat surface for at least 24 hours before the microgravity experiments. Morphology changes in microgravity were then studied by comparing cell area with that of 1g control cells. However, osteocytes in vivo have a more three-dimensional morphology, and both the cell body and the dendritic processes are sensitive to mechanical loading. Round osteocytes have a less stiff cytoskeleton and are more sensitive to mechanical stimulation than cells with a flat morphology. Thus, the relatively flat and spread shape of isolated osteocytes in 2D culture may greatly hamper their sensitivity to mechanical stimuli, and the lack of knowledge of osteocyte morphological characteristics in culture may lead to subjective and incomplete conclusions about how altered gravity affects osteocyte morphology. In this work, empirical models were developed to quantitatively predict the changes in morphology of an osteocyte cell line (MLO-Y4) in culture, and the response of relatively round osteocytes to hyper-gravity stimulation was also investigated.
The morphology changes of MLO-Y4 cells in culture were quantified by measuring cell area and three dimensionless shape features (aspect ratio, circularity and solidity) using widely accepted image analysis software (ImageJ™). MLO-Y4 cells were cultured at low density (5×10³ per well) and the changes in morphology were recorded over 10 hours. Based on the data obtained from the image analysis, empirical models were developed using non-linear regression. The developed empirical models accurately predict the morphology of MLO-Y4 cells at different culture times and can therefore be used as a reference for analysing MLO-Y4 cell morphology changes in various biological/mechanical studies, as necessary. The morphological response of MLO-Y4 cells with a relatively round morphology to a hyper-gravity environment was investigated using a centrifuge. After 2 hours of culture, MLO-Y4 cells were exposed to 20g for 30 min. Changes in the morphology of MLO-Y4 cells were quantitatively analysed by measuring the average cell area and the dimensionless shape factors (aspect ratio, solidity and circularity). In this study, no significant morphology changes were detected in MLO-Y4 cells under the hyper-gravity environment (20g for 30 min) compared with 1g control cells.
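The three dimensionless shape factors used above have standard definitions in image analysis (they are the descriptors ImageJ’s particle analysis reports). A minimal sketch of how they follow from basic particle measurements, with illustrative values rather than measured data:

```python
import math

def shape_factors(area, perimeter, convex_area, major_axis, minor_axis):
    """Dimensionless shape descriptors as conventionally defined in
    image analysis (and reported by ImageJ's particle analysis)."""
    aspect_ratio = major_axis / minor_axis             # 1 for a circle
    circularity = 4 * math.pi * area / perimeter ** 2  # 1 for a perfect circle
    solidity = area / convex_area                      # 1 for a convex outline
    return aspect_ratio, circularity, solidity

# Illustrative values for a roughly round cell (not measured data):
ar, circ, sol = shape_factors(area=400.0, perimeter=75.0, convex_area=420.0,
                              major_axis=24.0, minor_axis=21.0)
```

A rounding cell drives all three descriptors towards 1, which is why they complement raw cell area when tracking spreading over time.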
Abstract:
This presentation discusses topics and issues that connect closely with the Conference Themes and themes in the ARACY Report Card. For example, developing models of public space that are safe, welcoming and relevant to children and young people will impact on their overall wellbeing and may help to prevent many of the tensions occurring in Australia and elsewhere around the world. This area is the subject of ongoing international debate, research and policy formation, relevant to concerns in the ARACY Report Card about children and young people’s health and safety, participation, behaviours and risks and peer and family relationships.
Abstract:
Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model, the Poisson model, the double logarithmic model and the compound model - for detection of insects over a broad range of insect densities. Although the double logarithmic and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
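The probability-of-detection logic behind such models is straightforward to sketch: each candidate distribution supplies the probability p₀ that a single sample contains no insects, and the number of samples required follows from demanding that 1 − p₀ⁿ reach a target detection probability. A sketch for the negative binomial and Poisson cases (parameter values are illustrative, not taken from the paper):

```python
import math

def p_zero_negbin(mean, k):
    """P(a single sample contains no insects) under a negative binomial
    distribution with `mean` insects per sample and dispersion k
    (small k = strongly clumped spatial distribution)."""
    return (1.0 + mean / k) ** (-k)

def p_zero_poisson(mean):
    """Poisson (spatially random) counterpart."""
    return math.exp(-mean)

def samples_needed(p0, target=0.95):
    """Smallest n such that the detection probability 1 - p0**n
    reaches the target."""
    return math.ceil(math.log(1.0 - target) / math.log(p0))

# At the same mean density, clumping (small k) demands more samples:
n_clumped = samples_needed(p_zero_negbin(mean=0.5, k=0.2))
n_random = samples_needed(p_zero_poisson(mean=0.5))
```

This is why the choice of underlying distribution matters so much: a model that underestimates clumping will systematically under-prescribe sample sizes exactly where detection is hardest.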
Abstract:
Mathematical models of mosquito-borne pathogen transmission originated in the early twentieth century to provide insights into how to most effectively combat malaria. The foundations of the Ross–Macdonald theory were established by 1970. Since then, there has been a growing interest in reducing the public health burden of mosquito-borne pathogens and an expanding use of models to guide their control. To assess how theory has changed to confront evolving public health challenges, we compiled a bibliography of 325 publications from 1970 through 2010 that included at least one mathematical model of mosquito-borne pathogen transmission and then used a 79-part questionnaire to classify each of 388 associated models according to its biological assumptions. As a composite measure to interpret the multidimensional results of our survey, we assigned a numerical value to each model that measured its similarity to 15 core assumptions of the Ross–Macdonald model. Although the analysis illustrated a growing acknowledgement of geographical, ecological and epidemiological complexities in modelling transmission, most models during the past 40 years closely resemble the Ross–Macdonald model. Modern theory would benefit from an expansion around the concepts of heterogeneous mosquito biting, poorly mixed mosquito-host encounters, spatial heterogeneity and temporal variation in the transmission process.
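For context, the Ross–Macdonald basic reproduction number can be sketched in one common textbook parameterization (symbol conventions vary across the surveyed literature, so this is representative rather than the form used by any particular model in the bibliography):

```python
import math

def r0_ross_macdonald(m, a, b, c, g, n, r):
    """Basic reproduction number R0 in one common parameterization:
    m mosquitoes per host, a bites per mosquito per day, b and c the
    mosquito-to-host and host-to-mosquito transmission efficiencies,
    g mosquito death rate per day, n extrinsic incubation period (days),
    r host recovery rate per day. exp(-g*n) is the probability that a
    mosquito survives the incubation period."""
    return (m * a ** 2 * b * c * math.exp(-g * n)) / (g * r)

# Illustrative (made-up) parameter values:
r0 = r0_ross_macdonald(m=10, a=0.3, b=0.5, c=0.5, g=0.1, n=10, r=0.01)
```

The biting rate enters squared because each transmission cycle requires two bites, one to infect the mosquito and one to infect a new host, which is one reason the heterogeneous-biting expansion advocated above matters so much.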
Abstract:
Lean strategies have been developed to eliminate or reduce manufacturing waste and thus improve operational efficiency in manufacturing processes. However, implementing lean strategies requires a large amount of resources and, in practice, manufacturers encounter difficulties in selecting appropriate lean strategies within their resource constraints. There is currently no systematic methodology available for selecting appropriate lean strategies within a manufacturer's resource constraints. In the lean transformation process, it is also critical to measure the current and desired leanness levels in order to clearly evaluate lean implementation efforts. Despite the fact that many lean strategies are utilized to reduce or eliminate manufacturing waste, little effort has been directed towards properly assessing the leanness of manufacturing organizations. In practice, a single or specific group of metrics (either qualitative or quantitative) will only partially measure the overall leanness. Existing leanness assessment methodologies do not offer a comprehensive evaluation method, integrating both quantitative and qualitative lean measures into a single quantitative value for measuring the overall leanness of an organization. This research aims to develop mathematical models and a systematic methodology for selecting appropriate lean strategies and evaluating the leanness levels in manufacturing organizations. Mathematical models were formulated and a methodology was developed for selecting appropriate lean strategies within manufacturers' limited amount of available resources to reduce their identified wastes. A leanness assessment model was developed by using the fuzzy concept to assess the leanness level and to recommend an optimum leanness value for a manufacturing organization. In the proposed leanness assessment model, both quantitative and qualitative input factors have been taken into account. 
Based on programs developed in MATLAB and C#, a decision support tool (DST) was developed that allows decision makers to select lean strategies and evaluate the leanness value using the proposed models and methodology, and hence sustain lean implementation efforts. A case study was conducted to demonstrate the effectiveness of the proposed models and methodology. The case study results suggested that, of the 10 wastes identified, the case organization (ABC Limited) is able to improve a maximum of six wastes at the selected workstation within its resource limitations. The selected wastes are: unnecessary motion, setup time, unnecessary transportation, inappropriate processing, work in process and raw material inventory; the suggested lean strategies are: 5S, Just-In-Time, the Kanban System, the Visual Management System (VMS), Cellular Manufacturing, a Standard Work Process using method-time measurement (MTM), and Single Minute Exchange of Die (SMED). From the suggested lean strategies, the impact of 5S was demonstrated by measuring the leanness level in two different situations at ABC. MTM was then suggested as a standard work process for further improvement of the current leanness value. The initial status of the organization showed a leanness value of 0.12. By applying 5S, the leanness level improved significantly to 0.19, and simulation of MTM as a standard work method showed that the leanness value could be improved to 0.31. The optimum leanness value for ABC was calculated to be 0.64. These leanness values provided a quantitative indication of the impact of improvement initiatives on the overall leanness level of the case organization. Sensitivity analysis and a t-test were also performed to validate the proposed model. This research advances the current knowledge base by developing mathematical models and methodologies to overcome the lean strategy selection and leanness assessment problems.
By selecting appropriate lean strategies, a manufacturer can better prioritize implementation efforts and resources to maximize the benefits of implementing lean strategies in their organization. The leanness index is used to evaluate an organization's current (before lean implementation) leanness state against the state after lean implementation and to establish benchmarking (the optimum leanness state). Hence, this research provides a continuous improvement tool for a lean manufacturing organization.
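The abstract does not specify the fuzzy aggregation itself, but the core idea of collapsing heterogeneous lean measures into a single value in [0, 1] can be loosely illustrated with a weighted aggregation (metric names, scores and weights below are entirely hypothetical, and a weighted average is a crude stand-in for the thesis's fuzzy model):

```python
def leanness_index(scores, weights):
    """Collapse normalised lean measures (each in [0, 1]) into one
    leanness value via a weighted average - a simplified stand-in for
    the fuzzy aggregation developed in the thesis."""
    total = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total

# Hypothetical before/after-5S scores for three wastes:
before = leanness_index({"setup": 0.10, "motion": 0.15, "wip": 0.10},
                        {"setup": 2.0, "motion": 1.0, "wip": 1.0})
after = leanness_index({"setup": 0.30, "motion": 0.25, "wip": 0.15},
                       {"setup": 2.0, "motion": 1.0, "wip": 1.0})
```

Whatever the aggregation, the single-number output is what lets the before, after and optimum states (0.12, 0.19/0.31 and 0.64 in the case study) be compared directly.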
Abstract:
Cell-based therapy is considered a promising approach to achieving predictable periodontal regeneration. In this study, the regenerative potential of cell sheets derived from different parts of the periodontium (gingival connective tissue, alveolar bone and periodontal ligament) was investigated in an athymic rat periodontal defect model. Periodontal ligament cells (PDLC), alveolar bone cells (ABC) and gingival margin-derived cells (GMC) were obtained from human donors. The osteogenic potential of the primary cultures was demonstrated in vitro. Cell sheets supported by a calcium phosphate coated melt electrospun polycaprolactone (CaP-PCL) scaffold were transplanted to denuded root surfaces in surgically created periodontal defects, and allowed to heal for 1 and 4 weeks. The CaP-PCL scaffold alone was able to promote alveolar bone formation within the defect after 4 weeks. The addition of ABC and PDLC sheets resulted in significant periodontal attachment formation. The GMC sheets did not promote periodontal regeneration on the root surface and inhibited bone formation within the CaP-PCL scaffold. In conclusion, the combination of either PDLC or ABC sheets with a CaP-PCL scaffold could promote periodontal regeneration, but ABC sheets were not as effective as PDLC sheets in promoting new attachment formation.
Abstract:
Business models to date have remained the creation of management; however, the authors believe that designers should be critically approaching, challenging and creating new business models as part of their practice. This belief heralds a new era in which business model constructs become the design brief of the future and fuel design and innovation to work together at the strategic level of an organisation. Innovation can no longer rely on technology and R&D alone but must incorporate business models. Business model innovation has become a strong source of competitive advantage, as firms choose to compete not only on price but through the delivery of a unique value proposition, in order to engage with customers and to differentiate themselves within a competitive market. The purpose of this paper is to explore and investigate business model design through various product and/or service deliveries, and to identify common drivers that are catalysts for business model innovation. Fifty companies spanning a diverse range of criteria were chosen to evaluate and compare commonalities and differences in the design of their business models. The analysis of these business cases uncovered commonalities in the key strategic drivers behind these innovative business models. Five meta models were derived from this content analysis: Customer Led, Cost Driven, Resource Led, Partnership Led and Price Led. These five foci provide a designer with a starting point from which quick prototypes of new business models can be created. The implications of this research suggest there is no 'one right' model; rather, through experimentation, the generation of many unique and diverse concepts can result in greater possibilities for future innovation and sustained competitive advantage.
Abstract:
This dissertation seeks to define and classify potential forms of Nonlinear structure and explore the possibilities they afford for the creation of new musical works. It provides the first comprehensive framework for the discussion of Nonlinear structure in musical works and provides a detailed overview of the rise of nonlinearity in music during the 20th century. Nonlinear events are shown to emerge through significant parametrical discontinuity at the boundaries between regions of relatively strong internal cohesion. The dissertation situates Nonlinear structures in relation to linear structures and unstructured sonic phenomena and provides a means of evaluating Nonlinearity in a musical structure through the consideration of the degree to which the structure is integrated, contingent, compressible and determinate as a whole. It is proposed that Nonlinearity can be classified within a three-dimensional space described by three continua: the temporal continuum, encompassing sequential and multilinear forms of organization; the narrative continuum, encompassing processual, game structure and developmental narrative forms; and the referential continuum, encompassing stylistic allusion, adaptation and quotation. The use of spectrograms of recorded musical works is proposed as a means of evaluating Nonlinearity in a musical work through the visual representation of parametrical divergence in pitch, duration, timbre and dynamics over time. Spectral and structural analysis of repertoire works is undertaken as part of an exploration of musical nonlinearity and the compositional and performative features that characterize it. The contribution of cultural, ideological, scientific and technological shifts to the emergence of Nonlinearity in music is discussed and a range of compositional factors that contributed to the emergence of musical Nonlinearity is examined.
The evolution of notational innovations from the mobile score to the screen score is plotted and a novel framework for the discussion of these forms of musical transmission is proposed. A computer coordinated performative model is discussed, in which a computer synchronises screening of notational information, provides temporal coordination of the performers through click-tracks or similar methods and synchronises the audio processing and synthesized elements of the work. It is proposed that such a model constitutes a highly effective means of realizing complex Nonlinear structures. A creative folio comprising 29 original works that explore nonlinearity is presented, discussed and categorised utilising the proposed classifications. Spectrograms of these works are employed where appropriate to illustrate the instantiation of parametrically divergent substructures and examples of structural openness through multiple versioning.
Abstract:
Parallel interleaved converters are finding more applications every day; for example, they are frequently used for voltage regulator modules (VRMs) on PC motherboards, mainly to obtain better transient response. Parallel interleaved converters can have their inductances uncoupled, directly coupled or inversely coupled, each of which suits different applications with associated advantages and disadvantages. Coupled systems offer more control over converter features such as ripple currents, inductance volume and transient response. To gain an intuitive understanding of which type of parallel interleaved converter, what amount of coupling, what number of levels and how much inductance should be used for a given application, a simple equivalent model is needed. Since all phases of an interleaved converter are nominally identical, the equivalent model is simply a separate inductance that is common to all phases. Without this simplification, the design of a coupled system is quite daunting. Designing a coupled system involves solving and understanding the RMS currents at the input, in each individual phase (or cell) and at the output. A procedure using this equivalent model and a small amount of modulo arithmetic is detailed.
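The modulo arithmetic referred to above can be illustrated by counting how many of the N phase-shifted cells conduct at any instant; when N·D is an integer the count never changes, which is the classic interleaving ripple-cancellation condition. A sketch only (not the paper's full RMS derivation):

```python
def phases_on(N, D, t, T=1.0):
    """Number of cells conducting at time t in an N-phase interleaved
    converter with duty cycle D, where cell k is delayed by k*T/N.
    This is the modulo arithmetic behind the input/phase/output RMS
    current calculations."""
    count = 0
    for k in range(N):
        local = (t - k * T / N) % T   # position within cell k's cycle
        if local < D * T:             # cell k's switch is on
            count += 1
    return count

# With N = 4 and D = 0.5, N*D is an integer, so the on-count is constant
# over the whole period - the ripple-cancellation condition:
counts = {phases_on(4, 0.5, t / 100.0) for t in range(100)}
```

Summing the phase currents with these on/off windows, rather than analysing each coupled inductor directly, is what makes the equivalent-model approach tractable.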
Abstract:
This thesis concerns the mathematical model of moving fluid interfaces in a Hele-Shaw cell: an experimental device in which fluid flow is studied by sandwiching the fluid between two closely separated plates. Analytic and numerical methods are developed to gain new insights into interfacial stability and bubble evolution, and the influence of different boundary effects is examined. In particular, the properties of the velocity-dependent kinetic undercooling boundary condition are analysed, with regard to the selection of only discrete possible shapes of travelling fingers of fluid, the formation of corners on the interface, and the interaction of kinetic undercooling with the better known effect of surface tension. Explicit solutions to the problem of an expanding or contracting ring of fluid are also developed.
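For orientation, the one-phase version of this moving boundary problem with both boundary effects present is commonly written in non-dimensional form as follows (a standard formulation from the Hele-Shaw literature; the notation is not taken from the thesis itself):

```latex
% Pressure is harmonic in the fluid region \Omega(t):
\nabla^2 p = 0 \quad \text{in } \Omega(t),
% the interface moves with the fluid (kinematic condition):
v_n = -\frac{\partial p}{\partial n} \quad \text{on } \partial\Omega(t),
% and the dynamic condition combines surface tension (coefficient
% \gamma, curvature \kappa) with kinetic undercooling (coefficient c):
p = \gamma \kappa + c\, v_n \quad \text{on } \partial\Omega(t).
```

Setting c = 0 recovers the classical surface-tension problem, while γ = 0 isolates the kinetic-undercooling regime whose finger-selection and corner-formation properties are analysed in the thesis.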
Abstract:
This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. Literature-based discovery has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been amply demonstrated in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks including predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fitness for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
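Random indexing is one well-known fixed-dimension distributional model (the abstract does not name the four models, so take this as a representative example of the fixed-dimension class). The efficiency argument is visible in the data structures: context vectors live in a fixed dimensionality regardless of vocabulary growth, avoiding a |V| × |V| co-occurrence matrix:

```python
import random
from collections import defaultdict

DIM = 512  # representation size stays fixed as the vocabulary grows

def index_vector(term, nonzero=8):
    """Sparse ternary random index vector for a term. hash() is only
    stable within one interpreter run; a real system would use a
    stable hash of the term."""
    rng = random.Random(hash(term))
    vec = [0] * DIM
    for pos in rng.sample(range(DIM), nonzero):
        vec[pos] = rng.choice((-1, 1))
    return vec

def train(corpus, window=2):
    """Accumulate each word's context vector by summing the index
    vectors of its neighbours: O(tokens * window) update cost and
    O(vocabulary * DIM) memory - no |V| x |V| co-occurrence matrix."""
    ctx = defaultdict(lambda: [0] * DIM)
    for sent in corpus:
        for i, word in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if i != j:
                    iv = index_vector(sent[j])
                    ctx[word] = [a + b for a, b in zip(ctx[word], iv)]
    return ctx

vectors = train([["fish", "oil", "raynaud"], ["fish", "oil", "blood"]])
```

Indirect (Swanson-style) associations then fall out of vector similarity between terms that never co-occur directly but share context terms.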
Abstract:
In biology, we frequently observe different species existing within the same environment. For example, there are many cell types in a tumour, or different animal species may occupy a given habitat. In modelling interactions between such species, we often make use of the mean field approximation, whereby spatial correlations between the locations of individuals are neglected. Whilst this approximation holds in certain situations, this is not always the case, and care must be taken to ensure the mean field approximation is only used in appropriate settings. In circumstances where the mean field approximation is unsuitable we need to include information on the spatial distributions of individuals, which is not a simple task. In this paper we provide a method that overcomes many of the failures of the mean field approximation for an on-lattice volume-excluding birth-death-movement process with multiple species. We explicitly take into account spatial information on the distribution of individuals by including partial differential equation descriptions of lattice site occupancy correlations. We demonstrate how to derive these equations for the multi-species case, and show results specific to a two-species problem. We compare averaged discrete results to both the mean field approximation and our improved method which incorporates spatial correlations. We note that the mean field approximation fails dramatically in some cases, predicting very different behaviour from that seen upon averaging multiple realisations of the discrete system. In contrast, our improved method provides excellent agreement with the averaged discrete behaviour in all cases, thus providing a more reliable modelling framework. Furthermore, our method is tractable as the resulting partial differential equations can be solved efficiently using standard numerical techniques.
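To make the failure mode concrete, a minimal single-species sketch (update rules and parameters are illustrative only, not the paper's two-species scheme) comparing the mean-field prediction with an averaged lattice simulation:

```python
import random

def mean_field(c0, lam, mu, t_end, dt=0.01):
    """Euler integration of the mean-field equation for one
    volume-excluding species, dC/dt = lam*C*(1 - C) - mu*C, which
    neglects spatial correlations entirely."""
    c, t = c0, 0.0
    while t < t_end:
        c += dt * (lam * c * (1.0 - c) - mu * c)
        t += dt
    return c

def lattice_average(c0, lam, mu, t_end, L=200, reps=10):
    """Birth-death process with volume exclusion on a 1D periodic
    lattice, averaged over realisations. Offspring are placed on a
    nearest-neighbour site, so spatial correlations build up."""
    total = 0.0
    for _ in range(reps):
        occ = [random.random() < c0 for _ in range(L)]
        t = 0.0
        while t < t_end:
            t += 1.0 / (L * (lam + mu))   # mean time per attempted event
            i = random.randrange(L)
            if not occ[i]:
                continue
            if random.random() < mu / (lam + mu):
                occ[i] = False                       # death
            else:
                j = (i + random.choice((-1, 1))) % L
                if not occ[j]:                       # blocked by exclusion
                    occ[j] = True                    # birth
        total += sum(occ) / L
    return total / reps

# Mean field predicts a steady state C* = 1 - mu/lam; clustering of
# offspring on the lattice typically pulls the true average below this.
c_mf = mean_field(0.05, 1.0, 0.1, t_end=50.0)
```

The correction the paper develops amounts to evolving equations for pairwise occupancy correlations alongside the densities, rather than assuming neighbouring sites are independent as the mean-field step above does.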
Abstract:
Electricity is the cornerstone of modern life. It is essential to economic stability and growth, jobs and improved living standards. Electricity is also the fundamental ingredient for a dignified life; it is the source of such basic human requirements as cooked food, a comfortable living temperature and essential health care. For these reasons, it is unimaginable that today's economies could function without electricity and the modern energy services that it delivers. Somewhat ironically, however, the current approach to electricity generation also contributes to two of the gravest and most persistent problems threatening the livelihood of humans. These problems are anthropogenic climate change and sustained human poverty. To address these challenges, the global electricity sector must reduce its reliance on fossil fuel sources. In this context, the object of this research is twofold. Initially it is to consider the design of the Renewable Energy (Electricity) Act 2000 (Cth) (Renewable Electricity Act), which represents Australia's primary regulatory approach to increase the production of renewable sourced electricity. This analysis is conducted by reference to the regulatory models that exist in Germany and Great Britain. Within this context, this thesis then evaluates whether the Renewable Electricity Act is designed effectively to contribute to a more sustainable and dignified electricity generation sector in Australia. On the basis of the appraisal of the Renewable Electricity Act, this thesis contends that while certain aspects of the regulatory regime have merit, ultimately its design does not represent an effective and coherent regulatory approach to increase the production of renewable sourced electricity. In this regard, this thesis proposes a number of recommendations to reform the existing regime. These recommendations are not intended to provide instantaneous or simple solutions to the current regulatory regime. 
Instead, the purpose of these recommendations is to establish the legal foundations for an effective regulatory regime that is designed to increase the production of renewable sourced electricity in Australia in order to contribute to a more sustainable and dignified approach to electricity production.
Abstract:
The nanostructured surface of biomaterials plays an important role in improving their in vitro cellular bioactivity as well as stimulating in vivo tissue regeneration. Inspired by the mussel’s adhesive versatility, which is thought to be due to the plaque–substrate interface being rich in 3,4-dihydroxy-L-phenylalanine (DOPA) and lysine amino acids, in this study we developed a self-assembly method to prepare a uniform calcium phosphate (Ca-P)/polydopamine composite nanolayer on the surface of β-tricalcium phosphate (β-TCP) bioceramics by soaking β-TCP bioceramics in Tris–dopamine solution. It was found that the addition of dopamine, the reaction temperature and the reaction time are three key factors inducing the formation of a uniform Ca-P/polydopamine composite nanolayer. The formation mechanism of the Ca-P/polydopamine composite nanolayer involves two important steps: (i) the addition of dopamine to Tris–HCl solution decreases the pH value and accelerates Ca and P ionic dissolution from the crystal boundaries of the β-TCP ceramics; (ii) dopamine is polymerized to form a self-assembled polydopamine film and, at the same time, nanosized Ca-P particles are mineralized with the assistance of polydopamine, with the formation of polydopamine occurring simultaneously with Ca-P mineralization (the formation of nanosized calcium phosphate particles), until finally a self-assembled Ca-P/polydopamine composite nanolayer forms on the surface of the β-TCP ceramics. Furthermore, the formed self-assembled Ca-P/polydopamine composite nanolayer significantly enhances the surface roughness and hydrophilicity of β-TCP ceramics, and stimulates the attachment, proliferation, alkaline phosphatase (ALP) activity and bone-related gene expression (ALP, OCN, COL1 and Runx2) of human bone marrow stromal cells.
Our results suggest that the preparation of self-assembled Ca-P/polydopamine composite nanolayers is a viable method to modify the surface of biomaterials by significantly improving their surface physicochemical properties and cellular bioactivity for bone regeneration application.
Abstract:
Introduction. The purpose of this chapter is to address the question raised in the chapter title: specifically, how can models of motor control help us understand low back pain (LBP)? Several classes of models have been used in the past for studying spinal loading, stability and risk of injury (see Reeves and Cholewicki (2003) for a review of past modelling approaches), but for the purpose of this chapter we will focus primarily on models used to assess motor control and its effect on spine behaviour. This chapter consists of four sections. The first section discusses why a shift in modelling approaches is needed to study motor control issues. We will argue that the current approach to studying the spine system is limited and not well suited to assessing motor control issues related to spine function and dysfunction. The second section explores how models can be used to gain insight into how the central nervous system (CNS) controls the spine. This segues nicely into the third section, which addresses how models of motor control can be used in the diagnosis and treatment of LBP. Finally, the last section deals with the issue of model verification and validity. This issue is important because modelling accuracy is critical for obtaining useful insight into the behaviour of the system being studied. This chapter is not intended to be a critical review of the literature, but rather to capture some of the discussion raised during the 2009 Spinal Control Symposium, with some elaboration on certain issues. Readers interested in more details are referred to the cited publications.