Abstract:
This thesis is a comparative study of the modelling of the mechanical behaviour of the F-actin cytoskeleton, an important structural component of living cells. A new granular model of the F-actin cytoskeleton was developed based on the concept of multiscale modelling. This framework overcomes the difficulties encountered in conventional continuum-mechanics models of the cytoskeleton, as well as the computational cost of all-atom molecular dynamics simulation. The thermostat algorithm was further modified to better predict the thermodynamic properties of the F-actin cytoskeleton in the model. The multiscale modelling framework was then applied to explain the physical mechanisms by which the cytoskeleton responds to external mechanical loads.
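The role a thermostat plays in such coarse-grained simulations can be illustrated with the textbook velocity-rescaling scheme below. This is a generic sketch, not the modified algorithm developed in the thesis, and all numerical values are invented.

```python
import math
import random

def rescale_velocities(velocities, target_T, mass=1.0, kB=1.0):
    """Simple velocity-rescaling thermostat: scale all velocities so that
    the kinetic temperature matches target_T. A textbook scheme standing
    in for the modified thermostat described in the thesis."""
    n = len(velocities)
    ke = 0.5 * mass * sum(v * v for v in velocities)
    current_T = 2 * ke / (n * kB)   # 1-D equipartition: <KE> = n*kB*T/2
    lam = math.sqrt(target_T / current_T)
    return [lam * v for v in velocities]

# Draw 1000 particle velocities that are initially "too hot", then rescale.
rng = random.Random(1)
v = [rng.gauss(0, 2.0) for _ in range(1000)]
v = rescale_velocities(v, target_T=1.0)
```

After the call, the kinetic temperature of the ensemble equals the target exactly, which is the defining property of this family of thermostats.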
Abstract:
Vehicle speed is an important attribute for analysing the utility of a transport mode, and the speed relationship between multiple modes of transport is of interest to traffic planners and operators. This paper quantifies the relationship between bus speed and average car speed by integrating Bluetooth data and Transit Signal Priority data from the urban network in Brisbane, Australia. The method proposed in this paper is the first of its kind to relate bus speed and average car speed by integrating multi-source traffic data in a corridor-based method. Three transferable regression models are proposed, relating average car speed to the speed of not-in-service buses, in-service buses during peak periods, and in-service buses during off-peak periods. The models are cross-validated and the interrelationships are significant.
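As a minimal illustration of the kind of regression relationship the paper estimates, the sketch below fits an ordinary least squares line relating bus speed to average car speed. The data and the resulting coefficients are invented for illustration, not taken from the Brisbane study.

```python
def fit_simple_regression(x, y):
    """Ordinary least squares for car_speed = a + b * bus_speed.
    Returns the intercept a and slope b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Toy observations (km/h): car speeds sit a constant 10 km/h above bus speeds.
bus = [20.0, 25.0, 30.0, 35.0]
car = [30.0, 35.0, 40.0, 45.0]
a, b = fit_simple_regression(bus, car)  # a = 10.0, b = 1.0
```

A transferable model in the paper's sense would be one whose fitted a and b remain stable when cross-validated on corridors not used for fitting.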
Abstract:
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. The approach models each action with a Gaussian mixture trained on robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
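The windowed classify-then-merge scheme can be sketched as follows, with a single Gaussian per action standing in for the Gaussian mixtures and a one-dimensional toy feature per frame. All names, window sizes, and values are assumptions for illustration.

```python
import math

def gaussian_logpdf(x, mean, var):
    # Log-density of a univariate Gaussian; stands in for a full mixture.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify_windows(frames, models, win=4, step=2):
    """Score overlapping temporal windows against per-action models, then
    merge by per-frame voting, mirroring the scheme described above."""
    votes = [dict() for _ in frames]
    for start in range(0, len(frames) - win + 1, step):
        window = frames[start:start + win]
        # Pick the action whose model gives the window the highest likelihood.
        best = max(models, key=lambda a: sum(
            gaussian_logpdf(x, *models[a]) for x in window))
        for i in range(start, start + win):
            votes[i][best] = votes[i].get(best, 0) + 1
    return [max(v, key=v.get) for v in votes]

# Toy sequence: low-valued frames ("walk") followed by high-valued ("wave").
frames = [0.1, 0.2, 0.0, 0.1, 0.9, 1.1, 1.0, 0.8]
models = {"walk": (0.1, 0.05), "wave": (1.0, 0.05)}  # (mean, variance)
labels = classify_windows(frames, models)
```

Overlapping windows mean each frame receives several votes, which smooths the boundary between adjacent actions without any dynamic programming.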
Abstract:
Many nations are highlighting the need for a renaissance in the mathematical sciences as essential to the well-being of all citizens (e.g., Australian Academy of Science, 2006; 2010; The National Academies, 2009). Indeed, the first recommendation of The National Academies’ Rising Above the Gathering Storm (2007) was to vastly improve K–12 science and mathematics education. The subsequent report, Rising Above the Gathering Storm Two Years Later (2009), highlighted again the need to target mathematics and science from the earliest years of schooling: “It takes years or decades to build the capability to have a society that depends on science and technology . . . You need to generate the scientists and engineers, starting in elementary and middle school” (p. 9). Such pleas reflect the rapidly changing nature of the problem solving and reasoning needed in today’s world, beyond the classroom. As The National Academies (2009) reported, “Today the problems are more complex than they were in the 1950s, and more global. They’ll require a new educated workforce, one that is more open, collaborative, and cross-disciplinary” (p. 19). The implications for the problem solving experiences we implement in schools are far-reaching. In this chapter, I consider problem solving and modelling in the primary school, beginning with the need to rethink the experiences we provide in the early years. I argue for a greater awareness of the learning potential of young children and the need to provide stimulating learning environments. I then focus on data modelling as a powerful means of advancing children’s statistical reasoning abilities, which they increasingly need as they navigate their data-drenched world.
Abstract:
Stations on Bus Rapid Transit (BRT) lines ordinarily control line capacity because they act as bottlenecks. At stations with passing lanes, congestion may occur when buses manoeuvring into and out of the platform stopping lane interfere with bus flow, or when a queue of buses forms upstream of the station and blocks inflow. We contend that, as bus inflow to the station area approaches capacity, queuing becomes excessive in a manner similar to a minor movement at an unsignalized intersection. This analogy was used to treat BRT station operation and to analyze the relationship between station queuing and capacity. We conducted microscopic simulation to study the operating characteristics of the station under near-steady-state conditions through the output variables of capacity, degree of saturation, and queuing. In the first of two stages, a mathematical model of the potential capacity of an all-stopping-bus operation with bus-to-bus interference was developed and validated. In the second, a mathematical model estimating the relationship between average queue and degree of saturation was developed and calibrated for a specified range of controlled scenarios of the mean and coefficient of variation of dwell time.
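The analogy to a minor movement at an unsignalized intersection implies queues that grow sharply as the degree of saturation approaches one. The sketch below uses the textbook M/M/1 queueing result L_q = x^2 / (1 - x) to illustrate that shape; it is a generic stand-in, not the calibrated model developed in the paper.

```python
def mean_queue_mm1(degree_of_saturation):
    """Mean queue length for an M/M/1 queue, L_q = x^2 / (1 - x), used
    here as a simple proxy for the queue--saturation relationship at a
    BRT station. The paper's calibrated model will differ in detail."""
    x = degree_of_saturation
    if not 0 <= x < 1:
        raise ValueError("steady state requires 0 <= x < 1")
    return x ** 2 / (1 - x)

queues = {x: mean_queue_mm1(x) for x in (0.5, 0.8, 0.9, 0.95)}
```

The key qualitative feature, shared by such models, is the blow-up near saturation: the queue at x = 0.95 is an order of magnitude longer than at x = 0.5.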
Abstract:
This study focuses on understanding why individual experiences of HIV infection are so diverse, especially with regard to the latency period. The challenge is to determine what assumptions about the nature of antigenic invasion and diversity can be modelled, tested and argued plausibly. To investigate this, an agent-based approach is used to extract high-level behaviour, which cannot be described analytically, from a set of interaction rules at the cellular level. A prototype model encompasses local variation in the baseline properties contributing to the individual disease experience, embedded in a network which mimics the chain of lymphatic nodes. Dealing with massively multi-agent systems requires major computational effort; however, parallelisation methods are a natural consequence and advantage of the multi-agent approach, and these are implemented using the MPI library.
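A stripped-down version of the agent-based idea: cells on a ring follow a local infection and clearance rule, and population-level behaviour emerges only from repeated application of that rule. The topology, rates, and rule set here are illustrative assumptions, far simpler than the thesis's lymphatic-node network.

```python
import random

def step(states, infect_p, clear_p, rng):
    """One synchronous update of a ring of cells: a healthy cell with an
    infected neighbour becomes infected with probability infect_p, and an
    infected cell is cleared with probability clear_p. A toy rule set."""
    n = len(states)
    new = list(states)
    for i, s in enumerate(states):
        neighbours = (states[(i - 1) % n], states[(i + 1) % n])
        if s == "healthy" and "infected" in neighbours:
            if rng.random() < infect_p:
                new[i] = "infected"
        elif s == "infected" and rng.random() < clear_p:
            new[i] = "healthy"
    return new

# Seed one infected cell and run the local rule repeatedly.
rng = random.Random(0)
states = ["healthy"] * 20
states[10] = "infected"
for _ in range(30):
    states = step(states, infect_p=0.5, clear_p=0.1, rng=rng)
```

Because each cell touches only its neighbours, the update decomposes naturally across processes, which is the property the thesis exploits with MPI.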
Abstract:
The field of epigenetics looks at changes in chromosomal structure that affect gene expression without altering the DNA sequence. A large-scale modelling project to better understand these mechanisms is gaining momentum. Early advances in genetics led to the all-genetic paradigm: phenotype (an organism's characteristics/behaviour) is determined by genotype (its genetic make-up). This was later amended and expressed by the well-known formula P = G + E, encompassing the notion that the visible characteristics of a living organism (the phenotype, P) are a combination of hereditary genetic factors (the genotype, G) and environmental factors (E). However, this formula fails to explain why, in diseases such as schizophrenia, we still observe differences between identical twins. Furthermore, the identification of environmental factors (such as smoking and air quality for lung cancer) is relatively rare. The formula also fails to explain cell differentiation from a single fertilized cell. In the wake of early work by Waddington, more recent results have emphasized that the expression of the genotype can be altered without any change in the DNA sequence. This phenomenon has been tagged as epigenetics. To form the chromosome, DNA strands wrap around nucleosomes, which are clusters of nine proteins (histones), as detailed in Figure 1. Epigenetic mechanisms involve inherited alterations in these two structures, e.g. through attachment of a functional group (methyl, acetyl or phosphate) to the amino acids. These 'stable alterations' arise during development and cell proliferation and persist through cell division. While the information within the genetic material is not changed, the instructions for its assembly and interpretation may be. Modelling this new paradigm, P = G + E + EpiG, is the object of our study.
Abstract:
The three phases of the macroscopic evolution of the HIV infection are well known, but it is still difficult to understand how the cellular-level interactions come together to create this characteristic pattern and, in particular, why there are such differences in individual responses. An 'agent-based' approach is chosen as a means of inferring high-level behaviour from a small set of interaction rules at the cellular level. Here the emphasis is on cell mobility and viral mutations.
Abstract:
This thesis investigated the complexity of busway operation with stopping and non-stopping buses, using field data and microscopic simulation modelling. The proposed approach led to significant recommendations to transit authorities for achieving the most practicable system capacity on existing and new busways. The empirical equations developed in this research, together with the newly introduced analysis methods, will be ideal tools for transit planners seeking optimal busway reliability.
Abstract:
In the finite element modelling of structural frames, external loads such as wind loads, dead loads and imposed loads usually act along the elements rather than at the nodes only. Conventionally, when an element is subjected to such transverse element loads, they are converted to nodal forces acting at the ends of the element by either the lumped or the consistent load approach. This conversion matters especially for first- and second-order elastic behaviour, to which steel structures, and thin-walled steel structures in particular, are prone, whereas stocky element sections are generally governed by inelastic behaviour. Accurate first- and second-order elastic displacement solutions along a loaded element are therefore crucial, but they cannot be obtained by the lumped or the consistent nodal load method alone, because neither enforces equilibrium along the element in the finite element formulation; this shortfall can impair the assessment of the structural safety of steel structures in particular. An element load method that accounts for element loads non-linearly is therefore required. If accurate displacement solutions for first- and second-order elastic behaviour are sought from a sophisticated non-linear element stiffness formulation, a separate prescribed stiffness matrix would ordinarily be needed for each of the many possible transverse loading patterns. To circumvent this shortcoming, the present paper proposes a numerical technique that includes transverse element loading in the non-linear stiffness formulation without numerous prescribed stiffness matrices, and that predicts structural responses incorporating both the first-order effect of element loads and the second-order coupling between the transverse load and the axial force in the element.
This paper shows that the principle of superposition can be applied to derive a generalized stiffness formulation for the element load effect, so that the form of the stiffness matrix remains unchanged across loading patterns; only the load magnitudes (the element load coefficients) need to be adjusted in the stiffness formulation, and the non-linear effect of element loading is then captured by updating these coefficients through the non-linear solution procedure. In principle, the element load distribution is converted to a single load magnitude at mid-span, which provides the initial perturbation that triggers the member bowing effect due to the transverse element loads. This conversion sacrifices the detail of the load distribution away from mid-span, so the load-deflection behaviour elsewhere may be less accurate than at mid-span, but the discrepancy is shown to be trivial. This novelty yields a very useful generalised stiffness formulation for a single higher-order element under arbitrary transverse loading patterns. A further contribution of the paper is the shift from purely nodal response (system analysis) to both nodal and element response (sophisticated element formulation): in the conventional finite element method, for instance with cubic elements, accurate solutions are available only at the nodes, so structural safety cannot be reliably assessed within an element, which hinders engineering application. The results of the paper are verified against analytical stability function studies, as well as against numerical results reported by independent researchers for several simple frames.
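The mid-span conversion trades away the detail of the load distribution, and standard beam formulas show why that distribution matters: for a simply supported beam, replacing a uniform load by an equal-total point load at mid-span changes the mid-span deflection by a fixed factor of 1.6. The expressions below are textbook results, not the paper's higher-order formulation, and the numerical values are arbitrary.

```python
def defl_udl(w, L, EI):
    # Mid-span deflection of a simply supported beam under a uniform load w:
    # 5 w L^4 / (384 EI), the classical result.
    return 5 * w * L ** 4 / (384 * EI)

def defl_point(P, L, EI):
    # Mid-span deflection under a point load P applied at mid-span:
    # P L^3 / (48 EI).
    return P * L ** 3 / (48 * EI)

# Same total load w*L applied two ways; units are arbitrary.
w, L, EI = 10.0, 6.0, 2.0e4
ratio = defl_point(w * L, L, EI) / defl_udl(w, L, EI)  # always 1.6
```

This discrepancy is why a naive equal-total-load substitution is not enough, and why the paper's element load coefficients must be calibrated and updated through the non-linear solution procedure.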
Abstract:
A mathematical model is developed for the ripening of cheese. Such models may assist in predicting final cheese quality from measured initial composition. The main constituent chemical reactions are described with ordinary differential equations, and numerical solutions to the model equations are found using Matlab. Unknown parameter values have been fitted using experimental data available in the literature, and the results of the numerical fitting are in good agreement with the data. Statistical analysis is performed on near-infrared data provided to the MISG; however, due to the inhomogeneity and limited extent of the data, few conclusions can be drawn from this analysis. A simple model of potential changes in the acidity of cheese is also considered. Its results are consistent with cheese-manufacturing knowledge, in that the pH of cheddar cheese does not change significantly during ripening.
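A single constituent reaction of the kind such a model describes can be written as a first-order ODE and integrated numerically. The sketch below uses explicit Euler in place of the Matlab solvers, with an invented rate constant; it shows the shape of the computation, not the study's actual reaction network.

```python
def euler(f, y0, t0, t1, n):
    """Fixed-step explicit Euler integration of dy/dt = f(t, y);
    a minimal stand-in for the Matlab ODE solvers used in the study."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

# Toy constituent reaction: first-order breakdown of a substrate S,
# dS/dt = -k * S, with an invented rate constant k and S(0) = 1.
k = 0.3
S = euler(lambda t, s: -k * s, 1.0, 0.0, 10.0, 10000)
```

With a small enough step, the numerical solution tracks the exact decay exp(-k t) closely, which is the kind of check used when fitting unknown parameters against experimental data.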
Abstract:
The purpose of this research is to assess the daylight performance of buildings whose climate-responsive envelopes have complex geometry integrating shading devices into the façade. To this end, two case studies are chosen for their complex geometries and integrated daylighting devices. The effect of different parameters of the daylighting devices is analysed through climate-based daylight metrics.
Abstract:
Electric walking draglines are physically large and powerful machines used in the mining industry. However, with the addition of suitable sensors and a controller, a dragline can be considered a numerically controlled machine, or robot, which can then perform parts of the operating cycle automatically. This paper presents an analysis of the electromechanical system as a necessary precursor to automatic control.
Abstract:
Passenger flow simulations are an important tool for designing and managing airports. This thesis examines different boarding strategies for the Boeing 777 and Airbus A380 aircraft in order to assess their current performance and to determine minimum boarding times. The best-performing existing strategies have been identified, and new, more efficient strategies are proposed. The methods presented reduce aircraft boarding times, which play an important role in reducing the overall aircraft turn time for an airline.
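One simple proxy for why boarding order matters is to count seat interferences: events where an already-seated aisle passenger must stand up to let a window passenger in. Real boarding simulations also model aisle congestion and stowing time; the cabin and strategies below are invented for illustration.

```python
def seat_interferences(order):
    """Count events where a window passenger boards after the aisle
    passenger in the same row is already seated. A toy proxy for
    boarding delay, ignoring aisle congestion entirely."""
    seated = set()
    clashes = 0
    for row, seat in order:
        if seat == "window" and (row, "aisle") in seated:
            clashes += 1
        seated.add((row, seat))
    return clashes

# Two strategies for a 5-row, 2-seats-per-row toy cabin.
rows = range(5)
outside_in = [(r, "window") for r in rows] + [(r, "aisle") for r in rows]
front_back = [(r, s) for r in rows for s in ("aisle", "window")]
```

Under this metric the outside-in (window-first) strategy produces zero interferences, while boarding each row front to back produces one per row, which is one reason outside-in style strategies tend to perform well in fuller simulations.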
Abstract:
The most important aspect of modelling a geological variable, such as metal grade, is the spatial correlation. Spatial correlation describes the relationship between realisations of a geological variable sampled at different locations, and any method for spatially modelling such a variable should be capable of accurately estimating the true spatial correlation. Conventional kriged models are the most commonly used in mining for estimating grade or other variables at unsampled locations, and these models use the variogram or covariance function to model the spatial correlation during estimation. However, this usage assumes that the relationships between observations of the variable of interest at nearby locations are influenced only by the vector distance between the locations; that is, these models assume linear spatial correlation of grade. In reality, the relationship with an observation of grade at a nearby location may be influenced by both the distance between the locations and the values of the observations (i.e., non-linear spatial correlation, such as may exist for variables of interest in geometallurgy). This may therefore lead to inaccurate estimation of the ore reserve if a kriged model is used to estimate grade at unsampled locations when non-linear spatial correlation is present. Copula-based methods, which are widely used in financial and actuarial modelling to quantify non-linear dependence structures, may offer a solution. Such a method was introduced to geostatistical modelling by Bárdossy and Li (2008) to quantify the non-linear spatial dependence structure in a groundwater quality measurement network. Their copula-based spatial modelling is applied in this research paper to estimate the grade of 3D blocks, and real-world mining data are used to validate the model. The copula-based grade estimates are compared with the results of conventional ordinary and lognormal kriging to demonstrate the reliability of the method.
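The core idea of a copula model, separating the dependence structure from the marginal distribution, can be sketched as follows: correlated standard normal scores are mapped through the normal CDF to uniforms, onto which any grade-like marginal can then be imposed. This is a generic bivariate Gaussian copula illustration, not the spatial copula estimator of Bárdossy and Li (2008); the correlation value and the exponential marginal are invented.

```python
import math
import random

def gaussian_copula_pair(rho, rng):
    """Draw one pair from a bivariate Gaussian copula: two correlated
    standard normals are mapped through the normal CDF to uniforms on
    (0, 1), carrying the dependence but no marginal information."""
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF
    return phi(z1), phi(z2)

rng = random.Random(42)
pairs = [gaussian_copula_pair(0.9, rng) for _ in range(5000)]
# Impose a skewed, grade-like exponential marginal on each coordinate.
grades = [(-math.log(1 - u), -math.log(1 - v)) for u, v in pairs]
```

Because the dependence lives entirely in the copula, the same mechanism can express dependence that varies with the values themselves, which is what kriging's purely distance-based covariance cannot capture.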