522 results for flow modelling
Abstract:
In this study, we consider how Fractional Differential Equations (FDEs) can be used to study travelling wave phenomena in parabolic equations. Applying the method to highly crowded intracellular environments, we find a simple relationship between the travelling wave speed and the obstacle density.
Abstract:
Biochemical reactions underlying genetic regulation are often modelled as a continuous-time, discrete-state Markov process, and the evolution of the associated probability density is described by the so-called chemical master equation (CME). However, the CME is typically difficult to solve, since the state-space involved can be very large or even countably infinite. Recently, a finite state projection (FSP) method that truncates the state-space was suggested and shown to be effective in an example of a model of the Pap-pili epigenetic switch. However, in this example both the model and the final time at which the solution was computed were relatively small. Presented here is a Krylov FSP algorithm based on a combination of state-space truncation and inexact matrix-vector product routines. This allows larger-scale models to be studied and solutions for larger final times to be computed in a realistic execution time. Additionally, the new method computes the solution at intermediate times at virtually no extra cost, since it is derived from Krylov-type methods for computing matrix exponentials. For the purpose of comparison, the new algorithm is applied to the model of the Pap-pili epigenetic switch, where the original FSP was first demonstrated. The method is also applied to a more sophisticated model of regulated transcription. Numerical results indicate that the new approach is significantly faster and extendable to larger biological models.
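To make the FSP idea concrete, the following is a minimal sketch under assumed parameters: a toy birth-death (production/degradation) model is truncated, the truncated CME generator is exponentiated directly, and the probability mass that leaks out of the truncation gives the usual FSP error bound. It uses a dense matrix exponential rather than the Krylov approach with inexact matrix-vector products described in the abstract.

```python
# Toy illustration of the finite state projection (FSP) idea: truncate the CME
# state-space and propagate the probability vector with a matrix exponential.
# This is NOT the paper's Krylov FSP algorithm; the birth-death model, rates and
# truncation size below are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

k_prod, k_deg = 10.0, 1.0   # assumed production and degradation rates
N = 60                      # truncation: keep states x = 0 .. N-1 molecules

# Truncated CME generator A, so that dp/dt = A p (columns index the "from" state).
A = np.zeros((N, N))
for x in range(N):
    A[x, x] -= k_prod + k_deg * x      # total outflow rate from state x (kept at the boundary)
    if x + 1 < N:
        A[x + 1, x] += k_prod          # production x -> x+1 (dropped at x = N-1: FSP leak)
    if x >= 1:
        A[x - 1, x] += k_deg * x       # degradation x -> x-1

p0 = np.zeros(N)
p0[0] = 1.0                            # start with zero molecules

for t in (0.5, 1.0, 5.0):
    p = expm(A * t) @ p0
    # 1 - sum(p) is the probability mass that left the truncated region,
    # i.e. the FSP error certificate for this truncation at time t.
    print(f"t={t}: mean copy number = {np.arange(N) @ p:.2f}, "
          f"FSP error bound = {1 - p.sum():.2e}")
```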
Abstract:
The study of venture idea characteristics and the contextual fit between venture ideas and individuals are key research goals in entrepreneurship (Davidsson, 2004). However, to date there has been limited scholarly attention given to these phenomena. Accordingly, this study aims to help fill the gap by investigating the importance of novelty and relatedness of venture ideas in entrepreneurial firms. On the premise that new venture creation is a process and that research should be focused on the early stages of the venturing process, this study primarily examines how venture idea novelty and relatedness affect performance in the venture creation process. Different types and degrees of novelty are considered here. Relatedness is shown to be based on individuals’ prior knowledge and resource endowment. Performance in the venture creation process is evaluated according to four possible outcomes: making progress, getting operational, being terminated and achieving positive cash flow. A theoretical model is developed demonstrating the relationship between these variables, along with the investment of time and money. Several hypotheses are developed to be tested. Among them, it is hypothesised that novelty hinders short-term performance in the venture creation process; on the other hand, knowledge and resource relatedness are hypothesised to promote performance. An experimental study was required in order to understand how different types and degrees of novelty and relatedness of venture ideas affect the attractiveness of venture ideas in the eyes of experienced entrepreneurs. Thus, the empirical work in this thesis was based on two separate studies. In the first, a conjoint analysis experiment was conducted with 32 experienced entrepreneurs in order to ascertain attitudinal preferences regarding venture idea attractiveness based on novelty, relatedness and potential financial gains. This helped to estimate utility values for different levels of different attributes of venture ideas and their relative importance to attractiveness. The second study was a longitudinal investigation of how venture idea novelty and relatedness affect performance in the venture creation process. The data for this study come from the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE) project, which was established in order to explore the new venture creation process in Australia. CAUSEE collects data from a representative sample of over 30,000 households in Australia using random digit dialling (RDD) telephone interviews. From these cases, data were collected at two points in time during a 12-month period from 493 firms involved in the start-up process. Hypotheses were tested and inferences were derived through descriptive statistics, confirmatory factor analysis and structural equation modelling. Results of study 1 indicate that venture idea characteristics play a role in attractiveness, and that entrepreneurs prefer to introduce a moderate degree of novelty across all types of venture ideas concerned. Knowledge relatedness is demonstrated to be a more significant factor in attractiveness than resource relatedness. Results of study 2 show that novelty hinders nascent venture performance; on the other hand, resource relatedness has a positive impact on performance, unlike knowledge relatedness, which has none. The results of these studies have important implications for potential entrepreneurs, investors, researchers and consultants, by developing a better understanding of the venture creation process and its success factors in terms of both theory and practice.
Abstract:
Probabilistic topic models have recently been used for activity analysis in video processing, due to their strong capacity to model both local activities and interactions in crowded scenes. In these applications, a video sequence is divided into a collection of uniform, non-overlapping video clips, and the high-dimensional continuous inputs are quantized into a bag of discrete visual words. The hard division of video clips and the hard assignment of visual words lead to problems when an activity is split over multiple clips, or when the most appropriate visual word for quantization is unclear. In this paper, we propose a novel algorithm which makes use of a soft histogram technique to compensate for the loss of information in the quantization process, and a soft cut technique in the temporal domain to overcome problems caused by separating an activity into two video clips. In the detection process, we also apply a soft decision strategy to detect unusual events. We show that the proposed soft decision approach outperforms its hard decision counterpart in both local and global activity modelling.
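The soft-histogram step can be illustrated with a minimal sketch: instead of incrementing only the bin of the single nearest codeword, each feature spreads a unit of mass over nearby codewords. The Gaussian kernel and the bandwidth used here are assumptions for illustration, not the paper's exact weighting scheme.

```python
# Minimal sketch of "soft" visual-word assignment versus hard quantisation.
# The Gaussian kernel and bandwidth sigma are illustrative assumptions.
import numpy as np

def soft_histogram(features, codebook, sigma=1.0):
    """features: (n, d) descriptors; codebook: (k, d) visual-word centres."""
    hist = np.zeros(len(codebook))
    for f in features:
        d2 = np.sum((codebook - f) ** 2, axis=1)   # squared distances to every codeword
        w = np.exp(-d2 / (2.0 * sigma ** 2))       # kernel weights
        hist += w / w.sum()                        # each feature contributes total mass 1
    return hist

def hard_histogram(features, codebook):
    hist = np.zeros(len(codebook))
    for f in features:
        hist[np.argmin(np.sum((codebook - f) ** 2, axis=1))] += 1.0
    return hist

rng = np.random.default_rng(0)
codebook = rng.normal(size=(5, 2))     # toy codebook of 5 visual words in 2-D
features = rng.normal(size=(20, 2))    # toy descriptors extracted from one clip
print("hard:", hard_histogram(features, codebook))
print("soft:", soft_histogram(features, codebook))
```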
Abstract:
The purpose of the present study was to compare the effects of cold water immersion (CWI) and active recovery (ACT) on resting limb blood flow, rectal temperature and repeated cycling performance in the heat. Ten subjects completed two testing sessions separated by 1 week; each trial consisted of an initial all-out 35-min exercise bout, one of two 15-min recovery interventions (randomised: CWI or ACT), followed by a 40-min passive recovery period before repeating the 35-min exercise bout. Performance was measured as the change in total work completed during the exercise bouts. Resting limb blood flow, heart rate, rectal temperature and blood lactate were recorded throughout the testing sessions. There was a significant decline in performance after ACT (mean (SD) −1.81% (1.05%)) compared with CWI, where performance remained unchanged (0.10% (0.71%)). Rectal temperature was reduced after CWI (36.8°C (1.0°C)) compared with ACT (38.3°C (0.4°C)), as was blood flow to the arms (CWI 3.64 (1.47) ml/100 ml/min; ACT 16.85 (3.57) ml/100 ml/min) and legs (CWI 4.83 (2.49) ml/100 ml/min; ACT 4.83 (2.49) ml/100 ml/min). Leg blood flow at the end of the second exercise bout was not different between the active (15.25 (4.33) ml/100 ml/min) and cold trials (14.99 (4.96) ml/100 ml/min), whereas rectal temperature (CWI 38.1°C (0.3°C); ACT 38.8°C (0.2°C)) and arm blood flow (CWI 20.55 (3.78) ml/100 ml/min; ACT 23.83 (5.32) ml/100 ml/min) remained depressed until the end of the cold trial. These findings indicate that CWI is an effective intervention for maintaining repeat cycling performance in the heat, and this performance benefit is associated with alterations in core temperature and limb blood flow.
Abstract:
Background: High-flow nasal cannulae (HFNC) create positive oropharyngeal airway pressure, but it is unclear how their use affects lung volume. Electrical impedance tomography (EIT) allows assessment of changes in lung volume by measuring changes in lung impedance. Primary objectives were to investigate the effects of HFNC on airway pressure (Paw) and end-expiratory lung volume (EELV), and to identify any correlation between the two. Secondary objectives were to investigate the effects of HFNC on respiratory rate (RR), dyspnoea, tidal volume and oxygenation, and the interaction between body mass index (BMI) and EELV. Methods: Twenty patients prescribed HFNC post-cardiac surgery were investigated. Impedance measures, Paw, PaO2/FiO2 ratio, RR and modified Borg scores were recorded first on low-flow oxygen (nasal cannula or Hudson face mask) and then on HFNC. Results: A strong and significant correlation existed between Paw and end-expiratory lung impedance (EELI) (r=0.7, p<0.001). Compared with low-flow oxygen, HFNC significantly increased EELI by 25.6% (95% CI 24.3, 26.9) and Paw by 3.0 cmH2O (95% CI 2.4, 3.7). RR reduced by 3.4 breaths per minute (95% CI 1.7, 5.2) with HFNC use, tidal impedance variation increased by 10.5% (95% CI 6.1, 18.3) and PaO2/FiO2 ratio improved by 30.6 mmHg (95% CI 17.9, 43.3). HFNC improved subjective dyspnoea scoring (p=0.023). Increases in EELI were significantly influenced by BMI, with larger increases associated with higher BMIs (p<0.001). Conclusions: This study suggests that HFNC improve dyspnoea and oxygenation by increasing both EELV and tidal volume, and are most beneficial in patients with higher BMIs.
Abstract:
Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often, word order information is not captured in these models, and this lack of structural information has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor-based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task-specific semantic information.
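As a point of reference for how word order can be folded into a geometric word representation, the sketch below binds a target word's neighbours to a placeholder vector with circular convolution, in the style of holographic/BEAGLE-type models. This is only an illustration of that general family of order-encoding approaches, not the tensor representation proposed in the paper; the vectors, dimensionality and placeholder scheme are assumptions.

```python
# Sketch of order-sensitive word representations via circular-convolution binding.
# Illustrative of holographic-style order encoding only; not the paper's method.
import numpy as np

D = 1024
rng = np.random.default_rng(1)
lexicon = {}

def env(word):
    """Fixed random 'environment' vector per word (assumed representation)."""
    if word not in lexicon:
        lexicon[word] = rng.normal(0, 1 / np.sqrt(D), D)
    return lexicon[word]

def cconv(a, b):
    """Circular convolution: the binding operator."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def order_vector(sentence, target):
    """Sum of bindings between a placeholder for the target and its ordered neighbours."""
    words = sentence.split()
    i = words.index(target)
    phi = env("__placeholder__")
    v = np.zeros(D)
    if i > 0:
        v += cconv(env(words[i - 1]), phi)   # encode "word before target"
    if i + 1 < len(words):
        v += cconv(phi, env(words[i + 1]))   # encode "word after target"
    return v

a = order_vector("the dog chased the cat", "chased")
b = order_vector("the cat chased the dog", "chased")
print("order similarity:", np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```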
Abstract:
How do humans respond to their social context? This question is becoming increasingly urgent in a society where democracy requires that the citizens of a country help to decide upon its policy directions, and yet those citizens frequently have very little knowledge of the complex issues that these policies seek to address. Frequently, we find that humans make their decisions more with reference to their social setting, than to the arguments of scientists, academics, and policy makers. It is broadly anticipated that the agent based modelling (ABM) of human behaviour will make it possible to treat such social effects, but we take the position here that a more sophisticated treatment of context will be required in many such models. While notions such as historical context (where the past history of an agent might affect its later actions) and situational context (where the agent will choose a different action in a different situation) abound in ABM scenarios, we will discuss a case of a potentially changing context, where social effects can have a strong influence upon the perceptions of a group of subjects. In particular, we shall discuss a recently reported case where a biased worm in an election debate led to significant distortions in the reports given by participants as to who won the debate (Davis et al 2011). Thus, participants in a different social context drew different conclusions about the perceived winner of the same debate, with associated significant differences among the two groups as to who they would vote for in the coming election. We extend this example to the problem of modelling the likely electoral responses of agents in the context of the climate change debate, and discuss the notion of interference between related questions that might be asked of an agent in a social simulation that was intended to simulate their likely responses. A modelling technology which could account for such strong social contextual effects would benefit regulatory bodies which need to navigate between multiple interests and concerns, and we shall present one viable avenue for constructing such a technology. A geometric approach will be presented, where the internal state of an agent is represented in a vector space, and their social context is naturally modelled as a set of basis states that are chosen with reference to the problem space.
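A minimal sketch of that geometric picture, under assumed numbers: the agent's internal state is a unit vector, each social context supplies its own orthonormal basis, and response probabilities are squared projections onto the basis vectors, so the same state can yield different answers in different contexts.

```python
# Sketch of a vector-space agent state with context-dependent response bases.
# The state angle and basis rotations are illustrative assumptions only.
import numpy as np

def basis(theta):
    """Orthonormal 2-D basis rotated by theta: ('debate won by A', 'debate won by B')."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

psi = np.array([np.cos(0.4), np.sin(0.4)])      # the agent's internal state (unit vector)

for name, theta in [("neutral social context", 0.0), ("biased-worm context", 0.7)]:
    B = basis(theta)
    probs = (B @ psi) ** 2                       # squared projections; they sum to 1
    print(f"{name}: P(A won) = {probs[0]:.2f}, P(B won) = {probs[1]:.2f}")
```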
Abstract:
Popular wireless networks, such as IEEE 802.11/15/16, are not designed for real-time applications. Thus, supporting real-time quality of service (QoS) in wireless real-time control is challenging. This paper adopts the widely used IEEE 802.11, with a focus on its distributed coordination function (DCF), for soft real-time control systems. The concept of the critical real-time traffic condition is introduced to characterize the marginal satisfaction of real-time requirements. Then, mathematical models are developed to describe the dynamics of DCF-based real-time control networks with periodic traffic, a unique feature of control systems. Performance indices such as throughput and packet delay are evaluated using the developed models, particularly under the critical real-time traffic condition. Finally, the proposed modelling is applied to traffic rate control for cross-layer networked control system design.
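For readers unfamiliar with analytical DCF models, the sketch below solves the classic Bianchi-style saturation fixed point for the per-station transmission probability. This is only an illustration of the modelling style: the paper's own model addresses periodic (non-saturated) real-time traffic and the critical real-time traffic condition, and the window, backoff-stage and station counts here are assumed example values.

```python
# Bianchi-style saturation model of the 802.11 DCF: each station's transmission
# probability tau and conditional collision probability p satisfy a fixed point.
# Generic textbook model, not the paper's periodic-traffic model; W (CWmin),
# m (backoff stages) and n (stations) are assumed example values.
W, m, n = 32, 5, 10

def tau_of_p(p):
    """Per-station transmission probability given conditional collision probability p."""
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

p = 0.1                                    # initial guess
for _ in range(500):                       # damped fixed-point iteration
    tau = tau_of_p(p)
    p = 0.5 * p + 0.5 * (1 - (1 - tau) ** (n - 1))

p_tr = 1 - (1 - tau) ** n                  # P(at least one station transmits in a slot)
p_succ = n * tau * (1 - tau) ** (n - 1)    # P(exactly one station transmits in a slot)
print(f"tau = {tau:.4f}, collision prob p = {p:.3f}, "
      f"P(success | some transmission) = {p_succ / p_tr:.3f}")
```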
Abstract:
In this study, numerical simulations of natural convection in an attic space subject to a diurnal temperature condition on the sloping wall have been carried out. An explanation for the choice of the period of the periodic thermal effect is given with the help of scaling analysis available in the literature. Moreover, the effects of the aspect ratio and the Rayleigh number on the fluid flow and heat transfer are discussed in detail, as is the formation of a pitchfork bifurcation of the flow about the symmetry line of the enclosure.
Abstract:
A general mistrust between contractor and subcontractor companies has been identified as one of the significant barriers to deriving benefits from true downstream supply chain integration. Using the general theory of trust in inter-organizational relations and conducting interviews, this research discusses factors that influence the development of trust and cooperation in contractor–subcontractor relationships in construction projects. System dynamics is the simulation method selected for this theory-building effort, based on qualitative data collected from two projects of a construction company in Thailand. Performance, permeability and system-based trust are found to make significant contributions to the parties' trust levels. Three strategic policies, namely best-value contracting, managing subcontractors as an internal team, and a semi-project-partnering approach, are recommended to stimulate the trust factors as well as a cooperative long-term relationship.
Abstract:
Goldin (2003) and McDonald, Yanchar, and Osguthorpe (2005) have called for mathematics learning theory that bridges the chasm between ideologies, and which may advance mathematics teaching and learning practice. This paper discusses the theoretical underpinnings of a recently completed PhD study that draws upon Popper’s (1978) three-world model of knowledge as a lens through which to reconsider a variety of learning theories, including Piaget’s reflective abstraction. Based upon this consideration of theories, an alternative theoretical framework and a complementary operational model were synthesised, the viability of which was demonstrated by using them to analyse the domain of early-number counting, addition and subtraction.
Abstract:
Vehicle-emitted particles are of significant concern due to their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations compared to other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently, there is limited scientific knowledge on particle concentration, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partially due to the fact that the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger’s spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic-interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing a typical transport microenvironment. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected to conduct measurements of particle number size distributions, particle number and PM2.5 concentrations during two different seasons. Simultaneous traffic and meteorological parameters were also monitored, aiming to quantify particle characteristics and investigate the impact of bus flow rate, station design and meteorological conditions on particle characteristics at stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely to be attributed to the lower average daily temperature compared to the station with a canyon structure (canyon station). During precipitation events, it was found that particle number concentration in the size range 25-250 nm decreased greatly, and that the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2% and 22.6% at the open and canyon station, respectively. The effect of ambient wind speeds on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed for the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were calculated and identified at the two stations, during the same time of day, and with the same ambient wind speeds and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor influencing PN7-3000 concentrations.
Passenger exposure to bus emissions on a platform was further evaluated at another bus station in Brisbane, Australia. The sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform. For the whole day, the average PN13-800 concentration was 1.3 × 10^4 and 1.0 × 10^4 particles/cm3 at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to the overall daily exposure was assessed using two assumed scenarios of a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of the daily exposure accounted for by a location) at the station were 2.7% and 2.8% for exposure to PN13-800, and 2.7% and 3.5% for exposure to PM2.5, for the school student and the office worker, respectively. A new parameter, “exposure intensity” (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution for further dispersion models of traffic-interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture the accurate emission distribution for real vehicle flow. The model not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalled pedestrian crossing, in order to assess the increased particle number emissions from motor vehicles forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses: 1 car / 1 direction, 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions.
It was found that the total emissions produced while traffic was stopped at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by factors of 13, 11, 45, 11, 41 and 43 for the above six cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study on particle number and mass concentrations, together with particle size distribution, in a bus station transport microenvironment, as influenced by bus flow rates, meteorological conditions and station design. Passenger spatial-temporal exposure to bus-emitted particles was also assessed according to waiting time and location along the platform, as was the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also demonstrated its applicability and simplicity for use in a real-world transport microenvironment.
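To make the segment-based bookkeeping of a CLSE-style calculation concrete, the following is a minimal sketch under invented numbers: the road section is split into segments, each vehicle is assigned a dwell time per driving mode (cruise, decelerate, idle, accelerate) in each segment, and mode-specific emission rates are summed. The emission factors, segment layout and dwell times are placeholders, not values from the thesis, and the real CLSE model is considerably more detailed.

```python
# Illustrative sketch of a composite line source emission (CLSE) style calculation:
# emissions per road segment = vehicles x sum over driving modes of (time in mode x rate).
# All rates, segment layouts and dwell times below are invented placeholders.

# Emission rates per driving mode (particles per vehicle per second) -- placeholders.
RATE = {"cruise": 1.0e11, "decelerate": 0.5e11, "idle": 0.3e11, "accelerate": 5.0e11}

# Time (s) one vehicle spends in each mode within each segment approaching and
# leaving a stop line -- also placeholders.
segments = [
    {"decelerate": 2.0},                 # approach
    {"decelerate": 3.0, "idle": 20.0},   # queue at the stop line
    {"accelerate": 4.0},                 # pull-away
    {"accelerate": 2.0, "cruise": 1.0},  # downstream
]

def segment_emissions(n_vehicles):
    """Total particle emissions per segment for n_vehicles passing through."""
    return [n_vehicles * sum(RATE[mode] * t for mode, t in seg.items())
            for seg in segments]

per_segment = segment_emissions(n_vehicles=14)
total = sum(per_segment)
for i, e in enumerate(per_segment):
    print(f"segment {i}: {e:.2e} particles ({100 * e / total:.1f}% of total)")
```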
Abstract:
In natural estuaries, scalar diffusion and dispersion are driven by turbulence. In the present study, detailed turbulence measurements were conducted in a small subtropical estuary with semi-diurnal tides under neap tide conditions. Three acoustic Doppler velocimeters were installed mid-estuary at fixed locations close together. The units were sampled simultaneously and continuously at relatively high frequency for 50 h. The results illustrated the influence of tidal forcing in the small estuary, although low-frequency longitudinal velocity oscillations were observed and believed to be induced by external resonance. The boundary shear stress data implied that the turbulent shear in the lower flow region was one order of magnitude larger than the boundary shear itself. The observation differed from turbulence data in a laboratory channel, but a key feature of natural estuary flow was the significant three-dimensional effects associated with strong secondary currents, including transverse shear events. The velocity covariances and triple correlations, as well as the backscatter intensity and covariances, were calculated for the entire field study. The covariances of the longitudinal velocity component showed some tidal trend, while the covariances of the transverse horizontal velocity component exhibited trends that reflected changes in secondary current patterns between ebb and flood tides. The triple correlation data tended to show some differences between ebb and flood tides. The acoustic backscatter intensity data were characterised by large fluctuations during the entire study, with a dimensionless fluctuation intensity I′b/Ib between 0.46 and 0.54. An unusual feature of the field study was some moderate rainfall prior to and during the first part of the sampling period. Visual observations showed some surface scars and marked channels, while some mini transient fronts were observed.
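A minimal sketch of the kind of post-processing described, using a synthetic velocity record: the slowly varying mean is removed with a running average, and velocity covariances (Reynolds stresses) and triple correlations are then formed from the fluctuations. The sampling rate, synthetic signal and fixed averaging window are assumptions; the field study's actual detrending and despiking procedures are more involved.

```python
# Sketch of turbulence statistics from an ADV-like velocity time series:
# remove a running mean, then form covariances and triple correlations.
# The synthetic signal, 25 Hz rate and 120 s window are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
fs, T = 25.0, 600.0                       # sampling rate (Hz) and record length (s)
t = np.arange(0, T, 1 / fs)
u = 0.30 + 0.05 * np.sin(2 * np.pi * t / 300) + 0.04 * rng.standard_normal(t.size)
w = 0.02 * rng.standard_normal(t.size) - 0.1 * (u - u.mean())   # correlated vertical component

def fluctuations(x, window_s=120.0):
    """Subtract a running mean over window_s seconds to isolate turbulent fluctuations."""
    n = int(window_s * fs)
    kernel = np.ones(n) / n
    return x - np.convolve(x, kernel, mode="same")

up, wp = fluctuations(u), fluctuations(w)
print("Reynolds stress      <u'w'>         :", np.mean(up * wp))
print("normal stresses      <u'u'>, <w'w'> :", np.mean(up * up), np.mean(wp * wp))
print("triple correlations  <u'u'w'>, <u'w'w'> :", np.mean(up * up * wp), np.mean(up * wp * wp))
```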
Abstract:
This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling. Some of the core components of data modelling are addressed. A selection of results from the first data modelling activity implemented during the second year (2010; second grade) of a current longitudinal study is reported. Data modelling involves investigating meaningful phenomena, deciding what is worthy of attention (identifying complex attributes), and then progressing to organising, structuring, visualising and representing data. Reported here are children's abilities to identify diverse and complex attributes, sort and classify data in different ways, and create and interpret models to represent their data.