956 results for dynamic response optimization


Relevance: 30.00%

Abstract:

This chapter describes a parallel optimization technique that incorporates a distributed load-balancing algorithm and provides an extremely fast solution to the problem of load-balancing adaptive unstructured meshes. Moreover, a parallel graph contraction technique can be employed to enhance the partition quality, and the resulting strategy outperforms or matches results from existing state-of-the-art static mesh partitioning algorithms. The strategy can also be applied to static partitioning problems. Dynamic procedures have been found to be much faster than static techniques, to provide partitions of similar or higher quality and, in comparison, to involve the migration of only a fraction of the data. The method employs a new iterative optimization technique that balances the workload and attempts to minimize the interprocessor communication overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of a quality equivalent or superior to those of static partitioners (which do not reuse the existing partition), and does so much more quickly. The dynamic evolution of load has three major influences on possible partitioning techniques: cost, reuse, and parallelism. The unstructured mesh may be modified every few time-steps, so the load balancing must have a low cost relative to that of the solution algorithm run between remeshings.
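As a concrete illustration of the iterative, communication-aware balancing idea, the sketch below implements a generic diffusion-style load balancer on a processor graph. It is a minimal stand-in under assumed inputs (per-processor workloads and a processor adjacency map), not the chapter's actual algorithm or its communication-minimizing refinements.

```python
# Minimal diffusion-style iterative load balancing on a processor graph.
# Illustrative sketch only; inputs and damping scheme are assumptions.
from collections import defaultdict

def diffuse_load(loads, neighbors, alpha=0.5, iters=100, tol=1e-3):
    """loads     : dict proc -> current workload (e.g. mesh element count)
    neighbors : dict proc -> list of adjacent processors
    alpha     : damping factor applied to each pairwise transfer"""
    procs = list(loads)
    target = sum(loads.values()) / len(procs)
    for _ in range(iters):
        transfers = defaultdict(float)
        for p in procs:
            for q in neighbors[p]:
                surplus = loads[p] - loads[q]
                if surplus > 0:
                    # Push a damped share of the surplus to the lighter neighbour.
                    shift = alpha * surplus / len(neighbors[p])
                    transfers[p] -= shift
                    transfers[q] += shift
        for p in procs:
            loads[p] += transfers[p]
        if max(abs(loads[p] - target) for p in procs) < tol * target:
            break  # workload balanced to within tolerance
    return loads
```

Because only load deltas move between processor-graph neighbours, data migration stays local, which is what keeps the cost low relative to the solver running between remeshings.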

Relevance: 30.00%

Abstract:

Estuaries are highly dynamic systems that may be modified in a climate change context, and these changes can affect biogeochemical cycles. Among the major impacts of climate change are increasing rainfall events and sea level rise. This study investigates the impact of such events on biogeochemical dynamics in the Tagus Estuary, the largest and most important estuary on the Portuguese coast. In this context, a 2D biophysical model (MOHID) was implemented, validated through comparison with in-situ data, and explored. To study the impact of extreme rainfall events, which are characterized by a large increase in freshwater inflow, three scenarios were defined by changing the inputs from the main tributaries, the Tagus and Sorraia Rivers: a realistic scenario considering one day of extreme discharge from both the Tagus and Sorraia Rivers, a scenario considering one day of extreme discharge from the Tagus River alone, and one considering extreme runoff from the Sorraia River alone. For mean sea level rise, two scenarios were also established: the first with the current mean sea level and the second considering an increase of 0.42 m. For the extreme rainfall simulations, the results suggest that the biogeochemical characteristics of the Tagus Estuary are mainly influenced by the Tagus River discharge. For the sea level rise scenario, the results suggest a dilution of nutrient concentrations and an increase in Chl-a in specific areas. For both scenarios, the suggested increase in Chl-a concentration in specific estuarine areas can lead to events that promote abnormal phytoplankton growth (blooms), causing water quality to drop and the estuary to face severe quality issues, putting at risk all the activities that depend on it.

Relevance: 30.00%

Abstract:

Deployment of low power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low power and macro basestations that are located within each other's transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains. We show that the proposed algorithms perform better than the current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and picos. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated to any one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in either case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference of submodular functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference of convex functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance with variation in different network topology parameters.
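To make the alternating structure of the time-domain solution concrete, here is a hedged skeleton in which association and activation fractions are updated in turn. The two subproblem updates below are deliberately naive placeholders (airtime proportional to user count; best effective-rate association), not the dissertation's auxiliary-function and combinatorial algorithms.

```python
# Skeleton of alternating optimization over user association and TP
# activation fractions. Subproblem solvers are illustrative placeholders.
import numpy as np

def alternating_opt(rates, n_iters=20):
    """rates[u, t]: achievable rate of user u if served by TP t at full activation."""
    n_users, n_tps = rates.shape
    assoc = rates.argmax(axis=1)          # init: each user takes its best TP
    frac = np.full(n_tps, 1.0 / n_tps)    # init: equal activation fractions
    for _ in range(n_iters):
        # Step 1: activation fractions for the fixed association.
        # Placeholder rule: airtime proportional to each TP's user count.
        counts = np.bincount(assoc, minlength=n_tps).astype(float)
        frac = counts / max(counts.sum(), 1.0)
        # Step 2: re-associate users for the fixed activation fractions.
        # Each user picks the TP maximizing its airtime-scaled rate.
        assoc = (rates * frac).argmax(axis=1)
    return assoc, frac
```

Each step improves (or preserves) the objective for its own subproblem while the other variable block is held fixed, which is the essence of the alternating approach.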

Relevance: 30.00%

Abstract:

In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
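The hybrid replication policy lends itself to a small sketch: the decision below keys off observed read/write frequencies to choose between no replication, eager (push-on-write) replication, and lazy (pull-on-read) replication. The thresholds and the three-way rule are invented for illustration; they are not the dissertation's tuned policy.

```python
# Hedged sketch of a read/write-frequency-driven replication choice for a
# graph node's data in a distributed store. Thresholds are assumptions.
def replication_decision(reads_per_sec, writes_per_sec,
                         min_read_rate=10.0, read_write_ratio=5.0):
    if reads_per_sec < min_read_rate:
        return "no-replica"      # rare reads: fetch remotely on demand
    if reads_per_sec >= read_write_ratio * writes_per_sec:
        return "eager-replica"   # read-heavy: push updates to replicas on write
    return "lazy-replica"        # mixed load: refresh replicas on read
```

The design intent matches the text: eager replication pays a per-write cost to make reads local, lazy replication pays a per-read refresh cost, and the monitored frequencies determine which cost is smaller for each node.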

Relevance: 30.00%

Abstract:

This dissertation focuses on design challenges caused by secondary impacts to printed wiring assemblies (PWAs) within hand-held electronics due to accidental drop or impact loading. The continuing increase of functionality, miniaturization and affordability has resulted in a decrease in the size and weight of handheld electronic products. As a result, PWAs have become thinner and the clearances between surrounding structures have decreased. The resulting increase in the flexibility of the PWAs, in combination with the reduced clearances, requires new design rules to minimize and survive possible internal collisions between PWAs and surrounding structures. Such collisions are termed 'secondary impact' in this study. The effect of secondary impact on the board-level drop reliability of printed wiring boards (PWBs) assembled with MEMS microphone components is investigated using a combination of testing, response and stress analysis, and damage modeling. The response analysis is conducted using a combination of numerical finite element modeling and simplified analytic models for additional parametric sensitivity studies.

Relevance: 30.00%

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software testing and verification have been increasingly automated, software debugging remains largely manual. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared between a number of test cases failing for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause from the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the failure to the root cause. A debugging tool is created to enable developers to use the approach and to integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
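The core narrowing step, intersecting what failing executions have in common, can be sketched with a plain pairwise longest-common-subsequence fold over the failing traces. This is a minimal illustration, not the dissertation's optimized multi-sequence algorithm or its sequence-cover representation.

```python
# Fold pairwise LCS over failing-test execution traces to approximate the
# code shared by every failing run -- a candidate faulty path.
from functools import reduce

def lcs(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming LCS of two sequences."""
    m, n = len(a), len(b)
    dp = [[()] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + (a[i],)
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return list(dp[m][n])

def common_suspicious_path(failing_traces):
    """Intersect all failing traces (as lists of statement/block IDs)."""
    return reduce(lcs, failing_traces)
```

For example, folding over traces ["a", "b", "c", "e"] and ["a", "c", "d", "e"] leaves ["a", "c", "e"] as the shared candidate path; the more failing traces are folded in, the narrower the candidate becomes.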

Relevance: 30.00%

Abstract:

Metabolism in an environment containing 21% oxygen carries a high risk of oxidative damage due to the formation of reactive oxygen species (ROS). Therefore, plants have evolved an antioxidant system consisting of metabolites and enzymes that either directly scavenge ROS or recycle the antioxidant metabolites. Ozone is a temporally dynamic molecule that is both naturally occurring and an environmental pollutant, and it is predicted to increase in concentration in the future as anthropogenic precursor emissions rise. It has been hypothesized that any elevation in ozone concentration will cause increased oxidative stress in plants and therefore enhance subsequent antioxidant metabolism, but evidence for this response is variable. Along with increasing atmospheric ozone concentrations, the atmospheric carbon dioxide concentration is also rising and is predicted to continue rising in the future. The effect of elevated carbon dioxide concentrations on antioxidant metabolism varies among different studies in the literature. Therefore, the question of how antioxidant metabolism will be affected in the most realistic future atmosphere, with increased carbon dioxide concentration and increased ozone concentration, has yet to be answered, and is the subject of my thesis research. First, in order to capture as much of the variability in the antioxidant system as possible, I developed a suite of high-throughput quantitative assays for a variety of antioxidant metabolites and enzymes. I optimized these assays for Glycine max (soybean), one of the most important food crops in the world. These assays provide accurate, rapid and high-throughput measures of both the general and specific antioxidant action of plant tissue extracts. Second, I investigated how growth at either elevated carbon dioxide concentration or chronic elevated ozone concentration altered antioxidant metabolism, and the ability of soybean to respond to an acute oxidative stress, in a controlled environment study. I found that growth at chronic elevated ozone concentration increased the antioxidant capacity of leaves, and that this capacity was unchanged or only slightly increased following an acute oxidative stress, suggesting that growth at chronic elevated ozone concentration primed the antioxidant system. Growth at high carbon dioxide concentration decreased the antioxidant capacity of leaves and increased the response of the existing antioxidant enzymes to an acute oxidative stress, but dampened and delayed the transcriptional response, suggesting an entirely different regulation of the antioxidant system. Third, I tested the findings from the controlled environment study in a field setting by investigating the response of the soybean antioxidant system to growth at elevated carbon dioxide concentration, chronic elevated ozone concentration, and the combination of the two. In this study, I confirmed that growth at elevated carbon dioxide concentration decreased specific components of antioxidant metabolism in the field. I also verified that increasing ozone concentration is highly correlated with increases in the metabolic and genomic components of antioxidant metabolism, regardless of the carbon dioxide environment, but that the response to increasing ozone concentration was dampened at elevated carbon dioxide concentration. In addition, I found evidence suggesting an up-regulation of respiratory metabolism at higher ozone concentration, which would supply energy and carbon for detoxification and repair of cellular damage. These results consistently support the conclusion that growth at elevated carbon dioxide concentration decreases antioxidant metabolism while growth at elevated ozone concentration increases antioxidant metabolism.

Relevance: 30.00%

Abstract:

Purpose: As exposure to psychosocial hazards at work represents a substantial risk factor for employee health in many modern occupations, being able to accurately assess how employees cope with their working environment is crucial. As the workplace is generally accepted as being a dynamic environment, consideration should be given to the interaction between employees and the acute environmental characteristics of their workplace. The aim of this study was to investigate the effects of both acute demand and chronic work-related psychosocial hazard upon employees through ambulatory assessment of heart rate variability and blood pressure. Design: A within-subjects repeated measures design was used to investigate the relationship between exposure to work-related psychosocial hazard and ambulatory heart rate variability and blood pressure in a cohort of higher education employees. Additionally, the effect of acute variation in perceived work-related demand was investigated. Results: Two dimensions of the Management Standards were found to demonstrate an association with heart rate variability; more hazardous levels of "demand" and "relationships" were associated with decreased SDNN (the standard deviation of normal-to-normal inter-beat intervals). Significant changes in blood pressure and indices of heart rate variability were observed with increased acute demand.
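For readers unfamiliar with the index, SDNN is simply the standard deviation of the normal-to-normal inter-beat intervals over a recording window. A minimal sketch follows, assuming intervals are supplied in milliseconds with artifacts already removed.

```python
# SDNN: standard deviation of NN (normal-to-normal) inter-beat intervals.
# Assumes a pre-cleaned list of intervals in milliseconds.
import statistics

def sdnn(nn_intervals_ms):
    if len(nn_intervals_ms) < 2:
        raise ValueError("need at least two NN intervals")
    return statistics.stdev(nn_intervals_ms)  # sample standard deviation, in ms
```

Lower SDNN values, as reported above for hazardous levels of "demand" and "relationships", indicate reduced heart rate variability.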

Relevance: 30.00%

Abstract:

Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation for the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies the rich locality that exists naturally in connected objects comprising a root object and its reachable set — 'rooted sub-graphs'. Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects in the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite. In addition, the evaluation involves the widely used SPECjbb benchmark and the Neo4J graph database Java benchmark, as well as an artificial benchmark. The results of the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average 15% performance improvement. Furthermore, this performance gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy uses outdated assumptions and generates a constant thread count, and the Hotspot JVM still uses this policy in the production version. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring the optimal number of garbage collection threads yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average 21% improvement in garbage collection performance for the DaCapo benchmarks.
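The NUMA-aware stealing rule described above can be sketched as follows: a collector thread drains its own queue, then prefers stealing from same-node siblings before resorting to remote nodes. The queue layout, steal order, and class shape are illustrative assumptions, not Hotspot's implementation.

```python
# Hedged sketch of NUMA-node-preferring work stealing for GC tracing.
import random
from collections import deque

class NumaStealingWorker:
    def __init__(self, worker_id, node_id, queues, node_of):
        self.worker_id = worker_id
        self.node_id = node_id
        self.queues = queues      # worker_id -> deque of object references
        self.node_of = node_of    # worker_id -> NUMA node id

    def next_reference(self):
        own = self.queues[self.worker_id]
        if own:
            return own.pop()      # drain local work first (LIFO for locality)
        same_node = [w for w, n in self.node_of.items()
                     if n == self.node_id and w != self.worker_id]
        remote = [w for w, n in self.node_of.items() if n != self.node_id]
        for group in (same_node, remote):  # prefer same-node victims
            random.shuffle(group)
            for victim in group:
                if self.queues[victim]:
                    return self.queues[victim].popleft()  # steal from far end
        return None  # no work left anywhere
```

Preferring same-node victims keeps the stolen references, and hence the objects they point to, on local memory banks, which is where the reported locality gains come from.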

Relevance: 30.00%

Abstract:

People go through their life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation for simple logit models, which has implications also for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
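The dynamic programming step that makes these models expensive can be shown in miniature: in a logit-style dynamic discrete choice model, the value of a state is the log-sum-exp over available actions of instantaneous utility plus downstream value. The sketch below performs backward induction on an acyclic network; the notation and acyclicity assumption are illustrative, not the thesis's exact formulation.

```python
# Backward-induction value functions and logit choice probabilities for a
# toy dynamic discrete choice model on an acyclic network.
import math

def state_values(actions, utilities, topo_order):
    """actions[s][i]   : successor state of action i in state s (None = stop)
    utilities[s][i] : deterministic utility of action i in state s
    topo_order      : states ordered so successors precede predecessors"""
    V = {None: 0.0}  # value of stopping
    for s in topo_order:
        V[s] = math.log(sum(math.exp(u + V[nxt])
                            for u, nxt in zip(utilities[s], actions[s])))
    return V

def choice_prob(V, actions, utilities, s, i):
    """Logit probability of choosing action i in state s."""
    return math.exp(utilities[s][i] + V[actions[s][i]] - V[s])
```

Estimation repeats this value computation for every trial parameter vector, which is why the thesis's reformulations and structured quasi-Newton methods matter at scale.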

Relevance: 30.00%

Abstract:

The flow rates of the drying and nebulizing gases, the heat block and desolvation line temperatures, and the interface voltage are potential electrospray ionization parameters to optimize, as they may enhance the sensitivity of the mass spectrometer. The conditions giving higher sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen significant factors, and it was concluded that interface voltage and nebulizing gas flow were the only factors that influence the intensity signal for all pharmaceuticals. This fractionated factorial design was projected to set up a full 2^2 factorial design with center points. The lack-of-fit test proved to be significant. Then, a central composite face-centered design was conducted. Finally, stepwise multiple linear regression and, subsequently, optimization problem solving were carried out. Two main drug clusters were found concerning the signal intensities of all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide constitute one cluster as a result of showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that the instrumental signal increased when both significant factors increased, with the maximum signal occurring when both codified factors are set at level +1. It was also found that, for most of the pharmaceuticals, the interface voltage influences the intensity of the instrument more than the nebulizing gas flow rate. The only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors equally influence the instrumental signal.
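A hedged sketch of the augmented design and model fit follows: the coded 2^2 corners plus center points, and an ordinary least-squares fit of a first-order model with interaction in the two significant factors (interface voltage, nebulizing gas flow). Replicate counts and the fitting route are illustrative assumptions, not the paper's exact procedure.

```python
# Coded 2^2 full factorial design augmented with center points, and a
# least-squares fit of y = b0 + b1*x1 + b2*x2 + b12*x1*x2.
# x1 = interface voltage, x2 = nebulizing gas flow (coded -1..+1).
import numpy as np

design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],    # factorial corners
                   [0, 0], [0, 0], [0, 0]], dtype=float)  # center replicates

def fit_first_order(design, y):
    x1, x2 = design[:, 0], design[:, 1]
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [b0, b1, b2, b12]
```

The replicated center points supply the pure-error estimate behind the lack-of-fit test; when that test is significant, as reported above, curvature terms are needed, which motivates moving to the face-centered composite design.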

Relevance: 30.00%

Abstract:

Tomato (Lycopersicon esculentum Mill.) is the second most important vegetable crop worldwide and a rich source of hydrophilic (H) and lipophilic (L) antioxidants. The H fraction is constituted mainly by ascorbic acid and soluble phenolic compounds, while the L fraction contains carotenoids (mostly lycopene), tocopherols, sterols and lipophilic phenolics [1,2]. To obtain these antioxidants it is necessary to follow appropriate extraction methods and processing conditions. In this regard, this study aimed at determining the optimal extraction conditions for H and L antioxidants from a tomato surplus. A 5-level full factorial design with 4 factors (extraction time (t, 0-20 min), temperature (T, 60-180 °C), ethanol percentage (Et, 0-100%) and solid/liquid ratio (S/L, 5-45 g/L)) was implemented, and response surface methodology was used for analysis. Extractions were carried out in a Biotage Initiator Microwave apparatus. The concentration-time response methods of crocin and β-carotene bleaching were applied (using 96-well microplates), since they are suitable in vitro assays to evaluate the antioxidant activity of H and L matrices, respectively [3]. Measurements were carried out at intervals of 3, 5 and 10 min (initiation, propagation and asymptotic phases), during a time frame of 200 min. The parameters Pm (maximum protected substrate) and Vm (amount of protected substrate per g of extract) and the so-called IC50 were used to quantify the response. The optimum extraction conditions were as follows: t=2.25 min, T=149.2 °C, Et=99.1% and S/L=15.0 g/L for H antioxidants; and t=15.4 min, T=60.0 °C, Et=33.0% and S/L=15.0 g/L for L antioxidants. The proposed model was validated based on the high values of the adjusted coefficient of determination (R²adj>0.91) and on the non-significant differences between predicted and experimental values. It was also found that the antioxidant capacity of the H fraction was much higher than that of the L fraction.

Relevance: 30.00%

Abstract:

Oysters play an important role in estuarine and coastal marine habitats, where the majority of humans live. In these ecosystems, environmental degradation is substantial, and oysters must cope with highly dynamic and stressful environmental constraints during their lives in the intertidal zone. The availability of the genome sequence of the Pacific oyster Crassostrea gigas represents a unique opportunity for a comprehensive assessment of the signal transduction pathways that the species has developed to deal with this unique habitat. We performed an in silico analysis to identify, annotate and classify the protein kinases in C. gigas according to their kinase domain taxonomy, and compared them with the kinomes already described for other animal species. The C. gigas kinome consists of 371 protein kinases, making it closely related to the sea urchin kinome, which has 353 protein kinases. The absence of gene redundancy in some groups of the C. gigas kinome may simplify functional studies of protein kinases. Through data mining of transcriptomes in C. gigas, we identified the part of the kinome that may be central during development and may play a role in the response to various environmental factors. Overall, this work contributes to a better understanding of key sensing pathways that may be central for adaptation to a highly dynamic marine environment.

Relevance: 30.00%

Abstract:

This dissertation was developed within the scope of the Master's in Electrical Engineering (MEE) at the Instituto Superior de Engenharia do Porto (ISEP), under an industrial placement at the company PH Energia Lda. Over recent years, markets have become increasingly competitive, making it almost imperative for companies to invest in good optimization of their production processes. Producing more, faster and with fewer available resources, that is, efficiently, is the challenge for every company that intends to remain in the market. In this context arises the thesis topic, "Gestão nos Serviços com Sistemas de Monitorização e Implementação do Smart Pricing" (Service Management with Monitoring Systems and Smart Pricing Implementation), whose main objective is to optimize PH Energia's platforms within a culture of continuous improvement and customer orientation, and to promote the application of indexed tariffs and Smart Pricing in companies so that greater savings can be achieved. Throughout this dissertation, calculations associated with service monitoring and management were developed; their viability for the application of indexed tariffs and Smart Pricing in the business sector was demonstrated; and, finally, the savings that can be obtained by shifting the load diagram while keeping the same consumption were quantified. This work cross-referenced two software platforms, GesEnergy and Kisense, with the help of the company VPS, a partner of the company Energia Simples. Regarding the indexed plan, two case studies of two branches of Banco Popular de Portugal were carried out in order to explain when and how the indexed tariff and demand management should be applied, and how consumption should be shifted so as to cover the most advantageous hours, when the price of electricity is lowest.

Relevance: 30.00%

Abstract:

We study the growth of a tissue construct in a perfusion bioreactor, focussing on its response to the mechanical environment. The bioreactor system is modelled as a two-dimensional channel containing a tissue construct through which a flow of culture medium is driven. We employ a multiphase formulation of the type presented by G. Lemon, J. King, H. Byrne, O. Jensen and K. Shakesheff in their study (Multiphase modelling of tissue growth using the theory of mixtures. J. Math. Biol. 52(2), 2006, 571–594), restricted to two interacting fluid phases, representing a cell population (and attendant extracellular matrix) and a culture medium, and employ the simplifying limit of large interphase viscous drag after S. Franks in her study (Mathematical Modelling of Tumour Growth and Stability. Ph.D. Thesis, University of Nottingham, UK, 2002) and S. Franks and J. King in their study (Interactions between a uniformly proliferating tumour and its surrounding: Uniform material properties. Math. Med. Biol. 20, 2003, 47–89). The novel aspects of this study are: (i) the investigation of the effect of an imposed flow on the growth of the tissue construct, and (ii) the inclusion of a mechanotransduction mechanism regulating the response of the cells to the local mechanical environment. Specifically, we consider the response of the cells to their local density and the culture medium pressure. As such, this study forms the first step towards a general multiphase formulation that incorporates the effect of mechanotransduction on the growth and morphology of a tissue construct. The model is analysed using analytic and numerical techniques, the results of which illustrate the potential use of the model to predict the dominant regulatory stimuli in a cell population.
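Although the paper's constitutive details are not reproduced here, the standard mass balances of such a two-phase mixture model (a sketch, assuming a no-voids mixture of a cell phase and a culture medium) take the form:

```latex
% Illustrative two-phase mixture mass balances (not the paper's exact
% system): \theta_n, \theta_w are the cell and medium volume fractions,
% \mathbf{u}_n, \mathbf{u}_w the phase velocities, and S_n the net rate
% at which medium is converted into cellular material by growth.
\begin{align}
  \frac{\partial \theta_n}{\partial t} + \nabla\cdot(\theta_n \mathbf{u}_n) &= S_n, \\
  \frac{\partial \theta_w}{\partial t} + \nabla\cdot(\theta_w \mathbf{u}_w) &= -S_n, \\
  \theta_n + \theta_w &= 1.
\end{align}
```

In this framework, a mechanotransduction mechanism enters through the dependence of the growth term S_n on the local cell density and the culture medium pressure, which is precisely the coupling the study investigates.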