592 results for Complex combinatorial problem


Relevance:

30.00%

Publisher:

Abstract:

Web service composition is an important problem in web-service-based systems: it concerns how to build a new, value-added web service from existing web services. A web service may have many implementations, all of which provide the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each web service such that the composite web service gives the best overall performance; this is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation must be selected for another web service; this is the so-called dependency constraint. Sometimes when an implementation is selected for one web service, a set of implementations for another web service must be excluded from the composition; this is the so-called conflict constraint. From the computational point of view, optimal web service selection is therefore a typical constrained combinatorial optimization problem. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated, and the evaluation results show that it outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
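From the computational description above, this kind of selection problem can be attacked with a penalty-based genetic algorithm. The following Python sketch is a minimal illustration, not the paper's actual hybrid algorithm: the QoS utilities, the dependency and conflict lists, the penalty weight, and the simple repair step (standing in for a hybrid local component) are all assumed for the example.

```python
# Minimal sketch of GA-based web service selection with dependency and
# conflict constraints. Utilities, constraints, and the penalty weight
# below are illustrative assumptions, not the paper's actual setup.
import random

N_SERVICES = 8                 # abstract services in the composition
N_IMPLS = 5                    # candidate implementations per service
# utility[s][i]: aggregated QoS utility of implementation i for service s
utility = [[random.random() for _ in range(N_IMPLS)] for _ in range(N_SERVICES)]
# dependency: choosing impl a for service s forces impl b for service t
dependencies = [((0, 1), (3, 2))]
# conflict: choosing impl a for service s excludes impl b for service t
conflicts = [((2, 0), (5, 4))]
PENALTY = 10.0

def fitness(chrom):
    score = sum(utility[s][chrom[s]] for s in range(N_SERVICES))
    for (s, a), (t, b) in dependencies:
        if chrom[s] == a and chrom[t] != b:
            score -= PENALTY        # dependency constraint violated
    for (s, a), (t, b) in conflicts:
        if chrom[s] == a and chrom[t] == b:
            score -= PENALTY        # conflict constraint violated
    return score

def repair(chrom):
    # naive "hybrid" step: enforce dependency constraints directly;
    # a real hybrid GA would use a problem-specific local search here
    for (s, a), (t, b) in dependencies:
        if chrom[s] == a:
            chrom[t] = b
    return chrom

def evolve(pop_size=40, generations=200):
    pop = [[random.randrange(N_IMPLS) for _ in range(N_SERVICES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_SERVICES)
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.1:           # mutation
                child[random.randrange(N_SERVICES)] = random.randrange(N_IMPLS)
            children.append(repair(child))
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())
```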

Relevance:

30.00%

Publisher:

Abstract:

Many industrial processes and systems can be modelled mathematically by a set of Partial Differential Equations (PDEs). Finding a solution to such a PDE model is essential for system design, simulation, and process control purposes. However, major difficulties appear when solving PDEs with singularities. Traditional numerical methods, such as finite difference, finite element, and polynomial-based orthogonal collocation, not only have limitations in fully capturing the process dynamics but also demand enormous computational power, because a large number of elements or mesh points is needed to accommodate sharp variations. To tackle this challenging problem, wavelet-based approaches and high-resolution methods have recently been developed, with successful applications to a fixed-bed adsorption column model. Our investigation has shown that recent advances in wavelet-based approaches and high-resolution methods have the potential to be adopted for solving more complicated dynamic system models. This chapter highlights the successful application of these new methods to complex models of simulated-moving-bed (SMB) chromatographic processes. An SMB process is a distributed parameter system and can be described mathematically by a set of partial/ordinary differential equations and algebraic equations. These equations are highly coupled, exhibit wave propagation with steep fronts, and require significant numerical effort to solve. To demonstrate the numerical computing power of the wavelet-based approaches and high-resolution methods, a single-column chromatographic process modelled by a Transport-Dispersive-Equilibrium linear model is investigated first. Numerical solutions from the upwind-1 finite difference, wavelet-collocation, and high-resolution methods are evaluated by quantitative comparison with the analytical solution for a range of Peclet numbers. After that, the advantages of the wavelet-based approaches and high-resolution methods are further demonstrated through application to a dynamic SMB model for an enantiomer separation process. This research has revealed that for a PDE system with a low Peclet number, all existing numerical methods work well, but the upwind finite difference method consumes the most time for the same degree of accuracy of the numerical solution. The high-resolution method provides an accurate numerical solution for a PDE system with a medium Peclet number. The wavelet collocation method is capable of capturing steep changes in the solution, and thus can be used for solving PDE models with high singularity. For the complex SMB system models under consideration, both the wavelet-based approaches and the high-resolution methods are good candidates in terms of computational demand and prediction accuracy on the steep front. The high-resolution methods have shown better stability in reaching steady state in the specific case studied in this chapter.
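For readers unfamiliar with the baseline method compared above, the following Python sketch applies the upwind-1 (first-order upwind) finite difference scheme to a bare advection equation u_t + v u_x = 0, a simplified stand-in for the convective part of the Transport-Dispersive-Equilibrium model; the grid size, velocity, CFL number and step-shaped inlet profile are illustrative assumptions. The smearing of the steep front it produces is exactly the numerical diffusion that motivates the wavelet and high-resolution alternatives.

```python
# Minimal sketch of the first-order upwind (upwind-1) finite difference
# scheme for 1-D advection u_t + v u_x = 0. All parameters are
# illustrative, not the chapter's actual chromatography model.
import numpy as np

nx, L, v = 200, 1.0, 1.0
dx = L / nx
dt = 0.8 * dx / v            # CFL number 0.8 for stability
u = np.where(np.linspace(0, L, nx) < 0.1, 1.0, 0.0)   # steep front

for _ in range(100):
    # upwind-1: backward difference in x because v > 0
    u[1:] -= v * dt / dx * (u[1:] - u[:-1])
    u[0] = 1.0               # inlet boundary condition

# the front has advected to the right but is smeared by numerical
# diffusion, which higher-resolution or wavelet methods avoid
print(u[::20].round(2))
```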

Relevance:

30.00%

Publisher:

Abstract:

An often neglected but well recognised aspect of successful engineering asset management is the achievement of co-operation and collaboration between the various occupational, functional and hierarchical levels present within complex technical environments. Engineering and technical contexts have been well documented for the presence of highly cohesive groups based around functional or role orientations. However, while highly cohesive groups are potentially advantageous, they are also often correlated with the emergence of knowledge and information silos based around those same functional or occupational clusters. Improved collaboration and co-operation between groups has been demonstrated to produce a number of positive outcomes at the individual, group and organisational levels. Example outcomes include an increased capacity for problem solving, improved responsiveness and adaptation to organisational crises, higher morale and an increased ability to leverage workforce capability. However, an essential challenge for organisations wishing to overcome informational silos is to implement mechanisms that facilitate, encourage and sustain interactions between otherwise disconnected groups. This paper reviews the ability of Web 2.0 technologies and mobile computing devices to facilitate and encourage knowledge sharing between “silo’d” groups. Commonly available tools such as Facebook, Twitter, blogs, wikis and others are reviewed in relation to their applicability, functionality and ease of use by engineering and technical personnel. The paper also documents three case examples of engineering organisations that have successfully employed Web 2.0 to achieve superior knowledge management. With a number of clear recommendations, the paper is an essential starting point for any organisation looking at the use of new-generation technologies for achieving the significant outcomes associated with knowledge transfer.

Relevance:

30.00%

Publisher:

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as undertaking analyses that might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models for the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for calculating credible intervals for the point estimates of Bayesian networks; to develop a model structure that incorporates all the experimental uncertainty associated with various constants, thereby allowing the calculation of credible intervals that more faithfully reflect the uncertainty in a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers: two have been published, two have been submitted, and the final paper is still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and unstructured variances to differ at each depth. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that for large datasets the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
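The layered CAR structure and the block-updating Gibbs sampler lend themselves to a compact illustration. The following Python sketch draws one block update of the spatial effects for a single depth layer under a Gaussian CAR prior; the adjacency matrix, precision parameters, and the simple y = phi + noise data model are illustrative assumptions, not the thesis's actual model (which was fitted in WinBUGS and pyMCMC).

```python
# Minimal sketch of one block update in a Gibbs sampler for a Gaussian
# CAR "layered" model: the spatial effects phi are updated jointly (as a
# block) within one depth layer, with neighbours defined only inside
# that layer. All dimensions and parameter values are illustrative.
import numpy as np

def car_precision(adj, tau, rho=0.95):
    # proper-CAR-style precision Q = tau * (D - rho * W) for one layer
    D = np.diag(adj.sum(axis=1))
    return tau * (D - rho * adj)

def block_update_layer(y, adj, tau_phi, tau_obs, rng):
    # under y ~ N(phi, 1/tau_obs), the full conditional of phi is Gaussian:
    #   precision P = Q + tau_obs * I,  mean m = P^{-1} (tau_obs * y)
    Q = car_precision(adj, tau_phi)
    P = Q + tau_obs * np.eye(len(y))
    chol = np.linalg.cholesky(P)
    m = np.linalg.solve(P, tau_obs * y)
    z = rng.standard_normal(len(y))
    # draw phi ~ N(m, P^{-1}) via the Cholesky factor of the precision
    return m + np.linalg.solve(chol.T, z)

rng = np.random.default_rng(0)
# toy layer: 4 sites in a row, neighbours within the same depth layer only
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
y = np.array([0.2, 0.5, 0.6, 0.9])
print(block_update_layer(y, adj, tau_phi=1.0, tau_obs=4.0, rng=rng))
```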

Relevance:

30.00%

Publisher:

Abstract:

The world’s increasing complexity, competitiveness, interconnectivity, and dependence on technology generate new challenges for nations and individuals that cannot be met by continuing education as usual (Katehi, Pearson, & Feder, 2009). With the proliferation of complex systems have come new technologies for communication, collaboration, and conceptualisation. These technologies have led to significant changes in the forms of mathematical and scientific thinking that are required beyond the classroom. Modelling, in its various forms, can develop and broaden children’s mathematical and scientific thinking beyond the standard curriculum. This paper first considers future competencies in the mathematical sciences within an increasingly complex world. Next, consideration is given to interdisciplinary problem solving and models and modelling. Examples of complex, interdisciplinary modelling activities across grades are presented, with data modelling in 1st grade, model-eliciting in 4th grade, and engineering-based modelling in 7th-9th grades.

Relevance:

30.00%

Publisher:

Abstract:

Complex networks have been studied extensively owing to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are the scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties, and for relationships among these properties, is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of complex network properties can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool for computing the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distances between nodes are larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks for five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding is useful for our treatment of the networks in the third part of the thesis.
The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since each consists of a geometrical figure that repeats on an ever-reduced scale, and fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous: there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated by computing the generalized fractal dimensions of several theoretical networks, namely scale-free, small-world and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis thus provides a potentially useful tool for gene clustering and identification.
The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of the network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method for constructing networks from time series: we define nodes as vectors of a certain length extracted from the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by the Hurst exponent. We verify the validity of the method by showing that time series with stronger correlation, and hence larger Hurst exponent, tend to have smaller fractal dimension and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently, and confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that a large percentage of nodes must be destroyed before the network collapses into isolated parts, whereas for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
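Among the constructions above, the horizontal visibility graph is simple enough to sketch compactly. The Python below implements the standard HVG criterion (two time points are linked when every intermediate value lies strictly below both); the short toy series is an illustrative assumption, whereas the thesis applies the construction to fractional Brownian motion paths, binomial cascades and bacterial genomes.

```python
# Minimal sketch of horizontal visibility graph (HVG) construction:
# nodes are time points, and i < j are linked when all intermediate
# values are strictly smaller than both x[i] and x[j].
import numpy as np

def horizontal_visibility_graph(x):
    n = len(x)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            between = x[i + 1 : j]
            if len(between) == 0 or between.max() < min(x[i], x[j]):
                edges.append((i, j))
            # once a value >= x[i] blocks the view, no later j is visible
            if x[j] >= x[i]:
                break
    return edges

x = np.array([0.8, 0.3, 0.6, 0.2, 0.9, 0.5])   # toy series
print(horizontal_visibility_graph(x))
```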

Relevance:

30.00%

Publisher:

Abstract:

Public dialogue regarding the high concentration of drug use and crime in inner city locations is frequently legitimised through visibility of drug-using populations and a perception of high crime rates. The public space known as the Brunswick Street Mall (Valley mall), located in the inner city Brisbane suburb of Fortitude Valley, has long provided the focal point for discussions regarding the problem of illicit drug use and antisocial behaviour in Brisbane. During the late 1990s a range of stakeholders in Fortitude Valley became mobilised to tackle crime and illicit drugs. In particular they wanted to dismantle popular perceptions of the area as representing the dark and unsafe side of Brisbane. The aim of this campaign was to instil a sense of safety in the area and dislodge Fortitude Valley from its reputation as a ‘symbolic location of danger’. This thesis is a case study about an urban site that became contested by the diverse aims of a range of stakeholders who were invested in an urban renewal program and community safety project. This case study makes visible a number of actors that were lured from their existing roles in an indeterminable number of heterogeneous networks in order to create a community safety network. The following analysis of the community safety network emphasises some specific actors: history, ideas, technologies, materialities and displacements. The case study relies on the work of Foucault, Latour, Callon and Law to draw out the rationalities, background contingencies and the attempts to impose order and translate a number of entities into the community safety project in Fortitude Valley. The results of this research show that the community safety project is a case of ontological politics. Specifically the data indicates that both the (reality) problem of safety and the (knowledge) solution to safety were created simultaneously. This thesis explores the idea that while violence continues to occur in the Valley, evidence that community safety got done is located through mapping its displacement and eventual disappearance. As such, this thesis argues that community safety is a ‘collateral reality’.

Relevance:

30.00%

Publisher:

Abstract:

A composite SaaS (Software as a Service) is software comprising several software components and data components. The composite SaaS placement problem is to determine where each of the components should be deployed in a cloud computing environment such that the performance of the composite SaaS is optimal. From the computational point of view, the composite SaaS placement problem is a large-scale combinatorial optimization problem. An Iterative Cooperative Co-evolutionary Genetic Algorithm (ICCGA) was previously proposed for it; the ICCGA finds solutions of reasonable quality, but its computation time is noticeably long. Aiming to improve the computation time, we propose an unsynchronized Parallel Cooperative Co-evolutionary Genetic Algorithm (PCCGA) in this paper. Experimental results show that the PCCGA not only requires less computation time but also generates better-quality solutions than the ICCGA.
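A minimal sketch of the cooperative co-evolutionary idea behind the ICCGA/PCCGA follows, assuming an illustrative cost function (favouring co-located components) and two sub-problems; the actual PCCGA evolves the subpopulations unsynchronized in parallel processes, which this serial round-robin version does not attempt.

```python
# Minimal sketch of cooperative co-evolution for placement: the
# placement vector is split into sub-problems, each with its own
# subpopulation; individuals are evaluated jointly with the current best
# representatives of the other sub-problem. All parameters are illustrative.
import random

N_NODES = 10                 # candidate cloud servers
SIZES = [4, 3]               # components handled by each subpopulation
POP, GENS = 20, 100

def cost(placement):
    # illustrative stand-in objective: fewer distinct servers is cheaper
    return len(set(placement))

def step(pop, evaluate):
    # one GA generation: truncation selection, one-point crossover, mutation
    pop.sort(key=evaluate)
    elite = pop[: POP // 2]
    children = []
    while len(elite) + len(children) < POP:
        p1, p2 = random.sample(elite, 2)
        cut = random.randrange(1, len(p1))
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:
            child[random.randrange(len(child))] = random.randrange(N_NODES)
        children.append(child)
    return elite + children

pops = [[[random.randrange(N_NODES) for _ in range(s)] for _ in range(POP)]
        for s in SIZES]
best = [p[0] for p in pops]  # current representative of each sub-problem

for _ in range(GENS):
    for g in range(len(pops)):   # round-robin; the PCCGA runs these in parallel
        other = best[1 - g]
        # evaluate an individual joined with the other sub-problem's representative
        evaluate = (lambda ind, g=g, other=other:
                    cost(ind + other) if g == 0 else cost(other + ind))
        pops[g] = step(pops[g], evaluate)
        best[g] = pops[g][0]

print("best placement:", best[0] + best[1], "cost:", cost(best[0] + best[1]))
```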

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to analyse the complex nature of practice within artistic research. This is done by considering practice through the lens of Bourdieu’s conceptualisation of practice. The focus of the paper is on developing an understanding of practice-led approaches to research and how these are framed by what Coessens et al. (2009) call the artistic turn in research. The paper begins with a brief introduction to the nature of practice and then goes on to discuss the broader field of artistic research, describing the environment that has shaped its evolution and foregrounding several of its key dispositions. The paper aims not simply to describe existing methodology but to rethink what is meant by artistic research and practice-led strategies.

Relevance:

30.00%

Publisher:

Abstract:

Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide before deployment how to distribute key pairs to sensor nodes. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighboring nodes have a key in common in their key-chains, or there is a path between them, called a key-path, on which each pair of neighboring nodes has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key, directly or through a path, with high probability. The size of the key-path is the key factor for the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools.
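To make the block-design idea concrete, the following Python sketch builds key-chains from the lines of the affine plane AG(2, q), a classical combinatorial design in which any two blocks from different parallel classes share exactly one point (key). The parameter q and the mapping of lines to nodes are illustrative assumptions; the paper's specific designs may differ.

```python
# Minimal sketch of block-design key pre-distribution: key identifiers
# are the q*q points of AG(2, q), and each node's key-chain is one line.
# Two chains with different slopes always share exactly one key, so the
# corresponding nodes can establish a session key directly.
q = 5                                    # prime, so mod-q arithmetic is a field
key_pool = [(x, y) for x in range(q) for y in range(q)]   # q*q key IDs

def key_chain(slope, intercept):
    # one line y = slope*x + intercept (mod q) = one block = one key-chain
    return {(x, (slope * x + intercept) % q) for x in range(q)}

def vertical_chain(x0):
    # the remaining parallel class: vertical lines x = x0
    return {(x0, y) for y in range(q)}

chains = [key_chain(a, b) for a in range(q) for b in range(q)]
chains += [vertical_chain(x0) for x0 in range(q)]

# chains[0] has slope 0, chains[q] has slope 1: different parallel classes,
# hence exactly one shared key
assert len(chains[0] & chains[q]) == 1
print(len(chains), "key-chains of size", q, "from a pool of", len(key_pool), "keys")
```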

Relevance:

30.00%

Publisher:

Abstract:

Efficient state asset management is crucial for government departments that rely on the operation of their state assets to fulfil their public functions, including public service provision. These assets may be expensive, extensive and/or complex, and can have a major impact on the ability of governments to perform their functions over extended periods. Governments around the world have increasingly recognised the importance of efficient state asset management laws, policies and practices, as exemplified by the surge in state asset management reform. This phenomenon is evident in Indonesia, in particular through the establishment in 2006 of the Directorate General of State Assets, which was appointed the ultimate state asset manager of the Republic of Indonesia and the proprietor of state asset management reform. The Directorate General of State Assets has also pledged its adherence to good governance principles within its reformed state asset management laws and policies. However, the degree to which good governance principles are conceptualised is unknown, raising the question of how, and to what extent, good governance principles are evident within Indonesia's reformed state asset management laws and policies. This study seeks to understand the extent to which good governance principles are conceptualised and understood within reformed state asset management policies in Indonesia (as a case study), and to identify the variables that play a role in the implementation of that reform. Although good governance improvement has been a central tenet of the Indonesian government's agenda, and state asset management reform has risen in priority following findings of neglect and unfavourable audit results, there is ambiguity regarding the extent to which good governance is conceptualised within the reform, how and whether this relationship is understood by state asset managers (i.e. government officials), and what other variables play a supporting and/or impeding role in the reform.
Using empirical data from a sample of four Indonesian regional governments and 70 interviews, the study found discrepancies in which good governance principles are conceptualised, the level at which they are conceptualised, the stage of state asset management practice at which they are conceptualised, and the level at which they are understood by state asset managers (i.e. government officials). Human resource capacity and capability, the notion of 'needing more time', low legality, the infancy of the reform, and a dysfunctional sense of stewardship were identified as specific variables impeding state asset management reform, whilst the decentralisation and regional autonomy regime, political history, and culture play a consistent undercurrent role in good-governance-related reforms within Indonesia. This study offers insights to Indonesian policy makers interested in ensuring the conceptualisation and full implementation of good governance in all areas of governing, particularly within state asset management practices. Most importantly, this study identifies an asymmetry in good governance understanding, perspectives, and assumptions between policy makers (i.e. high-level government officials) and policy implementers (i.e. low-level government officials), to be taken into account in future policy development and writing. As such, this study suggests the need for a modified perspective on, and approach to, good governance conceptualisation and implementation strategies: one that acknowledges and incorporates a nation's unique characteristics and no longer denies the double-edged sword of simplified assumptions about governance.

Relevance:

30.00%

Publisher:

Abstract:

Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide before deployment how to distribute key pairs to sensor nodes. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighbouring nodes have a key in common in their key-chains, or there is a path between them, called a key-path, on which each pair of neighbouring nodes has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key, directly or through a path, with high probability. The size of the key-path is the key factor for the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools. Comparison to probabilistic schemes shows that our combinatorial approach produces better connectivity with smaller key-chain sizes.

Relevance:

30.00%

Publisher:

Abstract:

A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. Damage detection by these methods is in effect the result of updating the analytical structural model to bring it in line with the current physical model. The feasibility of these approaches has been proven, but most verification has been done on simple structures such as beams or plates. In application to a complex structure, such as a steel truss bridge, a traditional optimization process consumes massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome the problem. Unlike the tedious convergence of a conventional damage optimization process, in each layer the proposed algorithm divides the GA’s population into groups with a smaller number of damage candidates; the converged population in each group then evolves as an initial population of the next layer, where the groups merge into larger groups. Because parallel computation can be implemented in a damage detection process featuring ML-GA, the optimization performance and computational efficiency can be enhanced. To assess the proposed algorithm, the modal strain energy correlation (MSEC) has been considered as the objective function. Several damage scenarios of a complex steel truss bridge’s finite element model have been employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA only converges to a local solution.
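A minimal sketch of the layered population scheme follows, with an illustrative quadratic objective standing in for the MSEC index and plain subpopulations standing in for the paper's groups of reduced damage candidates; each layer converges independently (and could run in parallel) before merging to seed the next.

```python
# Minimal sketch of the multi-layer GA idea: small groups converge
# independently, then converged groups merge to seed larger groups in
# the next layer. The objective is an illustrative stand-in for the
# MSEC-based function, with an assumed "true" damage pattern.
import random

N_ELEMENTS = 8    # damage candidates (per-element stiffness loss in [0, 0.5])

def objective(damage):
    # stand-in objective (lower is better); assumes 30% damage at
    # element 2 and 10% at element 5 as the "true" state
    target = [0.0] * N_ELEMENTS
    target[2], target[5] = 0.3, 0.1
    return sum((d - t) ** 2 for d, t in zip(damage, target))

def ga_step(pop):
    pop.sort(key=objective)
    elite = pop[: len(pop) // 2]
    children = []
    while len(elite) + len(children) < len(pop):
        p1, p2 = random.sample(elite, 2)
        cut = random.randrange(1, N_ELEMENTS)
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:
            child[random.randrange(N_ELEMENTS)] = random.random() * 0.5
        children.append(child)
    return elite + children

def run_ga(pop, gens):
    for _ in range(gens):
        pop = ga_step(pop)
    return sorted(pop, key=objective)

# layer 1: four small groups converge independently (parallelizable)
groups = [[[random.random() * 0.5 for _ in range(N_ELEMENTS)]
           for _ in range(10)] for _ in range(4)]
groups = [run_ga(g, 30) for g in groups]

# layer 2: merge pairs of converged groups and keep evolving
merged = [run_ga(groups[0] + groups[1], 30), run_ga(groups[2] + groups[3], 30)]

# final layer: one population seeded by all survivors
final = run_ga(merged[0] + merged[1], 40)
print("identified damage:", [round(x, 2) for x in final[0]])
```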

Relevance:

30.00%

Publisher:

Abstract:

This paper demonstrates, following Vygotsky, that language and tool use have a critical role in the collaborative problem-solving behaviour of school-age children. It reports original ethnographic classroom research examining the convergence of speech and practical activity in children’s collaborative problem solving with robotics programming tasks. The researchers analysed children’s interactions during a series of problem-solving experiments in which Lego Mindstorms toolsets were used by teachers to create robotics design challenges for 24 students in a Year 4 Australian classroom (students aged 8.5–9.5 years). The design challenges were incrementally difficult, beginning with basic programming of straight-line movement and progressing to more complex challenges in which the robots were programmed as pulleys, using string and recycled materials, to raise Lego figures from conduit pipes. Data collection involved micro-genetic analysis of students’ speech interactions with tools, peers, and other experts, together with teacher interviews and student focus group data. Coding the repeated patterns in the transcripts, the authors outline the structure of the children’s social speech in joint problem solving, demonstrating the patterns of speech and interaction that play an important role in the socialisation of the school-age child’s practical intellect.