287 results for heterogeneous UAVs
Abstract:
The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision over how much of their income to report to the tax authorities, given a certain institutional environment represented by parameters such as the probability of detection and the penalties imposed if the agent is caught. While this basic framework yields important insights into tax compliance behaviour, it has a critical limitation: it predicts a level of compliance significantly below what is observed in the data. This thesis revisits the original framework with a view to addressing this issue and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach involves building a macroeconomic, dynamic equilibrium model through a step-wise model-building procedure, starting with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations incorporates the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation allows agents to first decide whether to evade taxes at all; only if they decide to evade do they then choose the extent of income or wealth to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and we demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across states of nature. Specifically, since deciding to evade affects the agent's ability to smooth consumption by creating two states of nature in which the agent is ‘caught’ or ‘not caught’, the agent's utility under certainty, when choosing not to evade, may exceed the expected utility obtained from evading. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low tax rate. The final steps in the model-building procedure graft the two-period models with a political economy choice onto a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model's ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking: there is a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low tax rates encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Further implications of the models concern whether variations in the level of inequality, and in parameters such as the probability of detection and the penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate at a given level of inequality is conditional on whether the extent of evasion in the economy is large or small. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
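As a point of reference for the ‘evade or not’ mechanism described above, the following is a minimal numerical sketch of ours, not code from the thesis; the CRRA utility and all parameter values are illustrative assumptions.

```python
import numpy as np

def crra(c, gamma=2.0):
    """CRRA utility; gamma is the coefficient of relative risk aversion."""
    return np.log(c) if gamma == 1.0 else c ** (1 - gamma) / (1 - gamma)

def eu_evade(y, tau, x, p, s, gamma=2.0):
    """Expected utility from under-reporting a fraction x of income y.
    Evasion creates two states of nature: 'caught' and 'not caught'.
    tau: tax rate, p: detection probability, s: penalty rate on evaded tax."""
    c_free = y - tau * (1 - x) * y            # not caught: tax paid on reported income only
    c_caught = y - tau * y - s * tau * x * y  # caught: full tax plus proportional penalty
    return p * crra(c_caught, gamma) + (1 - p) * crra(c_free, gamma)

def u_honest(y, tau, gamma=2.0):
    """Utility under certainty when the agent chooses not to evade."""
    return crra((1 - tau) * y, gamma)

# The discrete choice: evade (at the best interior x) only if the maximised
# expected utility beats the certain utility of full compliance.
y, tau, p, s = 1.0, 0.3, 0.4, 2.0  # illustrative values only
best_eu = max(eu_evade(y, tau, x, p, s) for x in np.linspace(0.01, 0.99, 99))
print("evade" if best_eu > u_honest(y, tau) else "do not evade")
```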
Abstract:
Automated airborne collision-detection systems are a key enabling technology for facilitating the integration of unmanned aerial vehicles (UAVs) into the national airspace. These safety-critical systems must be sensitive enough to provide timely warnings of genuine airborne collision threats, but not so sensitive as to cause excessive false alarms. Hence, an accurate characterisation of detection and false-alarm sensitivity is essential for understanding performance trade-offs, and system designers can exploit this characterisation to help achieve a desired balance in system performance. In this paper we experimentally evaluate a sky-region, image-based, aircraft collision-detection system based on morphological and temporal processing techniques. (Note that the examined detection approaches are not suitable for detecting potential collision threats against a ground-clutter background.) A novel methodology for collecting realistic airborne collision-course target footage in both head-on and tail-chase engagement geometries is described. Under (hazy) blue sky conditions, our proposed system achieved detection ranges greater than 1540 m in 3 flight test cases with no false-alarm events in 14.14 hours of non-target data (under cloudy conditions, the system achieved detection ranges greater than 1170 m in 4 flight test cases with no false-alarm events in 6.63 hours of non-target data). Importantly, this paper is the first documented presentation of detection range versus false-alarm curves generated from airborne target and non-target image data.
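The paper itself reports experimental results rather than code; as a rough illustration of the kind of morphological and temporal processing it refers to, here is a minimal sketch of ours of a close-minus-open (CMO) filter with exponential temporal accumulation, using OpenCV. The kernel size, decay factor and thresholding strategy are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def close_minus_open(gray, ksize=5):
    """Close-minus-open (CMO) morphological filter: highlights small,
    dim point targets against a relatively smooth sky background."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    return cv2.subtract(closed, opened)

def temporal_accumulate(frames, alpha=0.8):
    """Exponentially weighted accumulation of CMO responses over time;
    persistent point targets reinforce while transient noise decays."""
    acc = np.zeros_like(frames[0], dtype=np.float32)
    for f in frames:
        resp = close_minus_open(f).astype(np.float32)
        acc = alpha * acc + (1 - alpha) * resp
    return acc

# A detection would then be declared wherever `acc` exceeds a tuned
# threshold; raising that threshold trades detection range against
# false-alarm rate, which is the trade-off the paper characterises.
```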
Abstract:
BACKGROUND The engineering profession in Australia has failed to attract young women for the last decade or so, despite all the effort that has gone into promoting engineering as a preferred career choice for girls. It is a missed opportunity for the profession to flourish as a heterogeneous team. Many traditional initiatives and programs have failed to make much impact, or at best have yielded incremental improvement, in attracting and retaining more women in the profession. The reasons why girls and young women in most parts of the world show little interest in engineering haven't changed despite all the efforts to address them; the issue examined in this paper concerns perceptions of engineering in the community and the confidence to pursue it. This gender imbalance is detrimental to the engineering profession, and hence an action-based intervention strategy was devised by the Women in Engineering Qld Chapter of Engineers Australia in 2012 to change the perceptions of school girls by redesigning the engagement strategy and key messages. As a result, “Power of Engineering Inc” (PoE) was established as a not-for-profit organisation and is a collaborative effort between government, schools, universities, and industry. This paper examines a case study in changing the perceptions of year 9 and 10 school girls towards an engineering career.
PURPOSE To evaluate and determine the effectiveness of an intervention in changing the perceptions of year 9 and 10 school girls about engineering career options; specifically, “What were their perceptions of engineering before today, and have those perceptions changed?”
DESIGN/METHOD The inaugural Power of Engineering (PoE) event was held on International Women’s Day, Thursday 8 March 2012, and was attended by 131 high school female students (years 9 and 10) and their teachers. The key message of the day was “engineering gives you the power to change the world”. A questionnaire was administered to the participating students, collecting both quantitative and qualitative data. The survey instrument has not been validated.
RESULTS The key to the success of the event was the collaboration between all participants involved and the connection created between government, schools, universities and industry. Of the returned surveys (109 of 131), 91% of girls would now consider a career in engineering, and 57% of those who had not considered engineering before the day would now consider one. The data revealed significant numbers of negative and varying perceptions about engineering careers prior to the intervention.
CONCLUSIONS The evidence in this research suggests that the intervention assisted in changing the perceptions of year 9 and 10 female school students towards engineering as a career option. Whether this intervention translates into actual career selection and study enrolment remains to be determined. That said, the evidence suggests there is a critical and urgent need for earlier interventions, prior to students selecting their subjects for years 11 and 12. This intervention could also play its part in increasing the overall pool of students engaged in STEM education.
Abstract:
This paper presents a novel evolutionary computation approach to three-dimensional path planning for unmanned aerial vehicles (UAVs) with tactical and kinematic constraints. A genetic algorithm (GA) is modified and extended for path planning. Two GAs are seeded at the initial and final positions with a common objective: to minimise the distance between them under the given UAV constraints. This is accomplished by the synchronous optimisation of subsequent control vectors. The proposed approach is called the synchronous genetic algorithm (SGA). The sequence of control vectors generated by the SGA constitutes a near-optimal path plan, and the resulting path exhibits no discontinuity when transitioning from curved to straight trajectories. Experiments show that the paths generated by the SGA are within 2% of the optimal solution. Such a path planner, implemented on a hardware accelerator such as a field-programmable gate array, can be used on board the UAV as a replanner, as well as in ground-station systems to assist high-precision planning and modelling of mission scenarios.
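No code accompanies the abstract; to make the two-population idea concrete, here is a minimal sketch of ours in which a forward path grown from the start and a backward path grown from the goal are jointly improved to close the gap between their tips. A simple hill climber stands in for the full GA machinery, and the control-vector parameterisation and all numbers are illustrative.

```python
import math
import random

def rollout(start, controls):
    """Integrate a sequence of (heading change, step length) control vectors."""
    x, y, h = start
    for dh, step in controls:
        h += dh
        x += step * math.cos(h)
        y += step * math.sin(h)
    return x, y

def gap(c_fwd, c_bwd, start, goal):
    """Distance between the tips of the forward and backward paths."""
    ax, ay = rollout(start, c_fwd)
    bx, by = rollout(goal, c_bwd)
    return math.hypot(ax - bx, ay - by)

def mutate(controls, max_turn=0.3):
    """Perturb one heading change, clamped to a kinematic turn limit."""
    i = random.randrange(len(controls))
    dh, step = controls[i]
    new = list(controls)
    new[i] = (max(-max_turn, min(max_turn, dh + random.gauss(0, 0.1))), step)
    return new

start, goal = (0.0, 0.0, 0.0), (100.0, 50.0, math.pi)
fwd = [(0.0, 10.0)] * 8
bwd = [(0.0, 10.0)] * 8
for _ in range(2000):  # synchronous improvement of both control sequences
    cand_f, cand_b = mutate(fwd), mutate(bwd)
    if gap(cand_f, cand_b, start, goal) < gap(fwd, bwd, start, goal):
        fwd, bwd = cand_f, cand_b
print(round(gap(fwd, bwd, start, goal), 2))
```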
Abstract:
Many methods currently exist for deformable face fitting. A drawback of nearly all of these approaches is that (i) the landmark positions they produce are noisy, and (ii) the noise is biased across frames (i.e. the misalignment is toward common directions across all frames). In this paper we propose a grouped $\mathcal{L}1$-norm anchored method for simultaneously aligning an ensemble of deformable face images of the same subject, given noisy heterogeneous landmark estimates. Substantial improvement and refinement in alignment performance is obtained using very weak initialisation as "anchors".
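In standard notation (ours, not necessarily the paper's), a grouped $\mathcal{L}1$ penalty is a sum of $\ell_2$ norms over groups, so an ensemble-alignment objective of the kind sketched above might take the form

$$ \min_{\{p_j\},\, E} \; \sum_{j=1}^{F} \big\| \ell_j(p_j) - \hat{\ell}_j - e_j \big\|_2^2 \;+\; \lambda \sum_{k=1}^{K} \big\| [\, e_{1k}, \dots, e_{Fk} \,] \big\|_2 , $$

where $p_j$ are the alignment parameters of frame $j$, $\hat{\ell}_j$ are the noisy landmark estimates, and grouping the residuals $e_{jk}$ by landmark $k$ across all $F$ frames lets a few landmarks be consistently unreliable while the remainder serve as anchors.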
Abstract:
Several major human pathogens, including the filoviruses, paramyxoviruses, and rhabdoviruses, package their single-stranded RNA genomes within helical nucleocapsids, which bud through the plasma membrane of the infected cell to release enveloped virions. The virions are often heterogeneous in shape, which makes it difficult to study their structure and assembly mechanisms. We have applied cryo-electron tomography and sub-tomogram averaging methods to derive structures of Marburg virus, a highly pathogenic filovirus, both after release and during assembly within infected cells. The data demonstrate the potential of cryo-electron tomography methods to derive detailed structural information for intermediate steps in biological pathways within intact cells. We describe the location and arrangement of the viral proteins within the virion. We show that the N-terminal domain of the nucleoprotein contains the minimal assembly determinants for a helical nucleocapsid with a variable number of proteins per turn. Lobes protruding from alternate interfaces between each nucleoprotein are formed by the C-terminal domain of the nucleoprotein, together with viral proteins VP24 and VP35. Each nucleoprotein packages six RNA bases. The nucleocapsid interacts in an unusual, flexible "Velcro-like" manner with the viral matrix protein VP40. Determination of the structures of assembly intermediates showed that the nucleocapsid has a defined orientation during transport and budding. Together, the data show striking architectural homology between the nucleocapsid helices of rhabdoviruses and filoviruses, but unexpected, fundamental differences in the mechanisms by which the nucleocapsids are assembled together with matrix proteins and initiate membrane envelopment to release infectious virions, suggesting that these viruses have evolved different solutions to these conserved assembly steps.
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the "gold standard" for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that can be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms. Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
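As an illustration of the recombination step described above (a sketch of ours, not MCDTK's actual Java implementation), per-beam Monte Carlo dose grids can be scaled by the planned monitor units and a commissioning calibration factor, then summed into a single 3D distribution. Array shapes and calibration values below are illustrative.

```python
import numpy as np

def combine_beam_doses(beam_doses, monitor_units, gy_per_mu):
    """Combine normalised per-beam Monte Carlo dose grids into one plan dose.
    beam_doses: list of 3D numpy arrays (dose per simulated particle, say).
    monitor_units: MU for each beam from the exported treatment plan.
    gy_per_mu: per-beam calibration factors from commissioning measurements."""
    total = np.zeros_like(beam_doses[0])
    for dose, mu, cal in zip(beam_doses, monitor_units, gy_per_mu):
        total += dose * mu * cal
    return total

# e.g. three beams on a 64^3 dose grid
beams = [np.random.rand(64, 64, 64) for _ in range(3)]
plan_mu = [120.0, 95.0, 110.0]
cal = [0.01, 0.01, 0.01]
dose_3d = combine_beam_doses(beams, plan_mu, cal)  # 3D distribution for plan verification
```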
Abstract:
Many older people have difficulties using modern consumer products due to increased product complexity, both in terms of functionality and interface design. Previous research has shown that older people have more difficulty than younger people in using complex devices intuitively. Furthermore, increased life expectancy and a falling birth rate have been catalysts for changes in world demographics over the past two decades. This trend also suggests a proportional increase of older people in the workforce. This realisation has led to research on the effective use of technology by older populations in an effort to engage them more productively and to assist them in leading independent lives. Ironically, not enough attention has been paid to the development of interaction design strategies that would actually enable older users to better exploit new technologies. Previous research suggests that if products are designed to reflect people's prior knowledge, they will appear intuitive to use. Since intuitive interfaces utilise the domain-specific prior knowledge of users, they require minimal learning for effective interaction. However, older people are very diverse in their capabilities and domain-specific prior knowledge. In addition, ageing also slows down the process of acquiring new knowledge. Keeping these suggestions and limitations in view, the aim of this study was to investigate possible approaches to developing interfaces that facilitate intuitive use by older people. In this quest, two experiments were conducted that systematically investigated redundancy (the use of both text and icons) in interface design, the complexity of interface structure (nested versus flat), and personal user factors such as cognitive abilities, perceived self-efficacy and technology anxiety. All of these factors could interfere with intuitive use. The results from the first experiment suggest that, contrary to what was hypothesised, older people (65+ years) completed the tasks faster on the text-only interface design than on the redundant interface design. The outcome of the second experiment showed that, as expected, older people took more time on a nested interface; however, they did not make significantly more errors compared with younger age groups. Contrary to what was expected, older age groups also did better under anxious conditions. The findings of this study also suggest that older age groups are more heterogeneous in their capabilities, and that their intuitive use of contemporary technological devices is mediated more by domain-specific technology prior knowledge and by their cognitive abilities than by chronological age. This makes it extremely difficult to develop product interfaces that are entirely intuitive to use. However, keeping the cognitive limitations of older people in view when interfaces are developed, and using simple text-based interfaces with a flat interface structure, would help them intuitively learn and successfully use complex technological products during early encounters with a product. These findings indicate that it might be more pragmatic to design interfaces for intuitive learning rather than for intuitive use. Based on this research and the existing literature, a model for adaptable interface design is proposed as a strategy for developing intuitively learnable product interfaces. An adaptable interface can initially use a simple text-only interface to help older users learn and successfully use the new system. Over time, this can be progressively changed to a symbol-based nested interface for more efficient and intuitive use.
Abstract:
This thesis aims to contribute to a better understanding of how serious games/games for change function as learning frameworks for transformative learning in an educational setting. The study illustrates how the meaning-making processes and learning with and through computer gameplay are highly contingent and significantly influenced by the uncertainties of the situational context. The study focuses on SCAPE, a simulation game that addresses urban planning and sustainability. SCAPE is based on the real-world scenario of Kelvin Grove Urban Village, an inner-city redevelopment area in Brisbane, Queensland, Australia. The game is embedded within an educational program, and I thus account for the various gameplay experiences of different school classes participating in this program. The networks emerging from the interactions between students/players, educators, facilitators, the technology, the researcher, and the setting result in unanticipated, controversial, and sometimes unintended gameplay experiences and outcomes. To unpack play, transformative learning and games, this study adopts an ecological approach that considers the magic circle of gameplay in its wider context. Using Actor-Network Theory as the ontological lens for inquiry, the methods of investigation include an extensive literature review, ethnographic participant observation of SCAPE, student and teacher questionnaires, and interviews with the designers and facilitators of SCAPE. Altogether, these methods address my research aim to better understand how the heterogeneous actors engage in the relationships in and around gameplay, and illustrate how their conflicting understandings enable, shape or constrain the (transformative) learning experience. To disentangle these complexities, my focus continuously shifts between the following modes of inquiry:
- To describe and analyse the game as a designed artefact.
- To examine the gameplay experiences of players/students and account for how these experiences are constituted in the relationships of the network.
- To trace the meaning-making processes emerging from the various relations of players/students, facilitators, teachers, designers, technology, researcher, and setting, and to consider how the boundaries of the respective ecology are configured and negotiated.
- To draw out the implications for the wider research area of game-based learning by using the simulation game SCAPE as an example of introducing gameplay to educational settings.
Accounting in detail for five school classes, these accounts represent, each in its own right, distinct and sometimes controversial forms of engagement in gameplay. The practices and negotiations of all the assembled human and non-human actors highlight the contingent nature of gameplay and learning. In their sum, they offer distinct but by no means exhaustive examples of the various relationships that emerge from different assemblages of human and non-human actors. This thesis, hence, illustrates that game-based learning in an educational setting is accompanied by considerable unpredictability and uncertainty.
As ordinary life spills and leaks into gameplay experiences, group dynamics and the negotiations of technology, I argue that overly deterministic assertions of the game's intention, as well as a too narrowly defined understanding of the transformative learning outcome, can constrain our inquiries and hinder efforts to further elucidate and understand the evolving uncertainties around game-based learning. Instead, this thesis posits that playing and transformative learning are relational effects of the respective ecology, where all actors are networked in their (partial) enrolment in the process of translation. This study thus attempts to foreground the rich opportunities for exploring how game-based learning is assembled as a network of practices.
Abstract:
Cu/Ni/W nanolayered composites with individual layer thicknesses ranging from 5 nm to 300 nm were prepared by magnetron sputtering. The microstructure and strength of the nanolayered composites were investigated using nanoindentation combined with theoretical analysis. Microstructure characterization revealed that the Cu/Ni/W composite consists of a typical coherent Cu/Ni interface and incoherent Cu/W and Ni/W interfaces. Cu/Ni/W composites have an ultrahigh strength and a large strengthening ability compared with bi-constituent Cu–X (X = Ni, W, Au, Ag, Cr, Nb, etc.) nanolayered composites. Summarizing the present results and those reported in the literature, we systematically analyze the origin of the ultrahigh strength and its length-scale dependence by taking into account the constituent layer properties, layer scales and heterogeneous layer/layer interface characteristics, including lattice and modulus mismatch as well as interface structure.
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high-quality wood products. Mathematically, however, modelling the drying of a wet porous material such as wood is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving systems of ordinary differential equations. These methods differ from popular Newton–Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step; rather, they require the computation of matrix–vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence of interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, the approximation of $\varphi(A)b$, where $\varphi(z) = (e^z - 1)/z$, $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117–131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators, not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton–Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to the Krylov subspace approximation of $\varphi(A)b$ are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552–1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations where typical dual-scale mechanisms occur.
In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as the average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit-cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
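For reference, here is a minimal sketch (ours, not from the thesis) of the exponential Euler method, computing $\varphi(A)b$ densely via the matrix exponential; the variable-stepsize Krylov machinery described above replaces this dense computation for large sparse systems.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1(A):
    """phi(A) = (e^A - I) A^{-1}, computed densely; fine for small systems.
    (Krylov subspace methods approximate phi(A)b directly for large sparse A.)"""
    n = A.shape[0]
    return solve(A, expm(A) - np.eye(n))

def exponential_euler(f, jac, y0, h, steps):
    """Exponential Euler: y_{n+1} = y_n + h * phi(h J(y_n)) f(y_n)."""
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        J = jac(y)
        y = y + h * phi1(h * J) @ f(y)
    return y

# Stiff linear test problem y' = A y, for which the scheme is exact.
A = np.array([[-100.0, 1.0], [0.0, -0.5]])
f = lambda y: A @ y
jac = lambda y: A
y_num = exponential_euler(f, jac, [1.0, 1.0], h=0.1, steps=10)
y_exact = expm(A * 1.0) @ np.array([1.0, 1.0])
print(np.allclose(y_num, y_exact))  # True, up to roundoff in phi1
```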
Abstract:
We applied small-angle neutron scattering (SANS) and ultra-small-angle neutron scattering (USANS) to monitor the evolution of CO2 adsorption in porous silica as a function of CO2 pressure and temperature in pores of different sizes. The range of pressures (0 < P < 345 bar) and temperatures (T = 18 °C, 35 °C and 60 °C) corresponded to subcritical, near-critical and supercritical conditions of the bulk fluid. We observed that the adsorption behavior of CO2 is fundamentally different in large and small pores, with sizes D > 100 Å and D < 30 Å, respectively. Scattering data from large pores indicate the formation of a dense adsorbed film of CO2 on the pore walls with a liquid-like density (ρCO2)ads ≈ 0.8 g/cm³. The adsorbed film coexists with unadsorbed fluid in the inner pore volume. The density of unadsorbed fluid in large pores is temperature and pressure dependent: it is initially lower than (ρCO2)ads and gradually approaches it with pressure. In small pores, compressed CO2 gas completely fills the pore volume. At the lowest pressures, of the order of 10 bar, and T = 18 °C, the fluid density in the smallest pores available in the matrix, with D ~ 10 Å, exceeds the bulk fluid density by a factor of ~8. As pressure increases, progressively larger pores become filled with the condensed CO2. Fluid densification is only observed in pores with sizes less than ~25–30 Å. As the density of the invading fluid reaches (ρCO2)bulk ~ 0.8 g/cm³, pores of all sizes become uniformly filled with CO2 and the confinement effects disappear. At higher densities the fluid in small pores appears to follow the equation of state of bulk CO2, although there is an indication that the fluid density in the inner volume of large pores may exceed the density of the adsorbed layer. The equivalent internal pressure (Pint) in the smallest pores exceeds the external pressure (Pext) by a factor of ~5 for both sub- and supercritical CO2. Pint gradually approaches Pext as D → 25–30 Å and is independent of temperature in the studied range of 18 °C ≤ T ≤ 60 °C. The obtained results demonstrate certain similarities as well as differences between the adsorption of subcritical and supercritical CO2 in disordered porous silica. High-pressure small-angle scattering experiments open new opportunities for in situ studies of fluid adsorption in porous media of interest for CO2 sequestration, energy storage, and heterogeneous catalysis.
Abstract:
This paper deals with causal effect estimation strategies in highly heterogeneous empirical settings such as entrepreneurship. We argue that the clearer use of modern tools developed for the estimation of causal effects, in combination with our analysis of the different sources of heterogeneity in entrepreneurship, can lead to entrepreneurship research with higher internal validity. We specifically draw support from counterfactual logic and modern research on estimation strategies for causal effects.
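For reference, the counterfactual logic referred to above is usually written in potential-outcomes notation (standard notation, not taken from the paper):

$$ \tau_{\mathrm{ATE}} = \mathbb{E}\big[Y_i(1) - Y_i(0)\big], \qquad \tau_{\mathrm{ATT}} = \mathbb{E}\big[Y_i(1) - Y_i(0) \mid D_i = 1\big], $$

where $Y_i(1)$ and $Y_i(0)$ are unit $i$'s potential outcomes under treatment and control and $D_i$ is treatment status; since only one potential outcome is ever observed per unit, heterogeneous treatment effects in settings like entrepreneurship make the choice of estimand and estimation strategy consequential.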
Abstract:
This paper is concerned with the optimal path planning and initialization interval of one or two UAVs in the presence of a constant wind. The method builds on previous results in the literature on the synchronization of UAVs along convex curves, path planning and sampling in 2D, and extends them to 3D. The method can be applied to observing gas/particle emissions inside a control volume during sampling loops. The flight pattern is composed of two phases: a start-up interval and a sampling interval represented by a semicircular path. The methods were tested on four complex model test cases in 2D and 3D, as well as one simulated real-world scenario in 2D and one in 3D.
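The abstract gives no implementation details; as a minimal sketch of the basic wind-triangle computation that any such planner must perform, the following (our construction, with illustrative numbers) commands the air-relative velocity needed to hold a semicircular ground track in a constant wind and flags points where the required airspeed exceeds the UAV's.

```python
import math

def heading_for_ground_track(vg_x, vg_y, wind_x, wind_y, airspeed):
    """Return the air-relative heading achieving ground velocity (vg_x, vg_y)
    in the given wind, or None if it would require more than `airspeed`.
    The commanded air-relative velocity is ground velocity minus wind."""
    ax, ay = vg_x - wind_x, vg_y - wind_y
    if math.hypot(ax, ay) > airspeed:
        return None
    return math.atan2(ay, ax)

# Sample a semicircular ground track at constant ground speed and check
# feasibility at several points along the loop.
vg, airspeed = 12.0, 15.0          # m/s, illustrative values
wind = (4.0, 0.0)                  # constant wind blowing toward +x
for i in range(7):
    theta = math.pi * i / 6                      # position angle on the semicircle
    tx, ty = -math.sin(theta), math.cos(theta)   # unit tangent to the circle
    hdg = heading_for_ground_track(vg * tx, vg * ty, *wind, airspeed)
    print(f"theta={theta:.2f} heading={'infeasible' if hdg is None else round(hdg, 2)}")
```

With these numbers the required airspeed peaks at ground speed plus wind speed (16 m/s), so part of the loop is flagged infeasible, which is exactly the kind of constraint the start-up and sampling intervals must respect.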
Abstract:
Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge, and presenting the information in a meaningful way is an additional one. This paper approaches the problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the CEE project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate by correlating event records, we demonstrate the use of two correlation methods: simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of results, which is required for the iterative re-querying process being undertaken. The final results of the process are output as nodes and edges suitable for presentation as a network graph.
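As a rough sketch of the "simple" correlation transform (our own minimal construction; the field names and window are illustrative, not the paper's mapping schema):

```python
from datetime import datetime, timedelta

def correlate(seed, events, fields=("src_ip",), window=timedelta(minutes=5)):
    """Return (nodes, edges): events matching the seed on the given
    mapping-schema fields and falling within +/- window of the seed's
    timestamp, in a form suitable for network-graph presentation."""
    nodes, edges = {seed["event_id"]: seed}, []
    for ev in events:
        if ev["event_id"] == seed["event_id"]:
            continue
        same_fields = all(ev.get(f) == seed.get(f) for f in fields)
        in_window = abs(ev["time"] - seed["time"]) <= window
        if same_fields and in_window:
            nodes[ev["event_id"]] = ev
            edges.append((seed["event_id"], ev["event_id"]))
    return nodes, edges

# Tiny normalised dataset: event 3 shares the source IP but falls
# outside the time window, so it is excluded from the result graph.
t0 = datetime(2024, 1, 1, 12, 0)
log = [
    {"event_id": 1, "time": t0, "src_ip": "10.0.0.5"},
    {"event_id": 2, "time": t0 + timedelta(minutes=2), "src_ip": "10.0.0.5"},
    {"event_id": 3, "time": t0 + timedelta(hours=2), "src_ip": "10.0.0.5"},
]
nodes, edges = correlate(log[0], log)
print(sorted(nodes), edges)
```

In an iterative re-querying process, each newly correlated event would in turn become a seed, with confidence values used to rank or prune the growing set of nodes and edges.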