926 results for Tensor
Abstract:
Obesity is a major challenge to human health worldwide. Little is known about the brain mechanisms that are associated with overeating and obesity in humans. In this project, multimodal neuroimaging techniques were utilized to study brain neurotransmission and anatomy in obesity. Bariatric surgery was used as an experimental method for assessing whether the possible differences between obese and non-obese individuals change following weight loss. This could indicate whether obesity-related altered neurotransmission and cerebral atrophy are recoverable or whether they represent stable individual characteristics. Morbidly obese subjects (BMI ≥ 35 kg/m²) and non-obese control subjects (mean BMI 23 kg/m²) were studied with positron emission tomography (PET) and magnetic resonance imaging (MRI). The PET studies focused on the dopaminergic and opioidergic systems, both of which are crucial in reward processing. Brain dopamine D2 receptor (D2R) availability was measured using [11C]raclopride and µ-opioid receptor (MOR) availability using [11C]carfentanil. In the MRI studies, voxel-based morphometry (VBM) of T1-weighted MRI images was used, coupled with diffusion tensor imaging (DTI). Obese subjects underwent bariatric surgery as their standard clinical treatment during the study. Preoperatively, morbidly obese subjects had significantly lower MOR availability but unaltered D2R availability in several brain regions involved in reward processing, including the striatum, insula, and thalamus. Moreover, obesity disrupted the interaction between the MOR and D2R systems in the ventral striatum. Bariatric surgery and the concomitant weight loss normalized MOR availability in the obese but did not influence D2R availability in any brain region. Morbidly obese subjects also had significantly lower grey and white matter densities globally in the brain, with more focal changes located in areas associated with inhibitory control, reward processing, and appetite. DTI also revealed signs of axonal damage in the obese in the corticospinal tracts and occipito-frontal fascicles. Surgery-induced weight loss resulted in global recovery of white matter density as well as more focal recovery of grey matter density among obese subjects. Altogether, these results show that the endogenous opioid system is fundamentally linked to obesity. Lowered MOR availability is likely a consequence of obesity and may mediate the maintenance of excessive energy intake. In addition, obesity has adverse effects on brain structure. Bariatric surgery, however, reverses MOR dysfunction and cerebral atrophy. Understanding the opioidergic contribution to overeating and obesity is critical for developing new psychological or pharmacological treatments for obesity. The actual molecular mechanisms behind the positive changes in structure and neurotransmitter function remain unclear and should be addressed in future research.
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users, in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of them focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive: the users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request. To overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive, and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Today, smartphones are clearly the most significant sources of information about users. A large amount of users’ context can be acquired through them, and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr, and Foursquare provide a rich and powerful platform to mine users’ interests, preferences, and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated a few approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last 6 years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking, and navigation system for multi-story buildings. Another important dimension of users’ context is the set of activities they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks, and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible, and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. To personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to users based on their past behavior and context history (such as visited locations, activities, and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure within the Rover framework and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of the generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on HTN (Hierarchical Task Network) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
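For illustration only, the following is a minimal sketch of multi-dimensional collaborative recommendation via tensor factorization in the spirit described above: a CP (CANDECOMP/PARAFAC) factorization of a sparse (user, item, context) tensor fitted by stochastic gradient descent. It is not the dissertation's implementation, and all names, shapes, and hyperparameters are illustrative.

```python
# Hypothetical sketch: CP factorization of a sparse (user, item, context) tensor.
import numpy as np

def cp_factorize(entries, shape, rank=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    """entries: list of ((u, i, c), rating) pairs, the observed tensor cells."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(shape[0], rank))  # user factors
    V = rng.normal(scale=0.1, size=(shape[1], rank))  # item factors
    W = rng.normal(scale=0.1, size=(shape[2], rank))  # context factors
    for _ in range(epochs):
        for (u, i, c), r in entries:
            pred = np.sum(U[u] * V[i] * W[c])   # CP model: sum_k U[u,k] V[i,k] W[c,k]
            err = r - pred
            # Stochastic gradient step with L2 regularization on each factor row.
            gu = err * V[i] * W[c]
            gv = err * U[u] * W[c]
            gw = err * U[u] * V[i]
            U[u] += lr * (gu - reg * U[u])
            V[i] += lr * (gv - reg * V[i])
            W[c] += lr * (gw - reg * W[c])
    return U, V, W

# Toy usage: fit on a few observed ratings, then score an unobserved cell.
obs = [((0, 1, 2), 4.0), ((1, 0, 2), 3.0), ((0, 0, 1), 5.0)]
U, V, W = cp_factorize(obs, shape=(3, 4, 5))
print(float(np.sum(U[0] * V[2] * W[2])))  # predicted affinity of user 0, item 2, context 2
```

Recommendations then follow by scoring the unobserved (user, item, context) cells with the learned factors and returning the highest-scoring items for the user's current context.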
Abstract:
This paper continues the study of spectral synthesis and the topologies τ_∞ and τ_r on the ideal space of a Banach algebra, concentrating particularly on the class of Haagerup tensor products of C*-algebras. For this class, it is shown that spectral synthesis is equivalent to the Hausdorffness of τ_∞. Under a weak extra condition, spectral synthesis is shown to be equivalent to the Hausdorffness of τ_r.
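For orientation, one standard way to define the Haagerup tensor product A ⊗_h B of C*-algebras A and B (our notation, not quoted from the paper) is as the completion of the algebraic tensor product in the Haagerup norm

\[
\|u\|_h=\inf\left\{\Bigl\|\sum_{i=1}^{n}a_i a_i^{*}\Bigr\|^{1/2}\Bigl\|\sum_{i=1}^{n}b_i^{*}b_i\Bigr\|^{1/2}\;:\;u=\sum_{i=1}^{n}a_i\otimes b_i\right\}.
\]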
Abstract:
We classify the N = 4 supersymmetric AdS_5 backgrounds that arise as solutions of five-dimensional N = 4 gauged supergravity. We express our results in terms of the allowed embedding tensor components and identify the structure of the associated gauge groups. We show that the moduli space of these AdS vacua is of the form SU(1, m)/(U(1) × SU(m)) and discuss our results regarding holographically dual N = 2 SCFTs and their conformal manifolds.
Abstract:
We discuss the possibility that dark matter corresponds to an oscillating scalar field coupled to the Higgs boson. We argue that the initial field amplitude should generically be of the order of the Hubble parameter during inflation, as a result of its quasi-de Sitter fluctuations. This implies that such a field may account for the present dark matter abundance for masses in the range 10^-6 to 10^-4 eV, if the tensor-to-scalar ratio is within the range of planned CMB experiments. We show that such mass values can naturally be obtained through either Planck-suppressed non-renormalizable interactions with the Higgs boson or, alternatively, through renormalizable interactions within the Randall–Sundrum scenario, where the dark matter scalar resides in the bulk of the warped extra dimension and the Higgs is confined to the infrared brane.
Abstract:
We consider the a priori error analysis of hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form under weak assumptions on the mesh design and the local finite element spaces employed. In particular, we prove a priori hp-error bounds for linear target functionals of the solution, on (possibly) anisotropic computational meshes with anisotropic tensor-product polynomial basis functions. The theoretical results are illustrated by a numerical experiment.
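For context, "second-order partial differential equations with nonnegative characteristic form" are usually written as follows (our notation, under the standard convention, not quoted from the paper): on a domain Ω ⊂ ℝ^d one considers

\[
-\sum_{i,j=1}^{d}\partial_{x_j}\bigl(a_{ij}(x)\,\partial_{x_i}u\bigr)
+\sum_{i=1}^{d}b_i(x)\,\partial_{x_i}u+c(x)\,u=f(x),
\qquad
\sum_{i,j=1}^{d}a_{ij}(x)\,\xi_i\xi_j\ge 0
\quad\forall\,\xi\in\mathbb{R}^{d},\ x\in\overline{\Omega},
\]

so that elliptic, parabolic, and first-order hyperbolic problems are covered within a single framework.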
Abstract:
Introduction
Prediction of soft tissue changes following orthognathic surgery has been attempted frequently in the past decades. It has gradually progressed from the classic “cut and paste” of photographs to computer-assisted 2D surgical prediction planning; finally, comprehensive 3D surgical planning was introduced to help surgeons and patients decide on the magnitude and direction of surgical movements as well as the type of surgery to be considered for the correction of facial dysmorphology. A wealth of experience has been gained, and a large body of published literature is available which has augmented the knowledge of facial soft tissue behaviour and helped to improve the ability to closely simulate facial changes following orthognathic surgery. This was particularly noticeable following the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery; the most common are the finite element model and the mass tensor model. These were developed into software packages which are currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips.
Aims
The aim of this project is to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and to introduce a new method for prediction of facial soft tissue changes.
Methodology
The study was carried out on the pre- and post-operative CBCT images of 100 patients who received their orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK. Three groups of patients were included in the analysis: patients who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from each individual patient’s facial image, and principal component analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images, and the predicted results were compared to the actual surgical changes measured from the post-operative 3D facial images. A set of validation studies was conducted, including:
• Comparison between voxel-based registration and surface registration to analyse changes following orthognathic surgery. The results showed no statistically significant difference between the two methods. Voxel-based registration, however, proved more reliable as it preserved the link between the soft tissue and skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was the method of choice for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.
• Direct DICOM slice landmarking: a novel technique to quantify the direction and magnitude of skeletal surgical movements. This method represents a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions.
The technique involves measuring the distance of corresponding landmarks, digitised directly on DICOM image slices, in relation to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of “gold standard” measurements extracted from simulated model surgery. The results confirmed the accuracy of the method to within 0.34 mm; therefore, the method was applied in this study. The results of this validation were published in a peer-refereed journal.
• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the soft tissue dense correspondence analysis. The conformed generic mesh represented the geometrical information of the individual facial mesh onto which it was conformed (elastically deformed). The accuracy of generic mesh conformation is therefore essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean conformation error of the generic mesh of 1 mm. The results of this study were accepted for publication in a peer-refereed scientific journal.
Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The soft tissue analysis was novel and produced a comprehensive description of facial changes in response to orthognathic surgery; the results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement at the midface region combined with widening of the paranasal region, upper lip, and nostrils. Minor changes were noticed at the tip of the nose and the oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions, limited widening of the lower lip, and slight reversion of the lower lip vermilion combined with minimal backward displacement of the upper lip. Minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalised advancement of the middle and lower thirds of the face combined with widening of the paranasal, upper lip, and nostril regions. In the Le Fort I cases, the correlation between the changes of the facial soft tissue and the skeletal surgical movements was assessed using PCA. Leave-one-out cross-validation was applied to the 30 cases which had undergone the Le Fort I osteotomy procedure in order to utilise the data effectively for the prediction algorithm. The prediction accuracy of soft tissue changes showed a mean error ranging from 0.0006 mm (±0.582) at the nose region to -0.0316 mm (±2.1996) at other facial regions.
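Purely to illustrate the statistical machinery described above (a PCA-based mapping from skeletal displacement to soft tissue change, evaluated with leave-one-out cross-validation), here is a small sketch of one way such a predictor could be assembled. It is not the thesis' pipeline; the array shapes, the random stand-in data, and the use of a plain linear regression are assumptions.

```python
# Hypothetical sketch: predict soft tissue displacement from skeletal displacement
# via PCA + linear regression, with leave-one-out cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_cases = 30                                  # e.g. the Le Fort I group
hard = rng.normal(size=(n_cases, 3 * 10))     # flattened skeletal landmark displacements
soft = rng.normal(size=(n_cases, 3 * 500))    # flattened conformed-mesh vertex displacements

errors = []
for train, test in LeaveOneOut().split(hard):
    pca = PCA(n_components=5).fit(soft[train])         # principal modes of soft tissue change
    scores = pca.transform(soft[train])
    reg = LinearRegression().fit(hard[train], scores)  # map skeletal moves to PCA scores
    soft_pred = pca.inverse_transform(reg.predict(hard[test]))
    errors.append(np.mean(np.abs(soft_pred - soft[test])))

print("mean absolute prediction error:", np.mean(errors))
```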
Abstract:
Generalised refraction is a topic which has, thus far, garnered far less attention than it deserves. The purpose of this thesis is to highlight the potential that generalised refraction has to offer with regard to imaging and its application to designing new passive optical devices. Specifically, in this thesis we will explore two types of generalised refraction which take place across a planar interface: refraction by generalised confocal lenslet arrays (gCLAs), and refraction by ray-rotation sheets. We will show that the corresponding laws of refraction for these interfaces produce, in general, light-ray fields with non-zero curl, and as such do not have a corresponding outgoing wavefront. We will then show that gCLAs perform integral, geometrical imaging, and that this enables them to be considered as approximate realisations of metric tensor interfaces. The concept of piecewise transformation optics will be introduced, and we will show that it is possible to use gCLAs along with other optical elements, such as lenses, to design simple piecewise transformation-optics devices such as invisibility cloaks and insulation windows. Finally, we shall show that ray-rotation sheets can be interpreted as performing geometrical imaging into complex space, and that as a consequence, ray-rotation sheets and gCLAs may in fact be more closely related than first realised. We conclude with a summary of potential future projects which follow naturally from the results of this thesis.
Abstract:
We consider a natural representation of solutions for Tikhonov functional equations. This will be done by applying the theory of reproducing kernels to the approximate solutions of general bounded linear operator equations (when defined from reproducing kernel Hilbert spaces into general Hilbert spaces), by using the Hilbert-Schmidt property and tensor product of Hilbert spaces. As a concrete case, we shall consider generalized fractional functions formed by the quotient of Bergman functions by Szegö functions considered from the multiplication operators on the Szegö spaces.
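As a reminder of the setting (our formulation, not quoted from the paper): for a bounded linear operator L from a reproducing kernel Hilbert space H_K into a Hilbert space H, data g ∈ H and a regularization parameter α > 0, the Tikhonov functional in question is

\[
\inf_{f\in H_K}\Bigl\{\alpha\,\|f\|_{H_K}^{2}+\|Lf-g\|_{H}^{2}\Bigr\},
\]

and the theory of reproducing kernels provides a representation of its unique minimizer in terms of the kernel of H_K.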
Abstract:
The use of advanced brain investigation methods has revealed the presence of short- and long-term alterations following a concussion. More specifically, alterations affecting white matter integrity and cellular metabolism have recently been revealed using diffusion tensor imaging (DTI) and magnetic resonance spectroscopy (MRS), respectively. These brain abnormalities were observed in male athletes a few days after the head injury and remained detectable when the athletes were assessed again six months post-concussion. In contrast, no study has assessed the neurometabolic and microstructural effects in the acute and chronic phases of concussion in female athletes, even though they show an increased susceptibility to this type of injury, a higher number of post-concussion symptoms, and a longer recovery time. The studies making up the present work therefore aim, overall, to establish the profile of microstructural and neurometabolic abnormalities in female athletes using DTI and MRS. The first study aimed to assess neurometabolic changes within the corpus callosum in male and female hockey players over the course of a university season. Athletes who sustained a concussion during the season were assessed 72 hours, 2 weeks, and 2 months after the head injury, in addition to the pre- and post-season assessments. The results show an absence of differences between athletes who sustained a concussion and those who did not. Moreover, no difference between pre- and post-season data was observed in male athletes, whereas a decrease in N-acetyl aspartate (NAA) levels was found in female athletes, suggesting an impact of subclinical-intensity blows to the head. The second study, which used DTI and MRS, revealed abnormalities in asymptomatic concussed female athletes on average 18 months post-concussion. More specifically, MRS revealed a decrease in myo-inositol (mI) levels in the hippocampus and the primary motor cortex (M1), while DTI showed an increase in mean diffusivity (MD) in several white matter tracts. In addition, a region-of-interest approach revealed a decrease in fractional anisotropy (FA) in the portion of the corpus callosum projecting to the primary motor area. The third article assessed athletes who had sustained a concussion in the days following the head injury (7-10 days) as well as six months post-concussion, using MRS. In the acute phase, neuropsychological alterations combined with a significantly higher number of post-concussion and depressive symptoms were found in concussed female athletes, and these resolved in the chronic phase. In contrast, no neurometabolic difference was found between the two groups in the acute phase. In the chronic phase, concussed athletes showed neurometabolic alterations in the dorsolateral prefrontal cortex (DLPFC) and M1, marked by an increase in glutamate/glutamine (Glx) levels. In addition, a decrease in NAA levels between the two measurement times was present in the control athletes.
Finally, the fourth article documented the microstructural abnormalities within the corticospinal tract and the corpus callosum six months after a concussion. The analyses showed no difference within the corticospinal tract, whereas differences were found when the corpus callosum was segmented according to the projections of its callosal fibres. Indeed, concussed athletes showed a decrease in MD and in radial diffusivity (RD) within the region projecting to the prefrontal cortex, a smaller white matter fibre volume in the region projecting to the premotor and supplementary motor areas, and a decrease in axial diffusivity (AD) in the region projecting to the parietal and temporal areas. In sum, the studies included in the present work have deepened our knowledge of the metabolic and microstructural effects of concussion and demonstrate persistent deleterious effects in female athletes. These data are in line with the scientific literature suggesting that concussions do not lead only to temporary symptoms.
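For reference, the DTI metrics mentioned above (MD, FA, RD, AD) are standard functions of the eigenvalues λ₁ ≥ λ₂ ≥ λ₃ of the diffusion tensor; the definitions below are added here for convenience and are not specific to this thesis:

\[
\mathrm{MD}=\frac{\lambda_1+\lambda_2+\lambda_3}{3},\qquad
\mathrm{AD}=\lambda_1,\qquad
\mathrm{RD}=\frac{\lambda_2+\lambda_3}{2},\qquad
\mathrm{FA}=\sqrt{\tfrac{3}{2}}\,
\sqrt{\frac{(\lambda_1-\mathrm{MD})^2+(\lambda_2-\mathrm{MD})^2+(\lambda_3-\mathrm{MD})^2}
{\lambda_1^{2}+\lambda_2^{2}+\lambda_3^{2}}}.
\]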
Abstract:
Exogenous mechanical perturbations on living tissues are commonly used to investigate whether cell effectors can respond to mechanical cues. However, in most of these experiments, the applied mechanical stress and/or the biological response are described only qualitatively. We developed a quantitative pipeline based on microindentation and image analysis to investigate the impact of a controlled and prolonged compression on microtubule behaviour in the Arabidopsis shoot apical meristem, using microtubule fluorescent marker lines. We found that a compressive stress, of the order of magnitude of turgor pressure, induced apparent microtubule bundling. Importantly, that response could be reversed several hours after the release of compression. Next, we tested the contribution of microtubule severing to compression-induced bundling: microtubule bundling seemed less pronounced in the katanin mutant, in which microtubule severing is dramatically reduced. Conversely, some microtubule bundles could still be observed 16 hours after the release of compression in the spiral2 mutant, in which the severing rate is instead increased. To quantify the impact of mechanical stress on the anisotropy and orientation of microtubule arrays, we used the nematic-tensor-based FibrilTool ImageJ/Fiji plugin. To assess the degree of apparent bundling of the network, we developed several methods, some of which were borrowed from geostatistics. The final microtubule bundling response could notably be related to the tissue growth velocity recorded by the indenter during compression. Because both input and output are quantified, this pipeline is an initial step towards correlating the cytoskeleton response to mechanical stress in living tissues more precisely.
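To make the nematic-tensor idea concrete, the following simplified, stand-alone sketch shows how a 2D nematic order parameter can be computed from an image to give an anisotropy score and a mean orientation. It is written in the spirit of, but is not identical to, the FibrilTool plugin; the gradient-magnitude weighting and the synthetic test image are assumptions.

```python
# Simplified sketch: anisotropy and mean orientation of a fibrillar signal
# from an averaged 2D nematic order parameter.
import numpy as np

def nematic_anisotropy(image):
    gy, gx = np.gradient(image.astype(float))
    weight = gx**2 + gy**2                     # weight pixels by local gradient strength
    theta = np.arctan2(gy, gx) + np.pi / 2     # fibril direction ~ perpendicular to gradient
    # Average the (headless) nematic order parameter over the region of interest.
    q11 = np.average(np.cos(2 * theta), weights=weight)
    q12 = np.average(np.sin(2 * theta), weights=weight)
    anisotropy = np.hypot(q11, q12)            # 0 = isotropic, 1 = perfectly aligned
    mean_angle = 0.5 * np.arctan2(q12, q11)    # mean array orientation (radians)
    return anisotropy, mean_angle

# Usage on a synthetic stripe pattern (strongly anisotropic):
y, x = np.mgrid[0:64, 0:64]
print(nematic_anisotropy(np.sin(x / 2.0)))
```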
Abstract:
Object recognition has long been a core problem in computer vision. To improve object spatial support and speed up object localization for object recognition, generating high-quality, category-independent object proposals as the input to an object recognition system has recently drawn attention. Given an image, we generate a limited number of high-quality, category-independent object proposals in advance, which are then used as inputs for many computer vision tasks. We present an efficient dictionary-based model for the image classification task. We further extend the work to a discriminative dictionary learning method for tensor sparse coding. In the first part, a multi-scale, greedy-based object proposal generation approach is presented. Based on the multi-scale nature of objects in images, our approach is built on top of a hierarchical segmentation. We first identify the representative and diverse exemplar clusters within each scale. Object proposals are obtained by selecting a subset from the multi-scale segment pool via maximizing a submodular objective function, which consists of a weighted coverage term, a single-scale diversity term, and a multi-scale reward term. The weighted coverage term forces the selected set of object proposals to be representative and compact; the single-scale diversity term encourages choosing segments from different exemplar clusters so that they cover as many object patterns as possible; the multi-scale reward term encourages the selected proposals to be discriminative and selected from multiple layers generated by the hierarchical image segmentation. The experimental results on the Berkeley Segmentation Dataset and the PASCAL VOC2012 segmentation dataset demonstrate the accuracy and efficiency of our object proposal model. Additionally, we validate our object proposals in simultaneous segmentation and detection and outperform the state of the art. To classify the object in the image, we design a discriminative, structural low-rank framework for image classification. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is well suited to classification tasks even with a simple linear multi-class classifier.
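Because the proposal selection step above hinges on maximizing a submodular objective, here is a generic, illustrative sketch of the standard greedy procedure used for that kind of problem. The toy objective is a coverage-only term, not the paper's weighted coverage + diversity + reward function; the segment sets and budget are placeholders.

```python
# Generic greedy selection for a monotone submodular objective under a cardinality budget.
def greedy_select(candidates, objective, budget):
    selected = []
    for _ in range(budget):
        best, best_gain = None, 0.0
        for c in candidates:
            if c in selected:
                continue
            gain = objective(selected + [c]) - objective(selected)  # marginal gain
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:        # no remaining candidate adds value; stop early
            break
        selected.append(best)
    return selected

# Toy usage: pick the segments that cover the most pixels (a pure coverage term).
segments = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5, 6, 7, 8}}

def coverage(chosen):
    return len(set().union(*(segments[s] for s in chosen))) if chosen else 0

print(greedy_select(list(segments), coverage, budget=2))  # -> ['c', 'a']
```

The standard guarantee that greedy selection achieves a (1 - 1/e) approximation for monotone submodular objectives is what makes this simple loop a reasonable surrogate for exact subset selection.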
Abstract:
In the context of f(R) gravity theories, we show that the apparent mass of a neutron star as seen by an observer at infinity is numerically calculable but requires careful matching, first at the star’s edge, between interior and exterior solutions, neither of them being totally Schwarzschild-like but instead presenting small oscillations of the curvature scalar R; and second at large radii, where the Newtonian potential is used to identify the mass of the neutron star. We find that for the same equation of state, this mass definition is always larger than its general relativistic counterpart. We exemplify this with quadratic R^2 and Hu-Sawicki-like modifications of the standard General Relativity action. Therefore, the finding of two-solar-mass neutron stars basically imposes no constraint on stable f(R) theories. However, star radii are in general smaller than in General Relativity, which can give an observational handle on such classes of models at the astrophysical level. Both the larger masses and the smaller matter radii are due to much of the apparent effective energy residing in the outer metric for scalar-tensor theories. Finally, because f(R) neutron star masses can be much larger than their General Relativity counterparts, the total energy available for radiating gravitational waves could be of order several solar masses, and thus a merger of these stars constitutes an interesting wave source.
Abstract:
In hydrocarbon exploration, the great challenge is locating the deposits. Great efforts are undertaken in the attempt to identify and locate them better and, at the same time, to improve the cost-effectiveness of oil extraction. Seismic methods are the most widely used because they are indirect, i.e., they probe the subsurface layers without invading them. A seismogram is a representation of the Earth's interior and its structures obtained through a conveniently arranged set of seismic reflection data. A major problem in this representation is the intensity and variety of the noise present in the seismogram, such as surface-related noise that contaminates the relevant signals and may mask the desired information carried by waves scattered in deeper regions of the geological layers. A tool was developed to suppress this noise, based on 1D and 2D wavelet transforms. The Java program separates seismic images according to the directions (horizontal, vertical, mixed, or local) and the wavelength bands that compose these images, using Daubechies wavelets, auto-resolution, and tensor products of wavelet bases. In addition, an option was developed for working on a single image using either the tensor product of two wavelets or the tensor product of a wavelet with the identity; in the latter case, the wavelet decomposition of a two-dimensional signal is carried out in a single direction. This decomposition makes it possible to elongate the two-dimensional wavelets along a chosen direction, correcting scale effects by applying auto-resolutions. In other words, the treatment of a seismic image is improved by using 1D and 2D wavelets at different stages of auto-resolution. Improvements were also implemented in the display of the images associated with the decompositions at each auto-resolution level, facilitating the choice of the images containing the signals of interest for noise-free image reconstruction. The program was tested with real data and the results were good.
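As a rough illustration of the separable (tensor product) wavelet decomposition described above, the following snippet, written in Python with PyWavelets rather than the thesis' Java tool, decomposes an image into directional detail bands with a Daubechies wavelet and suppresses one direction before reconstructing. The random stand-in image and the choice of which bands to zero are assumptions.

```python
# Sketch: separable 2D Daubechies wavelet decomposition with directional filtering.
import numpy as np
import pywt

image = np.random.default_rng(0).normal(size=(128, 128))  # stand-in for a seismogram

# Multi-level separable 2D DWT (tensor product of 1D Daubechies filters).
coeffs = pywt.wavedec2(image, wavelet="db4", level=3)

# coeffs[0] is the coarse approximation; each following entry is a tuple
# (horizontal, vertical, diagonal) of detail bands at one resolution level.
filtered = [coeffs[0]]
for cH, cV, cD in coeffs[1:]:
    cH = np.zeros_like(cH)        # e.g. suppress horizontally oriented noise
    filtered.append((cH, cV, cD))

denoised = pywt.waverec2(filtered, wavelet="db4")
print(denoised.shape)
```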
Abstract:
The objective of this research is to synthesize structural composites in which particular areas are designed with custom modulus, strength, and toughness values in order to improve the overall mechanical behavior of the composite. Such composites are defined and referred to as 3D-designer composites. These composites will be formed from liquid crystalline polymers and carbon nanotubes. The fabrication process is a variation of the rapid prototyping process, which is a layered, additive-manufacturing approach. Composites formed using this process can be custom designed through appropriate modeling methods for superior performance in advanced applications. The focus of this research is on enhancement of Young's modulus in order to make the final composite stiffer. The strength and toughness of the final composite with respect to various applications are also discussed. We take into consideration the mechanical properties of the final composite at different fiber volume contents as well as at different orientations and lengths of the fibers. The orientation of the LC monomers is intended to be achieved using electric or magnetic fields. A computer program incorporating the Mori-Tanaka modeling scheme was developed to generate the stiffness matrix of the final composite. The final properties are then deduced from the stiffness matrix using composite micromechanics. Eshelby's tensor, required to calculate the stiffness tensor with the Mori-Tanaka method, is computed using a numerical scheme that determines its components (Gavazzi and Lagoudas 1990). The numerical integration is carried out using a Gaussian quadrature scheme, also implemented in MATLAB. MATLAB provides a wealth of commands and algorithms that can be used efficiently to evaluate the formulas involved. Graphs are plotted using different combinations of the results and the parameters involved in obtaining them.
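For illustration, here is a small Python sketch (the thesis works in MATLAB) of the Gauss-Legendre quadrature step that such a numerical evaluation relies on: a tensor product rule over two angular variables, checked against a known closed-form integral. The integrand is a placeholder, not the actual Eshelby kernel.

```python
# Sketch: tensor-product Gauss-Legendre quadrature over two angular variables.
import numpy as np

def gauss_legendre_2d(f, n=16):
    """Approximate the integral of f(theta, phi) over [0, pi] x [0, 2*pi]."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    theta = 0.5 * np.pi * (x + 1.0)             # map nodes to [0, pi]
    phi = np.pi * (x + 1.0)                     # map nodes to [0, 2*pi]
    wt, wp = 0.5 * np.pi * w, np.pi * w         # rescaled weights
    T, P = np.meshgrid(theta, phi, indexing="ij")
    return np.sum(np.outer(wt, wp) * f(T, P))

# Check against a known value: the solid-angle integral of sin(theta) equals 4*pi.
print(gauss_legendre_2d(lambda t, p: np.sin(t)))  # ~12.566
```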