991 results for Non-representational methodologies


Relevance:

30.00%

Publisher:

Abstract:

Allergic eye disease encompasses a group of hypersensitivity disorders that primarily affect the conjunctiva, and its prevalence is increasing. It is estimated to affect 8% of patients attending optometric practice, but it is poorly managed and rarely involves ophthalmic assessment. Seasonal allergic conjunctivitis (SAC) is the most common form of allergic eye disease (90%), followed by perennial allergic conjunctivitis (PAC; 5%). Both are type 1 IgE-mediated hypersensitivity reactions in which mast cells play an important role in the pathophysiology. The signs and symptoms are similar, but SAC occurs periodically whereas PAC occurs year round. Despite being relatively mild conditions, their effects on quality of life can be profound, and they therefore demand attention. Primary management of SAC and PAC involves avoidance strategies, tailored to the responsible allergen(s), to prevent the hypersensitivity reaction. Cooled tear supplements and cold compresses may help bring relief. Because it is not possible to avoid the allergen(s) completely, pharmacological agents may become necessary. A wide range of anti-allergic medications is available, such as mast cell stabilisers, antihistamines and dual-action agents. Severe cases refractory to conventional treatment require anti-inflammatories, immunomodulators or immunotherapy. Additional qualifications are required to gain access to these medications, but entry-level optometrists must offer advice and supportive therapy. Based on current evidence, the efficacy of anti-allergic medications appears equivocal, so prescribing should relate to patient preference, dosing and cost. More studies with standardised methodologies are necessary to elicit the most effective anti-allergic medications, but dual-action agents are likely to be first-line choices. © 2011 British Contact Lens Association.

Relevance:

30.00%

Publisher:

Abstract:

Visual mental imagery is a process that draws on different cognitive abilities and is affected by the contents of mental images. Several studies have demonstrated that different brain areas subtend the mental imagery of navigational and non-navigational contents. Here, we set out to determine whether there are distinct representations for navigational and geographical images. Specifically, we used a Spatial Compatibility Task (SCT) to assess the mental representation of a familiar navigational space (the campus), a familiar geographical space (the map of Italy) and familiar objects (the clock). Twenty-one participants judged whether the vertical or the horizontal arrangement of items was correct. We found that distinct representational strategies were preferred to solve different categories on the SCT, namely, the horizontal perspective for the campus and the vertical perspective for the clock and the map of Italy. Furthermore, we found significant effects due to individual differences in the vividness of mental images and in preferences for verbal versus visual strategies, which selectively affect the contents of mental images. Our results suggest that imagining a familiar navigational space is somewhat different from imagining a familiar geographical space. © 2014 Elsevier Ireland Ltd.

Relevance:

30.00%

Publisher:

Abstract:

The ventrolateral prefrontal cortex (vlPFC) has been implicated in studies of both executive and social functions. Recent meta-analyses suggest that vlPFC plays an important but little-understood role in Theory of Mind (ToM). Converging neuropsychological and functional Magnetic Resonance Imaging (fMRI) evidence suggests that this may reflect inhibition of self-perspective. The present study adapted an extensively published ToM localizer to evaluate the role of vlPFC in the inhibition of self-perspective. The classic false-belief and false-photograph vignettes that make up the localizer were modified to generate high and low salience of self-perspective. Using a factorial design, the present study identified a behavioural and neural cost associated with having a highly salient self-perspective that was incongruent with the representational content. Importantly, vlPFC differentiated between high versus low salience of self-perspective only when representing mental state content; no difference was identified for non-mental representation. This result suggests that different control processes are required to represent competing mental and non-mental content.

Relevance:

30.00%

Publisher:

Abstract:

Most research on tax evasion has focused on the income tax. Sales tax evasion has been largely ignored and dismissed as immaterial. This paper explored the differences between income tax and sales tax evasion and demonstrated that sales tax enforcement deserves and requires different tools to achieve compliance. Specifically, the major enforcement problem with the sales tax is not evasion: it is theft perpetrated by companies that act as collection agents for the state. Companies engage in a principal-agent relationship with the state, and many retain funds collected as an agent of the state for private use. As such, the act of sales tax theft bears more resemblance to embezzlement than to income tax evasion. It has long been assumed that the sales tax is nearly evasion free, and state revenue departments report voluntary compliance in a manner that perpetuates this myth. Current sales tax compliance enforcement methodologies are similar in form to income tax compliance enforcement methodologies and are based largely on trust. The primary focus is on delinquent filers, with a very small percentage of businesses subject to audit. As a result, there is a very large group of noncompliant businesses that file on time and fly below the radar while stealing millions of taxpayer dollars. The author utilized a variety of statistical methods with actual field data derived from operations of the Southern Region Criminal Investigations Unit of the Florida Department of Revenue to evaluate current and proposed sales tax compliance enforcement methodologies in a quasi-experimental, time series research design and to set forth a typology of sales tax evaders. This study showed that current estimates of voluntary compliance in sales tax systems are seriously and significantly overstated and that current enforcement methodologies are inadequate to identify the majority of violators and enforce compliance. Sales tax evasion is modeled using the theory of planned behavior and Cressey's fraud triangle, and it is demonstrated that proactive enforcement activities, characterized by substantial contact with non-delinquent taxpayers, result in a superior ability to identify noncompliance and provide a structure through which noncompliant businesses can be rehabilitated.

Relevance:

30.00%

Publisher:

Abstract:

Short chain fatty acids (SCFA), including propionate, are produced by the bacterial fermentation of carbohydrates in the colon. Propionate has many potential roles in health, including inhibiting cholesterol synthesis and de novo lipogenesis and increasing satiety. The profile of SCFA produced is determined by both the substrate available and the bacteria present, and may be influenced by environmental conditions within the lumen of the colon. Whilst it may be beneficial to increase colonic propionate production, dietary strategies to achieve this are unproven. Adding propionate to food leads to poorer organoleptic properties, and oral propionate is absorbed in the small intestine. The optimum way to selectively increase colonic propionate would be to select fermentable carbohydrates that selectively promote propionate production. To date, few studies have undertaken a systematic assessment of the factors leading to increased colonic propionate production, making the selection of propiogenic carbohydrates challenging. The aim of this thesis was to identify the best carbohydrates for selectively increasing propionate production and to explore the factors which control propionate production. This work started with a systematic review of the literature for evidence of candidate carbohydrates, which led to a screen of 'propiogenic' substrates using in vitro batch fermentations and a mechanistic analysis of the impact of pH, bond linkage and orientation using a range of sugars, polysaccharides and fibre sources. A new unit for SCFA production was developed to allow comparison of results across in vitro studies encompassing the range of different methodologies found in the literature. The systematic review found that rhamnose yielded the highest rate and proportion of propionate production whereas, for polysaccharides, β-glucan ranked highest for rate and guar gum ranked highest for molar production, although this was not replicated across all studies. Thus, no single NDC was established as highly propiogenic. Some substrates appeared more propiogenic than others, and when these were screened in vitro, laminarin and other β-glucans ranked highest for propionate production. Legume fibre and mycoprotein fibre were also propiogenic. A full complement of glucose disaccharides was tested to examine the role of glycosidic bond orientation and position in propionate production. Of the glucose disaccharides tested, β(1-4) bonding was associated with an increased proportion of propionate, while α(1-1) and β(1-4) bonding increased the rate and proportion of butyrate production. In conclusion, it appears that for fibre to affect satiety, high intakes of fibre are needed, and a major mechanism is thought to act via propionate. This thesis identified that, rather than selecting specific fibres, increasing overall intakes of highly fermentable carbohydrates is as effective at increasing propionate production. Selecting carbohydrates with beta-bonding, particularly laminarin and other β(1-4) fermentable carbohydrates, leads to marginal increases in propionate production. Compared with targeted delivery of propionate to the colon, the fermentable carbohydrates examined in this thesis have lesser and more variable effects on propionate production. A more complete understanding of the impact of bond configurations in polysaccharides, rather than disaccharides, may help the selection or design of dietary carbohydrates which selectively promote colonic propionate production for inclusion in functional foods.
Overall, this study concluded that few substrates are selectively propiogenic and that similar changes in propionate production may be achieved by modest changes in dietary fibre intake.

Relevance:

30.00%

Publisher:

Abstract:

As the semiconductor industry struggles to maintain its momentum along the path of Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs, with primary focus on two areas: low power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. In addition, we investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activities. Unlike the common assumption in 2D ICs that shutdown gates are cheap and thus can be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies to produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past. In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and the reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles or the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances the DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
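
For orientation, the clock power optimized by a synthesis flow like the one above is commonly modeled with the standard dynamic switching relation P = α·C·V²·f, summed over the capacitive loads of the tree (wires, buffers, TSVs). The sketch below is a minimal illustration of such a cost function; the variable names and load values are hypothetical and are not taken from the dissertation:

```python
# Illustrative sketch: dynamic power of a clock net as an optimization objective.
# Assumes the standard model P = alpha * C * V^2 * f; all numbers are hypothetical.

def clock_tree_power(capacitances_fF, v_dd=1.0, freq_hz=1e9, activity=1.0):
    """Total switching power (W) of a clock net.

    capacitances_fF: per-node capacitive loads in femtofarads
    activity: switching activity factor (1.0 for a free-running clock)
    """
    total_c = sum(capacitances_fF) * 1e-15  # fF -> F
    return activity * total_c * v_dd ** 2 * freq_hz

# Example: wires, buffers and TSVs each contribute capacitive load (fF, hypothetical).
loads = [12.0, 8.5, 15.2, 9.8]
print(f"clock power ~ {clock_tree_power(loads) * 1e3:.3f} mW")
```

A wire-length-driven flow would minimize only the wiring contribution to this sum, which is why, as the abstract notes, it is sub-optimal for power.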

Relevance:

30.00%

Publisher:

Abstract:

Protective relaying comprises several procedures and techniques focused on keeping the power system working safely during and after undesired and abnormal network conditions, mostly caused by faults. The overcurrent relay is one of the oldest protective relays, and its operating principle is straightforward: when the measured current is greater than a specified magnitude, the protection trips. Fewer variables are required from the system in comparison with other protections, making the overcurrent relay both the simplest protection and the most difficult one to coordinate; its simplicity is reflected in low implementation, operation, and maintenance costs. The drawback is the increased tripping times offered by this kind of relay, mostly for faults located far from the relay; this problem can be particularly accentuated when standardized inverse-time curves are used or when only maximum fault currents are considered in relay coordination. Although these limitations have caused the overcurrent relay to be slowly relegated and replaced by more sophisticated protection principles, it is still widely applied in subtransmission, distribution, and industrial systems. In this work, the use of non-standardized inverse-time curves, the modeling and implementation of optimization algorithms capable of carrying out the coordination process, the use of different levels of short-circuit currents, and the inclusion of distance relays to replace insensitive overcurrent ones are proposed as methodologies focused on improving overcurrent relay performance. These techniques may transform the typical overcurrent relay into a more sophisticated one without changing its fundamental principles and advantages. Consequently, a more secure and still economical alternative can be obtained, increasing its area of application.
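
For context, the standardized inverse-time curves mentioned above follow the IEC 60255 characteristic t = TMS · A / (M^B − 1), where M is the ratio of measured current to pickup current (A = 0.14, B = 0.02 for the "standard inverse" curve). The minimal sketch below, with hypothetical relay settings and fault currents, shows why remote (low-current) faults see long tripping times:

```python
# Illustrative sketch of an IEC 60255 inverse-time overcurrent characteristic.
# Constants A=0.14, B=0.02 correspond to the IEC "standard inverse" curve.

def trip_time(i_measured, i_pickup, tms, a=0.14, b=0.02):
    """Operating time in seconds; the relay operates only above pickup."""
    m = i_measured / i_pickup
    if m <= 1.0:
        return float("inf")  # below pickup: no operation
    return tms * a / (m ** b - 1.0)

# Hypothetical check: tripping slows as the fault moves away from the relay
# (lower fault current), which is the delay problem described in the abstract.
for fault_current in (5000.0, 2000.0, 800.0):  # amperes, hypothetical
    print(fault_current, round(trip_time(fault_current, i_pickup=400.0, tms=0.2), 2))
```

Roughly speaking, coordination then amounts to choosing pickup currents and TMS values so that each backup relay trips a safe margin later than its primary, which is the selection process the proposed optimization algorithms automate.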

Relevance:

30.00%

Publisher:

Abstract:

The language connectome was investigated in vivo using multimodal non-invasive quantitative MRI. In PPA patients (n=18) recruited by the IRCCS ISNB, Bologna, cortical thickness measures showed a predominant reduction in the left hemisphere (p<0.005) with respect to matched healthy controls (HC) (n=18), and an accuracy of 86.1% in discrimination from Alzheimer's disease patients (n=18). The left temporal and para-hippocampal gyri significantly correlated (p<0.01) with language fluency. In PPA patients (n=31) recruited by the Northwestern University, Chicago, DTI measures were longitudinally evaluated (2-year follow-up) under the supervision of Prof. M. Catani, King's College London. Significant differences from matched HC (n=27) were found, tract-localized at baseline and widespread at follow-up. Language assessment scores correlated with DTI measures of the arcuate (AF) and uncinate (UF) fasciculi. In left-ischemic stroke patients (n=16) recruited by the NatBrainLab, King's College London, language recovery was longitudinally evaluated (6-month follow-up). Using arterial spin labelling imaging, a significant correlation (p<0.01) was found between language recovery and cerebral blood flow asymmetry towards the right in the middle cerebral artery perfusion territory. In HC (n=29) recruited by the DIBINEM Functional MR Unit, University of Bologna, an along-tract algorithm suitable for different tractography methods was developed, using the Laplacian operator. A higher AF connectivity of the left superior temporal gyrus and precentral operculum was found (Talozzi L et al., 2018), together with lateralized UF projections towards the left dorsal orbital cortex. In HC (n=50) recruited in the Human Connectome Project, a new tractography-driven approach was developed for left association fibres, using a principal component analysis. The first component discriminated cortical areas typically connected by the AF, suggesting a good discrimination of cortical areas sharing a similar connectivity pattern. The evaluation of morphological, microstructural and metabolic measures could be used as in-vivo biomarkers to monitor language impairment related to neurodegeneration or as surrogates of cognitive rehabilitation/interventional treatment efficacy.
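
As a generic illustration of the principal component analysis underlying such a tractography-driven approach (the matrix shape, names and values below are hypothetical, not taken from the study), one can decompose a subjects × cortical-region connectivity matrix and read off which regions dominate the first component:

```python
# Illustrative sketch: PCA over a tractography-derived connectivity matrix.
# Shapes and values are hypothetical; a real pipeline would load subject data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# rows: subjects, columns: streamline counts reaching each cortical region
connectivity = rng.poisson(lam=20, size=(50, 34)).astype(float)

pca = PCA(n_components=3)
scores = pca.fit_transform(connectivity)

# The loading of each region on the first component indicates which regions
# share the dominant connectivity pattern (e.g., areas typically linked by the AF).
first_component = pca.components_[0]
top_regions = np.argsort(np.abs(first_component))[::-1][:5]
print("regions dominating PC1:", top_regions,
      "explained variance:", pca.explained_variance_ratio_[0])
```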

Relevance:

30.00%

Publisher:

Abstract:

Against a backdrop of rapidly increasing worldwide population and growing energy demand, the development of renewable energy technologies has become of primary importance in the effort to reduce greenhouse gas emissions. However, it is often technically and economically infeasible to transport discontinuous renewable electricity over long distances to shore. Another shortcoming of non-programmable renewable power is its integration into the onshore grid without affecting the dispatching process. On the other hand, the offshore oil & gas industry is striving to reduce the overall carbon footprint of onsite power generation and to limit the large expenses associated with carrying electricity from remote offshore facilities. Furthermore, the increased complexity of offshore hydrocarbon operations, and their expansion towards challenging areas, call for greater attention to safety and environmental protection from major accident hazards. Innovative hybrid energy systems, such as Power-to-Gas (P2G), Power-to-Liquid (P2L) and Gas-to-Power (G2P) options, implemented at offshore locations, would offer the opportunity to overcome the challenges of both the renewable and the oil & gas sectors. This study aims at the development of systematic methodologies, based on proper sustainability and safety performance indicators, supporting the choice among P2G, P2L and G2P hybrid energy options for offshore green projects in early design phases. An in-depth analysis of the different offshore hybrid strategies was performed. Literature reviews were carried out on existing methods proposing metrics to assess the sustainability of hybrid energy systems, the inherent safety of process routes in the conceptual design stage, and the environmental protection of installations from oil and chemical accidental spills. To fill the gaps, a suite of specific decision-making methodologies was developed, based on representative multi-criteria indicators addressing the technical, economic, environmental and societal aspects of the alternative options. A set of five case studies, covering different offshore scenarios of concern, was defined to assess the effectiveness and value of the developed tools.
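
Decision-making methodologies of this kind aggregate multi-criteria indicators into comparable scores per option. Purely as a hedged illustration of the idea, with hypothetical criteria, weights and scores rather than the thesis's actual indicators, a weighted-sum aggregation might look like:

```python
# Illustrative sketch of multi-criteria aggregation for hybrid energy options.
# Criteria, weights and raw scores are hypothetical placeholders.

options = {
    "P2G": {"technical": 0.7, "economic": 0.5, "environmental": 0.8, "societal": 0.6},
    "P2L": {"technical": 0.6, "economic": 0.6, "environmental": 0.7, "societal": 0.6},
    "G2P": {"technical": 0.8, "economic": 0.7, "environmental": 0.5, "societal": 0.7},
}
weights = {"technical": 0.3, "economic": 0.3, "environmental": 0.25, "societal": 0.15}

def aggregate(scores: dict, weights: dict) -> float:
    """Weighted sum of normalized (0-1) criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(options, key=lambda o: aggregate(options[o], weights), reverse=True)
print("ranking:", ranked)
```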

Relevance:

30.00%

Publisher:

Abstract:

Quantitative imaging in oncology aims at developing imaging biomarkers for diagnosis and for the prediction of cancer aggressiveness and therapy response before any morphological change becomes visible. This Thesis exploits Computed Tomography perfusion (CTp) and multiparametric Magnetic Resonance Imaging (mpMRI) for investigating diverse cancer features in different organs. I developed a voxel-based image analysis methodology in CTp and extended its use to mpMRI, to perform precise and accurate analyses at the single-voxel level. This is expected to improve the reproducibility of measurements, the comprehension of cancer mechanisms, and clinical interpretability. CTp has not yet entered clinical routine, despite its usefulness in monitoring cancer angiogenesis, because the different perfusion computing methods yield unreproducible results. Machine learning applications in mpMRI, useful to detect imaging features representative of cancer heterogeneity, are likewise mostly limited to clinical research, because the variability and difficult interpretability of their results make clinicians not confident in clinical applications. In hepatic CTp, I investigated whether, and under what conditions, two widely adopted perfusion methods, Maximum Slope (MS) and Deconvolution (DV), could yield reproducible parameters. To this end, I developed signal processing methods to model the first-pass kinetics and remove any numerical cause hampering reproducibility. In mpMRI, I proposed a new approach to extract local first-order features, aiming at preserving the spatial reference and making their interpretation easier. In CTp, I found the cause of the MS and DV non-reproducibility: MS and DV represent two different states of the system. Transport delays invalidate the MS assumptions and, by correcting the MS formulation, I obtained the voxel-based equivalence of the two methods. In mpMRI, the developed predictive models allowed (i) detecting rectal cancers responding to neoadjuvant chemoradiation, which show, at pre-therapy, sparse coarse subregions with altered density, and (ii) predicting clinically significant prostate cancers stemming from the disproportion between high- and low-diffusivity gland components.
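
For reference, the Maximum Slope method discussed above estimates blood flow as the peak gradient of the tissue enhancement curve divided by the peak of the arterial input function, under the assumption of no venous outflow during the first pass. A minimal sketch on synthetic curves (all values hypothetical):

```python
# Illustrative sketch of the Maximum Slope (MS) perfusion estimate:
# BF = max d/dt [tissue curve] / max [arterial input]; data are synthetic.
import numpy as np

t = np.linspace(0, 40, 81)                             # s, sampling grid
aif = 300 * np.exp(-((t - 12) ** 2) / 18)              # arterial input (HU), hypothetical
tissue = 35 * (1 - np.exp(-np.maximum(t - 8, 0) / 6))  # tissue enhancement (HU)

max_slope = np.max(np.gradient(tissue, t))  # HU/s
bf = max_slope / np.max(aif)                # 1/s; often rescaled to mL/min/100 mL
print(f"MS blood flow estimate: {bf * 6000:.1f} mL/min/100 mL")
```

A transport delay between the arterial and tissue curves flattens the measured upslope, which is one way to see why, as stated above, delays invalidate the MS assumptions.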

Relevance:

30.00%

Publisher:

Abstract:

This thesis is divided into two macro-topics related to the preparation of geometry for MCNP models. The first concerns the geometric errors that are generated when a conversion from CAD to CSG format takes place, and their relationship with the phenomenon of lost particles. Software-based conversion to CSG is in fact unavoidable for the construction of complex models, such as those used to represent ITER components, and can generate regions of the geometry that are not correctly defined. Such areas cause the loss of particles during the Monte Carlo simulation, undermining the statistical integrity of the transport solution. For this reason it is very important to reduce this type of error as much as possible, and to this end the work carried out consisted of finding standardized methods to identify such errors and, finally, to estimate their size. While the first part of the thesis focuses on the problems arising from CSG modelling, the second suggests an alternative to it: the use of Unstructured Meshes (UM), an approach that underlies CFD and FEM but is innovative in the context of Monte Carlo codes. In particular, UM were applied to a portion of the Upper Launcher (an ITER component) in order to validate this methodology on nuclear models of high complexity. The traditional CSG approach and the UM approach were compared in terms of required computational resources, speed, precision and accuracy, at the level of both global and local results. It emerges that, although some limits to the application of UM still exist, owing in part to its novelty, several advantages can be attributed to this type of approach, including a more linear workflow, greater accuracy in local results and, above all, the future possibility of using the same mesh for different types of analysis (such as thermal or structural ones).

Relevance:

30.00%

Publisher:

Abstract:

In this Thesis we focus on non-standard signatures in CMB polarisation, which might hint at the existence of new phenomena beyond the standard models of Cosmology and Particle Physics. With the Planck ESA mission, CMB temperature anisotropies have been observed at the cosmic variance limit, but polarisation remains to be further investigated. CMB polarisation data are important not only because they contribute to tighter constraints on cosmological parameters, but also because they allow the investigation of physical processes that would be precluded if just the CMB temperature maps were considered. We take polarisation data into account to assess the statistical significance of the anomalies currently observed only in the CMB temperature map and to constrain the Cosmic Birefringence (CB) effect, which is expected in parity-violating extensions of standard electromagnetism. In particular, we propose a new one-dimensional estimator for the lack-of-power anomaly capable of taking temperature and polarisation into account jointly. With the aim of studying anisotropic CB, we develop and apply two different and complementary methods to evaluate the power spectrum of the CB. Finally, by employing these estimators and methodologies on Planck data, we provide new constraints beyond what is already known in the literature. The measurement of CMB polarisation represents a technological challenge, and to make accurate estimates one has to keep exquisite control of the systematic effects. In order to investigate the impact of spurious signals in forthcoming CMB polarisation experiments, we study the interplay between half-wave plate (HWP) non-idealities and the beams. Our analysis suggests that certain HWP configurations, depending on the complexity of the Galactic foregrounds and the beam models, significantly impact the B-mode reconstruction fidelity and could limit the capabilities of next-generation CMB experiments. We also provide a first study of the impact of non-ideal HWPs on CB.
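
For reference, a uniform cosmic birefringence rotation of the polarisation plane by an angle α mixes the E and B harmonic coefficients in the standard way (the anisotropic case studied here promotes α to a field on the sky):

```latex
E_{\ell m}^{\mathrm{obs}} = E_{\ell m}\cos(2\alpha) - B_{\ell m}\sin(2\alpha), \qquad
B_{\ell m}^{\mathrm{obs}} = E_{\ell m}\sin(2\alpha) + B_{\ell m}\cos(2\alpha),
```

so that a non-zero α generates, among other effects, a spurious cross-spectrum $C_\ell^{EB,\mathrm{obs}} = \tfrac{1}{2}\sin(4\alpha)\,\bigl(C_\ell^{EE} - C_\ell^{BB}\bigr)$, which is the kind of signature CB estimators typically target.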

Relevance:

30.00%

Publisher:

Abstract:

This PhD thesis deals with three different topics: i) sulfoxonium ylides, ii) donor-acceptor cyclopropanes, and iii) desymmetrization reactions. Catalysis, and more specifically organocatalysis, is the fil rouge linking the three subjects of study. The main focus of this doctoral period is the reactivity of sulfoxonium ylides, and in particular of stabilized sulfoxonium ylides. Special attention has been dedicated to the behavior of these substrates under asymmetric and non-asymmetric reaction conditions. Moreover, the similarities and differences with the related, less stable sulfonium ylides were fully analyzed, both experimentally and from a theoretical point of view. Two different reactions were developed in full: one conducted under acidic reaction conditions and the second one exploiting asymmetric aminocatalysis. Subsequently, the reactivity of donor-acceptor cyclopropanes was studied. After several attempts at developing a new catalytic methodology based on these substrates, a non-conventional reactivity under phase transfer catalysis was discovered and optimized. In particular, a chemodivergent reaction, whose outcome depends on the reaction conditions, was developed. Finally, during the period spent abroad, a preliminary study of a desymmetrization reaction was carried out. The studied reaction is based on an asymmetric elimination conducted under asymmetric phosphoric acid catalysis. In summary, this PhD thesis shows the versatility of different organocatalytic methodologies when applied to different reactions and substrates.

Relevance:

30.00%

Publisher:

Abstract:

The focus of this thesis is the application of different attitude determination algorithms to data acquired with a MEMS sensor board provided by the University of Bologna. MEMS sensors are a very cheap option for measuring acceleration and angular velocity. Magnetometers based on the Hall effect can provide further data. The disadvantage is that these sensors suffer from considerable noise and drift, which can affect the results. The algorithms that have been used are: pitch and roll from the accelerometer, yaw from the magnetometer, attitude from the gyroscope, TRIAD, QUEST, Madgwick, Mahony, the Extended Kalman filter, and a GPS-aided INS Kalman filter. In this work the algorithms have been rewritten to fit the data provided by the MEMS sensor. The data collected by the board are the accelerations on the three axes, the angular velocities on the three axes, the magnetic field on the three axes, and latitude, longitude, and altitude from the GPS. Several tests and comparisons have been carried out by installing the board on different vehicles operating in the air and on the ground. The conclusion that can be drawn from this study is that the Madgwick filter is the best trade-off between the computational capabilities required and the results obtained. If attitude angles are obtained directly from the accelerometers, gyroscopes, and magnetometer, inconsistent data are obtained in cases with high vibration levels. On the other hand, Kalman-filter-based algorithms require a high computational burden. The TRIAD and QUEST algorithms do not perform as well as the filters.
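
The first two algorithms in the list, pitch and roll from the accelerometer and yaw from the magnetometer, reduce to closed-form expressions in the static case. Below is a minimal sketch of the standard tilt-compensated formulation; the aerospace axis convention and the sample values are assumptions, and real MEMS data would first need the calibration and filtering that motivates the fusion filters above:

```python
# Illustrative sketch: static attitude from accelerometer + magnetometer.
# Assumes an aerospace body frame (x forward, y right, z down) and gravity
# dominating the accelerometer reading (i.e., a static, vibration-free case).
import math

def pitch_roll_from_accel(ax, ay, az):
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return pitch, roll

def yaw_from_mag(mx, my, mz, pitch, roll):
    # Tilt-compensate the magnetometer reading before taking the heading.
    bfx = (mx * math.cos(pitch) + my * math.sin(roll) * math.sin(pitch)
           + mz * math.cos(roll) * math.sin(pitch))
    bfy = my * math.cos(roll) - mz * math.sin(roll)
    return math.atan2(-bfy, bfx)

# Hypothetical raw samples (accelerometer in g, magnetometer in arbitrary units)
pitch, roll = pitch_roll_from_accel(0.05, -0.02, 0.998)
yaw = yaw_from_mag(0.32, -0.08, 0.41, pitch, roll)
print([round(math.degrees(v), 2) for v in (pitch, roll, yaw)])
```

Under vibration the gravity-dominance assumption breaks down, which is exactly the inconsistency reported above and the reason the Madgwick, Mahony, and Kalman fusion filters are preferred.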

Relevance:

20.00%

Publisher:

Abstract:

The aim of the study was to analyze the frequency of epidermal growth factor receptor (EGFR) mutations in Brazilian non-small cell lung cancer (NSCLC) patients and to correlate these mutations with the benefit of platinum-based chemotherapy. Our cohort consisted of prospective patients with NSCLC who received chemotherapy (platinum derivatives plus paclitaxel) at the [UNICAMP], Brazil. EGFR exons 18-21 were analyzed in tumor-derived DNA. Fifty patients were included in the study (25 with adenocarcinoma). EGFR mutations were identified in 6/50 (12 %) NSCLCs and in 6/25 (24 %) adenocarcinomas, representing the frequency of EGFR mutations in a mostly self-reported White (82.0 %) southeastern Brazilian population with NSCLC. Patients with NSCLCs harboring EGFR exon 19 deletions or the exon 21 L858R mutation were found to have a higher chance of response to platinum-paclitaxel (OR 9.67 [95 % CI 1.03-90.41], p = 0.047). We report the frequency of EGFR activating mutations in a typical southeastern Brazilian population with NSCLC, which is similar to that of other countries with Western European ethnicity. EGFR mutations seem to be predictive of a response to platinum-paclitaxel, and additional studies are needed to confirm or refute this relationship.
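
For reference, an odds ratio with a Wald-type 95% confidence interval, as reported above, follows from a 2×2 response table. The sketch below uses hypothetical counts chosen only to roughly reproduce the reported estimate; they are not the study's actual data:

```python
# Illustrative sketch: odds ratio with a 95% Wald confidence interval from a
# 2x2 table (responders/non-responders by EGFR mutation status).
# Counts are hypothetical, picked to land near the reported OR of 9.67.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b: mutated responders/non-responders; c,d: wild-type responders/non-responders."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

print(odds_ratio_ci(5, 1, 15, 29))  # -> roughly (9.67, 1.03, 90.4)
```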