890 results for LEVEL SET METHODS


Relevance: 40.00%

Abstract:

Starting induction motors on isolated or weak power systems is a highly dynamic process that can cause motor and load damage as well as electrical network fluctuations. Mechanical damage is associated with the high starting current drawn by a ramping induction motor. To compensate for the load increase, the voltage of the electrical system decreases. Different starting methods can be applied to the electrical system to reduce these and other starting issues. The purpose of this thesis is to build accurate and usable simulation models that can aid the designer in choosing an appropriate motor starting method. The specific case addressed is a diesel-generator set serving as the electrical supply source for the induction motor. Equivalent models of the most commonly used starting methods are simulated and compared with each other. The main contribution of this thesis is that the motor's dynamic impedance is continuously calculated and fed back to the generator model to simulate the coupling of the electrical system. The comparative analysis given by the simulations shows characteristics reasonably similar to other comparative studies. The diesel-generator and induction motor simulations have shown good results and can adequately demonstrate the dynamics for testing and comparing the starting methods. Further work is suggested to refine the equivalent impedance presented in this thesis.
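The coupling loop described in the abstract can be caricatured in a few lines: the motor is reduced to a slip-dependent impedance, recomputed at each step and fed back to a Thevenin model of the generator. All per-unit parameter values below are invented for illustration and are not taken from the thesis.

```python
E = 1.0                 # generator internal EMF (pu)
Zs = 0.05 + 0.20j       # generator source impedance (pu, assumed)
R1, X1 = 0.02, 0.15     # motor stator resistance/reactance (pu, assumed)
R2, X2 = 0.03, 0.15     # rotor values referred to the stator (pu, assumed)

def motor_impedance(slip):
    """Slip-dependent equivalent series impedance of the induction motor."""
    return complex(R1 + R2 / slip, X1 + X2)

voltages, currents = [], []
for k in range(50):                      # slip ramps from 1.0 down to 0.02 at run-up
    slip = 1.0 - k * (0.98 / 49)
    Zm = motor_impedance(slip)
    I = E / (Zs + Zm)                    # motor impedance fed back to the source
    V = E - I * Zs                       # terminal voltage sags under starting load
    currents.append(abs(I))
    voltages.append(abs(V))

print(f"start: {currents[0]:.2f} pu current, {voltages[0]:.2f} pu voltage; "
      f"run: {currents[-1]:.2f} pu current, {voltages[-1]:.2f} pu voltage")
```

Even this crude sketch reproduces the qualitative behaviour the thesis simulates in detail: a large starting current with a deep voltage dip, recovering as the slip falls.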

Relevance: 40.00%

Abstract:

This study aimed to verify the influence of transport in open or closed compartments (0 h), followed by two resting periods (1 and 3 h) before slaughter, on cortisol levels as an indicator of stress. At the slaughterhouse, blood samples were taken from 86 lambs after transport and before slaughter for plasma cortisol analysis. The method of transport influenced the cortisol concentration (0 h; P < 0.01): animals transported in the closed compartment had a lower level (28.97 ng ml(-1)) than animals transported in the open compartment (35.49 ng ml(-1)). After the resting period in the slaughterhouse, there was a decline in plasma cortisol concentration, with animals subjected to 3 h of rest presenting a lower average cortisol value (24.14 ng ml(-1); P < 0.05) than animals subjected to 1 h of rest (29.95 ng ml(-1)). It can be inferred that lambs that waited 3 h before slaughter had more time to recover from the stress of transportation than those that waited just 1 h. Visual access to the external environment during transport is a stressful factor that changes plasma cortisol levels, and the resting period before slaughter was effective in lowering stress, reducing plasma cortisol in the lambs. (c) 2012 Elsevier B.V. All rights reserved.

Relevance: 40.00%

Abstract:

Blank pages removed

Relevance: 40.00%

Abstract:

Motivation: A topic of great current interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information they encode. The development of new technologies for genome sequencing in recent years has opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from Genome Projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results: The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work, a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for the structural annotation of uncharacterized sequences.
The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy of 95% in discriminating thermophilic coding sequences. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts in technological applications. A Support Vector Machine based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
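As a small illustration of the kind of feature extraction such genome-level analyses rest on, the sketch below builds a 64-dimensional codon-composition vector from a coding sequence; vectors of this form are what a PCA or an SVM classifier would consume. The sequence and function name are illustrative, not taken from the thesis.

```python
from collections import Counter
from itertools import product

CODONS = ["".join(c) for c in product("ACGT", repeat=3)]  # all 64 codons

def codon_composition(cds):
    """Relative frequency of each codon in a coding sequence."""
    usable = len(cds) - len(cds) % 3                      # ignore a trailing partial codon
    codons = [cds[i:i + 3] for i in range(0, usable, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return [counts.get(c, 0) / total for c in CODONS]

vec = codon_composition("ATGGCGAAACGTTAA")   # toy 5-codon ORF
print(len(vec), round(sum(vec), 6))          # 64-dimensional vector summing to 1
```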

Relevance: 40.00%

Abstract:

The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
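A minimal sketch of the two network descriptors just mentioned, computed on an invented toy contact map (a 6-residue chain treated as a ring of backbone contacts plus two long-range contacts); real contact maps are far larger, but the definitions are the same.

```python
from collections import deque

def path_lengths(adj, src):
    """BFS shortest-path lengths from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def characteristic_path_length(adj):
    """Mean shortest-path length over all ordered node pairs."""
    n = len(adj)
    return sum(d for u in adj for d in path_lengths(adj, u).values()) / (n * (n - 1))

def clustering_coefficient(adj):
    """Mean fraction of each node's neighbour pairs that are themselves in contact."""
    coeffs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(adj)

# toy "contact map": sequential contacts on a 6-residue ring,
# plus two long-range contacts 0-3 and 1-3
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
for u, v in [(0, 3), (1, 3)]:
    adj[u].add(v)
    adj[v].add(u)

print(round(characteristic_path_length(adj), 3), round(clustering_coefficient(adj), 3))
```

The long-range contacts shorten the characteristic path length and create the triangles that raise the clustering coefficient, which is the small-world signature discussed above.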
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted secondary structure motifs. In the second part of my work I focused on a particular structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as the leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized.
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences, previously grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, ensuring cluster homogeneity in terms of sequence length. A high level of coverage of the structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
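The clustering step described above can be caricatured with union-find over hypothetical BLAST-style hits, accepting a pair only when both stringent constraints hold; the hit tuples and thresholds below are invented for illustration.

```python
# Hypothetical BLAST-style hits: (query, subject, identity %, alignment coverage %)
hits = [
    ("seqA", "seqB", 92.0, 95.0),   # passes both constraints -> same cluster
    ("seqB", "seqC", 88.0, 60.0),   # coverage too low -> pair rejected
    ("seqC", "seqD", 97.0, 99.0),
]

MIN_IDENTITY, MIN_COVERAGE = 90.0, 90.0   # assumed stringent thresholds

parent = {}
def find(x):
    """Union-find root with path compression."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for q, s, ident, cov in hits:
    find(q); find(s)                      # register both sequences
    if ident >= MIN_IDENTITY and cov >= MIN_COVERAGE:
        union(q, s)

clusters = {}
for x in parent:
    clusters.setdefault(find(x), set()).add(x)
print(sorted(sorted(c) for c in clusters.values()))
```

Requiring high coverage as well as high identity is what keeps multi-domain proteins from bridging otherwise unrelated clusters, the point made in the abstract.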

Relevance: 40.00%

Abstract:

Among the experimental methods commonly used to define the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductive chapter recalls some basic notions of the dynamics of structures, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the problem of the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the frequency response function (FRF) through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests where the force is not known, as in an ambient or impact test. In this analysis we decided to use the continuous wavelet transform (CWT), which allows a simultaneous investigation of a generic signal x(t) in the time and frequency domains. The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping ratios and vibration modes.
The application to the case of ambient vibrations yields accurate modal parameters of the system, although some important observations should be made concerning the damping. The fourth chapter is still devoted to the problem of post-processing data acquired after a vibration test, but this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests performed by the University of Porto in 2008 and by the University of Sheffield on the Humber Bridge in England, an FE model of the bridge is defined, in order to determine which type of model captures the real dynamic behaviour of the bridge more accurately. The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
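As a toy version of the identification problem treated in these chapters, the sketch below recovers the natural frequency (from the FFT peak) and the damping ratio (from the logarithmic decrement between successive peaks) of a synthetic single-mode free decay; it stands in for, and is much cruder than, the FRF and wavelet machinery of the thesis.

```python
import numpy as np

fs = 200.0                                  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
f_n, zeta = 2.0, 0.02                       # "true" modal parameters (invented)
w_n = 2 * np.pi * f_n
x = np.exp(-zeta * w_n * t) * np.cos(w_n * np.sqrt(1 - zeta**2) * t)

# natural frequency from the FFT peak
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_est = freqs[int(np.argmax(X))]

# damping ratio from the logarithmic decrement between successive positive peaks
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
delta = np.log(x[peaks[0]] / x[peaks[1]])
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)

print(f"f ≈ {f_est:.2f} Hz, zeta ≈ {zeta_est:.3f}")
```

With noisy ambient-vibration records, this naive peak-picking breaks down, which is exactly the motivation for the CWT/DWT processing described above.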

Relevance: 40.00%

Abstract:

Decomposition-based approaches are recalled from both the primal and the dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required to solve the problem. This trade-off is explored for several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity minimum-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed, in which an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios. The goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented.
The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
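A minimal sketch of the Frank-Wolfe iteration mentioned above, on an invented two-route toy network with linear delay functions: each step solves an all-or-nothing subproblem toward the currently cheapest route, then takes an exact line-search step on the Beckmann objective.

```python
a = [1.0, 2.0]      # free-flow travel times (invented)
b = [0.20, 0.05]    # congestion slopes (invented)
demand = 10.0       # total demand to split between the two routes

def travel_time(i, flow):
    return a[i] + b[i] * flow

x = [demand, 0.0]                                   # all-or-nothing starting solution
for _ in range(100):
    times = [travel_time(i, x[i]) for i in range(2)]
    best = times.index(min(times))
    y = [demand if i == best else 0.0 for i in range(2)]   # all-or-nothing subproblem
    # exact line search on the Beckmann objective (valid for linear delays)
    num = sum(travel_time(i, x[i]) * (x[i] - y[i]) for i in range(2))
    den = sum(b[i] * (x[i] - y[i]) ** 2 for i in range(2))
    step = 0.0 if den == 0.0 else max(0.0, min(1.0, num / den))
    x = [x[i] + step * (y[i] - x[i]) for i in range(2)]

print([round(v, 2) for v in x])   # → [6.0, 4.0], equal travel times on both routes
```

At the fixed point the used routes have equal travel times, the user-equilibrium condition that route assignment seeks.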

Relevance: 40.00%

Abstract:

This thesis presents a possible method to calculate sea level variations using geodetic-quality Global Navigation Satellite System (GNSS) receivers. Three antennas are used, two small antennas and a choke ring antenna, analyzing only Global Positioning System signals. The main goal of the thesis is to test a modified antenna set-up: measurements obtained by tilting one antenna to face the horizon are compared with measurements obtained from antennas looking upward. The location of the experiment is a coastal environment near the Onsala Space Observatory in Sweden. Sea level variations are obtained using periodogram analysis of the SNR signal and compared with a synthetic gauge generated from two independent tide gauges. The choke ring antenna provides poor results, with an RMS around 6 cm and a correlation coefficient of 0.89. The smaller antennas provide correlation coefficients around 0.93; the antenna pointing upward presents an RMS of 4.3 cm and the one pointing at the horizon an RMS of 6.7 cm. Notable variation in the statistical parameters is found when the length of the analyzed interval is modified. In particular, doubts are raised about the reliability of certain scattered data. No relation is found between the accuracy of the method and weather conditions. Possible methods to enhance the available data are investigated, and correlation coefficients above 0.97 can be obtained with the small antennas when sacrificing data points. Hence, the results provide evidence of the suitability of SNR signal analysis for sea level variation in coastal environments, even in the case of adverse weather conditions. In particular, the tilted configuration provides results comparable with upward-looking geodetic antennas. An SNR signal simulator is also tested to investigate its performance and usability. Various configurations are analyzed in combination with the periodogram procedure used to calculate the height of the reflectors.
Consistency between the simulated and the received data is found, and the overall accuracy of the height calculation program is around 5 mm for input heights below 5 m. The procedure is thus found to be suitable for analyzing the data provided by the GNSS antennas at Onsala.
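The periodogram idea can be sketched as follows: the detrended SNR oscillates in sin(elevation) with a frequency proportional to the antenna height above the reflecting surface, so a spectral peak yields the height. The data here are synthetic; only the GPS L1 wavelength is a real constant, and the geometry is simplified.

```python
import numpy as np

lam = 0.1903          # GPS L1 wavelength (m)
h_true = 3.0          # antenna height above the sea surface (m, invented)

# satellite arc from 5° to 25° elevation, sampled in sin(elevation)
sin_e = np.linspace(np.sin(np.radians(5)), np.sin(np.radians(25)), 500)
snr = np.cos(4 * np.pi * h_true / lam * sin_e)      # detrended SNR interference model

# periodogram over candidate reflector heights
heights = np.linspace(0.5, 6.0, 1101)
power = [abs(np.sum(snr * np.exp(-1j * 4 * np.pi * h / lam * sin_e)))
         for h in heights]
h_est = heights[int(np.argmax(power))]
print(f"estimated height ≈ {h_est:.2f} m")
```

Tilting the antenna, as tested in the thesis, changes the gain pattern toward the reflecting surface but not this basic frequency-to-height relation.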

Relevance: 40.00%

Abstract:

Small molecules affecting biological processes in plants are widely used in agricultural practice as herbicides or plant growth regulators, and in basic plant sciences as probes to study the physiology of plants. Most of these compounds were identified in large screens by the agrochemical industry or as phytoactive natural products; more recently, novel phytoactive compounds have originated from academic research through chemical screens performed to induce specific phenotypes of interest. The aim of the present PhD thesis is to evaluate different approaches used for the identification of the primary mode of action (MoA) of a phytoactive compound. Based on the methodologies used for MoA identification, three approaches are discerned: a phenotyping approach, an approach based on a genetic screen, and a biochemical screening approach.

Four scientific publications resulting from my work are presented as examples of how a phenotyping approach can successfully be applied to describe the plant MoA of different compounds in detail.

I. A subgroup of cyanoacrylates has been discovered as plant growth inhibitors. A set of bioassays indicated a specific effect on cell division. Cytological investigations of the cell division process in plant cell cultures, studies of microtubule assembly with green fluorescent protein marker lines in vivo, and cross-resistance studies with Eleusine indica plants harbouring a mutation in alpha-tubulin led to the description of alpha-tubulin as a target site of cyanoacrylates (Tresch et al., 2005).

II. The MoA of the herbicide flamprop-m-methyl was previously unknown. The studies described in Tresch et al. (2008) indicate a primary effect on cell division. Detailed studies unravelled a specific effect on mitotic microtubule figures, causing a block in cell division. In contrast to other inhibitors of microtubule rearrangement, such as the dinitroanilines, flamprop-m-methyl did not influence microtubule assembly in vitro. An influence of flamprop-m-methyl on a target within the cytoskeleton signalling network could be proposed (Tresch et al., 2008).

III. The herbicide endothall is a protein phosphatase inhibitor structurally related to the natural product cantharidin. Bioassay studies indicated a dominant effect on dark-grown cells that was unrelated to the effects observed in the light. Cytological characterisation of the microtubule cytoskeleton in corn tissue and heterotrophic tobacco cells showed a specific effect of endothall on mitotic spindle formation and on the ultrastructure of the nucleus, in combination with a decrease of the proliferation index. The observed effects are similar to those of other protein phosphatase inhibitors such as cantharidin and the structurally different okadaic acid. Additionally, the observed effects show similarities to knock-out lines of the TON1 pathway, a protein phosphatase-regulated signalling pathway. The data presented in Tresch et al. (2011) associate endothall's known in vitro inhibition of protein phosphatases with its in vivo effects and suggest an interaction between endothall and the TON1 pathway.

IV. Mefluidide as a plant growth regulator induces growth retardation and a specific phenotype indicating an inhibition of fatty acid biosynthesis. A test of cuticle functionality suggested a defect in the biosynthesis of very-long-chain fatty acids (VLCFA) or waxes. Metabolic profiling studies showed similarities with different groups of VLCFA synthesis inhibitors. Detailed analyses of the VLCFA composition in tissues of duckweed (Lemna paucicostata) indicated a specific inhibition of the known herbicide target 3-ketoacyl-CoA synthase (KCS). Inhibitor studies using a yeast expression system established for plant KCS proteins verified the potency of mefluidide as an inhibitor of plant KCS enzymes. It could be shown that the strength of inhibition varies among KCS homologues. The Arabidopsis Cer6 protein, which induces a plant growth phenotype similar to mefluidide when knocked out, was one of the most sensitive KCS enzymes (Tresch et al., 2012).

The findings of my own work were combined with other publications reporting successful identification of the MoA and primary target proteins of different compounds or compound classes. A revised three-tier approach for the MoA identification of phytoactive compounds is proposed. The approach consists of a 1st level aiming to address compound stability, uniformity of effects in different species, general cytotoxicity, and the effect on common processes like transcription and translation. Based on these findings, advanced studies can be defined to start the 2nd level of MoA characterisation, either with further phenotypic characterisation, by starting a genetic screen, or by establishing a biochemical screen. At the 3rd level, enzyme assays or protein affinity studies should show the activity of the compound on the hypothesized target and should associate the in vitro effects with the in vivo profile of the compound.

Relevance: 40.00%

Abstract:

Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and deals with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a large training set and notable computational effort. Methods for cross-domain text categorization have been proposed that leverage a set of labeled documents of one domain to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain prove effectively reusable in a different one.
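A conceptual sketch of the nearest-centroid idea described above, on toy bag-of-words vectors with an invented vocabulary: centroids are built from the labeled source domain, then iteratively re-estimated including the target documents they attract, so that target-domain terms gradually enter the category profiles.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(docs):
    """Mean term-weight vector of a list of documents."""
    c = {}
    for d in docs:
        for k, w in d.items():
            c[k] = c.get(k, 0.0) + w / len(docs)
    return c

source = {                                    # labeled source-domain documents
    "sport": [{"ball": 2, "team": 1}, {"team": 2, "goal": 1}],
    "tech":  [{"cpu": 2, "chip": 1}, {"chip": 2, "code": 1}],
}
target = [{"ball": 1, "match": 2}, {"cpu": 1, "gpu": 2}]   # new-domain vocabulary

centroids = {c: centroid(docs) for c, docs in source.items()}
for _ in range(5):                            # iterative adaptation to the unknown domain
    labels = [max(centroids, key=lambda c: cosine(d, centroids[c])) for d in target]
    centroids = {c: centroid(source[c] + [d for d, l in zip(target, labels) if l == c])
                 for c in centroids}

print(labels)
```

After adaptation, target-only terms such as "match" and "gpu" appear in the respective centroids, which is what lets the profiles follow the domain shift.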

Relevance: 40.00%

Abstract:

The Gaussian-2, Gaussian-3, complete basis set (CBS-) QB3, and CBS-APNO methods have been used to calculate ΔH° and ΔG° values for neutral clusters of water, (H2O)n, where n = 2-6. The structures are similar to those determined from experiment and from previous high-level calculations. The thermodynamic calculations by the G2, G3, and CBS-APNO methods compare well against the estimated MP2(CBS) limit. The cyclic pentamer and hexamer structures release the most heat per hydrogen bond formed of any of the clusters. While the cage and prism forms of the hexamer are the lowest-energy structures at very low temperatures, as temperature is increased the cyclic structure is favored. The free energies of cluster formation at different temperatures reveal interesting insights, the most striking being that the cyclic trimer, cyclic tetramer, and cyclic pentamer, like the dimer, should be detectable in the lower troposphere. We predict water dimer concentrations of 9 × 10^14 molecules/cm^3, trimer concentrations of 2.6 × 10^12 molecules/cm^3, tetramer concentrations of approximately 5.8 × 10^11 molecules/cm^3, and pentamer concentrations of approximately 3.5 × 10^10 molecules/cm^3 in saturated air at 298 K. These results have important implications for understanding the gas-phase chemistry of the lower troposphere.
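As a back-of-envelope check of how such concentrations follow from the computed free energies: K_p = exp(−ΔG°/RT) at a 1 atm standard state, and the dimer partial pressure K_p·P_m² converts to a number density via the ideal gas law. The ΔG° of dimerization used below (≈ +2.2 kcal/mol at 298 K) is an assumed round value, not the paper's; it reproduces the right order of magnitude.

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant (J/K)
T = 298.0
RT_kcal = 1.987204e-3 * T          # RT in kcal/mol

P_sat = 3169.0                     # saturation vapour pressure of water at 298 K (Pa)
P_m_atm = P_sat / 101325.0         # monomer partial pressure in atm

dG = 2.2                           # assumed ΔG° of 2 H2O -> (H2O)2 (kcal/mol, 1 atm std state)
K_p = math.exp(-dG / RT_kcal)      # equilibrium constant (atm^-1)
P_dimer_Pa = K_p * P_m_atm**2 * 101325.0

n_dimer = P_dimer_Pa / (k_B * T) / 1e6    # molecules per cm^3
print(f"dimer ≈ {n_dimer:.1e} molecules/cm^3")
```

With this assumed ΔG° the estimate lands in the 10^14 molecules/cm^3 range, consistent with the order of magnitude quoted in the abstract.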

Relevance: 40.00%

Abstract:

Complete basis set and Gaussian-n methods were combined with Barone and Cossi's implementation of the polarizable conductor model (CPCM) continuum solvation methods to calculate pKa values for six carboxylic acids. Four different thermodynamic cycles were considered in this work. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol, to calculate pKa values with cycle 1. The complete basis set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. Thermodynamic cycles that include an explicit water molecule in the cycle are not accurate when the free energy of solvation of a water molecule is used, but appear to become accurate when the experimental free energy of vaporization of water is used. This apparent improvement is an artifact of the standard state used in the calculation. Geometry relaxation in solution does not improve the results when using these latter cycles. The use of cycle 1 and the complete basis set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit. © 2001 John Wiley & Sons, Inc. Int J Quantum Chem, 2001
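Cycle 1 can be made concrete with a worked sketch: the aqueous deprotonation free energy is assembled from the gas-phase acidity and the solvation free energies of the acid, its anion, and the proton, then divided by RT ln 10. The molecule-specific numbers below are rough illustrative values for a generic carboxylic acid, not results from the paper; only ΔGs(H+) = −264.61 kcal/mol is taken from the abstract.

```python
import math

RT_ln10 = 1.987204e-3 * 298.15 * math.log(10)   # ≈ 1.364 kcal/mol at 298.15 K

dG_gas = 341.5        # gas-phase ΔG of HA -> A- + H+ (kcal/mol, assumed)
dGs_HA = -6.7         # solvation free energy of the neutral acid (kcal/mol, assumed)
dGs_A = -77.0         # solvation free energy of the anion (kcal/mol, assumed)
dGs_H = -264.61       # experimental ΔGs(H+) used in the paper

# thermodynamic cycle: ΔG_aq = ΔG_gas + ΔGs(A-) + ΔGs(H+) - ΔGs(HA)
dG_aq = dG_gas + dGs_A + dGs_H - dGs_HA
pKa = dG_aq / RT_ln10
print(f"pKa ≈ {pKa:.1f}")
```

The sketch also makes the paper's accuracy point tangible: since RT ln 10 ≈ 1.36 kcal/mol, an error of ~0.7 kcal/mol anywhere in the cycle already shifts the pKa by half a unit.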

Relevance: 40.00%

Abstract:

Complete Basis Set and Gaussian-n methods were combined with CPCM continuum solvation methods to calculate pKa values for six carboxylic acids. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol to calculate pKa values with Cycle 1. The Complete Basis Set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. The use of Cycle 1 and the Complete Basis Set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit.

Relevance: 40.00%

Abstract:

This article refines Lipsky’s (1980) assertion that a lack of resources negatively affects output performance. It uses fuzzy-set Qualitative Comparative Analysis to analyse the nuanced interplay of contextual and individual determinants of the output performance of veterinary inspectors, street-level bureaucrats in Switzerland. Moving ‘beyond Lipsky’, the study builds on recent theoretical contributions and a systematic comparison across organizational contexts. Against a widespread assumption, output performance is not all about resources: the impact of perceived available resources hinges on caseloads, which prove more decisive. These contextual factors interact with individual attitudes emerging from diverse public accountabilities. The results contextualize the often-emphasized importance of worker-client interaction. In a setting where clients cannot escape the interaction, street-level bureaucrats are not primarily held accountable by them. Studies of output performance should therefore consider the gaps between what is demanded of and offered to street-level bureaucrats, and the latter’s multiple embeddedness.

Relevance: 40.00%

Abstract:

Objectives: Social support received from one's partner is assumed to be beneficial for successful smoking cessation. However, support receipt can have costs. Recent research suggests that the most effective support is unnoticed by the receiver (i.e., invisible). Therefore, this study examined the association between everyday levels of dyadic invisible emotional and instrumental support, daily negative affect, and daily smoking after a self-set quit attempt in smoker–non-smoker couples. Methods: Overall, 100 smokers (72.0% men, mean age M = 40.48, SD = 9.82) and their non-smoking partners completed electronic diaries for 22 consecutive days from a self-set quit date, reporting daily invisible emotional and instrumental social support, daily negative affect, and daily smoking. Results: Same-day multilevel analyses showed that, at the between-person level, higher individual mean levels of invisible emotional and instrumental support were associated with less daily negative affect. Contrary to our assumption, greater receipt of invisible emotional and instrumental support was related to more cigarettes smoked daily. Conclusions: The findings are in line with previous results indicating that invisible support has beneficial relations with affect. However, the results emphasize the need for further prospective daily diary studies to understand the dynamics of invisible support in smoking cessation.