952 results for "Non-response model approach"


Relevance: 100.00%

Abstract:

Introduction: In the World Health Organization (WHO) MONICA (multinational MONItoring of trends and determinants in CArdiovascular disease) Project, considerable effort was made to obtain basic data on non-respondents to community-based surveys of cardiovascular risk factors. The first purpose of this paper is to examine differences in socio-economic and health profiles between respondents and non-respondents. The second purpose is to investigate the effect of non-response on estimates of trends. Methods: The socio-economic and health profiles of respondents and non-respondents in the WHO MONICA Project final survey were compared. The potential effect of non-response on trend estimates between the initial survey and the final survey, approximately ten years later, was investigated using both MONICA data and hypothetical data. Results: In most of the populations, non-respondents were more likely to be single and less well educated, and had poorer lifestyles and health profiles, than respondents. As an example of the consequences, temporal trends in the prevalence of daily smoking are shown to be overestimated in most populations if they are based only on data from respondents. Conclusions: The socio-economic and health profiles of respondents and non-respondents differed fairly consistently across the 27 populations. Hence, estimators of population trends based on respondent data are likely to be biased. Declining response rates therefore pose a threat to the accuracy of estimates of risk factor trends in many countries.
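To make the mechanism concrete, the toy calculation below (Python, with entirely hypothetical prevalences and response rates rather than MONICA data) shows how a falling response rate, combined with a higher smoking prevalence among non-respondents, leads a respondents-only estimate to overstate the decline in daily smoking.

# Hypothetical illustration (numbers invented): how a falling response rate
# combined with heavier smoking among non-respondents inflates an estimated
# decline in smoking prevalence when only respondent data are used.

def observed_prevalence(p_resp, p_nonresp, response_rate):
    """True population prevalence vs. what a respondents-only estimate reports."""
    true = response_rate * p_resp + (1 - response_rate) * p_nonresp
    respondents_only = p_resp
    return true, respondents_only

# Initial survey: 80% response rate; final survey ten years later: 60%.
initial_true, initial_obs = observed_prevalence(p_resp=0.30, p_nonresp=0.40, response_rate=0.80)
final_true, final_obs = observed_prevalence(p_resp=0.25, p_nonresp=0.40, response_rate=0.60)

print(f"True trend:             {final_true - initial_true:+.3f}")   # modest decline
print(f"Respondents-only trend: {final_obs - initial_obs:+.3f}")     # overstated decline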

Relevance: 100.00%

Abstract:

It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
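A minimal sketch of the density-network idea follows (PyTorch, synthetic data): an MLP outputs a conditional mean and variance for the next return and is trained by maximum likelihood. It uses a single Gaussian component, so it illustrates the spirit of a Mixture Density Network rather than the exact models compared in the paper.

# Minimal sketch (not the paper's code): a single-Gaussian density network that
# predicts a conditional mean and variance and is trained by maximising the
# Gaussian log-likelihood. Data and layer sizes are placeholders.
import torch
import torch.nn as nn

class GaussianDensityNet(nn.Module):
    def __init__(self, n_inputs, n_hidden=10):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, n_hidden)
        self.mu = nn.Linear(n_hidden, 1)        # conditional mean
        self.log_var = nn.Linear(n_hidden, 1)   # log variance keeps sigma^2 > 0

    def forward(self, x):
        h = torch.tanh(self.hidden(x))
        return self.mu(h), self.log_var(h)

def nll(y, mu, log_var):
    # Negative Gaussian log-likelihood, up to an additive constant.
    return 0.5 * (log_var + (y - mu) ** 2 / torch.exp(log_var)).mean()

# Toy usage: lagged returns as inputs (x), next-day return as target (y).
x, y = torch.randn(500, 5), torch.randn(500, 1)
model = GaussianDensityNet(n_inputs=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    mu, log_var = model(x)
    loss = nll(y, mu, log_var)
    loss.backward()
    opt.step()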

Relevance: 100.00%

Abstract:

Adjuvants are substances that enhance immune responses and thus improve the efficacy of vaccination. Few adjuvants are available for use in humans, and the one that is most commonly used (alum) often induces suboptimal immunity for protection against many pathogens. There is thus an obvious need to develop new and improved adjuvants. We have therefore taken an approach to adjuvant discovery that uses in silico modeling and structure-based drug design. As a proof of principle we chose to target the interaction of the chemokines CCL22 and CCL17 with their receptor CCR4. CCR4 was posited as an adjuvant target based on its expression on CD4(+)CD25(+) regulatory T cells (Tregs), which negatively regulate immune responses induced by dendritic cells (DC), whereas CCL17 and CCL22 are chemotactic agents produced by DC which are crucial in promoting contact between DC and CCR4(+) T cells. Molecules identified by virtual screening and molecular docking as CCR4 antagonists were able to block CCL22- and CCL17-mediated recruitment of human Tregs and Th2 cells. Furthermore, CCR4 antagonists enhanced DC-mediated human CD4(+) T cell proliferation in an in vitro immune response model and amplified cellular and humoral immune responses in vivo in experimental models when injected in combination with either Modified Vaccinia Ankara expressing Ag85A from Mycobacterium tuberculosis (MVA85A) or recombinant hepatitis B virus surface antigen (rHBsAg) vaccines. The significant adjuvant activity observed provides good evidence supporting our hypothesis that CCR4 is a viable target for rational adjuvant design.

Relevance: 100.00%

Abstract:

As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors.

Although coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]), like all survey errors, also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. These trends indicate that familiarity with information technologies is increasing, and suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, thereby reinforcing the advantages of online questionnaire delivery.

The second error type – non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be addressed relatively easily. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6].

For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) lead to respondents becoming confused and frustrated.

Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realised. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.

Relevance: 100.00%

Abstract:

This paper presents an effective decision-making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system were obtained from an experimental pipeline set up as a fully operational distribution system. The system is equipped with data logging for three variables: inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed so that multiple operational conditions of the distribution system, including multiple pressures and flows, can be obtained. We then statistically tested and showed that the pressure and flow variables can be used as a signature of a leak under the designed multi-operational conditions. It is then shown that detecting leaks by training and testing the proposed multi-model decision system with prior data clustering, under multi-operational conditions, produces better recognition rates than training based on a single-model approach. The decision system is then equipped with the estimation of confidence limits, and a method is proposed for using these confidence limits to obtain more robust leak recognition results.
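The structure of such a decision system can be sketched as follows (Python with scikit-learn, synthetic data standing in for the logged pressure and flow signals): operating conditions are clustered first, one generalized linear model is then fitted per cluster, and a new sample is scored by the model of its nearest cluster.

# Illustrative sketch only (synthetic data, not the authors' experimental rig):
# cluster the operating conditions, train one generalized linear model (here
# logistic regression) per cluster, and classify a new sample with the model
# belonging to its nearest cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: inlet pressure, outlet pressure, outlet flow; label 1 = leak present.
X = rng.normal(size=(600, 3))
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=600) > 0).astype(int)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: LogisticRegression().fit(X[kmeans.labels_ == c], y[kmeans.labels_ == c])
          for c in range(3)}

def detect_leak(sample):
    c = int(kmeans.predict(sample.reshape(1, -1))[0])
    proba = models[c].predict_proba(sample.reshape(1, -1))[0, 1]
    return proba  # leak probability; compare against a confidence threshold

print(detect_leak(X[0]))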

Relevance: 100.00%

Abstract:

The introduction of anti-vascular endothelial growth factor (anti-VEGF) therapy has had a significant impact on reducing visual loss due to neovascular age-related macular degeneration (n-AMD). There are significant inter-individual differences in response to an anti-VEGF agent, made more complex by the availability of multiple anti-VEGF agents with different molecular configurations. The response to anti-VEGF therapy has been found to depend on a variety of factors including the patient’s age, lesion characteristics, lesion duration, baseline visual acuity (VA) and the presence of particular genotype risk alleles. Furthermore, a proportion of eyes with n-AMD show a decline in acuity or morphology despite therapy, or require very frequent re-treatment. There is currently no consensus on how to classify optimal response, or lack of it, to these therapies. There is, in particular, confusion over terms such as ‘responder status’ after treatment for n-AMD, ‘tachyphylaxis’ and ‘recalcitrant’ n-AMD. This document aims to provide a consensus on the definition and categorisation of the response of n-AMD to anti-VEGF therapies and on the time points at which response to treatment should be determined. Primary response is best determined at 1 month following the last initiation dose, while maintained treatment (secondary) response is determined at any time after the 4th visit. In a particular eye, secondary responses do not mirror, and cannot be predicted from, the response in the primary phase. Morphological and functional responses to anti-VEGF treatments do not necessarily correlate, and may be dissociated in an individual eye. Furthermore, there is a ceiling effect that can negate currently used functional metrics such as a >5 letter improvement when the baseline VA is good (ETDRS >70 letters). It is therefore important to use a combination of both parameters in determining the response. The following definitions are proposed: optimal (good) response is defined as resolution of fluid (intraretinal fluid, IRF; subretinal fluid, SRF; and retinal thickening) and/or an improvement of >5 letters, subject to the ceiling effect of good starting VA. Poor response is defined as a <25% reduction from baseline in central retinal thickness (CRT), with persistent or new IRF or SRF, or minimal change in VA (that is, a change of 0 to +4 letters). Non-response is defined as an increase in fluid (IRF, SRF and CRT), or increasing haemorrhage, compared with baseline, and/or a loss of >5 letters compared with baseline or the best corrected vision achieved subsequently. Poor or non-response to anti-VEGF may be due to clinical factors, including suboptimal dosing relative to that required by a particular patient, increased dosing intervals, treatment initiation when the disease is already at an advanced or chronic stage, cellular mechanisms, lesion type, genetic variation and potential tachyphylaxis; non-clinical factors, including poor access to clinics or delayed appointments, may also result in poor treatment outcomes. In eyes classified as good responders, treatment should be continued with the same agent when disease activity is present or reactivation occurs following temporary dose holding. In eyes that show a partial response, treatment may be continued, although re-evaluation with further imaging may be required to exclude confounding factors.
Where there is persistent, unchanging accumulated fluid following three consecutive injections at monthly intervals, treatment may be withheld temporarily, but recommenced with the same or an alternative anti-VEGF agent if the fluid subsequently increases (the lesion being considered active). Poor or non-response to anti-VEGF treatments requires re-evaluation of the diagnosis and, if necessary, a switch to alternative therapies, including other anti-VEGF agents and/or photodynamic therapy (PDT). Idiopathic polypoidal choroidopathy may require treatment with PDT monotherapy or in combination with anti-VEGF. A committee of retinal specialists with experience of managing patients with n-AMD, similar to that which developed the Royal College of Ophthalmologists Guidelines to Ranibizumab, was assembled. Individual aspects of the guidelines were proposed by the committee lead (WMA) based on the relevant published evidence, following a search of Medline, and circulated to all committee members for discussion before approval or modification. Each draft was modified according to feedback from committee members until unanimous approval was obtained in the final draft. A system for categorising the range of responsiveness of n-AMD lesions to anti-VEGF therapy is proposed. The proposal is based primarily on morphological criteria, but functional criteria have been included. Recommendations have been made on when to consider discontinuation of therapy, either because of success or futility. These guidelines should help clinical decision-making and may prevent over- and/or undertreatment with anti-VEGF therapy.
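Purely as an illustration, the proposed categories can be expressed as a simple decision rule. The sketch below (Python) follows the thresholds stated above, but it is a schematic rendering rather than a validated clinical tool, and the input flags are simplifications of the OCT findings.

# Schematic translation of the proposed response categories into a decision
# rule, for illustration only (not a validated clinical tool). Inputs: change
# in ETDRS letters, baseline and current central retinal thickness (CRT), and
# simplified flags for fluid and haemorrhage status.

def classify_response(delta_letters, baseline_crt, current_crt,
                      fluid_resolved, fluid_increased, haemorrhage_increased):
    """Return a response category from VA change and OCT findings."""
    crt_reduction = (baseline_crt - current_crt) / baseline_crt

    # Non-response: increase in fluid/CRT, increasing haemorrhage, or loss of >5 letters.
    if fluid_increased or crt_reduction < 0 or haemorrhage_increased or delta_letters < -5:
        return "non-response"
    # Good response: resolution of fluid and/or >5 letter gain
    # (the letter criterion is relaxed in practice when baseline VA is already good).
    if fluid_resolved or delta_letters > 5:
        return "good response"
    # Poor response: <25% CRT reduction with persistent/new fluid,
    # or only minimal VA change (0 to +4 letters).
    if (crt_reduction < 0.25) or (0 <= delta_letters <= 4):
        return "poor response"
    return "partial response"

print(classify_response(delta_letters=2, baseline_crt=400, current_crt=360,
                        fluid_resolved=False, fluid_increased=False,
                        haemorrhage_increased=False))   # -> "poor response"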

Relevance: 100.00%

Abstract:

Undergraduate programmes in construction management and other closely related built environment disciplines are currently taught and assessed on a modular basis. This is the case in the UK and in many other countries globally. However, it can be argued that professionally oriented programmes like these are better assessed on a non-modular basis, in order to produce graduates who can apply knowledge from different subject areas in cohesion to solve complex practical scenarios in their work environments. Medical programmes, where students are assessed on a non-modular basis, can be cited as an area where this is already being done. A preliminary study was undertaken to explore the applicability of non-modular assessment within construction management undergraduate education. A selected sample of university academics was interviewed to gather their perspectives on the applicability of non-modular assessment. There was general acceptance among the academics involved that integrating non-modular assessment is applicable and will be beneficial. All academics stated that at least some form of non-modular assessment is currently used in their programmes. Examples where cross-modular knowledge is assessed included comprehensive/multi-disciplinary project modules and larger modules that amalgamate a number of related subject areas. Rather than a complete shift from modular to non-modular assessment, an approach in which non-modular assessment is integrated and its use further expanded within the current system is therefore suggested, given the potential benefits this form of assessment offers to professionally aligned built environment programmes.

Relevance: 100.00%

Abstract:

Increasing use of the term Strategic Human Resource Management (SHRM) reflects recognition of the interdependencies between corporate strategy, organization and human resource management in the functioning of the firm. Dyer and Holder (1988) proposed a comprehensive Human Resource Strategic Typology consisting of three strategic types: inducement, investment and involvement. This research attempted to empirically validate their typology and to test the performance implications of the match between corporate strategy and HR strategy. Hypotheses were tested to determine the relationships between internal consistency in HRM sub-systems, the match between corporate strategy and HR strategy, and firm performance. Data were collected by a mail survey of 998 senior HR executives, of whom 263 returned the completed questionnaire. Financial information on 909 firms was collected from secondary sources such as 10-K reports and CD-Disclosure. Profitability ratios were indexed to industry averages. Confirmatory factor analysis using LISREL provided support for the six-factor HR measurement model; the six factors were staffing, training, compensation, appraisal, job design and corporate involvement. Support was also found for the presence of a second-order factor, labeled "HR Strategic Orientation", explaining the variation among the six factors. LISREL analysis also supported the congruence hypothesis that HR Strategic Orientation significantly affects firm performance. There was a significant associative relationship between HR strategy and corporate strategy. However, the contingency effects of the match between HR and corporate strategies were not supported. Several tests were conducted to show that the survey results are affected neither by non-response bias nor by mono-method bias. Implications of these findings for both researchers and practitioners are discussed.

Relevance: 100.00%

Abstract:

Purpose: Depression in older females is a significant and growing problem. Females who experience life stressors across the life span are at higher risk of developing problems with depression than their male counterparts. The primary aims of this study were (a) to examine gender-specific differences in the correlates of depression in older primary care patients, based on baseline and longitudinal analyses; and (b) to examine the longitudinal effect of biopsychosocial risk factors on depression treatment outcomes in different models of behavioral healthcare (i.e., integrated care and enhanced referral). Method: This study was a quantitative secondary data analysis using longitudinal data from the Primary Care Research in Substance Abuse and Mental Health for Elderly (PRISM-E) study. A linear mixed model approach to hierarchical linear modeling was used, drawing on the baseline assessment and the three-month and six-month follow-ups. Results: For participants diagnosed with major depressive disorder, female gender was associated with greater depression severity at six months compared with males. Further, the gender-by-life-stressor interaction showed that females who reported loss of family and friends, family issues, money issues, or medical illness had higher depression severity than males, whereas lack of activities was related to lower depression severity among females compared with males. Conclusion: These findings suggest that gender moderated the relationship between specific life stressors and depression severity, similar to how a protective factor can shape a person's response to a problem and reduce the negative impact of a risk factor on a problem outcome. Therefore, life stressors may be a reliable predictor of depression for both females and males in either behavioral health treatment model. This study concluded that life stressors influence males' basic comfort, stability, and survival, whereas life stressors influence females' development, personal growth, and happiness; therefore, life stressors may be a useful component to include in gender-based screening and assessment tools for depression.
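The modeling approach described above can be sketched with statsmodels; the snippet below uses simulated data and placeholder variable names (not the PRISM-E variables) to show a random-intercept linear mixed model with a gender-by-stressor interaction.

# Sketch of a linear mixed model of the kind described above (simulated data,
# placeholder column names, not the PRISM-E dataset).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_visits = 100, 3  # baseline, 3-month and 6-month assessments
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_visits),
    "month": np.tile([0, 3, 6], n_subjects),
    "female": np.repeat(rng.integers(0, 2, n_subjects), n_visits),
    "money_stress": np.repeat(rng.integers(0, 2, n_subjects), n_visits),
})
df["depression"] = (10 + 0.5 * df["female"] * df["money_stress"]
                    - 0.3 * df["month"] + rng.normal(scale=2, size=len(df)))

# Random intercept per subject; fixed effects include the gender-by-stressor interaction.
model = smf.mixedlm("depression ~ month + female * money_stress",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())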

Relevance: 100.00%

Abstract:

Formal education, understood as the gradual process that occurs at school and aims at learning and systematic knowledge, is of great interest to society because it benefits individuals and leads to many positive effects, such as increased productivity and welfare (Johnes & Johnes, 2007). Understanding what influences educational outcomes is as important as the outcomes themselves, because it allows these variables to be managed in order to obtain better student performance. This work uses data envelopment analysis (DEA) to compare the efficiency of schools in Rio Grande do Norte. In this non-parametric method, an efficiency frontier is constructed from the best schools, i.e., those that use the set of inputs to generate educational outputs. The data used were obtained from the Prova Brasil assessment and the 2011 School Census for the state and municipal schools of Rio Grande do Norte. Some of the variables considered as inputs and outputs were obtained directly from these databases; the other two, the socioeconomic and school infrastructure indices, were constructed using Item Response Theory (IRT). As a first step, we compared several DEA models with different sets of input variables. The non-discretionary model was then chosen, and its results were analysed in depth. The results showed that only seven schools were efficient in both the 5th and 9th grades; there were no significant differences between the efficiency of municipal and state schools; and there were no differences between large and small schools. Among the municipalities, Mossoró stood out in both grades, with the highest proportion of efficient schools. Finally, the study suggests that, using the projections provided by the DEA method, the most inefficient schools would be able to achieve their IDEB targets for 2011; in other words, it is possible to substantially improve education in the state by taking the efficient schools as a benchmark for the others.
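As an illustration of the DEA idea, the sketch below solves a basic input-oriented CCR efficiency problem with scipy's linear-programming routine; the input and output values are invented and far simpler than the Prova Brasil and School Census variables used in the study.

# Minimal input-oriented CCR DEA sketch with scipy (toy data). Efficiency = 1
# means the school lies on the frontier built from the best-performing schools.
import numpy as np
from scipy.optimize import linprog

# Rows = schools (DMUs); X = inputs (e.g. infrastructure, socioeconomic index),
# Y = outputs (e.g. test scores). Values below are made up.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    # Variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for o in range(n):
    print(f"school {o}: efficiency = {ccr_efficiency(o):.3f}")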

Relevance: 100.00%

Abstract:

We investigate, by means of Monte Carlo simulation and finite-size scaling analysis, the critical properties of the three-dimensional O(5) non-linear σ model and of the antiferromagnetic RP² model, both regularized on a lattice. High-accuracy estimates are obtained for the critical exponents, universal dimensionless quantities and critical couplings. It is concluded that both models belong to the same universality class, provided that rather non-standard identifications are made for the momentum-space propagator of the RP² model. We have also investigated the phase diagram of the RP² model extended by a second-neighbor interaction. A rich phase diagram is found, in which most of the phase transitions are of first order.
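For illustration, a toy Metropolis update for an O(N) non-linear σ model on a small three-dimensional lattice is sketched below (Python/NumPy, N = 5). The actual study relies on far larger lattices and a careful finite-size scaling analysis; this only conveys the basic simulation step.

# Toy Metropolis sketch for a 3D O(N) non-linear sigma model (N = 5,
# ferromagnetic nearest-neighbour coupling, periodic boundaries).
import numpy as np

rng = np.random.default_rng(0)
L, N, beta = 4, 5, 1.2                     # lattice size, spin components, coupling
spins = rng.normal(size=(L, L, L, N))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)   # unit 5-vectors

def neighbour_sum(site):
    x, y, z = site
    total = np.zeros(N)
    for axis, d in ((0, 1), (0, -1), (1, 1), (1, -1), (2, 1), (2, -1)):
        idx = [x, y, z]
        idx[axis] = (idx[axis] + d) % L     # periodic boundary conditions
        total += spins[tuple(idx)]
    return total

def metropolis_sweep():
    for _ in range(L ** 3):
        site = tuple(rng.integers(0, L, size=3))
        proposal = rng.normal(size=N)
        proposal /= np.linalg.norm(proposal)
        # Energy change for E = -sum_<ij> s_i . s_j when replacing the spin at `site`.
        delta_e = -np.dot(proposal - spins[site], neighbour_sum(site))
        if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
            spins[site] = proposal

for sweep in range(100):
    metropolis_sweep()
magnetisation = np.linalg.norm(spins.mean(axis=(0, 1, 2)))
print(f"|M| per spin after 100 sweeps: {magnetisation:.3f}")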

Relevance: 100.00%

Abstract:

The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix-exponential transformation guarantees the positive definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann’s seminal work on the estimation of highly non-linear model specifications (“Causality tests and observationally equivalent representations of econometric models”, Journal of Econometrics, 1988, 39(1-2), 69–104), especially in developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of the RMESV-ALM model, and the finite-sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high-frequency data for three US financial assets, the new model is estimated and evaluated. The forecasting performance of the new model is compared with that of a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility.
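The role of the matrix-exponential transformation can be illustrated in a few lines: any symmetric matrix can serve as an unconstrained "log-covariance", and its matrix exponential is automatically a symmetric positive-definite covariance matrix. The numbers below are arbitrary and unrelated to the estimates in the paper.

# Illustration of the matrix-exponential transformation: parameterise the
# log-covariance as any symmetric matrix A, and exp(A) is symmetric positive
# definite, so no constraints are needed during estimation.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(42)
B = rng.normal(size=(3, 3))
A = 0.5 * (B + B.T)                  # unconstrained symmetric "log-covariance"
Sigma = expm(A)                      # covariance matrix implied by A

print(np.allclose(Sigma, Sigma.T))               # symmetric
print(np.all(np.linalg.eigvalsh(Sigma) > 0))     # positive definite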

Relevance: 100.00%

Abstract:

Within Canada there are more than 2.5 million bundles of spent nuclear fuel, with approximately another 2 million bundles to be generated in the future. Canada, and every country around the world that has taken a decision on the management of spent nuclear fuel, has decided on long-term containment and isolation of the fuel within a deep geological repository. At depth, a deep geological repository consists of a network of placement rooms where the bundles will be located within a multi-layered system that incorporates engineered and natural barriers. The barriers will be placed in a complex thermal-hydraulic-mechanical-chemical-biological (THMCB) environment. A large database of material properties for all components in the repository is required to construct representative models. Within the repository, the sealing materials will experience elevated temperatures due to the thermal gradient produced by radioactive decay heat from the waste inside the container. Furthermore, high porewater pressure due to the depth of the repository, along with the possibility of elevated groundwater salinity, will cause the bentonite-based materials to be under transient hydraulic conditions. It is therefore crucial to characterize the sealing materials over a wide range of thermal-hydraulic conditions. A comprehensive experimental program has been conducted to measure the properties (mainly thermal properties) of all sealing materials involved in the Mark II concept at plausible thermal-hydraulic conditions. The thermal response of Canada’s concept for a deep geological repository has been modelled using the experimentally measured thermal properties. Plausible scenarios are defined, and their effects on the container surface temperature, as well as on the surrounding geosphere, are examined to assess whether the design criteria are met for the cases studied. The thermal response shows that, even if all the materials are in a dried condition, the repository still performs acceptably as long as the sealing materials remain in contact.

Relevance: 100.00%

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which solve the problems of memory storage and the huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme. An external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal: only a few lines which facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing simply by telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake. However, due to the sparsity of the TSM data in both time and space, a good match could not be obtained.
The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem. With DA, this will help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, together with this ensemble-size limit, points to the emerging area of Reduced Order Modelling (ROM). To save computational resources, running a full-blown model is avoided in ROM. When ROM is applied with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
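The non-intrusive, file-based coupling idea can be sketched as follows; the executable name, file names and the plain ensemble update shown are placeholders for illustration, not the actual VEnKF or COHERENS interface.

# Sketch of non-intrusive, file-based coupling: the control script exchanges
# state vectors with an external model executable through files, so the model
# code itself needs only minimal I/O changes. All names below are hypothetical.
import subprocess
import numpy as np

def run_model(member_state, member_id):
    np.savetxt(f"state_in_{member_id}.txt", member_state)
    # The external model reads state_in_*.txt, integrates one cycle,
    # and writes state_out_*.txt (hypothetical interface).
    subprocess.run(["./model_executable", str(member_id)], check=True)
    return np.loadtxt(f"state_out_{member_id}.txt")

def assimilation_cycle(ensemble, obs, H, obs_var):
    # Propagate every member through the external model, then apply a plain
    # EnKF-style update (a stand-in for the VEnKF analysis step).
    forecast = np.array([run_model(x, i) for i, x in enumerate(ensemble)])
    mean = forecast.mean(axis=0)
    anomalies = forecast - mean
    P = anomalies.T @ anomalies / (len(forecast) - 1)        # sample covariance
    S = H @ P @ H.T + obs_var * np.eye(len(obs))
    K = P @ H.T @ np.linalg.solve(S, np.eye(len(obs)))       # Kalman gain
    return forecast + (obs - forecast @ H.T) @ K.T           # updated ensemble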

Relevance: 100.00%

Abstract:

Plant performance is significantly influenced by the prevailing light and temperature conditions during plant growth and development. For plants exposed to natural fluctuations in abiotic environmental conditions, however, it is laborious and cumbersome to experimentally assign the contribution of individual environmental factors to plant responses. This study aimed at analyzing the interplay between light, temperature and internode growth based on model approaches. We extended the light-sensitive virtual plant model L-Cucumber by implementing a common Arrhenius function for appearance rates, growth rates, and growth durations. For two greenhouse experiments, the temperature-sensitive model approach resulted in precise predictions of cucumber mean internode lengths and numbers of internodes, as well as accurately predicted patterns of individual internode lengths along the main stem. In addition, a systems analysis revealed that environmental data averaged over the experimental period were not necessarily related to internode performance. Finally, the need for a species-specific parameterization of the temperature response function, and related aspects of modeling temperature effects on plant development and growth, is discussed.
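A minimal sketch of such an Arrhenius-type temperature response is given below; the reference rate, activation energy and temperatures are illustrative values, not the L-Cucumber parameterization.

# Minimal sketch of an Arrhenius-type temperature response applied to a
# developmental rate. The activation energy and reference temperature below
# are assumed, illustrative values only.
import numpy as np

R = 8.314          # J mol^-1 K^-1, universal gas constant
E_A = 60_000.0     # J mol^-1, assumed activation energy
T_REF = 293.15     # K, reference temperature (20 degrees C)

def arrhenius_rate(rate_at_ref, temp_kelvin):
    """Scale a rate measured at T_REF to another temperature."""
    return rate_at_ref * np.exp(-E_A / R * (1.0 / temp_kelvin - 1.0 / T_REF))

for t_celsius in (15, 20, 25, 30):
    rate = arrhenius_rate(rate_at_ref=1.0, temp_kelvin=t_celsius + 273.15)
    print(f"{t_celsius} C: relative internode growth rate = {rate:.2f}")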