949 results for Statistical model


Relevance:

60.00%

Publisher:

Abstract:

River runoff is an essential climate variable, as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, no pan-European observation-based runoff estimates are available to date. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are first tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 12) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950 - December 2015) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed by cross-validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring.
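The workflow described (homogeneity-screened gauge observations regressed on gridded atmospheric predictors, with skill assessed by cross-validation) can be illustrated with a minimal sketch. The data below are synthetic stand-ins for E-OBS fields and runoff observations, and the random-forest learner is an assumption; the abstract does not name the machine-learning method used.

```python
# Minimal sketch: relate gridded atmospheric predictors to observed monthly
# runoff with machine learning, then assess skill by cross-validation.
# Synthetic data; real inputs would be E-OBS variables per grid cell/month.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 2000                      # grid cells x months, stacked
X = rng.normal(size=(n_samples, 3))   # e.g. precipitation, temperature, ...
runoff = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n_samples)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, runoff, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))

# Fit on all samples, then predict runoff for every cell of the target grid.
model.fit(X, runoff)
grid_estimates = model.predict(X)
```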

Relevance:

60.00%

Publisher:

Abstract:

River runoff is an essential climate variable, as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, no pan-European observation-based runoff estimates are available to date. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are first tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 11) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950 - December 2014) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed by cross-validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring.

Relevance:

60.00%

Publisher:

Abstract:

Global niobium production is presently dominated by three operations: Araxá and Catalão (Brazil), and Niobec (Canada). Although Brazil accounts for over 90% of the world's niobium production, a number of high-grade niobium deposits exist worldwide. The advancement of these deposits depends largely on the development of operable beneficiation flowsheets. Pyrochlore, the primary niobium mineral, is typically upgraded by flotation with amine collectors at acidic pH, following a complicated flowsheet with significant losses of niobium. This research compares the typical two-stage flotation flowsheet to a direct flotation process (i.e. elimination of gangue pre-flotation) with the objective of circuit simplification. In addition, the use of a chelating reagent (benzohydroxamic acid, BHA) was studied as an alternative collector for fine-grained, highly disseminated pyrochlore. For the amine-based reagent system, results showed that while the two approaches were comparable at the laboratory scale, when scaled up to the pilot level the direct flotation process suffered from circuit instability: high quantities of calcium dissolved into the process water through stream recirculation and fine calcite dissolution, which ultimately depressed pyrochlore. This scale-up issue was not observed in pilot plant operation of the two-stage flotation process, as a portion of the highly reactive carbonate minerals was removed prior to acid addition. A statistical model was developed for batch flotation using BHA on a carbonatite ore (0.25% Nb2O5) that could not be effectively upgraded using the conventional amine reagent scheme. Results showed that it was possible to produce a concentrate containing 1.54% Nb2O5 with 93% Nb recovery in ~15% of the original mass. Fundamental studies undertaken included FT-IR and XPS, which showed the adsorption of both the protonated amine and the neutral amine onto the surface of the pyrochlore (possibly at niobium sites, as indicated by detected shifts in the Nb 3d binding energy). The results suggest that the preferential flotation of pyrochlore over quartz with amines at low pH can be attributed to a difference in critical hemimicelle concentration (CHC) values for the two minerals. BHA was found to adsorb on pyrochlore surfaces by a mechanism similar to that of alkyl hydroxamic acid. It is hoped that this work will assist in improving the operability of existing pyrochlore flotation circuits and help promote the development of niobium deposits globally. Future studies should focus on specific gangue mineral depressants and on inadvertent activation phenomena in BHA flotation of gangue minerals.

Relevance:

60.00%

Publisher:

Abstract:

The article analyzes electoral change in León, Guanajuato, tracing how the displacement of the Institutional Revolutionary Party (PRI) by the National Action Party (PAN) in this municipality took shape in 1988, up to the shift in the balance of forces in 2012. This provides a basis for analyzing the scenarios that could characterize this year's upcoming elections. To this end, a statistical model is proposed for the study: the Dirichlet regression model, which accounts for the compositional nature of electoral data.
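For readers unfamiliar with Dirichlet regression, the sketch below fits such a model by maximum likelihood on synthetic vote shares. The log link on the concentration parameters is one common specification, assumed here; the covariate, party labels, and data are illustrative, not the authors' dataset.

```python
# Hedged sketch of a Dirichlet regression: vote shares (a composition summing
# to 1) regressed on one covariate through a log link on the concentration
# parameters, fitted by maximum likelihood. Entirely synthetic data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
n, k = 200, 3                          # observations x parties (e.g. PRI, PAN, other)
x = rng.normal(size=n)                 # one covariate (e.g. scaled election year)
X = np.column_stack([np.ones(n), x])   # design matrix with intercept
true_beta = np.array([[1.0, 0.5], [1.5, -0.5], [0.8, 0.0]])   # k x 2
alpha = np.exp(X @ true_beta.T)        # n x k concentration parameters
shares = np.vstack([rng.dirichlet(a) for a in alpha])

def neg_loglik(beta_flat):
    beta = beta_flat.reshape(k, 2)
    a = np.exp(X @ beta.T)
    # Dirichlet log-likelihood, summed over observations.
    return -np.sum(gammaln(a.sum(axis=1)) - gammaln(a).sum(axis=1)
                   + ((a - 1.0) * np.log(shares)).sum(axis=1))

fit = minimize(neg_loglik, np.zeros(k * 2), method="BFGS")
print(fit.x.reshape(k, 2))             # recovered coefficients, close to true_beta
```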

Relevance:

60.00%

Publisher:

Abstract:

University Master's in Intelligent Systems and Numerical Applications in Engineering (SIANI)

Relevance:

60.00%

Publisher:

Abstract:

In this research project, the deposition of amorphous carbon thin films (commonly known as DLC, for Diamond-Like Carbon) by plasma-enhanced chemical vapor deposition (PECVD) was studied using Optical Emission Spectroscopy (OES) and partial least squares regression (PLSR). The objective of this thesis is to establish a statistical model to predict the properties of DLC coatings from the deposition process parameters or from data acquired by OES. Two series of PLSR analyses were carried out. The first examines the correlation between process parameters and plasma characteristics, to gain a better understanding of the deposition process. The second series shows the potential of OES as a tool for monitoring the process and predicting the properties of the deposited film. The results show that predicting DLC coating properties, until now possible from the process parameters (pressure, power, and plasma mode), is now feasible from information obtained by OES of the plasma (particularly indices related to the concentrations of species in the plasma). Indeed, OES data can be used to monitor the deposition process directly, rather than carrying out a full study of the effect of process parameters, which are strictly tied to the plasma reactor and vary from one laboratory to another. The prospect of applying a PLSR model incorporating OES data is also demonstrated in this research, in order to design and monitor a deposit with a graded structure.
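A minimal sketch of the second kind of PLSR analysis described, predicting a film property from OES intensities. The data are synthetic, and the channel count, target property (hardness), and number of PLS components are illustrative assumptions rather than the thesis' settings.

```python
# Sketch: predict a DLC coating property from OES line intensities with PLSR,
# assessed by cross-validation. Synthetic stand-in data throughout.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_runs, n_lines = 60, 20               # deposition runs x OES channels
oes = rng.normal(size=(n_runs, n_lines))
hardness = oes[:, :3] @ [2.0, -1.0, 0.5] + rng.normal(scale=0.2, size=n_runs)

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, oes, hardness, cv=10)
print("RMSE: %.3f" % np.sqrt(np.mean((pred.ravel() - hardness) ** 2)))
```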

Relevance:

60.00%

Publisher:

Abstract:

Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may never have been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may be known only up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition can be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibit more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how the method can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve the problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally into the multitask learning framework: the first task learns a compact representation of the good samples, while the second learns the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory in a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
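A minimal sketch of the iterative shrinkage idea mentioned above, using a generic random dictionary in place of a shearlet frame: it solves the l1-regularized least-squares (basis pursuit denoising) problem by ISTA. Dimensions, step size, and data are illustrative assumptions, not the dissertation's setup.

```python
# ISTA sketch for sparsity-constrained recovery:
#   min_x 0.5 * ||y - A x||^2 + lam * ||x||_1
# with a random dictionary A standing in for a shearlet frame.
import numpy as np

rng = np.random.default_rng(3)
m, n = 80, 200
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 3.0       # sparse ground truth
y = A @ x_true + rng.normal(scale=0.01, size=m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                        # gradient of the data-fit term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
print("recovered support:", np.nonzero(np.abs(x) > 0.5)[0])
```

Each iteration is a gradient step followed by an elementwise soft threshold, which is what makes the algorithm embarrassingly parallel and hence amenable to GPU acceleration, as the dissertation notes.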

Relevance:

60.00%

Publisher:

Abstract:

Estuaries are areas which, by their structure, their functioning, and their location, are subject to significant inputs of nutrients. One objective of the RNO, the French network for coastal water quality monitoring, is to assess the levels and trends of nutrient concentrations in estuaries. A linear model was used to describe and explain the evolution of total dissolved nitrogen concentration in the three most important estuaries on the Channel-Atlantic front (Seine, Loire and Gironde). As a first step, a reliable data set was selected. Patterns of total dissolved nitrogen evolution in the estuarine environment were then studied graphically, allowing a reasonable choice of covariates. Salinity played a major role in explaining nitrogen concentration variability in the estuaries, and dilution lines proved to be a useful tool for detecting outlying observations and for modelling the nitrogen/salinity relation. Increasing trends were detected by the model, with a high magnitude in the Seine, intermediate in the Loire, and lower in the Gironde. The non-linear trends estimated in the Loire and Seine estuaries could be due to large interannual variations, as the graphics suggest. With a view to the valorisation of the QUADRIGE database, a discussion of the statistical model and of the RNO hydrological data sampling strategy led to suggestions for better exploitation of the nutrient data.
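A minimal sketch of the kind of linear model described: total dissolved nitrogen regressed on salinity (the dilution line) plus a linear time trend. The data and coefficients are synthetic; the real study screens the records and chooses covariates graphically first.

```python
# Sketch: dilution-line linear model for total dissolved nitrogen (TDN)
# with salinity and a time trend as covariates. Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n = 300
years = rng.uniform(0, 20, n)               # years since start of record
salinity = rng.uniform(0, 35, n)
tdn = 120 - 3.0 * salinity + 1.5 * years + rng.normal(scale=5, size=n)

X = np.column_stack([np.ones(n), salinity, years])
coef, *_ = np.linalg.lstsq(X, tdn, rcond=None)
print("intercept %.1f, salinity slope %.2f, trend %.2f per year" % tuple(coef))
```

Points lying far from the fitted dilution line (large residuals at a given salinity) are exactly the outlying observations the abstract says the dilution lines help to detect.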

Relevance:

60.00%

Publisher:

Abstract:

In this thesis, wind wave prediction and analysis in the Southern Caspian Sea are surveyed. This subject is of great importance for reducing loss of life and financial damage in marine activities such as monitoring marine pollution, designing marine structures, shipping, fishing, the offshore industry and tourism. For wave prediction, this study uses Caspian Sea topography data extracted from the Caspian Sea hydrography map of the Iran Armed Forces Geographical Organization, together with 10 m wind field data extracted from the GTS synoptic data transmitted from regional centers to the Forecasting Center of the Iran Meteorological Organization; for wave analysis, it uses the 20012 waves recorded by the oil company's buoy located 28 kilometers off the Neka shore. The results of this research are as follows: - Because the predictions of the SMB method in the Caspian Sea disagree with the wave data of the Anzali and Neka buoys, the SMB method is not able to predict wave characteristics in the Southern Caspian Sea. - Because of relatively good agreement between the WAM model output in the Caspian Sea and the wave data of the Anzali buoy, the WAM model is able to predict wave characteristics in the Southern Caspian Sea with relatively high accuracy. - The extreme wave height distribution function fitted to the Southern Caspian Sea wave data is obtained by determining the free parameters of the Poisson-Gumbel function through the method of moments. These parameters are: A = 2.41, B = 0.33. The maximum relative error between the 4-year return value of Southern Caspian Sea significant wave height estimated by this function and the wave data of the Neka buoy is about 35%. The 100-year return value of Southern Caspian Sea significant wave height is about 4.97 m. The maximum relative error between the 4-year return value estimated by the peaks-over-threshold statistical model and the wave data of the Neka buoy is about 2.28%. - The parametric relation fitted to the Southern Caspian Sea frequency spectra is obtained by determining the free parameters of the multipeak spectra of Strekalov, Massel and Krylov et al. through a mathematical method. These parameters are: A = 2.9, B = 26.26, C = 0.0016, m = 0.19 and n = 3.69. The maximum relative error between the calculated free parameters of the Southern Caspian Sea multipeak spectrum and the free parameters of the double-peaked spectrum proposed by Massel and Strekalov from experimental Caspian Sea data is about 36.1% in the energetic part of the spectrum and about 74% in the high-frequency part. - The peaks-over-threshold wave rose of the Southern Caspian Sea shows that the highest occurrence probability corresponds to waves of 2-2.5 m height. The error sources in the statistical analysis are mainly due to: 1) missing wave data over a 2-year period caused by battery discharge of the Neka buoy; and 2) a 15% departure of the annual mean significant height in a single year from the long-period average, caused by a lack of adequate measurements of oceanic waves. The error sources in the spectral analysis are mainly due to the above-mentioned items and to the low accuracy of the proposed free parameters of the double-peaked spectrum from the experimental Caspian Sea data.
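The return-value calculation described can be sketched as follows. For simplicity this uses a plain Gumbel form fitted by the method of moments rather than the thesis' Poisson-Gumbel function, and the synthetic annual maxima and parameter values are illustrative only.

```python
# Sketch: method-of-moments Gumbel fit and N-year return value for
# significant wave height. Plain Gumbel stands in for Poisson-Gumbel;
# all numbers are synthetic, not the thesis' results.
import numpy as np

def gumbel_moment_fit(annual_maxima):
    """Scale from the sample variance, location from the sample mean
    (0.5772 is the Euler-Mascheroni constant)."""
    scale = np.sqrt(6.0) * np.std(annual_maxima) / np.pi
    loc = np.mean(annual_maxima) - 0.5772 * scale
    return loc, scale

def return_value(loc, scale, n_years):
    # Height exceeded on average once every n_years:
    # solve F(x) = 1 - 1/n_years for the Gumbel CDF.
    return loc - scale * np.log(-np.log(1.0 - 1.0 / n_years))

rng = np.random.default_rng(5)
maxima = rng.gumbel(loc=3.0, scale=0.5, size=40)   # synthetic annual maxima (m)
loc, scale = gumbel_moment_fit(maxima)
print("100-year significant wave height: %.2f m" % return_value(loc, scale, 100))
```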

Relevance:

60.00%

Publisher:

Abstract:

Spent hydroprocessing catalysts (HPCs) are solid wastes generated in refinery industries and typically contain various hazardous metals, such as Co, Ni, and Mo. These wastes cannot be discharged into the environment due to strict regulations and require proper treatment to remove the hazardous substances. Various options have been proposed and developed for spent catalyst treatment; hydrometallurgical processes, however, are considered efficient, cost-effective and environmentally friendly methods of metal extraction, and have been widely employed for metal uptake from aqueous leachates of secondary materials. Although there are a large number of studies on hazardous metal extraction from aqueous solutions of various spent catalysts, little information is available on Co, Ni, and Mo removal from spent NiMo hydroprocessing catalysts. In the current study, a solvent extraction process was applied to the spent HPC to specifically remove Co, Ni, and Mo. The spent HPC is dissolved in an acid solution and the metals are then extracted using three different extractants, two of which were amine-based and one of which was a quaternary ammonium salt. The main aim of this study was to develop a hydrometallurgical method to remove, and ultimately recover, Co, Ni, and Mo from the spent HPCs produced at the petrochemical plant in Come By Chance, Newfoundland and Labrador. The specific objectives of the study were: (1) characterization of the spent catalyst and the acidic leachate; (2) identification of the most efficient leaching agent to dissolve the metals from the spent catalyst; (3) development of a solvent extraction procedure using the amine-based extractants Alamine308 and Alamine336 and the quaternary ammonium salt Aliquat336 in toluene to remove Co, Ni, and Mo from the spent catalyst; (4) selection of the best reagent for Co, Ni, and Mo extraction based on the required contact time, required extractant concentration, and organic:aqueous ratio; and (5) evaluation of the extraction conditions and optimization of the metal extraction process using the Design Expert® software. A Central Composite Design (CCD) method was applied to design the experiments, evaluate the effect of each parameter, provide a statistical model, and optimize the extraction process. Three parameters were considered the most significant factors affecting process efficiency: (i) extractant concentration, (ii) organic:aqueous ratio, and (iii) contact time. Metal extraction efficiencies were calculated based on ICP analysis of the pre- and post-extraction leachates, and the process optimization was conducted with the aid of the Design Expert® software. The results showed that Alamine308 can be considered the most effective and suitable extractant for the spent HPC examined in this study, capable of removing all three metals to the maximum amounts. Aliquat336 was found to be less effective, especially for Ni extraction; however, it is able to separate all of these metals within the first 10 min, unlike Alamine336, which required more than 35 min to do so. Based on the results of this study, a cost-effective and environmentally friendly solvent extraction process was achieved that removes Co, Ni, and Mo from spent HPCs in a short amount of time and with a low required extractant concentration. This method can be tested and implemented for other hazardous metals from other secondary materials as well. Further investigation may be required; however, the results of this study can serve as a guide for future research on similar metal extraction processes.
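A minimal sketch of the experimental-design step, standing in for the Design Expert® software: a three-factor central composite design is generated for the factors named above and a full quadratic response surface is fitted to synthetic extraction efficiencies. The axial distance and response values are illustrative assumptions.

```python
# Sketch: central composite design (CCD) in coded units for three factors
# (extractant concentration, organic:aqueous ratio, contact time), plus a
# quadratic response-surface fit. Synthetic efficiencies throughout.
import itertools
import numpy as np

# Coded levels: 8 factorial corners, 6 axial points (alpha ~ 1.68), 1 center.
corners = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
axial = 1.682 * np.vstack([v * np.eye(3)[i] for i in (0, 1, 2) for v in (-1, 1)])
design = np.vstack([corners, axial, np.zeros((1, 3))])   # 15 runs

rng = np.random.default_rng(6)
c, r, t = design.T
efficiency = 80 + 5*c + 3*r + 2*t - 4*c**2 + rng.normal(scale=0.5, size=len(c))

# Full quadratic model: intercept, linear, interaction, and square terms.
X = np.column_stack([np.ones_like(c), c, r, t, c*r, c*t, r*t, c**2, r**2, t**2])
coef, *_ = np.linalg.lstsq(X, efficiency, rcond=None)
print("fitted quadratic coefficients:", np.round(coef, 2))
```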

Relevance:

60.00%

Publisher:

Abstract:

The poultry sector currently faces two very stimulating challenges. The first stems from the increase, expected to continue, in demand for poultry meat in domestic and international markets; the second stems from the fact that poultry farming has adopted more intensive production methods (kg live weight/m2/year) on a larger scale, i.e. with greater animal density on the same farm. This markedly "industrial" character has drawn the natural attention of society and of livestock authorities, so that this economy of scale is counterbalanced by a set of legal and technical instruments safeguarding the birds as living beings. The starting point of this work is Council Directive 2007/43/EC of 28 June, laying down minimum rules for the protection of chickens kept for meat production. Since there is not yet sufficient information on how the quality of animal husbandry can be monitored at slaughter, by veterinarians and official auxiliaries, in specially reared chickens under the models defined in Regulation (EC) No 543/2008, studies in this field are urgently needed. The main objective of this field work was to study the occurrence of plantar contact dermatitis (pododermatitis) and of lesions of the pre-sternal synovial bursa in chickens raised in production systems considered "protective" of animal welfare, namely: i) free range; and ii) extensive indoor. The study was carried out at a free-range chicken slaughterhouse in Oliveira de Frades between May 2012 and March 2013. The slaughtered animals were reared on integrated-contract farms in the District of Viseu. Data were collected from 39 different flocks of the species Gallus domesticus, of which 1021 carcasses were evaluated after evisceration, corresponding to the examination of one in every fifteen birds on the slaughter line. Pododermatitis was assessed using the method adapted by the DGAV, while sternal bursitis was assessed following the model applied to turkeys by Berk in 2002. Although the statistical model developed for analyzing the results of this work requires a larger number of observations, it was possible to identify with good precision some risk factors that deserve emphasis for their relevance to the production systems examined or to the pathophysiology of contact dermatitis, namely: (i) the age of the birds: although no direct relationship with pododermatitis and bursitis scores was identified, the advanced age that animals typically reach in extensive production systems was associated with a higher rate of rejections at sanitary inspection; (ii) pre-slaughter weight: regardless of the inconsistency reported by several authors regarding the influence of live weight on contact dermatitis in industrial broilers, in extensively reared animals this variable may be a key factor in the occurrence of this lesion.
Indeed, the weight of these animals is of central importance in shaping the bird's biomechanics, including the pressure exerted on the plantar surface; (iii) the type of drinking system: the choice of drinker was shown to have a particular influence on the occurrence of pododermatitis in free-range chickens, probably related to its effect on litter moisture. Overall, the frequencies of pododermatitis and bursitis found in this work should be considered worrying. This concern grows when one realizes that the birds came from systems considered "friendly" and "sustainable"; it is therefore urgent to monitor those production systems adequately, improve their conditions, and re-examine the benefits in terms of animal welfare.

Relevance:

60.00%

Publisher:

Abstract:

The prognosis of tooth loss is one of the main problems in clinical dental practice. One of the main prognostic factors is the amount of bony support of the tooth, defined by the intrabony root surface area. This quantity has been estimated by different research methodologies, with heterogeneous results. In this work we used planimetry with microtomography to calculate the root surface area (ASR) of a sample of five mandibular second premolars obtained from the Portuguese population, with the final goal of creating a statistical model to estimate the intrabony root surface area from clinical indicators of bone loss. Finally, we propose a method to apply the results in practice. Data on root surface area, total tooth length (CT) and maximum mesiodistal crown dimension (MDeq) were used to establish statistical relationships between variables and to define a multivariate normal distribution. A sample of 37 simulated observations, statistically identical to the data from the five-tooth sample, was then generated from this multivariate normal distribution. Five generalized linear models were fitted to the simulated data. The statistical model was selected according to criteria of goodness of fit, predictability, statistical power, parameter accuracy and information loss, and validated by graphical residual analysis. Based on the results, we propose a three-phase method for estimating lost/remaining root surface area. In the first phase, the statistical model is used to estimate the root surface area; in the second, the proportion (in deciles) of intrabony root is estimated using an adapted Schei ruler; and in the third, the value obtained in the first phase is multiplied by a coefficient representing the proportion of root lost (ASRp) or of root remaining (ASRr) for the decile estimated in the second phase. The strength of this study was the application of validated statistical methodology to operationalize clinical data in the estimation of lost bone support. Its weaknesses are that the results apply only to mandibular second premolars, and the lack of clinical validation.
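The three-phase procedure can be sketched as follows. The linear-predictor coefficients and the decile-to-coefficient rule below are hypothetical placeholders, since the fitted values from the thesis are not given here; only the structure of the calculation follows the text.

```python
# Sketch of the three-step estimate: (1) predict total root surface area
# (ASR) from crown width (MDeq) and tooth length (CT); (2) take the decile
# of intrabony root from the adapted Schei ruler; (3) scale by the
# remaining-root coefficient. All coefficients are illustrative.
def asr_total(mdeq_mm, ct_mm, b0=-100.0, b1=20.0, b2=8.0):
    """Hypothetical linear predictor for total root surface area (mm^2)."""
    return b0 + b1 * mdeq_mm + b2 * ct_mm

def asr_remaining(mdeq_mm, ct_mm, decile_remaining):
    """Total area scaled by the proportion of root still in bone."""
    coef = decile_remaining / 10.0       # e.g. 7th decile -> coefficient 0.7
    return asr_total(mdeq_mm, ct_mm) * coef

# Example: MDeq 7.5 mm, CT 22 mm, 7 deciles of root remaining in bone.
print("%.0f mm^2 remaining" % asr_remaining(7.5, 22.0, 7))
```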

Relevance:

60.00%

Publisher:

Abstract:

Master's in Economics and Management of Science, Technology and Innovation

Relevance:

60.00%

Publisher:

Abstract:

Coprime and nested sampling are well-known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, and yet allow perfect reconstruction of the spectra of wide sense stationary signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to compute statistical expectations perfectly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher information matrix for perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of WSS signals, sharp bounds on the estimation error are established, indicating that the error decays exponentially with the number of samples. The theoretical claims are supported by extensive numerical experiments.
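A minimal sketch of ideal (synchronous, unperturbed) coprime sampling in the temporal domain: two branches sampled at coprime factors M and N, with autocorrelation lags estimated by averaging cross products over the difference set. The AR(1) test signal and the lag range are illustrative assumptions.

```python
# Sketch: estimate the autocorrelation of a WSS signal from two coprime
# sub-Nyquist branches. Differences of multiples of coprime M and N cover
# every integer lag, so all lags are recoverable despite the low rates.
import numpy as np

M, N, L = 3, 5, 4000                     # coprime factors, signal length
rng = np.random.default_rng(7)

# Synthetic WSS AR(1) signal with autocorrelation r[k] ~ 0.9**|k|.
x = np.zeros(L)
for t in range(1, L):
    x[t] = 0.9 * x[t - 1] + np.sqrt(1 - 0.81) * rng.normal()

idx_a = np.arange(0, L, M)               # branch sampled every M samples
idx_b = np.arange(0, L, N)               # branch sampled every N samples
est = {}
for i in idx_a:
    for j in idx_b:
        lag = abs(i - j)
        if lag <= 10:
            est.setdefault(lag, []).append(x[i] * x[j])

for lag in sorted(est):                  # estimate vs. true autocorrelation
    print(lag, round(float(np.mean(est[lag])), 3), round(0.9 ** lag, 3))
```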

Relevance:

60.00%

Publisher:

Abstract:

Aiming to obtain empirical models for the estimation of Syrah leaf area, a set of 210 fruiting shoots was randomly collected during the 2013 growing season in an adult experimental vineyard located in Lisbon, Portugal. Samples of 30 fruiting shoots were taken periodically from the stage of visible inflorescences to veraison (7 sampling dates). At the lab, primary and lateral leaves from each shoot were separated and numbered according to node insertion. For each leaf, the lengths of the central and lateral veins were recorded, and the leaf area was then measured with a leaf area meter. For single leaf area estimation, the best statistical model uses as explanatory variable the sum of the lengths of the two lateral leaf veins. For the estimation of leaf area per shoot, we followed the approach of Lopes & Pinto (2005), based on 3 explanatory variables: the number of primary leaves and the areas of the largest and smallest leaves. The best statistical model for estimating primary leaf area per shoot uses a calculated variable obtained by multiplying the mean of the largest and smallest primary leaf areas by the number of primary leaves. For lateral leaf area estimation, another model using the same type of calculated variable is also presented. All models explain a very high proportion of the variability in leaf area. Our results confirm the previously reported strong importance of the three measured variables (number of leaves and areas of the largest and smallest leaves) as predictors of shoot leaf area. The proposed models can be used to accurately predict Syrah primary and secondary leaf area per shoot at any phase of the growing cycle. They are inexpensive, practical, non-destructive methods which do not require specialized staff or expensive equipment.
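The two models described can be sketched on synthetic data as follows. The quadratic dependence of leaf area on the vein-length sum and the shoot-level slope near 1 are illustrative assumptions, not the paper's fitted coefficients.

```python
# Sketch of the two empirical models: single leaf area from the sum of the
# two lateral vein lengths, and primary leaf area per shoot from the
# calculated variable n_leaves * mean(largest, smallest leaf area).
# Synthetic data; the paper fits measured Syrah leaves.
import numpy as np

rng = np.random.default_rng(8)
veins = rng.uniform(4, 16, 150)                  # sum of lateral veins (cm)
leaf_area = 0.8 * veins ** 2 + rng.normal(scale=3, size=150)     # cm^2

# Single-leaf model: area regressed on the squared vein-length sum
# (area scales with length squared, an assumed functional form).
b, *_ = np.linalg.lstsq(np.column_stack([veins ** 2]), leaf_area, rcond=None)
print("single-leaf coefficient: %.2f" % b[0])

# Shoot-level model: calculated variable = n_leaves * mean(largest, smallest),
# scaled by an illustrative slope near 1 (the paper estimates it by regression).
n_leaves, largest, smallest = 14, 180.0, 25.0
calc = n_leaves * (largest + smallest) / 2.0
print("estimated shoot leaf area: %.0f cm^2" % (1.05 * calc))
```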