939 results for monitoring process mean and variance
Abstract:
The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced into Brazil in 1988 and has become the main pest of pine plantations. It is distributed over approximately 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo, and Minas Gerais. The woodwasp population is controlled mainly with the nematode Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). Evaluating the efficiency of these natural enemies is difficult because no appropriate sampling system exists. This study tested a hierarchical sampling system to define the sample size needed to monitor both the S. noctilio population and the efficiency of its natural enemies, and the system proved adequate.
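The abstract does not give the sampling formulas. As a hedged sketch of how a two-stage hierarchical (nested) design translates into a sample-size rule, the code below estimates between-plot and within-plot variance components from hypothetical infestation counts (all numbers illustrative, not from the study) and derives the number of plots needed for a target standard error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: infestation counts for a balanced two-stage design,
# n_plots plots (primary units) x n_trees trees per plot (secondary units).
n_plots, n_trees = 10, 20
plot_effects = rng.normal(0.0, 2.0, size=n_plots)  # between-plot variability
counts = plot_effects[:, None] + rng.normal(0.0, 3.0, size=(n_plots, n_trees))

# ANOVA-based variance components for the nested design.
plot_means = counts.mean(axis=1)
grand_mean = counts.mean()
msa = n_trees * np.sum((plot_means - grand_mean) ** 2) / (n_plots - 1)        # among plots
msw = np.sum((counts - plot_means[:, None]) ** 2) / (n_plots * (n_trees - 1)) # within plots
s2_within = msw
s2_between = max((msa - msw) / n_trees, 0.0)

# Var(overall mean) = (s2_between + s2_within / k) / m for m plots of k trees,
# so the number of plots needed for a target standard error follows directly.
def n_plots_needed(k, target_se):
    return int(np.ceil((s2_between + s2_within / k) / target_se**2))

print(f"between-plot var: {s2_between:.2f}, within-plot var: {s2_within:.2f}")
print("plots needed (20 trees/plot, SE=0.5):", n_plots_needed(20, 0.5))
```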
Abstract:
In recent years, Additive Manufacturing (AM) has drawn the attention of both academic research and industry, as it may deeply change and improve several industrial sectors. From the material point of view, AM results in a peculiar microstructure that strictly depends on the conditions of the additive process and directly affects mechanical properties. This PhD research project investigated the process-microstructure-properties relationship of additively manufactured metal components. Two technologies belonging to the AM family were considered: Laser-based Powder Bed Fusion (LPBF) and Wire-and-Arc Additive Manufacturing (WAAM). The experimental activity was carried out on different metals of industrial interest: a CoCrMo biomedical alloy and an AlSi7Mg0.6 alloy processed by LPBF, and an AlMg4.5Mn alloy and an AISI 304L austenitic stainless steel processed by WAAM. In the case of LPBF, particular attention was paid to the influence that the feedstock material and process parameters exert on the hardness and on the morphological and microstructural features of the produced samples. The analyses, targeted at minimizing microstructural defects, led to process optimization. For heat-treatable LPBF alloys, innovative post-process heat treatments, tailored to the peculiar hierarchical microstructure induced by LPBF, were developed and investigated in depth. The main mechanical properties of the as-built and heat-treated alloys were assessed and correlated with the specific LPBF microstructure. Results showed that, if the process is properly optimized, samples exhibit a good trade-off between strength and ductility already in the as-built condition; tailored heat treatments, however, succeeded in further improving the overall performance of the LPBF alloys. Characterization of the WAAM alloys, instead, evidenced the microstructural and mechanical anisotropy typical of AM metals. Experiments also revealed an outstanding anisotropy in the elastic modulus of the austenitic stainless steel that, along with other mechanical properties, was explained on the basis of microstructural analyses.
Abstract:
This thesis focuses on modelling and optimizing the dry grinding process for automotive gear production. A FEM model was implemented with the aim of predicting process temperatures and preventing grinding thermal defects on the material surface. In particular, the model was conceived to facilitate the choice of grinding parameters during the design and execution of the dry-hard finishing process developed and patented by the company Samputensili Machine Tools (EMAG Group) for automotive gears. The proposed model allows the influence of the technological parameters, including the grinding wheel specifications, to be analysed. Automotive gears finished by dry-hard finishing are expected to reach the same quality target as gears finished by the conventional wet grinding process, with the advantage of reduced production costs and environmental pollution. However, grinding involves very high values of specific pressure and of heat absorbed by the material, so removing the lubricant increases the risk of thermal defects. An incorrect choice of the process parameters can cause grinding burns, which inevitably degrade the mechanical performance of the ground component. A modelling phase therefore makes it possible to enhance the mechanical characteristics of the components and avoid waste during production. A hierarchical FEM model was implemented to predict dry grinding temperatures, built from the interconnection of a microscopic and a macroscopic approach: a microscopic single-grain grinding model was linked to a macroscopic thermal model to predict dry grinding temperatures and thus forecast the thermal effect of the chosen process parameters and grinding wheel specification. Good agreement between the model and the experiments was achieved, making dry-hard finishing an efficient and reliable technology for the automotive gear industry.
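The abstract does not include the model equations. As a rough, hedged illustration of the macroscopic thermal side only, the sketch below integrates a 1D transient heat-conduction model with a surface heat flux standing in for the grinding contact; all material and process values are illustrative, not the thesis parameters:

```python
import numpy as np

# Illustrative, roughly steel-like properties; not the thesis parameters.
k, rho, cp = 40.0, 7800.0, 500.0      # W/mK, kg/m^3, J/kgK
alpha = k / (rho * cp)                # thermal diffusivity, m^2/s
q = 5e6                               # surface heat flux into workpiece, W/m^2
L, nx = 5e-3, 101                     # modelled depth (m), grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha              # below the explicit stability limit of 0.5
t_contact = 0.01                      # heat-on time (s), illustrative

T = np.full(nx, 20.0)                 # initial temperature, deg C
for _ in range(int(t_contact / dt)):
    Tn = T.copy()
    # interior nodes: explicit FTCS update of dT/dt = alpha * d2T/dx2
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # surface node: ghost-node form of the flux condition -k dT/dx = q
    T[0] = Tn[0] + alpha * dt / dx**2 * (2 * Tn[1] - 2 * Tn[0] + 2 * q * dx / k)
    T[-1] = 20.0                      # far boundary held at ambient

print(f"peak surface temperature after {t_contact*1e3:.0f} ms: {T[0]:.0f} degC")
```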
Abstract:
A densely built environment is a complex system of infrastructure, nature, and people, closely interconnected and interacting. Vehicles, public transport, weather action, and sports activities constitute a manifold set of excitation and degradation sources for civil structures. In this context, operators should consider different factors in a holistic approach to assessing the structural health state. Vibration-based structural health monitoring (SHM) has demonstrated great potential as a decision-supporting tool for scheduling maintenance interventions. However, most excitation sources are considered an issue for practical SHM applications, since traditional methods typically rest on strict assumptions of input stationarity. Last-generation low-cost sensors also have limitations related to modest sensitivity and a high noise floor compared to traditional instrumentation. If these devices are used for SHM in urban scenarios, short vibration recordings collected during high-intensity events and vehicle passages may be the only available datasets with a sufficient signal-to-noise ratio. While researchers have devoted effort to mitigating the effects of short-term phenomena in vibration-based SHM, the ultimate goal of this thesis is to exploit them and obtain valuable information on the structural health state. First, this thesis proposes strategies and algorithms for smart sensors, operating individually or in a distributed computing framework, to identify damage-sensitive features based on instantaneous modal parameters and influence lines. Ordinary traffic and people's activities become essential sources of excitation, while human-powered vehicles, instrumented with smartphones, take the role of roving sensors in crowdsourced monitoring strategies. The technical and computational apparatus is optimized using in-memory computing technologies. Moreover, identifying additional local features can be particularly useful in supporting the damage assessment of complex structures. To this end, smart coatings are studied to enable self-sensing properties in ordinary structural elements, and a machine-learning-aided tomography method is proposed to interpret the data provided by a nanocomposite paint interrogated electrically.
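As a toy illustration of extracting an "instantaneous" modal feature from a short, non-stationary recording (not the thesis algorithms), the sketch below tracks the dominant frequency of a simulated vibration signal over short time windows using scipy.signal.spectrogram; the signal, its frequency drop, and the noise level are all assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 200.0                                    # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)

# Simulated response of a structure whose first natural frequency drops
# from 3.0 Hz to 2.7 Hz mid-record (a crude proxy for damage).
f_inst = np.where(t < 10, 3.0, 2.7)
signal = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
signal += 0.3 * np.random.default_rng(1).normal(size=t.size)  # sensor noise

# Short-window spectrogram; each column is one "instantaneous" spectrum.
f, tau, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=768)

# Damage-sensitive feature: peak frequency per window.
peak_freq = f[np.argmax(Sxx, axis=0)]
for w, (ti, fi) in enumerate(zip(tau, peak_freq)):
    print(f"window {w:2d} @ {ti:5.1f}s -> dominant frequency {fi:.2f} Hz")
```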
Abstract:
The increasing number of extreme rainfall events, combined with high population density and the imperviousness of the land surface, makes urban areas particularly vulnerable to pluvial flooding. To design and manage cities able to deal with this issue, the reconstruction of weather phenomena is essential. Among the most promising data sources are the observational networks of private sensors managed by citizens (crowdsourcing). The number of these personal weather stations is steadily increasing, and their spatial distribution roughly follows population density; precisely for this reason, they are well suited to this detailed study on the modelling of pluvial flooding in urban environments. The uncertainty associated with these precipitation measurements is still a matter of research. To characterise the accuracy and precision of the crowdsourced data, we carried out exploratory data analyses, comparing Netatmo hourly precipitation amounts with observations of the same quantity from weather stations managed by national weather services. The crowdsourced stations show very good skill in rain detection but tend to underestimate the reference value. In detail, the accuracy and precision of crowdsourced data change as precipitation increases, with the spread improving towards extreme values. The ability of this kind of observation to improve the prediction of pluvial flooding is then tested. To this aim, the simplified raster-based inundation model incorporated in the Saferplaces web platform is used to simulate pluvial flooding. Different precipitation fields were produced and tested as input to the model in two case studies over the most densely populated Norwegian city, Oslo. The crowdsourced weather station observations, bias-corrected (i.e. increased by 25%), showed very good skill in detecting flooded areas.
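The exact verification setup is not given in the abstract. As a minimal sketch of the bias-correction and detection-skill idea, assuming hypothetical paired hourly series (the arrays `netatmo` and `reference` below are simulated, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical paired hourly rainfall (mm/h): a reference gauge and a
# crowdsourced Netatmo-style station that underestimates it.
reference = np.maximum(rng.gamma(0.3, 4.0, size=1000) - 0.5, 0.0)
netatmo = np.maximum(0.8 * reference * rng.normal(1.0, 0.15, size=reference.size), 0.0)

# Bias correction reported above: increase crowdsourced values by 25%.
netatmo_corrected = 1.25 * netatmo

def detection_skill(obs, ref, threshold=0.1):
    """Contingency-table skill for rain detection above `threshold` mm/h."""
    hits = np.sum((obs >= threshold) & (ref >= threshold))
    misses = np.sum((obs < threshold) & (ref >= threshold))
    false_alarms = np.sum((obs >= threshold) & (ref < threshold))
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    return pod, far

for label, series in [("raw", netatmo), ("bias-corrected", netatmo_corrected)]:
    pod, far = detection_skill(series, reference)
    bias = series[reference > 0].sum() / reference[reference > 0].sum()
    print(f"{label:15s} POD={pod:.2f} FAR={far:.2f} multiplicative bias={bias:.2f}")
```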
Abstract:
Marine litter, and plastic in particular, is a significant and growing contaminant that has become a global problem. Macrolitter is subject to fragmentation and degradation by physical, chemical, and biological processes, leading to the formation of micro-litter, the so-called microplastics. The purpose of this research is to assess marine litter pollution by using remote sensing tools to identify areas of macrolitter accumulation, to evaluate the concentrations of microplastics in different environmental matrices (water, sediment, and biota, i.e. mussels and fish), and to contribute to the European project MAELSTROM (Smart technology for MArinE Litter SusTainable RemOval and Management). The aim is to monitor the presence of macro- and microlitter at two sites of the Venice coastal area: an abandoned mussel farm at sea and a lagoon site near the artificial island of Sacca Fisola. The results showed that both study areas are characterised by high amounts of marine litter, but the type of litter observed differs. In the mussel farm area, most of the litter is linked to aquaculture activities (ropes, nets, mooring blocks, and floating buoys), whereas at the Venice lagoon site the litter comes mostly from urban activities and from the city of Venice (car tyres, crates, wrecks, etc.). Microplastics are present at both sites and in all the analysed matrices. Generally, higher microplastic concentrations were found at Sacca Fisola (i.e., in surface waters, mussels, and fish). Moreover, some differences in shapes and colours were observed between the two sites. At Sacca Fisola, white irregular fragments predominate in water samples, blue filaments in sediment and mussels, and transparent irregular fragments in fish. At the mussel farm, blue filaments predominate in water, sediment, and mussels, while flat black fragments predominate in fish. These differences are related to the different types of macrolitter that characterise the two areas.
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assay. A complication arises when these continuous repeated measures have heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization-type algorithm is developed for computing the maximum likelihood estimates, obtaining as by-products the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus-AIDS (HIV-AIDS) study and several simulation studies.
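The damped exponential correlation (DEC) structure referenced above has a simple closed form, Corr(y_i, y_j) = phi1^(|t_i - t_j|^phi2). A minimal sketch building that matrix for irregular measurement times (the times and parameter values below are illustrative):

```python
import numpy as np

def dec_correlation(times, phi1, phi2):
    """Damped exponential correlation matrix for irregular times:
    Corr(y_i, y_j) = phi1 ** (|t_i - t_j| ** phi2), 0 < phi1 < 1, phi2 >= 0.
    phi2 = 1 recovers a continuous-time AR(1); phi2 = 0 gives compound symmetry.
    """
    times = np.asarray(times, dtype=float)
    lags = np.abs(times[:, None] - times[None, :])
    return phi1 ** (lags ** phi2)

# Irregularly spaced viral-load measurement times (weeks), illustrative values.
t = [0.0, 1.0, 3.0, 7.0, 12.0]
print(np.round(dec_correlation(t, phi1=0.8, phi2=0.5), 3))
```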
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles, and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. (C) 2010 Elsevier B.V. All rights reserved.
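The abstract does not specify the distribution's form. One classical latent complementary-risk construction, assumed here purely for illustration, takes the lifetime as the maximum of a geometric number of exponential risks; the sketch below simulates that mechanism and fits it by direct numerical maximum likelihood, a stand-in for the paper's EM-type algorithm:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Assumed latent complementary-risk mechanism (not necessarily the paper's):
# lifetime = max of N iid Exponential(lam) risks, with N ~ Geometric(theta).
lam_true, theta_true = 2.0, 0.3
N = rng.geometric(theta_true, size=500)
data = np.array([rng.exponential(1 / lam_true, size=k).max() for k in N])

def negloglik(params):
    lam, theta = params
    z = 1.0 - np.exp(-lam * data)
    # Density of the maximum: f(x) = theta*lam*exp(-lam x) / (1 - (1-theta)*z)^2
    logf = (np.log(theta) + np.log(lam) - lam * data
            - 2.0 * np.log1p(-(1.0 - theta) * z))
    return -logf.sum()

res = minimize(negloglik, x0=[1.0, 0.5], method="L-BFGS-B",
               bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
print("MLE (lambda, theta):", np.round(res.x, 3), " true:", (lam_true, theta_true))
```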
Abstract:
This paper surveys asset allocation methods that extend the traditional approach. An important feature of the traditional approach is that it measures the risk-return tradeoff in terms of the mean and variance of final wealth. However, other important assumptions about the investor's wealth, information, and horizon are not always made explicit: the investor makes a single portfolio choice based only on the mean and variance of her final financial wealth, and she knows the relevant parameters of that computation. The paper first describes traditional portfolio choice based on four basic assumptions; the remaining sections relax those assumptions in turn. Each section describes the corresponding equilibrium implications in terms of portfolio advice and asset pricing.
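As a concrete instance of the single-period mean-variance choice described above, the optimal weights have the closed form w* = Sigma^{-1} mu / gamma. A minimal sketch with illustrative inputs (the returns, covariance, and risk aversion are made up for the example):

```python
import numpy as np

# Illustrative expected excess returns and covariance for three assets.
mu = np.array([0.05, 0.07, 0.03])
Sigma = np.array([[0.040, 0.010, 0.004],
                  [0.010, 0.090, 0.012],
                  [0.004, 0.012, 0.025]])

# Mean-variance investor: max_w  w'mu - (gamma/2) w'Sigma w
# has the closed-form solution w* = Sigma^{-1} mu / gamma.
gamma = 3.0                                 # risk-aversion coefficient
w_unscaled = np.linalg.solve(Sigma, mu)
w_star = w_unscaled / gamma                 # optimal risky-asset weights
w_tangency = w_unscaled / w_unscaled.sum()  # fully invested tangency portfolio

print("optimal weights:", np.round(w_star, 3))
print("tangency weights:", np.round(w_tangency, 3))
print("portfolio mean/variance:",
      round(w_star @ mu, 4), round(w_star @ Sigma @ w_star, 4))
```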
Abstract:
The estimation of unavailable soil variables from other, related, measured variables can be achieved through pedotransfer functions (PTFs), mainly saving time and reducing cost. Great differences among soils, however, can yield undesirable results when applying this method. This study discusses the application of PTFs developed by several authors on a variety of soils with different characteristics, in order to evaluate the soil water content of two Brazilian lowland soils. Comparisons are made between PTF-estimated and field-measured data, using statistical and geostatistical tools such as the mean error, root mean square error, semivariogram, cross-validation, and regression coefficient. The eight PTFs tested for evaluating gravimetric soil water content (Ug) at tensions of 33 kPa and 1,500 kPa tended to overestimate Ug at 33 kPa and underestimate Ug at 1,500 kPa. The PTFs were ranked according to their predictive performance and also according to their ability to describe the structure of the spatial variability of the measured values. Although none of the PTFs changed the distribution pattern of the data, all resulted in a mean and variance statistically different from those observed for the measured values. The PTFs with the best predictive values of Ug at 33 kPa and 1,500 kPa were not the same ones that best reproduced the structure of spatial variability of these variables.
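The comparison statistics used above are standard. A minimal sketch computing the mean error, RMSE, and an empirical semivariogram for hypothetical measured vs. PTF-estimated water contents on a transect (all data simulated, with a slight overestimation built in to mirror the reported tendency):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical transect: positions (m), measured Ug, and PTF estimates.
x = np.arange(0, 200, 2.0)
measured = 0.25 + 0.03 * np.sin(x / 15.0) + rng.normal(0, 0.005, x.size)
estimated = measured + 0.01 + rng.normal(0, 0.008, x.size)

mean_error = np.mean(estimated - measured)
rmse = np.sqrt(np.mean((estimated - measured) ** 2))

def empirical_semivariogram(x, z, lags):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs with |x_i - x_j| in a lag bin."""
    d = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gammas = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        gammas.append(sq[mask].mean())
    return np.array(gammas)

lags = np.arange(0, 42, 6.0)
print(f"ME = {mean_error:.4f}, RMSE = {rmse:.4f}")
print("semivariogram:", np.round(empirical_semivariogram(x, measured, lags), 5))
```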
Abstract:
Background: Conventional magnetic resonance imaging (MRI) techniques are highly sensitive in detecting multiple sclerosis (MS) plaques, enabling a quantitative assessment of inflammatory activity and lesion load. In quantitative analyses of focal lesions, manual or semi-automated segmentations have been widely used to compute the total number of lesions and the total lesion volume. These techniques, however, are both challenging and time-consuming, and are also prone to intra-observer and inter-observer variability. Aim: To develop an automated approach to segment brain tissues and MS lesions from brain MRI images, reducing user interaction and providing an objective tool that eliminates inter- and intra-observer variability. Methods: Based on the recent methods developed by Souplet et al. and de Boer et al., we propose a novel pipeline with the following steps: bias correction, skull stripping, atlas registration, tissue classification, and lesion segmentation. After the initial pre-processing steps, an MRI scan is automatically segmented into four classes: white matter (WM), grey matter (GM), cerebrospinal fluid (CSF), and partial volume. An expectation-maximisation method which fits a multivariate Gaussian mixture model to the T1-w, T2-w, and PD-w images is used for this purpose. Based on the obtained tissue masks and using the estimated GM mean and variance, we apply an intensity threshold to the FLAIR image, which provides the lesion segmentation. To improve this initial result, spatial information from the neighbouring tissue labels is used to refine the final lesion segmentation. Results: The experimental evaluation was performed on real 1.5T data sets with ground truth annotations provided by expert radiologists. The following values were obtained: a 64% true positive (TP) fraction, an 80% false positive (FP) fraction, and an average surface distance of 7.89 mm. The results of our approach were quantitatively compared to our implementations of the works of Souplet et al. and de Boer et al., obtaining higher TP and lower FP values. Conclusion: Promising MS lesion segmentation results have been obtained in terms of TP. However, the high number of FPs, still a well-known problem of all automated MS lesion segmentation approaches, has to be reduced before these methods can be used in standard clinical practice. Our future work will focus on tackling this issue.
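Once the GM statistics are known, the lesion-segmentation step above reduces to a simple rule: flag FLAIR voxels whose intensity exceeds the GM mean by some multiple of the GM standard deviation. A minimal sketch, assuming a precomputed GM mask and an illustrative multiplier `alpha` (the abstract does not state the value used):

```python
import numpy as np

def segment_lesions(flair, gm_mask, alpha=3.0):
    """Threshold a FLAIR volume at gm_mean + alpha * gm_std.

    flair   : 3D array of FLAIR intensities
    gm_mask : boolean 3D array marking grey-matter voxels
              (assumed here to come from the EM tissue classification)
    alpha   : multiplier on the GM standard deviation (illustrative value)
    """
    gm_mean = flair[gm_mask].mean()
    gm_std = flair[gm_mask].std()
    return flair > gm_mean + alpha * gm_std

# Toy volume: GM-like background intensities with a bright 'lesion' block.
rng = np.random.default_rng(5)
flair = rng.normal(100.0, 10.0, size=(32, 32, 32))
flair[10:14, 10:14, 10:14] += 80.0            # hyperintense region
gm_mask = np.ones_like(flair, dtype=bool)

lesions = segment_lesions(flair, gm_mask, alpha=4.0)
# Expect roughly the 4x4x4 = 64 lesion voxels plus at most a few noise voxels.
print("lesion voxels found:", int(lesions.sum()))
```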
Abstract:
This paper addresses the issue of estimating semiparametric time series models specified by their conditional mean and conditional variance. We stress the importance of using joint restrictions on the mean and variance. This leads us to take into account the covariance between the mean and the variance and the variance of the variance, that is, the skewness and kurtosis. We establish the direct links between the usual parametric estimation methods, namely the QMLE, the GMM, and M-estimation. The usual univariate QMLE is, under non-normality, less efficient than the optimal GMM estimator. However, the bivariate QMLE based on the dependent variable and its square is as efficient as the optimal GMM estimator. A Monte Carlo analysis confirms the relevance of our approach, in particular the importance of skewness.
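For the univariate Gaussian QMLE mentioned above, the objective is the quasi-log-likelihood built from the first two conditional moments only. A minimal sketch for a toy AR(1) mean with ARCH(1) variance, a model chosen purely for illustration (not the paper's specification):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

# Simulate a toy AR(1)-ARCH(1) process:
# y_t = c + rho*y_{t-1} + e_t,  e_t = sqrt(h_t)*z_t,  h_t = w + a*e_{t-1}^2
c, rho, w, a = 0.1, 0.5, 0.2, 0.3
T = 2000
y = np.zeros(T)
e_prev = 0.0
for t in range(1, T):
    h = w + a * e_prev**2
    e_prev = np.sqrt(h) * rng.standard_normal()
    y[t] = c + rho * y[t - 1] + e_prev

def qmle_objective(params):
    c, rho, w, a = params
    e = y[1:] - c - rho * y[:-1]                            # conditional-mean residuals
    h = w + a * np.concatenate(([np.var(y)], e[:-1] ** 2))  # conditional variances
    # Gaussian quasi-log-likelihood: only the mean and variance enter.
    return 0.5 * np.sum(np.log(h) + e**2 / h)

res = minimize(qmle_objective, x0=[0.0, 0.0, 0.5, 0.1], method="L-BFGS-B",
               bounds=[(None, None), (-0.99, 0.99), (1e-6, None), (0.0, 0.99)])
print("QMLE estimates (c, rho, w, a):", np.round(res.x, 3))
```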
Abstract:
The software packages used are Splus and R.
Abstract:
This doctoral thesis consists of three chapters dealing with large-scale portfolio choice and risk measurement. The first chapter addresses the problem of estimation error in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large-portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium as a measure of pessimism and proposes a methodology for estimating the derived measures.
The first chapter improves optimal portfolio choice within the mean-variance framework of Markowitz (1952). This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates, a problem amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. In this chapter, we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e., to minimize the expected utility loss. Specifically, a cross-validation criterion taking the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the plug-in rule that uses the sample estimates directly and with the naive 1/N strategy, in terms of expected utility loss and Sharpe ratio. These performances are measured in-sample and out-of-sample for different sample sizes and numbers of assets. The simulations and the empirical illustration show that regularizing the covariance matrix significantly improves the sample-based Markowitz rule and outperforms the naive portfolio, especially when the estimation-error problem is severe.
In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk. To do so, we use monthly returns on 48 US industries over the period 1976-2008. To address the instability problems inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of optimal and stable portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an International Asset Pricing Model (IAPM) in which currency risk is decomposed into two factors, one representing the currencies of industrialized countries and the other those of emerging countries. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios. Moreover, these strategies lead to a significant reduction in exposure to currency risk, while the contribution of the currency risk premium remains on average unchanged. Optimal portfolio weights are an alternative to market-capitalization weights; this chapter therefore complements the literature finding that the currency risk premium is important at the industry and country level in most countries. In the last chapter, we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk-premium measure derived under expected-utility theory, which is frequently violated in both experimental and real situations. Within the broad family of preferences considered, particular attention is paid to the CVaR (Conditional Value-at-Risk). This risk measure is increasingly used for portfolio construction and is recommended as a complement to the VaR (Value-at-Risk) used by the Basel Committee since 1996. In addition, we provide the statistical framework needed to make inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
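As an illustration of the simplest of the four regularization schemes discussed in the first chapter, the ridge version replaces the sample covariance S with S + tau*I before inversion. A minimal sketch on simulated returns; the thesis's data-driven cross-validation criterion is not reproduced here, and a simple grid search on hold-out portfolio variance stands in for it:

```python
import numpy as np

rng = np.random.default_rng(21)

# Simulated returns: many assets relative to observations, so the sample
# covariance is ill-conditioned (the regime the chapter targets).
n_obs, n_assets = 120, 60
returns = rng.normal(0.005, 0.05, size=(n_obs, n_assets))

train, test = returns[:80], returns[80:]
mu = train.mean(axis=0)
S = np.cov(train, rowvar=False)

def ridge_weights(S, mu, tau):
    """Markowitz-style rule with a ridge-regularized inverse:
    w proportional to (S + tau*I)^{-1} mu, rescaled to sum to one."""
    w = np.linalg.solve(S + tau * np.eye(len(mu)), mu)
    return w / w.sum()

# Stand-in for the thesis's data-driven selection: pick tau by
# out-of-sample portfolio variance on a hold-out block.
taus = [1e-4, 1e-3, 1e-2, 1e-1]
oos_var = {tau: np.var(test @ ridge_weights(S, mu, tau)) for tau in taus}
tau_best = min(oos_var, key=oos_var.get)
print("out-of-sample variances:", {t: round(v, 6) for t, v in oos_var.items()})
print("selected tau:", tau_best)
```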
Abstract:
In this paper, a novel fast method for modeling mammograms using a deterministic fractal coding approach is presented, aimed at detecting the presence of microcalcifications, which are early signs of breast cancer. The mammogram modeled with fractal encoding is visually similar to the original image containing microcalcifications; therefore, when it is subtracted from the original mammogram, the presence of microcalcifications can be enhanced. The limitation of fractal image modeling is the tremendous time required for encoding. In the present work, instead of searching for a matching domain in the entire domain pool of the image, three methods are used, based respectively on the mean and variance, on the dynamic range of the image blocks, and on mass-center features. These reduced the encoding time by factors of 3, 89, and 13, respectively, with respect to conventional fractal image coding with quadtree partitioning. On mammograms from the Mammographic Image Analysis Society database (ground truth available), the conventional method and the three proposed methods gave total detection scores of 87.6%, 87.6%, 90.5%, and 87.6%, respectively.
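To make the speed-up mechanism concrete, the sketch below implements the first of the three screening ideas: filtering candidate domain blocks by closeness in mean and variance before any expensive affine matching. Block sizes and tolerances are illustrative assumptions, and a random image stands in for a mammogram:

```python
import numpy as np

rng = np.random.default_rng(9)
image = rng.integers(0, 256, size=(128, 128)).astype(float)  # stand-in image

def blocks(img, size, step):
    """Yield (row, col, block) tiles of `size` x `size` every `step` pixels."""
    for r in range(0, img.shape[0] - size + 1, step):
        for c in range(0, img.shape[1] - size + 1, step):
            yield r, c, img[r:r + size, c:c + size]

range_size, domain_size = 8, 16
domains = [(r, c, b.mean(), b.var())
           for r, c, b in blocks(image, domain_size, domain_size)]

def candidate_domains(range_block, tol_mean=10.0, tol_var=200.0):
    """Screen the domain pool by mean/variance similarity, so the expensive
    matching step runs only on a small candidate subset."""
    m, v = range_block.mean(), range_block.var()
    return [(r, c) for r, c, dm, dv in domains
            if abs(dm - m) <= tol_mean and abs(dv - v) <= tol_var]

r0, c0, rb = next(blocks(image, range_size, range_size))
cands = candidate_domains(rb)
print(f"range block at ({r0},{c0}): {len(cands)} of {len(domains)} domains kept")
```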